Unnamed: 0 int64 1 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 7 112 | repo_url stringlengths 36 141 | action stringclasses 3 values | title stringlengths 3 438 | labels stringlengths 4 308 | body stringlengths 7 254k | index stringclasses 7 values | text_combine stringlengths 96 254k | label stringclasses 2 values | text stringlengths 96 246k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
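The header above is a column/dtype summary of the underlying issue-events dataset. A minimal sketch of the label encoding the preview rows imply (the helper name is hypothetical, and the 1:1 mapping between `label` and `binary_label` is inferred from the sample rows, not documented):

```python
# Hypothetical helper: each complete preview row pairs label "main" with
# binary_label 1 and "non_main" with 0, so the integer column looks like
# a direct encoding of the string column.
def to_binary_label(label: str) -> int:
    """Encode the dataset's `label` column as its `binary_label` value."""
    if label == "main":
        return 1
    if label == "non_main":
        return 0
    raise ValueError(f"unexpected label: {label!r}")
```

For instance, the luajit row below carries `label` = `main` and `binary_label` = 1, consistent with this mapping.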
4,359 | 22,056,591,250 | IssuesEvent | 2022-05-30 13:25:09 | Homebrew/homebrew-core | https://api.github.com/repos/Homebrew/homebrew-core | closed | luajit probably needs to be deprecated | help wanted maintainer feedback | - The latest release is from 2015, and the latest beta is from 2017
- It's heavily patched
- Every new macOS version requires an additional patch
- Upstream's recommendation is to “build from git HEAD”, and they apparently won't ship new releases: https://github.com/LuaJIT/LuaJIT/issues/648#issuecomment-752404043
The reason I'm not doing a pull request directly is that a lot of things depend on luajit, so I want to open a discussion and figure out the best way to handle this. Can some of these be migrated to one of the lua formulas? | True | luajit probably needs to be deprecated - - The latest release is from 2015, and the latest beta is from 2017
- It's heavily patched
- Every new macOS version requires an additional patch
- Upstream's recommendation is to “build from git HEAD”, and they apparently won't ship new releases: https://github.com/LuaJIT/LuaJIT/issues/648#issuecomment-752404043
The reason I'm not doing a pull request directly is that a lot of things depend on luajit, so I want to open a discussion and figure out the best way to handle this. Can some of these be migrated to one of the lua formulas? | main | luajit probably needs to be deprecated the latest release is from and the latest beta is from it s heavily patched every new macos version requires an additional patch upstream s recommendation is to “build from git head” and they won t apparently ship new releases the reason i m not doing a pull request directly is that a lot of things depend on luajit so i want to open a discussion and figure out the best way to handle this can some of these be migrated to one of the lua formulas | 1 |
351 | 3,252,424,356 | IssuesEvent | 2015-10-19 14:49:33 | tethysplatform/tethys | https://api.github.com/repos/tethysplatform/tethys | closed | Upgrade gsconfig dependency of tethys_dataset_services to 1.0.0 | enhancement maintain dependencies | Upgrade the gsconfig library to 1.0.0 and test for bugs. | True | Upgrade gsconfig dependency of tethys_dataset_services to 1.0.0 - Upgrade the gsconfig library to 1.0.0 and test for bugs. | main | upgrade gsconfig dependency of tethys dataset services to upgrade the gsconfig library to and test for bugs | 1 |
4,435 | 23,049,059,974 | IssuesEvent | 2022-07-24 10:57:29 | Lissy93/dashy | https://api.github.com/repos/Lissy93/dashy | closed | [BUG] Multiple-Pages not working | 👤 Awaiting Maintainer Response | Hi there!
I'm currently building my Dashy environment.
I wanted to split Dashy into multiple pages.
As I read in the docs, in the section on multiple pages,
the following code:
```yaml
pages:
  - name: Proxmox
    path: './conf-proxmox.yml'
```
is not working for me. I should add that the yml file is in the public folder, as mentioned in the docs.
When I build and restart Dashy, I see the Proxmox button

It just ends up at https://mydashy.fr/home/Proxmox
The problem is I can't see the monitoring...
Thanks for all help and explaining! | True | [BUG] Multiple-Pages not working - Hi there!
I'm currently building my Dashy environment.
I wanted to split Dashy into multiple pages.
As I read in the docs, in the section on multiple pages,
the following code:
```yaml
pages:
  - name: Proxmox
    path: './conf-proxmox.yml'
```
is not working for me. I should add that the yml file is in the public folder, as mentioned in the docs.
When I build and restart Dashy, I see the Proxmox button

It just ends up at https://mydashy.fr/home/Proxmox
The problem is I can't see the monitoring...
Thanks for all help and explaining! | main | multiple pages not working hi there actuall i m building my dashy environment i wanted to split dashy with multiple pages as i read the docs section multiple pages the following code pages name proxmox path conf proxmox yml is not working for me i must add the yml file is in the public folder as mentionned in the docs when i build and restart dashy i see the proxmox button it just end up to problem is i cant see the monitoring thanks for all help and explaining | 1 |
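The Dashy report above turns on where sub-page config files are resolved. A minimal sketch of that resolution, assuming only what the issue states (the function name and folder argument are hypothetical; the docs' rule is that the referenced yml must sit in the public folder):

```python
from pathlib import PurePosixPath

# Hypothetical resolver: a page entry such as path: './conf-proxmox.yml'
# is looked up relative to Dashy's public folder, so the file has to
# actually live there for the sub-page to load.
def resolve_page_config(public_dir: str, page_path: str) -> str:
    # Drop an explicit leading './' so the join stays inside public_dir.
    rel = page_path[2:] if page_path.startswith("./") else page_path
    return str(PurePosixPath(public_dir) / rel)
```

So `resolve_page_config("public", "./conf-proxmox.yml")` yields `public/conf-proxmox.yml`, which is the file the reporter had to place in the public folder.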
350,024 | 10,477,331,945 | IssuesEvent | 2019-09-23 20:39:19 | avalonmediasystem/avalon | https://api.github.com/repos/avalonmediasystem/avalon | closed | Thumbnail grabbing modal too big for small screens | 6.x abandoned low priority wontfix | Thumbnail grabbing modal buttons appear below the fold for small screen sizes. | 1.0 | Thumbnail grabbing modal too big for small screens - Thumbnail grabbing modal buttons appear below the fold for small screen sizes. | non_main | thumbnail grabbing modal too big for small screens thumbnail grabbing modal buttons appear below the fold for small screen sizes | 0 |
204,249 | 15,896,285,959 | IssuesEvent | 2021-04-11 16:54:11 | mlr-org/mlr3spatiotempcv | https://api.github.com/repos/mlr-org/mlr3spatiotempcv | opened | Re-categorize methods in pkgdown reference | Priority: Medium Status: Pending Type: Documentation | Current:
- Spatial
- Spatiotemporal
Maybe better:
- Spatial
- Spatiotemporal
- Feature space
Also it might be helpful to add an additional grouping identifier into the title of certain methods that rely on the same idea.
Example:
"[Buffering] <method title>"
- Spatial
- Spatiotemporal
Maybe better:
- Spatial
- Spatiotemporal
- Feature space
Also it might be helpful to add an additional grouping identifier into the title of certain methods that rely on the same idea.
Example:
"[Buffering] <method title>"
432,473 | 30,284,846,000 | IssuesEvent | 2023-07-08 14:39:57 | OHDSI/GIS | https://api.github.com/repos/OHDSI/GIS | closed | Restructure the ERD | documentation | Restructure (following suit to the GIS proposal) and rename on the website | 1.0 | Restructure the ERD - Restructure (following suit to the GIS proposal) and rename on the website | non_main | restructure the erd restructure following suit to the gis proposal and rename on the website | 0 |
4,472 | 23,319,942,758 | IssuesEvent | 2022-08-08 15:29:27 | carbon-design-system/carbon | https://api.github.com/repos/carbon-design-system/carbon | closed | [Bug]: The overflow menu item is truncated automatically when the text length is more | type: bug 🐛 status: needs triage 🕵️♀️ status: waiting for maintainer response 💬 | ### Package
carbon-components, carbon-components-react
### Browser
Chrome
### Package version
10.50
### React version
17.0.2
### Description
The overflow menu item text is truncated automatically when the text is somewhat longer. See image below

Please let me know if there is a fix/workaround for this, as we won't be able to upgrade the Carbon component version right now. Thanks
### Reproduction/example
https://codesandbox.io/s/flamboyant-mccarthy-wi9f66?file=/src/index.js
### Steps to reproduce
1. Add an overflow menu component
2. Add an item with longer text
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/carbon-design-system/carbon/blob/f555616971a03fd454c0f4daea184adf41fff05b/.github/CODE_OF_CONDUCT.md)
- [X] I checked the [current issues](https://github.com/carbon-design-system/carbon/issues) for duplicate problems | True | [Bug]: The overflow menu item is truncated automatically when the text length is more - ### Package
carbon-components, carbon-components-react
### Browser
Chrome
### Package version
10.50
### React version
17.0.2
### Description
The overflow menu item text is truncated automatically when the text is somewhat longer. See image below

Please let me know if there is a fix/workaround for this, as we won't be able to upgrade the Carbon component version right now. Thanks
### Reproduction/example
https://codesandbox.io/s/flamboyant-mccarthy-wi9f66?file=/src/index.js
### Steps to reproduce
1. Add an overflow menu component
2. Add an item with longer text
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/carbon-design-system/carbon/blob/f555616971a03fd454c0f4daea184adf41fff05b/.github/CODE_OF_CONDUCT.md)
- [X] I checked the [current issues](https://github.com/carbon-design-system/carbon/issues) for duplicate problems | main | the overflow menu item is truncated automatically when the text length is more package carbon components carbon components react browser chrome package version react version description the overflow menu item text is truncated automatically when the text is some what larger text see image below please let me know is there a fix workaround for this as we wont be able to upgrade the carbon component version now thanks reproduction example steps to reproduce add overmenu component add an item with a bigger text code of conduct i agree to follow this project s i checked the for duplicate problems | 1 |
127,993 | 27,171,266,709 | IssuesEvent | 2023-02-17 19:40:47 | mozilla/foundation.mozilla.org | https://api.github.com/repos/mozilla/foundation.mozilla.org | opened | [PNI Refactor] Add documentation to pni-sort-dropdown.js | engineering buyer's guide 🛍 code cleanup needs grooming | Add documentation (in [JSDoc style](https://jsdoc.app/)) to `source/js/buyers-guide/search/pni-sort-dropdown.js`.
e.g.,
```js
/**
* Represents a book.
* @constructor
* @param {string} title - The title of the book.
* @param {string} author - The author of the book.
*/
function Book(title, author) {
}
```
| 1.0 | [PNI Refactor] Add documentation to pni-sort-dropdown.js - Add documentation (in [JSDoc style](https://jsdoc.app/)) to `source/js/buyers-guide/search/pni-sort-dropdown.js`.
e.g.,
```js
/**
* Represents a book.
* @constructor
* @param {string} title - The title of the book.
* @param {string} author - The author of the book.
*/
function Book(title, author) {
}
```
| non_main | add documentation to pni sort dropdown js add documentation in to source js buyers guide search pni sort dropdown js e g js represents a book constructor param string title the title of the book param string author the author of the book function book title author | 0 |
271,466 | 29,506,336,787 | IssuesEvent | 2023-06-03 11:05:24 | MatBenfield/news | https://api.github.com/repos/MatBenfield/news | closed | [SecurityWeek] Enzo Biochem Ransomware Attack Exposes Information of 2.5M Individuals | SecurityWeek Stale |
Enzo Biochem says the clinical test information of roughly 2.47 million individuals was exposed in a recent ransomware attack.
The post [Enzo Biochem Ransomware Attack Exposes Information of 2.5M Individuals](https://www.securityweek.com/enzo-biochem-ransomware-attack-exposes-information-of-2-5m-individuals/) appeared first on [SecurityWeek](https://www.securityweek.com).
<https://www.securityweek.com/enzo-biochem-ransomware-attack-exposes-information-of-2-5m-individuals/>
| True | [SecurityWeek] Enzo Biochem Ransomware Attack Exposes Information of 2.5M Individuals -
Enzo Biochem says the clinical test information of roughly 2.47 million individuals was exposed in a recent ransomware attack.
The post [Enzo Biochem Ransomware Attack Exposes Information of 2.5M Individuals](https://www.securityweek.com/enzo-biochem-ransomware-attack-exposes-information-of-2-5m-individuals/) appeared first on [SecurityWeek](https://www.securityweek.com).
<https://www.securityweek.com/enzo-biochem-ransomware-attack-exposes-information-of-2-5m-individuals/>
| non_main | enzo biochem ransomware attack exposes information of individuals enzo biochem says the clinical test information of roughly million individuals was exposed in a recent ransomware attack the post appeared first on | 0 |
7,328 | 3,082,726,826 | IssuesEvent | 2015-08-24 00:46:35 | california-civic-data-coalition/django-calaccess-raw-data | https://api.github.com/repos/california-civic-data-coalition/django-calaccess-raw-data | opened | Add documentation for the ``payee_st`` field on the ``LexpCd`` database model | documentation enhancement small |
## Your mission
Add documentation for the ``payee_st`` field on the ``LexpCd`` database model.
## Here's how
**Step 1**: Claim this ticket by leaving a comment below. Tell everyone you're ON IT!
**Step 2**: Open up the file that contains this model. It should be in <a href="https://github.com/california-civic-data-coalition/django-calaccess-raw-data/blob/master/calaccess_raw/models/lobbying.py">calaccess_raw.models.lobbying.py</a>.
**Step 3**: Hit the little pencil button in the upper-right corner of the code box to begin editing the file.

**Step 4**: Find this model and field in the file. (Clicking into the box and searching with CTRL-F can help you here.) Once you find it, we expect the field to lack the ``help_text`` field typically used in Django to explain what a field contains.
```python
effect_dt = fields.DateField(
null=True,
db_column="EFFECT_DT"
)
```
**Step 5**: In a separate tab, open up the <a href="Quilmes">official state documentation</a> and find the page that defines all the fields in this model.

**Step 6**: Find the row in that table's definition table that spells out what this field contains. If it lacks documentation, note that in the ticket and close it now.

**Step 7**: Return to the GitHub tab.
**Step 8**: Add the state's label explaining what's in the field, to our field definition by inserting it a ``help_text`` argument. That should look something like this:
```python
effect_dt = fields.DateField(
null=True,
db_column="EFFECT_DT",
# Add a help_text argument like the one here, but put your string in instead.
help_text="The other values in record were effective as of this date"
)
```
**Step 9**: Scroll down below the code box and describe the change you've made in the commit message. Press the button below.

**Step 10**: Review your changes and create a pull request submitting them to the core team for inclusion.

That's it! Mission accomplished!
| 1.0 | Add documentation for the ``payee_st`` field on the ``LexpCd`` database model -
## Your mission
Add documentation for the ``payee_st`` field on the ``LexpCd`` database model.
## Here's how
**Step 1**: Claim this ticket by leaving a comment below. Tell everyone you're ON IT!
**Step 2**: Open up the file that contains this model. It should be in <a href="https://github.com/california-civic-data-coalition/django-calaccess-raw-data/blob/master/calaccess_raw/models/lobbying.py">calaccess_raw.models.lobbying.py</a>.
**Step 3**: Hit the little pencil button in the upper-right corner of the code box to begin editing the file.

**Step 4**: Find this model and field in the file. (Clicking into the box and searching with CTRL-F can help you here.) Once you find it, we expect the field to lack the ``help_text`` field typically used in Django to explain what a field contains.
```python
effect_dt = fields.DateField(
null=True,
db_column="EFFECT_DT"
)
```
**Step 5**: In a separate tab, open up the <a href="Quilmes">official state documentation</a> and find the page that defines all the fields in this model.

**Step 6**: Find the row in that table's definition table that spells out what this field contains. If it lacks documentation, note that in the ticket and close it now.

**Step 7**: Return to the GitHub tab.
**Step 8**: Add the state's label explaining what's in the field, to our field definition by inserting it a ``help_text`` argument. That should look something like this:
```python
effect_dt = fields.DateField(
null=True,
db_column="EFFECT_DT",
# Add a help_text argument like the one here, but put your string in instead.
help_text="The other values in record were effective as of this date"
)
```
**Step 9**: Scroll down below the code box and describe the change you've made in the commit message. Press the button below.

**Step 10**: Review your changes and create a pull request submitting them to the core team for inclusion.

That's it! Mission accomplished!
| non_main | add documentation for the payee st field on the lexpcd database model your mission add documentation for the payee st field on the lexpcd database model here s how step claim this ticket by leaving a comment below tell everyone you re on it step open up the file that contains this model it should be in a href step hit the little pencil button in the upper right corner of the code box to begin editing the file step find this model and field in the file clicking into the box and searching with ctrl f can help you here once you find it we expect the field to lack the help text field typically used in django to explain what a field contains python effect dt fields datefield null true db column effect dt step in a separate tab open up the official state documentation and find the page that defines all the fields in this model step find the row in that table s definition table that spells out what this field contains if it lacks documentation note that in the ticket and close it now step return to the github tab step add the state s label explaining what s in the field to our field definition by inserting it a help text argument that should look something like this python effect dt fields datefield null true db column effect dt add a help text argument like the one here but put your string in instead help text the other values in record were effective as of this date step scroll down below the code box and describe the change you ve made in the commit message press the button below step review your changes and create a pull request submitting them to the core team for inclusion that s it mission accomplished | 0 |
125,021 | 26,577,567,931 | IssuesEvent | 2023-01-22 01:40:49 | hugh-mend/Java-Demo-Log4J | https://api.github.com/repos/hugh-mend/Java-Demo-Log4J | opened | Code Security Report: 37 high severity findings, 88 total findings | code security findings | # Code Security Report
**Latest Scan:** 2023-01-22 01:40am
**Total Findings:** 88
**Tested Project Files:** 102
**Detected Programming Languages:** 1
<!-- SAST-MANUAL-SCAN-START -->
- [ ] Check this box to manually trigger a scan
<!-- SAST-MANUAL-SCAN-END -->
## Language: Java
| Severity | CWE | Vulnerability Type | Count |
|-|-|-|-|
|<img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High|[CWE-94](https://cwe.mitre.org/data/definitions/94.html)|Code Injection|1|
|<img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High|[CWE-22](https://cwe.mitre.org/data/definitions/22.html)|Path/Directory Traversal|9|
|<img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High|[CWE-73](https://cwe.mitre.org/data/definitions/73.html)|File Manipulation|8|
|<img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High|[CWE-79](https://cwe.mitre.org/data/definitions/79.html)|Cross-Site Scripting|18|
|<img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High|[CWE-918](https://cwe.mitre.org/data/definitions/918.html)|Server Side Request Forgery|1|
|<img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium|[CWE-338](https://cwe.mitre.org/data/definitions/338.html)|Weak Pseudo-Random|2|
|<img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium|[CWE-244](https://cwe.mitre.org/data/definitions/244.html)|Heap Inspection|5|
|<img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium|[CWE-501](https://cwe.mitre.org/data/definitions/501.html)|Trust Boundary Violation|5|
|<img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium|[CWE-209](https://cwe.mitre.org/data/definitions/209.html)|Error Messages Information Exposure|15|
|<img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Low|[CWE-601](https://cwe.mitre.org/data/definitions/601.html)|Unvalidated/Open Redirect|14|
|<img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Low|[CWE-117](https://cwe.mitre.org/data/definitions/117.html)|Log Forging|4|
|<img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Low|[CWE-113](https://cwe.mitre.org/data/definitions/113.html)|HTTP Header Injection|1|
|<img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Low|[CWE-20](https://cwe.mitre.org/data/definitions/20.html)|Session Poisoning|5|
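Most of the high-severity volume above sits in path/directory traversal (CWE-22) and file manipulation (CWE-73), clustered in the upload servlets. As an illustrative sketch only (the scanned project is Java; this hypothetical helper merely demonstrates the containment check such findings call for):

```python
import os

def safe_join(base_dir: str, filename: str) -> str:
    """Join an untrusted filename onto base_dir, rejecting traversal (CWE-22)."""
    base = os.path.abspath(base_dir)
    candidate = os.path.abspath(os.path.join(base, filename))
    # A '..' sequence in filename would resolve outside base; reject it.
    if os.path.commonpath([base, candidate]) != base:
        raise ValueError(f"path traversal attempt: {filename!r}")
    return candidate
```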
### Details
> The below list presents the 20 most relevant findings that need your attention. To view information on the remaining findings, navigate to the [Mend SAST Application](https://saas.mend.io/sast/#/scans/edceb2f3-0cdd-480d-a378-8ae3450e6707/details).
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20>Code Injection (CWE-94) : 1</summary>
#### Findings
<details>
<summary>vulnerabilities/CodeInjectionServlet.java:65</summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/CodeInjectionServlet.java#L60-L65
<details>
<summary> Trace </summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/CodeInjectionServlet.java#L25
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/CodeInjectionServlet.java#L44
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/CodeInjectionServlet.java#L45
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/CodeInjectionServlet.java#L46
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/CodeInjectionServlet.java#L47
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/CodeInjectionServlet.java#L61
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/CodeInjectionServlet.java#L65
</details>
</details>
</details>
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20>Path/Directory Traversal (CWE-22) : 9</summary>
#### Findings
<details>
<summary>vulnerabilities/UnrestrictedExtensionUploadServlet.java:84</summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L79-L84
<details>
<summary> Trace </summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L69
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L57
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L59
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L76
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L84
</details>
</details>
<details>
<summary>vulnerabilities/UnrestrictedSizeUploadServlet.java:84</summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedSizeUploadServlet.java#L79-L84
<details>
<summary> Trace </summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedSizeUploadServlet.java#L70
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L57
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L59
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedSizeUploadServlet.java#L71
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedSizeUploadServlet.java#L84
</details>
</details>
<details>
<summary>vulnerabilities/NullByteInjectionServlet.java:46</summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/NullByteInjectionServlet.java#L41-L46
<details>
<summary> Trace </summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/NullByteInjectionServlet.java#L35
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/NullByteInjectionServlet.java#L40
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/NullByteInjectionServlet.java#L46
</details>
</details>
<details>
<summary>vulnerabilities/MailHeaderInjectionServlet.java:133</summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/MailHeaderInjectionServlet.java#L128-L133
<details>
<summary> Trace </summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/MailHeaderInjectionServlet.java#L125
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L57
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L59
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/MailHeaderInjectionServlet.java#L127
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/MailHeaderInjectionServlet.java#L133
</details>
</details>
<details>
<summary>vulnerabilities/UnrestrictedExtensionUploadServlet.java:135</summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L130-L135
<details>
<summary> Trace </summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L69
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L57
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L59
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L76
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L84
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L106
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L135
</details>
</details>
<details>
<summary>vulnerabilities/XEEandXXEServlet.java:196</summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/XEEandXXEServlet.java#L191-L196
<details>
<summary> Trace </summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/XEEandXXEServlet.java#L141
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L57
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L59
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/XEEandXXEServlet.java#L148
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/XEEandXXEServlet.java#L161
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/XEEandXXEServlet.java#L192
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/XEEandXXEServlet.java#L196
</details>
</details>
<details>
<summary>vulnerabilities/UnrestrictedSizeUploadServlet.java:114</summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedSizeUploadServlet.java#L109-L114
<details>
<summary> Trace </summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedSizeUploadServlet.java#L70
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L57
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L59
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedSizeUploadServlet.java#L71
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedSizeUploadServlet.java#L84
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedSizeUploadServlet.java#L111
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedSizeUploadServlet.java#L114
</details>
</details>
<details>
<summary>vulnerabilities/UnrestrictedSizeUploadServlet.java:127</summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedSizeUploadServlet.java#L122-L127
<details>
<summary> Trace </summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedSizeUploadServlet.java#L70
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L57
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L59
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedSizeUploadServlet.java#L71
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedSizeUploadServlet.java#L84
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedSizeUploadServlet.java#L111
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedSizeUploadServlet.java#L127
</details>
</details>
<details>
<summary>vulnerabilities/UnrestrictedExtensionUploadServlet.java:110</summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L105-L110
<details>
<summary> Trace </summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L69
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L57
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L59
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L76
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L84
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L106
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L110
</details>
</details>
</details>
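Every CWE-22 finding in this section traces a user-controlled file name into a filesystem sink. A minimal mitigation sketch for such flows (all names below are illustrative, not taken from the scanned project) is to resolve the candidate against the intended upload directory and reject anything that escapes it:

```java
import java.io.IOException;
import java.nio.file.Path;

// Hypothetical helper, not part of the scanned project: rejects
// destination paths that escape the intended base directory (CWE-22).
public final class SafePathResolver {

    private SafePathResolver() {}

    /**
     * Resolves a user-supplied name against baseDir, failing if the
     * normalized result lands outside baseDir.
     */
    public static Path resolveInside(Path baseDir, String userSuppliedName)
            throws IOException {
        Path base = baseDir.toAbsolutePath().normalize();
        // normalize() collapses "." and ".." segments before the check.
        Path candidate = base.resolve(userSuppliedName).normalize();
        if (!candidate.startsWith(base)) {
            throw new IOException("Path traversal attempt: " + userSuppliedName);
        }
        return candidate;
    }
}
```

Because the check runs on the normalized absolute path, an input such as `../../etc/passwd` is rejected even though the raw string never mentions the base directory.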
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20>File Manipulation (CWE-73) : 8</summary>
#### Findings
<details>
<summary>utils/MultiPartFileUtils.java:38</summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L33-L38
<details>
<summary> Trace </summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L37
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L38
</details>
</details>
<details>
<summary>utils/MultiPartFileUtils.java:38</summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L33-L38
<details>
<summary> Trace </summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L37
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L38
</details>
</details>
<details>
<summary>utils/MultiPartFileUtils.java:38</summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L33-L38
<details>
<summary> Trace </summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L37
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L38
</details>
</details>
<details>
<summary>vulnerabilities/MailHeaderInjectionServlet.java:142</summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/MailHeaderInjectionServlet.java#L137-L142
<details>
<summary> Trace </summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/MailHeaderInjectionServlet.java#L141
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/MailHeaderInjectionServlet.java#L142
</details>
</details>
<details>
<summary>utils/MultiPartFileUtils.java:38</summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L33-L38
<details>
<summary> Trace </summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L37
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L38
</details>
</details>
<details>
<summary>utils/MultiPartFileUtils.java:33</summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L28-L33
<details>
<summary> Trace </summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L69
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L57
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L59
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L76
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L81
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L28
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L33
</details>
</details>
<details>
<summary>utils/MultiPartFileUtils.java:33</summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L28-L33
<details>
<summary> Trace </summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedSizeUploadServlet.java#L70
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L57
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L59
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedSizeUploadServlet.java#L71
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedSizeUploadServlet.java#L80
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L28
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L33
</details>
</details>
<details>
<summary>utils/MultiPartFileUtils.java:33</summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L28-L33
<details>
<summary> Trace </summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/XEEandXXEServlet.java#L141
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L57
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L59
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/XEEandXXEServlet.java#L148
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/XEEandXXEServlet.java#L157
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L28
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L33
</details>
</details>
</details>
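The CWE-73 findings above stem from building a destination `File` directly from the multipart file name supplied by the client. One common hardening step (the class below is a hedged sketch with invented names, not the project's code) is to strip any path components and restrict the remaining characters before the name reaches the filesystem:

```java
import java.io.File;

// Hypothetical sketch, not part of the scanned project: neutralizes a
// client-supplied multipart file name before it is used to build a
// destination File (CWE-73).
public final class UploadNameSanitizer {

    private UploadNameSanitizer() {}

    /** Keeps only the final path segment and drops unsafe characters. */
    public static String sanitize(String submittedFileName) {
        // Some clients send a full client-side path; keep the last segment.
        String name = new File(submittedFileName).getName();
        // File only recognizes the local platform separator, so also cut at
        // both '/' and '\' explicitly.
        int cut = Math.max(name.lastIndexOf('/'), name.lastIndexOf('\\'));
        if (cut >= 0) {
            name = name.substring(cut + 1);
        }
        // Allowlist: letters, digits, dot, underscore, dash.
        name = name.replaceAll("[^A-Za-z0-9._-]", "_");
        if (name.isEmpty() || name.equals(".") || name.equals("..")) {
            throw new IllegalArgumentException(
                    "Unusable file name: " + submittedFileName);
        }
        return name;
    }
}
```

`File#getName` alone is not enough on POSIX hosts, where a Windows-style `..\..\evil.jsp` contains no recognized separator — hence the explicit check for both separator characters.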
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20>Cross-Site Scripting (CWE-79) : 2</summary>
#### Findings
<details>
<summary>servlets/AbstractServlet.java:94</summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/servlets/AbstractServlet.java#L89-L94
<details>
<summary> Trace </summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/performance/CreatingUnnecessaryObjectsServlet.java#L21
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/performance/CreatingUnnecessaryObjectsServlet.java#L28
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/performance/CreatingUnnecessaryObjectsServlet.java#L68
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/servlets/AbstractServlet.java#L31
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/servlets/AbstractServlet.java#L94
</details>
</details>
<details>
<summary>servlets/AbstractServlet.java:94</summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/servlets/AbstractServlet.java#L89-L94
<details>
<summary> Trace </summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/troubles/TruncationErrorServlet.java#L21
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/troubles/TruncationErrorServlet.java#L30
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/troubles/TruncationErrorServlet.java#L44
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/servlets/AbstractServlet.java#L31
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/servlets/AbstractServlet.java#L94
</details>
</details>
</details>
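Both CWE-79 findings trace a request parameter into the HTML that `AbstractServlet` writes back to the client. The usual fix is to encode untrusted text at the output point; the helper below is a minimal hand-rolled sketch (a vetted library such as the OWASP Java Encoder is preferable in production code):

```java
// Hypothetical utility, not the project's actual code: HTML-encodes
// untrusted text before it is embedded in a response body (CWE-79).
public final class HtmlEscaper {

    private HtmlEscaper() {}

    public static String escape(String untrusted) {
        if (untrusted == null) {
            return "";
        }
        StringBuilder sb = new StringBuilder(untrusted.length());
        for (int i = 0; i < untrusted.length(); i++) {
            char c = untrusted.charAt(i);
            switch (c) {
                case '&':  sb.append("&amp;");  break;
                case '<':  sb.append("&lt;");   break;
                case '>':  sb.append("&gt;");   break;
                case '"':  sb.append("&quot;"); break;
                case '\'': sb.append("&#39;");  break;
                default:   sb.append(c);
            }
        }
        return sb.toString();
    }
}
```

Encoding these five characters neutralizes tag and attribute injection in HTML body and quoted-attribute contexts; other contexts (JavaScript, URLs) need their own encoders.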
# Code Security Report: 37 high severity findings, 88 total findings
**Latest Scan:** 2023-01-22 01:40am
**Total Findings:** 88
**Tested Project Files:** 102
**Detected Programming Languages:** 1
<!-- SAST-MANUAL-SCAN-START -->
- [ ] Check this box to manually trigger a scan
<!-- SAST-MANUAL-SCAN-END -->
## Language: Java
| Severity | CWE | Vulnerability Type | Count |
|-|-|-|-|
|<img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High|[CWE-94](https://cwe.mitre.org/data/definitions/94.html)|Code Injection|1|
|<img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High|[CWE-22](https://cwe.mitre.org/data/definitions/22.html)|Path/Directory Traversal|9|
|<img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High|[CWE-73](https://cwe.mitre.org/data/definitions/73.html)|File Manipulation|8|
|<img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High|[CWE-79](https://cwe.mitre.org/data/definitions/79.html)|Cross-Site Scripting|18|
|<img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High|[CWE-918](https://cwe.mitre.org/data/definitions/918.html)|Server Side Request Forgery|1|
|<img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium|[CWE-338](https://cwe.mitre.org/data/definitions/338.html)|Weak Pseudo-Random|2|
|<img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium|[CWE-244](https://cwe.mitre.org/data/definitions/244.html)|Heap Inspection|5|
|<img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium|[CWE-501](https://cwe.mitre.org/data/definitions/501.html)|Trust Boundary Violation|5|
|<img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium|[CWE-209](https://cwe.mitre.org/data/definitions/209.html)|Error Messages Information Exposure|15|
|<img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Low|[CWE-601](https://cwe.mitre.org/data/definitions/601.html)|Unvalidated/Open Redirect|14|
|<img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Low|[CWE-117](https://cwe.mitre.org/data/definitions/117.html)|Log Forging|4|
|<img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Low|[CWE-113](https://cwe.mitre.org/data/definitions/113.html)|HTTP Header Injection|1|
|<img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Low|[CWE-20](https://cwe.mitre.org/data/definitions/20.html)|Session Poisoning|5|
### Details
> The list below presents the 20 most relevant findings that need your attention. To view information on the remaining findings, navigate to the [Mend SAST Application](https://saas.mend.io/sast/#/scans/edceb2f3-0cdd-480d-a378-8ae3450e6707/details).
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20>Code Injection (CWE-94) : 1</summary>
#### Findings
<details>
<summary>vulnerabilities/CodeInjectionServlet.java:65</summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/CodeInjectionServlet.java#L60-L65
<details>
<summary> Trace </summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/CodeInjectionServlet.java#L25
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/CodeInjectionServlet.java#L44
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/CodeInjectionServlet.java#L45
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/CodeInjectionServlet.java#L46
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/CodeInjectionServlet.java#L47
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/CodeInjectionServlet.java#L61
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/CodeInjectionServlet.java#L65
</details>
</details>
</details>
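The CWE-94 trace shows a request parameter flowing into a script-engine evaluation. Where the expected input format is known, the simplest remediation is to stop evaluating the input at all and accept only strings matching a strict allowlist. The validator below is a hedged sketch — the pattern is illustrative, not the project's actual input format:

```java
import java.util.regex.Pattern;

// Hedged sketch, names and pattern are illustrative: gate untrusted input
// before it can ever reach a script engine (CWE-94).
public final class ScriptInputValidator {

    // Permit only a simple object literal with one quoted key and value.
    private static final Pattern SAFE_OBJECT = Pattern.compile(
            "\\{\\s*\"[A-Za-z0-9_]+\"\\s*:\\s*\"[A-Za-z0-9_ ]*\"\\s*\\}");

    private ScriptInputValidator() {}

    public static boolean isSafe(String candidate) {
        return candidate != null && SAFE_OBJECT.matcher(candidate).matches();
    }
}
```

Anything that fails `isSafe` is rejected before it can reach an `eval`-style sink, which removes the injection surface rather than trying to sanitize arbitrary code.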
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20>Path/Directory Traversal (CWE-22) : 9</summary>
#### Findings
<details>
<summary>vulnerabilities/UnrestrictedExtensionUploadServlet.java:84</summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L79-L84
<details>
<summary> Trace </summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L69
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L57
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L59
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L76
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L84
</details>
</details>
<details>
<summary>vulnerabilities/UnrestrictedSizeUploadServlet.java:84</summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedSizeUploadServlet.java#L79-L84
<details>
<summary> Trace </summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedSizeUploadServlet.java#L70
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L57
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L59
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedSizeUploadServlet.java#L71
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedSizeUploadServlet.java#L84
</details>
</details>
<details>
<summary>vulnerabilities/NullByteInjectionServlet.java:46</summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/NullByteInjectionServlet.java#L41-L46
<details>
<summary> Trace </summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/NullByteInjectionServlet.java#L35
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/NullByteInjectionServlet.java#L40
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/NullByteInjectionServlet.java#L46
</details>
</details>
<details>
<summary>vulnerabilities/MailHeaderInjectionServlet.java:133</summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/MailHeaderInjectionServlet.java#L128-L133
<details>
<summary> Trace </summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/MailHeaderInjectionServlet.java#L125
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L57
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L59
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/MailHeaderInjectionServlet.java#L127
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/MailHeaderInjectionServlet.java#L133
</details>
</details>
<details>
<summary>vulnerabilities/UnrestrictedExtensionUploadServlet.java:135</summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L130-L135
<details>
<summary> Trace </summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L69
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L57
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L59
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L76
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L84
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L106
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L135
</details>
</details>
<details>
<summary>vulnerabilities/XEEandXXEServlet.java:196</summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/XEEandXXEServlet.java#L191-L196
<details>
<summary> Trace </summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/XEEandXXEServlet.java#L141
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L57
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L59
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/XEEandXXEServlet.java#L148
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/XEEandXXEServlet.java#L161
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/XEEandXXEServlet.java#L192
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/XEEandXXEServlet.java#L196
</details>
</details>
<details>
<summary>vulnerabilities/UnrestrictedSizeUploadServlet.java:114</summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedSizeUploadServlet.java#L109-L114
<details>
<summary> Trace </summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedSizeUploadServlet.java#L70
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L57
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L59
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedSizeUploadServlet.java#L71
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedSizeUploadServlet.java#L84
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedSizeUploadServlet.java#L111
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedSizeUploadServlet.java#L114
</details>
</details>
<details>
<summary>vulnerabilities/UnrestrictedSizeUploadServlet.java:127</summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedSizeUploadServlet.java#L122-L127
<details>
<summary> Trace </summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedSizeUploadServlet.java#L70
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L57
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L59
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedSizeUploadServlet.java#L71
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedSizeUploadServlet.java#L84
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedSizeUploadServlet.java#L111
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedSizeUploadServlet.java#L127
</details>
</details>
<details>
<summary>vulnerabilities/UnrestrictedExtensionUploadServlet.java:110</summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L105-L110
<details>
<summary> Trace </summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L69
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L57
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L59
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L76
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L84
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L106
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L110
</details>
</details>
</details>
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20>File Manipulation (CWE-73) : 8</summary>
#### Findings
<details>
<summary>utils/MultiPartFileUtils.java:38</summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L33-L38
<details>
<summary> Trace </summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L37
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L38
</details>
</details>
<details>
<summary>utils/MultiPartFileUtils.java:38</summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L33-L38
<details>
<summary> Trace </summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L37
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L38
</details>
</details>
<details>
<summary>utils/MultiPartFileUtils.java:38</summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L33-L38
<details>
<summary> Trace </summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L37
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L38
</details>
</details>
<details>
<summary>vulnerabilities/MailHeaderInjectionServlet.java:142</summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/MailHeaderInjectionServlet.java#L137-L142
<details>
<summary> Trace </summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/MailHeaderInjectionServlet.java#L141
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/MailHeaderInjectionServlet.java#L142
</details>
</details>
<details>
<summary>utils/MultiPartFileUtils.java:38</summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L33-L38
<details>
<summary> Trace </summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L37
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L38
</details>
</details>
<details>
<summary>utils/MultiPartFileUtils.java:33</summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L28-L33
<details>
<summary> Trace </summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L69
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L57
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L59
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L76
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L81
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L28
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L33
</details>
</details>
<details>
<summary>utils/MultiPartFileUtils.java:33</summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L28-L33
<details>
<summary> Trace </summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedSizeUploadServlet.java#L70
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L57
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L59
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedSizeUploadServlet.java#L71
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedSizeUploadServlet.java#L80
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L28
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L33
</details>
</details>
<details>
<summary>utils/MultiPartFileUtils.java:33</summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L28-L33
<details>
<summary> Trace </summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/XEEandXXEServlet.java#L141
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L57
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L59
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/XEEandXXEServlet.java#L148
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/vulnerabilities/XEEandXXEServlet.java#L157
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L28
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L33
</details>
</details>
</details>
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20>Cross-Site Scripting (CWE-79) : 2</summary>
#### Findings
<details>
<summary>servlets/AbstractServlet.java:94</summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/servlets/AbstractServlet.java#L89-L94
<details>
<summary> Trace </summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/performance/CreatingUnnecessaryObjectsServlet.java#L21
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/performance/CreatingUnnecessaryObjectsServlet.java#L28
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/performance/CreatingUnnecessaryObjectsServlet.java#L68
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/servlets/AbstractServlet.java#L31
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/servlets/AbstractServlet.java#L94
</details>
</details>
<details>
<summary>servlets/AbstractServlet.java:94</summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/servlets/AbstractServlet.java#L89-L94
<details>
<summary> Trace </summary>
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/troubles/TruncationErrorServlet.java#L21
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/troubles/TruncationErrorServlet.java#L30
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/troubles/TruncationErrorServlet.java#L44
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/servlets/AbstractServlet.java#L31
https://github.com/hugh-mend/Java-Demo-Log4J/blob/05dcf189b81da05c2da90ee1d184aa3cf974a4a0/src/main/java/org/t246osslab/easybuggy/core/servlets/AbstractServlet.java#L94
</details>
</details>
</details>
| non_main | code security report high severity findings total findings code security report latest scan total findings tested project files detected programming languages check this box to manually trigger a scan language java severity cwe vulnerability type count high injection high traversal high manipulation high scripting high side request forgery medium pseudo random medium inspection medium boundary violation medium messages information exposure low redirect low forging low header injection low poisoning details the below list presents the most relevant findings that need your attention to view information on the remaining findings navigate to the code injection cwe findings vulnerabilities codeinjectionservlet java trace path directory traversal cwe findings vulnerabilities unrestrictedextensionuploadservlet java trace vulnerabilities unrestrictedsizeuploadservlet java trace vulnerabilities nullbyteinjectionservlet java trace vulnerabilities mailheaderinjectionservlet java trace vulnerabilities unrestrictedextensionuploadservlet java trace vulnerabilities xeeandxxeservlet java trace vulnerabilities unrestrictedsizeuploadservlet java trace vulnerabilities unrestrictedsizeuploadservlet java trace vulnerabilities unrestrictedextensionuploadservlet java trace file manipulation cwe findings utils multipartfileutils java trace utils multipartfileutils java trace utils multipartfileutils java trace vulnerabilities mailheaderinjectionservlet java trace utils multipartfileutils java trace utils multipartfileutils java trace utils multipartfileutils java trace utils multipartfileutils java trace cross site scripting cwe findings servlets abstractservlet java trace servlets abstractservlet java trace | 0 |
48,501 | 7,435,165,974 | IssuesEvent | 2018-03-26 13:28:30 | livecli/livecli | https://api.github.com/repos/livecli/livecli | opened | Add Application for E2 receiver | documentation | From
https://github.com/livecli/ipk
- Add url to **Livecli Applications**
and/or
- move guide to the website | 1.0 | Add Application for E2 receiver - From
https://github.com/livecli/ipk
- Add url to **Livecli Applications**
and/or
- move guide to the website | non_main | add application for receiver from add url to livecli applications and or move guide to the website | 0 |
759,643 | 26,604,505,882 | IssuesEvent | 2023-01-23 18:11:23 | Accenture/sfmc-devtools | https://api.github.com/repos/Accenture/sfmc-devtools | closed | [FEATURE] refresh emails in active triggeredSends / journeys | enhancement c/asset c/triggeredSendDefinition PRIORITY | if you update an email, any triggeredSend that is currently active needs to be paused, published and started again.
i propose the command:
`refresh <bu> [type]` - pauses, publishes, and starts ALL triggered sends on given BU that are currently ACTIVE (started/running)
with **type** getting implemented for **triggeredSendDefinition** alone at this point, to which the method should default
additional options (potentially added later):
`refresh <bu> [type] "email:<email key>,tsd:<triggered send key>,journey:<interaction key>"` - pauses, publishes, and starts triggered sends that use the given email keys, have the given tsd key or are part of the given journey key
it would:
- take asset keys (content builder) as parameter
- caches the ids for these asset keys from the server
- caches triggeredSends from the server
- finds relevant triggeredSends that use the given email/asset keys
- then executes pause, publish, start on them. | 1.0 | [FEATURE] refresh emails in active triggeredSends / journeys - if you update an email, any triggeredSend that is currently active needs to be paused, published and started again.
i propose the command:
`refresh <bu> [type]` - pauses, publishes, and starts ALL triggered sends on given BU that are currently ACTIVE (started/running)
with **type** getting implemented for **triggeredSendDefinition** alone at this point, to which the method should default
additional options (potentially added later):
`refresh <bu> [type] "email:<email key>,tsd:<triggered send key>,journey:<interaction key>"` - pauses, publishes, and starts triggered sends that use the given email keys, have the given tsd key or are part of the given journey key
it would:
- take asset keys (content builder) as parameter
- caches the ids for these asset keys from the server
- caches triggeredSends from the server
- finds relevant triggeredSends that use the given email/asset keys
- then executes pause, publish, start on them. | non_main | refresh emails in active triggeredsends journeys if you update an email any triggeredsend that is currently active needs to be paused published and started again i propose the command refresh pause publish starts all triggered sends on given bu that are currently active started running with type getting implemented for triggeredsenddefinition alone at this point to which the method should default additional options potentially added later refresh email tsd journey pause publish starts triggered sends that use the given email keys have the given tsd key or are part of the given journey key it would take asset keys content builder as parameter caches the ids for these asset keys from the server caches triggeredsends from the server finds relevant triggeredsends that use the given email asset keys then executes pause publish start on them | 0 |
4,392 | 22,536,522,044 | IssuesEvent | 2022-06-25 09:54:16 | wkentaro/gdown | https://api.github.com/repos/wkentaro/gdown | closed | --remaining-ok flag | bug status: wip-by-maintainer | Hi 😊
I've encountered an issue when I use gdown:
~~~
gdown GDRIVE_FOLDER_URL -O /tmp/folder --folder
~~~
and then i got
~~~
The gdrive folder with url: GDRIVE_FOLDER_URL has at least 50 files, gdrive can't download more than this limit, if you are ok with this, please run again with --remaining-ok flag.
~~~
It worked when I used it about one month ago
even if I add --remaining-ok in my terminal, or use it from Python:
~~~
import gdown
gdown.download(GDRIVE_FOLDER_URL, remaining_ok = True)
~~~ | True | --remaining-ok flag - Hi 😊
I've encountered an issue when I use gdown:
~~~
gdown GDRIVE_FOLDER_URL -O /tmp/folder --folder
~~~
and then i got
~~~
The gdrive folder with url: GDRIVE_FOLDER_URL has at least 50 files, gdrive can't download more than this limit, if you are ok with this, please run again with --remaining-ok flag.
~~~
It worked when I used it about one month ago
even if I add --remaining-ok in my terminal, or use it from Python:
~~~
import gdown
gdown.download(GDRIVE_FOLDER_URL, remaining_ok = True)
~~~ | main | remaining ok flag hi 😊 i ve encountered a issue when i use gdown gdown gdrive folder url o tmp folder folder and then i got the gdrive folder with url gdrive folder url has at least files gdrive can t download more than this limit if you are ok with this please run again with remaining ok flag it worked when i use it about one month ago even i add remaining ok in my terminal or using python thing import gdown gdown download gdrive folder url remaining ok true | 1 |
133,803 | 12,553,744,634 | IssuesEvent | 2020-06-06 23:15:11 | molpopgen/fwdpy11 | https://api.github.com/repos/molpopgen/fwdpy11 | opened | Manual example has no output | documentation | The output from tstimeseries.rst doesn't show up. Must have something to do with blacken-docs? | 1.0 | Manual example has no output - The output from tstimeseries.rst doesn't show up. Must have something to do with blacken-docs? | non_main | manual example has no output the output from tstimeseries rst doesn t show up must have something to do with blacken docs | 0
829 | 4,467,012,395 | IssuesEvent | 2016-08-25 01:52:13 | gogits/gogs | https://api.github.com/repos/gogits/gogs | closed | FR: API to close already existing issue | status/assigned to maintainer status/needs feedback | Probably via edit, though it can have an endpoint of its own.
| True | FR: API to close already existing issue - Probably via edit, though it can have an endpoint of its own.
| main | fr api to close already existing issue probably via edit though it can have endpoint of its own | 1 |
5,028 | 25,801,862,753 | IssuesEvent | 2022-12-11 03:28:39 | deislabs/spiderlightning | https://api.github.com/repos/deislabs/spiderlightning | opened | fix caching on azure | 🐛 bug 🚧 maintainer issue | **Description of the bug**
Our caching has been working somewhat intermittently.
I've run our pipelines after an empty commit to see our caching take place. On the same agent, we got:
- a cache hit:
https://dev.azure.com/spiderlightning/slight/_build/results?buildId=357&view=logs&j=70fcc8e8-cc68-58a0-49dd-bf3991baaf6b&t=a1c8d2d5-f3e0-5740-cb25-d150119fd493
- a cache miss:
https://dev.azure.com/spiderlightning/slight/_build/results?buildId=357&view=logs&j=70fcc8e8-cc68-58a0-49dd-bf3991baaf6b&t=5cfd1f4a-b154-515d-6662-392410763baa
That said, most of them result in cache misses. Checking the caching post job, I see:
<img width="730" alt="image" src="https://user-images.githubusercontent.com/39843321/206884894-5aed3748-146b-4b48-bb7b-271fb085d025.png">
It should say:
<img width="885" alt="image" src="https://user-images.githubusercontent.com/39843321/206884905-f6ffa5ba-502c-4c97-ad00-88dff6a83aed.png">
I'm not too sure what's causing this issue. The keys are fine, and the path is correct — I've tried multiple configurations, and even changing to CacheBeta.
**To Reproduce**
n/a
**Additional context**
n/a | True | fix caching on azure - **Description of the bug**
Our caching has been working somewhat intermittently.
I've run our pipelines after an empty commit to see our caching take place. On the same agent, we got:
- a cache hit:
https://dev.azure.com/spiderlightning/slight/_build/results?buildId=357&view=logs&j=70fcc8e8-cc68-58a0-49dd-bf3991baaf6b&t=a1c8d2d5-f3e0-5740-cb25-d150119fd493
- a cache miss:
https://dev.azure.com/spiderlightning/slight/_build/results?buildId=357&view=logs&j=70fcc8e8-cc68-58a0-49dd-bf3991baaf6b&t=5cfd1f4a-b154-515d-6662-392410763baa
That said, most of them result in cache misses. Checking the caching post job, I see:
<img width="730" alt="image" src="https://user-images.githubusercontent.com/39843321/206884894-5aed3748-146b-4b48-bb7b-271fb085d025.png">
It should say:
<img width="885" alt="image" src="https://user-images.githubusercontent.com/39843321/206884905-f6ffa5ba-502c-4c97-ad00-88dff6a83aed.png">
I'm not too sure what's causing this issue. The keys are fine, and the path is correct — I've tried multiple configurations, and even changing to CacheBeta.
**To Reproduce**
n/a
**Additional context**
n/a | main | fix caching on azure description of the bug our caching has been working somewhat intermittently i ve ran our pipelines after an empty commit to see our caching take place on the same agent we got a cache hit a cache miss that said most of them result in cache misses checking the caching post job i see img width alt image src it should say img width alt image src i m not too sure what s causing this issue the keys are fine and the path is correct — i ve tried multiple configurations and even changing to cachebeta to reproduce n a additional context n a | 1 |
48,753 | 25,794,098,206 | IssuesEvent | 2022-12-10 11:07:20 | dotnet/runtime | https://api.github.com/repos/dotnet/runtime | opened | Consecutive range checks are not combined | tenet-performance | ### Description
Consider [the following code](https://sharplab.io/#v2:C4LghgzgtgPgxAOwK4BsVgEYoKYAIAmAlhJjgLABQlAAgIwBsu1ATLgIIBCAwgCIBK2YACdC2AG5gUlAN6Vc8pg1yEEwdkKFgAnmwDGu7BAgAKFcADaAXVyZd+AJRyFsigrdMA7DYx3zABmsAam9fWiCQ/HNmcNtIgGZLAG4neQBfSlSgA==) (inspired by https://github.com/dotnet/csharplang/issues/4082#issuecomment-1344991671):
```c#
static int ArrayAccess(int[] abcd)
{
return abcd[0] + abcd[1] + abcd[2] + abcd[3];
}
```
Note that it accesses the array multiple times at increasing indexes, which I think is a relatively common pattern. This results in one range check for every array access:
```asm
L0000: mov eax, [ecx+4]
L0003: test eax, eax
L0005: je short L0025
L0007: mov edx, [ecx+8]
L000a: cmp eax, 1
L000d: jbe short L0025
L000f: add edx, [ecx+0xc]
L0012: cmp eax, 2
L0015: jbe short L0025
L0017: add edx, [ecx+0x10]
L001a: cmp eax, 3
L001d: jbe short L0025
L001f: mov eax, edx
L0021: add eax, [ecx+0x14]
L0024: ret
L0025: call 0x71efa060
L002a: int3
```
This is annoying, because only the check for `abcd[3]` is necessary, since there are no side-effects between the array accesses. Even though these range checks are likely to be well predicted by the CPU, I think it would be nice if they could be elided by the JIT.
This may very well be a duplicate, but I couldn't find it. Or it may not be worth tracking, in which case, feel free to close.
### Configuration
Current SharpLab: Core CLR 7.0.22.51805 on x86
### Regression?
Not that I know of.
| True | Consecutive range checks are not combined - ### Description
Consider [the following code](https://sharplab.io/#v2:C4LghgzgtgPgxAOwK4BsVgEYoKYAIAmAlhJjgLABQlAAgIwBsu1ATLgIIBCAwgCIBK2YACdC2AG5gUlAN6Vc8pg1yEEwdkKFgAnmwDGu7BAgAKFcADaAXVyZd+AJRyFsigrdMA7DYx3zABmsAam9fWiCQ/HNmcNtIgGZLAG4neQBfSlSgA==) (inspired by https://github.com/dotnet/csharplang/issues/4082#issuecomment-1344991671):
```c#
static int ArrayAccess(int[] abcd)
{
return abcd[0] + abcd[1] + abcd[2] + abcd[3];
}
```
Note that it accesses the array multiple times at increasing indexes, which I think is a relatively common pattern. This results in one range check for every array access:
```asm
L0000: mov eax, [ecx+4]
L0003: test eax, eax
L0005: je short L0025
L0007: mov edx, [ecx+8]
L000a: cmp eax, 1
L000d: jbe short L0025
L000f: add edx, [ecx+0xc]
L0012: cmp eax, 2
L0015: jbe short L0025
L0017: add edx, [ecx+0x10]
L001a: cmp eax, 3
L001d: jbe short L0025
L001f: mov eax, edx
L0021: add eax, [ecx+0x14]
L0024: ret
L0025: call 0x71efa060
L002a: int3
```
This is annoying, because only the check for `abcd[3]` is necessary, since there are no side-effects between the array accesses. Even though these range checks are likely to be well predicted by the CPU, I think it would be nice if they could be elided by the JIT.
This may very well be a duplicate, but I couldn't find it. Or it may not be worth tracking, in which case, feel free to close.
### Configuration
Current SharpLab: Core CLR 7.0.22.51805 on x86
### Regression?
Not that I know of.
| non_main | consecutive range checks are not combined description consider inspired by c static int arrayaccess int abcd return abcd abcd abcd abcd note that it accesses the array multiple times at increasing indexes which i think is a relatively common pattern this results in one range check for every array access asm mov eax test eax eax je short mov edx cmp eax jbe short add edx cmp eax jbe short add edx cmp eax jbe short mov eax edx add eax ret call this is annoying because only the check for abcd is necessary since there are no side effects between the array accesses even though these range checks are likely to be well predicted by the cpu i think it would be nice if they could be elided by the jit this may very well be a duplicate but i couldn t find it or it may not be worth tracking in which case feel free to close configuration current sharplab core clr on regression not that i know of | 0 |
657,172 | 21,787,669,437 | IssuesEvent | 2022-05-14 11:56:54 | wso2/api-manager | https://api.github.com/repos/wso2/api-manager | opened | API Product - Created time incorrect | Type/Bug Priority/Normal | ### Description:
The timestamp and the ALT for the created time are wrong when creating an API Product.

<img width="1475" alt="Screenshot 2022-05-14 at 3 02 48 PM" src="https://user-images.githubusercontent.com/5195851/168424530-537b14b3-1bff-4998-83a6-e517b98710ad.png">
### Steps to reproduce:
I followed the steps in [1] to create an API Product and the overview page gives an incorrect timestamp and ALT for the created time.
[1] https://apim.docs.wso2.com/en/latest/design/create-api-product/create-api-product/
### Affected product version:
4.1.0
### Affected component:
<!-- Members can use Component/*** labels -->
### Environment details (with versions):
- OS: Mac OS
- Client:
- Env (Docker/K8s):
---
### Optional fields
#### Related issues:
<!-- Any related issues from this/other repositories-->
#### Suggested labels:
<!--Only to be used by non-members-->
#### Suggested assignees:
<!--Only to be used by non-members--> | 1.0 | API Product - Created time incorrect - ### Description:
The timestamp and the ALT for the created time are wrong when creating an API Product.

<img width="1475" alt="Screenshot 2022-05-14 at 3 02 48 PM" src="https://user-images.githubusercontent.com/5195851/168424530-537b14b3-1bff-4998-83a6-e517b98710ad.png">
### Steps to reproduce:
I followed the steps in [1] to create an API Product and the overview page gives an incorrect timestamp and ALT for the created time.
[1] https://apim.docs.wso2.com/en/latest/design/create-api-product/create-api-product/
### Affected product version:
4.1.0
### Affected component:
<!-- Members can use Component/*** labels -->
### Environment details (with versions):
- OS: Mac OS
- Client:
- Env (Docker/K8s):
---
### Optional fields
#### Related issues:
<!-- Any related issues from this/other repositories-->
#### Suggested labels:
<!--Only to be used by non-members-->
#### Suggested assignees:
<!--Only to be used by non-members--> | non_main | api product created time incorrect description the timestamp and the alt for the created time is wrong when creating an api product img width alt screenshot at pm src steps to reproduce i followed the steps in to create an api product and the overview page gives and incorrect timestamp and alt for the created time affected product version affected component environment details with versions os mac os client env docker optional fields related issues suggested labels suggested assignees | 0 |
3,726 | 15,440,696,773 | IssuesEvent | 2021-03-08 04:03:45 | i-am-gizm0/VHL-Improvements | https://api.github.com/repos/i-am-gizm0/VHL-Improvements | closed | Move CSS to its own file to inject | maintainence | This extension was originally a Tampermonkey script, so the CSS was injected within the script. CRX can inject CSS separately, which will clean up the source a bit. | True | Move CSS to its own file to inject - This extension was originally a Tampermonkey script, so the CSS was injected within the script. CRX can inject CSS separately, which will clean up the source a bit. | main | move css to its own file to inject this extension was originally a tampermonkey script so the css was injected within the script crx can inject css separately which will clean up the source a bit | 1 |
1,058 | 4,875,072,483 | IssuesEvent | 2016-11-16 08:16:20 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | git update fails every other time | affects_2.2 bug_report waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
git
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.2.0.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
The default which shipped with Fedora release 24 (Twenty Four).
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
N/A
##### SUMMARY
<!--- Explain the problem briefly -->
Git clone fails every other time, with this error message
```
TASK [clone icons] *************************************************************
fatal: [127.0.0.1]: FAILED! => {"changed": false, "failed": true, "module_stderr": "Shared connection to 127.0.0.1 closed.\r\n", "module_stdout": "Traceback (most recent call last):\r\n File \"/tmp/ansible_x7POFb/ansible_module_git.py\", line 1040, in <module>\r\n main()\r\n File \"/tmp/ansible_x7POFb/ansible_module_git.py\", line 994, in main\r\n result.update(changed=True, after=remote_head, msg='Local modifications exist')\r\nUnboundLocalError: local variable 'remote_head' referenced before assignment\r\n", "msg": "MODULE FAILURE"}
to retry, use: --limit @/home/l33tname/dotfiles/setup.retry
```
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes below -->
```
- hosts: local
tasks:
- name: clone icons
git: repo=https://github.com/jcubic/Clarity.git force=yes dest=/home/l33tname/.icons/Clarity
- name: config icons
command: ./configure chdir=/home/l33tname/.icons/Clarity
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
I expect that it works every time, not only every second time.
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
TASK [clone icons] *************************************************************
task path: /home/l33tname/dotfiles/git_wtf.yaml:4
Using module file /usr/lib/python2.7/site-packages/ansible/modules/core/source_control/git.py
<127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: None
<127.0.0.1> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/l33tname/.ansible/cp/ansible-ssh-%C 127.0.0.1 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1479049433.89-122334128883345 `" && echo ansible-tmp-1479049433.89-122334128883345="` echo $HOME/.ansible/tmp/ansible-tmp-1479049433.89-122334128883345 `" ) && sleep 0'"'"''
<127.0.0.1> PUT /tmp/tmpPx4qzT TO /home/l33tname/.ansible/tmp/ansible-tmp-1479049433.89-122334128883345/git.py
<127.0.0.1> SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/l33tname/.ansible/cp/ansible-ssh-%C '[127.0.0.1]'
<127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: None
<127.0.0.1> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/l33tname/.ansible/cp/ansible-ssh-%C 127.0.0.1 '/bin/sh -c '"'"'chmod u+x /home/l33tname/.ansible/tmp/ansible-tmp-1479049433.89-122334128883345/ /home/l33tname/.ansible/tmp/ansible-tmp-1479049433.89-122334128883345/git.py && sleep 0'"'"''
<127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: None
<127.0.0.1> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/l33tname/.ansible/cp/ansible-ssh-%C -tt 127.0.0.1 '/bin/sh -c '"'"'/usr/bin/python /home/l33tname/.ansible/tmp/ansible-tmp-1479049433.89-122334128883345/git.py; rm -rf "/home/l33tname/.ansible/tmp/ansible-tmp-1479049433.89-122334128883345/" > /dev/null 2>&1 && sleep 0'"'"''
fatal: [127.0.0.1]: FAILED! => {
"changed": false,
"failed": true,
"invocation": {
"module_name": "git"
},
"module_stderr": "OpenSSH_7.2p2, OpenSSL 1.0.2h-fips 3 May 2016\r\ndebug1: Reading configuration data /home/l33tname/.ssh/config\r\ndebug1: /home/l33tname/.ssh/config line 1: Applying options for 127.0.0.1\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 58: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 21589\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\nShared connection to 127.0.0.1 closed.\r\n",
"module_stdout": "Traceback (most recent call last):\r\n File \"/tmp/ansible_MhEEpB/ansible_module_git.py\", line 1040, in <module>\r\n main()\r\n File \"/tmp/ansible_MhEEpB/ansible_module_git.py\", line 994, in main\r\n result.update(changed=True, after=remote_head, msg='Local modifications exist')\r\nUnboundLocalError: local variable 'remote_head' referenced before assignment\r\n",
"msg": "MODULE FAILURE"
}
```
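The `UnboundLocalError` in the traceback is a general Python failure mode: a local name bound on only one branch of a function and read on another. A minimal, self-contained reproduction (a hypothetical reduction, not the actual `git.py` code):

```python
# Hypothetical reduction of the failure mode in the traceback above -- not
# the real git.py code.  remote_head is bound only on the clean-tree branch,
# so the dirty-tree branch reads a local name that was never assigned.
def update_result(local_mods):
    if not local_mods:
        remote_head = "abc123"  # only bound when the work tree is clean
    if local_mods:
        # raises UnboundLocalError: 'remote_head' referenced before assignment
        return {"changed": True, "after": remote_head,
                "msg": "Local modifications exist"}
    return {"changed": False, "after": remote_head}

try:
    update_result(local_mods=True)
except UnboundLocalError as exc:
    print("reproduced:", exc)
```

The fix in code shaped like this is to bind the name (or bail out) on every path before it is read.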
208,972 | 15,953,147,643 | IssuesEvent | 2021-04-15 12:05:09 | backend-br/vagas | https://api.github.com/repos/backend-br/vagas | closed | [Remoto][Startup] Backend NodeJS Pleno @ Jazida.com | Docker Express Kubernetes Linux NodeJS PJ Pleno Presencial Remoto Senior Testes automatizados | **Backend - NodeJS (PLENO or SENIOR)**
## Job description
JAZIDA is looking for a Backend developer to deliver improvements and apply innovative technologies, maximizing the system's effectiveness and efficiency. The role requires understanding and interpreting the technical, legal, commercial, economic, and corporate aspects that affect the development of a piece of software.
This role is a great opportunity to join the development team of a complex system, delivering to Jazida.com's customers tools capable of changing the way thousands of people work every day. This work has a strategic focus and nationwide responsibility.
We are pioneers in automating the management of mining-rights processes, and we changed the way Brazil's mining industry manages its processes.
Our history is marked by innovation and pioneering. We were the first to automate the reading and interpretation of the thousands of codes and texts published daily on government sites and in official gazettes that affect the lives of thousands of people across Brazil. We were the first to map this spatially and translate it into simplified form, removing bureaucratic barriers and the high demand for technical knowledge. Besides its impact on miners' lives, this efficiency translates into the democratization of a sector, generating jobs and income for thousands of people.
MAIN ACTIVITIES
- Design, code, test, operate, and troubleshoot
- Support decisions and the prioritization of demands in order to evolve current products and develop new products that generate value for our customers.
- Provide technical support to the rest of the development team
- Be responsible for bringing new ideas for solving problems
- Have a desire to take part in challenging projects that impact the lives of thousands of people
- Development of automated tests
- Team leadership and development
- Collaborate with other areas of the company to generate value
- Maintenance and care of a system in production
## Location
Office located in Brasília - DF, or remote
## Benefits
**What Jazida offers**
- Listed here is just a sample of our benefits package
- Home office
- A competitive salary package with an annual cash incentive award (profit sharing)
- Career development and educational assistance to further your goals
- Educational assistance for English
- A comprehensive leave policy covering all of life's important moments (annual leave, paid parental leave, sick leave, paid vacation)
- Support with access to a health plan and a gym
- Ongoing individual well-being support
#### What sets us apart
Work environment free of formalities, flexible hours, and a flat hierarchy
Freedom to make technical decisions
## Requirements
To succeed in this role you must have
- A desire to take part in challenging projects that impact the lives of thousands of people
- Strong knowledge of NodeJS: express, socket.io
- Experience with relational databases
- Knowledge of Linux
- 2 years of backend experience using NodeJS
- Well-developed knowledge of automated testing
- Proven leadership experience, including the ability to build and grow a team capable of delivering an innovative and relevant system
- A proven ability to nurture and develop the innovative capacity to build and deliver new applications of technology
- A proven ability to work collaboratively with other areas to achieve results that generate value beyond the development area, and to share knowledge, experience, and learnings
- A proven ability to maintain, effectively and with excellence, a production system used daily by thousands of people
- Knowledge of Docker and Kubernetes is a plus
## Hiring
Contractor (PJ), terms to be agreed
## How to apply
**Please send only mid-level (Pleno) or senior profiles**
Come be part of Jazida: send your CV to vagas@jazida.com with "Vaga Backend" added to the subject, and in the e-mail include your **salary expectation** and your **profile, [PLENO] OR [SENIOR]**
## Average feedback time
If you are selected, you will be contacted by our team.
Contact e-mail in case there is no reply: vagas@jazida.com
## Labels
- 🏢 Flexível
- 🏢 Presencial
- 🏢 Remoto
- 👨 Pleno
- 👴 Sênior
- ⚖️ A-Combinar
- ⚖️ PJ
1,456 | 6,303,845,739 | IssuesEvent | 2017-07-21 14:38:28 | enterprisemediawiki/meza | https://api.github.com/repos/enterprisemediawiki/meza | opened | Make unifyUserTables.php work; add tests | critical: bug difficulty: hard important: maintainability | I'm not sure it doesn't work, but I don't think it does. | True | Make unifyUserTables.php work; add tests - I'm not sure it doesn't work, but I don't think it does. | main | make unifyusertables php work add tests i m not sure it doesn t work but i don t think it does | 1 |
757 | 4,351,957,550 | IssuesEvent | 2016-08-01 03:16:11 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | Mount task skipping | bug_report waiting_on_maintainer | ##### ISSUE TYPE
Bug Report
##### COMPONENT NAME
mount
##### ANSIBLE VERSION
1.9.1
##### SUMMARY
I'm having an issue with a mount task to add an NFS target. I wasn't able to find any real documentation on doing this, so I'm not sure if it's supported at all. The mount command doesn't fail, or provide an error even with -vvvv so I'm not sure what it's doing or what's going wrong.
I'm running Ansible 1.9.1
Here is the task:
- mount: "name=/var/shared_files src='<Server IP>:/var/shared_files' fstype=nfs opts='defaults,noatime,_netdev' state=present"
I have also tried calling it as an action instead of mount, but the results were the same.
Here is the output:
TASK: [webservers | mount name=/var/shared_files src='<Server IP>:/var/shared_files' fstype=nfs opts='defaults,noatime,_netdev' state=present] ***
<server01> ESTABLISH CONNECTION FOR USER: root
<server01> REMOTE_MODULE mount name=/var/shared_files src='<Server IP>:/var/shared_files' fstype=nfs opts='defaults,noatime,_netdev' state=present CHECKMODE=True
<server01> EXEC ['ssh', '-C', '-tt', '-q', '-o', 'ControlMaster=auto', '-o', 'ControlPersist=60s', '-o', 'ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r', '-o', 'KbdInteractiveAuthentication=no', '-o', 'PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey', '-o', 'PasswordAuthentication=no', '-o', 'ConnectTimeout=10', 'server01', "/bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1434558002.33-10305240023730 && echo $HOME/.ansible/tmp/ansible-tmp-1434558002.33-10305240023730'"]
<server02> ESTABLISH CONNECTION FOR USER: root
<server02> REMOTE_MODULE mount name=/var/shared_files src='<Server IP>:/var/shared_files' fstype=nfs opts='defaults,noatime,_netdev' state=present CHECKMODE=True
<server02> EXEC ['ssh', '-C', '-tt', '-q', '-o', 'ControlMaster=auto', '-o', 'ControlPersist=60s', '-o', 'ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r', '-o', 'KbdInteractiveAuthentication=no', '-o', 'PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey', '-o', 'PasswordAuthentication=no', '-o', 'ConnectTimeout=10', 'server02', "/bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1434558002.34-226129697900819 && echo $HOME/.ansible/tmp/ansible-tmp-1434558002.34-226129697900819'"]
<server01> PUT /tmp/tmppujJbA TO /root/.ansible/tmp/ansible-tmp-1434558002.33-10305240023730/mount
<server02> PUT /tmp/tmpWlK9Sh TO /root/.ansible/tmp/ansible-tmp-1434558002.34-226129697900819/mount
<server02> EXEC ['ssh', '-C', '-tt', '-q', '-o', 'ControlMaster=auto', '-o', 'ControlPersist=60s', '-o', 'ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r', '-o', 'KbdInteractiveAuthentication=no', '-o', 'PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey', '-o', 'PasswordAuthentication=no', '-o', 'ConnectTimeout=10', 'server02', "/bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1434558002.34-226129697900819/mount; rm -rf /root/.ansible/tmp/ansible-tmp-1434558002.34-226129697900819/ >/dev/null 2>&1'"]
<server01> EXEC ['ssh', '-C', '-tt', '-q', '-o', 'ControlMaster=auto', '-o', 'ControlPersist=60s', '-o', 'ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r', '-o', 'KbdInteractiveAuthentication=no', '-o', 'PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey', '-o', 'PasswordAuthentication=no', '-o', 'ConnectTimeout=10', 'server01', "/bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1434558002.33-10305240023730/mount; rm -rf /root/.ansible/tmp/ansible-tmp-1434558002.33-10305240023730/ >/dev/null 2>&1'"] | True | Mount task skipping - ##### ISSUE TYPE
Bug Report
##### COMPONENT NAME
mount
##### ANSIBLE VERSION
1.9.1
##### SUMMARY
I'm having an issue with a mount task to add an NFS target. I wasn't able to find any real documentation on doing this, so I'm not sure if it's supported at all. The mount command doesn't fail, or provide an error even with -vvvv so I'm not sure what it's doing or what's going wrong.
I'm running Ansible 1.9.1
Here is the task:
- mount: "name=/var/shared_files src='<Server IP>:/var/shared_files' fstype=nfs opts='defaults,noatime,_netdev' state=present"
I have also tried calling it as an action instead of mount, but the results were the same.
Here is the output:
TASK: [webservers | mount name=/var/shared_files src='<Server IP>:/var/shared_files' fstype=nfs opts='defaults,noatime,_netdev' state=present] ***
<server01> ESTABLISH CONNECTION FOR USER: root
<server01> REMOTE_MODULE mount name=/var/shared_files src='<Server IP>:/var/shared_files' fstype=nfs opts='defaults,noatime,_netdev' state=present CHECKMODE=True
<server01> EXEC ['ssh', '-C', '-tt', '-q', '-o', 'ControlMaster=auto', '-o', 'ControlPersist=60s', '-o', 'ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r', '-o', 'KbdInteractiveAuthentication=no', '-o', 'PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey', '-o', 'PasswordAuthentication=no', '-o', 'ConnectTimeout=10', 'server01', "/bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1434558002.33-10305240023730 && echo $HOME/.ansible/tmp/ansible-tmp-1434558002.33-10305240023730'"]
<server02> ESTABLISH CONNECTION FOR USER: root
<server02> REMOTE_MODULE mount name=/var/shared_files src='<Server IP>:/var/shared_files' fstype=nfs opts='defaults,noatime,_netdev' state=present CHECKMODE=True
<server02> EXEC ['ssh', '-C', '-tt', '-q', '-o', 'ControlMaster=auto', '-o', 'ControlPersist=60s', '-o', 'ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r', '-o', 'KbdInteractiveAuthentication=no', '-o', 'PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey', '-o', 'PasswordAuthentication=no', '-o', 'ConnectTimeout=10', 'server02', "/bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1434558002.34-226129697900819 && echo $HOME/.ansible/tmp/ansible-tmp-1434558002.34-226129697900819'"]
<server01> PUT /tmp/tmppujJbA TO /root/.ansible/tmp/ansible-tmp-1434558002.33-10305240023730/mount
<server02> PUT /tmp/tmpWlK9Sh TO /root/.ansible/tmp/ansible-tmp-1434558002.34-226129697900819/mount
<server02> EXEC ['ssh', '-C', '-tt', '-q', '-o', 'ControlMaster=auto', '-o', 'ControlPersist=60s', '-o', 'ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r', '-o', 'KbdInteractiveAuthentication=no', '-o', 'PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey', '-o', 'PasswordAuthentication=no', '-o', 'ConnectTimeout=10', 'server02', "/bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1434558002.34-226129697900819/mount; rm -rf /root/.ansible/tmp/ansible-tmp-1434558002.34-226129697900819/ >/dev/null 2>&1'"]
<server01> EXEC ['ssh', '-C', '-tt', '-q', '-o', 'ControlMaster=auto', '-o', 'ControlPersist=60s', '-o', 'ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r', '-o', 'KbdInteractiveAuthentication=no', '-o', 'PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey', '-o', 'PasswordAuthentication=no', '-o', 'ConnectTimeout=10', 'server01', "/bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1434558002.33-10305240023730/mount; rm -rf /root/.ansible/tmp/ansible-tmp-1434558002.33-10305240023730/ >/dev/null 2>&1'"] | main | mount task skipping issue type bug report component name mount ansible version summary i m having an issue with a mount task to add an nfs target i wasn t able to find any real documentation on doing this so i m not sure if it s supported at all the mount command doesn t fail or provide an error even with vvvv so i m not sure what it s doing or what s going wrong i m running ansible here is the task mount name var shared files src var shared files fstype nfs opts defaults noatime netdev state present i have also tried calling it as an action instead of mount but the results were the same here is the output task establish connection for user root remote module mount name var shared files src var shared files fstype nfs opts defaults noatime netdev state present checkmode true exec establish connection for user root remote module mount name var shared files src var shared files fstype nfs opts defaults noatime netdev state present checkmode true exec put tmp tmppujjba to root ansible tmp ansible tmp mount put tmp to root ansible tmp ansible tmp mount exec exec | 1 |
4,629 | 23,980,943,011 | IssuesEvent | 2022-09-13 15:00:35 | exercism/python | https://api.github.com/repos/exercism/python | closed | [New Concept Exercise] : other-comprehensions | x:status/claimed x:size/large claimed 🐾 maintainer action required❕ new exercise ✨ | This issue describes how to implement the `other-comprehensions` concept exercise for the Python track.
## Getting started
**Please please please read the docs before starting.** Posting PRs without reading these docs will be a lot more frustrating for you during the review cycle, and exhaust Exercism's maintainers' time. So, before diving into the implementation, please read up on the following documents:
- [Contributing to Exercism](https://exercism.org/docs/building) | [Exercism and GitHub](https://exercism.org/docs/building/github) | [Contributor Pull Request Guide](https://exercism.org/docs/building/github/contributors-pull-request-guide)
- [What are those Weird Task Tags about?](https://exercism.org/docs/building/product/tasks)
- [Building Language Tracks: An Overview](https://exercism.org/docs/building/tracks)
- [What are Concepts?](https://exercism.org/docs/building/tracks/concepts)
- [Concept Exercise Specifications](https://exercism.org/docs/building/tracks/concept-exercises)
- [Concept Specifications](https://exercism.org/docs/building/tracks/concepts)
- [Exercism Formatting and Style Guide](https://exercism.org/docs/building/markdown/style-guide)
- [Exercism Markdown Specification](https://exercism.org/docs/building/markdown/markdown)
- [Reputation](https://exercism.org/docs/using/product/reputation)
## Goal
The goal of this exercise is to teach the syntax and variants of `set comprehensions` and `dict comprehensions` in Python.
## Learning objectives
- Understand how `set` and `dict` comprehensions relate to their underlying data structures and the `loop` + `append` method of creating/computing them.
- Create a `dict` comprehension from a `loop` + `append`
- Create a `set` comprehension from a `loop` + `append`
- Create a `dict` comprehension from `Lists`, `Sets`, `Tuples`, or other `iterables` (_such as `zip()` or `dict.items()`_)
- Create a `set` comprehension from `Lists`, `Sets`, `Tuples`, or other `iterables` (_such as `zip()` or `dict.items()`_)
- Use one or more conditions/operators/methods to filter comprehension inputs
- Use methods or logic to format the elements (output members) of the comprehension
- Create a _nested comprehension_ (of either flavor)
- Create a _nested comprehension_ (of either flavor) with one or more formatting or filtering conditions
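The core patterns in the objectives above can be sketched directly. A hypothetical illustration (the variable names are not part of the exercise) showing `loop` + `add`/assignment next to the equivalent `set` and `dict` comprehensions, including a filter condition:

```python
# Building the same collections two ways: an explicit loop, then
# the equivalent comprehension.
words = ["apple", "banana", "apple", "cherry"]

# loop + add
lengths_loop = set()
for word in words:
    lengths_loop.add(len(word))

# set comprehension (duplicates collapse automatically)
lengths_comp = {len(word) for word in words}

# dict comprehension with a condition filtering the input members
word_lengths = {word: len(word) for word in words if word != "apple"}
```

Both `lengths_loop` and `lengths_comp` end up as the same set, which is the equivalence the first learning objective is after.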
## Out of scope
- Memory and performance characteristics and optimizations
- `generators` and `generator expressions` in `other comprehensions`
- using the data structures in `collections` in combination with, or as part of a `set` or `dict` comprehension.
- Using the `assignment expression` (_walrus operator_) with either flavor of comprehension.
## Concepts
- `dict-comprehensions`
- `set-comprehensions`
- `comprehension syntax`
## Prerequisites
- `basics`
- `bools`
- `conditionals`
- `comparisons`
- `loops`
- `iteration`
## Resources to refer to
- [List Comprehensions (Python official docs)](https://docs.python.org/3/tutorial/datastructures.html#list-comprehensions)
- [Nested List Comprehensions (Python official docs)](https://docs.python.org/3/tutorial/datastructures.html#nested-list-comprehensions)
- [Comprehending Python's Comprehensions (Dan Bader)](https://dbader.org/blog/list-dict-set-comprehensions-in-python)
- [List and Dict Comprehensions in Python (Timothy Bramlett)](https://timothybramlett.com/List_and_Dict_Comprehensions_in_Python.html)
### Hints
- `List Comprehensions` section of the Python docs tutorial: [List Comprehensions](https://docs.python.org/3/tutorial/datastructures.html#list-comprehensions)
- This animated GIF from Trey Hunner: [List Comprehensions: The Movie](https://treyhunner.com/images/list-comprehension-condition.gif)
### After
- `comprehension syntax` for other data structures such as `sets` and `dictionaries`
- `generators` and `generator expressions`
- `generators` and `generator expressions` in `list comprehensions`
## Representer
No changes required.
## Analyzer
No changes required.
## Implementing
- Tests should be written using unittest.TestCase, and the test file named list_comprehensions_test.py.
- [How to Implement a Concept Exercise in Python](https://github.com/exercism/v3/blob/master/languages/python/reference/implementing-a-concept-exercise.md)
- [make-concept-exercise Utility](https://github.com/exercism/v3/tree/master/languages/python/bin)
## Help
If you have any questions while implementing the exercise, please post the questions as comments in this issue.
| True | [New Concept Exercise] : other-comprehensions - This issue describes how to implement the `other-comprehensions` concept exercise for the Python track.
## Getting started
**Please please please read the docs before starting.** Posting PRs without reading these docs will be a lot more frustrating for you during the review cycle, and exhaust Exercism's maintainers' time. So, before diving into the implementation, please read up on the following documents:
- [Contributing to Exercism](https://exercism.org/docs/building) | [Exercism and GitHub](https://exercism.org/docs/building/github) | [Contributor Pull Request Guide](https://exercism.org/docs/building/github/contributors-pull-request-guide)
- [What are those Weird Task Tags about?](https://exercism.org/docs/building/product/tasks)
- [Building Language Tracks: An Overview](https://exercism.org/docs/building/tracks)
- [What are Concepts?](https://exercism.org/docs/building/tracks/concepts)
- [Concept Exercise Specifications](https://exercism.org/docs/building/tracks/concept-exercises)
- [Concept Specifications](https://exercism.org/docs/building/tracks/concepts)
- [Exercism Formatting and Style Guide](https://exercism.org/docs/building/markdown/style-guide)
- [Exercism Markdown Specification](https://exercism.org/docs/building/markdown/markdown)
- [Reputation](https://exercism.org/docs/using/product/reputation)
## Goal
The goal of this exercise is to teach the syntax and variants of `set comprehensions` and `dict comprehensions` in Python.
## Learning objectives
- Understand how `set` and `dict` comprehensions relate to their underlying data structures and the `loop` + `append` method of creating/computing them.
- Create a `dict` comprehension from a `loop` + `append`
- Create a `set` comprehension from a `loop` + `append`
- Create a `dict` comprehension from `Lists`, `Sets`, `Tuples`, or other `iterables` (_such as `zip()` or `dict.items()`_)
- Create a `set` comprehension from `Lists`, `Sets`, `Tuples`, or other `iterables` (_such as `zip()` or `dict.items()`_)
- Use one or more conditions/operators/methods to filter comprehension inputs
- Use methods or logic to format the elements (output members) of the comprehension
- Create a _nested comprehension_ (of either flavor)
- Create a _nested comprehension_ (of either flavor) with one or more formatting or filtering conditions
## Out of scope
- Memory and performance characteristics and optimizations
- `generators` and `generator expressions` in `other comprehensions`
- using the data structures in `collections` in combination with, or as part of a `set` or `dict` comprehension.
- Using the `assignment expression` (_walrus operator_) with either flavor of comprehension.
## Concepts
- `dict-comprehensions`
- `set-comprehensions`
- `comprehension syntax`
## Prerequisites
- `basics`
- `bools`
- `conditionals`
- `comparisons`
- `loops`
- `iteration`
## Resources to refer to
- [List Comprehensions (Python official docs)](https://docs.python.org/3/tutorial/datastructures.html#list-comprehensions)
- [Nested List Comprehensions (Python official docs)](https://docs.python.org/3/tutorial/datastructures.html#nested-list-comprehensions)
- [Comprehending Python's Comprehensions (Dan Bader)](https://dbader.org/blog/list-dict-set-comprehensions-in-python)
- [List and Dict Comprehensions in Python (Timothy Bramlett)](https://timothybramlett.com/List_and_Dict_Comprehensions_in_Python.html)
### Hints
- `List Comprehensions` section of the Python docs tutorial: [List Comprehensions](https://docs.python.org/3/tutorial/datastructures.html#list-comprehensions)
- This animated GIF from Trey Hunner: [List Comprehensions: The Movie](https://treyhunner.com/images/list-comprehension-condition.gif)
### After
- `comprehension syntax` for other data structures such as `sets` and `dictionaries`
- `generators` and `generator expressions`
- `generators` and `generator expressions` in `list comprehensions`
## Representer
No changes required.
## Analyzer
No changes required.
## Implementing
- Tests should be written using unittest.TestCase, and the test file named list_comprehensions_test.py.
- [How to Implement a Concept Exercise in Python](https://github.com/exercism/v3/blob/master/languages/python/reference/implementing-a-concept-exercise.md)
- [make-concept-exercise Utility](https://github.com/exercism/v3/tree/master/languages/python/bin)
## Help
If you have any questions while implementing the exercise, please post the questions as comments in this issue.
| main | other comprehensions this issue describes how to implement the other comprehensions concept exercise for the python track getting started please please please read the docs before starting posting prs without reading these docs will be a lot more frustrating for you during the review cycle and exhaust exercism s maintainers time so before diving into the implementation please read up on the following documents goal the goal of this exercise is to teach the syntax and variants of set comprehensions and dict comprehensions in python learning objectives understand how set and dict comprehensions relate to their underlying data structures and the loop append method of creating computing them create a dict comprehension from a loop append create a set comprehension from a loop append create a dict comprehension from lists sets tuples or other iterables such as zip or dict items create a set comprehension from lists sets tuples or other iterables such as zip or dict items use one or more conditions operators methods to filter comprehension inputs use methods or logic to format the elements output members of the comprehension create a nested comprehension of either flavor create a nested comprehension of either flavor with one or more formatting or filtering conditions out of scope memory and performance characteristics and optimizations generators and generator expressions in other comprehensions using the data structures in collections in combination with or as part of a set or dict comprehension using the assignment expression walrus operator with either flavor of comprehension concepts dict comprehensions set comprehensions comprehension syntax prerequisites basics bools conditionals comparisons loops iteration resources to refer to hints list comprehensions section of the python docs tutorial this animated gif from trey hunner after comprehension syntax for other data structures such as sets and dictionaries generators and generator expressions generators and generator expressions in list comprehensions representer no changes required analyzer no changes required implementing tests should be written using unittest testcase and the test file named list comprehensions test py help if you have any questions while implementing the exercise please post the questions as comments in this issue | 1
1,952 | 6,665,042,113 | IssuesEvent | 2017-10-02 22:40:14 | duckduckgo/zeroclickinfo-spice | https://api.github.com/repos/duckduckgo/zeroclickinfo-spice | opened | Goodreads: Switch API to HTTPS before Dec 4 to prevent breakage | Bug Maintainer Input Requested Status: Needs a Developer | From Goodreads:
>To better protect user privacy and security, we will be enabling HTTP to HTTPS redirection for all requests to goodreads.com on December 4th.
>
>To prevent your applications from breaking, please do one of the following before December 4th:
>
> - Make sure your applications support HTTPS redirects
> - **Update your applications to make only HTTPS requests**
------
IA Page: http://duck.co/ia/view/goodreads
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @iammrigank | True | Goodreads: Switch API to HTTPS before Dec 4 to prevent breakage - From Goodreads:
>To better protect user privacy and security, we will be enabling HTTP to HTTPS redirection for all requests to goodreads.com on December 4th.
>
>To prevent your applications from breaking, please do one of the following before December 4th:
>
> - Make sure your applications support HTTPS redirects
> - **Update your applications to make only HTTPS requests**
------
IA Page: http://duck.co/ia/view/goodreads
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @iammrigank | main | goodreads switch api to https before dec to prevent breakage from goodreads to better protect user privacy and security we will be enabling http to https redirection for all requests to goodreads com on december to prevent your applications from breaking please do one of the following before december make sure your applications support https redirects update your applications to make only https requests ia page iammrigank | 1
3,048 | 11,387,823,520 | IssuesEvent | 2020-01-29 15:39:32 | precice/precice | https://api.github.com/repos/precice/precice | opened | Rename timestep to time window | maintainability | Leftover TODO from down the rabbit hole of #619
Update all versions of "timestep" in preCICE, e.g. `timestepsLeft`, `maxTimesteps` and `timesteps` in `BaseCouplingScheme`. | True | Rename timestep to time window - Leftover TODO from down the rabbit hole of #619
Update all versions of "timestep" in preCICE, e.g. `timestepsLeft`, `maxTimesteps` and `timesteps` in `BaseCouplingScheme`. | main | rename timestep to time window leftover todo from down the rabbit hole of update all versions of timestep in precice e g timestepsleft maxtimesteps and timesteps in basecouplingscheme | 1 |
3,248 | 12,371,555,415 | IssuesEvent | 2020-05-18 18:46:34 | cloud-gov/product | https://api.github.com/repos/cloud-gov/product | closed | As an operator, I want to remove cg-dashboard (5/18) | contractor-3-maintainability operations | A bug in stratos is currently requiring us to continue to run cg-dashboard: https://github.com/cloudfoundry/stratos/issues/4103
Once this issue is fixed, deployed and validated in cg, we should remove cg-dashboard.
## Acceptance Criteria
* [x] GIVEN The stratos bug is fixed
AND stratos is updated in production with the bug fix
WHEN a user accesses dashboard-deprecated.fr.cloud.gov
AND looks in the docs for user management information
THEN they are redirected to stratos
AND the docs match stratos
---
## Security considerations
Be sure the stratos fix meets our needs. For example, a user should not be able to see and search for all users in a system. Instead, they should have to enter an exact username when setting a role.
## Implementation sketch
* [x] Remove CircleCI access to the cg-dashboard repo (remove the webhook)
* [x] Archive the cg-dashboard repo
* [x] Update docs and remove references to dashboard-deprecated
* [x] Add a redirect for the legacy dashboard to the current Stratos dashboard
* [x] Validate Stratos dashboard docs on docs.cloud.gov show the correct procedure to manage users | True | As an operator, I want to remove cg-dashboard (5/18) - A bug in stratos is currently requiring us to continue to run cg-dashboard: https://github.com/cloudfoundry/stratos/issues/4103
Once this issue is fixed, deployed and validated in cg, we should remove cg-dashboard.
## Acceptance Criteria
* [x] GIVEN The stratos bug is fixed
AND stratos is updated in production with the bug fix
WHEN a user accesses dashboard-deprecated.fr.cloud.gov
AND looks in the docs for user management information
THEN they are redirected to stratos
AND the docs match stratos
---
## Security considerations
Be sure the stratos fix meets our needs. For example, a user should not be able to see and search for all users in a system. Instead, they should have to enter an exact username when setting a role.
## Implementation sketch
* [x] Remove CircleCI access to the cg-dashboard repo (remove the webhook)
* [x] Archive the cg-dashboard repo
* [x] Update docs and remove references to dashboard-deprecated
* [x] Add a redirect for the legacy dashboard to the current Stratos dashboard
* [x] Validate Stratos dashboard docs on docs.cloud.gov show the correct procedure to manage users | main | as an operator i want to remove cg dashboard a bug in stratos is currently requiring us to continue to run cg dashboard once this issue is fixed deployed and validated in cg we should remove cg dashboard acceptance criteria given the stratos bug is fixed and stratos is updated in production with the bug fix when a user accesses dashboard deprecated fr cloud gov and looks in the docs for user management information then they are redirected to stratos and the docs match stratos security considerations be sure the stratos fix meets our needs for example a user should not be able to see and search for all users in a system instead they should have to enter an exact username when setting a role implementation sketch remove circleci access to the cg dashboard repo remove the webhook archive the cg dashboard repo update docs and remove references to dashboard deprecated add a redirect for the legacy dashboard to the current stratos dashboard validate stratos dashboard docs on docs cloud gov show the correct procedure to manage users | 1 |
63,750 | 8,691,434,949 | IssuesEvent | 2018-12-04 01:20:04 | kubernetes-sigs/contributor-playground | https://api.github.com/repos/kubernetes-sigs/contributor-playground | closed | Tomeyday's first issue test | kind/documentation sig/contributor-experience ¯\_(ツ)_/¯ | My first issue test
/kind documentation
/sig contributor-experience | 1.0 | Tomeyday's first issue test - My first issue test
/kind documentation
/sig contributor-experience | non_main | tomeyday s first issue test my first issue test kind documentation sig contributor experience | 0 |
5,266 | 26,632,863,752 | IssuesEvent | 2023-01-24 19:15:39 | makubacki/mu_devops | https://api.github.com/repos/makubacki/mu_devops | closed | [Bug]: Test 2 | state:needs-triage state:needs-maintainer-feedback state:wont-fix type:bug urgency:high | ### Is there an existing issue for this?
- [X] I have searched existing issues
### Current Behavior
Test
### Expected Behavior
Test
### Steps To Reproduce
Test
### Build Environment
```markdown
- OS(s): Test
- Tool Chain(s): Test
- Targets Impacted: Test
```
### Version Information
```text
Test
```
### Urgency
High
### Are you going to fix this?
I will fix it
### Do you need maintainer feedback?
Maintainer feedback requested
### Anything else?
_No response_ | True | [Bug]: Test 2 - ### Is there an existing issue for this?
- [X] I have searched existing issues
### Current Behavior
Test
### Expected Behavior
Test
### Steps To Reproduce
Test
### Build Environment
```markdown
- OS(s): Test
- Tool Chain(s): Test
- Targets Impacted: Test
```
### Version Information
```text
Test
```
### Urgency
High
### Are you going to fix this?
I will fix it
### Do you need maintainer feedback?
Maintainer feedback requested
### Anything else?
_No response_ | main | test is there an existing issue for this i have searched existing issues current behavior test expected behavior test steps to reproduce test build environment markdown os s test tool chain s test targets impacted test version information text test urgency high are you going to fix this i will fix it do you need maintainer feedback maintainer feedback requested anything else no response | 1 |
4,061 | 18,983,547,835 | IssuesEvent | 2021-11-21 10:16:41 | svengreb/wand | https://api.github.com/repos/svengreb/wand | closed | Insufficient repository fetch-depth for action workflows | type-bug context-workflow scope-maintainability scope-quality | The [GitHub action workflows][1] use the [`actions/checkout` action][2] to fetch the repository that triggered the workflow. However, by default only the history of the latest commit is fetched, which results in errors when _wand_ tries to extract repository metadata information like the amount of commits ahead of the latest commit. As an example this can be seen when [running the `bootstrap` command in the `test` job of the `ci-go` workflow][5] which fails with an `object not found` error because the history only contains a single commit.
To fix this problem `actions/checkout` provides an option to [fetch all history for all tags and branches][3] which will be used to prevent errors like this in the pipeline.
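The option referenced above can be sketched as a workflow step. A minimal illustration, assuming `actions/checkout@v2` and the `fetch-depth` input described in the action's documentation:

```yaml
# Checkout step with full history so repository metadata
# (e.g. commits ahead of the latest tag) can be computed.
- uses: actions/checkout@v2
  with:
    # 0 fetches all history for all branches and tags;
    # the default of 1 fetches only the triggering commit.
    fetch-depth: 0
```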
[1]: https://github.com/svengreb/wand/tree/9caf10f9d3b0c97e1f6c18b29c175e71764b0ece/.github/workflows
[2]: https://github.com/actions/checkout
[3]: https://github.com/actions/checkout#Fetch-all-history-for-all-tags-and-branches
[4]: https://github.com/svengreb/wand/blob/cabd635c4ec73680b1776e7c536feca16643b00b/magefile.go#L136
[5]: https://github.com/svengreb/wand/runs/4275275079?check_suite_focus=true
| True | Insufficient repository fetch-depth for action workflows - The [GitHub action workflows][1] use the [`actions/checkout` action][2] to fetch the repository that triggered the workflow. However, by default only the history of the latest commit is fetched, which results in errors when _wand_ tries to extract repository metadata information like the amount of commits ahead of the latest commit. As an example this can be seen when [running the `bootstrap` command in the `test` job of the `ci-go` workflow][5] which fails with an `object not found` error because the history only contains a single commit.
To fix this problem `actions/checkout` provides an option to [fetch all history for all tags and branches][3] which will be used to prevent errors like this in the pipeline.
[1]: https://github.com/svengreb/wand/tree/9caf10f9d3b0c97e1f6c18b29c175e71764b0ece/.github/workflows
[2]: https://github.com/actions/checkout
[3]: https://github.com/actions/checkout#Fetch-all-history-for-all-tags-and-branches
[4]: https://github.com/svengreb/wand/blob/cabd635c4ec73680b1776e7c536feca16643b00b/magefile.go#L136
[5]: https://github.com/svengreb/wand/runs/4275275079?check_suite_focus=true
| main | insufficient repository fetch depth for action workflows the using the to fetch the repository that triggered the workflow however by default only the history of the latest commit is fetched which results in errors when wand tries to extract repository metadata information like the amount of commits ahead of the latest commit as an example this can be seen when which fails with an object not found error because the history only contains a single commit to fix this problem action checkout provides an option to which will be used to prevent errors like this in the pipeline | 1 |
4,604 | 23,849,858,772 | IssuesEvent | 2022-09-06 16:52:49 | ocsf/ocsf-schema | https://api.github.com/repos/ocsf/ocsf-schema | closed | Finalize list of core objects, classes, categories | maintainers | Initial release tracking for finalizing list of core objects, classes, and categories. | True | Finalize list of core objects, classes, categories - Initial release tracking for finalizing list of core objects, classes, and categories. | main | finalize list of core objects classes categories initial release tracking for finalizing list of core objects classes and categories | 1 |
1,760 | 6,574,997,582 | IssuesEvent | 2017-09-11 14:43:54 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | ec2_ami: AttributeError: 'BlockDeviceType' object has no attribute 'encrypted' | affects_2.1 aws bug_report cloud waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
ec2_ami
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.1.2.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
doesn't work from: Ubuntu 14.04, with python2.7-boto 2.20.1-2ubuntu2
works from: Ubuntu 16.04, with 2.38.0-1ubuntu1
to: Ubuntu 16.04 on AWS
##### SUMMARY
ec2_ami doesn't work on Ubuntu 14.04, works fine on 16.04
I suspect python-boto might be the problem. 14.04 uses 2.20.1-2ubuntu2, 16.04 uses 2.38.0-1ubuntu1
<!--- Explain the problem briefly -->
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes below -->
```
- ec2_ami:
instance_id: "{{ awsInstanceId }}"
region: "{{ awsRegion }}"
ec2_access_key: "{{ hostvars[apiHost]['ec2_access_key'] }}"
ec2_secret_key: "{{ hostvars[apiHost]['ec2_secret_key'] }}"
wait: true
name: "{{gitsha}}-{{templateName}}"
wait_timeout: 3600
register: ami
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
fatal: [production-worker-template.clara.io]: FAILED! => {"changed": false, "failed": true, "module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_Uk8_mk/ansible_module_ec2_ami.py\", line 560, in <module>\n main()\n File \"/tmp/ansible_Uk8_mk/ansible_module_ec2_ami.py\", line 552, in main\n create_image(module, ec2)\n File \"/tmp/ansible_Uk8_mk/ansible_module_ec2_ami.py\", line 419, in create_image\n module.exit_json(msg=\"AMI creation operation complete\", changed=True, **get_ami_info(img))\n File \"/tmp/ansible_Uk8_mk/ansible_module_ec2_ami.py\", line 331, in get_ami_info\n block_device_mapping=get_block_device_mapping(image),\n File \"/tmp/ansible_Uk8_mk/ansible_module_ec2_ami.py\", line 318, in get_block_device_mapping\n 'encrypted': bdm[device_name].encrypted,\nAttributeError: 'BlockDeviceType' object has no attribute 'encrypted'\n", "module_stdout": "", "msg": "MODULE FAILURE"}
```
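For context on the traceback above: the crash comes from reading `bdm[device_name].encrypted` unconditionally, and older boto releases such as the 2.20.x series ship a `BlockDeviceType` without that attribute. A hypothetical sketch (stand-in classes, not the real boto types) of the kind of `getattr` fallback a fix could use:

```python
# Stand-ins illustrating the attribute difference between boto versions.
class OldBlockDeviceType:          # like boto 2.20.x: no `encrypted`
    volume_type = "gp2"

class NewBlockDeviceType(OldBlockDeviceType):  # like boto 2.38.x
    encrypted = False

for bdm in (OldBlockDeviceType(), NewBlockDeviceType()):
    # getattr with a default degrades to None on old boto
    # instead of raising AttributeError.
    encrypted = getattr(bdm, "encrypted", None)
    print(type(bdm).__name__, encrypted)
```

This only illustrates why the error appears on Ubuntu 14.04's python-boto 2.20.1 but not on 16.04's 2.38.0; upgrading boto or guarding the attribute access are both plausible remedies.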
| True | ec2_ami: AttributeError: 'BlockDeviceType' object has no attribute 'encrypted' - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
ec2_ami
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.1.2.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
doesn't work from: Ubuntu 14.04, with python2.7-boto 2.20.1-2ubuntu2
works from: Ubuntu 16.04, with 2.38.0-1ubuntu1
to: Ubuntu 16.04 on AWS
##### SUMMARY
ec2_ami doesn't work on Ubuntu 14.04, works fine on 16.04
I suspect python-boto might be the problem. 14.04 uses 2.20.1-2ubuntu2, 16.04 uses 2.38.0-1ubuntu1
<!--- Explain the problem briefly -->
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes below -->
```
- ec2_ami:
instance_id: "{{ awsInstanceId }}"
region: "{{ awsRegion }}"
ec2_access_key: "{{ hostvars[apiHost]['ec2_access_key'] }}"
ec2_secret_key: "{{ hostvars[apiHost]['ec2_secret_key'] }}"
wait: true
name: "{{gitsha}}-{{templateName}}"
wait_timeout: 3600
register: ami
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
fatal: [production-worker-template.clara.io]: FAILED! => {"changed": false, "failed": true, "module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_Uk8_mk/ansible_module_ec2_ami.py\", line 560, in <module>\n main()\n File \"/tmp/ansible_Uk8_mk/ansible_module_ec2_ami.py\", line 552, in main\n create_image(module, ec2)\n File \"/tmp/ansible_Uk8_mk/ansible_module_ec2_ami.py\", line 419, in create_image\n module.exit_json(msg=\"AMI creation operation complete\", changed=True, **get_ami_info(img))\n File \"/tmp/ansible_Uk8_mk/ansible_module_ec2_ami.py\", line 331, in get_ami_info\n block_device_mapping=get_block_device_mapping(image),\n File \"/tmp/ansible_Uk8_mk/ansible_module_ec2_ami.py\", line 318, in get_block_device_mapping\n 'encrypted': bdm[device_name].encrypted,\nAttributeError: 'BlockDeviceType' object has no attribute 'encrypted'\n", "module_stdout": "", "msg": "MODULE FAILURE"}
```
| main | ami attributeerror blockdevicetype object has no attribute encrypted issue type bug report component name ami ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific doesn t work from ubuntu with boto works from ubuntu with to ubuntu on aws summary ami doesn t work on ubuntu works fine on i suspect python boto might be the problem uses uses steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used ami instance id awsinstanceid region awsregion access key hostvars secret key hostvars wait true name gitsha templatename wait timeout register ami expected results actual results fatal failed changed false failed true module stderr traceback most recent call last n file tmp ansible mk ansible module ami py line in n main n file tmp ansible mk ansible module ami py line in main n create image module n file tmp ansible mk ansible module ami py line in create image n module exit json msg ami creation operation complete changed true get ami info img n file tmp ansible mk ansible module ami py line in get ami info n block device mapping get block device mapping image n file tmp ansible mk ansible module ami py line in get block device mapping n encrypted bdm encrypted nattributeerror blockdevicetype object has no attribute encrypted n module stdout msg module failure | 1 |
2,443 | 8,639,852,076 | IssuesEvent | 2018-11-23 22:05:16 | F5OEO/rpitx | https://api.github.com/repos/F5OEO/rpitx | closed | Bad quality, suggestions? | V1 related (not maintained) | Heya! I'm transmitting on 88Mhz but the audio is really bad quality, I have an antena (copper wire soldered) on the pin. Any suggestions?
Even without the homemade antena the audio is crap
I normaly do:
rpitx -i /home/pi/SDRSharp_20180924_181735Z_AF.wav -f 88000
| True | Bad quality, suggestions? - Heya! I'm transmitting on 88Mhz but the audio is really bad quality, I have an antena (copper wire soldered) on the pin. Any suggestions?
Even without the homemade antena the audio is crap
I normaly do:
rpitx -i /home/pi/SDRSharp_20180924_181735Z_AF.wav -f 88000
| main | bad quality suggestions heya i m transmitting on but the audio is really bad quality i have an antena copper wire soldered on the pin any suggestions even without the homemade antena the audio is crap i normaly do rpitx i home pi sdrsharp af wav f | 1 |
3,896 | 17,333,369,809 | IssuesEvent | 2021-07-28 07:06:06 | skytable/skytable | https://api.github.com/repos/skytable/skytable | closed | Feature: Binary data type | A-independent C-Model C-actions C-enhancement C-protocol D-server S-waiting-on-maintainers | **Description**
Implement an action to support the binary data type as mentioned in the [Protocol](https://docs.skytable.io/next/protocol/data-types) | True | Feature: Binary data type - **Description**
Implement an action to support the binary data type as mentioned in the [Protocol](https://docs.skytable.io/next/protocol/data-types) | main | feature binary data type description implement an action to support the binary data type as mentioned in the | 1 |
5,592 | 28,014,497,412 | IssuesEvent | 2023-03-27 21:18:55 | beyarkay/eskom-calendar | https://api.github.com/repos/beyarkay/eskom-calendar | closed | Missing schedules in Drakenstein, WC (Val de Vie) | waiting-on-maintainer waiting-on-investigation missing-area-schedule | Hi,
I don't see a loadshedding schedule for Paarl. Can you please help? | True | Missing schedules in Drakenstein, WC (Val de Vie) - Hi,
I don't see a loadshedding schedule for Paarl. Can you please help? | main | missing schedules in drakenstein wc val de vie hi i don t see a loadshedding schedule for paarl can you please help | 1 |
3,437 | 13,210,696,312 | IssuesEvent | 2020-08-15 18:20:23 | ansible/ansible | https://api.github.com/repos/ansible/ansible | closed | Pear module hangs for module installs expecting answer(s) | affects_2.0 bot_closed bug collection collection:community.general has_pr module needs_collection_redirect needs_maintainer packaging support:community | From @cmacrae on 2015-12-22T14:55:54Z
##### Issue Type:
Bug Report
##### Component Name:
pear module
##### Ansible Version:
Running `ansible --version` reports: `2.0.0`
Built from: `2.0.0-0.7.rc2` commit `cc98528ecbadaff6d2401207c0fa002607768216`
##### Ansible Configuration:
No applicable settings
##### Environment:
Control node: Fedora 23
Destination node: Centos 7
##### Summary:
The `pear` module, for installing PHP modules via pear/pecl seems to hang when installing any modules that require input from the user at compile/install time.
For instance, I'm trying to deploy the `pecl/apc` module with the following task:
``` yaml
- name: Ensure the APC PHP package is present via Pear
pear: name=pecl/apc state=present
```
This task hangs indefinitely.
If you install `pecl/apc` via the shell, it will prompt for several inputs from the user, like so:
```
downloading APC-3.1.13.tgz ...
Starting to download APC-3.1.13.tgz (171,591 bytes)
.....................................done: 171,591 bytes
55 source files, building
running: phpize
Configuring for:
PHP Api Version: 20100412
Zend Module Api No: 20100525
Zend Extension Api No: 220100525
Enable internal debugging in APC [no] :
Enable per request file info about files used from the APC cache [no] :
Enable spin locks (EXPERIMENTAL) [no] :
Enable memory protection (EXPERIMENTAL) [no] :
Enable pthread mutexes (default) [no] :
Enable pthread read/write locks (EXPERIMENTAL) [yes] :
building in /var/tmp/pear-build-rootwfMfn6/APC-3.1.13
....< *snip>....
```
Here, I had simply accepted the defaults.
When Ansible runs with the above task, it issues the following command on the destination system:
`/usr/bin/php -C -q -d include_path=/usr/share/pear -d date.timezone=UTC -d output_buffering=1 -d variables_order=EGPCS -d safe_mode=0 -d register_argc_argv=On -d open_basedir= -d auto_prepend_file= -d auto_append_file= /usr/share/pear/pearcmd.php install pecl/apc`
As stated previously, this process just hangs.
Inspecting the process by attaching with `strace` shows that it's waiting for user input (denoted by `read`):
```
[root@my-host ~]# strace -p 31801
Process 31801 attached
read(0,
```
##### Steps To Reproduce:
Run the following task:
``` yaml
- name: Ensure the APC PHP package is present via Pear
pear: name=pecl/apc state=present
```
##### Expected Results:
Ansible checks to see if the PHP module is present, if not, it installs it, regardless of needing user input (perhaps accepting the default options presented by the module would be the best approach here?).
##### Actual Results:
Ansible hangs indefinitely.
##### Thanks
Thanks in advance for any help on this!
Please do let me know if I can provide any further information that'd be of help.
Copied from original issue: ansible/ansible-modules-extras#1418
| True | Pear module hangs for module installs expecting answer(s) - From @cmacrae on 2015-12-22T14:55:54Z
##### Issue Type:
Bug Report
##### Component Name:
pear module
##### Ansible Version:
Running `ansible --version` reports: `2.0.0`
Built from: `2.0.0-0.7.rc2` commit `cc98528ecbadaff6d2401207c0fa002607768216`
##### Ansible Configuration:
No applicable settings
##### Environment:
Control node: Fedora 23
Destination node: Centos 7
##### Summary:
The `pear` module, for installing PHP modules via pear/pecl seems to hang when installing any modules that require input from the user at compile/install time.
For instance, I'm trying to deploy the `pecl/apc` module with the following task:
``` yaml
- name: Ensure the APC PHP package is present via Pear
pear: name=pecl/apc state=present
```
This task hangs indefinitely.
If you install `pecl/apc` via the shell, it will prompt for several inputs from the user, like so:
```
downloading APC-3.1.13.tgz ...
Starting to download APC-3.1.13.tgz (171,591 bytes)
.....................................done: 171,591 bytes
55 source files, building
running: phpize
Configuring for:
PHP Api Version: 20100412
Zend Module Api No: 20100525
Zend Extension Api No: 220100525
Enable internal debugging in APC [no] :
Enable per request file info about files used from the APC cache [no] :
Enable spin locks (EXPERIMENTAL) [no] :
Enable memory protection (EXPERIMENTAL) [no] :
Enable pthread mutexes (default) [no] :
Enable pthread read/write locks (EXPERIMENTAL) [yes] :
building in /var/tmp/pear-build-rootwfMfn6/APC-3.1.13
....< *snip>....
```
Here, I had simply accepted the defaults.
When Ansible runs with the above task, it issues the following command on the destination system:
`/usr/bin/php -C -q -d include_path=/usr/share/pear -d date.timezone=UTC -d output_buffering=1 -d variables_order=EGPCS -d safe_mode=0 -d register_argc_argv=On -d open_basedir= -d auto_prepend_file= -d auto_append_file= /usr/share/pear/pearcmd.php install pecl/apc`
As stated previously, this process just hangs.
Inspecting the process by attaching with `strace` shows that it's waiting for user input (denoted by `read`):
```
[root@my-host ~]# strace -p 31801
Process 31801 attached
read(0,
```
##### Steps To Reproduce:
Run the following task:
``` yaml
- name: Ensure the APC PHP package is present via Pear
pear: name=pecl/apc state=present
```
##### Expected Results:
Ansible checks to see if the PHP module is present, if not, it installs it, regardless of needing user input (perhaps accepting the default options presented by the module would be the best approach here?).
##### Actual Results:
Ansible hangs indefinitely.
##### Thanks
Thanks in advance for any help on this!
Please do let me know if I can provide any further information that'd be of help.
Copied from original issue: ansible/ansible-modules-extras#1418
| main | pear module hangs for module installs expecting answer s from cmacrae on issue type bug report component name pear module ansible version running ansible version reports built from commit ansible configuration no applicable settings environment control node fedora destination node centos summary the pear module for installing php modules via pear pecl seems to hang when installing any modules that require input from the user at compile install time for instance i m trying to deploy the pecl apc module with the following task yaml name ensure the apc php package is present via pear pear name pecl apc state present this task hangs indefinitely if you install pecl apc via the shell it will prompt for several inputs from the user like so downloading apc tgz starting to download apc tgz bytes done bytes source files building running phpize configuring for php api version zend module api no zend extension api no enable internal debugging in apc enable per request file info about files used from the apc cache enable spin locks experimental enable memory protection experimental enable pthread mutexes default enable pthread read write locks experimental building in var tmp pear build apc here i had simply accepted the defaults when ansible runs with the above task it issues the following command on the destination system usr bin php c q d include path usr share pear d date timezone utc d output buffering d variables order egpcs d safe mode d register argc argv on d open basedir d auto prepend file d auto append file usr share pear pearcmd php install pecl apc as stated previously this process just hangs inspecting the process by attaching with strace shows that it s waiting for user input denoted by read strace p process attached read steps to reproduce run the following task yaml name ensure the apc php package is present via pear pear name pecl apc state present expected results ansible checks to see if the php module is present if not it installs it regardless 
of needing user input perhaps accepting the default options presented by the module would be the best approach here actual results ansible hangs indefinitely thanks thanks in advance for any help on this please do let me know if i can provide any further information that d be of help copied from original issue ansible ansible modules extras | 1 |
550,823 | 16,132,941,533 | IssuesEvent | 2021-04-29 08:08:06 | CS-SI/eodag | https://api.github.com/repos/CS-SI/eodag | closed | Warning required when `items_per_page` in a search is set to a number higher than the known provider's limit | enhancement priority::2 | ```python
from eodag import EODataAccessGateway
dag = EODataAccessGateway('user_conf.yml')
dag.set_preferred_provider("mundi")
search_criteria = dict(
productType='S2_MSI_L1C',
start='2021-03-01',
end='2021-03-31',
geom={"lonmin": 1, "latmin": 42, "lonmax": 5, "latmax": 46},
)
search_results, total_count = dag.search(**search_criteria, items_per_page=100)
```
The snippet indicates that 314 products are available, and returns 50 products while 100 were requested. This is because `mundi` has a limit per page of 50 results. Since this limit is now known to EODAG (`max_items_per_page: 50`), it might be interesting to emit a warning message when `items_per_page` > `max_items_per_page` (when available). | 1.0 | Warning required when `items_per_page` in a search is set to a number higher than the known provider's limit - ```python
from eodag import EODataAccessGateway
dag = EODataAccessGateway('user_conf.yml')
dag.set_preferred_provider("mundi")
search_criteria = dict(
productType='S2_MSI_L1C',
start='2021-03-01',
end='2021-03-31',
geom={"lonmin": 1, "latmin": 42, "lonmax": 5, "latmax": 46},
)
search_results, total_count = dag.search(**search_criteria, items_per_page=100)
```
The snippet indicates that 314 products are available, and returns 50 products while 100 were requested. This is because `mundi` has a limit per page of 50 results. Since this limit is now known to EODAG (`max_items_per_page: 50`), it might be interesting to emit a warning message when `items_per_page` > `max_items_per_page` (when available). | non_main | warning required when items per page in a search is set to a number higher than the known provider s limit python from eodag import eodataaccessgateway dag eodataaccessgateway user conf yml dag set preferred provider mundi search criteria dict producttype msi start end geom lonmin latmin lonmax latmax search results total count dag search search criteria items per page the snippet indicates that products are available and returns products while were requested this is because mundi has a limit per page of results since this limit is now known to eodag max items per page it might be interesting to emit a warning message when items per page max items per page when available | 0 |
2,536 | 8,657,436,740 | IssuesEvent | 2018-11-27 21:19:31 | Kapeli/Dash-User-Contributions | https://api.github.com/repos/Kapeli/Dash-User-Contributions | closed | Redux Docset maintainer needed | needs maintainer | I can no longer have time to maintain this docset and I am looking for additional contributors to assist. My repo is located at [https://github.com/epitaphmike/redux-dash](https://github.com/epitaphmike/redux-dash). If this is something you are interested in helping with please reach out. Thank you. | True | Redux Docset maintainer needed - I can no longer have time to maintain this docset and I am looking for additional contributors to assist. My repo is located at [https://github.com/epitaphmike/redux-dash](https://github.com/epitaphmike/redux-dash). If this is something you are interested in helping with please reach out. Thank you. | main | redux docset maintainer needed i can no longer have time to maintain this docset and i am looking for additional contributors to assist my repo is located at if this is something you are interested in helping with please reach out thank you | 1 |
84,026 | 3,647,503,567 | IssuesEvent | 2016-02-16 01:13:42 | gophish/gophish | https://api.github.com/repos/gophish/gophish | opened | Implement the ability to "Copy" Email Templates | enhancement med-priority | Would like the ability to copy email templates so the user doesn't have to re-create them. | 1.0 | Implement the ability to "Copy" Email Templates - Would like the ability to copy email templates so the user doesn't have to re-create them. | non_main | implement the ability to copy email templates would like the ability to copy email templates so the user doesn t have to re create them | 0 |
735 | 4,326,381,978 | IssuesEvent | 2016-07-26 06:00:31 | Particular/PlatformInstaller | https://api.github.com/repos/Particular/PlatformInstaller | closed | Split out the NSB pre-requisities into seperate options | Tag: Maintainer Prio Type: Feature | The NSB prerequisites in the PI aren't really prerequisites any more for the platform, they are options that the developer may want to install. We should split out the perf counters, dtc and msmq setups into discrete options and not label as prerequisites.
| True | Split out the NSB pre-requisities into seperate options - The NSB prerequisites in the PI aren't really prerequisites any more for the platform, they are options that the developer may want to install. We should split out the perf counters, dtc and msmq setups into discrete options and not label as prerequisites.
| main | split out the nsb pre requisities into seperate options the nsb prerequisites in the pi aren t really prerequisites any more for the platform they are options that the developer may want to install we should split out the perf counters dtc and msmq setups into discrete options and not label as prerequisites | 1 |
1,618 | 6,572,644,447 | IssuesEvent | 2017-09-11 04:01:39 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | documentation error for sl_vm module | affects_2.1 cloud docs_report waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Documentation Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
sl_vm
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
2.1.2
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
##### SUMMARY
<!--- Explain the problem briefly -->
In the documentation it says option **wait_timeout** but it should be **wait_time** indeed.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes below -->
```
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with high verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
```
| True | documentation error for sl_vm module - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Documentation Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
sl_vm
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
2.1.2
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
##### SUMMARY
<!--- Explain the problem briefly -->
In the documentation it says option **wait_timeout** but it should be **wait_time** indeed.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes below -->
```
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with high verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
```
| main | documentation error for sl vm module issue type documentation report component name sl vm ansible version configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific summary in the documentation it says option wait timeout but it should be wait time indeed steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used expected results actual results | 1 |
305,266 | 26,374,432,413 | IssuesEvent | 2023-01-12 00:20:11 | yugabyte/yugabyte-db | https://api.github.com/repos/yugabyte/yugabyte-db | closed | [DocDB] flaky test: YBTsCliTest.TestVModuleUpdate failing in alma8 release | kind/bug kind/failing-test area/docdb priority/high | Jira Link: [DB-4405](https://yugabyte.atlassian.net/browse/DB-4405)
### Description
Started failing since https://github.com/yugabyte/yugabyte-db/commit/f7f55d51621f81199c276a122807fc45a8d55778 | 1.0 | [DocDB] flaky test: YBTsCliTest.TestVModuleUpdate failing in alma8 release - Jira Link: [DB-4405](https://yugabyte.atlassian.net/browse/DB-4405)
### Description
Started failing since https://github.com/yugabyte/yugabyte-db/commit/f7f55d51621f81199c276a122807fc45a8d55778 | non_main | flaky test ybtsclitest testvmoduleupdate failing in release jira link description started failing since | 0 |
17,021 | 22,391,522,355 | IssuesEvent | 2022-06-17 08:11:48 | nodejs/node | https://api.github.com/repos/nodejs/node | closed | Outdated list of architectures for `process.arch`? | doc process | ### Affected URL(s)
https://nodejs.org/api/process.html
### Description of the problem
I'm trying to figure out which architectures node supports since I publish binary executables [esbuild](https://esbuild.github.io/) for various platforms. [The documentation](https://nodejs.org/api/process.html) says this:
> ## `process.arch`
>
> The operating system CPU architecture for which the Node.js binary was compiled. Possible values are: `'arm'`, `'arm64'`, `'ia32'`, `'mips'`,`'mipsel'`, `'ppc'`, `'ppc64'`, `'s390'`, `'s390x'`, `'x32'`, and `'x64'`.
However, [the code](https://github.com/nodejs/node/blob/bd86e5186a33803aa9283b9a4c6946da33b67511/configure.py#L49-L51) says this:
> ```py
> valid_arch = ('arm', 'arm64', 'ia32', 'mips', 'mipsel', 'mips64el', 'ppc',
> 'ppc64', 'x32','x64', 'x86', 'x86_64', 's390x', 'riscv64',
> 'loong64')
> ```
These are the differences:
```patch
arm
arm64
ia32
+loong64
mips
+mips64el
mipsel
ppc
ppc64
+riscv64
-s390
s390x
x32
x64
+x86_64
+x86
```
Is the documentation outdated? Are all architectures in that code officially supported by node, or only some of them?
| 1.0 | Outdated list of architectures for `process.arch`? - ### Affected URL(s)
https://nodejs.org/api/process.html
### Description of the problem
I'm trying to figure out which architectures node supports since I publish binary executables [esbuild](https://esbuild.github.io/) for various platforms. [The documentation](https://nodejs.org/api/process.html) says this:
> ## `process.arch`
>
> The operating system CPU architecture for which the Node.js binary was compiled. Possible values are: `'arm'`, `'arm64'`, `'ia32'`, `'mips'`,`'mipsel'`, `'ppc'`, `'ppc64'`, `'s390'`, `'s390x'`, `'x32'`, and `'x64'`.
However, [the code](https://github.com/nodejs/node/blob/bd86e5186a33803aa9283b9a4c6946da33b67511/configure.py#L49-L51) says this:
> ```py
> valid_arch = ('arm', 'arm64', 'ia32', 'mips', 'mipsel', 'mips64el', 'ppc',
> 'ppc64', 'x32','x64', 'x86', 'x86_64', 's390x', 'riscv64',
> 'loong64')
> ```
These are the differences:
```patch
arm
arm64
ia32
+loong64
mips
+mips64el
mipsel
ppc
ppc64
+riscv64
-s390
s390x
x32
x64
+x86_64
+x86
```
Is the documentation outdated? Are all architectures in that code officially supported by node, or only some of them?
| non_main | outdated list of architectures for process arch affected url s description of the problem i m trying to figure out which architectures node supports since i publish binary executables for various platforms says this process arch the operating system cpu architecture for which the node js binary was compiled possible values are arm mips mipsel ppc and however says this py valid arch arm mips mipsel ppc these are the differences patch arm mips mipsel ppc is the documentation outdated are all architectures in that code officially supported by node or only some of them | 0 |
315,456 | 27,074,959,593 | IssuesEvent | 2023-02-14 09:56:42 | akademia-envelo-3/meetek-front | https://api.github.com/repos/akademia-envelo-3/meetek-front | closed | FT009 - feat: Widok wszystkich kategorii | frontend test ok admin story feat sp3 | **story**
* Admin:
Podczas wejścia na stronę z kategoriami, widocznej w menu bocznym, widzi:
1. Wszystkie kategorie
2. Wyszukiwarkę
3. Przycisk do edycji oraz aktywacji /deaktywacji kategorii
4. Przycisk do dodawania nowych kategorii
* User:
Podczas wejścia na stronę z kategoriami, widocznej w menu bocznym, widzi:
1. Wszystkie kategorie
2. Wyszukiwarkę
3. Przycisk do sugerowania kategorii - po kliknięciu wyświetla się modal - na razie bez logiki sugerowania.
**dodatkowe informacje**
1. Stworzenie service do pobierania danych z bazy, które są obsługiwane przez effecty w ngrx oraz poprzez reducer dodawane do lokalnego stora.
**taski blokujące**
#44
**makiety**
[figma admin](https://www.figma.com/file/zEu3sivMxSapxiM9ehUs0s/MEETEK?node-id=304%3A16883&t=vUCybF0VMvaUlP2a-4)
[figma user](https://www.figma.com/file/zEu3sivMxSapxiM9ehUs0s/MEETEK?node-id=250%3A7933&t=BY7UF1vBhstMaRPX-4)
[Design system](https://www.figma.com/file/zEu3sivMxSapxiM9ehUs0s/MEETEK?node-id=325%3A17463&t=JYt3CImqlQ4GD88X-0)
**kryteria akceptacji**
1. Widok odwzorowany zgodnie z makietami
2. Wszystkie widoki muszą być responsywne
3. Kategorie poprawnie się wyświetlają
4. Logika zgodna z wyżej opisanym story.
| 1.0 | FT009 - feat: Widok wszystkich kategorii - **story**
* Admin:
Podczas wejścia na stronę z kategoriami, widocznej w menu bocznym, widzi:
1. Wszystkie kategorie
2. Wyszukiwarkę
3. Przycisk do edycji oraz aktywacji /deaktywacji kategorii
4. Przycisk do dodawania nowych kategorii
* User:
Podczas wejścia na stronę z kategoriami, widocznej w menu bocznym, widzi:
1. Wszystkie kategorie
2. Wyszukiwarkę
3. Przycisk do sugerowania kategorii - po kliknięciu wyświetla się modal - na razie bez logiki sugerowania.
**dodatkowe informacje**
1. Stworzenie service do pobierania danych z bazy, które są obsługiwane przez effecty w ngrx oraz poprzez reducer dodawane do lokalnego stora.
**taski blokujące**
#44
**makiety**
[figma admin](https://www.figma.com/file/zEu3sivMxSapxiM9ehUs0s/MEETEK?node-id=304%3A16883&t=vUCybF0VMvaUlP2a-4)
[figma user](https://www.figma.com/file/zEu3sivMxSapxiM9ehUs0s/MEETEK?node-id=250%3A7933&t=BY7UF1vBhstMaRPX-4)
[Design system](https://www.figma.com/file/zEu3sivMxSapxiM9ehUs0s/MEETEK?node-id=325%3A17463&t=JYt3CImqlQ4GD88X-0)
**kryteria akceptacji**
1. Widok odwzorowany zgodnie z makietami
2. Wszystkie widoki muszą być responsywne
3. Kategorie poprawnie się wyświetlają
4. Logika zgodna z wyżej opisanym story.
| non_main | feat widok wszystkich kategorii story admin podczas wejścia na stronę z kategoriami widocznej w menu bocznym widzi wszystkie kategorie wyszukiwarkę przycisk do edycji oraz aktywacji deaktywacji kategorii przycisk do dodawania nowych kategorii user podczas wejścia na stronę z kategoriami widocznej w menu bocznym widzi wszystkie kategorie wyszukiwarkę przycisk do sugerowania kategorii po kliknięciu wyświetla się modal na razie bez logiki sugerowania dodatkowe informacje stworzenie service do pobierania danych z bazy które są obsługiwane przez effecty w ngrx oraz poprzez reducer dodawane do lokalnego stora taski blokujące makiety kryteria akceptacji widok odwzorowany zgodnie z makietami wszystkie widoki muszą być responsywne kategorie poprawnie się wyświetlają logika zgodna z wyżej opisanym story | 0 |
264,023 | 23,096,518,818 | IssuesEvent | 2022-07-26 20:08:52 | TerryCavanagh/diceydungeons.com | https://api.github.com/repos/TerryCavanagh/diceydungeons.com | closed | runscript always takes self to refer to the player, even if called by the target | modding issue reported in v1.9 (Testing Round 1) | For example if an enemy equipment or "on start turn" fighter hook calls for blah.hx, and blah.hx is ``self.hp -= 5;``, the *player* will lose 5 health, not the target. If it's ever ambiguous who would be self and who would be target, self and target should both be null (players can supply fighters as arguments for scripts, right?) | 1.0 | runscript always takes self to refer to the player, even if called by the target - For example if an enemy equipment or "on start turn" fighter hook calls for blah.hx, and blah.hx is ``self.hp -= 5;``, the *player* will lose 5 health, not the target. If it's ever ambiguous who would be self and who would be target, self and target should both be null (players can supply fighters as arguments for scripts, right?) | non_main | runscript always takes self to refer to the player even if called by the target for example if an enemy equipment or on start turn fighter hook calls for blah hx and blah hx is self hp the player will lose health not the target if it s ever ambiguous who would be self and who would be target self and target should both be null players can supply fighters as arguments for scripts right | 0 |
13,928 | 3,787,597,256 | IssuesEvent | 2016-03-21 11:23:05 | eugenwintersberger/pnitools | https://api.github.com/repos/eugenwintersberger/pnitools | closed | Collate texinfo documentation | auto-migrated documentation Priority-Medium Type-Enhancement | ```
Currently the info pages for the individual programs use distinct nodes for
program options and examples. This is not necessary and makes the documentation
hard to read. Thus every program should be completely described by a single
node.
```
Original issue reported on code.google.com by `eugen.wintersberger@gmail.com` on 14 Oct 2014 at 9:23 | 1.0 | Collate texinfo documentation - ```
Currently the info pages for the individual programs use distinct nodes for
program options and examples. This is not necessary and makes the documentation
hard to read. Thus every program should be completely described by a single
node.
```
Original issue reported on code.google.com by `eugen.wintersberger@gmail.com` on 14 Oct 2014 at 9:23 | non_main | collate texinfo documentation currently the info pages for the individual programs use distinct nodes for program options and examples this is not necessary and makes the documentation hard to read thus every program should be completely described by a single node original issue reported on code google com by eugen wintersberger gmail com on oct at | 0 |
5,577 | 27,943,803,422 | IssuesEvent | 2023-03-24 00:10:55 | Homebrew/homebrew-cask | https://api.github.com/repos/Homebrew/homebrew-cask | closed | citrix-workspace native Apple Silicon support | awaiting maintainer feedback stale | ### Verification
- [X] I understand that [if I ignore these instructions, my issue may be closed without review](https://github.com/Homebrew/homebrew-cask/blob/master/doc/faq/closing_issues_without_review.md).
- [X] I have retried my command with `--force`.
- [X] I ran `brew update-reset && brew update` and retried my command.
- [X] I ran `brew doctor`, fixed as many issues as possible and retried my command.
- [X] I have checked the instructions for [reporting bugs](https://github.com/Homebrew/homebrew-cask#reporting-bugs).
- [X] I made doubly sure this is not a [checksum does not match](https://docs.brew.sh/Common-Issues#cask---checksum-does-not-match) error.
### Description of issue
A 2023-02-07 article from Citrix indicates that they now offer native support of M1/M2 Apple Silicon via a Universal Binary: https://docs.citrix.com/en-us/citrix-workspace-app-for-mac/apple-silicon.html
There are 2x versions for download here:
https://www.citrix.com/downloads/workspace-app/mac/workspace-app-for-mac-native-support-for-silicon-mac.html
including Version: 23.01.0.53 (2301) which matches the latest Intel build available in this Cask: https://www.citrix.com/downloads/workspace-app/mac/workspace-app-for-mac-latest.html
I would update `citrix-workspace.rb` myself, but I could not find a direct download URL counterpart to the Intel installer: https://downloadplugins.citrix.com/Mac/CitrixWorkspaceApp.dmg currently used in the formula.
See this article https://support.citrix.com/article/CTX338523/is-there-a-direct-and-unattended-download-url-for-the-latest-citrix-workspace-app-version
Is there another way to get a direct download URL so the formula can be updated?
### Command that failed
brew install --cask citrix-workspace
### Output of command with `--verbose --debug`
```shell
N/A
```
### Output of `brew doctor` and `brew config`
```shell
❯ brew doctor
Please note that these warnings are just used to help the Homebrew maintainers
with debugging if you file an issue. If everything you use Homebrew for is
working fine: please don't worry or file an issue; just ignore this. Thanks!
Warning: Putting non-prefixed coreutils in your path can cause GMP builds to fail.
❯ brew config
HOMEBREW_VERSION: 4.0.1-58-g82a36d2
ORIGIN: https://github.com/Homebrew/brew
HEAD: 82a36d24fb96129fd0f398dbb5492a16a03244b7
Last commit: 17 hours ago
Core tap origin: https://github.com/Homebrew/homebrew-core
Core tap HEAD: 452e1e125b45d68f8bf37852457bbe34a1e4debf
Core tap last commit: 8 hours ago
Core tap branch: master
Core tap JSON: 18 Feb 09:45 UTC
HOMEBREW_PREFIX: /opt/homebrew
HOMEBREW_CASK_OPTS: []
HOMEBREW_DISPLAY: /private/tmp/com.apple.launchd.NZcDTqOOhE/org.xquartz:0
HOMEBREW_EDITOR: emacsclient -t --alternate-editor=
HOMEBREW_MAKE_JOBS: 12
Homebrew Ruby: 2.6.10 => /System/Library/Frameworks/Ruby.framework/Versions/2.6/usr/bin/ruby
CPU: dodeca-core 64-bit arm_blizzard_avalanche
Clang: 14.0.0 build 1400
Git: 2.39.2 => /opt/homebrew/bin/git
Curl: 7.86.0 => /usr/bin/curl
macOS: 13.2.1-arm64
CLT: 14.2.0.0.1.1668646533
Xcode: 14.2
Rosetta 2: false
```
### Output of `brew tap`
```shell
❯ brew tap
github/gh
homebrew/bundle
homebrew/cask
homebrew/cask-drivers
homebrew/cask-fonts
homebrew/cask-versions
homebrew/core
sticklerm3/pourhouse
teamookla/speedtest
```
| True | citrix-workspace native Apple Silicon support - ### Verification
- [X] I understand that [if I ignore these instructions, my issue may be closed without review](https://github.com/Homebrew/homebrew-cask/blob/master/doc/faq/closing_issues_without_review.md).
- [X] I have retried my command with `--force`.
- [X] I ran `brew update-reset && brew update` and retried my command.
- [X] I ran `brew doctor`, fixed as many issues as possible and retried my command.
- [X] I have checked the instructions for [reporting bugs](https://github.com/Homebrew/homebrew-cask#reporting-bugs).
- [X] I made doubly sure this is not a [checksum does not match](https://docs.brew.sh/Common-Issues#cask---checksum-does-not-match) error.
### Description of issue
A 2023-02-07 article from Citrix indicates that they now offer native support of M1/M2 Apple Silicon via a Universal Binary: https://docs.citrix.com/en-us/citrix-workspace-app-for-mac/apple-silicon.html
There are 2x versions for download here:
https://www.citrix.com/downloads/workspace-app/mac/workspace-app-for-mac-native-support-for-silicon-mac.html
including Version: 23.01.0.53 (2301) which matches the latest Intel build available in this Cask: https://www.citrix.com/downloads/workspace-app/mac/workspace-app-for-mac-latest.html
I would update `citrix-workspace.rb` myself, but I could not find a direct download URL counterpart to the Intel installer: https://downloadplugins.citrix.com/Mac/CitrixWorkspaceApp.dmg currently used in the formula.
See this article https://support.citrix.com/article/CTX338523/is-there-a-direct-and-unattended-download-url-for-the-latest-citrix-workspace-app-version
Is there another way to get a direct download URL so the formula can be updated?
### Command that failed
brew install --cask citrix-workspace
### Output of command with `--verbose --debug`
```shell
N/A
```
### Output of `brew doctor` and `brew config`
```shell
❯ brew doctor
Please note that these warnings are just used to help the Homebrew maintainers
with debugging if you file an issue. If everything you use Homebrew for is
working fine: please don't worry or file an issue; just ignore this. Thanks!
Warning: Putting non-prefixed coreutils in your path can cause GMP builds to fail.
❯ brew config
HOMEBREW_VERSION: 4.0.1-58-g82a36d2
ORIGIN: https://github.com/Homebrew/brew
HEAD: 82a36d24fb96129fd0f398dbb5492a16a03244b7
Last commit: 17 hours ago
Core tap origin: https://github.com/Homebrew/homebrew-core
Core tap HEAD: 452e1e125b45d68f8bf37852457bbe34a1e4debf
Core tap last commit: 8 hours ago
Core tap branch: master
Core tap JSON: 18 Feb 09:45 UTC
HOMEBREW_PREFIX: /opt/homebrew
HOMEBREW_CASK_OPTS: []
HOMEBREW_DISPLAY: /private/tmp/com.apple.launchd.NZcDTqOOhE/org.xquartz:0
HOMEBREW_EDITOR: emacsclient -t --alternate-editor=
HOMEBREW_MAKE_JOBS: 12
Homebrew Ruby: 2.6.10 => /System/Library/Frameworks/Ruby.framework/Versions/2.6/usr/bin/ruby
CPU: dodeca-core 64-bit arm_blizzard_avalanche
Clang: 14.0.0 build 1400
Git: 2.39.2 => /opt/homebrew/bin/git
Curl: 7.86.0 => /usr/bin/curl
macOS: 13.2.1-arm64
CLT: 14.2.0.0.1.1668646533
Xcode: 14.2
Rosetta 2: false
```
### Output of `brew tap`
```shell
❯ brew tap
github/gh
homebrew/bundle
homebrew/cask
homebrew/cask-drivers
homebrew/cask-fonts
homebrew/cask-versions
homebrew/core
sticklerm3/pourhouse
teamookla/speedtest
```
| main | citrix workspace native apple silicon support verification i understand that i have retried my command with force i ran brew update reset brew update and retried my command i ran brew doctor fixed as many issues as possible and retried my command i have checked the instructions for i made doubly sure this is not a error description of issue a article from citrix indicates that they now offer native support of apple silicon via a universal binary there are versions for download here including version which matches the latest intel build available in this cask i would update citrix workspace rb myself but i could not find a direct download url counterpart to the intel installer currently used in the formula see this article is there another way to get a direct download url so the formula can be updated command that failed brew install cask citrix workspace output of command with verbose debug shell n a output of brew doctor and brew config shell ❯ brew doctor please note that these warnings are just used to help the homebrew maintainers with debugging if you file an issue if everything you use homebrew for is working fine please don t worry or file an issue just ignore this thanks warning putting non prefixed coreutils in your path can cause gmp builds to fail ❯ brew config homebrew version origin head last commit hours ago core tap origin core tap head core tap last commit hours ago core tap branch master core tap json feb utc homebrew prefix opt homebrew homebrew cask opts homebrew display private tmp com apple launchd nzcdtqoohe org xquartz homebrew editor emacsclient t alternate editor homebrew make jobs homebrew ruby system library frameworks ruby framework versions usr bin ruby cpu dodeca core bit arm blizzard avalanche clang build git opt homebrew bin git curl usr bin curl macos clt xcode rosetta false output of brew tap shell ❯ brew tap github gh homebrew bundle homebrew cask homebrew cask drivers homebrew cask fonts homebrew cask versions homebrew 
core pourhouse teamookla speedtest | 1 |
5,734 | 30,321,973,544 | IssuesEvent | 2023-07-10 19:59:48 | aws/serverless-application-model | https://api.github.com/repos/aws/serverless-application-model | closed | Immutable AWS::Cognito::UserPool properties are not supported in the SAM translator. | contributors/good-first-issue maintainer/need-response | **Description:**
Immutable AWS::Cognito::UserPool properties are not supported in the SAM translator.
Reference Documentation links:
* https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-cognito-userpool.html#cfn-cognito-userpool-usernameconfiguration
* https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-cognito-userpool.html#cfn-cognito-userpool-accountrecoverysetting
Expected missing keys in property_types in `samtranslator/model/cognito.py`
**Steps to reproduce the issue:**
Use the following template snippet in a deploy
```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  CognitoUserPool:
    Type: AWS::Cognito::UserPool
    Properties:
      UsernameConfiguration:
        CaseSensitive: False
      AccountRecoverySetting:
        RecoveryMechanisms:
          - Name: verified_email
            Priority: 1
```
**Observed result:**
Failed to create the changeset: Waiter ChangeSetCreateComplete failed: Waiter encountered a terminal failure state Status: FAILED.
Transform AWS::Serverless-2016-10-31 failed with: Invalid Serverless Application Specification document. Resource with id [CognitoUserPool] is invalid. property UsernameConfiguration not defined for resource of type AWS::Cognito::UserPool
Transform AWS::Serverless-2016-10-31 failed with: Invalid Serverless Application Specification document. Resource with id [CognitoUserPool] is invalid. property AccountRecoverySetting not defined for resource of type AWS::Cognito::UserPool
**Expected result:**
Successful deploy | True | Immutable AWS::Cognito::UserPool properties are not supported in the SAM translator. - **Description:**
Immutable AWS::Cognito::UserPool properties are not supported in the SAM translator.
Reference Documentation links:
* https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-cognito-userpool.html#cfn-cognito-userpool-usernameconfiguration
* https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-cognito-userpool.html#cfn-cognito-userpool-accountrecoverysetting
Expected missing keys in property_types in `samtranslator/model/cognito.py`
**Steps to reproduce the issue:**
Use the following template snippet in a deploy
```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  CognitoUserPool:
    Type: AWS::Cognito::UserPool
    Properties:
      UsernameConfiguration:
        CaseSensitive: False
      AccountRecoverySetting:
        RecoveryMechanisms:
          - Name: verified_email
            Priority: 1
```
**Observed result:**
Failed to create the changeset: Waiter ChangeSetCreateComplete failed: Waiter encountered a terminal failure state Status: FAILED.
Transform AWS::Serverless-2016-10-31 failed with: Invalid Serverless Application Specification document. Resource with id [CognitoUserPool] is invalid. property UsernameConfiguration not defined for resource of type AWS::Cognito::UserPool
Transform AWS::Serverless-2016-10-31 failed with: Invalid Serverless Application Specification document. Resource with id [CognitoUserPool] is invalid. property AccountRecoverySetting not defined for resource of type AWS::Cognito::UserPool
**Expected result:**
Successful deploy | main | immutable aws cognito userpool properties are not supported in the sam translator description immutable aws cognito userpool properties are not supported in the sam translator reference documentation links expected missing keys in property types in samtranslator model cognito py steps to reproduce the issue use the following template snippet in a deploy awstemplateformatversion transform aws serverless resources cognitouserpool type aws cognito userpool properties usernameconfiguration casesensitive false accountrecoverysetting recoverymechanisms name verified email priority observed result failed to create the changeset waiter changesetcreatecomplete failed waiter encountered a terminal failure state status failed transform aws serverless failed with invalid serverless application specification document resource with id is invalid property usernameconfiguration not defined for resource of type aws cognito userpool transform aws serverless failed with invalid serverless application specification document resource with id is invalid property accountrecoverysetting not defined for resource of type aws cognito userpool expected result successful deploy | 1 |
290,786 | 25,095,674,374 | IssuesEvent | 2022-11-08 09:59:50 | OskarMorel/GORAS_EditeurGrapheProbalistiques | https://api.github.com/repos/OskarMorel/GORAS_EditeurGrapheProbalistiques | closed | US5 - Réenregistrement d'un graphe | redigerTestsAcceptation | ### User story
En tant qu'utilisateur
Je veux enregistrer un graphe ouvert à partir d’un fichier dans un fichier différent
Afin de créer une copie
### Tests d'acceptation
| 1.0 | US5 - Réenregistrement d'un graphe - ### User story
En tant qu'utilisateur
Je veux enregistrer un graphe ouvert à partir d’un fichier dans un fichier différent
Afin de créer une copie
### Tests d'acceptation
| non_main | réenregistrement d un graphe user story en tant qu utilisateur je veux enregistrer un graphe ouvert à partir d’un fichier dans un fichier différent afin de créer une copie tests d acceptation | 0 |
10,891 | 4,838,587,210 | IssuesEvent | 2016-11-09 04:28:47 | docker/docker | https://api.github.com/repos/docker/docker | closed | Dockerfile doesn't COPY all files in project directory | area/builder kind/bug platform/windows | # Issue
**Docker Version**: Docker for Windows 1.12.0-rc3
**OS**: Windows 10 RTM
In my application folder, I have a `.aws` folder, with stub configuration files. The filesystem structure of the project looks like this:
```
+ Project Folder
|--> app.py
|--> .aws
|-----> config
|-----> credentials
+
```
In my `Dockerfile`, I have a `COPY` instruction, to copy the entire `Project Folder` into the Docker image. However, the `.aws` folder isn't copied. I have not specified a `.dockerignore` file, so nothing should be excluded.
Any thoughts on why this is happening?
Cheers,
Trevor Sullivan
| 1.0 | Dockerfile doesn't COPY all files in project directory - # Issue
**Docker Version**: Docker for Windows 1.12.0-rc3
**OS**: Windows 10 RTM
In my application folder, I have a `.aws` folder, with stub configuration files. The filesystem structure of the project looks like this:
```
+ Project Folder
|--> app.py
|--> .aws
|-----> config
|-----> credentials
+
```
In my `Dockerfile`, I have a `COPY` instruction, to copy the entire `Project Folder` into the Docker image. However, the `.aws` folder isn't copied. I have not specified a `.dockerignore` file, so nothing should be excluded.
Any thoughts on why this is happening?
Cheers,
Trevor Sullivan
| non_main | dockerfile doesn t copy all files in project directory issue docker version docker for windows os windows rtm in my application folder i have a aws folder with stub configuration files the filesystem structure of the project looks like this project folder app py aws config credentials in my dockerfile i have a copy instruction to copy the entire project folder into the docker image however the aws folder isn t copied i have not specified a dockerignore file so nothing should be excluded any thoughts on why this is happening cheers trevor sullivan | 0 |
446 | 3,594,097,728 | IssuesEvent | 2016-02-01 22:15:34 | christoff-buerger/racr | https://api.github.com/repos/christoff-buerger/racr | closed | Refactoring of original Petri nets example | maintainability medium | The original Petri nets example included hierarchical nets. For didactic reasons and better modularisation, the non-hierarchical functionality was outsourced as atomic Petri nets. Conceptually, support for hierarchical nets is just an extension of the basic atomic interpreter.
The extracted atomic Petri nets interpreter is implemented according to best practices in the design of _RACR_-based languages; in particular query support functions are used, e.g., `(->foo n)` to query the `foo` child of `n` or `(=att n)` to query the `att` attribute of `n` etc.
The refactoring of the original Petri nets interpreter to an interpreter for hierarchical Petri nets according to these best practices and by reusing the atomic Petri nets implementation still has to be done however. | True | Refactoring of original Petri nets example - The original Petri nets example included hierarchical nets. For didactic reasons and better modularisation, the non-hierarchical functionality was outsourced as atomic Petri nets. Conceptually, support for hierarchical nets is just an extension of the basic atomic interpreter.
The extracted atomic Petri nets interpreter is implemented according to best practices in the design of _RACR_-based languages; in particular query support functions are used, e.g., `(->foo n)` to query the `foo` child of `n` or `(=att n)` to query the `att` attribute of `n` etc.
The refactoring of the original Petri nets interpreter to an interpreter for hierarchical Petri nets according to these best practices and by reusing the atomic Petri nets implementation still has to be done however. | main | refactoring of original petri nets example the original petri nets example included hierarchical nets for didactic reasons and better modularisation the non hierarchical functionality was outsourced as atomic petri nets conceptually support for hierarchical nets is just an extension of the basic atomic interpreter the extracted atomic petri nets interpreter is implemented according to best practices in the design of racr based languages in particular query support functions are used e g foo n to query the foo child of n or att n to query the att attribute of n etc the refactoring of the original petri nets interpreter to an interpreter for hierarchical petri nets according to these best practices and by reusing the atomic petri nets implementation still has to be done however | 1 |
2,532 | 8,657,429,304 | IssuesEvent | 2018-11-27 21:18:12 | Kapeli/Dash-User-Contributions | https://api.github.com/repos/Kapeli/Dash-User-Contributions | closed | Relay Docset maintainer needed | needs maintainer | I no longer have time to maintain this docset and I am looking for additional contributors to assist. My repo is located at https://github.com/epitaphmike/relay-dash. If this is something you are interested in helping with please reach out. Thank you.
| True | Relay Docset maintainer needed - I no longer have time to maintain this docset and I am looking for additional contributors to assist. My repo is located at https://github.com/epitaphmike/relay-dash. If this is something you are interested in helping with please reach out. Thank you.
| main | relay docset maintainer needed i can no longer have time to maintain this docset and i am looking for additional contributors to assist my repo is located at if this is something you are interested in helping with please reach out thank you | 1 |
1,566 | 6,572,261,948 | IssuesEvent | 2017-09-11 00:45:20 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | known_hosts module changes ownership of existing file | affects_2.0 bug_report waiting_on_maintainer | ##### ISSUE TYPE
- Bug Report
##### ANSIBLE VERSION
```
ansible 2.0.1.0
config file = /etc/ansible/ansible.cfg
```
##### OS / ENVIRONMENT
CentOS Linux release 7.2.1511 (Core)
##### SUMMARY
If known_hosts replaces an existing entry with a new key, the module changes the ownership of the file to root.
##### STEPS TO REPRODUCE
These plays replace the known_host when a system is rebuilt, so that man-in-the-middle warnings are properly addressed:
```
- name: get localhost pubkey
shell: cat /etc/ssh/ssh_host_rsa_key.pub
register: local_key
changed_when: false
- name: add pubkey to ansible@ known_hosts
known_hosts:
path: '/home/ansible/.ssh/known_hosts'
name: '{{ inventory_hostname }}'
key: '{{ inventory_hostname }},{{ some_ip_address }} {{ local_key.stdout }}'
delegate_to: localhost
```
##### EXPECTED RESULTS
Key changes if different. File ownership does not change.
##### ACTUAL RESULTS
Key is successfully changed in known_hosts, but file ownership also changes
```
$ sudo ls -alh /home/ansible/.ssh/known_hosts
-rw------- 1 root root 45K Jul 1 14:14 /home/ansible/.ssh/known_hosts
$
```
##### WORKAROUND
add become and become_user to play to match owner of the known_hosts file
```
- name: add pubkey to ansible@ known_hosts
known_hosts:
path: '/home/ansible/.ssh/known_hosts'
name: '{{ inventory_hostname }}'
key: '{{ inventory_hostname }},{{ some_ip_address }} {{ local_key.stdout }}'
become: yes
become_user: ansible
delegate_to: localhost
tags: ssh_keys
```
| True | known_hosts module changes ownership of existing file - ##### ISSUE TYPE
- Bug Report
##### ANSIBLE VERSION
```
ansible 2.0.1.0
config file = /etc/ansible/ansible.cfg
```
##### OS / ENVIRONMENT
CentOS Linux release 7.2.1511 (Core)
##### SUMMARY
If known_hosts replaces an existing entry with a new key, the module changes the ownership of the file to root.
##### STEPS TO REPRODUCE
These plays replace the known_host when a system is rebuilt, so that man-in-the-middle warnings are properly addressed:
```
- name: get localhost pubkey
shell: cat /etc/ssh/ssh_host_rsa_key.pub
register: local_key
changed_when: false
- name: add pubkey to ansible@ known_hosts
known_hosts:
path: '/home/ansible/.ssh/known_hosts'
name: '{{ inventory_hostname }}'
key: '{{ inventory_hostname }},{{ some_ip_address }} {{ local_key.stdout }}'
delegate_to: localhost
```
##### EXPECTED RESULTS
Key changes if different. File ownership does not change.
##### ACTUAL RESULTS
Key is successfully changed in known_hosts, but file ownership also changes
```
$ sudo ls -alh /home/ansible/.ssh/known_hosts
-rw------- 1 root root 45K Jul 1 14:14 /home/ansible/.ssh/known_hosts
$
```
##### WORKAROUND
add become and become_user to play to match owner of the known_hosts file
```
- name: add pubkey to ansible@ known_hosts
known_hosts:
path: '/home/ansible/.ssh/known_hosts'
name: '{{ inventory_hostname }}'
key: '{{ inventory_hostname }},{{ some_ip_address }} {{ local_key.stdout }}'
become: yes
become_user: ansible
delegate_to: localhost
tags: ssh_keys
```
| main | known hosts module changes ownership of existing file issue type bug report ansible version ansible config file etc ansible ansible cfg os environment centos linux release core summary if known hosts replaces an existing entry with a new key the module changes the ownership of the file to root steps to reproduce these plays replace the known host when a system is rebuilt so that man in the middle warnings are properly addressed name get localhost pubkey shell cat etc ssh ssh host rsa key pub register local key changed when false name add pubkey to ansible known hosts known hosts path home ansible ssh known hosts name inventory hostname key inventory hostname some ip address local key stdout delegate to localhost expected results key changes if different file ownership do not change actual results key is successfully changed in known hosts but file ownership also change sudo ls alh home ansible ssh known hosts rw root root jul home ansible ssh known hosts workaround add become and become user to play to match owner of the known hosts file name add pubkey to ansible known hosts known hosts path home ansible ssh known hosts name inventory hostname key inventory hostname some ip address local key stdout become yes become user ansible delegate to localhost tags ssh keys | 1 |
352,895 | 25,087,705,333 | IssuesEvent | 2022-11-08 02:01:55 | fga-eps-mds/2022.2-Amis-Doc | https://api.github.com/repos/fga-eps-mds/2022.2-Amis-Doc | closed | Realizar reunião do kick-off com cliente | documentation EPS | **Descrição:**
Realizar reunião com o cliente buscando entender o projeto, as dificuldades e os problemas que devem ser resolvidos por este projeto.
| 1.0 | Realizar reunião do kick-off com cliente - **Descrição:**
Realizar reunião com o cliente buscando entender o projeto, as dificuldades e os problemas que devem ser resolvidos por este projeto.
| non_main | realizar reunião do kick off com cliente descrição realizar reunião com o cliente buscando entender o projeto as dificuldades e os problemas que devem ser resolvidos por este projeto | 0 |
4,630 | 23,980,996,588 | IssuesEvent | 2022-09-13 15:02:40 | exercism/python | https://api.github.com/repos/exercism/python | closed | [New Concept Exercise] : imports | maintainer action required❕ on hold ✋🏽 | This issue describes how to implement the `imports` concept exercise for the Python track, which should explain how & why it is useful to `import` names (libraries, modules, classes, functions, etc.) in Python. We're naming the concept "imports" to avoid a file name clash with the `import` keyword.
## Getting started
**Please please please read the docs before starting.** Posting PRs without reading these docs will be a lot more frustrating for you during the review cycle, and exhaust Exercism's maintainers' time. So, before diving into the implementation, please read up on the following documents:
- [The features of v3](https://github.com/exercism/v3/blob/master/docs/concept-exercises.md).
- [Rationale for v3](https://github.com/exercism/v3/blob/master/docs/rationale-for-v3.md).
- [What are concept exercise and how they are structured?](https://github.com/exercism/v3/blob/master/docs/features-of-v3.md)
Please also watch the following video:
- [The Anatomy of a Concept Exercise](https://www.youtube.com/watch?v=gkbBqd7hPrA).
## Goal
This concept exercise should convey a basic understanding and usage of `import` and `import from` in a Python program. Additionally, the student should learn how to employ the [as](https://yawpitchroll.com/posts/the-35-words-you-need-to-python/#as) keyword for _**aliasing**_ (re-naming) in an `import` context.
## Learning objectives
- Use the `import` keyword to import an entire Python library or module for use in a program
- use the `as` keyword to refer to an imported module by some other (_aliased_) name
- use the `from` keyword to import only specific classes/functions/names from a library or module
- use the `as` keyword with the `from` keyword to alias or re-name the specific classes/functions/names imported from a library or module.
- be aware of `import` conventions as outlined in [PEP8](https://www.python.org/dev/peps/pep-0008/#imports)
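The objectives above can be sketched in a short, self-contained example. The standard-library modules and the alias names used here (`stats`, `fold`) are arbitrary illustrations chosen for this sketch, not part of the exercise itself:

```python
# PEP 8 convention: all imports grouped at the top of the file.
import math                            # import an entire module
import statistics as stats             # import a module under an alias
from fractions import Fraction         # import one specific name from a module
from functools import reduce as fold   # import one specific name under an alias

print(math.sqrt(16))                           # 4.0
print(stats.mean([1.0, 2.0, 3.0]))             # 2.0
print(Fraction(1, 2) + Fraction(1, 3))         # 5/6
print(fold(lambda x, y: x + y, [1, 2, 3, 4]))  # 10
```

Aliasing only changes the name an imported module or object is bound to in the importing namespace; `stats.mean` and `statistics.mean` refer to the same function.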
## Out of scope
- The `importlib` module, and the customization of import behavior.
## Concepts
- `import` statement
- `import from` statement
- `import ... as` statement
## Prerequisites
- `basics`
## Resources to refer to
- [`import` (python docs)](https://docs.python.org/3/reference/simple_stmts.html#the-import-statement)
- [yawpitchroll blog: **import** (_The 35 words you need to Python_)](https://yawpitchroll.com/posts/the-35-words-you-need-to-python/#import)
- [yawpitchroll blog: **as** (_The 35 words you need to Python_)](https://yawpitchroll.com/posts/the-35-words-you-need-to-python/#as)
- [Real Python: the Import keywords](https://realpython.com/python-keywords/#import-keywords-import-from-as)
- [PEP-8: imports](https://www.python.org/dev/peps/pep-0008/#imports)
### Hints
- Hints should refer to one or more of the links above, or analogous links from trusted sources or the Python docs.
## Concept Description
(_a variant of this can be used for the `v3/languages/python/concepts/<concept>/about.md` doc and this exercise's `introduction.md` doc._)
_**Concept Description Needs to Be Filled In Here/Written**_
_Some "extras" that we might want to include as notes in the concept description, or as links in `links.json`:_
- Considerations and conventions when importing a large number of libraries.
- Specific conventions used in large frameworks and projects such as [django](https://docs.djangoproject.com/en/dev/internals/contributing/writing-code/coding-style/#imports) or [pandas](https://stackoverflow.com/questions/35697404/why-its-a-convention-to-import-pandas-as-pd)
- More details on Python's [import system](https://docs.python.org/3/reference/import.html) and `importlib`
- [Modules](https://docs.python.org/3/tutorial/modules.html) and the [module search path](https://docs.python.org/3/tutorial/modules.html#the-module-search-path)
## Representer
No changes required.
## Analyzer
No changes required.
## Implementing
The general Python track concept exercise implementation guide can be found [here](https://github.com/exercism/v3/blob/master/languages/python/reference/implementing-a-concept-exercise.md).
Tests should be written using `unittest.TestCase` and the test file named `imports_test.py`.
Code in the `.meta/example.py` file should **only use syntax & concepts introduced in this exercise or one of its prerequisites.** Please do not use comprehensions, generator expressions, or other syntax not previously covered. Please also follow [PEP8](https://www.python.org/dev/peps/pep-0008/) guidelines.
## Help
If you have any questions while implementing the exercise, please post the questions as comments in this issue, or contact one of the maintainers on our Slack channel.
## Edits
- Added additional links for `as` and `from` keywords. Re-phrased goals and objectives for clarity. @BethanyG | True | [New Concept Exercise] : imports - This issue describes how to implement the `imports` concept exercise for the Python track, which should explain how & why it is useful to `import` names (libraries, modules, classes, functions, etc.) in Python. We're naming the concept "imports" to avoid a file name clash with the `import` keyword.
## Getting started
**Please please please read the docs before starting.** Posting PRs without reading these docs will be a lot more frustrating for you during the review cycle, and exhaust Exercism's maintainers' time. So, before diving into the implementation, please read up on the following documents:
- [The features of v3](https://github.com/exercism/v3/blob/master/docs/concept-exercises.md).
- [Rationale for v3](https://github.com/exercism/v3/blob/master/docs/rationale-for-v3.md).
- [What are concept exercises and how are they structured?](https://github.com/exercism/v3/blob/master/docs/features-of-v3.md)
Please also watch the following video:
- [The Anatomy of a Concept Exercise](https://www.youtube.com/watch?v=gkbBqd7hPrA).
## Goal
This concept exercise should convey a basic understanding and usage of `import` and `import from` in a Python program. Additionally, the student should learn how to employ the [as](https://yawpitchroll.com/posts/the-35-words-you-need-to-python/#as) keyword for _**aliasing**_ (re-naming) in an `import` context.
## Learning objectives
- Use the `import` keyword to import an entire Python library or module for use in a program
- Use the `as` keyword to refer to an imported module by some other (_aliased_) name
- Use the `from` keyword to import only specific classes/functions/names from a library or module
- Use the `as` keyword with the `from` keyword to alias or re-name the specific classes/functions/names imported from a library or module
- Be aware of `import` conventions as outlined in [PEP8](https://www.python.org/dev/peps/pep-0008/#imports)
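A short sketch of the objectives above (the stdlib modules and alias names are only illustrative, not part of the exercise):

```python
# Import a whole module, with and without an alias.
import math
import statistics as stats

# Import specific names, with and without an alias.
from math import sqrt
from statistics import mean as average

# Each statement binds a name in the importing namespace.
print(math.pi)                   # qualified access via the module name
print(stats.median([1, 2, 3]))   # qualified access via the alias
print(sqrt(16))                  # direct access to an imported name
print(average([2, 4, 6]))        # direct access via an alias
```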
## Out of scope
- The `importlib` module, and the customization of import behavior.
## Concepts
- `import` statement
- `import from` statement
- `import ... as` statement
## Prerequisites
- `basics`
## Resources to refer to
- [`import` (python docs)](https://docs.python.org/3/reference/simple_stmts.html#the-import-statement)
- [yawpitchroll blog: **import** (_The 35 words you need to Python_)](https://yawpitchroll.com/posts/the-35-words-you-need-to-python/#import)
- [yawpitchroll blog: **as** (_The 35 words you need to Python_)](https://yawpitchroll.com/posts/the-35-words-you-need-to-python/#as)
- [Real Python: the Import keywords](https://realpython.com/python-keywords/#import-keywords-import-from-as)
- [PEP-8: imports](https://www.python.org/dev/peps/pep-0008/#imports)
### Hints
- Hints should refer to one or more of the links above, or analogous links from trusted sources or the Python docs.
## Concept Description
(_a variant of this can be used for the `v3/languages/python/concepts/<concept>/about.md` doc and this exercise's `introduction.md` doc._)
_**Concept Description Needs to Be Filled In Here/Written**_
_Some "extras" that we might want to include as notes in the concept description, or as links in `links.json`:_
- Considerations and conventions when importing a large number of libraries.
- Specific conventions used in large frameworks and projects such as [django](https://docs.djangoproject.com/en/dev/internals/contributing/writing-code/coding-style/#imports) or [pandas](https://stackoverflow.com/questions/35697404/why-its-a-convention-to-import-pandas-as-pd)
- More details on Python's [import system](https://docs.python.org/3/reference/import.html) and `importlib`
- [Modules](https://docs.python.org/3/tutorial/modules.html) and the [module search path](https://docs.python.org/3/tutorial/modules.html#the-module-search-path)
## Representer
No changes required.
## Analyzer
No changes required.
## Implementing
The general Python track concept exercise implementation guide can be found [here](https://github.com/exercism/v3/blob/master/languages/python/reference/implementing-a-concept-exercise.md).
Tests should be written using `unittest.TestCase` and the test file named `imports_test.py`.
Code in the `.meta/example.py` file should **only use syntax & concepts introduced in this exercise or one of its prerequisites.** Please do not use comprehensions, generator expressions, or other syntax not previously covered. Please also follow [PEP8](https://www.python.org/dev/peps/pep-0008/) guidelines.
## Help
If you have any questions while implementing the exercise, please post the questions as comments in this issue, or contact one of the maintainers on our Slack channel.
## Edits
- Added additional links for `as` and `from` keywords. Re-phrased goals and objectives for clarity. @BethanyG | main | imports this issue describes how to implement the imports concept exercise for the python track which should explain how why it is useful to import names libraries modules classes functions etc in python we re naming the concept imports to avoid a file name clash with the import keyword getting started please please please read the docs before starting posting prs without reading these docs will be a lot more frustrating for you during the review cycle and exhaust exercism s maintainers time so before diving into the implementation please read up on the following documents please also watch the following video goal this concept exercise should convey a basic understanding and usage of import and import from in a python program additionally the student should learn how to employ the keyword for aliasing re naming in an import context learning objectives use the import keyword to import an entire python library or module for use in a program use the as keyword to refer to an imported module by some other aliased name use the from keyword to import only specific classes functions names from a library or module use the as keyword with the from keyword to alias or re name the specific classes functions names imported from a library or module be aware of import conventions as outlined in out of scope the importlib module and the customization of import behavior concepts import statement import from statement import as statement prerequisites basics resources to refer to hints hints should refer to one or more of the links above or analogous links from trusted sources or the python docs concept description a variant of this can be used for the languages python concepts about md doc and this exercises introduction md doc concept description needs to be filled in here written some extras that we might want to include as notes in the concept description or 
as links in links json considerations and conventions when importing a large amount of libraries specific conventions used in large frameworks and projects such as or more details on pythons and importlib and and the representer no changes required analyzer no changes required implementing the general python track concept exercise implantation guide can be found tests should be written using unittest testcase and the test file named imports test py code in the meta example py file should only use syntax concepts introduced in this exercise or one of its prerequisites please do not use comprehensions generator expressions or other syntax not previously covered please also follow guidelines help if you have any questions while implementing the exercise please post the questions as comments in this issue or contact one of the maintainers on our slack channel edits added additional links for as and from keywords re phrased goals and objectives for clarity bethanyg | 1 |
2,029 | 6,778,744,923 | IssuesEvent | 2017-10-28 14:48:26 | dgets/RecScan | https://api.github.com/repos/dgets/RecScan | opened | Shave down try/catch blocks | enhancement maintainability | Try to keep the try/catch blocks a little slimmer in order to more concisely pinpoint what particular line of code is really effin' it all up, without having to resort to using the (slow) debugger. | True | Shave down try/catch blocks - Try to keep the try/catch blocks a little slimmer in order to more concisely pinpoint what particular line of code is really effin' it all up, without having to resort to using the (slow) debugger. | main | shave down try catch blocks try to keep the try catch blocks a little slimmer in order to more concisely pinpoint what particular line of code is really effin it all up without having to resort to using the slow debugger | 1 |
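The maintainability point in the issue above — shrinking a broad try/catch so the failing statement is obvious — can be sketched in Python (the function names here are hypothetical; the original project is Java/Android, but the idea is language-agnostic):

```python
def parse_doubled_broad(path):
    """Broad block: three different failures land in one handler,
    so the handler can't say which step went wrong."""
    try:
        with open(path) as f:
            raw = f.read()
        return int(raw.strip()) * 2
    except (OSError, ValueError):
        return None


def parse_doubled_slim(path):
    """Slim blocks: each risky statement has its own handler,
    pinpointing the failing step without a debugger."""
    try:
        with open(path) as f:
            raw = f.read()
    except OSError:
        return None  # failure clearly in file I/O
    try:
        number = int(raw.strip())
    except ValueError:
        return None  # failure clearly in parsing
    return number * 2
```

Both functions behave the same, but in the slim version a failure points directly at the offending statement.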
3,110 | 11,872,559,668 | IssuesEvent | 2020-03-26 15:59:29 | kensho-technologies/graphql-compiler | https://api.github.com/repos/kensho-technologies/graphql-compiler | closed | End to end test for MSSQL folded output with postprocessing | maintainer quality-of-life | An end-to-end test for the MSSQL fold post-processing would have caught the bug that is being resolved by https://github.com/kensho-technologies/graphql-compiler/pull/779.
We should add this test as soon as practical, to avoid having similar issues in the future. | True | End to end test for MSSQL folded output with postprocessing - An end-to-end test for the MSSQL fold post-processing would have caught the bug that is being resolved by https://github.com/kensho-technologies/graphql-compiler/pull/779.
We should add this test as soon as practical, to avoid having similar issues in the future. | main | end to end test for mssql folded output with postprocessing an end to end test for the mssql fold post processing would have caught the bug that is being resolved by we should add this test as soon as practical to avoid having similar issues in the future | 1 |
15,194 | 2,850,247,489 | IssuesEvent | 2015-05-31 12:06:38 | damonkohler/sl4a | https://api.github.com/repos/damonkohler/sl4a | opened | beanshell interpreter crashes | auto-migrated Priority-Medium Type-Defect | _From @GoogleCodeExporter on May 31, 2015 11:31_
```
Samsung Galaxy S III SGH-T999
Cyanogenmod 10
What steps will reproduce the problem?
1. Get the beanshell addon.
2. Tap Install.
3. In SL4A, start a beanshell interpreter.
What is the expected output? What do you see instead?
Expected: Beanshell starts.
Observed: Beanshell crashes, failing to find the Interpreter class.
What version of the product are you using? On what operating system?
Latest, on Android 4.2.2 Jelly Bean.
```
Original issue reported on code.google.com by `andrew.p...@gmail.com` on 12 Apr 2013 at 12:09
_Copied from original issue: damonkohler/android-scripting#680_ | 1.0 | beanshell interpreter crashes - _From @GoogleCodeExporter on May 31, 2015 11:31_
```
Samsung Galaxy S III SGH-T999
Cyanogenmod 10
What steps will reproduce the problem?
1. Get the beanshell addon.
2. Tap Install.
3. In SL4A, start a beanshell interpreter.
What is the expected output? What do you see instead?
Expected: Beanshell starts.
Observed: Beanshell crashes, failing to find the Interpreter class.
What version of the product are you using? On what operating system?
Latest, on Android 4.2.2 Jelly Bean.
```
Original issue reported on code.google.com by `andrew.p...@gmail.com` on 12 Apr 2013 at 12:09
_Copied from original issue: damonkohler/android-scripting#680_ | non_main | beanshell interpreter crashes from googlecodeexporter on may samsung galaxy s iii sgh cyanogenmod what steps will reproduce the problem get the beanshell addon tap install in start a beanshell interpreter what is the expected output what do you see instead expected beanshell starts observed beanshell crashes failing to find the interpreter class what version of the product are you using on what operating system latest on android jelly bean original issue reported on code google com by andrew p gmail com on apr at copied from original issue damonkohler android scripting | 0 |
1,830 | 6,577,356,695 | IssuesEvent | 2017-09-12 00:20:43 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | nxos_vlan returns "Command does not support JSON output" | affects_2.1 bug_report networking waiting_on_maintainer |
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
nxos_vlan
##### ANSIBLE VERSION
```
vagrant@precise32:/vagrant$ ansible --version
ansible 2.1.0.0
config file = /vagrant/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
ansible.cfg
ask_pass = False
gathering = explicit
roles_path = /vagrant/roles/
##### OS / ENVIRONMENT
Running from vagrant precise32
Managing Cisco Nexus 3172 Chassis; System version: 6.0(2)U5(2)
##### SUMMARY
nxos_vlan module returns "Command does not support JSON output"; however, the vlans are added to the device.
##### STEPS TO REPRODUCE
Example role
```
- name: Vlan configuration
nxos_vlan:
admin_state: "{{ item.admin_state | default(omit) }}"
host: "{{ inventory_hostname }}"
name: "{{ item.name | default(omit) }}"
password: "{{ cisco.nexus.password }}"
port: "{{ item.port | default(omit) }}"
provider: "{{ provider | default(omit) }}"
ssh_keyfile: "{{ ssh_keyfile | default(omit) }}"
state: "{{ item.state | default(omit) }}"
transport: "{{ transport | default('cli') }}"
use_ssl: "{{ use_ssl | default(omit) }}"
username: "{{ cisco.nexus.username }}"
vlan_id: "{{ item.vlan_id | default(omit) }}"
vlan_range: "{{ item.vlan_range | default(omit) }}"
vlan_state: "{{ item.vlan_state | default(omit) }}"
with_items: "{{ vlans }}"
```
Example group_vars:
```
vlans:
- vlan_id: 500
name: clbv2_vm_mgmt
state: present
- vlan_id: 600
name: clbv2_vm_snet
state: present
```
##### EXPECTED RESULTS
I expected the vlans to be added and the module to return 'changed'. Subsequent runs return 'ok'
##### ACTUAL RESULTS
```
vagrant@precise32:/vagrant$ ansible-playbook vlan_test.yml -i inventory/lab
PLAY [all] *********************************************************************
TASK [vlan : Include nxos vlan tasks] ******************************************
included: /vagrant/roles/vlan/tasks/nxos.yml for 10.127.49.31, 10.127.49.32
TASK [vlan : Vlan configuration] ***********************************************
failed: [10.127.49.32] (item={u'state': u'present', u'name': u'clbv2_vm_mgmt', u'vlan_id': 500}) => {"command": "show vlan | json", "failed": true, "item": {"name": "clbv2_vm_mgmt", "state": "present", "vlan_id": 500}, "msg": "Command does not support JSON output"}
failed: [10.127.49.31] (item={u'state': u'present', u'name': u'clbv2_vm_mgmt', u'vlan_id': 500}) => {"command": "show vlan | json", "failed": true, "item": {"name": "clbv2_vm_mgmt", "state": "present", "vlan_id": 500}, "msg": "Command does not support JSON output"}
failed: [10.127.49.32] (item={u'state': u'present', u'name': u'clbv2_vm_snet', u'vlan_id': 600}) => {"command": "show vlan | json", "failed": true, "item": {"name": "clbv2_vm_snet", "state": "present", "vlan_id": 600}, "msg": "Command does not support JSON output"}
changed: [10.127.49.31] => (item={u'state': u'present', u'name': u'clbv2_vm_snet', u'vlan_id': 600})
to retry, use: --limit @vlan_test.retry
PLAY RECAP *********************************************************************
10.127.49.31 : ok=1 changed=0 unreachable=0 failed=1
10.127.49.32 : ok=1 changed=0 unreachable=0 failed=1
vagrant@precise32:/vagrant$ ansible-playbook vlan_test.yml -i inventory/lab
PLAY [all] *********************************************************************
TASK [vlan : Include nxos vlan tasks] ******************************************
included: /vagrant/roles/vlan/tasks/nxos.yml for 10.127.49.31, 10.127.49.32
TASK [vlan : Vlan configuration] ***********************************************
ok: [10.127.49.32] => (item={u'state': u'present', u'name': u'clbv2_vm_mgmt', u'vlan_id': 500})
ok: [10.127.49.31] => (item={u'state': u'present', u'name': u'clbv2_vm_mgmt', u'vlan_id': 500})
ok: [10.127.49.32] => (item={u'state': u'present', u'name': u'clbv2_vm_snet', u'vlan_id': 600})
ok: [10.127.49.31] => (item={u'state': u'present', u'name': u'clbv2_vm_snet', u'vlan_id': 600})
PLAY RECAP *********************************************************************
10.127.49.31 : ok=2 changed=0 unreachable=0 failed=0
10.127.49.32 : ok=2 changed=0 unreachable=0 failed=0
```
| True | nxos_vlan returns "Command does not support JSON output" -
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
nxos_vlan
##### ANSIBLE VERSION
```
vagrant@precise32:/vagrant$ ansible --version
ansible 2.1.0.0
config file = /vagrant/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
ansible.cfg
ask_pass = False
gathering = explicit
roles_path = /vagrant/roles/
##### OS / ENVIRONMENT
Running from vagrant precise32
Managing Cisco Nexus 3172 Chassis; System version: 6.0(2)U5(2)
##### SUMMARY
nxos_vlan module returns "Command does not support JSON output"; however, the vlans are added to the device.
##### STEPS TO REPRODUCE
Example role
```
- name: Vlan configuration
nxos_vlan:
admin_state: "{{ item.admin_state | default(omit) }}"
host: "{{ inventory_hostname }}"
name: "{{ item.name | default(omit) }}"
password: "{{ cisco.nexus.password }}"
port: "{{ item.port | default(omit) }}"
provider: "{{ provider | default(omit) }}"
ssh_keyfile: "{{ ssh_keyfile | default(omit) }}"
state: "{{ item.state | default(omit) }}"
transport: "{{ transport | default('cli') }}"
use_ssl: "{{ use_ssl | default(omit) }}"
username: "{{ cisco.nexus.username }}"
vlan_id: "{{ item.vlan_id | default(omit) }}"
vlan_range: "{{ item.vlan_range | default(omit) }}"
vlan_state: "{{ item.vlan_state | default(omit) }}"
with_items: "{{ vlans }}"
```
Example group_vars:
```
vlans:
- vlan_id: 500
name: clbv2_vm_mgmt
state: present
- vlan_id: 600
name: clbv2_vm_snet
state: present
```
##### EXPECTED RESULTS
I expected the vlans to be added and the module to return 'changed'. Subsequent runs return 'ok'
##### ACTUAL RESULTS
```
vagrant@precise32:/vagrant$ ansible-playbook vlan_test.yml -i inventory/lab
PLAY [all] *********************************************************************
TASK [vlan : Include nxos vlan tasks] ******************************************
included: /vagrant/roles/vlan/tasks/nxos.yml for 10.127.49.31, 10.127.49.32
TASK [vlan : Vlan configuration] ***********************************************
failed: [10.127.49.32] (item={u'state': u'present', u'name': u'clbv2_vm_mgmt', u'vlan_id': 500}) => {"command": "show vlan | json", "failed": true, "item": {"name": "clbv2_vm_mgmt", "state": "present", "vlan_id": 500}, "msg": "Command does not support JSON output"}
failed: [10.127.49.31] (item={u'state': u'present', u'name': u'clbv2_vm_mgmt', u'vlan_id': 500}) => {"command": "show vlan | json", "failed": true, "item": {"name": "clbv2_vm_mgmt", "state": "present", "vlan_id": 500}, "msg": "Command does not support JSON output"}
failed: [10.127.49.32] (item={u'state': u'present', u'name': u'clbv2_vm_snet', u'vlan_id': 600}) => {"command": "show vlan | json", "failed": true, "item": {"name": "clbv2_vm_snet", "state": "present", "vlan_id": 600}, "msg": "Command does not support JSON output"}
changed: [10.127.49.31] => (item={u'state': u'present', u'name': u'clbv2_vm_snet', u'vlan_id': 600})
to retry, use: --limit @vlan_test.retry
PLAY RECAP *********************************************************************
10.127.49.31 : ok=1 changed=0 unreachable=0 failed=1
10.127.49.32 : ok=1 changed=0 unreachable=0 failed=1
vagrant@precise32:/vagrant$ ansible-playbook vlan_test.yml -i inventory/lab
PLAY [all] *********************************************************************
TASK [vlan : Include nxos vlan tasks] ******************************************
included: /vagrant/roles/vlan/tasks/nxos.yml for 10.127.49.31, 10.127.49.32
TASK [vlan : Vlan configuration] ***********************************************
ok: [10.127.49.32] => (item={u'state': u'present', u'name': u'clbv2_vm_mgmt', u'vlan_id': 500})
ok: [10.127.49.31] => (item={u'state': u'present', u'name': u'clbv2_vm_mgmt', u'vlan_id': 500})
ok: [10.127.49.32] => (item={u'state': u'present', u'name': u'clbv2_vm_snet', u'vlan_id': 600})
ok: [10.127.49.31] => (item={u'state': u'present', u'name': u'clbv2_vm_snet', u'vlan_id': 600})
PLAY RECAP *********************************************************************
10.127.49.31 : ok=2 changed=0 unreachable=0 failed=0
10.127.49.32 : ok=2 changed=0 unreachable=0 failed=0
```
| main | nxos vlan returns command does not support json output issue type bug report component name nxos vlan ansible version vagrant vagrant ansible version ansible config file vagrant ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables ansible cfg ask pass false gathering explicit roles path vagrant roles os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific running from vagrant managing cisco nexus chassis system version summary nxos vlan module returns command does not support json output however the vlans are added to the device steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used example role name vlan configuration nxos vlan admin state item admin state default omit host inventory hostname name item name default omit password cisco nexus password port item port default omit provider provider default omit ssh keyfile ssh keyfile default omit state item state default omit transport transport default cli use ssl use ssl default omit username cisco nexus username vlan id item vlan id default omit vlan range item vlan range default omit vlan state item vlan state default omit with items vlans example group vars vlans vlan id name vm mgmt state present vlan id name vm snet state present expected results i expected the vlans to be added and the module to return changed subsequent runs return ok actual results vagrant vagrant ansible playbook vlan test yml i inventory lab play task included vagrant roles vlan tasks nxos yml for task failed item u state u present u name u vm mgmt u vlan id command show vlan json failed true item name vm mgmt state present vlan id msg command does not support json output failed item u state u present u name u vm mgmt u vlan id command show vlan 
json failed true item name vm mgmt state present vlan id msg command does not support json output failed item u state u present u name u vm snet u vlan id command show vlan json failed true item name vm snet state present vlan id msg command does not support json output changed item u state u present u name u vm snet u vlan id to retry use limit vlan test retry play recap ok changed unreachable failed ok changed unreachable failed vagrant vagrant ansible playbook vlan test yml i inventory lab play task included vagrant roles vlan tasks nxos yml for task ok item u state u present u name u vm mgmt u vlan id ok item u state u present u name u vm mgmt u vlan id ok item u state u present u name u vm snet u vlan id ok item u state u present u name u vm snet u vlan id play recap ok changed unreachable failed ok changed unreachable failed | 1 |
97,169 | 8,650,547,890 | IssuesEvent | 2018-11-26 22:57:19 | rancher/rancher | https://api.github.com/repos/rancher/rancher | closed | Panic seen in logs when provisioning cluster. | kind/bug-qa status/resolved status/to-test version/2.0 | Rancher server version - v2.1.2-rc13
Steps to reproduce the problem:
Provision a 1 node DO cluster.
Cluster provisioning succeeded, but the following panic is seen in the logs:
```
2018/11/22 00:32:47 [INFO] cluster [c-5txzx] provisioning: [worker] Successfully started [rke-log-linker] container on host [<ip>]
2018/11/22 00:32:49 [INFO] 2018/11/22 00:32:49 http: panic serving <ip>:59674: runtime error: invalid memory address or nil pointer dereference
2018/11/22 00:32:49 [INFO] goroutine 21496615 [running]:
2018/11/22 00:32:49 [INFO] net/http.(*conn).serve.func1(0xc01f5a1a40)
2018/11/22 00:32:49 [INFO] /usr/local/go/src/net/http/server.go:1746 +0xd0
2018/11/22 00:32:49 [INFO] panic(0x4590140, 0xba68cd0)
2018/11/22 00:32:49 [INFO] /usr/local/go/src/runtime/panic.go:513 +0x1b9
2018/11/22 00:32:49 [INFO] github.com/rancher/rancher/pkg/api/store/cluster.(*Store).ByID(0xc007f6ab40, 0xc019f39b00, 0xc0095fddc0, 0xc0223db58d, 0x7, 0x4fd9aa5, 0xc01acd1768, 0x3938515)
2018/11/22 00:32:49 [INFO] /go/src/github.com/rancher/rancher/pkg/api/store/cluster/cluster_store.go:76 +0xc2
2018/11/22 00:32:49 [INFO] github.com/rancher/rancher/vendor/github.com/rancher/norman/store/wrapper.(*StoreWrapper).ByID(0xc0083266e0, 0xc019f39b00, 0xc0095fddc0, 0xc0223db58d, 0x7, 0xc00bd4c060, 0x4fd9aa5, 0xa3f552)
2018/11/22 00:32:49 [INFO] /go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/store/wrapper/wrapper.go:24 +0x68
2018/11/22 00:32:49 [INFO] github.com/rancher/rancher/vendor/github.com/rancher/norman/api/handler.ListHandler(0xc019f39b00, 0x51c5910, 0xc0095fddc0, 0x0)
2018/11/22 00:32:49 [INFO] /go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/api/handler/list.go:28 +0xa2
2018/11/22 00:32:49 [INFO] github.com/rancher/rancher/vendor/github.com/rancher/norman/api.(*Server).handle(0xc0004bafd0, 0x81109a0, 0xc05e91da40, 0xc017562f00, 0xc007e3d700, 0xc0051a42d0, 0xc01acd1908)
2018/11/22 00:32:49 [INFO] /go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/api/server.go:240 +0x2a2
2018/11/22 00:32:49 [INFO] github.com/rancher/rancher/vendor/github.com/rancher/norman/api.(*Server).ServeHTTP(0xc0004bafd0, 0x81109a0, 0xc05e91da40, 0xc017562f00)
2018/11/22 00:32:49 [INFO] /go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/api/server.go:171 +0x49
2018/11/22 00:32:49 [INFO] github.com/rancher/rancher/vendor/github.com/gorilla/mux.(*Router).ServeHTTP(0xc00076be30, 0x81109a0, 0xc05e91da40, 0xc017562f00)
2018/11/22 00:32:49 [INFO] /go/src/github.com/rancher/rancher/vendor/github.com/gorilla/mux/mux.go:159 +0xf1
2018/11/22 00:32:49 [INFO] github.com/rancher/rancher/pkg/filter.authHandler.ServeHTTP(0x80d30e0, 0xc00b322b40, 0x80c4fc0, 0xc00076be30, 0x0, 0x81109a0, 0xc05e91da40, 0xc017562d00)
2018/11/22 00:32:49 [INFO] /go/src/github.com/rancher/rancher/pkg/filter/filter.go:92 +0x2e5
2018/11/22 00:32:49 [INFO] github.com/rancher/rancher/vendor/github.com/gorilla/mux.(*Router).ServeHTTP(0xc00076bdc0, 0x81109a0, 0xc05e91da40, 0xc017562d00)
2018/11/22 00:32:49 [INFO] /go/src/github.com/rancher/rancher/vendor/github.com/gorilla/mux/mux.go:159 +0xf1
2018/11/22 00:32:49 [INFO] github.com/rancher/rancher/pkg/dynamiclistener.(*Server).cacheIPHandler.func1(0x81109a0, 0xc05e91da40, 0xc017562b00)
2018/11/22 00:32:49 [INFO] /go/src/github.com/rancher/rancher/pkg/dynamiclistener/server.go:382 +0x101
2018/11/22 00:32:49 [INFO] net/http.HandlerFunc.ServeHTTP(0xc008dc8b40, 0x81109a0, 0xc05e91da40, 0xc017562b00)
2018/11/22 00:32:49 [INFO] /usr/local/go/src/net/http/server.go:1964 +0x44
2018/11/22 00:32:49 [INFO] net/http.serverHandler.ServeHTTP(0xc0101729c0, 0x81109a0, 0xc05e91da40, 0xc017562b00)
2018/11/22 00:32:49 [INFO] /usr/local/go/src/net/http/server.go:2741 +0xab
2018/11/22 00:32:49 [INFO] net/http.(*conn).serve(0xc01f5a1a40, 0x8119a60, 0xc0554aca80)
2018/11/22 00:32:49 [INFO] /usr/local/go/src/net/http/server.go:1847 +0x646
2018/11/22 00:32:49 [INFO] created by net/http.(*Server).Serve
2018/11/22 00:32:49 [INFO] /usr/local/go/src/net/http/server.go:2851 +0x2f5
``` | 1.0 | Panic seen in logs when provisioning cluster. - Rancher server version - v2.1.2-rc13
Steps to reproduce the problem:
Provision a 1 node DO cluster.
Cluster provisioning succeeded, but the following panic is seen in the logs:
```
2018/11/22 00:32:47 [INFO] cluster [c-5txzx] provisioning: [worker] Successfully started [rke-log-linker] container on host [<ip>]
2018/11/22 00:32:49 [INFO] 2018/11/22 00:32:49 http: panic serving <ip>:59674: runtime error: invalid memory address or nil pointer dereference
2018/11/22 00:32:49 [INFO] goroutine 21496615 [running]:
2018/11/22 00:32:49 [INFO] net/http.(*conn).serve.func1(0xc01f5a1a40)
2018/11/22 00:32:49 [INFO] /usr/local/go/src/net/http/server.go:1746 +0xd0
2018/11/22 00:32:49 [INFO] panic(0x4590140, 0xba68cd0)
2018/11/22 00:32:49 [INFO] /usr/local/go/src/runtime/panic.go:513 +0x1b9
2018/11/22 00:32:49 [INFO] github.com/rancher/rancher/pkg/api/store/cluster.(*Store).ByID(0xc007f6ab40, 0xc019f39b00, 0xc0095fddc0, 0xc0223db58d, 0x7, 0x4fd9aa5, 0xc01acd1768, 0x3938515)
2018/11/22 00:32:49 [INFO] /go/src/github.com/rancher/rancher/pkg/api/store/cluster/cluster_store.go:76 +0xc2
2018/11/22 00:32:49 [INFO] github.com/rancher/rancher/vendor/github.com/rancher/norman/store/wrapper.(*StoreWrapper).ByID(0xc0083266e0, 0xc019f39b00, 0xc0095fddc0, 0xc0223db58d, 0x7, 0xc00bd4c060, 0x4fd9aa5, 0xa3f552)
2018/11/22 00:32:49 [INFO] /go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/store/wrapper/wrapper.go:24 +0x68
2018/11/22 00:32:49 [INFO] github.com/rancher/rancher/vendor/github.com/rancher/norman/api/handler.ListHandler(0xc019f39b00, 0x51c5910, 0xc0095fddc0, 0x0)
2018/11/22 00:32:49 [INFO] /go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/api/handler/list.go:28 +0xa2
2018/11/22 00:32:49 [INFO] github.com/rancher/rancher/vendor/github.com/rancher/norman/api.(*Server).handle(0xc0004bafd0, 0x81109a0, 0xc05e91da40, 0xc017562f00, 0xc007e3d700, 0xc0051a42d0, 0xc01acd1908)
2018/11/22 00:32:49 [INFO] /go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/api/server.go:240 +0x2a2
2018/11/22 00:32:49 [INFO] github.com/rancher/rancher/vendor/github.com/rancher/norman/api.(*Server).ServeHTTP(0xc0004bafd0, 0x81109a0, 0xc05e91da40, 0xc017562f00)
2018/11/22 00:32:49 [INFO] /go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/api/server.go:171 +0x49
2018/11/22 00:32:49 [INFO] github.com/rancher/rancher/vendor/github.com/gorilla/mux.(*Router).ServeHTTP(0xc00076be30, 0x81109a0, 0xc05e91da40, 0xc017562f00)
2018/11/22 00:32:49 [INFO] /go/src/github.com/rancher/rancher/vendor/github.com/gorilla/mux/mux.go:159 +0xf1
2018/11/22 00:32:49 [INFO] github.com/rancher/rancher/pkg/filter.authHandler.ServeHTTP(0x80d30e0, 0xc00b322b40, 0x80c4fc0, 0xc00076be30, 0x0, 0x81109a0, 0xc05e91da40, 0xc017562d00)
2018/11/22 00:32:49 [INFO] /go/src/github.com/rancher/rancher/pkg/filter/filter.go:92 +0x2e5
2018/11/22 00:32:49 [INFO] github.com/rancher/rancher/vendor/github.com/gorilla/mux.(*Router).ServeHTTP(0xc00076bdc0, 0x81109a0, 0xc05e91da40, 0xc017562d00)
2018/11/22 00:32:49 [INFO] /go/src/github.com/rancher/rancher/vendor/github.com/gorilla/mux/mux.go:159 +0xf1
2018/11/22 00:32:49 [INFO] github.com/rancher/rancher/pkg/dynamiclistener.(*Server).cacheIPHandler.func1(0x81109a0, 0xc05e91da40, 0xc017562b00)
2018/11/22 00:32:49 [INFO] /go/src/github.com/rancher/rancher/pkg/dynamiclistener/server.go:382 +0x101
2018/11/22 00:32:49 [INFO] net/http.HandlerFunc.ServeHTTP(0xc008dc8b40, 0x81109a0, 0xc05e91da40, 0xc017562b00)
2018/11/22 00:32:49 [INFO] /usr/local/go/src/net/http/server.go:1964 +0x44
2018/11/22 00:32:49 [INFO] net/http.serverHandler.ServeHTTP(0xc0101729c0, 0x81109a0, 0xc05e91da40, 0xc017562b00)
2018/11/22 00:32:49 [INFO] /usr/local/go/src/net/http/server.go:2741 +0xab
2018/11/22 00:32:49 [INFO] net/http.(*conn).serve(0xc01f5a1a40, 0x8119a60, 0xc0554aca80)
2018/11/22 00:32:49 [INFO] /usr/local/go/src/net/http/server.go:1847 +0x646
2018/11/22 00:32:49 [INFO] created by net/http.(*Server).Serve
2018/11/22 00:32:49 [INFO] /usr/local/go/src/net/http/server.go:2851 +0x2f5
``` | non_main | panic seen in logs when provisioning cluster rancher server version steps to reproduce the problem provision a node do cluster cluster provisioning succeeded but following panic is seen in logs cluster provisioning successfully started container on host http panic serving runtime error invalid memory address or nil pointer dereference goroutine net http conn serve usr local go src net http server go panic usr local go src runtime panic go github com rancher rancher pkg api store cluster store byid go src github com rancher rancher pkg api store cluster cluster store go github com rancher rancher vendor github com rancher norman store wrapper storewrapper byid go src github com rancher rancher vendor github com rancher norman store wrapper wrapper go github com rancher rancher vendor github com rancher norman api handler listhandler go src github com rancher rancher vendor github com rancher norman api handler list go github com rancher rancher vendor github com rancher norman api server handle go src github com rancher rancher vendor github com rancher norman api server go github com rancher rancher vendor github com rancher norman api server servehttp go src github com rancher rancher vendor github com rancher norman api server go github com rancher rancher vendor github com gorilla mux router servehttp go src github com rancher rancher vendor github com gorilla mux mux go github com rancher rancher pkg filter authhandler servehttp go src github com rancher rancher pkg filter filter go github com rancher rancher vendor github com gorilla mux router servehttp go src github com rancher rancher vendor github com gorilla mux mux go github com rancher rancher pkg dynamiclistener server cacheiphandler go src github com rancher rancher pkg dynamiclistener server go net http handlerfunc servehttp usr local go src net http server go net http serverhandler servehttp usr local go src net http server go net http conn serve usr local go src net http server go created by net http server serve usr local go src net http server go | 0
29,501 | 24,048,167,160 | IssuesEvent | 2022-09-16 10:12:33 | woowacourse-teams/2022-kkogkkog | https://api.github.com/repos/woowacourse-teams/2022-kkogkkog | closed | [FE] Lighthouse CI | 🦄 frontend 🌐 infrastructure | ## Background
Introduce CI so that reports are stored and we can keep track of which results each task produced.
## Progress
- set up `lighthouse CI`
- connect `github actions`
- set up `lighthouse Report Server`
- apply Puppeteer so that the authenticated main page can be measured.
<!--
## Notes
Explanation of points to watch out for when carrying out this task
-->
<!--
Remove all comments after completing the tasks below
1. Update Assignees so that only team members related to this task are assigned
2. Update the labels list
3. Register the kanban board for the sprint currently in progress in Projects
4. While working on individual tasks, check off the progress checklist items one by one on the kanban board
-->
| 1.0 | [FE] Lighthouse CI - ## Background
Introduce CI so that reports are stored and we can keep track of which results each task produced.
## Progress
- set up `lighthouse CI`
- connect `github actions`
- set up `lighthouse Report Server`
- apply Puppeteer so that the authenticated main page can be measured.
<!--
## Notes
Explanation of points to watch out for when carrying out this task
-->
<!--
Remove all comments after completing the tasks below
1. Update Assignees so that only team members related to this task are assigned
2. Update the labels list
3. Register the kanban board for the sprint currently in progress in Projects
4. While working on individual tasks, check off the progress checklist items one by one on the kanban board
-->
| non_main | lighthouse ci background introduce ci so that reports are stored and we can keep track of which results each task produced progress set up lighthouse ci connect github actions set up lighthouse report server apply puppeteer so that the authenticated main page can be measured notes explanation of points to watch out for when carrying out this task remove all comments after completing the tasks below update assignees so that only team members related to this task are assigned update the labels list register the kanban board for the sprint currently in progress in projects while working on individual tasks check off the progress checklist items one by one on the kanban board | 0
394,129 | 27,021,336,774 | IssuesEvent | 2023-02-11 03:04:31 | JustBrandonLim/ICT2106_P2_Project | https://api.github.com/repos/JustBrandonLim/ICT2106_P2_Project | closed | 2.2 Initial Design Document Start: 06/02/23 | End: 11/02/23 | documentation | The team shall provide an initial design document that covers the system's design. | 1.0 | 2.2 Initial Design Document Start: 06/02/23 | End: 11/02/23 - The team shall provide an initial design document that covers the system's design. | non_main | initial design document start end the team shall provide an initial design document that covers the system s design | 0 |
1,081 | 4,927,009,714 | IssuesEvent | 2016-11-26 13:53:27 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | cloudformation zero length field name in format when creating a stack | affects_2.3 aws bug_report cloud waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
cloudformation
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
2.3.0
```
##### CONFIGURATION
<!---
-->
##### OS / ENVIRONMENT
<!---
Ansible running on Centos 6.6
python 2.6.6
boto (2.43.0)
boto3 (1.4.1)
botocore (1.4.65)
-->
##### SUMMARY
<!--- Explain the problem briefly -->
When running the Cloudformation module, the task says it fails with "ValueError: zero length field name in format". However, the stack does get created correctly.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes below -->
```
---
- name: Launch Base VPC
cloudformation:
stack_name: "{{ vpc_stack_name }}" # Test-VPC
state: "present"
    region: "{{ vpc_region }}"
template: "files/VPC.yml"
template_parameters:
ClassC: "{{ vpc_classc }}"
tags:
BuiltWith: "Ansible"
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
Cloudformation stack gets built and the tasks returns with changed or OK
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
Stack gets created, but the task fails, blocking further tasks from being run.
<!--- Paste verbatim command output between quotes below -->
```
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "module_stderr": "/tmp/ansible_n9rFoj/ansible_module_cloudformation.py:203: DeprecationWarning: BaseException.message has been deprecated as of Python 2.6\n elif hasattr(err, 'message'):\n/tmp/ansible_n9rFoj/ansible_module_cloudformation.py:204: DeprecationWarning: BaseException.message has been deprecated as of Python 2.6\n error = err.message + ' ' + str(err) + ' - ' + str(type(err))\nTraceback (most recent call last):\n File \"/tmp/ansible_n9rFoj/ansible_module_cloudformation.py\", line 479, in <module>\n main()\n File \"/tmp/ansible_n9rFoj/ansible_module_cloudformation.py\", line 407, in main\n result = stack_operation(cfn, stack_params['StackName'], 'CREATE')\n File \"/tmp/ansible_n9rFoj/ansible_module_cloudformation.py\", line 262, in stack_operation\n ret = get_stack_events(cfn, stack_name)\n File \"/tmp/ansible_n9rFoj/ansible_module_cloudformation.py\", line 237, in get_stack_events\n eventline = 'StackEvent {} {} {}'.format(e['ResourceType'], e['LogicalResourceId'], e['ResourceStatus'])\nValueError: zero length field name in format\n", "module_stdout": "", "msg": "MODULE FAILURE"}
```
| True | cloudformation zero length field name in format when creating a stack - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
cloudformation
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
2.3.0
```
##### CONFIGURATION
<!---
-->
##### OS / ENVIRONMENT
<!---
Ansible running on Centos 6.6
python 2.6.6
boto (2.43.0)
boto3 (1.4.1)
botocore (1.4.65)
-->
##### SUMMARY
<!--- Explain the problem briefly -->
When running the Cloudformation module, the task says it fails with "ValueError: zero length field name in format". However, the stack does get created correctly.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes below -->
```
---
- name: Launch Base VPC
cloudformation:
stack_name: "{{ vpc_stack_name }}" # Test-VPC
state: "present"
    region: "{{ vpc_region }}"
template: "files/VPC.yml"
template_parameters:
ClassC: "{{ vpc_classc }}"
tags:
BuiltWith: "Ansible"
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
Cloudformation stack gets built and the tasks returns with changed or OK
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
Stack gets created, but the task fails, blocking further tasks from being run.
<!--- Paste verbatim command output between quotes below -->
```
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "module_stderr": "/tmp/ansible_n9rFoj/ansible_module_cloudformation.py:203: DeprecationWarning: BaseException.message has been deprecated as of Python 2.6\n elif hasattr(err, 'message'):\n/tmp/ansible_n9rFoj/ansible_module_cloudformation.py:204: DeprecationWarning: BaseException.message has been deprecated as of Python 2.6\n error = err.message + ' ' + str(err) + ' - ' + str(type(err))\nTraceback (most recent call last):\n File \"/tmp/ansible_n9rFoj/ansible_module_cloudformation.py\", line 479, in <module>\n main()\n File \"/tmp/ansible_n9rFoj/ansible_module_cloudformation.py\", line 407, in main\n result = stack_operation(cfn, stack_params['StackName'], 'CREATE')\n File \"/tmp/ansible_n9rFoj/ansible_module_cloudformation.py\", line 262, in stack_operation\n ret = get_stack_events(cfn, stack_name)\n File \"/tmp/ansible_n9rFoj/ansible_module_cloudformation.py\", line 237, in get_stack_events\n eventline = 'StackEvent {} {} {}'.format(e['ResourceType'], e['LogicalResourceId'], e['ResourceStatus'])\nValueError: zero length field name in format\n", "module_stdout": "", "msg": "MODULE FAILURE"}
```
| main | cloudformation zero length field name in format when creating a stack issue type bug report component name cloudformation ansible version configuration os environment ansible running on centos python boto botocore summary when running the cloudformation module the tasks says it fails with valueerror zero length field name in format however the stack does get created correctly steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used name launch base vpc cloudformation stack name vpc stack name test vpc state present region vpc region template files vpc yml template parameters classc vpc classc tags builtwith ansible expected results cloudformation stack gets built and the tasks returns with changed or ok actual results stack gets created but the tasks fails blocking further tasks from being ran fatal failed changed false failed true module stderr tmp ansible ansible module cloudformation py deprecationwarning baseexception message has been deprecated as of python n elif hasattr err message n tmp ansible ansible module cloudformation py deprecationwarning baseexception message has been deprecated as of python n error err message str err str type err ntraceback most recent call last n file tmp ansible ansible module cloudformation py line in n main n file tmp ansible ansible module cloudformation py line in main n result stack operation cfn stack params create n file tmp ansible ansible module cloudformation py line in stack operation n ret get stack events cfn stack name n file tmp ansible ansible module cloudformation py line in get stack events n eventline stackevent format e e e nvalueerror zero length field name in format n module stdout msg module failure | 1 |
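The `ValueError` in the record above is a Python 2.6 limitation: `str.format` on 2.6 rejects auto-numbered `{}` fields, so the module's `'StackEvent {} {} {}'` call blows up even though the stack is created. A minimal sketch of the failure and the portable fix (the sample event dict is invented for illustration):

```python
# Python 2.6's str.format() raises "ValueError: zero length field name in
# format" for auto-numbered fields, so '{} {} {}' must become '{0} {1} {2}'.
event = {  # hypothetical sample of a CloudFormation stack event
    "ResourceType": "AWS::EC2::VPC",
    "LogicalResourceId": "VPC",
    "ResourceStatus": "CREATE_COMPLETE",
}

# Fails on Python 2.6:  'StackEvent {} {} {}'.format(...)
# Works on 2.6 and later -- explicit field indices:
eventline = "StackEvent {0} {1} {2}".format(
    event["ResourceType"], event["LogicalResourceId"], event["ResourceStatus"]
)
print(eventline)  # StackEvent AWS::EC2::VPC VPC CREATE_COMPLETE
```

The same indexing change would apply to any other auto-numbered `format` call the module makes on a 2.6 interpreter.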
790,707 | 27,833,712,318 | IssuesEvent | 2023-03-20 07:52:57 | calcom/cal.com | https://api.github.com/repos/calcom/cal.com | closed | [CAL-1096] Embed selection modal - UI update | ⚡ Quick Wins Low priority | Currently

Should be

Main issues:
* svgs have the old grays
* border radius's are off
* other subtle changes required like the hover state and other font issues
[View in Figma](https://www.figma.com/file/xk4HOxtSI82J0F7enMxeak/Cal---Live?node-id=22%3A38047&t=AQX9GWSFsCzlRrEy-1)
<sub>From [SyncLinear.com](https://synclinear.com) | [CAL-1096](https://linear.app/calcom/issue/CAL-1096/embed-selection-modal-ui-update)</sub> | 1.0 | [CAL-1096] Embed selection modal - UI update - Currently

Shuold be

Main issues:
* svgs have the old grays
* border radius's are off
* other subtle changes required like the hover state and other font issues
[View in Figma](https://www.figma.com/file/xk4HOxtSI82J0F7enMxeak/Cal---Live?node-id=22%3A38047&t=AQX9GWSFsCzlRrEy-1)
<sub>From [SyncLinear.com](https://synclinear.com) | [CAL-1096](https://linear.app/calcom/issue/CAL-1096/embed-selection-modal-ui-update)</sub> | non_main | embed selection modal ui update currently should be main issues svgs have the old grays border radius s are off other subtle changes required like the hover state and other font issues from | 0
4,950 | 25,455,552,624 | IssuesEvent | 2022-11-24 13:55:26 | pace/bricks | https://api.github.com/repos/pace/bricks | closed | Follow-up from "Resolve "Add label for all http request related stats to filter"" | T::Maintainance | All metrics in `http/metrics.go` need to be documented. | True | Follow-up from "Resolve "Add label for all http request related stats to filter"" - All metrics in `http/metrics.go` need to be documented. | main | follow up from resolve add label for all http request related stats to filter all metrics in http metrics go need to be documented | 1 |
2,195 | 7,746,723,348 | IssuesEvent | 2018-05-29 23:00:10 | react-navigation/react-navigation | https://api.github.com/repos/react-navigation/react-navigation | closed | Drawer Routes do not close drawer | needs action from maintainer | Navigating to any screen from the Drawer does not close the Drawer, instead I have to manually close the drawer in every screens constructor.
There is a snack located here, but it seems to be a common bug encountered by many;
https://snack.expo.io/ByYc_wBkQ | True | Drawer Routes do not close drawer - Navigating to any screen from the Drawer does not close the Drawer, instead I have to manually close the drawer in every screens constructor.
There is a snack located here, but it seems to be a common bug encountered by many;
https://snack.expo.io/ByYc_wBkQ | main | drawer routes do not close drawer navigating to any screen from the drawer does not close the drawer instead i have to manually close the drawer in every screens constructor there is a snack located here but it seems a common bug encountered by many | 1 |
893 | 4,553,931,704 | IssuesEvent | 2016-09-13 07:35:09 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | Docker: 'reloaded' state not recreating container | affects_2.0 bug_report cloud docker P3 waiting_on_maintainer | Hi,
according to the docs it should recreate the container if parameters changed:
> "reloaded" asserts that all matching containers are running and restarts any that have any images or configuration out of date.
But that doesn't seem to work. Sometimes it doesn't reload even though parameters changed:
```
- hosts: 127.0.0.1
connection: local
tasks:
- docker:
image: ubuntu
name: test
state: reloaded
command: "nc -l -k 2342"
```
Running this, creates a container. Running it again, doesn't touch it. So far so good. Now I added 'restart_policy: always', ran the play again and it didn't recreate the container.
I'm running the latest devel branch (just pulled, including the sub repos). | True | Docker: 'reloaded' state not recreating container - Hi,
according to the docs it should recreate the container if parameters changed:
> "reloaded" asserts that all matching containers are running and restarts any that have any images or configuration out of date.
But that doesn't seem to work. Sometimes it doesn't reload even though parameters changed:
```
- hosts: 127.0.0.1
connection: local
tasks:
- docker:
image: ubuntu
name: test
state: reloaded
command: "nc -l -k 2342"
```
Running this, creates a container. Running it again, doesn't touch it. So far so good. Now I added 'restart_policy: always', ran the play again and it didn't recreate the container.
I'm running the latest devel branch (just pulled, including the sub repos). | main | docker reloaded state not recreating container hi according to the docs it should recreate the container if parameters changed reloaded asserts that all matching containers are running and restarts any that have any images or configuration out of date but that doesn t seem to work sometimes it doesn t reload even though parameters changed hosts connection local tasks docker image ubuntu name test state reloaded command nc l k running this creates a container running it again doesn t touch it so far so good now i added restart policy always ran the play again and it didn t recreate the container i m running the latest devel branch just pulled including the sub repos | 1 |
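What the docs promise for `state: reloaded` boils down to a config diff: recreate whenever any requested parameter differs from the running container. A rough Python sketch of that comparison (plain dicts stand in for the Docker API inspection the real module performs):

```python
def needs_recreate(running, desired):
    """True when any desired parameter differs from the running container's
    config -- the check 'reloaded' is documented to perform."""
    return any(running.get(key) != value for key, value in desired.items())

running = {"image": "ubuntu", "command": "nc -l -k 2342"}

# Same parameters: nothing to do.
print(needs_recreate(running, dict(running)))  # False

# Adding restart_policy changes the config, so a recreate is expected --
# the bug report says the module missed exactly this case.
print(needs_recreate(running, dict(running, restart_policy="always")))  # True
```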
433 | 3,549,451,576 | IssuesEvent | 2016-01-20 18:03:08 | DynamoRIO/drmemory | https://api.github.com/repos/DynamoRIO/drmemory | opened | add end-user support for updating syscall #'s from pdb's | Maintainability Type-Feature | The goal is to future-proof Dr. Memory: make it more adaptive to avoid requiring manual updates to fix breakages on each new Windows change. Xref #1826.
Then plan is:
+ Detect unknown version by looking at particular syscall #'s (as we can't rely on PEB versions anymore): xref https://github.com/DynamoRIO/drmemory/issues/1598
+ Create utility that downloads pdb's for the core dll's, does something like what winsysnums does, and comes up with new syscall numbers. We should be able to automate everything except for the usercall stuff.
+ Can we launch the helper process from our online client? Even if so, we'll need to cache the results, so we could ask the user to run the utility standalone?
+ Cache the results and load them in.
Things can still break if the syscall wrappers change (xref https://github.com/DynamoRIO/drmemory/issues/1854) or other things besides numbers change, but this would be an improvement and could help future-proof Dr. Memory.
| True | add end-user support for updating syscall #'s from pdb's - The goal is to future-proof Dr. Memory: make it more adaptive to avoid requiring manual updates to fix breakages on each new Windows change. Xref #1826.
Then plan is:
+ Detect unknown version by looking at particular syscall #'s (as we can't rely on PEB versions anymore): xref https://github.com/DynamoRIO/drmemory/issues/1598
+ Create utility that downloads pdb's for the core dll's, does sthg like what winsysnums does, and comes up with new syscall numbers. We should be able to automate everything except for the usercall stuff.
+ Can we launch the helper process from our online client? Even if so, we'll need to cache the results, so we could ask the user to run the utility standalone?
+ Cache the results and load them in.
Things can still break if the syscall wrappers change (xref https://github.com/DynamoRIO/drmemory/issues/1854) or other things besides numbers change, but this would be an improvement and could help future-proof Dr. Memory.
| main | add end user support for updating syscall s from pdb s the goal is to future proof dr memory make it more adaptive to avoid requiring manual updates to fix breakages on each new windows change xref then plan is detect unknown version by looking at particular syscall s as we can t rely on peb versions anymore xref create utility that downloads pdb s for the core dll s does sthg like what winsysnums does and comes up with new syscall numbers we should be able to automate everything except for the usercall stuff can we launch the helper process from our online client even if so we ll need to cache the results so we could ask the user to run the utility standalone cache the results and load them in things can still break if the syscall wrappers change xref or other things besides numbers change but this would be an improvement and could help future proof dr memory | 1 |
28,646 | 4,425,292,114 | IssuesEvent | 2016-08-16 15:03:52 | leeensminger/DelDOT-NPDES-Field-Tool | https://api.github.com/repos/leeensminger/DelDOT-NPDES-Field-Tool | closed | Manhole and inlet components not present gets greyed out in inspection | Version 1.2 - ready for testing in Version 1.2 Enhancement Release. | Any component for both manholes and inlets that are selected as not present in the inventory tab, the rating for that component should be greyed out in the inspection.
For example, if both frame of cover present and cover present are both "No" (Circled in red),

Then in the inspection, Frame of Cover Condition and Cover Condition should grey out.

| 1.0 | Manhole and inlet components not present gets greyed out in inspection - Any component for both manholes and inlets that are selected as not present in the inventory tab, the rating for that component should be greyed out in the inspection.
For example, if both frame of cover present and cover present are both "No" (Circled in red),

Then in the inspection, Frame of Cover Condition and Cover Condition should grey out.

| non_main | manhole and inlet components not present gets greyed out in inspection any component for both manholes and inlets that are selected as not present in the inventory tab the rating for that component should be greyed out in the inspection for example if both frame of cover present and cover present are both no circled in red then in the inspection frame of cover condition and cover condition should grey out | 0 |
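The requested behaviour in the record above is a simple dependency between inventory flags and inspection ratings; a hypothetical Python sketch (the field names come from the issue's screenshots, the mapping itself is an assumption):

```python
# Maps an inventory "present?" flag to the inspection rating it controls.
COMPONENT_TO_RATING = {
    "Frame of Cover Present": "Frame of Cover Condition",
    "Cover Present": "Cover Condition",
}

def greyed_out_ratings(inventory):
    """Ratings to disable (grey out) in the inspection: one per component
    whose inventory flag was answered "No"."""
    return [rating for component, rating in COMPONENT_TO_RATING.items()
            if inventory.get(component) == "No"]

print(greyed_out_ratings({"Frame of Cover Present": "No", "Cover Present": "No"}))
```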
1,465 | 6,363,153,396 | IssuesEvent | 2017-07-31 16:24:27 | duckduckgo/zeroclickinfo-goodies | https://api.github.com/repos/duckduckgo/zeroclickinfo-goodies | closed | Conversions: Add support for transfer rate conversions | Category: Highest Impact Tasks Maintainer Approved Status: Work In Progress Topic: Conversions | **convert 50 Mbps to Kbps**
[https://duckduckgo.com/?q=convert%2050%20Mbps%20to%20Kbps](https://duckduckgo.com/?q=convert%2050%20Mbps%20to%20Kbps)
------
IA Page: http://duck.co/ia/view/conversions
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @mintsoft | True | Conversions: Add support for transfer rate conversions - **convert 50 Mbps to Kbps**
[https://duckduckgo.com/?q=convert%2050%20Mbps%20to%20Kbps](https://duckduckgo.com/?q=convert%2050%20Mbps%20to%20Kbps)
------
IA Page: http://duck.co/ia/view/conversions
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @mintsoft | main | conversions add support for transfer rate conversions convert mbps to kbps ia page mintsoft | 1 |
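The conversion the record above asks for is a straight ratio between unit sizes; a small sketch assuming decimal (SI) data-rate prefixes, i.e. 1 Mbps = 1000 Kbps:

```python
# Transfer-rate units expressed in bits per second (decimal SI prefixes).
RATE_IN_BPS = {"bps": 1, "Kbps": 1_000, "Mbps": 1_000_000, "Gbps": 1_000_000_000}

def convert_rate(value, src, dst):
    """Convert a transfer rate between units, e.g. Mbps -> Kbps."""
    return value * RATE_IN_BPS[src] / RATE_IN_BPS[dst]

print(convert_rate(50, "Mbps", "Kbps"))  # 50000.0, i.e. "convert 50 Mbps to Kbps"
```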
68,925 | 29,929,260,290 | IssuesEvent | 2023-06-22 08:19:16 | flipperdevices/flipperzero-firmware | https://api.github.com/repos/flipperdevices/flipperzero-firmware | closed | Add a Keypad Lock mode to dummy mode in GUI | Feature Request Core+Services | ### Describe the enhancement you're suggesting.
It would be nice if the Keypad Lock functionality could also be implemented in dummy mode, so that if a person has possession of your Flipper Zero to play on it, they don't have the possibility to enter the brainac mode for pentest or malicious purposes.
### Anything else?
_No response_ | 1.0 | Add a Keypad Lock mode to dummy mode in GUI - ### Describe the enhancement you're suggesting.
It would be nice if the Keypad Lock functionality could also be implemented in dummy mode, so that if a person has possession of your Flipper Zero to play on it, they don't have the possibility to enter the brainac mode for pentest or malicious purposes.
### Anything else?
_No response_ | non_main | add a keypad lock mode to dummy mode in gui describe the enhancement you re suggesting it would be nice if it were possible to implement the keypad lock functionality even on the dummy mode so that if a person has possession of your flipper zero to play on it it doesn t have the possibility to enter into the brainac mode for pentest or malicious purpose anything else no response | 0 |
52,899 | 13,772,653,157 | IssuesEvent | 2020-10-08 01:12:02 | taddhopkins/maven-project | https://api.github.com/repos/taddhopkins/maven-project | opened | CVE-2020-5421 (High) detected in spring-web-4.0.5.RELEASE.jar | security vulnerability | ## CVE-2020-5421 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-web-4.0.5.RELEASE.jar</b></p></summary>
<p>Spring Web</p>
<p>Library home page: <a href="https://github.com/spring-projects/spring-framework">https://github.com/spring-projects/spring-framework</a></p>
<p>Path to dependency file: maven-project/pom.xml</p>
<p>Path to vulnerable library: epository/org/springframework/spring-web/4.0.5.RELEASE/spring-web-4.0.5.RELEASE.jar,maven-project/target/todo-api-1.0-SNAPSHOT/WEB-INF/lib/spring-web-4.0.5.RELEASE.jar</p>
<p>
Dependency Hierarchy:
- :x: **spring-web-4.0.5.RELEASE.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Spring Framework versions 5.2.0 - 5.2.8, 5.1.0 - 5.1.17, 5.0.0 - 5.0.18, 4.3.0 - 4.3.28, and older unsupported versions, the protections against RFD attacks from CVE-2015-5211 may be bypassed depending on the browser used through the use of a jsessionid path parameter.
<p>Publish Date: 2020-09-19
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-5421>CVE-2020-5421</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://tanzu.vmware.com/security/cve-2020-5421">https://tanzu.vmware.com/security/cve-2020-5421</a></p>
<p>Release Date: 2020-07-21</p>
<p>Fix Resolution: org.springframework:spring-web:5.2.9,org.springframework:spring-web:5.1.18,org.springframework:spring-web:5.0.19,org.springframework:spring-web:4.3.29</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.springframework","packageName":"spring-web","packageVersion":"4.0.5.RELEASE","isTransitiveDependency":false,"dependencyTree":"org.springframework:spring-web:4.0.5.RELEASE","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.springframework:spring-web:5.2.9,org.springframework:spring-web:5.1.18,org.springframework:spring-web:5.0.19,org.springframework:spring-web:4.3.29"}],"vulnerabilityIdentifier":"CVE-2020-5421","vulnerabilityDetails":"In Spring Framework versions 5.2.0 - 5.2.8, 5.1.0 - 5.1.17, 5.0.0 - 5.0.18, 4.3.0 - 4.3.28, and older unsupported versions, the protections against RFD attacks from CVE-2015-5211 may be bypassed depending on the browser used through the use of a jsessionid path parameter.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-5421","cvss3Severity":"high","cvss3Score":"8.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"Low","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | True | CVE-2020-5421 (High) detected in spring-web-4.0.5.RELEASE.jar - ## CVE-2020-5421 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-web-4.0.5.RELEASE.jar</b></p></summary>
<p>Spring Web</p>
<p>Library home page: <a href="https://github.com/spring-projects/spring-framework">https://github.com/spring-projects/spring-framework</a></p>
<p>Path to dependency file: maven-project/pom.xml</p>
<p>Path to vulnerable library: epository/org/springframework/spring-web/4.0.5.RELEASE/spring-web-4.0.5.RELEASE.jar,maven-project/target/todo-api-1.0-SNAPSHOT/WEB-INF/lib/spring-web-4.0.5.RELEASE.jar</p>
<p>
Dependency Hierarchy:
- :x: **spring-web-4.0.5.RELEASE.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Spring Framework versions 5.2.0 - 5.2.8, 5.1.0 - 5.1.17, 5.0.0 - 5.0.18, 4.3.0 - 4.3.28, and older unsupported versions, the protections against RFD attacks from CVE-2015-5211 may be bypassed depending on the browser used through the use of a jsessionid path parameter.
<p>Publish Date: 2020-09-19
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-5421>CVE-2020-5421</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://tanzu.vmware.com/security/cve-2020-5421">https://tanzu.vmware.com/security/cve-2020-5421</a></p>
<p>Release Date: 2020-07-21</p>
<p>Fix Resolution: org.springframework:spring-web:5.2.9,org.springframework:spring-web:5.1.18,org.springframework:spring-web:5.0.19,org.springframework:spring-web:4.3.29</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.springframework","packageName":"spring-web","packageVersion":"4.0.5.RELEASE","isTransitiveDependency":false,"dependencyTree":"org.springframework:spring-web:4.0.5.RELEASE","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.springframework:spring-web:5.2.9,org.springframework:spring-web:5.1.18,org.springframework:spring-web:5.0.19,org.springframework:spring-web:4.3.29"}],"vulnerabilityIdentifier":"CVE-2020-5421","vulnerabilityDetails":"In Spring Framework versions 5.2.0 - 5.2.8, 5.1.0 - 5.1.17, 5.0.0 - 5.0.18, 4.3.0 - 4.3.28, and older unsupported versions, the protections against RFD attacks from CVE-2015-5211 may be bypassed depending on the browser used through the use of a jsessionid path parameter.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-5421","cvss3Severity":"high","cvss3Score":"8.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"Low","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | non_main | cve high detected in spring web release jar cve high severity vulnerability vulnerable library spring web release jar spring web library home page a href path to dependency file maven project pom xml path to vulnerable library epository org springframework spring web release spring web release jar maven project target todo api snapshot web inf lib spring web release jar dependency hierarchy x spring web release jar vulnerable library vulnerability details in spring framework versions and older unsupported versions the protections against rfd attacks from cve may be bypassed depending on the browser used through the use of a jsessionid path parameter publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope 
unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org springframework spring web org springframework spring web org springframework spring web org springframework spring web rescue worker helmet automatic remediation is available for this issue isopenpronvulnerability true ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails in spring framework versions and older unsupported versions the protections against rfd attacks from cve may be bypassed depending on the browser used through the use of a jsessionid path parameter vulnerabilityurl | 0 |
4,736 | 24,452,846,723 | IssuesEvent | 2022-10-07 02:01:29 | usefulmove/comp | https://api.github.com/repos/usefulmove/comp | closed | Suggestion to improve argument parsing | enhancement wontfix maintainability | Hi @usefulmove I noticed that you're parsing the arguments by yourself.
I'm referring to `src/comp.rs`
```rs
let mut args: Vec<String> = env::args().collect();
// ...
if args.len() <= 1 {
args.push("help".to_string());
}
// ...
```
As a suggestion, please take a look at this [crate](https://crates.io/crates/argh), which takes care of argument parsing (and is lightweight in comparison to clap).
This is only a suggestion, you're free to dismiss it 😁. Hopefully this will help you with future projects! | True | Suggestion to improve argument parsing - Hi @usefulmove I noticed that you're parsing the arguments by yourself.
I'm referring to `src/comp.rs`
```rs
let mut args: Vec<String> = env::args().collect();
// ...
if args.len() <= 1 {
args.push("help".to_string());
}
// ...
```
As a suggestion, please take a look at this [crate](https://crates.io/crates/argh), which takes care of argument parsing (and is lightweight in comparison to clap).
This is only a suggestion, you're free to dismiss it 😁. Hopefully this will help you with future projects! | main | suggestion to improve argument parsing hi usefulmove i noticed that you re parsing the arguments by yourself i m referring to src comp rs rs let mut args vec env args collect if args len args push help to string as a suggestion please take a look at this which takes care of argument parsing lightweight in comparison to clap this is only a suggestion you re free to dismiss it 😁 hopefully this will help you with future projects | 1 |
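An editorial aside on the record above: the manual pattern it quotes from `src/comp.rs` — push a `help` argument when none are given — can be sketched as a tiny, testable helper. This is a hypothetical Python transliteration for illustration only (the function name is ours, not from the repo); in the actual Rust project, a crate such as the suggested `argh` would replace this with a declarative parser.

```python
import sys

def with_default_command(argv):
    """Mirror the src/comp.rs pattern: argv[0] is the program name, so when
    no real arguments follow it, fall back to the 'help' subcommand."""
    args = list(argv)  # copy so the caller's list is not mutated
    if len(args) <= 1:
        args.append("help")
    return args

if __name__ == "__main__":
    # e.g. with no CLI arguments this prints the program name followed by 'help'
    print(with_default_command(sys.argv))
```

The helper is pure (it returns a new list), which is what makes the behaviour easy to assert in isolation — the same property an argument-parsing library gives you for free.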
325,823 | 27,964,405,325 | IssuesEvent | 2023-03-24 18:09:35 | kubernetes-sigs/cluster-api | https://api.github.com/repos/kubernetes-sigs/cluster-api | closed | Self-hosted e2e tests are a bit flaky | help wanted kind/failing-test triage/accepted | ### Failure cluster [16cd3caed44925f8a110](https://go.k8s.io/triage#16cd3caed44925f8a110)
##### Error text:
```
Failed to run clusterctl move
Expected success, but got an error:
<*errors.withStack | 0xc002245170>: {
error: <*errors.withMessage | 0xc0023f27c0>{
cause: <*errors.withStack | 0xc002245140>{
error: <*errors.withMessage | 0xc0023f2780>{
cause: <*errors.withStack | 0xc002245020>{
error: <*errors.withMessage | 0xc0023f2740>{
cause: <*errors.withStack | 0xc002244ff0>{
error: <*errors.withMessage | 0xc0023f2700>{
cause: <*errors.withStack | 0xc002244fc0>{
error: <*errors.withMessage | 0xc0023f26c0>{cause: ..., msg: ...},
stack: [..., ..., ..., ..., ..., ..., ..., ..., ..., ..., ..., ..., ..., ..., ..., ..., ..., ..., ...],
},
msg: "error creating client",
},
stack: [0x1bd89de, 0x1bd8189, 0x1bc9f91, 0x119505b, 0x1195137, 0x11950b9, 0x11959ff, 0x1bc9e65, 0x1bd802c, 0x1bd68e5, 0x1bd44a5, 0x1c2100a, 0x1c20e59, 0x1c2a1f9, 0x1cc5148, 0x861c5b, 0x874ad8, 0x4704c1],
},
msg: "action failed after 10 attempts",
},
stack: [0x1bc9ec5, 0x1bd802c, 0x1bd68e5, 0x1bd44a5, 0x1c2100a, 0x1c20e59, 0x1c2a1f9, 0x
```
#### Recent failures:
[26/11/2022, 01:26:11 periodic-cluster-api-e2e-main](https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/periodic-cluster-api-e2e-main/1596208087008546816)
[21/11/2022, 07:28:26 periodic-cluster-api-e2e-main](https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/periodic-cluster-api-e2e-main/1594487339571220480)
[20/11/2022, 19:26:35 periodic-cluster-api-e2e-main](https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/periodic-cluster-api-e2e-main/1594305640572915712)
/kind failing-test
<!-- If this is a flake, please add: /kind flake -->
<!-- Please assign a SIG using: /sig SIG-NAME --> | 1.0 | Self-hosted e2e tests are a bit flaky - ### Failure cluster [16cd3caed44925f8a110](https://go.k8s.io/triage#16cd3caed44925f8a110)
##### Error text:
```
Failed to run clusterctl move
Expected success, but got an error:
<*errors.withStack | 0xc002245170>: {
error: <*errors.withMessage | 0xc0023f27c0>{
cause: <*errors.withStack | 0xc002245140>{
error: <*errors.withMessage | 0xc0023f2780>{
cause: <*errors.withStack | 0xc002245020>{
error: <*errors.withMessage | 0xc0023f2740>{
cause: <*errors.withStack | 0xc002244ff0>{
error: <*errors.withMessage | 0xc0023f2700>{
cause: <*errors.withStack | 0xc002244fc0>{
error: <*errors.withMessage | 0xc0023f26c0>{cause: ..., msg: ...},
stack: [..., ..., ..., ..., ..., ..., ..., ..., ..., ..., ..., ..., ..., ..., ..., ..., ..., ..., ...],
},
msg: "error creating client",
},
stack: [0x1bd89de, 0x1bd8189, 0x1bc9f91, 0x119505b, 0x1195137, 0x11950b9, 0x11959ff, 0x1bc9e65, 0x1bd802c, 0x1bd68e5, 0x1bd44a5, 0x1c2100a, 0x1c20e59, 0x1c2a1f9, 0x1cc5148, 0x861c5b, 0x874ad8, 0x4704c1],
},
msg: "action failed after 10 attempts",
},
stack: [0x1bc9ec5, 0x1bd802c, 0x1bd68e5, 0x1bd44a5, 0x1c2100a, 0x1c20e59, 0x1c2a1f9, 0x
```
#### Recent failures:
[26/11/2022, 01:26:11 periodic-cluster-api-e2e-main](https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/periodic-cluster-api-e2e-main/1596208087008546816)
[21/11/2022, 07:28:26 periodic-cluster-api-e2e-main](https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/periodic-cluster-api-e2e-main/1594487339571220480)
[20/11/2022, 19:26:35 periodic-cluster-api-e2e-main](https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/periodic-cluster-api-e2e-main/1594305640572915712)
/kind failing-test
<!-- If this is a flake, please add: /kind flake -->
<!-- Please assign a SIG using: /sig SIG-NAME --> | non_main | self hosted tests are a bit flaky failure cluster error text failed to run clusterctl move expected success but got an error error cause error cause error cause error cause error cause msg stack msg error creating client stack msg action failed after attempts stack recent failures kind failing test | 0 |
464,686 | 13,337,972,115 | IssuesEvent | 2020-08-28 10:09:56 | netdata/netdata | https://api.github.com/repos/netdata/netdata | closed | Disabling cloud still performs requests to app.netdata.cloud | area/web bug needs triage priority/high | <!--
When creating a bug report please:
- Verify first that your issue is not already reported on GitHub.
- Test if the latest release and master branch are affected too.
-->
##### Bug report summary
<!-- Provide a clear and concise description of the bug you're experiencing. -->
I put `[global] enabled = false` in `/var/lib/netdata/cloud.d/cloud.conf`
Info api response:
```
"cloud-enabled": false,
"cloud-available": false,
"agent-claimed": false,
"aclk-available": false
```
##### OS / Environment
<!--
Provide as much information about your environment (which operating system and distribution you're using, if Netdata is running in a container, etc.)
as possible to allow us reproduce this bug faster.
To get this information, execute the following commands based on your operating system:
- uname -a; grep -Hv "^#" /etc/*release # Linux
- uname -a; uname -K # BSD
- uname -a; sw_vers # macOS
Place the output from the command in the code section below.
-->
```
Debian 10
```
##### Netdata version
<!--
Provide output of `netdata -V`.
If Netdata is running, execute: $(ps aux | grep -E -o "[a-zA-Z/]+netdata ") -V
-->
netdata v1.22.1-17-nightly
##### Component Name
<!--
Let us know which component is affected by the bug. Our code is structured according to its component,
so the component name is the same as the top level directory of the repository.
For example, a bug in the dashboard would be under the web component.
-->
Web/dashboard
##### Steps To Reproduce
<!--
Describe how you found this bug and how we can reproduce it, preferably with a minimal test-case scenario.
If you'd like to attach larger files, use gist.github.com and paste in links.
-->
1. Disable cloud
2. Open the dashboard and monitor requests in devtools to `sso/sign-in`
##### Expected behavior
<!-- Provide a clear and concise description of what you expected to happen. -->
When cloud is disabled, I expect netdata not to perform any external requests on the dashboard. | 1.0 | Disabling cloud still performs requests to app.netdata.cloud - <!--
When creating a bug report please:
- Verify first that your issue is not already reported on GitHub.
- Test if the latest release and master branch are affected too.
-->
##### Bug report summary
<!-- Provide a clear and concise description of the bug you're experiencing. -->
I put `[global] enabled = false` in `/var/lib/netdata/cloud.d/cloud.conf`
Info api response:
```
"cloud-enabled": false,
"cloud-available": false,
"agent-claimed": false,
"aclk-available": false
```
##### OS / Environment
<!--
Provide as much information about your environment (which operating system and distribution you're using, if Netdata is running in a container, etc.)
as possible to allow us reproduce this bug faster.
To get this information, execute the following commands based on your operating system:
- uname -a; grep -Hv "^#" /etc/*release # Linux
- uname -a; uname -K # BSD
- uname -a; sw_vers # macOS
Place the output from the command in the code section below.
-->
```
Debian 10
```
##### Netdata version
<!--
Provide output of `netdata -V`.
If Netdata is running, execute: $(ps aux | grep -E -o "[a-zA-Z/]+netdata ") -V
-->
netdata v1.22.1-17-nightly
##### Component Name
<!--
Let us know which component is affected by the bug. Our code is structured according to its component,
so the component name is the same as the top level directory of the repository.
For example, a bug in the dashboard would be under the web component.
-->
Web/dashboard
##### Steps To Reproduce
<!--
Describe how you found this bug and how we can reproduce it, preferably with a minimal test-case scenario.
If you'd like to attach larger files, use gist.github.com and paste in links.
-->
1. Disable cloud
2. Open the dashboard and monitor requests in devtools to `sso/sign-in`
##### Expected behavior
<!-- Provide a clear and concise description of what you expected to happen. -->
When cloud is disabled, I expect netdata not to perform any external requests on the dashboard. | non_main | disabling cloud still performs requests to app netdata cloud when creating a bug report please verify first that your issue is not already reported on github test if the latest release and master branch are affected too bug report summary i put enabled false in var lib netdata cloud d cloud conf info api response cloud enabled false cloud available false agent claimed false aclk available false os environment provide as much information about your environment which operating system and distribution you re using if netdata is running in a container etc as possible to allow us reproduce this bug faster to get this information execute the following commands based on your operating system uname a grep hv etc release linux uname a uname k bsd uname a sw vers macos place the output from the command in the code section below debian netdata version provide output of netdata v if netdata is running execute ps aux grep e o netdata v netdata nightly component name let us know which component is affected by the bug our code is structured according to its component so the component name is the same as the top level directory of the repository for example a bug in the dashboard would be under the web component web dashboard steps to reproduce describe how you found this bug and how we can reproduce it preferably with a minimal test case scenario if you d like to attach larger files use gist github com and paste in links disable cloud open the dashboard and monitor requests in devtools to sso sign in expected behavior when cloud is disabled i expect netdata not to perform any external requests on the dashboard | 0 |
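A small aside on the record above: the quoted `/api/v1/info` flags can be checked mechanically. The sketch below is illustrative only — the helper name and the key tuple are our assumptions, chosen to match the fields quoted in the report — and it extracts just the cloud-related fields from an info response body:

```python
import json

# Cloud-related keys quoted in the bug report's info API response.
CLOUD_KEYS = ("cloud-enabled", "cloud-available", "agent-claimed", "aclk-available")

def cloud_flags(info_body):
    """Parse an /api/v1/info JSON body and return only the cloud flags,
    e.g. to assert they are all false after disabling cloud."""
    info = json.loads(info_body)
    return {key: info.get(key) for key in CLOUD_KEYS}
```

In practice the body would come from an HTTP GET against the agent; keeping the parsing separate from the fetch is what makes the check unit-testable.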
3,438 | 13,211,537,136 | IssuesEvent | 2020-08-15 23:57:34 | ansible/ansible | https://api.github.com/repos/ansible/ansible | closed | terraform module '-no-color' conflicts with TF_CLI_ARGS_plan env variable | affects_2.9 bot_closed bug cloud collection collection:community.general module needs_collection_redirect needs_maintainer needs_triage python3 support:community | ##### SUMMARY
The terraform module always passes `-no-color`, but if `-no-color` is also set in `TF_CLI_ARGS_plan` the run fails. It is `terraform` itself that fails in this situation, but it is the Ansible module that explicitly sets `-no-color` in its code.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
terraform module
##### ANSIBLE VERSION
```
ansible 2.9.1
config file = /home/jiri/.ansible.cfg
configured module search path = ['/home/jiri/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/jiri/stow/ansible/venv/lib/python3.6/site-packages/ansible
executable location = /home/jiri/stow/ansible/venv/bin/ansible
python version = 3.6.9 (default, Nov 7 2019, 10:44:02) [GCC 8.3.0]
```
##### STEPS TO REPRODUCE
```
$ export TF_CLI_ARGS_plan=-no-color
$ cat > /tmp/test.yml <<EOF
- name: test
hosts: localhost
gather_facts: false
tasks:
- terraform:
project_path: /tmp/
state: present
register: _result
- debug: var=_result
EOF
$ ansible-playbook /tmp/test.yml
```
##### EXPECTED RESULTS
should not fail, ansible should not put either `-no-color` or unset the env variable for the play.
##### ACTUAL RESULTS
playbook fails with:
```
"msg": "Terraform plan could not be created\r\nSTDOUT: \r\n\r\nSTDERR: Usage: terraform plan [options] [DIR]\n\n Generates an execution plan for Terraform.\n\n This execution plan can be reviewed prior to r
unning apply to get a\n sense for what Terraform will do. Optionally, the plan can be saved to\n a Terraform plan file, and apply can take this plan file to execute\n this plan exactly.\n\nOptions:\n\n -destr
oy If set, a plan will be generated to destroy all resources\n managed by the given configuration and state.\n\n -detailed-exitcode Return detailed exit codes when the command ex
its. This\n will change the meaning of exit codes to:\n 0 - Succeeded, diff is empty (no changes)\n 1 - Errored\n 2 - Succeeded,
there is a diff\n\n -input=true Ask for input for variables if not directly set.\n\n -lock=true Lock the state file when locking is supported.\n\n -lock-timeout=0s Duration to retry a stat
e lock.\n\n -no-color If specified, output won't contain any color.\n\n -out=path Write a plan file to the given path. This can be used as\n input to the \"apply\" comma
nd.\n\n -parallelism=n Limit the number of concurrent operations. Defaults to 10.\n\n -refresh=true Update state prior to checking for differences.\n\n -state=statefile Path to a Terraform state
file to use to look\n up Terraform-managed resources. By default it will\n use the state \"terraform.tfstate\" if it exists.\n\n -target=resource Resource to target.
Operation will be limited to this\n resource and its dependencies. This flag can be used\n multiple times.\n\n -var 'foo=bar' Set a variable in the Terraform config
uration. This\n flag can be set multiple times.\n\n -var-file=foo Set variables in the Terraform configuration from\n a file. If \"terraform.tfvars\" or any \".aut
o.tfvars\"\n files are present, they will be automatically loaded.\n"
}
```
```
$ ag -G 'terraform.*' -- '-no-color' /home/jiri/stow/ansible/venv/
/home/jiri/stow/ansible/venv/lib/python3.6/site-packages/ansible/modules/cloud/misc/terraform.py
168:DESTROY_ARGS = ('destroy', '-no-color', '-force')
169:APPLY_ARGS = ('apply', '-no-color', '-input=false', '-auto-approve=true')
209: command = [bin_path, 'workspace', 'list', '-no-color']
225: command = [bin_path, 'workspace', action, workspace, '-no-color']
248: plan_command = [command[0], 'plan', '-input=false', '-no-color', '-detailed-exitcode', '-out', plan_path]
374: outputs_command = [command[0], 'output', '-no-color', '-json'] + _state_args(state_file)
``` | True | terraform module '-no-color' conflicts with TF_CLI_ARGS_plan env variable - ##### SUMMARY
The terraform module always passes `-no-color`, but if `-no-color` is also set in `TF_CLI_ARGS_plan` the run fails. It is `terraform` itself that fails in this situation, but it is the Ansible module that explicitly sets `-no-color` in its code.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
terraform module
##### ANSIBLE VERSION
```
ansible 2.9.1
config file = /home/jiri/.ansible.cfg
configured module search path = ['/home/jiri/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/jiri/stow/ansible/venv/lib/python3.6/site-packages/ansible
executable location = /home/jiri/stow/ansible/venv/bin/ansible
python version = 3.6.9 (default, Nov 7 2019, 10:44:02) [GCC 8.3.0]
```
##### STEPS TO REPRODUCE
```
$ export TF_CLI_ARGS_plan=-no-color
$ cat > /tmp/test.yml <<EOF
- name: test
hosts: localhost
gather_facts: false
tasks:
- terraform:
project_path: /tmp/
state: present
register: _result
- debug: var=_result
EOF
$ ansible-playbook /tmp/test.yml
```
##### EXPECTED RESULTS
should not fail, ansible should not put either `-no-color` or unset the env variable for the play.
##### ACTUAL RESULTS
playbook fails with:
```
"msg": "Terraform plan could not be created\r\nSTDOUT: \r\n\r\nSTDERR: Usage: terraform plan [options] [DIR]\n\n Generates an execution plan for Terraform.\n\n This execution plan can be reviewed prior to r
unning apply to get a\n sense for what Terraform will do. Optionally, the plan can be saved to\n a Terraform plan file, and apply can take this plan file to execute\n this plan exactly.\n\nOptions:\n\n -destr
oy If set, a plan will be generated to destroy all resources\n managed by the given configuration and state.\n\n -detailed-exitcode Return detailed exit codes when the command ex
its. This\n will change the meaning of exit codes to:\n 0 - Succeeded, diff is empty (no changes)\n 1 - Errored\n 2 - Succeeded,
there is a diff\n\n -input=true Ask for input for variables if not directly set.\n\n -lock=true Lock the state file when locking is supported.\n\n -lock-timeout=0s Duration to retry a stat
e lock.\n\n -no-color If specified, output won't contain any color.\n\n -out=path Write a plan file to the given path. This can be used as\n input to the \"apply\" comma
nd.\n\n -parallelism=n Limit the number of concurrent operations. Defaults to 10.\n\n -refresh=true Update state prior to checking for differences.\n\n -state=statefile Path to a Terraform state
file to use to look\n up Terraform-managed resources. By default it will\n use the state \"terraform.tfstate\" if it exists.\n\n -target=resource Resource to target.
Operation will be limited to this\n resource and its dependencies. This flag can be used\n multiple times.\n\n -var 'foo=bar' Set a variable in the Terraform config
uration. This\n flag can be set multiple times.\n\n -var-file=foo Set variables in the Terraform configuration from\n a file. If \"terraform.tfvars\" or any \".aut
o.tfvars\"\n files are present, they will be automatically loaded.\n"
}
```
```
$ ag -G 'terraform.*' -- '-no-color' /home/jiri/stow/ansible/venv/
/home/jiri/stow/ansible/venv/lib/python3.6/site-packages/ansible/modules/cloud/misc/terraform.py
168:DESTROY_ARGS = ('destroy', '-no-color', '-force')
169:APPLY_ARGS = ('apply', '-no-color', '-input=false', '-auto-approve=true')
209: command = [bin_path, 'workspace', 'list', '-no-color']
225: command = [bin_path, 'workspace', action, workspace, '-no-color']
248: plan_command = [command[0], 'plan', '-input=false', '-no-color', '-detailed-exitcode', '-out', plan_path]
374: outputs_command = [command[0], 'output', '-no-color', '-json'] + _state_args(state_file)
``` | main | terraform module no color conflicts with tf cli args plan env variable summary terraform module sets no color but if no color is set in tf cli args plan this would fail it s terraform itself which fails like this but here ansible module sets itself explicitly no color in code issue type bug report component name terraform module ansible version ansible config file home jiri ansible cfg configured module search path ansible python module location home jiri stow ansible venv lib site packages ansible executable location home jiri stow ansible venv bin ansible python version default nov steps to reproduce export tf cli args plan no color cat tmp test yml eof name test hosts localhost gather facts false tasks terraform project path tmp state present register result debug var result eof ansible playbook tmp test yml expected results should not fail ansible should not put either no color or unset the env variable for the play actual results playbook fails with msg terraform plan could not be created r nstdout r n r nstderr usage terraform plan n n generates an execution plan for terraform n n this execution plan can be reviewed prior to r unning apply to get a n sense for what terraform will do optionally the plan can be saved to n a terraform plan file and apply can take this plan file to execute n this plan exactly n noptions n n destr oy if set a plan will be generated to destroy all resources n managed by the given configuration and state n n detailed exitcode return detailed exit codes when the command ex its this n will change the meaning of exit codes to n succeeded diff is empty no changes n errored n succeeded there is a diff n n input true ask for input for variables if not directly set n n lock true lock the state file when locking is supported n n lock timeout duration to retry a stat e lock n n no color if specified output won t contain any color n n out path write a plan file to the given path this can be used as n input to the apply comma nd n 
n parallelism n limit the number of concurrent operations defaults to n n refresh true update state prior to checking for differences n n state statefile path to a terraform state file to use to look n up terraform managed resources by default it will n use the state terraform tfstate if it exists n n target resource resource to target operation will be limited to this n resource and its dependencies this flag can be used n multiple times n n var foo bar set a variable in the terraform config uration this n flag can be set multiple times n n var file foo set variables in the terraform configuration from n a file if terraform tfvars or any aut o tfvars n files are present they will be automatically loaded n ag g terraform no color home jiri stow ansible venv home jiri stow ansible venv lib site packages ansible modules cloud misc terraform py destroy args destroy no color force apply args apply no color input false auto approve true command command plan command plan input false no color detailed exitcode out plan path outputs command output no color json state args state file | 1 |
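An editorial aside on the record above: the `plan_command` construction it quotes (terraform.py line 248) unconditionally appends `-no-color`. One possible shape of the fix the reporter asks for — skip the flag when the user already supplies it via `TF_CLI_ARGS_plan` — can be sketched as follows. This is an illustrative sketch only, not the module's actual code, and the helper name is ours:

```python
import os

def build_plan_command(bin_path, plan_path, environ=os.environ):
    """Build a 'terraform plan' command in the same flag order as the quoted
    plan_command, but only add '-no-color' when the user has not already
    set it via the TF_CLI_ARGS_plan environment variable."""
    cmd = [bin_path, "plan", "-input=false", "-detailed-exitcode", "-out", plan_path]
    env_args = environ.get("TF_CLI_ARGS_plan", "").split()
    if "-no-color" not in env_args:
        cmd.insert(3, "-no-color")  # keep the original position in the argv
    return cmd
```

The alternative the reporter mentions — unsetting the variable for the spawned process — would amount to passing a copy of the environment with `TF_CLI_ARGS_plan` removed instead of filtering the flag list.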
1,571 | 6,572,329,951 | IssuesEvent | 2017-09-11 01:26:32 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | lxc_container: provide option to make automatic container restarts optional | affects_2.1 cloud feature_idea waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Feature Idea
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
lxc_container
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.1.0.0
config file =
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
None
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
N/A
##### SUMMARY
<!--- Explain the problem briefly -->
When the container_config is changed (and some other options are implemented), the module actions a container restart. It would be great if that behaviour could be optional so that it is possible to use handlers/tasks to action a restart at a later time.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
N/A
<!--- Paste example playbooks or commands between quotes below -->
```
N/A
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
N/A
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with high verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
N/A
```
| True | lxc_container: provide option to make automatic container restarts optional - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Feature Idea
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
lxc_container
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.1.0.0
config file =
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
None
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
N/A
##### SUMMARY
<!--- Explain the problem briefly -->
When the container_config is changed (and some other options are implemented), the module actions a container restart. It would be great if that behaviour could be optional so that it is possible to use handlers/tasks to action a restart at a later time.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
N/A
<!--- Paste example playbooks or commands between quotes below -->
```
N/A
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
N/A
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with high verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
N/A
```
| main | lxc container provide option to make automatic container restarts optional issue type feature idea component name lxc container ansible version ansible config file configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables none os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific n a summary when the container config is changed and some other options are implemented the module actions a container restart it would be great if that behaviour could be optional so that it is possible to use handlers tasks to action a restart at a later time steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used n a n a expected results n a actual results n a | 1 |
5,207 | 26,464,324,963 | IssuesEvent | 2023-01-16 21:17:12 | bazelbuild/intellij | https://api.github.com/repos/bazelbuild/intellij | closed | Flag --incompatible_disable_starlark_host_transitions will break CLion Plugin Google in Bazel 7.0 | type: bug product: CLion topic: bazel awaiting-maintainer | Incompatible flag `--incompatible_disable_starlark_host_transitions` will be enabled by default in the next major release (Bazel 7.0), thus breaking CLion Plugin Google. Please migrate to fix this and unblock the flip of this flag.
The flag is documented here: [bazelbuild/bazel#17032](https://github.com/bazelbuild/bazel/issues/17032).
Please check the following CI builds for build and test results:
- [Ubuntu 18.04 OpenJDK 11](https://buildkite.com/bazel/bazelisk-plus-incompatible-flags/builds/1365#0185154a-d799-41be-bf9a-17ec6b1de4da)
- [Ubuntu 18.04 OpenJDK 11](https://buildkite.com/bazel/bazelisk-plus-incompatible-flags/builds/1365#0185154a-d796-46f1-81ee-b81f44e7dd1c)
- [Ubuntu 18.04 OpenJDK 11](https://buildkite.com/bazel/bazelisk-plus-incompatible-flags/builds/1365#0185154a-d7a1-465a-86c9-f7239ae834c7)
- [Ubuntu 18.04 OpenJDK 11](https://buildkite.com/bazel/bazelisk-plus-incompatible-flags/builds/1365#0185154a-d79e-4705-9c09-6ee73ea1da7b)
- [Ubuntu 18.04 OpenJDK 11](https://buildkite.com/bazel/bazelisk-plus-incompatible-flags/builds/1365#0185154a-d7a4-4008-b4e6-f6684a321fb0)
Never heard of incompatible flags before? We have [documentation](https://docs.bazel.build/versions/master/backward-compatibility.html) that explains everything.
If you have any questions, please file an issue in https://github.com/bazelbuild/continuous-integration. | True | Flag --incompatible_disable_starlark_host_transitions will break CLion Plugin Google in Bazel 7.0 - Incompatible flag `--incompatible_disable_starlark_host_transitions` will be enabled by default in the next major release (Bazel 7.0), thus breaking CLion Plugin Google. Please migrate to fix this and unblock the flip of this flag.
The flag is documented here: [bazelbuild/bazel#17032](https://github.com/bazelbuild/bazel/issues/17032).
Please check the following CI builds for build and test results:
- [Ubuntu 18.04 OpenJDK 11](https://buildkite.com/bazel/bazelisk-plus-incompatible-flags/builds/1365#0185154a-d799-41be-bf9a-17ec6b1de4da)
- [Ubuntu 18.04 OpenJDK 11](https://buildkite.com/bazel/bazelisk-plus-incompatible-flags/builds/1365#0185154a-d796-46f1-81ee-b81f44e7dd1c)
- [Ubuntu 18.04 OpenJDK 11](https://buildkite.com/bazel/bazelisk-plus-incompatible-flags/builds/1365#0185154a-d7a1-465a-86c9-f7239ae834c7)
- [Ubuntu 18.04 OpenJDK 11](https://buildkite.com/bazel/bazelisk-plus-incompatible-flags/builds/1365#0185154a-d79e-4705-9c09-6ee73ea1da7b)
- [Ubuntu 18.04 OpenJDK 11](https://buildkite.com/bazel/bazelisk-plus-incompatible-flags/builds/1365#0185154a-d7a4-4008-b4e6-f6684a321fb0)
Never heard of incompatible flags before? We have [documentation](https://docs.bazel.build/versions/master/backward-compatibility.html) that explains everything.
If you have any questions, please file an issue in https://github.com/bazelbuild/continuous-integration. | main | flag incompatible disable starlark host transitions will break clion plugin google in bazel incompatible flag incompatible disable starlark host transitions will be enabled by default in the next major release bazel thus breaking clion plugin google please migrate to fix this and unblock the flip of this flag the flag is documented here please check the following ci builds for build and test results never heard of incompatible flags before we have that explains everything if you have any questions please file an issue in | 1 |
107,901 | 4,321,663,185 | IssuesEvent | 2016-07-25 11:08:40 | Jumpscale/jumpscale_portal8 | https://api.github.com/repos/Jumpscale/jumpscale_portal8 | closed | Action button in grid macro only work for the 10 first rows. | priority_urgent type_bug | Example code : https://github.com/Jumpscale/jscockpit/blob/master/apps/Cockpit/Instances/Instances.wiki
On all the rows that are not displayed when the page is loading (so from row 11 to the end), the action button doesn't trigger anything. The popup form is not showing up. | 1.0 | Action button in grid macro only work for the 10 first rows. - Example code : https://github.com/Jumpscale/jscockpit/blob/master/apps/Cockpit/Instances/Instances.wiki
On all the rows that are not displayed when the page is loading (so from row 11 to the end), the action button doesn't trigger anything. The popup form is not showing up. | non_main | action button in grid macro only work for the first rows example code on all the rows that are not displayed when page is loading so from row to the end the action button doesn t trigger anything the popup form is not showing up | 0
2,336 | 8,361,769,713 | IssuesEvent | 2018-10-03 15:06:45 | ansible/ansible | https://api.github.com/repos/ansible/ansible | closed | --flush-cache operation fails with redis database | affects_2.4 bug module needs_maintainer support:community | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
- plugin/cache/redis
##### ANSIBLE VERSION
```
ansible 2.4.1.0
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/myuser/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Aug 29 2016, 10:12:21) [GCC 4.8.5 20150623 (Red Hat 4.8.5-4)]
```
##### CONFIGURATION
<!---
ALLOW_WORLD_READABLE_TMPFILES(/etc/ansible/ansible.cfg) = True
ANSIBLE_FORCE_COLOR(/etc/ansible/ansible.cfg) = True
ANSIBLE_NOCOLOR(/etc/ansible/ansible.cfg) = False
ANSIBLE_NOCOWS(/etc/ansible/ansible.cfg) = True
ANSIBLE_PIPELINING(/etc/ansible/ansible.cfg) = True
ANSIBLE_SSH_ARGS(/etc/ansible/ansible.cfg) = -C -o ControlMaster=auto -o ControlPersist=240s
ANSIBLE_SSH_CONTROL_PATH_DIR(/etc/ansible/ansible.cfg) = ~/.ansible/cp
CACHE_PLUGIN(/etc/ansible/ansible.cfg) = redis
CACHE_PLUGIN_CONNECTION(/etc/ansible/ansible.cfg) = localhost:6379:0
CACHE_PLUGIN_TIMEOUT(/etc/ansible/ansible.cfg) = 10000
COMMAND_WARNINGS(/etc/ansible/ansible.cfg) = False
DEFAULT_ALLOW_UNSAFE_LOOKUPS(/etc/ansible/ansible.cfg) = True
DEFAULT_BECOME(/etc/ansible/ansible.cfg) = True
DEFAULT_BECOME_METHOD(/etc/ansible/ansible.cfg) = sudo
DEFAULT_BECOME_USER(/etc/ansible/ansible.cfg) = root
DEFAULT_CALLBACK_WHITELIST(/etc/ansible/ansible.cfg) = ['timer', 'skippy', 'profile_roles', 'junit']
DEFAULT_EXECUTABLE(/etc/ansible/ansible.cfg) = /bin/sh
DEFAULT_FORCE_HANDLERS(/etc/ansible/ansible.cfg) = True
DEFAULT_FORKS(/etc/ansible/ansible.cfg) = 15
DEFAULT_GATHERING(/etc/ansible/ansible.cfg) = smart
DEFAULT_GATHER_SUBSET(/etc/ansible/ansible.cfg) = all
DEFAULT_GATHER_TIMEOUT(/etc/ansible/ansible.cfg) = 60
DEFAULT_HASH_BEHAVIOUR(/etc/ansible/ansible.cfg) = replace
DEFAULT_LOAD_CALLBACK_PLUGINS(/etc/ansible/ansible.cfg) = True
DEFAULT_LOG_PATH(/etc/ansible/ansible.cfg) = /var/log/ansible/ansible.log
DEFAULT_MANAGED_STR(/etc/ansible/ansible.cfg) = Ansible managed: {file} %Y-%m-%d %H:%M:%S
DEFAULT_MODULE_NAME(/etc/ansible/ansible.cfg) = command
DEFAULT_NO_TARGET_SYSLOG(/etc/ansible/ansible.cfg) = True
DEFAULT_PRIVATE_ROLE_VARS(/etc/ansible/ansible.cfg) = True
DEFAULT_SCP_IF_SSH(/etc/ansible/ansible.cfg) = smart
DEFAULT_SFTP_BATCH_MODE(/etc/ansible/ansible.cfg) = False
DEFAULT_SSH_TRANSFER_METHOD(/etc/ansible/ansible.cfg) = smart
DEFAULT_STRATEGY(/etc/ansible/ansible.cfg) = linear
DEFAULT_TIMEOUT(/etc/ansible/ansible.cfg) = 300
DEFAULT_TRANSPORT(/etc/ansible/ansible.cfg) = smart
DEFAULT_UNDEFINED_VAR_BEHAVIOR(/etc/ansible/ansible.cfg) = True
DEPRECATION_WARNINGS(/etc/ansible/ansible.cfg) = False
DISPLAY_ARGS_TO_STDOUT(/etc/ansible/ansible.cfg) = True
DISPLAY_SKIPPED_HOSTS(/etc/ansible/ansible.cfg) = True
ERROR_ON_MISSING_HANDLER(/etc/ansible/ansible.cfg) = True
HOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = False
PARAMIKO_RECORD_HOST_KEYS(/etc/ansible/ansible.cfg) = True
RETRY_FILES_ENABLED(/etc/ansible/ansible.cfg) = False
RETRY_FILES_SAVE_PATH(/etc/ansible/ansible.cfg) = /home/myuser/.ansible-retry
SHOW_CUSTOM_STATS(/etc/ansible/ansible.cfg) = True
SYSTEM_WARNINGS(/etc/ansible/ansible.cfg) = True
-->
##### OS / ENVIRONMENT
Redhat 7.2
##### SUMMARY
When trying to flush-cache with the redis database cache plugin, the command fails with the following error:
```
ERROR! Unexpected Exception, this is probably a bug: u'NAME_OF_ONE_HOST'
```
##### STEPS TO REPRODUCE
With the following inventory and playbook, run the following commands:
- Flush the existing redis cache if needed:
```
redis-cli flushdb
```
- Run the playbook first to gather facts:
```
ansible-playbook -i hosts test-playbook.yml
```
- Try to flush the cache
```
ansible-playbook -i hosts test-playbook.yml --flush-cache
```
The playbook used is:
```yaml
- hosts: all
tasks:
- setup:
```
The inventory file used is:
```yaml
[test]
localhost ansible_connection=local
```
##### EXPECTED RESULTS
The cache is flushed and the playbook runs fine.
##### ACTUAL RESULTS
The playbook then fails with the following error:
```
ERROR! Unexpected Exception, this is probably a bug: u'localhost'
```
| True | --flush-cache operation fails with redis database - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
- plugin/cache/redis
##### ANSIBLE VERSION
```
ansible 2.4.1.0
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/myuser/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Aug 29 2016, 10:12:21) [GCC 4.8.5 20150623 (Red Hat 4.8.5-4)]
```
##### CONFIGURATION
<!---
ALLOW_WORLD_READABLE_TMPFILES(/etc/ansible/ansible.cfg) = True
ANSIBLE_FORCE_COLOR(/etc/ansible/ansible.cfg) = True
ANSIBLE_NOCOLOR(/etc/ansible/ansible.cfg) = False
ANSIBLE_NOCOWS(/etc/ansible/ansible.cfg) = True
ANSIBLE_PIPELINING(/etc/ansible/ansible.cfg) = True
ANSIBLE_SSH_ARGS(/etc/ansible/ansible.cfg) = -C -o ControlMaster=auto -o ControlPersist=240s
ANSIBLE_SSH_CONTROL_PATH_DIR(/etc/ansible/ansible.cfg) = ~/.ansible/cp
CACHE_PLUGIN(/etc/ansible/ansible.cfg) = redis
CACHE_PLUGIN_CONNECTION(/etc/ansible/ansible.cfg) = localhost:6379:0
CACHE_PLUGIN_TIMEOUT(/etc/ansible/ansible.cfg) = 10000
COMMAND_WARNINGS(/etc/ansible/ansible.cfg) = False
DEFAULT_ALLOW_UNSAFE_LOOKUPS(/etc/ansible/ansible.cfg) = True
DEFAULT_BECOME(/etc/ansible/ansible.cfg) = True
DEFAULT_BECOME_METHOD(/etc/ansible/ansible.cfg) = sudo
DEFAULT_BECOME_USER(/etc/ansible/ansible.cfg) = root
DEFAULT_CALLBACK_WHITELIST(/etc/ansible/ansible.cfg) = ['timer', 'skippy', 'profile_roles', 'junit']
DEFAULT_EXECUTABLE(/etc/ansible/ansible.cfg) = /bin/sh
DEFAULT_FORCE_HANDLERS(/etc/ansible/ansible.cfg) = True
DEFAULT_FORKS(/etc/ansible/ansible.cfg) = 15
DEFAULT_GATHERING(/etc/ansible/ansible.cfg) = smart
DEFAULT_GATHER_SUBSET(/etc/ansible/ansible.cfg) = all
DEFAULT_GATHER_TIMEOUT(/etc/ansible/ansible.cfg) = 60
DEFAULT_HASH_BEHAVIOUR(/etc/ansible/ansible.cfg) = replace
DEFAULT_LOAD_CALLBACK_PLUGINS(/etc/ansible/ansible.cfg) = True
DEFAULT_LOG_PATH(/etc/ansible/ansible.cfg) = /var/log/ansible/ansible.log
DEFAULT_MANAGED_STR(/etc/ansible/ansible.cfg) = Ansible managed: {file} %Y-%m-%d %H:%M:%S
DEFAULT_MODULE_NAME(/etc/ansible/ansible.cfg) = command
DEFAULT_NO_TARGET_SYSLOG(/etc/ansible/ansible.cfg) = True
DEFAULT_PRIVATE_ROLE_VARS(/etc/ansible/ansible.cfg) = True
DEFAULT_SCP_IF_SSH(/etc/ansible/ansible.cfg) = smart
DEFAULT_SFTP_BATCH_MODE(/etc/ansible/ansible.cfg) = False
DEFAULT_SSH_TRANSFER_METHOD(/etc/ansible/ansible.cfg) = smart
DEFAULT_STRATEGY(/etc/ansible/ansible.cfg) = linear
DEFAULT_TIMEOUT(/etc/ansible/ansible.cfg) = 300
DEFAULT_TRANSPORT(/etc/ansible/ansible.cfg) = smart
DEFAULT_UNDEFINED_VAR_BEHAVIOR(/etc/ansible/ansible.cfg) = True
DEPRECATION_WARNINGS(/etc/ansible/ansible.cfg) = False
DISPLAY_ARGS_TO_STDOUT(/etc/ansible/ansible.cfg) = True
DISPLAY_SKIPPED_HOSTS(/etc/ansible/ansible.cfg) = True
ERROR_ON_MISSING_HANDLER(/etc/ansible/ansible.cfg) = True
HOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = False
PARAMIKO_RECORD_HOST_KEYS(/etc/ansible/ansible.cfg) = True
RETRY_FILES_ENABLED(/etc/ansible/ansible.cfg) = False
RETRY_FILES_SAVE_PATH(/etc/ansible/ansible.cfg) = /home/myuser/.ansible-retry
SHOW_CUSTOM_STATS(/etc/ansible/ansible.cfg) = True
SYSTEM_WARNINGS(/etc/ansible/ansible.cfg) = True
-->
##### OS / ENVIRONMENT
Redhat 7.2
##### SUMMARY
When trying to flush-cache with the redis database cache plugin, the command fails with the following error:
```
ERROR! Unexpected Exception, this is probably a bug: u'NAME_OF_ONE_HOST'
```
##### STEPS TO REPRODUCE
With the following inventory and playbook, run the following commands:
- Flush the existing redis cache if needed:
```
redis-cli flushdb
```
- Run the playbook first to gather facts:
```
ansible-playbook -i hosts test-playbook.yml
```
- Try to flush the cache
```
ansible-playbook -i hosts test-playbook.yml --flush-cache
```
The playbook used is:
```yaml
- hosts: all
tasks:
- setup:
```
The inventory file used is:
```yaml
[test]
localhost ansible_connection=local
```
##### EXPECTED RESULTS
The cache is flushed and the playbook runs fine.
##### ACTUAL RESULTS
The playbook then fails with the following error:
```
ERROR! Unexpected Exception, this is probably a bug: u'localhost'
```
| main | flush cache operation fails with redis database issue type bug report component name plugin cache redis ansible version ansible config file etc ansible ansible cfg configured module search path ansible python module location usr lib site packages ansible executable location usr bin ansible python version default aug configuration allow world readable tmpfiles etc ansible ansible cfg true ansible force color etc ansible ansible cfg true ansible nocolor etc ansible ansible cfg false ansible nocows etc ansible ansible cfg true ansible pipelining etc ansible ansible cfg true ansible ssh args etc ansible ansible cfg c o controlmaster auto o controlpersist ansible ssh control path dir etc ansible ansible cfg ansible cp cache plugin etc ansible ansible cfg redis cache plugin connection etc ansible ansible cfg localhost cache plugin timeout etc ansible ansible cfg command warnings etc ansible ansible cfg false default allow unsafe lookups etc ansible ansible cfg true default become etc ansible ansible cfg true default become method etc ansible ansible cfg sudo default become user etc ansible ansible cfg root default callback whitelist etc ansible ansible cfg default executable etc ansible ansible cfg bin sh default force handlers etc ansible ansible cfg true default forks etc ansible ansible cfg default gathering etc ansible ansible cfg smart default gather subset etc ansible ansible cfg all default gather timeout etc ansible ansible cfg default hash behaviour etc ansible ansible cfg replace default load callback plugins etc ansible ansible cfg true default log path etc ansible ansible cfg var log ansible ansible log default managed str etc ansible ansible cfg ansible managed file y m d h m s default module name etc ansible ansible cfg command default no target syslog etc ansible ansible cfg true default private role vars etc ansible ansible cfg true default scp if ssh etc ansible ansible cfg smart default sftp batch mode etc ansible ansible cfg false default ssh 
transfer method etc ansible ansible cfg smart default strategy etc ansible ansible cfg linear default timeout etc ansible ansible cfg default transport etc ansible ansible cfg smart default undefined var behavior etc ansible ansible cfg true deprecation warnings etc ansible ansible cfg false display args to stdout etc ansible ansible cfg true display skipped hosts etc ansible ansible cfg true error on missing handler etc ansible ansible cfg true host key checking etc ansible ansible cfg false paramiko record host keys etc ansible ansible cfg true retry files enabled etc ansible ansible cfg false retry files save path etc ansible ansible cfg home myuser ansible retry show custom stats etc ansible ansible cfg true system warnings etc ansible ansible cfg true os environment redhat summary when trying to flush cache with the redis database cache plugin the command fails with the following error error unexpected exception this is probably a bug u name of one host steps to reproduce with the following inventory and playbook run the following commands flush the existing redis cache if needed redis cli flushdb run the playbook first to gather facts ansible playbook i hosts test playbook yml try to flush the cache ansible playbook i hosts test playbook yml flush cache the playbook used is yaml hosts all tasks setup the inventory file used is yaml localhost ansible connection local expected results the cache is flushed and the playbook runs fine actual results the playbook then fails with the following error error unexpected exception this is probably a bug u localhost | 1 |
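The final error in the record above, `Unexpected Exception, this is probably a bug: u'localhost'`, is consistent with an unguarded `KeyError`: a bare `KeyError('localhost')` renders exactly as `u'localhost'` under Python 2, suggesting the flush path deletes a host's cached facts without checking that an entry exists. The sketch below is illustrative only (a toy cache, not Ansible's actual redis cache plugin code) and contrasts a fragile flush with a tolerant one:

```python
class FactCache:
    """Toy fact cache; not Ansible's redis cache plugin."""

    def __init__(self):
        self._store = {}

    def set(self, host, facts):
        self._store[host] = facts

    def flush_fragile(self, hosts):
        # An unguarded delete raises KeyError for a host that was never
        # cached; under Python 2 a bare KeyError('localhost') prints as
        # u'localhost', consistent with the traceback in the report above.
        for host in hosts:
            del self._store[host]

    def flush_safe(self, hosts):
        # Tolerates hosts with no cached entry: flushing them is a no-op.
        for host in hosts:
            self._store.pop(host, None)
```

With `flush_safe`, flushing a host that was never cached is a no-op instead of a crash.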
339,171 | 30,349,409,664 | IssuesEvent | 2023-07-11 17:44:41 | antrea-io/antrea | https://api.github.com/repos/antrea-io/antrea | opened | [Flaky Test] TestController_RotateCertificates is failing consistently | kind/bug kind/failing-test | **Describe the bug**
Even after merging #5187, unit test `TestController_RotateCertificates` is failing consistently. It can be reproduced locally.
```
=== RUN TestController_RotateCertificates
I0711 10:31:26.786847 71689 ipsec_certificate_controller.go:447] "Shutting down AntreaAgentIPsecCertificateController"
I0711 10:31:26.787848 71689 ipsec_certificate_controller.go:483] "Created new certificate and key for IPSec" cert="/var/folders/9q/0_7t6cs557d_v8mfkhcw5r780000gp/T/antrea-ipsec-test2255709075/fake-node-1-9fba09efc2.crt" key="/var/folders/9q/0_7t6cs557d_v8mfkhcw5r780000gp/T/antrea-ipsec-test2255709075/fake-node-1-9fba09efc2.key"
E0711 10:31:26.789445 71689 ipsec_certificate_controller.go:143] "Error syncing IPSec certificates, requeuing" err="failed to validate new certificate: x509: certificate has expired or is not yet valid: current time 2023-07-11T10:31:26-07:00 is before 2023-07-11T17:31:31Z"
I0711 10:31:27.080704 71689 ipsec_certificate_controller.go:423] "Starting AntreaAgentIPsecCertificateController"
E0711 10:31:27.080748 71689 ipsec_certificate_controller.go:272] "Verifying current certificate configurations failed" err="certificate and key pair is nil"
I0711 10:31:27.080753 71689 ipsec_certificate_controller.go:279] "Start rotating IPsec certificate"
ipsec_certificate_controller_test.go:388: Sign CSR "fake-node-1-rbv9gsjf" successfully
I0711 10:31:28.280508 71689 ipsec_certificate_controller.go:483] "Created new certificate and key for IPSec" cert="/var/folders/9q/0_7t6cs557d_v8mfkhcw5r780000gp/T/antrea-ipsec-test1123940196/fake-node-1-6f5f574069.crt" key="/var/folders/9q/0_7t6cs557d_v8mfkhcw5r780000gp/T/antrea-ipsec-test1123940196/fake-node-1-6f5f574069.key"
I0711 10:31:28.281264 71689 ipsec_certificate_controller.go:234] "Calculated certificate rotation deadline" expiration="2023-07-11 17:31:36 +0000 UTC" deadline="2023-07-11 17:31:33.118241302 +0000 UTC"
I0711 10:31:28.281283 71689 ipsec_certificate_controller.go:395] "Updating OVS configurations for IPsec certificates"
I0711 10:31:28.281370 71689 ipsec_certificate_controller.go:279] "Start rotating IPsec certificate"
ipsec_certificate_controller_test.go:388: Sign CSR "fake-node-1-hvlh6q88" successfully
ipsec_certificate_controller_test.go:315: CSR should not be signed before the rotation deadline
--- FAIL: TestController_RotateCertificates (2.83s)
```
My guess is that the root cause is this error:
```
E0711 10:31:27.080748 71689 ipsec_certificate_controller.go:272] "Verifying current certificate configurations failed" err="certificate and key pair is nil"
```
The error is causing the certificate to be rotated immediately, rather than waiting for the next rotation deadline, hence causing the test to fail because the certificate is rotated too quickly:
https://github.com/antrea-io/antrea/blob/b3aa3a007fd806646293fcf79db457703e9a2cb6/pkg/agent/controller/ipseccertificate/ipsec_certificate_controller.go#L270-L276 | 1.0 | [Flaky Test] TestController_RotateCertificates is failing consistently - **Describe the bug**
Even after merging #5187, unit test `TestController_RotateCertificates` is failing consistently. It can be reproduced locally.
```
=== RUN TestController_RotateCertificates
I0711 10:31:26.786847 71689 ipsec_certificate_controller.go:447] "Shutting down AntreaAgentIPsecCertificateController"
I0711 10:31:26.787848 71689 ipsec_certificate_controller.go:483] "Created new certificate and key for IPSec" cert="/var/folders/9q/0_7t6cs557d_v8mfkhcw5r780000gp/T/antrea-ipsec-test2255709075/fake-node-1-9fba09efc2.crt" key="/var/folders/9q/0_7t6cs557d_v8mfkhcw5r780000gp/T/antrea-ipsec-test2255709075/fake-node-1-9fba09efc2.key"
E0711 10:31:26.789445 71689 ipsec_certificate_controller.go:143] "Error syncing IPSec certificates, requeuing" err="failed to validate new certificate: x509: certificate has expired or is not yet valid: current time 2023-07-11T10:31:26-07:00 is before 2023-07-11T17:31:31Z"
I0711 10:31:27.080704 71689 ipsec_certificate_controller.go:423] "Starting AntreaAgentIPsecCertificateController"
E0711 10:31:27.080748 71689 ipsec_certificate_controller.go:272] "Verifying current certificate configurations failed" err="certificate and key pair is nil"
I0711 10:31:27.080753 71689 ipsec_certificate_controller.go:279] "Start rotating IPsec certificate"
ipsec_certificate_controller_test.go:388: Sign CSR "fake-node-1-rbv9gsjf" successfully
I0711 10:31:28.280508 71689 ipsec_certificate_controller.go:483] "Created new certificate and key for IPSec" cert="/var/folders/9q/0_7t6cs557d_v8mfkhcw5r780000gp/T/antrea-ipsec-test1123940196/fake-node-1-6f5f574069.crt" key="/var/folders/9q/0_7t6cs557d_v8mfkhcw5r780000gp/T/antrea-ipsec-test1123940196/fake-node-1-6f5f574069.key"
I0711 10:31:28.281264 71689 ipsec_certificate_controller.go:234] "Calculated certificate rotation deadline" expiration="2023-07-11 17:31:36 +0000 UTC" deadline="2023-07-11 17:31:33.118241302 +0000 UTC"
I0711 10:31:28.281283 71689 ipsec_certificate_controller.go:395] "Updating OVS configurations for IPsec certificates"
I0711 10:31:28.281370 71689 ipsec_certificate_controller.go:279] "Start rotating IPsec certificate"
ipsec_certificate_controller_test.go:388: Sign CSR "fake-node-1-hvlh6q88" successfully
ipsec_certificate_controller_test.go:315: CSR should not be signed before the rotation deadline
--- FAIL: TestController_RotateCertificates (2.83s)
```
My guess is that the root cause is this error:
```
E0711 10:31:27.080748 71689 ipsec_certificate_controller.go:272] "Verifying current certificate configurations failed" err="certificate and key pair is nil"
```
The error is causing the certificate to be rotated immediately, rather than waiting for the next rotation deadline, hence causing the test to fail because the certificate is rotated too quickly:
https://github.com/antrea-io/antrea/blob/b3aa3a007fd806646293fcf79db457703e9a2cb6/pkg/agent/controller/ipseccertificate/ipsec_certificate_controller.go#L270-L276 | non_main | testcontroller rotatecertificates is failing consistently describe the bug even after merging unit test testcontroller rotatecertificates is failing consistently it can be reproduced locally run testcontroller rotatecertificates ipsec certificate controller go shutting down antreaagentipseccertificatecontroller ipsec certificate controller go created new certificate and key for ipsec cert var folders t antrea ipsec fake node crt key var folders t antrea ipsec fake node key ipsec certificate controller go error syncing ipsec certificates requeuing err failed to validate new certificate certificate has expired or is not yet valid current time is before ipsec certificate controller go starting antreaagentipseccertificatecontroller ipsec certificate controller go verifying current certificate configurations failed err certificate and key pair is nil ipsec certificate controller go start rotating ipsec certificate ipsec certificate controller test go sign csr fake node successfully ipsec certificate controller go created new certificate and key for ipsec cert var folders t antrea ipsec fake node crt key var folders t antrea ipsec fake node key ipsec certificate controller go calculated certificate rotation deadline expiration utc deadline utc ipsec certificate controller go updating ovs configurations for ipsec certificates ipsec certificate controller go start rotating ipsec certificate ipsec certificate controller test go sign csr fake node successfully ipsec certificate controller test go csr should not be signed before the rotation deadline fail testcontroller rotatecertificates my guess is that the root cause is this error ipsec certificate controller go verifying current certificate configurations failed err certificate and key pair is nil the error is causing the certificate to be rotated 
immediately rather than waiting for the next rotation deadline hence causing the test to fail because the certificate is rotated too quickly | 0 |
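The `Calculated certificate rotation deadline` log line in the record above reflects a common pattern in certificate controllers: schedule renewal at a jittered fraction of the certificate's validity period so a fleet of agents does not renew all at once (Kubernetes' client-go, for example, uses a jittered 70-90% of the lifetime). A Python sketch of just that computation (Antrea's actual controller is Go; this only illustrates the idea):

```python
import random
from datetime import timedelta

def rotation_deadline(not_before, not_after, jitter=None):
    """Schedule renewal at a jittered 70-90% of the validity period so
    that many agents do not all try to renew at the same moment."""
    if jitter is None:
        jitter = random.random()  # uniform in [0.0, 1.0)
    lifetime_s = (not_after - not_before).total_seconds()
    fraction = 0.7 + 0.2 * jitter
    return not_before + timedelta(seconds=lifetime_s * fraction)
```

For the short-lived certificate in the log above (issued around 17:31:28, expiring 17:31:36, deadline 17:31:33), the observed fraction is roughly 0.64, so Antrea's actual jitter window is presumably somewhat wider than the 70-90% used here.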
3,969 | 18,168,370,588 | IssuesEvent | 2021-09-27 16:57:28 | coq/platform | https://api.github.com/repos/coq/platform | closed | Add deriving to the Coq Platform package | package inclusion has maintainer agreement | [deriving](https://github.com/arthuraa/deriving) is very useful and is a part of the Mathcomp universe. | True | Add deriving to the Coq Platform package - [deriving](https://github.com/arthuraa/deriving) is very useful and is a part of the Mathcomp universe. | main | add deriving to the coq platform package is very useful and is a part of the mathcomp universe | 1 |
92,816 | 8,378,857,921 | IssuesEvent | 2018-10-06 18:30:45 | fluidization/fluidization | https://api.github.com/repos/fluidization/fluidization | opened | Updates for transport_disengagement module | docs test | Update docstrings for functions in `transport_disengagement` module.
Write tests for functions in `transport_disengagement` module. Use pytest framework.
Write Sphinx documentation for functions in `transport_disengagement` module. | 1.0 | Updates for transport_disengagement module - Update docstrings for functions in `transport_disengagement` module.
Write tests for functions in `transport_disengagement` module. Use pytest framework.
Write Sphinx documentation for functions in `transport_disengagement` module. | non_main | updates for transport disengagement module update docstrings for functions in transport disengagement module write tests for functions in transport disengagement module use pytest framework write sphinx documentation for functions in transport disengagement module | 0 |
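The pytest task in the record above can be sketched concretely. Everything below is hypothetical: `terminal_velocity` is a stand-in, not an actual function of the `transport_disengagement` module, but it shows the shape such tests usually take (plain `assert`s inside `test_*` functions, which pytest collects without extra imports):

```python
import math

def terminal_velocity(d_p, rho_p, rho_g, mu, g=9.81):
    """Stokes-law terminal velocity; a hypothetical stand-in for a
    transport_disengagement function, not the module's real API."""
    if d_p <= 0:
        raise ValueError("particle diameter must be positive")
    return g * d_p ** 2 * (rho_p - rho_g) / (18.0 * mu)

# pytest collects any function named test_*; plain asserts suffice.
def test_terminal_velocity_stokes():
    # 50 micron sand-like particle settling in air
    v = terminal_velocity(d_p=50e-6, rho_p=2650.0, rho_g=1.2, mu=1.8e-5)
    assert math.isclose(v, 0.2005, rel_tol=1e-2)

def test_terminal_velocity_rejects_bad_diameter():
    try:
        terminal_velocity(d_p=-1.0, rho_p=2650.0, rho_g=1.2, mu=1.8e-5)
    except ValueError:
        return
    raise AssertionError("expected ValueError for a negative diameter")
```

Running `pytest` in the module's directory discovers and executes both tests; the same functions double as examples for the Sphinx docstring task.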
412 | 3,479,489,051 | IssuesEvent | 2015-12-28 20:41:35 | caskroom/homebrew-cask | https://api.github.com/repos/caskroom/homebrew-cask | closed | Can't install visual-studio-code, different archive by region | awaiting maintainer feedback bug upstream | I was unable to install this cask
Output of brew cask install visual-studio-code --verbose
```
$ brew cask install visual-studio-code --verbose
==> Downloading https://az764295.vo.msecnd.net/public/0.10.6/VSCode-darwin.zip
/usr/bin/curl -fLA Homebrew-cask v0.51+ (Ruby 2.0.0-645) https://az764295.vo.msecnd.net/public/0.10.6/VSCode-darwin.zip -C 0 -o /Library/Caches/Homebrew/visual-studio-code-0.10.6.zip.incomplete
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 66.8M 100 66.8M 0 0 3660k 0 0:00:18 0:00:18 --:--:-- 3896k
cask 'visual-studio-code' do
==> Verifying checksum for Cask visual-studio-code
==> Note: running "brew update" may fix sha256 checksum errors
Error: sha256 mismatch
Expected: 78d333e1e7ae9bffc84fa1c6a0dbc3e8530d85b62c3b318e6687eefec2b3ddf9
Actual: 5b1bbf1964cc3163f38d88406e3eec9b631692a9df40fbba4f829042f3a359a7
File: /Library/Caches/Homebrew/visual-studio-code-0.10.6.zip
To retry an incomplete download, remove the file above.
Error: Kernel.exit
```
I tried to download from Nederland and Russia and acquired different archives, one that matched hash and one that not
`curl -fLA "Homebrew-cask v0.51+ (Ruby 2.0.0-645)" https://az764295.vo.msecnd.net/public/0.10.6/VSCode-darwin.zip -C 0 -o visual-studio-code-0.10.6.zip`
```
bash-3.2$ zipinfo -2 ~/Downloads/VSCode-darwin.zip | grep -E '^[^/]*(/[^/]*){1,1}$'
Visual Studio Code.app/
bash-3.2$ zipinfo -2 ~/Downloads/visual-studio-code-0.10.6.zip | grep -E '^[^/]*(/[^/]*){1,1}$'
Visual Studio Code.app/
__MACOSX/
__MACOSX/._Visual Studio Code.app
```
| True | Can't install visual-studio-code, different archive by region - I was unable to install this cask
Output of brew cask install visual-studio-code --verbose
```
$ brew cask install visual-studio-code --verbose
==> Downloading https://az764295.vo.msecnd.net/public/0.10.6/VSCode-darwin.zip
/usr/bin/curl -fLA Homebrew-cask v0.51+ (Ruby 2.0.0-645) https://az764295.vo.msecnd.net/public/0.10.6/VSCode-darwin.zip -C 0 -o /Library/Caches/Homebrew/visual-studio-code-0.10.6.zip.incomplete
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 66.8M 100 66.8M 0 0 3660k 0 0:00:18 0:00:18 --:--:-- 3896k
cask 'visual-studio-code' do
==> Verifying checksum for Cask visual-studio-code
==> Note: running "brew update" may fix sha256 checksum errors
Error: sha256 mismatch
Expected: 78d333e1e7ae9bffc84fa1c6a0dbc3e8530d85b62c3b318e6687eefec2b3ddf9
Actual: 5b1bbf1964cc3163f38d88406e3eec9b631692a9df40fbba4f829042f3a359a7
File: /Library/Caches/Homebrew/visual-studio-code-0.10.6.zip
To retry an incomplete download, remove the file above.
Error: Kernel.exit
```
I tried to download from Nederland and Russia and acquired different archives, one that matched hash and one that not
`curl -fLA "Homebrew-cask v0.51+ (Ruby 2.0.0-645)" https://az764295.vo.msecnd.net/public/0.10.6/VSCode-darwin.zip -C 0 -o visual-studio-code-0.10.6.zip`
```
bash-3.2$ zipinfo -2 ~/Downloads/VSCode-darwin.zip | grep -E '^[^/]*(/[^/]*){1,1}$'
Visual Studio Code.app/
bash-3.2$ zipinfo -2 ~/Downloads/visual-studio-code-0.10.6.zip | grep -E '^[^/]*(/[^/]*){1,1}$'
Visual Studio Code.app/
__MACOSX/
__MACOSX/._Visual Studio Code.app
```
| main | can t install visual studio code different archive by region i was unable to install this cask output of brew cask install visual studio code verbose brew cask install visual studio code verbose downloading usr bin curl fla homebrew cask ruby c o library caches homebrew visual studio code zip incomplete total received xferd average speed time time time current dload upload total spent left speed cask visual studio code do verifying checksum for cask visual studio code note running brew update may fix checksum errors error mismatch expected actual file library caches homebrew visual studio code zip to retry an incomplete download remove the file above error kernel exit i tried to download from nederland and russia and acquired different archives one that matched hash and one that not curl fla homebrew cask ruby c o visual studio code zip bash zipinfo downloads vscode darwin zip grep e visual studio code app bash zipinfo downloads visual studio code zip grep e visual studio code app macosx macosx visual studio code app | 1 |
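The failure in the record above is a plain checksum mismatch, and the check Homebrew-Cask performs can be reproduced locally in a few lines (a sketch; the file path is whatever was downloaded, not part of the cask definition):

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file and return its hex SHA-256, as `shasum -a 256` would."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

expected = "78d333e1e7ae9bffc84fa1c6a0dbc3e8530d85b62c3b318e6687eefec2b3ddf9"
# sha256_of("visual-studio-code-0.10.6.zip") == expected would mean the
# download matches the cask; the report above got a different digest.
```

Comparing `sha256_of(...)` against the cask's `Expected:` value reproduces the `sha256 mismatch` condition by hand, which is how the reporter confirmed that the two regions served different archives.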
5,242 | 26,563,493,954 | IssuesEvent | 2023-01-20 17:51:46 | centerofci/mathesar | https://api.github.com/repos/centerofci/mathesar | closed | InvalidDatetimeFormat error when attempting to filter on a partially-entered date | type: bug work: frontend status: draft restricted: maintainers | ## Steps to reproduce
1. Open the table page for table with a Date column, e.g. the ["all_data_types" data set](https://github.com/centerofci/mathesar-data-playground/blob/master/all_data_types/all_data_types.sql).
1. Add a filter condition specifying the date column to be equal to a date. Then _begin entering a date_ starting with one number.

1. Expect either to see no results or to see the same results as without the filter condition.
1. Instead, observe that the `GET` request to the records endpoint responds with an error 500 and the following traceback
<details>
<summary>Traceback</summary>
```
Environment:
Request Method: GET
Request URL: http://localhost:8000/api/db/v0/tables/709/records/?limit=500&offset=0&filter=%7B%22equal%22%3A%5B%7B%22column_id%22%3A%5B2917%5D%7D%2C%7B%22literal%22%3A%5B%227%22%5D%7D%5D%7D
Django Version: 3.1.14
Python Version: 3.9.9
Installed Applications:
['django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'rest_framework',
'django_filters',
'django_property_filter',
'mathesar']
Installed Middleware:
['django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'mathesar.middleware.CursorClosedHandlerMiddleware']
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1770, in _execute_context
self.dialect.do_execute(
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/default.py", line 717, in do_execute
cursor.execute(statement, parameters)
The above exception (invalid input syntax for type date: "7"
LINE 4: WHERE date = '7'),
^
) was the direct cause of the following exception:
File "/usr/local/lib/python3.9/site-packages/django/core/handlers/exception.py", line 47, in inner
response = get_response(request)
File "/usr/local/lib/python3.9/site-packages/django/core/handlers/base.py", line 181, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/usr/local/lib/python3.9/site-packages/django/views/decorators/csrf.py", line 54, in wrapped_view
return view_func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/rest_framework/viewsets.py", line 125, in view
return self.dispatch(request, *args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 509, in dispatch
response = self.handle_exception(exc)
File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 466, in handle_exception
response = exception_handler(exc, context)
File "/code/mathesar/exception_handlers.py", line 55, in mathesar_exception_handler
raise exc
File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 506, in dispatch
response = handler(request, *args, **kwargs)
File "/code/mathesar/api/db/viewsets/records.py", line 68, in list
records = paginator.paginate_queryset(
File "/code/mathesar/api/pagination.py", line 75, in paginate_queryset
self.count = table.sa_num_records(filter=filters, search=search)
File "/code/mathesar/models/base.py", line 454, in sa_num_records
return get_count(
File "/code/db/records/operations/select.py", line 98, in get_count
return execute_pg_query(engine, relation)[0][col_name]
File "/code/db/utils.py", line 32, in execute_pg_query
return execute_statement(engine, executable, connection_to_use=connection_to_use).fetchall()
File "/code/db/utils.py", line 18, in execute_statement
return conn.execute(statement)
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/future/engine.py", line 280, in execute
return self._execute_20(
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1582, in _execute_20
return meth(self, args_10style, kwargs_10style, execution_options)
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/elements.py", line 324, in _execute_on_connection
return connection._execute_clauseelement(
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1451, in _execute_clauseelement
ret = self._execute_context(
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1813, in _execute_context
self._handle_dbapi_exception(
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1994, in _handle_dbapi_exception
util.raise_(
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/util/compat.py", line 207, in raise_
raise exception
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1770, in _execute_context
self.dialect.do_execute(
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/default.py", line 717, in do_execute
cursor.execute(statement, parameters)
Exception Type: DataError at /api/db/v0/tables/709/records/
Exception Value: (psycopg2.errors.InvalidDatetimeFormat) invalid input syntax for type date: "7"
LINE 4: WHERE date = '7'),
^
[SQL: WITH anon_2 AS
(SELECT public.all_types.id AS id, public.all_types.text AS text, public.all_types.number AS number, public.all_types.money AS money, public.all_types.boolean AS boolean, public.all_types.date AS date, public.all_types.date_time AS date_time, public.all_types.time AS time, public.all_types.duration AS duration, public.all_types.email AS email, public.all_types.uri AS uri
FROM public.all_types
WHERE date = %(param_1)s),
anon_1 AS
(SELECT count(%(count_1)s) AS _count
FROM anon_2)
SELECT anon_1._count
FROM anon_1]
[parameters: {'count_1': 1, 'param_1': '7'}]
(Background on this error at: http://sqlalche.me/e/14/9h9h)
```
</details>
## Notes
- This also affects filtering via the `search_fuzzy` parameter used by the Record Selector.
## Implementation
- In theory, this could be fixed on the front end, but I think a back end fix would make more sense.
- As for the expected behavior here, I'm leaning towards _ignoring_ invalid dates as filter conditions.
CC @pavish @dmos62 @mathemancer
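Given the implementation note above — leaning towards ignoring invalid dates as filter conditions — a back end fix could validate the literal before it ever reaches the SQLAlchemy query. A minimal sketch of that idea; the function names and call shape here are hypothetical, not Mathesar's actual API:

```python
from datetime import date

def coerce_date_literal(literal):
    """Return a date if the literal parses as ISO 8601, else None.

    A None result signals the caller to drop (ignore) the filter
    condition instead of passing the raw string to PostgreSQL, which
    raises InvalidDatetimeFormat for partial input like "7".
    """
    try:
        return date.fromisoformat(literal)
    except (TypeError, ValueError):
        return None

def apply_date_filters(conditions):
    """Keep only (column, literal) conditions whose literal is a valid date."""
    kept = []
    for column, literal in conditions:
        parsed = coerce_date_literal(literal)
        if parsed is not None:
            kept.append((column, parsed))
    return kept

# A partially entered date such as "7" is silently dropped:
print(apply_date_filters([("date", "7"), ("date", "2022-05-31")]))
# → [('date', datetime.date(2022, 5, 31))]
```

With this in place the records endpoint would return unfiltered results for a half-typed date instead of a 500, matching the "expect either to see no results or the same results" behavior described in the steps.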
| True | InvalidDatetimeFormat error when attempting to filter on a partially-entered date - ## Steps to reproduce
1. Open the table page for a table with a Date column, e.g. the ["all_data_types" data set](https://github.com/centerofci/mathesar-data-playground/blob/master/all_data_types/all_data_types.sql).
1. Add a filter condition specifying the date column to be equal to a date. Then _begin entering a date_ starting with one number.

1. Expect either to see no results or to see the same results as without the filter condition.
1. Instead, observe that the `GET` request to the records endpoint responds with an error 500 and the following traceback
<details>
<summary>Traceback</summary>
```
Environment:
Request Method: GET
Request URL: http://localhost:8000/api/db/v0/tables/709/records/?limit=500&offset=0&filter=%7B%22equal%22%3A%5B%7B%22column_id%22%3A%5B2917%5D%7D%2C%7B%22literal%22%3A%5B%227%22%5D%7D%5D%7D
Django Version: 3.1.14
Python Version: 3.9.9
Installed Applications:
['django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'rest_framework',
'django_filters',
'django_property_filter',
'mathesar']
Installed Middleware:
['django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'mathesar.middleware.CursorClosedHandlerMiddleware']
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1770, in _execute_context
self.dialect.do_execute(
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/default.py", line 717, in do_execute
cursor.execute(statement, parameters)
The above exception (invalid input syntax for type date: "7"
LINE 4: WHERE date = '7'),
^
) was the direct cause of the following exception:
File "/usr/local/lib/python3.9/site-packages/django/core/handlers/exception.py", line 47, in inner
response = get_response(request)
File "/usr/local/lib/python3.9/site-packages/django/core/handlers/base.py", line 181, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/usr/local/lib/python3.9/site-packages/django/views/decorators/csrf.py", line 54, in wrapped_view
return view_func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/rest_framework/viewsets.py", line 125, in view
return self.dispatch(request, *args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 509, in dispatch
response = self.handle_exception(exc)
File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 466, in handle_exception
response = exception_handler(exc, context)
File "/code/mathesar/exception_handlers.py", line 55, in mathesar_exception_handler
raise exc
File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 506, in dispatch
response = handler(request, *args, **kwargs)
File "/code/mathesar/api/db/viewsets/records.py", line 68, in list
records = paginator.paginate_queryset(
File "/code/mathesar/api/pagination.py", line 75, in paginate_queryset
self.count = table.sa_num_records(filter=filters, search=search)
File "/code/mathesar/models/base.py", line 454, in sa_num_records
return get_count(
File "/code/db/records/operations/select.py", line 98, in get_count
return execute_pg_query(engine, relation)[0][col_name]
File "/code/db/utils.py", line 32, in execute_pg_query
return execute_statement(engine, executable, connection_to_use=connection_to_use).fetchall()
File "/code/db/utils.py", line 18, in execute_statement
return conn.execute(statement)
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/future/engine.py", line 280, in execute
return self._execute_20(
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1582, in _execute_20
return meth(self, args_10style, kwargs_10style, execution_options)
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/elements.py", line 324, in _execute_on_connection
return connection._execute_clauseelement(
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1451, in _execute_clauseelement
ret = self._execute_context(
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1813, in _execute_context
self._handle_dbapi_exception(
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1994, in _handle_dbapi_exception
util.raise_(
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/util/compat.py", line 207, in raise_
raise exception
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1770, in _execute_context
self.dialect.do_execute(
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/default.py", line 717, in do_execute
cursor.execute(statement, parameters)
Exception Type: DataError at /api/db/v0/tables/709/records/
Exception Value: (psycopg2.errors.InvalidDatetimeFormat) invalid input syntax for type date: "7"
LINE 4: WHERE date = '7'),
^
[SQL: WITH anon_2 AS
(SELECT public.all_types.id AS id, public.all_types.text AS text, public.all_types.number AS number, public.all_types.money AS money, public.all_types.boolean AS boolean, public.all_types.date AS date, public.all_types.date_time AS date_time, public.all_types.time AS time, public.all_types.duration AS duration, public.all_types.email AS email, public.all_types.uri AS uri
FROM public.all_types
WHERE date = %(param_1)s),
anon_1 AS
(SELECT count(%(count_1)s) AS _count
FROM anon_2)
SELECT anon_1._count
FROM anon_1]
[parameters: {'count_1': 1, 'param_1': '7'}]
(Background on this error at: http://sqlalche.me/e/14/9h9h)
```
</details>
## Notes
- This also affects filtering via the `search_fuzzy` parameter used by the Record Selector.
## Implementation
- In theory, this could be fixed on the front end, but I think a back end fix would make more sense.
- As for the expected behavior here, I'm leaning towards _ignoring_ invalid dates as filter conditions.
CC @pavish @dmos62 @mathemancer
| main | invaliddatetimeformat error when attempting to filter on a partially entered date steps to reproduce open the table page for table with a date column e g the add a filter condition specifying the date column to be equal to a date then begin entering a date starting with one number expect either to see no results or to see the same results as without the filter condition instead observe that the get request to the records endpoint responds with an error and the following traceback traceback environment request method get request url django version python version installed applications django contrib admin django contrib auth django contrib contenttypes django contrib sessions django contrib messages django contrib staticfiles rest framework django filters django property filter mathesar installed middleware django middleware security securitymiddleware django contrib sessions middleware sessionmiddleware django middleware common commonmiddleware django middleware csrf csrfviewmiddleware django contrib auth middleware authenticationmiddleware django contrib messages middleware messagemiddleware django middleware clickjacking xframeoptionsmiddleware mathesar middleware cursorclosedhandlermiddleware traceback most recent call last file usr local lib site packages sqlalchemy engine base py line in execute context self dialect do execute file usr local lib site packages sqlalchemy engine default py line in do execute cursor execute statement parameters the above exception invalid input syntax for type date line where date was the direct cause of the following exception file usr local lib site packages django core handlers exception py line in inner response get response request file usr local lib site packages django core handlers base py line in get response response wrapped callback request callback args callback kwargs file usr local lib site packages django views decorators csrf py line in wrapped view return view func args kwargs file usr local lib site 
packages rest framework viewsets py line in view return self dispatch request args kwargs file usr local lib site packages rest framework views py line in dispatch response self handle exception exc file usr local lib site packages rest framework views py line in handle exception response exception handler exc context file code mathesar exception handlers py line in mathesar exception handler raise exc file usr local lib site packages rest framework views py line in dispatch response handler request args kwargs file code mathesar api db viewsets records py line in list records paginator paginate queryset file code mathesar api pagination py line in paginate queryset self count table sa num records filter filters search search file code mathesar models base py line in sa num records return get count file code db records operations select py line in get count return execute pg query engine relation file code db utils py line in execute pg query return execute statement engine executable connection to use connection to use fetchall file code db utils py line in execute statement return conn execute statement file usr local lib site packages sqlalchemy future engine py line in execute return self execute file usr local lib site packages sqlalchemy engine base py line in execute return meth self args kwargs execution options file usr local lib site packages sqlalchemy sql elements py line in execute on connection return connection execute clauseelement file usr local lib site packages sqlalchemy engine base py line in execute clauseelement ret self execute context file usr local lib site packages sqlalchemy engine base py line in execute context self handle dbapi exception file usr local lib site packages sqlalchemy engine base py line in handle dbapi exception util raise file usr local lib site packages sqlalchemy util compat py line in raise raise exception file usr local lib site packages sqlalchemy engine base py line in execute context self dialect do execute file 
usr local lib site packages sqlalchemy engine default py line in do execute cursor execute statement parameters exception type dataerror at api db tables records exception value errors invaliddatetimeformat invalid input syntax for type date line where date sql with anon as select public all types id as id public all types text as text public all types number as number public all types money as money public all types boolean as boolean public all types date as date public all types date time as date time public all types time as time public all types duration as duration public all types email as email public all types uri as uri from public all types where date param s anon as select count count s as count from anon select anon count from anon background on this error at notes this also affects filtering via the search fuzzy parameter used by the record selector implementation in theory this could be fixed on the front end by but i think a back end fix would make more sense as for the expected behavior here i m leaning towards ignoring invalid dates as filter conditions cc pavish mathemancer | 1 |
56,518 | 6,521,399,557 | IssuesEvent | 2017-08-28 20:25:29 | OData/odata.net | https://api.github.com/repos/OData/odata.net | closed | Microsoft.Data.ServerUnitTests1.UnitTests has failed test cases. | SkippedTestClean | <!-- markdownlint-disable MD002 MD041 -->
Test project: Microsoft.Data.ServerUnitTests1.UnitTests
### Assemblies affected
test
### Reproduce steps
uncomments the test cases and re-run
### Expected result
all test cases pass
### Actual result
some test cases failed.
| 1.0 | Microsoft.Data.ServerUnitTests1.UnitTests has failed test cases. - <!-- markdownlint-disable MD002 MD041 -->
Test project: Microsoft.Data.ServerUnitTests1.UnitTests
### Assemblies affected
test
### Reproduce steps
Uncomment the test cases and re-run
### Expected result
all test cases pass
### Actual result
some test cases failed.
| non_main | microsoft data unittests has failed test cases test project microsoft data unittests assemblies affected test reproduce steps uncomments the test cases and re run expected result all test cases pass actual result some test cases failed | 0 |
3,456 | 13,222,008,149 | IssuesEvent | 2020-08-17 14:53:16 | ansible/ansible | https://api.github.com/repos/ansible/ansible | closed | logentries module fails to correctly unfollow logs | affects_1.9 bot_closed bug collection collection:community.general module monitoring needs_collection_redirect needs_maintainer needs_triage support:community | _From @jmehnle on October 27, 2016 19:8_
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
`logentries` module
##### ANSIBLE VERSION
```
ansible 1.9.3
configured module search path = None
```
(but broken in Ansible 2 as well)
##### OS / ENVIRONMENT
Managing Ubuntu 14.04 and 16.04.
`logentries` agent 1.4.39 (latest as of 2016-10-27), but tried various versions.
##### SUMMARY
The `logentries` module fails to correctly unfollow logs. The `le` command it issues is `le rm <logfile-path>`, but that only results in an error message:
```
failed: [<hostname>] => (item={'path': '/var/log/diskutil.log', 'state': 'absent', 'name': 'Diskutil'}) => {"failed": true, "item": {"name": "Diskutil", "path": "/var/log/diskutil.log", "state": "absent"}}
msg: failed to remove '/var/log/diskutil.log': Error: Resource var not found.
```
The play in the playbook is:
```
- name: logentries
hosts: all
sudo: yes
roles:
- role: logentries
logentries_logs:
- …
- name: "Diskutil"
path: "/var/log/diskutil.log"
state: absent
```
The relevant task is:
```
- name: Follow logs
logentries: path={{ item.path }} state={{ item.state | default('present') }}
with_items: logentries_logs
```
According to https://docs.logentries.com/docs/linux-agent and https://github.com/logentries/le/issues/66, the correct argument to pass to `le rm` is not the log file's file system path but a virtual path composed of 1. the prefix `hosts/`, 2. the fully qualified name of the host, and 3. the symbolic name of the log file specified during the initial `follow` action (defaulting to the base name of the log file path), e.g., `hosts/<hostname>/diskutil.log`.
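The paragraph above implies the fix inside the module: translate the file system path into the agent's virtual path before invoking `le rm`. A minimal sketch of that translation — the helper name and example hostname are hypothetical, and a real fix would take the follow-time name from the task's parameters:

```python
import os

def le_remove_target(hostname, path, name=None):
    """Build the virtual path `le rm` expects: hosts/<hostname>/<logname>.

    The log name defaults to the base name of the file system path,
    mirroring the agent's default when the log was first followed.
    """
    logname = name if name is not None else os.path.basename(path)
    return "hosts/{0}/{1}".format(hostname, logname)

# The failing example from the report, translated:
print(le_remove_target("myhost.example.com", "/var/log/diskutil.log"))
# → hosts/myhost.example.com/diskutil.log
```

Passing that string instead of `/var/log/diskutil.log` avoids the `Resource var not found` error, since the agent no longer tries to interpret `var` as a host entry.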
_Copied from original issue: ansible/ansible-modules-extras#3307_ | True | logentries module fails to correctly unfollow logs - _From @jmehnle on October 27, 2016 19:8_
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
`logentries` module
##### ANSIBLE VERSION
```
ansible 1.9.3
configured module search path = None
```
(but broken in Ansible 2 as well)
##### OS / ENVIRONMENT
Managing Ubuntu 14.04 and 16.04.
`logentries` agent 1.4.39 (latest as of 2016-10-27), but tried various versions.
##### SUMMARY
The `logentries` module fails to correctly unfollow logs. The `le` command it issues is `le rm <logfile-path>`, but that only results in an error message:
```
failed: [<hostname>] => (item={'path': '/var/log/diskutil.log', 'state': 'absent', 'name': 'Diskutil'}) => {"failed": true, "item": {"name": "Diskutil", "path": "/var/log/diskutil.log", "state": "absent"}}
msg: failed to remove '/var/log/diskutil.log': Error: Resource var not found.
```
The play in the playbook is:
```
- name: logentries
hosts: all
sudo: yes
roles:
- role: logentries
logentries_logs:
- …
- name: "Diskutil"
path: "/var/log/diskutil.log"
state: absent
```
The relevant task is:
```
- name: Follow logs
logentries: path={{ item.path }} state={{ item.state | default('present') }}
with_items: logentries_logs
```
According to https://docs.logentries.com/docs/linux-agent and https://github.com/logentries/le/issues/66, the correct argument to pass to `le rm` is not the log file's file system path but a virtual path composed of 1. the prefix `hosts/`, 2. the fully qualified name of the host, and 3. the symbolic name of the log file specified during the initial `follow` action (defaulting to the base name of the log file path), e.g., `hosts/<hostname>/diskutil.log`.
_Copied from original issue: ansible/ansible-modules-extras#3307_ | main | logentries module fails to correctly unfollow logs from jmehnle on october issue type bug report component name logentries module ansible version ansible configured module search path none but broken in ansible as well os environment managing ubuntu and logentries agent latest as of but tried various versions summary the logentries module fails to correctly unfollow logs the le command it issues is le rm but that only results in an error message failed item path var log diskutil log state absent name diskutil failed true item name diskutil path var log diskutil log state absent msg failed to remove var log diskutil log error resource var not found the play in the playbook is name logentries hosts all sudo yes roles role logentries logentries logs … name diskutil path var log diskutil log state absent the relevant task is name follow logs logentries path item path state item state default present with items logentries logs according to and the correct argument to pass to le rm is not the log file s file system path but a virtual path composed of the prefix hosts the fully qualified name of the host and the symbolic name of the log file specified during the initial follow action defaulting to the base name of the log file path e g hosts diskutil log copied from original issue ansible ansible modules extras | 1 |
3,739 | 15,705,284,717 | IssuesEvent | 2021-03-26 15:58:21 | petl-developers/petl | https://api.github.com/repos/petl-developers/petl | opened | Research benefits of moving release process from travis-ci.org to Github Actions | Help Wanted Maintainability | ## Problem description
Research benefits of moving release process from [travis-ci.org](https://travis-ci.org/github/petl-developers/petl) to [Github Actions](https://github.com/petl-developers/petl/actions).
### Pros
- Already exists a workflow for [testing](https://github.com/petl-developers/petl/actions/workflows/test-changes.yml) changes on `push` / `pull-request` added on PR #543 .
- This newer workflow targets all tests running in [travis-ci.org](https://travis-ci.org) and [appveyor.com](https://ci.appveyor.com/project/petl-developers/petl).
- It also expands the test surface to:
- python: `3.9`
- platforms: `mac-os`
- remote filesystem: `sftp` and `samba` running on docker.
- database: `PostgreSQL` and `MySQL` running on docker.
- increased the test span on python `2.7` and `windows`.
- Sometime in the future we will be forced to deprecate and/or remove support for python 2.7.
- Unify testing and release in just one CI.
### Cons
- Need to develop a release workflow for `petl` in [Github Actions](https://github.com/petl-developers/petl/actions).
- Changing the CI for release will probably involve fixing a lot of settings and permissions between:
- [Github](https://github.com/petl-developers/petl/)
- [readthedocs.io](https://petl.readthedocs.io/en/stable/)
- [pypi](http://pypi.python.org/pypi/petl)
- [conda](https://anaconda.org/conda-forge/petl)
- [coveralls.io](https://coveralls.io/github/petl-developers/petl)
- Using [travis-ci.org](https://travis-ci.org/github/petl-developers/petl) for `linux` builds and [appveyor.com](https://ci.appveyor.com/project/petl-developers/petl) for `windows` builds is **working fine** for now.
Any thoughts or help will be appreciated.
| True | Research benefits of moving release process from travis-ci.org to Github Actions - ## Problem description
Research benefits of moving release process from [travis-ci.org](https://travis-ci.org/github/petl-developers/petl) to [Github Actions](https://github.com/petl-developers/petl/actions).
### Pros
- Already exists a workflow for [testing](https://github.com/petl-developers/petl/actions/workflows/test-changes.yml) changes on `push` / `pull-request` added on PR #543 .
- This newer workflow targets all tests running in [travis-ci.org](https://travis-ci.org) and [appveyor.com](https://ci.appveyor.com/project/petl-developers/petl).
- It also expands the test surface to:
- python: `3.9`
- platforms: `mac-os`
- remote filesystem: `sftp` and `samba` running on docker.
- database: `PostgreSQL` and `MySQL` running on docker.
- increased the test span on python `2.7` and `windows`.
- Sometime in the future we will be forced to deprecate and/or remove support for python 2.7.
- Unify testing and release in just one CI.
### Cons
- Need to develop a release workflow for `petl` in [Github Actions](https://github.com/petl-developers/petl/actions).
- Changing the CI for release will probably involve fixing a lot of settings and permissions between:
- [Github](https://github.com/petl-developers/petl/)
- [readthedocs.io](https://petl.readthedocs.io/en/stable/)
- [pypi](http://pypi.python.org/pypi/petl)
- [conda](https://anaconda.org/conda-forge/petl)
- [coveralls.io](https://coveralls.io/github/petl-developers/petl)
- Using [travis-ci.org](https://travis-ci.org/github/petl-developers/petl) for `linux` builds and [appveyor.com](https://ci.appveyor.com/project/petl-developers/petl) for `windows` builds is **working fine** for now.
Any thoughts or help will be appreciated.
| main | research benefits of moving release process from travis ci org to github actions problem description research benefits of moving release process from to pros already exists a workflow for changes on push pull request added on pr this newer workflow targets all tests running in and it also expands the test surface to python platforms mac os remote filesystem sftp and samba running on docker database postgresql and mysql running on docker increased the test span on python and windows sometime in the future we will be forced to deprecate and or remove the support for python unify testing and release in just one ci cons need to develop a release workflow for petl in probably changing the ci for release will incur in fixing a lot of settings and permission between using for linux builds and for windows builds are working fine for now any thoughts or help will be appreciated | 1 |
176,225 | 21,390,858,860 | IssuesEvent | 2022-04-21 06:56:33 | turkdevops/update-electron-app | https://api.github.com/repos/turkdevops/update-electron-app | opened | CVE-2020-28500 (Medium) detected in lodash-4.17.20.tgz | security vulnerability | ## CVE-2020-28500 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-4.17.20.tgz</b></p></summary>
<p>Lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.20.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.20.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- standard-14.3.4.tgz (Root Library)
- eslint-6.8.0.tgz
- :x: **lodash-4.17.20.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/turkdevops/update-electron-app/commit/f34c4d5aa805ffe788d3c6aa87f10f5e2a320036">f34c4d5aa805ffe788d3c6aa87f10f5e2a320036</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Lodash versions prior to 4.17.21 are vulnerable to Regular Expression Denial of Service (ReDoS) via the toNumber, trim and trimEnd functions.
WhiteSource Note: After conducting further research, WhiteSource has determined that CVE-2020-28500 only affects environments with versions 4.0.0 to 4.17.20 of Lodash.
<p>Publish Date: 2021-02-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28500>CVE-2020-28500</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28500">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28500</a></p>
<p>Release Date: 2021-02-15</p>
<p>Fix Resolution (lodash): 4.17.21</p>
<p>Direct dependency fix Resolution (standard): 15.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-28500 (Medium) detected in lodash-4.17.20.tgz - ## CVE-2020-28500 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-4.17.20.tgz</b></p></summary>
<p>Lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.20.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.20.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- standard-14.3.4.tgz (Root Library)
- eslint-6.8.0.tgz
- :x: **lodash-4.17.20.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/turkdevops/update-electron-app/commit/f34c4d5aa805ffe788d3c6aa87f10f5e2a320036">f34c4d5aa805ffe788d3c6aa87f10f5e2a320036</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Lodash versions prior to 4.17.21 are vulnerable to Regular Expression Denial of Service (ReDoS) via the toNumber, trim and trimEnd functions.
WhiteSource Note: After conducting further research, WhiteSource has determined that CVE-2020-28500 only affects environments with versions 4.0.0 to 4.17.20 of Lodash.
<p>Publish Date: 2021-02-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28500>CVE-2020-28500</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28500">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28500</a></p>
<p>Release Date: 2021-02-15</p>
<p>Fix Resolution (lodash): 4.17.21</p>
<p>Direct dependency fix Resolution (standard): 15.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve medium detected in lodash tgz cve medium severity vulnerability vulnerable library lodash tgz lodash modular utilities library home page a href path to dependency file package json path to vulnerable library node modules lodash package json dependency hierarchy standard tgz root library eslint tgz x lodash tgz vulnerable library found in head commit a href found in base branch master vulnerability details lodash versions prior to are vulnerable to regular expression denial of service redos via the tonumber trim and trimend functions whitesource note after conducting further research whitesource has determined that cve only affects environments with versions to of lodash publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution lodash direct dependency fix resolution standard step up your open source security game with whitesource | 0 |
4,806 | 24,759,368,917 | IssuesEvent | 2022-10-21 21:25:32 | OpenRefine/OpenRefine | https://api.github.com/repos/OpenRefine/OpenRefine | opened | Replace all Throwable.printStackTrace() with logging | bug maintainability | In investigating #5354, I realized that this is just the tip of the iceberg and there are a large number of places (~100) in the code where a naked `e.printStackTrace()` call is used. These should all be converted to using logging so that they can be turned on/off, routed to log collectors, reformatted, etc.
By default they probably just need to be replaced with something like `LOGGER.error("<Meaningful error message here>", e)`
Note that there are also some calls to `printStackTrace(PrintWriter s)` which is used to get a formatted stack trace to forward to the front end. These are probably fine as is. | True | Replace all Throwable.printStackTrace() with logging - In investigating #5354, I realized that this is just the tip of the iceberg and there are a large number of places (~100) in the code where a naked `e.printStackTrace()` call is used. These should all be converted to using logging so that they can be turned on/off, routed to log collectors, reformatted, etc.
By default they probably just need to be replaced with something like `LOGGER.error("<Meaningful error message here>", e)`
Note that there are also some calls to `printStackTrace(PrintWriter s)` which is used to get a formatted stack trace to forward to the front end. These are probably fine as is. | main | replace all throwable printstacktrace with logging in investigating i realized that this is just the tip of the iceberg and there are a large number of places in the code where a naked e printstacktrace call is used these should all be converted to using logging so that they can be turned on off routed to log collectors reformatted etc by default they probably just need to replace with something like logger error e note that there are also some calls to printstacktrace printwriter s which is used to get a formatted stack trace to forward to the front end these are probably fine as is | 1 |
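The OpenRefine issue above targets Java code (`LOGGER.error("...", e)`), but the pattern it proposes — hand the exception to a logger instead of printing the stack trace directly — is language-agnostic. A sketch of the before/after in Python, purely for illustration of the same idea:

```python
import logging

logging.basicConfig(level=logging.ERROR)
logger = logging.getLogger("refine")

def risky_step():
    return 1 / 0  # stand-in for an operation that throws

# Before: the Python equivalent of a naked e.printStackTrace() --
# the trace goes straight to stderr and cannot be turned on/off,
# routed to log collectors, or reformatted:
#
#     import traceback
#     try:
#         risky_step()
#     except ZeroDivisionError:
#         traceback.print_exc()

# After: a meaningful message plus the exception; the logger's
# handlers decide where the trace goes and how it is formatted.
try:
    risky_step()
except ZeroDivisionError:
    logger.error("risky step failed", exc_info=True)
```

As in the Java case, the message gives the operator context that a bare stack trace lacks, and severity filtering comes for free.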
5,018 | 25,771,051,061 | IssuesEvent | 2022-12-09 08:00:23 | cloverhearts/quilljs-markdown | https://api.github.com/repos/cloverhearts/quilljs-markdown | closed | use this package in the browser | Saw with Maintainer | I use this package as a node package and it works, but I need to load it in the browser
here are my trials:
```
import("/node_modules/quilljs-markdown/index.js").then(QuillMarkdown=>Quill.register(...))
```
this causes an error because it imports from './src/app.js' which includes `import 'regenerator-runtime'`
the paths in the browser should be relative `/node_modules/regenerator-runtime`
```
// load a normal javascript file
load( "/node_modules/quilljs-markdown/dist/quilljs-markdown.js" ).then(()=>{
console.log(window.QuillMarkdown) // function
Quill.register({ "modules/QuillMarkdown": window.QuillMarkdown }, true)
})
```
this causes the error `quill Cannot import modules/QuillMarkdown. Are you sure it was registered?`
the same behavior when loading from CDN
```
load("https://cdn.jsdelivr.net/npm/quilljs-markdown@latest/dist/quilljs-markdown.js").then(...) | True | use this package in the browser - I use this package as a node package and it works, but I need to load it in the browser
here are my trials:
```
import("/node_modules/quilljs-markdown/index.js").then(QuillMarkdown=>Quill.register(...))
```
this causes an error because it imports from './src/app.js' which includes `import 'regenerator-runtime'`
the paths in the browser should be relative `/node_modules/regenerator-runtime`
```
// load a normal javascript file
load( "/node_modules/quilljs-markdown/dist/quilljs-markdown.js" ).then(()=>{
console.log(window.QuillMarkdown) // function
Quill.register({ "modules/QuillMarkdown": window.QuillMarkdown }, true)
})
```
this causes the error `quill Cannot import modules/QuillMarkdown. Are you sure it was registered?`
the same behavior when loading from CDN
```
load("https://cdn.jsdelivr.net/npm/quilljs-markdown@latest/dist/quilljs-markdown.js").then(...) | main | use this package in the browser i use this package as a node package and it works but i need to load it in the browser here are my trials import node modules quilljs markdown index js then quillmarkdown quill register this causes an error because it imports from src app js which includes import regenerator runtime the paths in the browser should be relative node modules regenerator runtime load a normal javascript file load node modules quilljs markdown dist quilljs markdown js then console log window quillmarkdown function quill register modules quillmarkdown window quillmarkdown true this causes the error quill cannot import modules quillmarkdown are you sure it was registered the same behavior when loading from cdn load | 1 |
288,249 | 24,893,355,726 | IssuesEvent | 2022-10-28 13:52:25 | brave/brave-browser | https://api.github.com/repos/brave/brave-browser | closed | Test failure: brave_browser_tests.xml.[empty] | ci-concern bot/type/test bot/arch/x64 bot/channel/nightly bot/platform/macos bot/branch/v1.47 | Greetings human!
Bad news. `brave_browser_tests.xml.[empty]` [failed on macos x64 nightly v1.47.26](https://ci.brave.com/job/brave-browser-build-macos-x64/5637/testReport/junit/brave_browser_tests/xml/test_browser____empty_).
<details>
<summary>Stack trace</summary>
```
Test report file /Users/jenkins/jenkins/workspace/brave-browser-build-macos-x64-nightly/src/brave_browser_tests.xml was length 0
```
</details>
<details>
<summary>Previous issues</summary>
* #25605
* #25604
* #24379
[Find all](https://github.com/brave/brave-browser/issues?q=type%3Aissue+label%3Abot%2Ftype%2Ftest+in%3Atitle+%22Test+failure%3A+brave_browser_tests.xml.%5Bempty%5D%22)
</details> | 1.0 | Test failure: brave_browser_tests.xml.[empty] - Greetings human!
Bad news. `brave_browser_tests.xml.[empty]` [failed on macos x64 nightly v1.47.26](https://ci.brave.com/job/brave-browser-build-macos-x64/5637/testReport/junit/brave_browser_tests/xml/test_browser____empty_).
<details>
<summary>Stack trace</summary>
```
Test report file /Users/jenkins/jenkins/workspace/brave-browser-build-macos-x64-nightly/src/brave_browser_tests.xml was length 0
```
</details>
<details>
<summary>Previous issues</summary>
* #25605
* #25604
* #24379
[Find all](https://github.com/brave/brave-browser/issues?q=type%3Aissue+label%3Abot%2Ftype%2Ftest+in%3Atitle+%22Test+failure%3A+brave_browser_tests.xml.%5Bempty%5D%22)
</details> | non_main | test failure brave browser tests xml greetings human bad news brave browser tests xml stack trace test report file users jenkins jenkins workspace brave browser build macos nightly src brave browser tests xml was length previous issues | 0 |
144,958 | 19,318,939,348 | IssuesEvent | 2021-12-14 01:41:40 | txh51591/tm-repo | https://api.github.com/repos/txh51591/tm-repo | opened | CVE-2020-36180 (High) detected in jackson-databind-2.9.9.jar | security vulnerability | ## CVE-2020-36180 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.9.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: tm-repo/pom.xml</p>
<p>Path to vulnerable library: m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9/jackson-databind-2.9.9.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.9.9.jar** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to org.apache.commons.dbcp2.cpdsadapter.DriverAdapterCPDS.
<p>Publish Date: 2021-01-07
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36180>CVE-2020-36180</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/FasterXML/jackson-databind/issues/3004">https://github.com/FasterXML/jackson-databind/issues/3004</a></p>
<p>Release Date: 2021-01-07</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.8</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-36180 (High) detected in jackson-databind-2.9.9.jar - ## CVE-2020-36180 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.9.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: tm-repo/pom.xml</p>
<p>Path to vulnerable library: m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9/jackson-databind-2.9.9.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.9.9.jar** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to org.apache.commons.dbcp2.cpdsadapter.DriverAdapterCPDS.
<p>Publish Date: 2021-01-07
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36180>CVE-2020-36180</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/FasterXML/jackson-databind/issues/3004">https://github.com/FasterXML/jackson-databind/issues/3004</a></p>
<p>Release Date: 2021-01-07</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.8</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file tm repo pom xml path to vulnerable library repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy x jackson databind jar vulnerable library found in base branch master vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to org apache commons cpdsadapter driveradaptercpds publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind step up your open source security game with whitesource | 0 |
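Both WhiteSource CVE records in this dump flag a jar because its version precedes the fix release — here, jackson-databind 2.9.9 against fix version 2.9.10.8. A rough sketch of that comparison follows; it handles numeric dotted versions only (a simplifying assumption — real Maven version ordering also covers qualifiers like `-rc1`, which this deliberately ignores):

```python
def parse_version(v: str) -> tuple[int, ...]:
    # Numeric dotted versions only; Maven versions may carry qualifiers
    # ("1.9.4-rc1") that a real scanner must handle separately.
    return tuple(int(part) for part in v.split("."))

def is_vulnerable(installed: str, fixed_in: str) -> bool:
    # Tuple comparison gives the element-wise ordering we want:
    # (2, 9, 9) < (2, 9, 10, 8), so 2.9.9 is flagged.
    return parse_version(installed) < parse_version(fixed_in)

flagged = is_vulnerable("2.9.9", "2.9.10.8")     # the jar in the report
patched = is_vulnerable("2.9.10.8", "2.9.10.8")  # at the fix version
print(flagged, patched)
```

The same check against the commons-beanutils record later in this dump (1.9.3 vs. fix version 1.9.4) flags that jar for the same reason.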
89,773 | 10,616,618,149 | IssuesEvent | 2019-10-12 13:09:49 | neutralinojs/neutralinojs | https://api.github.com/repos/neutralinojs/neutralinojs | opened | Add contributors list to README | documentation | Use a quick tool like https://dev.to/lacolaco/introducing-contributors-img-keep-contributors-in-readme-md-gci
We need to easily update when there are new contributors | 1.0 | Add contributors list to README - Use a quick tool like https://dev.to/lacolaco/introducing-contributors-img-keep-contributors-in-readme-md-gci
We need to easily update when there are new contributors | non_main | add contributors list to readme use a quick tool like we need to easily update when there are new contributors | 0 |
32,627 | 4,779,403,826 | IssuesEvent | 2016-10-27 22:22:08 | coreos/etcd | https://api.github.com/repos/coreos/etcd | closed | functional-tester: mvcc database space exceeded and tester kept running | component/functional-tester | 
tester kept getting `mvcc: database space exceeded` errors, and the tester just kept retrying the first failure case `kill all members`. It starts around round 349. We should just mark it as an error or stop the tester. Since this happens, no other case is run. Just the `kill all members` count keeps increasing.
```
2016-10-27 07:04:15.830866 W | etcd-tester: #38 setHealthKey error (etcdserver: mvcc: database space exceeded (http://10.240.0.2:2379))
2016-10-27 07:04:16.834728 W | etcd-tester: #39 setHealthKey error (etcdserver: mvcc: database space exceeded (http://10.240.0.2:2379))
2016-10-27 07:04:17.838512 W | etcd-tester: #40 setHealthKey error (etcdserver: mvcc: database space exceeded (http://10.240.0.2:2379))
2016-10-27 07:04:18.845227 W | etcd-tester: #41 setHealthKey error (etcdserver: mvcc: database space exceeded (http://10.240.0.2:2379))
2016-10-27 07:04:19.864146 W | etcd-tester: #42 setHealthKey error (etcdserver: mvcc: database space exceeded (http://10.240.0.2:2379))
2016-10-27 07:04:20.874420 W | etcd-tester: #43 setHealthKey error (etcdserver: mvcc: database space exceeded (http://10.240.0.2:2379))
2016-10-27 07:04:21.878229 W | etcd-tester: #44 setHealthKey error (etcdserver: mvcc: database space exceeded (http://10.240.0.2:2379))
2016-10-27 07:04:22.882214 W | etcd-tester: #45 setHealthKey error (etcdserver: mvcc: database space exceeded (http://10.240.0.2:2379))
2016-10-27 07:04:23.886160 W | etcd-tester: #46 setHealthKey error (etcdserver: mvcc: database space exceeded (http://10.240.0.2:2379))
2016-10-27 07:04:24.890366 W | etcd-tester: #47 setHealthKey error (etcdserver: mvcc: database space exceeded (http://10.240.0.2:2379))
2016-10-27 07:04:25.929590 W | etcd-tester: #48 setHealthKey error (etcdserver: mvcc: database space exceeded (http://10.240.0.2:2379))
2016-10-27 07:04:26.946588 W | etcd-tester: #49 setHealthKey error (etcdserver: mvcc: database space exceeded (http://10.240.0.2:2379))
2016-10-27 07:04:27.950237 W | etcd-tester: #50 setHealthKey error (etcdserver: mvcc: database space exceeded (http://10.240.0.2:2379))
2016-10-27 07:04:28.954109 W | etcd-tester: #51 setHealthKey error (etcdserver: mvcc: database space exceeded (http://10.240.0.2:2379))
2016-10-27 07:04:29.989237 W | etcd-tester: #52 setHealthKey error (etcdserver: mvcc: database space exceeded (http://10.240.0.2:2379))
2016-10-27 07:04:30.993481 W | etcd-tester: #53 setHealthKey error (etcdserver: mvcc: database space exceeded (http://10.240.0.2:2379))
2016-10-27 07:04:31.997697 W | etcd-tester: #54 setHealthKey error (etcdserver: mvcc: database space exceeded (http://10.240.0.2:2379))
2016-10-27 07:04:33.001455 W | etcd-tester: #55 setHealthKey error (etcdserver: mvcc: database space exceeded (http://10.240.0.2:2379))
2016-10-27 07:04:34.013551 W | etcd-tester: #56 setHealthKey error (etcdserver: mvcc: database space exceeded (http://10.240.0.2:2379))
2016-10-27 07:04:35.018662 W | etcd-tester: #57 setHealthKey error (etcdserver: mvcc: database space exceeded (http://10.240.0.2:2379))
2016-10-27 07:04:36.034003 W | etcd-tester: #58 setHealthKey error (etcdserver: mvcc: database space exceeded (http://10.240.0.2:2379))
2016-10-27 07:04:37.052432 W | etcd-tester: #59 setHealthKey error (etcdserver: mvcc: database space exceeded (http://10.240.0.2:2379))
2016-10-27 07:04:38.052602 I | etcd-tester: [round#359 case#0] wait full health error: etcdserver: mvcc: database space exceeded (http://10.240.0.2:2379)
...
until the end
```
| 1.0 | functional-tester: mvcc database space exceeded and tester kept running - 
tester kept getting `mvcc: database space exceeded` errors, and the tester just kept retrying the first failure case `kill all members`. It starts around round 349. We should just mark it as an error or stop the tester. Since this happens, no other case is run. Just the `kill all members` count keeps increasing.
```
2016-10-27 07:04:15.830866 W | etcd-tester: #38 setHealthKey error (etcdserver: mvcc: database space exceeded (http://10.240.0.2:2379))
2016-10-27 07:04:16.834728 W | etcd-tester: #39 setHealthKey error (etcdserver: mvcc: database space exceeded (http://10.240.0.2:2379))
2016-10-27 07:04:17.838512 W | etcd-tester: #40 setHealthKey error (etcdserver: mvcc: database space exceeded (http://10.240.0.2:2379))
2016-10-27 07:04:18.845227 W | etcd-tester: #41 setHealthKey error (etcdserver: mvcc: database space exceeded (http://10.240.0.2:2379))
2016-10-27 07:04:19.864146 W | etcd-tester: #42 setHealthKey error (etcdserver: mvcc: database space exceeded (http://10.240.0.2:2379))
2016-10-27 07:04:20.874420 W | etcd-tester: #43 setHealthKey error (etcdserver: mvcc: database space exceeded (http://10.240.0.2:2379))
2016-10-27 07:04:21.878229 W | etcd-tester: #44 setHealthKey error (etcdserver: mvcc: database space exceeded (http://10.240.0.2:2379))
2016-10-27 07:04:22.882214 W | etcd-tester: #45 setHealthKey error (etcdserver: mvcc: database space exceeded (http://10.240.0.2:2379))
2016-10-27 07:04:23.886160 W | etcd-tester: #46 setHealthKey error (etcdserver: mvcc: database space exceeded (http://10.240.0.2:2379))
2016-10-27 07:04:24.890366 W | etcd-tester: #47 setHealthKey error (etcdserver: mvcc: database space exceeded (http://10.240.0.2:2379))
2016-10-27 07:04:25.929590 W | etcd-tester: #48 setHealthKey error (etcdserver: mvcc: database space exceeded (http://10.240.0.2:2379))
2016-10-27 07:04:26.946588 W | etcd-tester: #49 setHealthKey error (etcdserver: mvcc: database space exceeded (http://10.240.0.2:2379))
2016-10-27 07:04:27.950237 W | etcd-tester: #50 setHealthKey error (etcdserver: mvcc: database space exceeded (http://10.240.0.2:2379))
2016-10-27 07:04:28.954109 W | etcd-tester: #51 setHealthKey error (etcdserver: mvcc: database space exceeded (http://10.240.0.2:2379))
2016-10-27 07:04:29.989237 W | etcd-tester: #52 setHealthKey error (etcdserver: mvcc: database space exceeded (http://10.240.0.2:2379))
2016-10-27 07:04:30.993481 W | etcd-tester: #53 setHealthKey error (etcdserver: mvcc: database space exceeded (http://10.240.0.2:2379))
2016-10-27 07:04:31.997697 W | etcd-tester: #54 setHealthKey error (etcdserver: mvcc: database space exceeded (http://10.240.0.2:2379))
2016-10-27 07:04:33.001455 W | etcd-tester: #55 setHealthKey error (etcdserver: mvcc: database space exceeded (http://10.240.0.2:2379))
2016-10-27 07:04:34.013551 W | etcd-tester: #56 setHealthKey error (etcdserver: mvcc: database space exceeded (http://10.240.0.2:2379))
2016-10-27 07:04:35.018662 W | etcd-tester: #57 setHealthKey error (etcdserver: mvcc: database space exceeded (http://10.240.0.2:2379))
2016-10-27 07:04:36.034003 W | etcd-tester: #58 setHealthKey error (etcdserver: mvcc: database space exceeded (http://10.240.0.2:2379))
2016-10-27 07:04:37.052432 W | etcd-tester: #59 setHealthKey error (etcdserver: mvcc: database space exceeded (http://10.240.0.2:2379))
2016-10-27 07:04:38.052602 I | etcd-tester: [round#359 case#0] wait full health error: etcdserver: mvcc: database space exceeded (http://10.240.0.2:2379)
...
until the end
```
| non_main | functional tester mvcc database space exceeded and tester kept running tester kept getting mvcc database space exceeded errors and the tester just kept retrying the first failure case kill all members it starts around round we should just mark it as error or stop the tester since this happens no other case is run just kill all members counting keeps increasing w etcd tester sethealthkey error etcdserver mvcc database space exceeded w etcd tester sethealthkey error etcdserver mvcc database space exceeded w etcd tester sethealthkey error etcdserver mvcc database space exceeded w etcd tester sethealthkey error etcdserver mvcc database space exceeded w etcd tester sethealthkey error etcdserver mvcc database space exceeded w etcd tester sethealthkey error etcdserver mvcc database space exceeded w etcd tester sethealthkey error etcdserver mvcc database space exceeded w etcd tester sethealthkey error etcdserver mvcc database space exceeded w etcd tester sethealthkey error etcdserver mvcc database space exceeded w etcd tester sethealthkey error etcdserver mvcc database space exceeded w etcd tester sethealthkey error etcdserver mvcc database space exceeded w etcd tester sethealthkey error etcdserver mvcc database space exceeded w etcd tester sethealthkey error etcdserver mvcc database space exceeded w etcd tester sethealthkey error etcdserver mvcc database space exceeded w etcd tester sethealthkey error etcdserver mvcc database space exceeded w etcd tester sethealthkey error etcdserver mvcc database space exceeded w etcd tester sethealthkey error etcdserver mvcc database space exceeded w etcd tester sethealthkey error etcdserver mvcc database space exceeded w etcd tester sethealthkey error etcdserver mvcc database space exceeded w etcd tester sethealthkey error etcdserver mvcc database space exceeded w etcd tester sethealthkey error etcdserver mvcc database space exceeded w etcd tester sethealthkey error etcdserver mvcc database space exceeded i etcd tester 
wait full health error etcdserver mvcc database space exceeded until the end | 0 |
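The etcd tester record above describes retrying an error that retrying can never fix (`mvcc: database space exceeded` means the backend hit its space quota). The requested behaviour — "mark it as error or stop the tester" — amounts to a fail-fast retry loop. A minimal sketch under stated assumptions (the function names and the non-recoverable error list are illustrative, not etcd's actual tester code):

```python
# Error substrings that no amount of retrying will clear (illustrative list).
NON_RECOVERABLE = ("mvcc: database space exceeded",)

def wait_full_health(check, max_retries=60):
    """Retry a health check, but abort at once on non-recoverable errors.

    `check` returns None when the cluster is healthy, or an error string.
    """
    for attempt in range(max_retries):
        err = check()
        if err is None:
            return attempt  # healthy after `attempt` retries
        if any(marker in err for marker in NON_RECOVERABLE):
            # Stop the tester instead of looping through round after
            # round of "kill all members" against a doomed cluster.
            raise RuntimeError(f"stopping tester, non-recoverable: {err}")
    raise TimeoutError("cluster never became healthy")

# A cluster stuck on the quota error fails fast rather than retrying forever.
try:
    wait_full_health(
        lambda: "etcdserver: mvcc: database space exceeded (http://10.240.0.2:2379)"
    )
except RuntimeError as exc:
    print(exc)
```

The key design choice is distinguishing transient errors (worth retrying) from terminal ones (worth surfacing immediately), which is what the issue says the tester was missing.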
176,848 | 21,443,085,512 | IssuesEvent | 2022-04-25 01:07:27 | emilwareus/spring-boot | https://api.github.com/repos/emilwareus/spring-boot | closed | CVE-2014-0114 (High) detected in commons-beanutils-1.9.3.jar - autoclosed | security vulnerability | ## CVE-2014-0114 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-beanutils-1.9.3.jar</b></p></summary>
<p>Apache Commons BeanUtils provides an easy-to-use but flexible wrapper around reflection and introspection.</p>
<p>Library home page: <a href="https://commons.apache.org/proper/commons-beanutils/">https://commons.apache.org/proper/commons-beanutils/</a></p>
<p>Path to dependency file: /spring-boot-project/spring-boot-starters/spring-boot-starter-artemis/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/commons-beanutils/commons-beanutils/1.9.3/commons-beanutils-1.9.3.jar,/home/wss-scanner/.m2/repository/commons-beanutils/commons-beanutils/1.9.3/commons-beanutils-1.9.3.jar,/home/wss-scanner/.m2/repository/commons-beanutils/commons-beanutils/1.9.3/commons-beanutils-1.9.3.jar</p>
<p>
Dependency Hierarchy:
- artemis-jms-server-2.8.0.jar (Root Library)
- artemis-server-2.8.0.jar
- :x: **commons-beanutils-1.9.3.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/emilwareus/spring-boot/commit/a9fcc95f14645f6bfd4924ca382b3d6b814680b0">a9fcc95f14645f6bfd4924ca382b3d6b814680b0</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Apache Commons BeanUtils, as distributed in lib/commons-beanutils-1.8.0.jar in Apache Struts 1.x through 1.3.10 and in other products requiring commons-beanutils through 1.9.2, does not suppress the class property, which allows remote attackers to "manipulate" the ClassLoader and execute arbitrary code via the class parameter, as demonstrated by the passing of this parameter to the getClass method of the ActionForm object in Struts 1.
<p>Publish Date: 2014-04-30
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2014-0114>CVE-2014-0114</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-0114">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-0114</a></p>
<p>Release Date: 2014-04-30</p>
<p>Fix Resolution: commons-beanutils:commons-beanutils:1.9.4;org.apache.struts:struts2-core:2.0.5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2014-0114 (High) detected in commons-beanutils-1.9.3.jar - autoclosed - ## CVE-2014-0114 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-beanutils-1.9.3.jar</b></p></summary>
<p>Apache Commons BeanUtils provides an easy-to-use but flexible wrapper around reflection and introspection.</p>
<p>Library home page: <a href="https://commons.apache.org/proper/commons-beanutils/">https://commons.apache.org/proper/commons-beanutils/</a></p>
<p>Path to dependency file: /spring-boot-project/spring-boot-starters/spring-boot-starter-artemis/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/commons-beanutils/commons-beanutils/1.9.3/commons-beanutils-1.9.3.jar,/home/wss-scanner/.m2/repository/commons-beanutils/commons-beanutils/1.9.3/commons-beanutils-1.9.3.jar,/home/wss-scanner/.m2/repository/commons-beanutils/commons-beanutils/1.9.3/commons-beanutils-1.9.3.jar</p>
<p>
Dependency Hierarchy:
- artemis-jms-server-2.8.0.jar (Root Library)
- artemis-server-2.8.0.jar
- :x: **commons-beanutils-1.9.3.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/emilwareus/spring-boot/commit/a9fcc95f14645f6bfd4924ca382b3d6b814680b0">a9fcc95f14645f6bfd4924ca382b3d6b814680b0</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Apache Commons BeanUtils, as distributed in lib/commons-beanutils-1.8.0.jar in Apache Struts 1.x through 1.3.10 and in other products requiring commons-beanutils through 1.9.2, does not suppress the class property, which allows remote attackers to "manipulate" the ClassLoader and execute arbitrary code via the class parameter, as demonstrated by the passing of this parameter to the getClass method of the ActionForm object in Struts 1.
<p>Publish Date: 2014-04-30
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2014-0114>CVE-2014-0114</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-0114">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-0114</a></p>
<p>Release Date: 2014-04-30</p>
<p>Fix Resolution: commons-beanutils:commons-beanutils:1.9.4;org.apache.struts:struts2-core:2.0.5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve high detected in commons beanutils jar autoclosed cve high severity vulnerability vulnerable library commons beanutils jar apache commons beanutils provides an easy to use but flexible wrapper around reflection and introspection library home page a href path to dependency file spring boot project spring boot starters spring boot starter artemis pom xml path to vulnerable library home wss scanner repository commons beanutils commons beanutils commons beanutils jar home wss scanner repository commons beanutils commons beanutils commons beanutils jar home wss scanner repository commons beanutils commons beanutils commons beanutils jar dependency hierarchy artemis jms server jar root library artemis server jar x commons beanutils jar vulnerable library found in head commit a href vulnerability details apache commons beanutils as distributed in lib commons beanutils jar in apache struts x through and in other products requiring commons beanutils through does not suppress the class property which allows remote attackers to manipulate the classloader and execute arbitrary code via the class parameter as demonstrated by the passing of this parameter to the getclass method of the actionform object in struts publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution commons beanutils commons beanutils org apache struts core step up your open source security game with whitesource | 0 |
2,439 | 8,634,933,303 | IssuesEvent | 2018-11-22 19:15:46 | coq-community/manifesto | https://api.github.com/repos/coq-community/manifesto | opened | Proposal to add RelationExtraction | maintainer-wanted | ## Move a project to coq-community ##
**Project name:** RelationExtraction
**Initial author(s):** Catherine Dubois, David Delahaye, and Pierre-Nicolas Tollitte
**Current URL:** https://github.com/picnic/RelationExtraction
**Kind:** OCaml plugin
**License:** GPL3
**Description:** A plugin for generating functions from inductive types which make this possible. The functions can either be functions inside Coq or functions in an extraction language, such as OCaml. The underlying theory and implementation of the plugin is described in the paper [Producing Certified Functional Code from Inductive Specifications](https://doi.org/10.1007/978-3-642-35308-6_9).
**Status:** Appears unmaintained. Last supported version is Coq 8.4.
**New maintainer:** looking for a volunteer
| True | Proposal to add RelationExtraction - ## Move a project to coq-community ##
**Project name:** RelationExtraction
**Initial author(s):** Catherine Dubois, David Delahaye, and Pierre-Nicolas Tollitte
**Current URL:** https://github.com/picnic/RelationExtraction
**Kind:** OCaml plugin
**License:** GPL3
**Description:** A plugin for generating functions from inductive types which make this possible. The functions can either be functions inside Coq or functions in an extraction language, such as OCaml. The underlying theory and implementation of the plugin is described in the paper [Producing Certified Functional Code from Inductive Specifications](https://doi.org/10.1007/978-3-642-35308-6_9).
**Status:** Appears unmaintained. Last supported version is Coq 8.4.
**New maintainer:** looking for a volunteer
| main | proposal to add relationextraction move a project to coq community project name relationextraction initial author s catherine dubois david delahaye and pierre nicolas tollitte current url kind ocaml plugin license description a plugin for generating functions from inductive types which make this possible the functions can either be functions inside coq or functions in an extraction language such as ocaml the underlying theory and implementation of the plugin is described in the paper status appears unmaintained last supported version is coq new maintainer looking for a volunteer | 1 |
5,470 | 27,350,840,690 | IssuesEvent | 2023-02-27 09:27:15 | cncf/glossary | https://api.github.com/repos/cncf/glossary | closed | Website updates: Tags | maintainers | @cjyabraham, as discussed over Slack, ideally, we'd want three different tag styles:
- Fundamentals or advanced
- Tech/concept/property
- The rest
Each tag type tells us a little bit about that word. We could use the style we have now + a lighter version of the same pink + a ghost button style. Generally, it would go from bold to ghost but I feel that the latter category tag should be the full-colored one. Maybe you can ask your UX expert what the best approach is.
Also, it'd be great to have all tags at the top so people could filter by category (e.g., "I want to see all architecture-related terms"). Here again, a UX perspective might be valuable. Should there be three lines?
- Fundamentals, advanced tags on top
- Tech/concept/property tags in the middle
- Other tags at the bottom?
I can ask for UX advice if no one is available on your end. | True |
| main | 1
1,453 | 3,700,749,560 | IssuesEvent | 2016-02-29 10:02:30 | CartoDB/cartodb | https://api.github.com/repos/CartoDB/cartodb | closed | feature_flags_users id field should be a UUID, not an integer | Data-services | There cannot be integer primary keys in any cartodb model related to dynamic data | 1.0 | non_main | 0