Unnamed: 0 int64 1 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 7 112 | repo_url stringlengths 36 141 | action stringclasses 3 values | title stringlengths 3 438 | labels stringlengths 4 308 | body stringlengths 7 254k | index stringclasses 7 values | text_combine stringlengths 96 254k | label stringclasses 2 values | text stringlengths 96 246k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
5,380 | 27,041,756,033 | IssuesEvent | 2023-02-13 06:14:08 | tgstation/tgstation | https://api.github.com/repos/tgstation/tgstation | closed | Some mobs are immune to vaccum, i believe this is a mistake. | Maintainability/Hinders improvements Good First Issue Consistency Issue | [Round ID]: # (If you discovered this issue from playing tgstation hosted servers:)12806 On Citadel station
[Testmerges]: # (If you believe the issue to be caused by a test merge [OOC tab -> Show Server Revision], report it in the pull request's comment section instead.)
[Reproduction]: # (Explain your issue in detail, including the steps to reproduce it. Issues without proper reproduction steps or explanation are open to being ignored/closed by maintainers.)
During an attempt to steal the HoS gun, i blew open his office from space, and entered. I was attacked, and followed into vacuu, by the spider araneus. He proceeded to follow me into space, and kill me, because i am a pleb. But, after checking on him several times after my death, i found that despite the office being breached, he was alive and well. I believe he should have suffocated, as spiders would need air to live.
| True | Some mobs are immune to vaccum, i believe this is a mistake. - [Round ID]: # (If you discovered this issue from playing tgstation hosted servers:)12806 On Citadel station
[Testmerges]: # (If you believe the issue to be caused by a test merge [OOC tab -> Show Server Revision], report it in the pull request's comment section instead.)
[Reproduction]: # (Explain your issue in detail, including the steps to reproduce it. Issues without proper reproduction steps or explanation are open to being ignored/closed by maintainers.)
During an attempt to steal the HoS gun, i blew open his office from space, and entered. I was attacked, and followed into vacuu, by the spider araneus. He proceeded to follow me into space, and kill me, because i am a pleb. But, after checking on him several times after my death, i found that despite the office being breached, he was alive and well. I believe he should have suffocated, as spiders would need air to live.
| main | some mobs are immune to vaccum i believe this is a mistake if you discovered this issue from playing tgstation hosted servers on citadel station if you believe the issue to be caused by a test merge report it in the pull request s comment section instead explain your issue in detail including the steps to reproduce it issues without proper reproduction steps or explanation are open to being ignored closed by maintainers during an attempt to steal the hos gun i blew open his office from space and entered i was attacked and followed into vacuu by the spider araneus he proceeded to follow me into space and kill me because i am a pleb but after checking on him several times after my death i found that despite the office being breached he was alive and well i believe he should have suffocated as spiders would need air to live | 1 |
2,733 | 9,670,434,842 | IssuesEvent | 2019-05-21 19:54:45 | Fuzzik/ttt-weapon-collection | https://api.github.com/repos/Fuzzik/ttt-weapon-collection | opened | Reformat all repo code | maintainability | Change indentation to tabs
Remove whitespace between parentheses
Mentioned in https://github.com/Fuzzik/ttt-weapon-collection/commit/b784f744e299f903070e47b540d4ca5e44bef401 | True | Reformat all repo code - Change indentation to tabs
Remove whitespace between parentheses
Mentioned in https://github.com/Fuzzik/ttt-weapon-collection/commit/b784f744e299f903070e47b540d4ca5e44bef401 | main | reformat all repo code change indentation to tabs remove whitespace between parentheses mentioned in | 1 |
788,541 | 27,756,225,957 | IssuesEvent | 2023-03-16 02:52:29 | chareenav/github-issues-template | https://api.github.com/repos/chareenav/github-issues-template | opened | Point of Contact not working | bug Severity 2 Priority 2 | **Describe the bug**
Chat agent leads to a 404 page not found.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to https://chareenav.github.io/github-issues-template/index.html
2. Click on Support
3. Scroll down to Contact
4. See error
**Expected behavior**
Page should load to a chat with an agent
| 1.0 | Point of Contact not working - **Describe the bug**
Chat agent leads to a 404 page not found.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to https://chareenav.github.io/github-issues-template/index.html
2. Click on Support
3. Scroll down to Contact
4. See error
**Expected behavior**
Page should load to a chat with an agent
| non_main | point of contact not working describe the bug chat agent leads to a page not found to reproduce steps to reproduce the behavior go to click on support scroll down to contact see error expected behavior page should load to a chat with an agent | 0 |
1,078 | 4,893,665,170 | IssuesEvent | 2016-11-19 00:18:10 | duckduckgo/zeroclickinfo-fathead | https://api.github.com/repos/duckduckgo/zeroclickinfo-fathead | closed | C++ Reference Docs: | Bug External Maintainer Input Requested Mission: Programming Status: Needs a Developer Topic: C++ | I would like the following query to be part of IA for C++:
cpp begin valarray
---
IA Page: http://duck.co/ia/view/cppreference_doc
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @mylainos | True | C++ Reference Docs: - I would like the following query to be part of IA for C++:
cpp begin valarray
---
IA Page: http://duck.co/ia/view/cppreference_doc
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @mylainos | main | c reference docs i would like the following query to be part of ia for c cpp begin valarray ia page mylainos | 1 |
58,411 | 14,274,430,765 | IssuesEvent | 2020-11-22 03:54:48 | Ghost-chu/QuickShop-Reremake | https://api.github.com/repos/Ghost-chu/QuickShop-Reremake | closed | CVE-2020-24616 (High) detected in jackson-databind-2.3.4.jar - autoclosed | Bug security vulnerability | ## CVE-2020-24616 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.3.4.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Path to dependency file: QuickShop-Reremake/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.3.4/jackson-databind-2.3.4.jar</p>
<p>
Dependency Hierarchy:
- jenkins-client-0.3.8.jar (Root Library)
- :x: **jackson-databind-2.3.4.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Ghost-chu/QuickShop-Reremake/commit/8ee7d2b71191adf05b366e0787aec78ffbdad102">8ee7d2b71191adf05b366e0787aec78ffbdad102</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.6 mishandles the interaction between serialization gadgets and typing, related to br.com.anteros.dbcp.AnterosDBCPDataSource (aka Anteros-DBCP).
<p>Publish Date: 2020-08-25
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-24616>CVE-2020-24616</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-24616">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-24616</a></p>
<p>Release Date: 2020-08-25</p>
<p>Fix Resolution: 2.9.10.6</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-24616 (High) detected in jackson-databind-2.3.4.jar - autoclosed - ## CVE-2020-24616 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.3.4.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Path to dependency file: QuickShop-Reremake/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.3.4/jackson-databind-2.3.4.jar</p>
<p>
Dependency Hierarchy:
- jenkins-client-0.3.8.jar (Root Library)
- :x: **jackson-databind-2.3.4.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Ghost-chu/QuickShop-Reremake/commit/8ee7d2b71191adf05b366e0787aec78ffbdad102">8ee7d2b71191adf05b366e0787aec78ffbdad102</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.6 mishandles the interaction between serialization gadgets and typing, related to br.com.anteros.dbcp.AnterosDBCPDataSource (aka Anteros-DBCP).
<p>Publish Date: 2020-08-25
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-24616>CVE-2020-24616</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-24616">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-24616</a></p>
<p>Release Date: 2020-08-25</p>
<p>Fix Resolution: 2.9.10.6</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve high detected in jackson databind jar autoclosed cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api path to dependency file quickshop reremake pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy jenkins client jar root library x jackson databind jar vulnerable library found in head commit a href found in base branch master vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to br com anteros dbcp anterosdbcpdatasource aka anteros dbcp publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
16,990 | 22,355,341,116 | IssuesEvent | 2022-06-15 15:12:07 | MicrosoftDocs/azure-devops-docs | https://api.github.com/repos/MicrosoftDocs/azure-devops-docs | closed | "echo" in powershell is an alias. for clarity, one otta use "Write-Output" instead | doc-enhancement devops/prod Pri1 devops-cicd-process/tech | howdy y'all,
many of your examples use "echo" for the "script:" code. that has caused relatively new users to try to use "Write-Host" instead of "Write-Output". i recommend changing from the alias to the real cmdlet.
aliases should NEVER be used for anything other than one-off, throwaway code. if it will be reused or read by anyone ... the full cmdlet otta be used.
take care,
lee
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: dd7e0bd3-1f7d-d7b6-cc72-5ef63c31b46a
* Version Independent ID: dae87abd-b73d-9120-bcdb-6097d4b40f2a
* Content: [Define variables - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch#use-outputs-in-a-different-stage)
* Content Source: [docs/pipelines/process/variables.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/main/docs/pipelines/process/variables.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam** | 1.0 | "echo" in powershell is an alias. for clarity, one otta use "Write-Output" instead - howdy y'all,
many of your examples use "echo" for the "script:" code. that has caused relatively new users to try to use "Write-Host" instead of "Write-Output". i recommend changing from the alias to the real cmdlet.
aliases should NEVER be used for anything other than one-off, throwaway code. if it will be reused or read by anyone ... the full cmdlet otta be used.
take care,
lee
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: dd7e0bd3-1f7d-d7b6-cc72-5ef63c31b46a
* Version Independent ID: dae87abd-b73d-9120-bcdb-6097d4b40f2a
* Content: [Define variables - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch#use-outputs-in-a-different-stage)
* Content Source: [docs/pipelines/process/variables.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/main/docs/pipelines/process/variables.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam** | non_main | echo in powershell is an alias for clarity one otta use write output instead howdy y all many of your examples use echo for the script code that has caused relatively new users to try to use write host instead of write output i recommend changing from the alias to the real cmdlet aliases should never be used for anything other than one off throwaway code if it will be reused or read by anyone the full cmdlet otta be used take care lee document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id bcdb content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam | 0 |
5,812 | 30,777,288,657 | IssuesEvent | 2023-07-31 07:33:06 | beyarkay/eskom-calendar | https://api.github.com/repos/beyarkay/eskom-calendar | opened | Missing area schedule | waiting-on-maintainer missing-area-schedule | **What area(s) couldn't you find on [eskomcalendar.co.za](https://eskomcalendar.co.za/ec)?**
Please also give the province/municipality, our beautiful country has a surprising number of places that are named the same as each other. If you know what your area is named on EskomSePush, including that also helps a lot.
West Acres, Nelspruit, Mpumalanga
EskomSePush - West Acres (14) - Delta (14) Eskom Municipal, Mbombela, Mpumalanga
**Where did you hear about [eskomcalendar.co.za](https://eskomcalendar.co.za/ec)?**
This really helps us figure out what's working!
SolarAssistant
**Any other information**
If you've got any other info you think might be helpful, feel free to leave it here
| True | Missing area schedule - **What area(s) couldn't you find on [eskomcalendar.co.za](https://eskomcalendar.co.za/ec)?**
Please also give the province/municipality, our beautiful country has a surprising number of places that are named the same as each other. If you know what your area is named on EskomSePush, including that also helps a lot.
West Acres, Nelspruit, Mpumalanga
EskomSePush - West Acres (14) - Delta (14) Eskom Municipal, Mbombela, Mpumalanga
**Where did you hear about [eskomcalendar.co.za](https://eskomcalendar.co.za/ec)?**
This really helps us figure out what's working!
SolarAssistant
**Any other information**
If you've got any other info you think might be helpful, feel free to leave it here
| main | missing area schedule what area s couldn t you find on please also give the province municipality our beautiful country has a surprising number of places that are named the same as each other if you know what your area is named on eskomsepush including that also helps a lot west acres nelspruit mpumalanga eskomsepush west acres delta eskom municipal mbombela mpumalanga where did you hear about this really helps us figure out what s working solarassistant any other information if you ve got any other info you think might be helpful feel free to leave it here | 1 |
42,152 | 12,879,017,264 | IssuesEvent | 2020-07-11 19:41:16 | loftwah/loftwah-dev-2020 | https://api.github.com/repos/loftwah/loftwah-dev-2020 | opened | CVE-2020-7608 (Medium) detected in yargs-parser-10.1.0.tgz | security vulnerability | ## CVE-2020-7608 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>yargs-parser-10.1.0.tgz</b></p></summary>
<p>the mighty option parser used by yargs</p>
<p>Library home page: <a href="https://registry.npmjs.org/yargs-parser/-/yargs-parser-10.1.0.tgz">https://registry.npmjs.org/yargs-parser/-/yargs-parser-10.1.0.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/loftwah-dev-2020/wordpress/app/wp-content/themes/twentytwenty/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/loftwah-dev-2020/wordpress/app/wp-content/themes/twentytwenty/node_modules/meow/node_modules/yargs-parser/package.json</p>
<p>
Dependency Hierarchy:
- scripts-5.1.0.tgz (Root Library)
- npm-package-json-lint-3.7.0.tgz
- meow-5.0.0.tgz
- :x: **yargs-parser-10.1.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/loftwah/loftwah-dev-2020/commit/89fd212689151e9f7ffbaa73a2adacf222367b48">89fd212689151e9f7ffbaa73a2adacf222367b48</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
yargs-parser could be tricked into adding or modifying properties of Object.prototype using a "__proto__" payload.
<p>Publish Date: 2020-03-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7608>CVE-2020-7608</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7608">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7608</a></p>
<p>Release Date: 2020-03-16</p>
<p>Fix Resolution: v18.1.1;13.1.2;15.0.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-7608 (Medium) detected in yargs-parser-10.1.0.tgz - ## CVE-2020-7608 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>yargs-parser-10.1.0.tgz</b></p></summary>
<p>the mighty option parser used by yargs</p>
<p>Library home page: <a href="https://registry.npmjs.org/yargs-parser/-/yargs-parser-10.1.0.tgz">https://registry.npmjs.org/yargs-parser/-/yargs-parser-10.1.0.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/loftwah-dev-2020/wordpress/app/wp-content/themes/twentytwenty/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/loftwah-dev-2020/wordpress/app/wp-content/themes/twentytwenty/node_modules/meow/node_modules/yargs-parser/package.json</p>
<p>
Dependency Hierarchy:
- scripts-5.1.0.tgz (Root Library)
- npm-package-json-lint-3.7.0.tgz
- meow-5.0.0.tgz
- :x: **yargs-parser-10.1.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/loftwah/loftwah-dev-2020/commit/89fd212689151e9f7ffbaa73a2adacf222367b48">89fd212689151e9f7ffbaa73a2adacf222367b48</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
yargs-parser could be tricked into adding or modifying properties of Object.prototype using a "__proto__" payload.
<p>Publish Date: 2020-03-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7608>CVE-2020-7608</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7608">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7608</a></p>
<p>Release Date: 2020-03-16</p>
<p>Fix Resolution: v18.1.1;13.1.2;15.0.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve medium detected in yargs parser tgz cve medium severity vulnerability vulnerable library yargs parser tgz the mighty option parser used by yargs library home page a href path to dependency file tmp ws scm loftwah dev wordpress app wp content themes twentytwenty package json path to vulnerable library tmp ws scm loftwah dev wordpress app wp content themes twentytwenty node modules meow node modules yargs parser package json dependency hierarchy scripts tgz root library npm package json lint tgz meow tgz x yargs parser tgz vulnerable library found in head commit a href vulnerability details yargs parser could be tricked into adding or modifying properties of object prototype using a proto payload publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
241,119 | 7,808,927,049 | IssuesEvent | 2018-06-11 21:53:42 | rogerthat-platform/rogerthat-backend | https://api.github.com/repos/rogerthat-platform/rogerthat-backend | closed | Unhandled payments error - KeyError: 'location' | priority_critical state_verification type_bug | Happened 2 times for OSA Bakker
```
...
2018-06-09 00:33:20.893 CEST
Sending request to https://dev.payconiq.com/v2/transactions (/base/data/home/apps/e~rogerthat-server/20180608t091910.410288129468509991/rogerthat/bizz/payment/providers/payconiq/api.py:315)
{"currency":"EUR","amount":200,"description":"Payment OSA Bakker via Onze Stad App app\nRef.: _js_411efb44-a04d-f762-3784-xxx","callbackUrl":"https://rogerth.at/payments/callbacks/payconiq/transaction/update?id=_js_411efb44-a04d-f762-3784-xxx"}
2018-06-09 00:33:21.005 CEST
{"transactionId":"xxx"} (/base/data/home/apps/e~rogerthat-server/20180608t091910.410288129468509991/rogerthat/bizz/payment/providers/payconiq/api.py:327)
2018-06-09 00:33:21.005 CEST
{'x-application-context': 'Payconiq API Gateway:ext:8080', 'transfer-encoding': 'chunked', 'connection': 'keep-alive', 'x-newrelic-app-data': '***', 'date': 'Fri, 08 Jun 2018 22:33:20 GMT', 'content-type': 'application/json;charset=UTF-8'} (/base/data/home/apps/e~rogerthat-server/20180608t091910.410288129468509991/rogerthat/bizz/payment/providers/payconiq/api.py:328)
2018-06-09 00:33:21.005 CEST
Unhandled payments error (Unhandled payments error (/base/data/home/apps/e~rogerthat-server/20180608t091910.410288129468509991/add_1_monkey_patches.py:128/base/data/home/apps/e~roger )
Traceback (most recent call last):
File "/base/data/home/apps/e~rogerthat-server/20180608t091910.410288129468509991/rogerthat/api/payment.py", line 160, in _do_call
result = call(app_user, *args, **kwargs)
File "/base/data/home/apps/e~rogerthat-server/20180608t091910.410288129468509991/mcfw/rpc.py", line 164, in typechecked_return
result = f(*args, **kwargs)
File "/base/data/home/apps/e~rogerthat-server/20180608t091910.410288129468509991/mcfw/rpc.py", line 142, in typechecked_f
return f(**kwargs)
File "/base/data/home/apps/e~rogerthat-server/20180608t091910.410288129468509991/rogerthat/bizz/payment/__init__.py", line 347, in create_transaction
return get_api_module(provider_id).create_transaction(app_user, params)
File "/base/data/home/apps/e~rogerthat-server/20180608t091910.410288129468509991/mcfw/rpc.py", line 164, in typechecked_return
result = f(*args, **kwargs)
File "/base/data/home/apps/e~rogerthat-server/20180608t091910.410288129468509991/mcfw/rpc.py", line 142, in typechecked_f
return f(**kwargs)
File "/base/data/home/apps/e~rogerthat-server/20180608t091910.410288129468509991/rogerthat/bizz/payment/providers/payconiq/api.py", line 329, in create_transaction
payconic_transaction_url = result.headers['Location']
File "/base/alloc/tmpfs/dynamic_runtimes/python27/277b61042b697c7a_unzipped/python27_lib/versions/1/google/appengine/api/urlfetch.py", line 109, in __getitem__
return self.data[self.caseless_keys[key.lower()]]
KeyError: 'location'
2018-06-09 00:33:21.080 CEST
[XX-OFFLOADv1]{"timestamp":1528497201.0803299,"request_data":{"a":[],"c":[{"a":{"request":{"provider_id":"payconiq","params":"{\"target\":\"service-b6580411-e0a8-4f3b-bcf4-17b50f5f310e@rogerth.at\",\"currency\":\"EUR\",\"amount\":200,\"precision\":2,\"memo\":\"Payment OSA Bakker via Onze Stad App app\",\"message_key\":\"_js_411efb44-a04d-f762-3784-52d55d3f148e\",\"test_mode\":true}"}},"ci":"06922f5f-6d2f-4678-9f7d-1562b70c853c","av":1,"t":1528497201,"f":"com.mobicage.api.payment.createTransaction"}],"r":[],"av":1},"type":"app","response_data":{"ap":"https://rogerthat-server.appspot.com/json-rpc","r":[{"s":"success","r":{"result":null,"success":false,"error":{"message":"*****","code":"unknown","data":null}},"av":1,"ci":"06922f5f-6d2f-4678-9f7d-1562b70c853c","t":1528497201}],"av":1,"t":1528497201,"more":false},"user":"c2cea01b5830a394e19744762f0f118e:osa-demo2"} (/base/data/home/apps/e~rogerthat-server/20180608t091910.410288129468509991/lib/log_offload/log_offload.py:58)
``` | 1.0 | Unhandled payments error - KeyError: 'location' - Happened 2 times for OSA Bakker
```
...
2018-06-09 00:33:20.893 CEST
Sending request to https://dev.payconiq.com/v2/transactions (/base/data/home/apps/e~rogerthat-server/20180608t091910.410288129468509991/rogerthat/bizz/payment/providers/payconiq/api.py:315)
{"currency":"EUR","amount":200,"description":"Payment OSA Bakker via Onze Stad App app\nRef.: _js_411efb44-a04d-f762-3784-xxx","callbackUrl":"https://rogerth.at/payments/callbacks/payconiq/transaction/update?id=_js_411efb44-a04d-f762-3784-xxx"}
2018-06-09 00:33:21.005 CEST
{"transactionId":"xxx"} (/base/data/home/apps/e~rogerthat-server/20180608t091910.410288129468509991/rogerthat/bizz/payment/providers/payconiq/api.py:327)
2018-06-09 00:33:21.005 CEST
{'x-application-context': 'Payconiq API Gateway:ext:8080', 'transfer-encoding': 'chunked', 'connection': 'keep-alive', 'x-newrelic-app-data': '***', 'date': 'Fri, 08 Jun 2018 22:33:20 GMT', 'content-type': 'application/json;charset=UTF-8'} (/base/data/home/apps/e~rogerthat-server/20180608t091910.410288129468509991/rogerthat/bizz/payment/providers/payconiq/api.py:328)
2018-06-09 00:33:21.005 CEST
Unhandled payments error (Unhandled payments error (/base/data/home/apps/e~rogerthat-server/20180608t091910.410288129468509991/add_1_monkey_patches.py:128/base/data/home/apps/e~roger )
Traceback (most recent call last):
File "/base/data/home/apps/e~rogerthat-server/20180608t091910.410288129468509991/rogerthat/api/payment.py", line 160, in _do_call
result = call(app_user, *args, **kwargs)
File "/base/data/home/apps/e~rogerthat-server/20180608t091910.410288129468509991/mcfw/rpc.py", line 164, in typechecked_return
result = f(*args, **kwargs)
File "/base/data/home/apps/e~rogerthat-server/20180608t091910.410288129468509991/mcfw/rpc.py", line 142, in typechecked_f
return f(**kwargs)
File "/base/data/home/apps/e~rogerthat-server/20180608t091910.410288129468509991/rogerthat/bizz/payment/__init__.py", line 347, in create_transaction
return get_api_module(provider_id).create_transaction(app_user, params)
File "/base/data/home/apps/e~rogerthat-server/20180608t091910.410288129468509991/mcfw/rpc.py", line 164, in typechecked_return
result = f(*args, **kwargs)
File "/base/data/home/apps/e~rogerthat-server/20180608t091910.410288129468509991/mcfw/rpc.py", line 142, in typechecked_f
return f(**kwargs)
File "/base/data/home/apps/e~rogerthat-server/20180608t091910.410288129468509991/rogerthat/bizz/payment/providers/payconiq/api.py", line 329, in create_transaction
payconic_transaction_url = result.headers['Location']
File "/base/alloc/tmpfs/dynamic_runtimes/python27/277b61042b697c7a_unzipped/python27_lib/versions/1/google/appengine/api/urlfetch.py", line 109, in __getitem__
return self.data[self.caseless_keys[key.lower()]]
KeyError: 'location'
2018-06-09 00:33:21.080 CEST
[XX-OFFLOADv1]{"timestamp":1528497201.0803299,"request_data":{"a":[],"c":[{"a":{"request":{"provider_id":"payconiq","params":"{\"target\":\"service-b6580411-e0a8-4f3b-bcf4-17b50f5f310e@rogerth.at\",\"currency\":\"EUR\",\"amount\":200,\"precision\":2,\"memo\":\"Payment OSA Bakker via Onze Stad App app\",\"message_key\":\"_js_411efb44-a04d-f762-3784-52d55d3f148e\",\"test_mode\":true}"}},"ci":"06922f5f-6d2f-4678-9f7d-1562b70c853c","av":1,"t":1528497201,"f":"com.mobicage.api.payment.createTransaction"}],"r":[],"av":1},"type":"app","response_data":{"ap":"https://rogerthat-server.appspot.com/json-rpc","r":[{"s":"success","r":{"result":null,"success":false,"error":{"message":"*****","code":"unknown","data":null}},"av":1,"ci":"06922f5f-6d2f-4678-9f7d-1562b70c853c","t":1528497201}],"av":1,"t":1528497201,"more":false},"user":"c2cea01b5830a394e19744762f0f118e:osa-demo2"} (/base/data/home/apps/e~rogerthat-server/20180608t091910.410288129468509991/lib/log_offload/log_offload.py:58)
``` | non_main | unhandled payments error keyerror location happened times for osa bakker cest sending request to base data home apps e rogerthat server rogerthat bizz payment providers payconiq api py currency eur amount description payment osa bakker via onze stad app app nref js xxx callbackurl cest transactionid xxx base data home apps e rogerthat server rogerthat bizz payment providers payconiq api py cest x application context payconiq api gateway ext transfer encoding chunked connection keep alive x newrelic app data date fri jun gmt content type application json charset utf base data home apps e rogerthat server rogerthat bizz payment providers payconiq api py cest unhandled payments error unhandled payments error base data home apps e rogerthat server add monkey patches py base data home apps e roger traceback most recent call last file base data home apps e rogerthat server rogerthat api payment py line in do call result call app user args kwargs file base data home apps e rogerthat server mcfw rpc py line in typechecked return result f args kwargs file base data home apps e rogerthat server mcfw rpc py line in typechecked f return f kwargs file base data home apps e rogerthat server rogerthat bizz payment init py line in create transaction return get api module provider id create transaction app user params file base data home apps e rogerthat server mcfw rpc py line in typechecked return result f args kwargs file base data home apps e rogerthat server mcfw rpc py line in typechecked f return f kwargs file base data home apps e rogerthat server rogerthat bizz payment providers payconiq api py line in create transaction payconic transaction url result headers file base alloc tmpfs dynamic runtimes unzipped lib versions google appengine api urlfetch py line in getitem return self data keyerror location cest timestamp request data a c r av type app response data ap av t more false user osa base data home apps e rogerthat server lib log offload log offload 
py | 0 |
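The `KeyError: 'location'` in the traceback above is raised because `result.headers['Location']` indexes a provider response that carried no `Location` header at all. A defensive lookup would turn the unhandled crash into an explicit payment error — a minimal sketch assuming a plain dict of response headers; `PaymentError` and `extract_transaction_url` are illustrative names, not the actual rogerthat-server code:

```python
class PaymentError(Exception):
    """Raised when a provider response is missing expected data."""


def extract_transaction_url(headers):
    # Approximate the case-insensitive header lookup that urlfetch
    # performs, but fail with a domain-level error instead of a bare
    # KeyError when the Location header is absent.
    lowered = {k.lower(): v for k, v in headers.items()}
    url = lowered.get("location")
    if url is None:
        raise PaymentError("provider response missing Location header")
    return url
```

With this shape, the missing-header case surfaces as a payment failure the caller can report, rather than an unhandled `KeyError`.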
5,323 | 26,884,610,357 | IssuesEvent | 2023-02-06 01:12:14 | PyCQA/pydocstyle | https://api.github.com/repos/PyCQA/pydocstyle | closed | Call for maintainers | call for maintainers | Hello all,
As of late, I have been the only somewhat active maintainer for pydocstyle and I am in a position where I am finding less and less time to dedicate to this project.
As a consequence, @Nurdok and I have decided to do a call for maintainers.
Ideally we would love for the new maintainer to
- be an avid user or contributor to pydocstyle or a similar project (like sibling linting projects in PyCQA)
- have a good history of involvement in open source
The immediate tasks for new maintainer(s) would be to -
- Automate the release process
- Release the current set of contributions on the master branch
- Triage the issue backlog
- Review the current set of open PRs
Longer term, there are a few milestones on -
- Making conventions a first-class concept in pydocstyle as opposed to a group of issues
- Enhance the linting capabilities for numpy and Google docstyles
for more details see https://github.com/PyCQA/pydocstyle/milestone/7
If you are interested, feel free to reach out to us on the pydocstyle #general discord channel https://discord.gg/GAt2YQEXga or comment on this issue.
| True | Call for maintainers - Hello all,
As of late, I have been the only somewhat active maintainer for pydocstyle and I am in a position where I am finding less and less time to dedicate to this project.
As a consequence, @Nurdok and I have decided to do a call for maintainers.
Ideally we would love for the new maintainer to
- be an avid user or contributor to pydocstyle or a similar project (like sibling linting projects in PyCQA)
- have a good history of involvement in open source
The immediate tasks for new maintainer(s) would be to -
- Automate the release process
- Release the current set of contributions on the master branch
- Triage the issue backlog
- Review the current set of open PRs
Longer term, there are a few milestones on -
- Making conventions a first-class concept in pydocstyle as opposed to a group of issues
- Enhance the linting capabilities for numpy and Google docstyles
for more details see https://github.com/PyCQA/pydocstyle/milestone/7
If you are interested, feel free to reach out to us on the pydocstyle #general discord channel https://discord.gg/GAt2YQEXga or comment on this issue.
| main | call for maintainers hello all as of late i have been the only somewhat active maintainer for pydocstyle and i am in a position where i am finding less and less time to dedicate to this project as a consequence nurdok and i have decided to do a call for maintainers ideally we would love for the new maintainer to be an avid user or contributor to pydocstyle or a similar project like sibling linting projects in pycqa have a good history of involvement in open source the immediate tasks for new maintainer s would be to automate the release process release the current set of contributions on the master branch triage the issue backlog review the current set of open prs longer term there are a few milestones on making conventions a first class concepts in pydocstyle as opposed to a group of issues enhance the linting capabilities for numpy and google docstyles for more details see if you are interested feel free to reach out to us on the pydocstyle general discord channel or comment on this issue | 1 |
4,977 | 25,548,282,356 | IssuesEvent | 2022-11-29 20:54:38 | centerofci/mathesar | https://api.github.com/repos/centerofci/mathesar | closed | Add client-side validation to Extract Columns form | type: enhancement work: frontend status: ready restricted: maintainers | - table name
- can't be empty
- can't be duplicate
- column name for new FK column
- can't be empty
- can't be duplicate
- list of columns to extract
- can't be empty
- can't include a column used in the FK link we're following
- can't include PK columns
- must leave at least one non-pk column left in the table
| True | Add client-side validation to Extract Columns form - - table name
- can't be empty
- can't be duplicate
- column name for new FK column
- can't be empty
- can't be duplicate
- list of columns to extract
- can't be empty
- can't include a column used in the FK link we're following
- can't include PK columns
- must leave at least one non-pk column left in the table
| main | add client side validation to extract columns form table name can t be empty can t be duplicate column name for new fk column can t be empty can t be duplicate list of columns to extract can t be empty can t include a column used in the fk link we re following can t include pk columns must leave at least one non pk column left in the table | 1 |
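The checklist above maps naturally onto a pure validation function that collects error messages, so the form can surface every failure at once. A sketch in Python for brevity (the Mathesar frontend is Svelte/TypeScript, so this is only an illustrative analogue and every name here is hypothetical):

```python
def validate_extract_form(table_name, existing_tables,
                          fk_column, existing_columns,
                          extracted, pk_columns, fk_link_column):
    """Return a list of validation errors; empty means submittable."""
    errors = []
    # Table name: non-empty and unique.
    if not table_name.strip():
        errors.append("table name can't be empty")
    elif table_name in existing_tables:
        errors.append("table name can't be a duplicate")
    # New FK column name: non-empty and unique.
    if not fk_column.strip():
        errors.append("FK column name can't be empty")
    elif fk_column in existing_columns:
        errors.append("FK column name can't be a duplicate")
    # Columns to extract: non-empty, no FK-link column, no PK columns.
    if not extracted:
        errors.append("must select at least one column to extract")
    if fk_link_column in extracted:
        errors.append("can't extract the column used in the FK link")
    if any(c in pk_columns for c in extracted):
        errors.append("can't extract PK columns")
    # At least one non-PK column must remain behind.
    remaining = [c for c in existing_columns
                 if c not in extracted and c not in pk_columns]
    if not remaining:
        errors.append("must leave at least one non-PK column in the table")
    return errors
```

An empty return value means all client-side checks passed.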
243,240 | 7,854,600,961 | IssuesEvent | 2018-06-20 21:19:46 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | opened | Treasury doesnt appear | High Priority | 
It's supposed to be here but isn't. Lodding issue?
Double check other objects too. | 1.0 | Treasury doesnt appear - 
It's supposed to be here but isn't. Lodding issue?
Double check other objects too. | non_main | treasury doesnt appear it s supposed to be here but isnt lodding issue double check other objects too | 0 |
3,303 | 12,747,072,024 | IssuesEvent | 2020-06-26 17:09:56 | DynamoRIO/dynamorio | https://api.github.com/repos/DynamoRIO/dynamorio | opened | Eliminate include of system signal.h from core/ builds | Maintainability help wanted | We've had various define conflicts and problems in the past from the system signal.h conflicting with kernel-level or other defines we need inside DR. Today we still include signal.h, but we have our own distinctly-named types like `kernel_sigcontext_t` to avoid conflicts. But we still have some shared types, including on aarch64 where `struct fpsimd_context` ends up coming from the system includes. Xref PR #4325 where I added it to our own sigcontext.h to support targeting aarch64 on x86, but it would be better to rename the type, and even better to eliminate the signal.h include. Xref the discussion at https://github.com/DynamoRIO/dynamorio/pull/4325#discussion_r446257978
| True | Eliminate include of system signal.h from core/ builds - We've had various define conflicts and problems in the past from the system signal.h conflicting with kernel-level or other defines we need inside DR. Today we still include signal.h, but we have our own distinctly-named types like `kernel_sigcontext_t` to avoid conflicts. But we still have some shared types, including on aarch64 where `struct fpsimd_context` ends up coming from the system includes. Xref PR #4325 where I added it to our own sigcontext.h to support targeting aarch64 on x86, but it would be better to rename the type, and even better to eliminate the signal.h include. Xref the discussion at https://github.com/DynamoRIO/dynamorio/pull/4325#discussion_r446257978
| main | eliminate include of system signal h from core builds we ve had various define conflicts and problems in the past from the system signal h conflicting with kernel level or other defines we need inside dr today we still include signal h but we have our own distinctly named types like kernel sigcontext t to avoid conflicts but we still have some shared types including on where struct fpsimd context ends up coming from the system includes xref pr where i added it to our own sigcontext h to support targeting on but it would better to rename the type and even better to eliminate the signal h include xref the discussion at | 1 |
66,840 | 8,052,823,986 | IssuesEvent | 2018-08-01 20:35:58 | gitcoinco/web | https://api.github.com/repos/gitcoinco/web | opened | Create "Cancel Funding" Page - Design | design-help design-principles grants | ### User Story
As a Grant funder, I want to cancel my subscription to a grant project so I can end support whenever I want to.
### Why Is this Needed
The "Fund Grant" page is needed to allow users to cancel support for a given project.
### Description
*Type*: Feature
### Definition of Done
- [ ] Page has "Are you sure you want to cancel your support for the 'project title' project?"
- [ ] Page has a "Cancel support" button
- [ ] Page has standard Gitcoin header and footer
- [ ] Design must be on brand, see creative repo for brand guide
### Additional Information
To be reviewed with @PixelantDesign @mbeacom @captnseagraves
Existing assets can be found at this PR: https://github.com/gitcoinco/web/pull/674
More information about the Dev Grants project can be found here: https://github.com/gitcoinco/web/issues/1469
Please allow the core team a few rounds of revisions. | 2.0 | Create "Cancel Funding" Page - Design - ### User Story
As a Grant funder, I want to cancel my subscription to a grant project so I can end support whenever I want to.
### Why Is this Needed
The "Fund Grant" page is needed to allow users to cancel support for a given project.
### Description
*Type*: Feature
### Definition of Done
- [ ] Page has "Are you sure you want to cancel your support for the 'project title' project?"
- [ ] Page has a "Cancel support" button
- [ ] Page has standard Gitcoin header and footer
- [ ] Design must be on brand, see creative repo for brand guide
### Additional Information
To be reviewed with @PixelantDesign @mbeacom @captnseagraves
Existing assets can be found at this PR: https://github.com/gitcoinco/web/pull/674
More information about the Dev Grants project can be found here: https://github.com/gitcoinco/web/issues/1469
Please allow the core team a few rounds of revisions. | non_main | create cancel funding page design user story as a grant funder i want to cancel my subscription to a grant project so i can end support whenever i want to why is this needed the fund grant page is needed to allow users to cancel support for a given project description type feature definition of done page has are you sure you want to cancel you support the project title project page has a cancel support button page has standard gitcoin header and footer design must be on brand see creative repo for brand guide additional information to be reviewed with pixelantdesign mbeacom captnseagraves existing assets can be found at this pr more information about the dev grants project can be found here please allow the core team a few rounds of revisions | 0 |
190,029 | 15,215,525,003 | IssuesEvent | 2021-02-17 14:31:05 | eproxus/meck | https://api.github.com/repos/eproxus/meck | closed | Expect different return values at different times | documentation | Hi,
Consider the following example. We want some mocked function to return different values in different cases. Normally, this is being done either using something like `meck:seq/1` or by calling `meck:expect/4` in-place.
The sequencing approach can be a problem if there's no determined amount of calls to be made. For example, when we have some periodic calls, or when we have a complex call tree that calls our mocked function not once or twice. We may not even care about the amount of function calls made, just the fact that its return value was always the defined one.
Calling `meck:expect/4` in place is much better from the semantics perspective but it's painfully slow, as we recompile the module every time. As a result, simple tests may start taking 5-10 seconds instead of sub-seconds.
Is it technically possible to use something like ETS tables for storing `meck:expect` table, or there were some technical limitations that made you require to recompile the module? I ended up having that ETS-based approach for some cases and now wonder if `meck:expect/4` could be changed to work this way, which is much faster, or another function like `meck:quick_expect/4` can be added as a wrapper, because this seems like a fairly common pattern.
Thanks!
| 1.0 | Expect different return values at different times - Hi,
Consider the following example. We want some mocked function to return different values in different cases. Normally, this is being done either using something like `meck:seq/1` or by calling `meck:expect/4` in-place.
The sequencing approach can be a problem if there's no determined amount of calls to be made. For example, when we have some periodic calls, or when we have a complex call tree that calls our mocked function not once or twice. We may not even care about the amount of function calls made, just the fact that its return value was always the defined one.
Calling `meck:expect/4` in place is much better from the semantics perspective but it's painfully slow, as we recompile the module every time. As a result, simple tests may start taking 5-10 seconds instead of sub-seconds.
Is it technically possible to use something like ETS tables for storing `meck:expect` table, or there were some technical limitations that made you require to recompile the module? I ended up having that ETS-based approach for some cases and now wonder if `meck:expect/4` could be changed to work this way, which is much faster, or another function like `meck:quick_expect/4` can be added as a wrapper, because this seems like a fairly common pattern.
Thanks!
| non_main | expect different return values at different times hi consider the following example we want some mocked function to return different values in different cases normally this is being done either using something like meck seq or by calling meck expect in place the sequencing approach can be a problem if there s no determined mount of calls to be made for example when we have some periodic calls or when we have a complex call tree that calls our mocked function not once or twice we may not even care about the amount of function calls made just the fact that its return value was always the defined one calling meck expect in place is much better from the semantics perspective but it s painfully slow as we recompile the module every time as a result simple tests may start taking seconds instead of sub seconds is it technically possible to use something like ets tables for storing meck expect table or there were some technical limitations that made you require to recompile the module i ended up having that ets based approach for some cases and now wonder if meck expect could be changed to work this way which is much faster or another function like meck quick expect can be added as a wrapper because this seems like a fairly common pattern thanks | 0 |
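The ETS idea in the last paragraph of that issue — storing expectations in a mutable lookup table consulted at call time, so that redefining a return value is a cheap table write instead of a module recompile — can be sketched as follows. meck itself is Erlang; this Python analogue only illustrates the mechanism, and none of these names are meck's actual API:

```python
# Expectation table: (module, function) -> return value or callable.
_expectations = {}


def quick_expect(module, function, return_value):
    # Redefining an expectation is a dict write, not a recompile.
    _expectations[(module, function)] = return_value


def mocked_call(module, function, *args):
    # Consult the table at call time, like an ETS lookup would.
    key = (module, function)
    if key not in _expectations:
        raise KeyError("no expectation set for %s:%s" % key)
    value = _expectations[key]
    return value(*args) if callable(value) else value
```

A `meck:quick_expect/4`-style wrapper built on this pattern would keep `meck:expect/4` semantics for static setups while making in-place redefinition fast enough for tests that change return values repeatedly.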
940 | 4,652,445,906 | IssuesEvent | 2016-10-03 14:02:59 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | ansible os_server module doesnot work with async_status | affects_2.1 bug_report cloud waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
module : os_server and async_status
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.1.0.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
##### SUMMARY
<!--- Explain the problem briefly -->
Tried using the os_server module with async; it fails to handle the output of the job.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
Create a playbook to boot a single instance on openstack async
<!--- Paste example playbooks or commands between quotes below -->
```
---
- name: test the os_server module on async
hosts: localhost
connection: local
gather_facts: false
tasks:
- name: "provision os_server resources"
os_server:
state: "present"
auth:
auth_url: "http://localhost:5000/v2.0/"
username: "openstackusername"
password: "openstackpassword"
project_name: "openstackprojectname"
name: "helloinstance"
image: "rhel-6.5_jeos"
key_name: "test_keypair"
api_timeout: 99999
flavor: "m1.small"
network: "testnetwork"
async: 1000
poll: 0
register: yum_sleeper
- name: 'check on fire and forget task'
async_status:
jid: "{{ yum_sleeper.ansible_job_id }}"
register: job_result
until: job_result.finished
retries: 30
```
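The playbook above fires the `os_server` task with `async`/`poll: 0` and then polls its job id via `async_status`. The failure reported further down ("Could not parse job output … No JSON object could be decoded") happens because a logging warning (`No handlers could be found for logger "keystoneauth.identity.base"`) precedes the module's JSON on stdout. A tolerant parser that skips to the first `{` recovers the payload — a hypothetical sketch, not Ansible's actual `async_wrapper` code:

```python
import json


def parse_module_output(raw):
    # Module output may be prefixed by stray log lines; locate the
    # first '{' and decode a single JSON object from there.
    start = raw.find("{")
    if start == -1:
        raise ValueError("no JSON payload in module output")
    obj, _ = json.JSONDecoder().raw_decode(raw[start:])
    return obj
```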
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
Expected job output with OpenStack server details.
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
https://gist.github.com/samvarankashyap/645de866d564eee9e2fe0fcb6c02c77e
<!--- Paste verbatim command output between quotes below -->
command:
```
ansible-playbook -vvvvvv pluck_os.yml
```
Actual output:
https://gist.github.com/samvarankashyap/645de866d564eee9e2fe0fcb6c02c77e
```
Using /etc/ansible/ansible.cfg as config file
[WARNING]: provided hosts list is empty, only localhost is available
Loaded callback default of type stdout, v2.0
PLAYBOOK: pluck_os.yml *********************************************************
1 plays in pluck_os.yml
PLAY [test the os_server module on async] **************************************
TASK [provision/deprovision os_server resources by looping on count] ***********
task path: /root/linch-pin/pluck_os.yml:7
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1470853932.03-182394085106578 `" && echo ansible-tmp-1470853932.03-182394085106578="` echo $HOME/.ansible/tmp/ansible-tmp-1470853932.03-182394085106578 `" ) && sleep 0'
<127.0.0.1> PUT /tmp/tmpF3aQ4d TO /root/.ansible/tmp/ansible-tmp-1470853932.03-182394085106578/os_server
<127.0.0.1> PUT /tmp/tmpQqJAgL TO /root/.ansible/tmp/ansible-tmp-1470853932.03-182394085106578/async_wrapper
<127.0.0.1> EXEC /bin/sh -c 'chmod -R u+x /root/.ansible/tmp/ansible-tmp-1470853932.03-182394085106578/ && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /root/.ansible/tmp/ansible-tmp-1470853932.03-182394085106578/async_wrapper 501588893815 1000 /root/.ansible/tmp/ansible-tmp-1470853932.03-182394085106578/os_server && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1470853932.03-182394085106578/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {"ansible_job_id": "501588893815.445", "changed": false, "results_file": "/root/.ansible_async/501588893815.445", "started": 1}
TASK [check on fire and forget task] *******************************************
task path: /root/linch-pin/pluck_os.yml:24
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1470853933.23-277604438370373 `" && echo ansible-tmp-1470853933.23-277604438370373="` echo $HOME/.ansible/tmp/ansible-tmp-1470853933.23-277604438370373 `" ) && sleep 0'
<127.0.0.1> PUT /tmp/tmpIkSVL9 TO /root/.ansible/tmp/ansible-tmp-1470853933.23-277604438370373/async_status
<127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1470853933.23-277604438370373/async_status; rm -rf "/root/.ansible/tmp/ansible-tmp-1470853933.23-277604438370373/" > /dev/null 2>&1 && sleep 0'
FAILED - RETRYING: TASK: check on fire and forget task (29 retries left).Result was: {"ansible_job_id": "501588893815.445", "attempts": 1, "changed": false, "finished": 0, "invocation": {"module_args": {"jid": "501588893815.445", "mode": "status"}, "module_name": "async_status"}, "results_file": "/root/.ansible_async/501588893815.445", "retries": 30, "started": 1}
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1470853938.32-242588126297954 `" && echo ansible-tmp-1470853938.32-242588126297954="` echo $HOME/.ansible/tmp/ansible-tmp-1470853938.32-242588126297954 `" ) && sleep 0'
<127.0.0.1> PUT /tmp/tmp4jd2U7 TO /root/.ansible/tmp/ansible-tmp-1470853938.32-242588126297954/async_status
<127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1470853938.32-242588126297954/async_status; rm -rf "/root/.ansible/tmp/ansible-tmp-1470853938.32-242588126297954/" > /dev/null 2>&1 && sleep 0'
FAILED - RETRYING: TASK: check on fire and forget task (28 retries left).Result was: {"ansible_job_id": "501588893815.445", "attempts": 2, "changed": false, "finished": 0, "invocation": {"module_args": {"jid": "501588893815.445", "mode": "status"}, "module_name": "async_status"}, "results_file": "/root/.ansible_async/501588893815.445", "retries": 30, "started": 1}
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1470853943.42-153837274674906 `" && echo ansible-tmp-1470853943.42-153837274674906="` echo $HOME/.ansible/tmp/ansible-tmp-1470853943.42-153837274674906 `" ) && sleep 0'
<127.0.0.1> PUT /tmp/tmpchWAZ5 TO /root/.ansible/tmp/ansible-tmp-1470853943.42-153837274674906/async_status
<127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1470853943.42-153837274674906/async_status; rm -rf "/root/.ansible/tmp/ansible-tmp-1470853943.42-153837274674906/" > /dev/null 2>&1 && sleep 0'
FAILED - RETRYING: TASK: check on fire and forget task (27 retries left).Result was: {"ansible_job_id": "501588893815.445", "attempts": 3, "changed": false, "finished": 0, "invocation": {"module_args": {"jid": "501588893815.445", "mode": "status"}, "module_name": "async_status"}, "results_file": "/root/.ansible_async/501588893815.445", "retries": 30, "started": 1}
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1470853948.51-150710275020038 `" && echo ansible-tmp-1470853948.51-150710275020038="` echo $HOME/.ansible/tmp/ansible-tmp-1470853948.51-150710275020038 `" ) && sleep 0'
<127.0.0.1> PUT /tmp/tmpTFyDJs TO /root/.ansible/tmp/ansible-tmp-1470853948.51-150710275020038/async_status
<127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1470853948.51-150710275020038/async_status; rm -rf "/root/.ansible/tmp/ansible-tmp-1470853948.51-150710275020038/" > /dev/null 2>&1 && sleep 0'
FAILED - RETRYING: TASK: check on fire and forget task (26 retries left).Result was: {"ansible_job_id": "501588893815.445", "attempts": 4, "changed": false, "finished": 0, "invocation": {"module_args": {"jid": "501588893815.445", "mode": "status"}, "module_name": "async_status"}, "results_file": "/root/.ansible_async/501588893815.445", "retries": 30, "started": 1}
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1470853953.6-191645118737367 `" && echo ansible-tmp-1470853953.6-191645118737367="` echo $HOME/.ansible/tmp/ansible-tmp-1470853953.6-191645118737367 `" ) && sleep 0'
<127.0.0.1> PUT /tmp/tmpFrI9Bt TO /root/.ansible/tmp/ansible-tmp-1470853953.6-191645118737367/async_status
<127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1470853953.6-191645118737367/async_status; rm -rf "/root/.ansible/tmp/ansible-tmp-1470853953.6-191645118737367/" > /dev/null 2>&1 && sleep 0'
fatal: [localhost]: FAILED! => {"ansible_job_id": "501588893815.445", "changed": false, "failed": true, "finished": 1, "invocation": {"module_args": {"jid": "501588893815.445", "mode": "status"}, "module_name": "async_status"}, "msg": "Could not parse job output: No handlers could be found for logger \"keystoneauth.identity.base\"\n\n{\"invocation\": {\"module_args\": {\"auth_type\": null, \"availability_zone\": null, \"image\": \"rhel-6.5_jeos\", \"image_exclude\": \"(deprecated)\", \"flavor_include\": null, \"meta\": null, \"flavor\": \"m1.small\", \"cloud\": null, \"scheduler_hints\": null, \"boot_from_volume\": false, \"userdata\": null, \"network\": \"VALUE_SPECIFIED_IN_NO_LOG_PARAMETER\", \"nics\": [], \"floating_ips\": null, \"flavor_ram\": null, \"volume_size\": false, \"state\": \"present\", \"auto_ip\": true, \"security_groups\": [\"default\"], \"config_drive\": false, \"volumes\": [], \"key_name\": \"ci-factory\", \"api_timeout\": 99999, \"auth\": {\"username\": \"VALUE_SPECIFIED_IN_NO_LOG_PARAMETER\", \"project_name\": \"VALUE_SPECIFIED_IN_NO_LOG_PARAMETER\", \"password\": \"VALUE_SPECIFIED_IN_NO_LOG_PARAMETER\", \"auth_url\": \"VALUE_SPECIFIED_IN_NO_LOG_PARAMETER\"}, \"endpoint_type\": \"public\", \"boot_volume\": null, \"key\": null, \"cacert\": null, \"wait\": true, \"name\": \"helloinstance\", \"region_name\": null, \"timeout\": 180, \"cert\": null, \"terminate_volume\": false, \"verify\": true, \"floating_ip_pools\": null}}, \"openstack\": {\"OS-EXT-STS:task_state\": null, \"addresses\": {\"e2e-openstack\": [{\"OS-EXT-IPS-MAC:mac_addr\": \"fa:16:3e:11:38:38\", \"version\": 4, \"addr\": \"172.16.100.97\", \"OS-EXT-IPS:type\": \"fixed\"}, {\"OS-EXT-IPS-MAC:mac_addr\": \"fa:16:3e:11:38:38\", \"version\": 4, \"addr\": \"10.8.183.233\", \"OS-EXT-IPS:type\": \"floating\"}]}, \"image\": {\"id\": \"3bcfd17c-6bf0-4134-ae7f-80bded8b46fd\", \"name\": \"rhel-6.5_jeos\"}, \"OS-EXT-STS:vm_state\": \"active\", \"OS-SRV-USG:launched_at\": 
\"2016-08-10T18:32:23.000000\", \"NAME_ATTR\": \"name\", \"flavor\": {\"id\": \"2\", \"name\": \"m1.small\"}, \"az\": \"nova\", \"id\": \"b66ba857-5740-4c13-9e70-191883398af8\", \"cloud\": \"defaults\", \"user_id\": \"9c770dbddda444799e627004fee26e0a\", \"OS-DCF:diskConfig\": \"MANUAL\", \"networks\": {\"e2e-openstack\": [\"172.16.100.97\", \"10.8.183.233\"]}, \"accessIPv4\": \"10.8.183.233\", \"accessIPv6\": \"\", \"security_groups\": [{\"id\": \"df1a797b-009c-4685-a7c9-43863c36d653\", \"name\": \"default\", \"security_group_rules\": [{\"direction\": \"ingress\", \"protocol\": null, \"remote_ip_prefix\": null, \"port_range_max\": null, \"security_group_id\": \"df1a797b-009c-4685-a7c9-43863c36d653\", \"port_range_min\": null, \"ethertype\": \"IPv4\", \"id\": \"ade9fcb9-14c1-4975-a04d-6007f80005c1\"}, {\"direction\": \"ingress\", \"protocol\": null, \"remote_ip_prefix\": null, \"port_range_max\": null, \"security_group_id\": \"df1a797b-009c-4685-a7c9-43863c36d653\", \"port_range_min\": null, \"ethertype\": \"IPv4\", \"id\": \"d03e4bae-24b6-415a-a30c-ee0d060f566f\"}], \"description\": \"Default security group\"}], \"key_name\": \"ci-factory\", \"progress\": 0, \"OS-EXT-STS:power_state\": 1, \"OS-EXT-AZ:availability_zone\": \"nova\", \"metadata\": {}, \"status\": \"ACTIVE\", \"updated\": \"2016-08-10T18:32:23Z\", \"hostId\": \"be958de354ca4b72bb0a02694148f8d7f2d5ba965cb49e864fe63d37\", \"HUMAN_ID\": true, \"OS-SRV-USG:terminated_at\": null, \"public_v4\": \"10.8.183.233\", \"public_v6\": \"\", \"private_v4\": \"172.16.100.97\", \"interface_ip\": \"10.8.183.233\", \"name\": \"helloinstance\", \"created\": \"2016-08-10T18:32:16Z\", \"tenant_id\": \"f1dda47890754241a3e111f9b7394707\", \"region\": \"\", \"adminPass\": \"tX4SZ3gV85Sb\", \"os-extended-volumes:volumes_attached\": [], \"volumes\": [], \"config_drive\": \"\", \"human_id\": \"helloinstance\"}, \"changed\": true, \"id\": \"b66ba857-5740-4c13-9e70-191883398af8\", \"server\": {\"OS-EXT-STS:task_state\": null, 
\"addresses\": {\"e2e-openstack\": [{\"OS-EXT-IPS-MAC:mac_addr\": \"fa:16:3e:11:38:38\", \"version\": 4, \"addr\": \"172.16.100.97\", \"OS-EXT-IPS:type\": \"fixed\"}, {\"OS-EXT-IPS-MAC:mac_addr\": \"fa:16:3e:11:38:38\", \"version\": 4, \"addr\": \"10.8.183.233\", \"OS-EXT-IPS:type\": \"floating\"}]}, \"image\": {\"id\": \"3bcfd17c-6bf0-4134-ae7f-80bded8b46fd\", \"name\": \"rhel-6.5_jeos\"}, \"OS-EXT-STS:vm_state\": \"active\", \"OS-SRV-USG:launched_at\": \"2016-08-10T18:32:23.000000\", \"NAME_ATTR\": \"name\", \"flavor\": {\"id\": \"2\", \"name\": \"m1.small\"}, \"az\": \"nova\", \"id\": \"b66ba857-5740-4c13-9e70-191883398af8\", \"cloud\": \"defaults\", \"user_id\": \"9c770dbddda444799e627004fee26e0a\", \"OS-DCF:diskConfig\": \"MANUAL\", \"networks\": {\"e2e-openstack\": [\"172.16.100.97\", \"10.8.183.233\"]}, \"accessIPv4\": \"10.8.183.233\", \"accessIPv6\": \"\", \"security_groups\": [{\"id\": \"df1a797b-009c-4685-a7c9-43863c36d653\", \"name\": \"default\", \"security_group_rules\": [{\"direction\": \"ingress\", \"protocol\": null, \"remote_ip_prefix\": null, \"port_range_max\": null, \"security_group_id\": \"df1a797b-009c-4685-a7c9-43863c36d653\", \"port_range_min\": null, \"ethertype\": \"IPv4\", \"id\": \"ade9fcb9-14c1-4975-a04d-6007f80005c1\"}, {\"direction\": \"ingress\", \"protocol\": null, \"remote_ip_prefix\": null, \"port_range_max\": null, \"security_group_id\": \"df1a797b-009c-4685-a7c9-43863c36d653\", \"port_range_min\": null, \"ethertype\": \"IPv4\", \"id\": \"d03e4bae-24b6-415a-a30c-ee0d060f566f\"}], \"description\": \"Default security group\"}], \"key_name\": \"ci-factory\", \"progress\": 0, \"OS-EXT-STS:power_state\": 1, \"OS-EXT-AZ:availability_zone\": \"nova\", \"metadata\": {}, \"status\": \"ACTIVE\", \"updated\": \"2016-08-10T18:32:23Z\", \"hostId\": \"be958de354ca4b72bb0a02694148f8d7f2d5ba965cb49e864fe63d37\", \"HUMAN_ID\": true, \"OS-SRV-USG:terminated_at\": null, \"public_v4\": \"10.8.183.233\", \"public_v6\": \"\", \"private_v4\": 
\"172.16.100.97\", \"interface_ip\": \"10.8.183.233\", \"name\": \"helloinstance\", \"created\": \"2016-08-10T18:32:16Z\", \"tenant_id\": \"f1dda47890754241a3e111f9b7394707\", \"region\": \"\", \"adminPass\": \"tX4SZ3gV85Sb\", \"os-extended-volumes:volumes_attached\": [], \"volumes\": [], \"config_drive\": \"\", \"human_id\": \"helloinstance\"}}\n{\"msg\": \"Traceback (most recent call last):\\n File \\\"/root/.ansible/tmp/ansible-tmp-1470853932.03-182394085106578/async_wrapper\\\", line 89, in _run_module\\n File \\\"/usr/lib64/python2.7/json/__init__.py\\\", line 339, in loads\\n return _default_decoder.decode(s)\\n File \\\"/usr/lib64/python2.7/json/decoder.py\\\", line 364, in decode\\n obj, end = self.raw_decode(s, idx=_w(s, 0).end())\\n File \\\"/usr/lib64/python2.7/json/decoder.py\\\", line 382, in raw_decode\\n raise ValueError(\\\"No JSON object could be decoded\\\")\\nValueError: No JSON object could be decoded\\n\", \"failed\": 1, \"cmd\": \"/root/.ansible/tmp/ansible-tmp-1470853932.03-182394085106578/os_server\", \"data\": \"No handlers could be found for logger \\\"keystoneauth.identity.base\\\"\\n\\n{\\\"invocation\\\": {\\\"module_args\\\": {\\\"auth_type\\\": null, \\\"availability_zone\\\": null, \\\"image\\\": \\\"rhel-6.5_jeos\\\", \\\"image_exclude\\\": \\\"(deprecated)\\\", \\\"flavor_include\\\": null, \\\"meta\\\": null, \\\"flavor\\\": \\\"m1.small\\\", \\\"cloud\\\": null, \\\"scheduler_hints\\\": null, \\\"boot_from_volume\\\": false, \\\"userdata\\\": null, \\\"network\\\": \\\"VALUE_SPECIFIED_IN_NO_LOG_PARAMETER\\\", \\\"nics\\\": [], \\\"floating_ips\\\": null, \\\"flavor_ram\\\": null, \\\"volume_size\\\": false, \\\"state\\\": \\\"present\\\", \\\"auto_ip\\\": true, \\\"security_groups\\\": [\\\"default\\\"], \\\"config_drive\\\": false, \\\"volumes\\\": [], \\\"key_name\\\": \\\"ci-factory\\\", \\\"api_timeout\\\": 99999, \\\"auth\\\": {\\\"username\\\": \\\"VALUE_SPECIFIED_IN_NO_LOG_PARAMETER\\\", \\\"project_name\\\": 
\\\"VALUE_SPECIFIED_IN_NO_LOG_PARAMETER\\\", \\\"password\\\": \\\"VALUE_SPECIFIED_IN_NO_LOG_PARAMETER\\\", \\\"auth_url\\\": \\\"VALUE_SPECIFIED_IN_NO_LOG_PARAMETER\\\"}, \\\"endpoint_type\\\": \\\"public\\\", \\\"boot_volume\\\": null, \\\"key\\\": null, \\\"cacert\\\": null, \\\"wait\\\": true, \\\"name\\\": \\\"helloinstance\\\", \\\"region_name\\\": null, \\\"timeout\\\": 180, \\\"cert\\\": null, \\\"terminate_volume\\\": false, \\\"verify\\\": true, \\\"floating_ip_pools\\\": null}}, \\\"openstack\\\": {\\\"OS-EXT-STS:task_state\\\": null, \\\"addresses\\\": {\\\"e2e-openstack\\\": [{\\\"OS-EXT-IPS-MAC:mac_addr\\\": \\\"fa:16:3e:11:38:38\\\", \\\"version\\\": 4, \\\"addr\\\": \\\"172.16.100.97\\\", \\\"OS-EXT-IPS:type\\\": \\\"fixed\\\"}, {\\\"OS-EXT-IPS-MAC:mac_addr\\\": \\\"fa:16:3e:11:38:38\\\", \\\"version\\\": 4, \\\"addr\\\": \\\"10.8.183.233\\\", \\\"OS-EXT-IPS:type\\\": \\\"floating\\\"}]}, \\\"image\\\": {\\\"id\\\": \\\"3bcfd17c-6bf0-4134-ae7f-80bded8b46fd\\\", \\\"name\\\": \\\"rhel-6.5_jeos\\\"}, \\\"OS-EXT-STS:vm_state\\\": \\\"active\\\", \\\"OS-SRV-USG:launched_at\\\": \\\"2016-08-10T18:32:23.000000\\\", \\\"NAME_ATTR\\\": \\\"name\\\", \\\"flavor\\\": {\\\"id\\\": \\\"2\\\", \\\"name\\\": \\\"m1.small\\\"}, \\\"az\\\": \\\"nova\\\", \\\"id\\\": \\\"b66ba857-5740-4c13-9e70-191883398af8\\\", \\\"cloud\\\": \\\"defaults\\\", \\\"user_id\\\": \\\"9c770dbddda444799e627004fee26e0a\\\", \\\"OS-DCF:diskConfig\\\": \\\"MANUAL\\\", \\\"networks\\\": {\\\"e2e-openstack\\\": [\\\"172.16.100.97\\\", \\\"10.8.183.233\\\"]}, \\\"accessIPv4\\\": \\\"10.8.183.233\\\", \\\"accessIPv6\\\": \\\"\\\", \\\"security_groups\\\": [{\\\"id\\\": \\\"df1a797b-009c-4685-a7c9-43863c36d653\\\", \\\"name\\\": \\\"default\\\", \\\"security_group_rules\\\": [{\\\"direction\\\": \\\"ingress\\\", \\\"protocol\\\": null, \\\"remote_ip_prefix\\\": null, \\\"port_range_max\\\": null, \\\"security_group_id\\\": \\\"df1a797b-009c-4685-a7c9-43863c36d653\\\", \\\"port_range_min\\\": 
null, \\\"ethertype\\\": \\\"IPv4\\\", \\\"id\\\": \\\"ade9fcb9-14c1-4975-a04d-6007f80005c1\\\"}, {\\\"direction\\\": \\\"ingress\\\", \\\"protocol\\\": null, \\\"remote_ip_prefix\\\": null, \\\"port_range_max\\\": null, \\\"security_group_id\\\": \\\"df1a797b-009c-4685-a7c9-43863c36d653\\\", \\\"port_range_min\\\": null, \\\"ethertype\\\": \\\"IPv4\\\", \\\"id\\\": \\\"d03e4bae-24b6-415a-a30c-ee0d060f566f\\\"}], \\\"description\\\": \\\"Default security group\\\"}], \\\"key_name\\\": \\\"ci-factory\\\", \\\"progress\\\": 0, \\\"OS-EXT-STS:power_state\\\": 1, \\\"OS-EXT-AZ:availability_zone\\\": \\\"nova\\\", \\\"metadata\\\": {}, \\\"status\\\": \\\"ACTIVE\\\", \\\"updated\\\": \\\"2016-08-10T18:32:23Z\\\", \\\"hostId\\\": \\\"be958de354ca4b72bb0a02694148f8d7f2d5ba965cb49e864fe63d37\\\", \\\"HUMAN_ID\\\": true, \\\"OS-SRV-USG:terminated_at\\\": null, \\\"public_v4\\\": \\\"10.8.183.233\\\", \\\"public_v6\\\": \\\"\\\", \\\"private_v4\\\": \\\"172.16.100.97\\\", \\\"interface_ip\\\": \\\"10.8.183.233\\\", \\\"name\\\": \\\"helloinstance\\\", \\\"created\\\": \\\"2016-08-10T18:32:16Z\\\", \\\"tenant_id\\\": \\\"f1dda47890754241a3e111f9b7394707\\\", \\\"region\\\": \\\"\\\", \\\"adminPass\\\": \\\"tX4SZ3gV85Sb\\\", \\\"os-extended-volumes:volumes_attached\\\": [], \\\"volumes\\\": [], \\\"config_drive\\\": \\\"\\\", \\\"human_id\\\": \\\"helloinstance\\\"}, \\\"changed\\\": true, \\\"id\\\": \\\"b66ba857-5740-4c13-9e70-191883398af8\\\", \\\"server\\\": {\\\"OS-EXT-STS:task_state\\\": null, \\\"addresses\\\": {\\\"e2e-openstack\\\": [{\\\"OS-EXT-IPS-MAC:mac_addr\\\": \\\"fa:16:3e:11:38:38\\\", \\\"version\\\": 4, \\\"addr\\\": \\\"172.16.100.97\\\", \\\"OS-EXT-IPS:type\\\": \\\"fixed\\\"}, {\\\"OS-EXT-IPS-MAC:mac_addr\\\": \\\"fa:16:3e:11:38:38\\\", \\\"version\\\": 4, \\\"addr\\\": \\\"10.8.183.233\\\", \\\"OS-EXT-IPS:type\\\": \\\"floating\\\"}]}, \\\"image\\\": {\\\"id\\\": \\\"3bcfd17c-6bf0-4134-ae7f-80bded8b46fd\\\", \\\"name\\\": \\\"rhel-6.5_jeos\\\"}, 
\\\"OS-EXT-STS:vm_state\\\": \\\"active\\\", \\\"OS-SRV-USG:launched_at\\\": \\\"2016-08-10T18:32:23.000000\\\", \\\"NAME_ATTR\\\": \\\"name\\\", \\\"flavor\\\": {\\\"id\\\": \\\"2\\\", \\\"name\\\": \\\"m1.small\\\"}, \\\"az\\\": \\\"nova\\\", \\\"id\\\": \\\"b66ba857-5740-4c13-9e70-191883398af8\\\", \\\"cloud\\\": \\\"defaults\\\", \\\"user_id\\\": \\\"9c770dbddda444799e627004fee26e0a\\\", \\\"OS-DCF:diskConfig\\\": \\\"MANUAL\\\", \\\"networks\\\": {\\\"e2e-openstack\\\": [\\\"172.16.100.97\\\", \\\"10.8.183.233\\\"]}, \\\"accessIPv4\\\": \\\"10.8.183.233\\\", \\\"accessIPv6\\\": \\\"\\\", \\\"security_groups\\\": [{\\\"id\\\": \\\"df1a797b-009c-4685-a7c9-43863c36d653\\\", \\\"name\\\": \\\"default\\\", \\\"security_group_rules\\\": [{\\\"direction\\\": \\\"ingress\\\", \\\"protocol\\\": null, \\\"remote_ip_prefix\\\": null, \\\"port_range_max\\\": null, \\\"security_group_id\\\": \\\"df1a797b-009c-4685-a7c9-43863c36d653\\\", \\\"port_range_min\\\": null, \\\"ethertype\\\": \\\"IPv4\\\", \\\"id\\\": \\\"ade9fcb9-14c1-4975-a04d-6007f80005c1\\\"}, {\\\"direction\\\": \\\"ingress\\\", \\\"protocol\\\": null, \\\"remote_ip_prefix\\\": null, \\\"port_range_max\\\": null, \\\"security_group_id\\\": \\\"df1a797b-009c-4685-a7c9-43863c36d653\\\", \\\"port_range_min\\\": null, \\\"ethertype\\\": \\\"IPv4\\\", \\\"id\\\": \\\"d03e4bae-24b6-415a-a30c-ee0d060f566f\\\"}], \\\"description\\\": \\\"Default security group\\\"}], \\\"key_name\\\": \\\"ci-factory\\\", \\\"progress\\\": 0, \\\"OS-EXT-STS:power_state\\\": 1, \\\"OS-EXT-AZ:availability_zone\\\": \\\"nova\\\", \\\"metadata\\\": {}, \\\"status\\\": \\\"ACTIVE\\\", \\\"updated\\\": \\\"2016-08-10T18:32:23Z\\\", \\\"hostId\\\": \\\"be958de354ca4b72bb0a02694148f8d7f2d5ba965cb49e864fe63d37\\\", \\\"HUMAN_ID\\\": true, \\\"OS-SRV-USG:terminated_at\\\": null, \\\"public_v4\\\": \\\"10.8.183.233\\\", \\\"public_v6\\\": \\\"\\\", \\\"private_v4\\\": \\\"172.16.100.97\\\", \\\"interface_ip\\\": \\\"10.8.183.233\\\", 
\\\"name\\\": \\\"helloinstance\\\", \\\"created\\\": \\\"2016-08-10T18:32:16Z\\\", \\\"tenant_id\\\": \\\"f1dda47890754241a3e111f9b7394707\\\", \\\"region\\\": \\\"\\\", \\\"adminPass\\\": \\\"tX4SZ3gV85Sb\\\", \\\"os-extended-volumes:volumes_attached\\\": [], \\\"volumes\\\": [], \\\"config_drive\\\": \\\"\\\", \\\"human_id\\\": \\\"helloinstance\\\"}}\\n\", \"ansible_job_id\": \"501588893815.445\"}", "results_file": "/root/.ansible_async/501588893815.445", "started": 1}
NO MORE HOSTS LEFT *************************************************************
to retry, use: --limit @pluck_os.retry
PLAY RECAP *********************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=1
```
name helloinstance region name null timeout cert null terminate volume false verify true floating ip pools null openstack os ext sts task state null addresses openstack image id name rhel jeos os ext sts vm state active os srv usg launched at name attr name flavor id name small az nova id cloud defaults user id os dcf diskconfig manual networks openstack security groups description default security group key name ci factory progress os ext sts power state os ext az availability zone nova metadata status active updated hostid human id true os srv usg terminated at null public public private interface ip name helloinstance created tenant id region adminpass os extended volumes volumes attached volumes config drive human id helloinstance changed true id server os ext sts task state null addresses openstack image id name rhel jeos os ext sts vm state active os srv usg launched at name attr name flavor id name small az nova id cloud defaults user id os dcf diskconfig manual networks openstack security groups description default security group key name ci factory progress os ext sts power state os ext az availability zone nova metadata status active updated hostid human id true os srv usg terminated at null public public private interface ip name helloinstance created tenant id region adminpass os extended volumes volumes attached volumes config drive human id helloinstance n msg traceback most recent call last n file root ansible tmp ansible tmp async wrapper line in run module n file usr json init py line in loads n return default decoder decode s n file usr json decoder py line in decode n obj end self raw decode s idx w s end n file usr json decoder py line in raw decode n raise valueerror no json object could be decoded nvalueerror no json object could be decoded n failed cmd root ansible tmp ansible tmp os server data no handlers could be found for logger keystoneauth identity base n n invocation module args auth type null availability zone null image rhel jeos 
image exclude deprecated flavor include null meta null flavor small cloud null scheduler hints null boot from volume false userdata null network value specified in no log parameter nics floating ips null flavor ram null volume size false state present auto ip true security groups config drive false volumes key name ci factory api timeout auth username value specified in no log parameter project name value specified in no log parameter password value specified in no log parameter auth url value specified in no log parameter endpoint type public boot volume null key null cacert null wait true name helloinstance region name null timeout cert null terminate volume false verify true floating ip pools null openstack os ext sts task state null addresses openstack image id name rhel jeos os ext sts vm state active os srv usg launched at name attr name flavor id name small az nova id cloud defaults user id os dcf diskconfig manual networks openstack security groups description default security group key name ci factory progress os ext sts power state os ext az availability zone nova metadata status active updated hostid human id true os srv usg terminated at null public public private interface ip name helloinstance created tenant id region adminpass os extended volumes volumes attached volumes config drive human id helloinstance changed true id server os ext sts task state null addresses openstack image id name rhel jeos os ext sts vm state active os srv usg launched at name attr name flavor id name small az nova id cloud defaults user id os dcf diskconfig manual networks openstack security groups description default security group key name ci factory progress os ext sts power state os ext az availability zone nova metadata status active updated hostid human id true os srv usg terminated at null public public private interface ip name helloinstance created tenant id region adminpass os extended volumes volumes attached volumes config drive human id helloinstance n ansible 
job id results file root ansible async started no more hosts left to retry use limit pluck os retry play recap localhost ok changed unreachable failed | 1 |
756,959 | 26,490,809,183 | IssuesEvent | 2023-01-17 22:27:09 | zulip/zulip | https://api.github.com/repos/zulip/zulip | closed | Fix browser caching of images when using S3 upload backend | bug area: uploads priority: high post release | Apparently, the way we securely handle user-uploaded images on zulipchat.com prevents proper browser caching from occurring. As described in our [security model doc](https://zulip.readthedocs.io/en/latest/production/security-model.html#user-uploaded-content), when serving uploaded files, we serve a redirect to a randomly generated temporary URL in S3, to ensure that uploaded files can only be accessed by an active Zulip account with the appropriate credentials.
Unfortunately, this means that when one reloads a browser window, we get served a redirect to a different temporary S3 URL, and there's no way for the browser to know that it already has a copy of the file, even though the user-facing URLs for uploaded files (that serve the redirects) are in fact immutable.
I'm not sure if there's an architecturally sound way to fix this without requiring a Zulip server that can check cookies to be an intermediary for every request for uploaded files, which we could do, but feels wasteful. @andersk any ideas?
https://stackoverflow.com/questions/5172630/how-do-i-display-protected-amazon-s3-images-on-my-secure-site-using-php I think describes our scenario and has no satisfying answer. Maybe we can do something cool with service workers (https://developers.google.com/web/ilt/pwa/lab-caching-files-with-service-worker)? I haven't played with those much, but I think a tiny bit of code that for `/user_uploads` URLs resolves the redirect and caches based on the pre-redirect URL might work. | 1.0 | Fix browser caching of images when using S3 upload backend - Apparently, the way we securely handle user-uploaded images on zulipchat.com prevents proper browser caching from occurring. As described in our [security model doc](https://zulip.readthedocs.io/en/latest/production/security-model.html#user-uploaded-content), when serving uploaded files, we serve a redirect to a randomly generated temporary URL in S3, to ensure that uploaded files can only be accessed by an active Zulip account with the appropriate credentials.
Unfortunately, this means that when one reloads a browser window, we get served a redirect to a different temporary S3 URL, and there's no way for the browser to know that it already has a copy of the file, even though the user-facing URLs for uploaded files (that serve the redirects) are in fact immutable.
I'm not sure if there's an architecturally sound way to fix this without requiring a Zulip server that can check cookies to be an intermediary for every request for uploaded files, which we could do, but feels wasteful. @andersk any ideas?
https://stackoverflow.com/questions/5172630/how-do-i-display-protected-amazon-s3-images-on-my-secure-site-using-php I think describes our scenario and has no satisfying answer. Maybe we can do something cool with service workers (https://developers.google.com/web/ilt/pwa/lab-caching-files-with-service-worker)? I haven't played with those much, but I think a tiny bit of code that for `/user_uploads` URLs resolves the redirect and caches based on the pre-redirect URL might work. | non_main | fix browser caching of images when using upload backend apparently the way we securely handle user uploaded images on zulipchat com prevents proper browser caching from occurring as described in our when serving uploaded files we serve a redirect to a randomly generated temporary url in to ensure that uploaded files can only be accessed by an active zulip account with the appropriate credentials unfortunately this means that when one reloads a browser window we get served a redirect to a different temporary url and there s no way for the browser to know that it already has a copy of the file even though the user facing urls for uploaded files that serve the redirects are in fact immutable i m not sure if there s an architecturally sound way to fix this without requiring a zulip server that can check cookies to be an intermediary for every request for uploaded files which we could do but feels wasteful andersk any ideas i think describes our scenario and has no satisfying answer maybe we can do something cool with service workers i haven t played with those much but i think a tiny bit of code that for user uploads urls resolves the redirect and caches based on the pre redirect url might work | 0 |
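The caching problem described in the Zulip record above, and the service-worker idea it floats, can be sketched with a small stdlib-only Python simulation. The signing scheme below is a made-up stand-in (not S3's real presigning algorithm, and not Zulip's actual code): it only shows why a cache keyed on the final redirected URL never hits, while a cache keyed on the stable `/user_uploads` path does.

```python
import hashlib
import hmac

SECRET = b"demo-secret"  # hypothetical signing key, for illustration only

def temporary_url(path: str, expires_at: int) -> str:
    """Simulate an expiring signed URL: the signature (and so the whole
    URL) changes whenever the expiry timestamp changes."""
    msg = f"{path}:{expires_at}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"https://bucket.s3.example.com{path}?Expires={expires_at}&Signature={sig}"

def serve_upload(path: str, now: int) -> str:
    """Stand-in for the app server: redirect to a URL valid for 60 seconds."""
    return temporary_url(path, expires_at=now + 60)

# Two page loads a few seconds apart redirect to different URLs, so a
# browser cache keyed on the final URL can never reuse the first download.
first = serve_upload("/user_uploads/abc/photo.png", now=1_000)
second = serve_upload("/user_uploads/abc/photo.png", now=1_005)
assert first != second

# A cache keyed on the stable pre-redirect path (what a service worker
# could implement for /user_uploads URLs) sidesteps this:
_cache: dict[str, bytes] = {}

def fetch_with_cache(path: str, now: int) -> bytes:
    if path not in _cache:
        # Imagine resolving the redirect and downloading the object here;
        # 'now' only matters on this one cache miss.
        _cache[path] = b"bytes of " + path.encode()
    return _cache[path]

assert fetch_with_cache("/user_uploads/abc/photo.png", 1_000) == \
       fetch_with_cache("/user_uploads/abc/photo.png", 1_005)
```

A real fix along these lines would live in a service worker (JavaScript) intercepting fetches for `/user_uploads/...`; the Python above only models the cache-key logic.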
5,012 | 25,759,507,407 | IssuesEvent | 2022-12-08 19:11:20 | aws/aws-sam-cli | https://api.github.com/repos/aws/aws-sam-cli | closed | How to access /var/runtime/UserFunction.js? | stage/needs-investigation maintainer/need-followup | <!-- Make sure we don't have an existing Issue that reports the bug you are seeing (both open and closed). -->
### Describe your idea/feature/enhancement
I got a cryptic error (`await` needs to be inside an `async` function), but the error only traces back to `UserFunction.js`. How can I access this file so I can locate the exact problem?
### Proposal
Show how it is done, and probably add a section to the documentation.
Thanks! | True | How to access /var/runtime/UserFunction.js? - <!-- Make sure we don't have an existing Issue that reports the bug you are seeing (both open and closed). -->
### Describe your idea/feature/enhancement
I got a cryptic error (`await` needs to be inside an `async` function), but the error only traces back to `UserFunction.js`. How can I access this file so I can locate the exact problem?
### Proposal
Show how it is done, and probably add a section to the documentation.
Thanks! | main | how to access var runtime userfunction js describe your idea feature enhancement i got a mystic error await needs to be in async but error only traces back to userfunction js how can i access this file so i can locate the exact problem proposal show how it is done and probably add a section in document thanks | 1 |
4,709 | 24,270,832,416 | IssuesEvent | 2022-09-28 10:07:35 | mozilla/foundation.mozilla.org | https://api.github.com/repos/mozilla/foundation.mozilla.org | closed | SEO | Unlocalized pages being published | engineering Maintain | # Description
Mozilla.org currently localizes content using methods that make it difficult for Google to find our non-en content and to understand which countries it is intended to serve.
**Domain Redirect**
We respond to requests for the domain name, i.e. https://www.mozilla.org/, with a redirect to a localized version of the site, based on the IP address of the requestor. Please see the HTTP viewer results here to see a redirection to en-US without a browser language setting.
Global navigation links to other Mozilla subdomains, e.g. from the www site to https://foundation.mozilla.org/ in global navigation, are not localized, and again rely on a redirect to find localized content.
**SEO Recommendation**
1. Do not auto-select a language and country for a user. Allow the user to choose their language and location.
a) Serve a static page at https://www.mozilla.org/ with static links to all of the regional Mozilla sites. The …/locales/ page content would be ideal. Crawlers will be able to find all of the sites by following the links, and we will present no unintended prioritization of en-US over all other languages and regions.
b) Use a pop-up to suggest which localized site we think will be best for the user, based on their IP or language setting. Allow the user to select from other choices if we offer the wrong language.
2. Use correctly localized links in all navigation. Do not use redirection to find localized content.
a) Update all links between subdomains to use the appropriate localization string.
**Inconsistent Translation**
It appears that Mozilla has only localized some of the content for some of the languages that we wish to support (e.g. ach, gd, is). Navigation links in those non-English locales use English anchor text, and some localized pages link to unlocalized pages from the global navigation or footer.
To see a link to a localized URL that does not exist and results in a redirection, please see HTTP viewer results here.
**SEO Recommendation**
1. Remove all links to unlocalized pages.
2. Translate all English copy on non-EN pages.
3. All English-language pages that were previously published on non-English URLs should be deleted. Those URLs should return a 404 “File Not Found” server status code, e.g. …/ach/firefox/privacy/, which Google indexes with en-US content.
# Acceptance criteria
- [ ] Add criteria here
# Dev tasks
- [ ] Add dev tasks here | True | SEO | Unlocalized pages being published - # Description
Mozilla.org currently localizes content using methods that make it difficult for Google to find our non-en content and to understand which countries it is intended to serve.
**Domain Redirect**
We respond to requests for the domain name, i.e. https://www.mozilla.org/, with a redirect to a localized version of the site, based on the IP address of the requestor. Please see the HTTP viewer results here to see a redirection to en-US without a browser language setting.
Global navigation links to other Mozilla subdomains, e.g. from the www site to https://foundation.mozilla.org/ in global navigation, are not localized, and again rely on a redirect to find localized content.
**SEO Recommendation**
1. Do not auto-select a language and country for a user. Allow the user to choose their language and location.
a) Serve a static page at https://www.mozilla.org/ with static links to all of the regional Mozilla sites. The …/locales/ page content would be ideal. Crawlers will be able to find all of the sites by following the links, and we will present no unintended prioritization of en-US over all other languages and regions.
b) Use a pop-up to suggest which localized site we think will be best for the user, based on their IP or language setting. Allow the user to select from other choices if we offer the wrong language.
2. Use correctly localized links in all navigation. Do not use redirection to find localized content.
a) Update all links between subdomains to use the appropriate localization string.
**Inconsistent Translation**
It appears that Mozilla has only localized some of the content for some of the languages that we wish to support (e.g. ach, gd, is). Navigation links in those non-English locales use English anchor text, and some localized pages link to unlocalized pages from the global navigation or footer.
To see a link to a localized URL that does not exist and results in a redirection, please see HTTP viewer results here.
**SEO Recommendation**
1. Remove all links to unlocalized pages.
2. Translate all English copy on non-EN pages.
3. All English-language pages that were previously published on non-English URLs should be deleted. Those URLs should return a 404 “File Not Found” server status code, e.g. …/ach/firefox/privacy/, which Google indexes with en-US content.
# Acceptance criteria
- [ ] Add criteria here
# Dev tasks
- [ ] Add dev tasks here | main | seo unlocalized pages being published description mozilla org currently localizes content using methods that make it difficult for google to find our non en content and to understand which countries it is intended to serve domain redirect we respond to requests for the domain name i e with a redirect to a localized version of the site based on the ip address of the requestor please see the http viewer results here to see a redirection to en us without a browser language setting global navigation links to other mozilla subdomains e g from the www site to in global navigation are not localized and again rely on a redirect to find localized content seo recommendation do not auto select a language and country for a user allow the user to choose their language and location a serve a static page at with static links to all of the regional mozilla sites the … locales page content would be ideal crawlers will be able to find all of the sites by following the links and we will present no unintended prioritization of en us over all other languages and regions b use a pop up to suggest which localized site we think will be best for the user based on their ip or language setting allow the user to select from other choices if we offer the wrong language use correctly localized links in all navigation do not use redirection to find localized content a update all links between subdomains to use the appropriate localization string inconsistent translation it appears that mozilla has only localized some of the content for some of the languages that we wish to support e g ach gd is navigation links in those non english locales use english anchor text and some localized pages link to unlocalized pages from the global navigation or footer to see a link to a localized url that does not exist and results in a redirection please see http viewer results here seo recommendation remove all links to unlocalized pages translate all english copy on non en 
pages all english language pages that were previously published on non english urls should be deleted those urls should return a “file not found” server status code e g … ach firefox privacy which google indexes with en us content acceptance criteria add criteria here dev tasks add dev tasks here | 1 |
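Recommendations 1a and 2a in the Mozilla record above (a static language chooser and correctly localized links, instead of IP-based redirects) can be sketched in a few lines of Python. The locale list and URL layout here are illustrative assumptions, not Mozilla's actual routing:

```python
# Hypothetical subset of supported locales; the real list is much longer.
SUPPORTED_LOCALES = ["en-US", "de", "fr", "pt-BR"]

def localized_href(base: str, path: str, locale: str) -> str:
    """Build a direct, crawlable link to one regional site (rec. 2a):
    no redirect is needed to reach localized content."""
    return f"{base}/{locale}{path}"

def language_chooser_links(path: str, base: str = "https://www.mozilla.org") -> list[str]:
    """Static anchor tags for every locale (rec. 1a): crawlers can follow
    these from the bare domain without any IP-based auto-selection."""
    return [
        f'<a href="{localized_href(base, path, loc)}" hreflang="{loc}">{loc}</a>'
        for loc in SUPPORTED_LOCALES
    ]
```

For example, `language_chooser_links("/firefox/new/")` yields one `<a>` per locale, pointing at `/en-US/firefox/new/`, `/de/firefox/new/`, and so on; a suggestion pop-up could then highlight one of these based on `Accept-Language` without ever redirecting.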
2,134 | 7,333,017,996 | IssuesEvent | 2018-03-05 18:02:22 | RalfKoban/MiKo-Analyzers | https://api.github.com/repos/RalfKoban/MiKo-Analyzers | closed | Exceptions in catch blocks should be named 'ex' | Area: analyzer Area: maintainability feature in progress | Exceptions that are caught and handled in catch clauses should be named `ex`. | True | Exceptions in catch blocks should be named 'ex' - Exceptions that are caught and handled in catch clauses should be named `ex`. | main | exceptions in catch blocks should be named ex exceptions that are caught and handled in catch clauses should be named ex | 1 |
118,989 | 15,388,313,243 | IssuesEvent | 2021-03-03 10:36:47 | WordPress/gutenberg | https://api.github.com/repos/WordPress/gutenberg | closed | Global styles: add text styles to blocks | Global Styles Needs Design Feedback | Being able to set the font style on all the contents of a block, and all instances of that block, when in global styles is useful. This opens up more granular control of everything on the block, rather than having to select which is the current state.
I took some time to explore what that could look like and began with the paragraph block as that has a lot of text options already. It's worth noting, this is when in global styles for now. This work builds on the new UI explorations of @pablohoneyhoney, @jasmussen and @mtias - so credit goes there for taking this into the global styles work I have been doing.
This is a simple addition to the paragraph block, nothing major here or that different from what we have now.
<img width="512" alt="paragraph block" src="https://user-images.githubusercontent.com/253067/76335811-f88cc380-62ec-11ea-8b55-fd7e5e1c3f41.png">
## Viewing in global styles
Version one shows just adding text styling dropdown to any place we have text, for example when viewing the paragraph block in global styles.
<img width="1257" alt="global" src="https://user-images.githubusercontent.com/253067/76335883-19edaf80-62ed-11ea-899f-d6beb0739524.png">
Moving further into the future with the ideas in font pairing, perhaps this could then come in where we show text and heading fonts - maybe even letting themes set these. This is work for further down the line; the proposal in this issue is just to add a font style option.
<img width="1257" alt="future" src="https://user-images.githubusercontent.com/253067/76335966-3853ab00-62ed-11ea-915e-6e977700c2d1.png">
## Feedback
Right now, it would be great to explore if the time is right to bring this in or we wait. Beyond that general feedback would be great. I feel that having this as an option on these blocks makes sense for global styles:
- Button
- Paragraph (whilst I demonstrated the edge case of bolding all, likely this would be useful for weights)
- Heading
- Quote
Here are some points to spark feedback:
- What other blocks could benefit from this?
- Should it only be on global styles or in some blocks by default? For example, paragraph block could benefit from this perhaps.
- As different fonts have different text styles, do we have a baseline we offer? | 1.0 | Global styles: add text styles to blocks - Being able to set the font style on all the contents of a block, and all instances of that block, when in global styles is useful. This opens up more granular control of everything on the block, rather than having to select which is the current state.
I took some time to explore what that could look like and began with the paragraph block as that has a lot of text options already. It's worth noting, this is when in global styles for now. This work builds on the new UI explorations of @pablohoneyhoney, @jasmussen and @mtias - so credit goes there for taking this into the global styles work I have been doing.
This is a simple addition to the paragraph block, nothing major here or that different from what we have now.
<img width="512" alt="paragraph block" src="https://user-images.githubusercontent.com/253067/76335811-f88cc380-62ec-11ea-8b55-fd7e5e1c3f41.png">
## Viewing in global styles
Version one shows just adding text styling dropdown to any place we have text, for example when viewing the paragraph block in global styles.
<img width="1257" alt="global" src="https://user-images.githubusercontent.com/253067/76335883-19edaf80-62ed-11ea-899f-d6beb0739524.png">
Moving further into the future with the ideas in font pairing, perhaps this could then come in where we show text and heading fonts - maybe even letting themes set these. This is work for further down the line; the proposal in this issue is just to add a font style option.
<img width="1257" alt="future" src="https://user-images.githubusercontent.com/253067/76335966-3853ab00-62ed-11ea-915e-6e977700c2d1.png">
## Feedback
Right now, it would be great to explore if the time is right to bring this in or we wait. Beyond that general feedback would be great. I feel that having this as an option on these blocks makes sense for global styles:
- Button
- Paragraph (whilst I demonstrated the edge case of bolding all, likely this would be useful for weights)
- Heading
- Quote
Here are some points to spark feedback:
- What other blocks could benefit from this?
- Should it only be on global styles or in some blocks by default? For example, paragraph block could benefit from this perhaps.
- As different fonts have different text styles, do we have a baseline we offer? | non_main | global styles add text styles to blocks being able to set to the font style on all the contents of a block and all instances of that block when in global styles is useful this opens up more granular control of everything on the block over having to select which is the current state i took some time to explore what that could look like and began with the paragraph block as that has a lot of text options already it s worth noting this is when in global styles for now this work builds on the new ui explorations of pablohoneyhoney jasmussen and mtias so credit goes there for taking this into the global styles work i have been doing this is a simple addition to the paragraph block nothing major here or that different from what we have now img width alt paragraph block src viewing in global styles version one shows just adding text styling dropdown to any place we have text for example when viewing the paragraph block in global styles img width alt global src moving further into the future with the ideas in font pairing perhaps this could then come in where we show text and heading fonts maybe even letting themes set these this is further on work the proposal in this issue is just to add a font style option img width alt future src feedback right now it would be great to explore if the time is right to bring this in or we wait beyond that general feedback would be great i feel that having this as an option on these blocks makes sense for global styles button paragraph whilst i demonstrated the edge case of bolding all likely this would be useful for weights heading quote here are some points to spring into feedback from what other blocks could benefit from this should it only be on global styles or in some blocks by default for example paragraph block could benefit from this perhaps as different fonts have different text styles do we have a baseline we offer | 0 |
130,195 | 10,600,642,120 | IssuesEvent | 2019-10-10 10:29:58 | microsoft/azure-pipelines-tasks | https://api.github.com/repos/microsoft/azure-pipelines-tasks | closed | xUnit InlineData theories are not run when using parallel strategy (DTAExecutionHost) | Area: Test bug | ## Required Information
**Question, Bug, or Feature?**
*Type*: Bug
**Enter Task Name**: VSTest
## Environment
Azure Pipelines: uipath, Orchestrator, CI/506187:
- Agent - Hosted: Windows Server 2016
## Issue Description
We have tests written in xUnit that use the InlineData attribute in order to test different use cases.
We run the tests using the parallel strategy for jobs.
In the vstest execution logs we can see all test variants being discovered, but only the first variant is being run and reported.
For example, we can see them being discovered:
`DiscoveryMessage : [xUnit.net 00:00:06.73] UiPath.Orchestrator.Web.Tests: Discovered test case 'UiPath.Orchestrator.Web.Tests.Licensing.LicensingTests.CreateConcurrentRobotOverTheLimit(robotType: Attended)' (ID = '18ab8dd645897913c5822fac1ca6db43eaada5f9', VS FQN = 'UiPath.Orchestrator.Web.Tests.Licensing.LicensingTests.CreateConcurrentRobotOverTheLimit')
DiscoveryMessage : [xUnit.net 00:00:06.73] UiPath.Orchestrator.Web.Tests: Discovered test case 'UiPath.Orchestrator.Web.Tests.Licensing.LicensingTests.CreateConcurrentRobotOverTheLimit(robotType: Development)' (ID = '9bb2388a04bd999bff42d337646ea8977e4c6bb6', VS FQN = 'UiPath.Orchestrator.Web.Tests.Licensing.LicensingTests.CreateConcurrentRobotOverTheLimit')
DiscoveryMessage : [xUnit.net 00:00:06.73] UiPath.Orchestrator.Web.Tests: Discovered test case 'UiPath.Orchestrator.Web.Tests.Licensing.LicensingTests.CreateConcurrentRobotOverTheLimit(robotType: StudioX)' (ID = '7f91e3265f99298d27071d517ffff3146bec54fb', VS FQN = 'UiPath.Orchestrator.Web.Tests.Licensing.LicensingTests.CreateConcurrentRobotOverTheLimit')
`
But only one is executed:
`Passed UiPath.Orchestrator.Web.Tests.Licensing.LicensingTests.CreateConcurrentRobotOverTheLimit UiPath.Orchestrator.Web.Tests.Licensing.LicensingTests.CreateConcurrentRobotOverTheLimit(robotType: Attended)
`
| 1.0 | xUnit InlineData theories are not run when using parallel strategy (DTAExecutionHost) - ## Required Information
**Question, Bug, or Feature?**
*Type*: Bug
**Enter Task Name**: VSTest
## Environment
Azure Pipelines: uipath, Orchestrator, CI/506187:
- Agent - Hosted: Windows Server 2016
## Issue Description
We have tests written in xUnit that use the InlineData attribute in order to test different use cases.
We run the tests using the parallel strategy for jobs.
In the vstest execution logs we can see all test variants being discovered, but only the first variant is being run and reported.
For example, we can see them being discovered:
`DiscoveryMessage : [xUnit.net 00:00:06.73] UiPath.Orchestrator.Web.Tests: Discovered test case 'UiPath.Orchestrator.Web.Tests.Licensing.LicensingTests.CreateConcurrentRobotOverTheLimit(robotType: Attended)' (ID = '18ab8dd645897913c5822fac1ca6db43eaada5f9', VS FQN = 'UiPath.Orchestrator.Web.Tests.Licensing.LicensingTests.CreateConcurrentRobotOverTheLimit')
DiscoveryMessage : [xUnit.net 00:00:06.73] UiPath.Orchestrator.Web.Tests: Discovered test case 'UiPath.Orchestrator.Web.Tests.Licensing.LicensingTests.CreateConcurrentRobotOverTheLimit(robotType: Development)' (ID = '9bb2388a04bd999bff42d337646ea8977e4c6bb6', VS FQN = 'UiPath.Orchestrator.Web.Tests.Licensing.LicensingTests.CreateConcurrentRobotOverTheLimit')
DiscoveryMessage : [xUnit.net 00:00:06.73] UiPath.Orchestrator.Web.Tests: Discovered test case 'UiPath.Orchestrator.Web.Tests.Licensing.LicensingTests.CreateConcurrentRobotOverTheLimit(robotType: StudioX)' (ID = '7f91e3265f99298d27071d517ffff3146bec54fb', VS FQN = 'UiPath.Orchestrator.Web.Tests.Licensing.LicensingTests.CreateConcurrentRobotOverTheLimit')
`
But only one is executed:
`Passed UiPath.Orchestrator.Web.Tests.Licensing.LicensingTests.CreateConcurrentRobotOverTheLimit UiPath.Orchestrator.Web.Tests.Licensing.LicensingTests.CreateConcurrentRobotOverTheLimit(robotType: Attended)
`
| non_main | xunit inlinedata theories are not run when using parallel strategy dtaexecutionhost required information question bug or feature type bug enter task name vstest environment azure pipelines uipath orchestrator ci agent hosted windows server issue description we have tests written in xunit that use inlinedata attribute in order to test different use cases we run the tests using the parallel strategy for jobs in the vstest execution logs we can see all test variants being discovered but only the first variant is being run and reported for example we can see them being discoverd discoverymessage uipath orchestrator web tests discovered test case uipath orchestrator web tests licensing licensingtests createconcurrentrobotoverthelimit robottype attended id vs fqn uipath orchestrator web tests licensing licensingtests createconcurrentrobotoverthelimit discoverymessage uipath orchestrator web tests discovered test case uipath orchestrator web tests licensing licensingtests createconcurrentrobotoverthelimit robottype development id vs fqn uipath orchestrator web tests licensing licensingtests createconcurrentrobotoverthelimit discoverymessage uipath orchestrator web tests discovered test case uipath orchestrator web tests licensing licensingtests createconcurrentrobotoverthelimit robottype studiox id vs fqn uipath orchestrator web tests licensing licensingtests createconcurrentrobotoverthelimit but only one is executed passed uipath orchestrator web tests licensing licensingtests createconcurrentrobotoverthelimit uipath orchestrator web tests licensing licensingtests createconcurrentrobotoverthelimit robottype attended | 0 |
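For context on the record above: each `[InlineData]` row of an xUnit `[Theory]` is discovered as an independent test case, so a runner that executes only the first variant silently drops coverage. A rough Python analogy of that expansion (illustrative only — the original suite is C#, and these names are hypothetical stand-ins):

```python
# Rough Python analogy of one xUnit [Theory] with three [InlineData] rows:
# one test body, three independently discovered and executed cases.
# The names below are hypothetical, not the original C# test suite.

ROBOT_TYPES = ["Attended", "Development", "StudioX"]  # the three InlineData values

def create_concurrent_robot_over_the_limit(robot_type):
    # Stand-in for the real assertion logic of the theory body.
    return f"ran: {robot_type}"

def run_discovered_cases(cases):
    # A correct runner executes every discovered case, not just the first.
    return [create_concurrent_robot_over_the_limit(c) for c in cases]

results = run_discovered_cases(ROBOT_TYPES)
print(len(results))  # 3 — one execution per discovered case
```

The bug report amounts to `run_discovered_cases` returning only the first element when the parallel (DTAExecutionHost) strategy is used.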
136,716 | 30,577,238,929 | IssuesEvent | 2023-07-21 06:49:14 | optuna/optuna | https://api.github.com/repos/optuna/optuna | closed | Bump up `NumPy` version to `1.24.3` | code-fix | ### Motivation
Run test with `numpy>=1.24.3`
### Suggestion
Separate `MXNet`, `scikit-optimize`, and `shap` from optuna/optuna and move tests to `optuna/optuna-integration`
We need to fix the following files:
- [x] `tests/importance_tests/test_init.py`
- [ ] `tests/integration_tests/test_integration.py`
- [x] `tests/integration_tests/test_shap.py`
- [ ] `tests/integration_tests/test_skopt.py`
- [ ] `tests/integration_tests/test_mxnet.py`
### Additional context (optional)
Currently, `NumPy` version is fixed to be smaller than `1.24.0` since some libraries used in integrations are incompatible with them. | 1.0 | Bump up `NumPy` version to `1.24.3` - ### Motivation
Run test with `numpy>=1.24.3`
### Suggestion
Separate `MXNet`, `scikit-optimize`, and `shap` from optuna/optuna and move tests to `optuna/optuna-integration`
We need to fix the following files:
- [x] `tests/importance_tests/test_init.py`
- [ ] `tests/integration_tests/test_integration.py`
- [x] `tests/integration_tests/test_shap.py`
- [ ] `tests/integration_tests/test_skopt.py`
- [ ] `tests/integration_tests/test_mxnet.py`
### Additional context (optional)
Currently, `NumPy` version is fixed to be smaller than `1.24.0` since some libraries used in integrations are incompatible with them. | non_main | bump up numpy version to motivation run test with numpy suggestion separate mxnet scikit optimize and shap from optuna optuna and move tesets to optuna optuna integration we need to fix following files tests importance tests test init py tests integration tests test integration py tests integration tests test shap py tests integration tests test skopt py tests integration tests test mxnet py additional context optional currently numpy version is fixed to be smaller than since some libraries used in integrations are incompatible with them | 0 |
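The two pins discussed in that record — the old cap (`numpy<1.24.0`) and the proposed floor (`numpy>=1.24.3`) — can be checked with a tiny stdlib-only sketch (illustrative only; real projects would use proper requirement specifiers rather than this hand-rolled comparison):

```python
# Minimal stdlib-only check of the two pins discussed above:
# the old cap (< 1.24.0) and the new floor (>= 1.24.3).
# Illustrative sketch, not code from the Optuna repository.

def parse(version):
    # "1.24.3" -> (1, 24, 3); assumes plain numeric dotted versions.
    return tuple(int(p) for p in version.split("."))

def satisfies(version, floor=None, cap=None):
    v = parse(version)
    if floor is not None and v < parse(floor):
        return False
    if cap is not None and v >= parse(cap):
        return False
    return True

# Under the old pin (cap 1.24.0), 1.24.3 is rejected;
# after the bump (floor 1.24.3), it is accepted.
print(satisfies("1.24.3", cap="1.24.0"))    # False
print(satisfies("1.24.3", floor="1.24.3"))  # True
```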
286,367 | 24,748,352,889 | IssuesEvent | 2022-10-21 11:42:23 | Lurkars/gloomhavensecretary | https://api.github.com/repos/Lurkars/gloomhavensecretary | closed | Adding treasures only possible for Gloomhaven, not JOTL | bug to test | When I am trying to add treasures under the campaign sheet the only option seems to be 'Gloomhaven'. The party is using JOTL in the campaign mode but adding treasures by entering the number does not seem to be working and there is also no option for JOTL treasures.
| 1.0 | Adding treasures only possible for Gloomhaven, not JOTL - When I am trying to add treasures under the campaign sheet the only option seems to be 'Gloomhaven'. The party is using JOTL in the campaign mode but adding treasures by entering the number does not seem to be working and there is also no option for JOTL treasures.
| non_main | adding treasures only possible for gloomhaven not jotl when i am trying to add treasures under the campaign sheet the only option seems to be gloomhaven the party is using jotl in the campaign mode but adding treasures by entering the number does not seem to be working and there is also no option for jotl treasures | 0 |
1,073 | 4,892,188,959 | IssuesEvent | 2016-11-18 18:57:12 | coniks-sys/coniks-go | https://api.github.com/repos/coniks-sys/coniks-go | closed | Make distinction between errors and request/check results more clear | maintainability | Only some of the error codes in error.go refer to actual errors, the others refer to results to a client request, or results of consistency checks. Original discussion: https://github.com/coniks-sys/coniks-go/pull/105#discussion_r85631712
| True | Make distinction between errors and request/check results more clear - Only some of the error codes in error.go refer to actual errors, the others refer to results to a client request, or results of consistency checks. Original discussion: https://github.com/coniks-sys/coniks-go/pull/105#discussion_r85631712
| main | make distinction between errors and request check results more clear only some of the error codes in error go refer to actual errors the others refer to results to a client request or results of consistency checks original discussion | 1 |
5,426 | 19,581,811,221 | IssuesEvent | 2022-01-04 22:31:55 | dannytsang/homeassistant-config | https://api.github.com/repos/dannytsang/homeassistant-config | closed | Replace Travis CI With GitHub Actions | bug automations integration: git | Ran out of credits on Travis CI. Can't be bothered to request more credits because this repository is public (although questionable if it's OSS "software").
This means my automated build and deploys pipeline is broken because it depends on a CI build. | 1.0 | Replace Travis CI With GitHub Actions - Ran out of credits on Travis CI. Can't be bothered to request more credits because this repository is public (although questionable if it's OSS "software").
This means my automated build and deploys pipeline is broken because it depends on a CI build. | non_main | replace travis ci with github actions ran out of credits on travis ci can t be bothered to request more credits because this repository is public although questionable if it s oss software this means my automated build and deploys pipeline is broken because it depends on a ci build | 0 |
4,859 | 25,010,228,293 | IssuesEvent | 2022-11-03 14:46:28 | centerofci/mathesar | https://api.github.com/repos/centerofci/mathesar | closed | Allow cell selection in Data Explorer | type: enhancement work: frontend status: ready restricted: maintainers | We need to be able to active a cell and allow cell selection within the results in Data Explorer, Exploration page, and Exploration editor | True | Allow cell selection in Data Explorer - We need to be able to active a cell and allow cell selection within the results in Data Explorer, Exploration page, and Exploration editor | main | allow cell selection in data explorer we need to be able to active a cell and allow cell selection within the results in data explorer exploration page and exploration editor | 1 |
144,408 | 5,539,975,053 | IssuesEvent | 2017-03-22 08:44:39 | intel-analytics/BigDL | https://api.github.com/repos/intel-analytics/BigDL | closed | logging the learning rates during training | high priority hyper-parameter | Printing learning rate (and/or other hyper params) in logs is of great help for DL application development/debugging. This is the default behavior of both Torch and Caffe.
In our current implementation, learning rate (along with other hyper params) is updated inside optimMethod (i.e. SGD). DistOptimizer is unaware of any changes of hyper parameters and thus unable to output any meaningful logs for debugging.
We could either pass the logger into optimMethod for certain status output, or make optimize() function return a structure indicating the status. If there're concerns about too many logs, we could put those logs in optimMethod in debug mode.
| 1.0 | logging the learning rates during training - Printing learning rate (and/or other hyper params) in logs is of great help for DL application development/debugging. This is the default behavior of both Torch and Caffe.
In our current implementation, learning rate (along with other hyper params) is updated inside optimMethod (i.e. SGD). DistOptimizer is unaware of any changes of hyper parameters and thus unable to output any meaningful logs for debugging.
We could either pass the logger into optimMethod for certain status output, or make optimize() function return a structure indicating the status. If there're concerns about too many logs, we could put those logs in optimMethod in debug mode.
| non_main | logging the learning rates during training printing learning rate and or other hyper params in logs is of great help for dl application development debugging this is the default behavior of both torch and caffe in our current implementation learning rate along with other hyper params is updated inside optimmethod i e sgd distoptimizer is unaware of any changes of hyper parameters and thus unable to output any meaningful logs for debugging we could either pass the logger into optimmethod for certain status output or make optimize function return a structure indicating the status if there re concerns about too many logs we could put those logs in optimmethod in debug mode | 0 |
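The change proposed in that record — having `optimize()` return a status structure so the driver can log hyper-parameters such as the learning rate — can be sketched framework-agnostically. The names below are hypothetical, not BigDL's actual SGD/DistOptimizer API:

```python
# Illustrative sketch (hypothetical names, not BigDL's real API): the
# optimizer step returns a status dict so the driver can log the current
# learning rate and other hyper-parameters at each iteration.

def sgd_step(params, grads, state):
    lr = state["lr"] * state["decay"] ** state["step"]  # simple decay schedule
    new_params = [p - lr * g for p, g in zip(params, grads)]
    state["step"] += 1
    # Report the effective hyper-parameters instead of updating them silently.
    return new_params, {"step": state["step"], "lr": lr}

state = {"lr": 0.1, "decay": 0.5, "step": 0}
params, grads = [1.0], [1.0]
for _ in range(3):
    params, status = sgd_step(params, grads, state)
    print(f"step {status['step']}: lr={status['lr']}")  # driver-side logging
```

This mirrors the issue's second suggestion (return a status structure) rather than the first (pass a logger into the optim method).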
539 | 2,502,331,912 | IssuesEvent | 2015-01-09 07:27:16 | fossology/fossology | https://api.github.com/repos/fossology/fossology | opened | foss discovery | Category: UI Component: Rank Component: Tester Priority: High Status: New Tracker: Feature | ---
Author Name: **Bob Gobeille**
Original Redmine Issue: 7125, http://www.fossology.org/issues/7125
Original Date: 2014/06/18
---
We already have the antelink solution for discovering foss projects, but that requires an antepedia license. So let's implement a manual solution that integrates with the SPDX module. For example, let the user pick directories that contain projects. Like a distro src/ directory which typically contains many projects. The software could even learn common directories to search in.
This needs quite a bit more research to determine if this is a viable approach.
| 1.0 | foss discovery - ---
Author Name: **Bob Gobeille**
Original Redmine Issue: 7125, http://www.fossology.org/issues/7125
Original Date: 2014/06/18
---
We already have the antelink solution for discovering foss projects, but that requires an antepedia license. So let's implement a manual solution that integrates with the SPDX module. For example, let the user pick directories that contain projects. Like a distro src/ directory which typically contains many projects. The software could even learn common directories to search in.
This needs quite a bit more research to determine if this is a viable approach.
| non_main | foss discovery author name bob gobeille original redmine issue original date we already have the antelink solution for discovering foss projects but that requires an antepedia license so let s implement a manual solution that integrates with the spdx module for example let the user pick directories that contain projects like a distro src directory which typically contains many projects the software could even learn common directories to search in this needs quite a bit more research to determine if this is a viable approach | 0 |
5,742 | 30,385,678,105 | IssuesEvent | 2023-07-13 00:19:11 | pacificclimate/scip-frontend | https://api.github.com/repos/pacificclimate/scip-frontend | closed | simplifiy development setup by enabling CORS on geoserver | maintainability | Geoserver appears to have an environment variable, [CORS_ENABLED](https://github.com/pacificclimate/scip/blob/dockerize/docker/docker-compose.yaml#L18) that would set it to share information.
Currently, the SCIP setup is complicated by the use of a proxy to get around geoserver's CORS restrictions, so if we could update this variable in our development geoserver, setting up SCIP for development or deployment would be much easier! | True | simplifiy development setup by enabling CORS on geoserver - Geoserver appears to have an environment variable, [CORS_ENABLED](https://github.com/pacificclimate/scip/blob/dockerize/docker/docker-compose.yaml#L18) that would set it to share information.
Currently, the SCIP setup is complicated by the use of a proxy to get around geoserver's CORS restrictions, so if we could update this variable in our development geoserver, setting up SCIP for development or deployment would be much easier! | main | simplifiy development setup by enabling cors on geoserver geoserver appears to have an environment variable that would set it to share information currently the scip setup is complicated by the use of a proxy to get around geoserver s cors restrictions so if we could update this variable in our development geoserver setting up scip for development or deployment would be much easier | 1 |
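On what that `CORS_ENABLED` flag buys: browsers allow cross-origin reads only when the server sends an `Access-Control-Allow-Origin` response header. A tiny Python sketch of the idea (illustrative only; the GeoServer flag presumably enables the equivalent filtering on the Java side):

```python
# Illustrative: "enabling CORS" at the HTTP level means attaching an
# Access-Control-Allow-Origin header to responses so browsers permit
# cross-origin reads without a proxy in between.

def with_cors(headers, origin="*"):
    out = dict(headers)
    out["Access-Control-Allow-Origin"] = origin
    return out

print(with_cors({"Content-Type": "application/json"}))
```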
1,001 | 4,770,650,999 | IssuesEvent | 2016-10-26 15:47:17 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | File module calls /usr/bin/python to execute temp_name.ps1 PowerShell script on Windows | affects_2.1 bug_report waiting_on_maintainer | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
Builtin **File** module
##### ANSIBLE VERSION
```
ansible 2.1.2.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
pristine
##### OS / ENVIRONMENT
Host: Ubuntu 14.04, managed node: Windows Server 2012 R2
##### SUMMARY
I got an error and had turned on verbose mode, so I saw this output:
```
TASK [Copy *.msi files from ./MSI to C:\MSI] ***********************************
task path: /home/qaexpert/ansible-lab/tcagent.yml:8
<agentsmith> ESTABLISH WINRM CONNECTION FOR USER: Administrator on PORT 5986 TO agentsmith
<agentsmith> EXEC Set-StrictMode -Version Latest
(New-Item -Type Directory -Path $env:temp -Name "ansible-tmp-1477410445.62-187863101456896").FullName | Write-Host -Separator '';
<agentsmith> PUT "/tmp/tmpqOJYen" TO "C:\Users\Administrator\AppData\Local\Temp\ansible-tmp-1477410445.62-187863101456896\file.ps1"
<agentsmith> EXEC Set-StrictMode -Version Latest
Try
{
/usr/bin/python 'C:\Users\Administrator\AppData\Local\Temp\ansible-tmp-1477410445.62-187863101456896\file.ps1'
}
Catch
{
```
Here is the strange line: `/usr/bin/python 'C:\Users\Administrator\AppData\Local\Temp\ansible-tmp-1477410445.62-187863101456896\file.ps1'`
Obviously, `/usr/bin/python` cannot exist on a Windows server.
##### STEPS TO REPRODUCE
Here is my playbook:
```
tasks:
# Copy MSI files
- name: Copy *.msi files from ./MSI to C:\MSI
file: path=C:\MSI state=directory
```
##### EXPECTED RESULTS
I expect the task to create the folder C:\MSI
##### ACTUAL RESULTS
It reports the error:
```
TASK [Copy *.msi files from ./MSI to C:\MSI] ***********************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: + ~~~~~~~~~~~~~~~
fatal: [agentsmith]: FAILED! => {"changed": false, "failed": true, "msg": "The term '/usr/bin/python' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again."}
```
| True | File module calls /usr/bin/python to execute temp_name.ps1 PowerShell script on Windows - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
Builtin **File** module
##### ANSIBLE VERSION
```
ansible 2.1.2.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
pristine
##### OS / ENVIRONMENT
Host: Ubuntu 14.04, managed node: Windows Server 2012 R2
##### SUMMARY
I got an error and had turned on verbose mode, so I saw this output:
```
TASK [Copy *.msi files from ./MSI to C:\MSI] ***********************************
task path: /home/qaexpert/ansible-lab/tcagent.yml:8
<agentsmith> ESTABLISH WINRM CONNECTION FOR USER: Administrator on PORT 5986 TO agentsmith
<agentsmith> EXEC Set-StrictMode -Version Latest
(New-Item -Type Directory -Path $env:temp -Name "ansible-tmp-1477410445.62-187863101456896").FullName | Write-Host -Separator '';
<agentsmith> PUT "/tmp/tmpqOJYen" TO "C:\Users\Administrator\AppData\Local\Temp\ansible-tmp-1477410445.62-187863101456896\file.ps1"
<agentsmith> EXEC Set-StrictMode -Version Latest
Try
{
/usr/bin/python 'C:\Users\Administrator\AppData\Local\Temp\ansible-tmp-1477410445.62-187863101456896\file.ps1'
}
Catch
{
```
Here is the strange line: `/usr/bin/python 'C:\Users\Administrator\AppData\Local\Temp\ansible-tmp-1477410445.62-187863101456896\file.ps1'`
Obviously, `/usr/bin/python` cannot exist on a Windows server.
##### STEPS TO REPRODUCE
Here is my playbook:
```
tasks:
# Copy MSI files
- name: Copy *.msi files from ./MSI to C:\MSI
file: path=C:\MSI state=directory
```
##### EXPECTED RESULTS
I expect the task to create the folder C:\MSI
##### ACTUAL RESULTS
It reports the error:
```
TASK [Copy *.msi files from ./MSI to C:\MSI] ***********************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: + ~~~~~~~~~~~~~~~
fatal: [agentsmith]: FAILED! => {"changed": false, "failed": true, "msg": "The term '/usr/bin/python' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again."}
```
| main | file module calls usr bin python to execute temp name powershell script on windows issue type bug report component name builtin file module ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration pristine os environment host ubuntu managed node windows server summary i got an error and had turned verbose mode so i saw that output task task path home qaexpert ansible lab tcagent yml establish winrm connection for user administrator on port to agentsmith exec set strictmode version latest new item type directory path env temp name ansible tmp fullname write host separator put tmp tmpqojyen to c users administrator appdata local temp ansible tmp file exec set strictmode version latest try usr bin python c users administrator appdata local temp ansible tmp file catch here is the strange line usr bin python c users administrator appdata local temp ansible tmp file obviously usr bin python couldn t exist on windows server steps to reproduce here is my playbook tasks copy msi files name copy msi files from msi to c msi file path c msi state directory expected results i expect the taks to create the folder c msi actual results it reports the error task an exception occurred during task execution to see the full traceback use vvv the error was fatal failed changed false failed true msg the term usr bin python is not recognized as the name of a cmdlet function script file or operable program check the spelling of the name or if a path was included verify that the path is correct and try again | 1 |
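The failure in that record shows a POSIX interpreter path (`/usr/bin/python`) being prepended to a PowerShell module on a Windows target. Conceptually, the dispatcher should select the interpreter from the module type; a minimal illustrative sketch (not Ansible's actual implementation — names and flags here are only for demonstration):

```python
# Illustrative sketch of interpreter selection by module type.
# This is NOT Ansible's real code; it only shows why prepending
# /usr/bin/python to a .ps1 module on a Windows host must fail.

def command_for(module_path):
    if module_path.lower().endswith(".ps1"):
        # PowerShell modules must be run by PowerShell on the target.
        return ["powershell", "-NoProfile", "-File", module_path]
    # POSIX-style modules go through the configured Python interpreter.
    return ["/usr/bin/python", module_path]

print(command_for(r"C:\Temp\file.ps1")[0])  # powershell
print(command_for("/tmp/file.py")[0])       # /usr/bin/python
```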
39,508 | 20,025,610,751 | IssuesEvent | 2022-02-01 20:57:28 | mozilla-mobile/fenix | https://api.github.com/repos/mozilla-mobile/fenix | closed | Make expiring performance telemetry permanent (Feb 2022) | performance | Let's change the performance telemetry probes (if relevant) to be permanent fixtures rather than renewing. This assumes they don't affect performance and they don't have privacy implications.
Note: the fenix team has a renewal ticket in https://github.com/mozilla-mobile/fenix/issues/22870. | True | Make expiring performance telemetry permanent (Feb 2022) - Let's change the performance telemetry probes (if relevant) to be permanent fixtures rather than renewing. This assumes they don't affect performance and they don't have privacy implications.
Note: the fenix team has a renewal ticket in https://github.com/mozilla-mobile/fenix/issues/22870. | non_main | make expiring performance telemetry permanent feb let s change the performance telemetry probes if relevant to be permanent fixtures rather than renewing this assumes they don t affect performance and they don t have privacy implications note the fenix team has a renewal ticket in | 0 |
24,211 | 7,467,424,155 | IssuesEvent | 2018-04-02 15:15:01 | zurb/foundation-sites | https://api.github.com/repos/zurb/foundation-sites | closed | build: use Husky for Git hooks | build | It makes sense to run some checks when working with the develop branch.
We should use [husky](https://github.com/typicode/husky) for this.
* [ ] eslint
* [ ] sasslint / stylelint
* [x] unit tests
* [ ] ...
Can we adapt airbnb coding guidelines for Foundation Sites? https://github.com/zurb/foundation-sites/pull/9333#issuecomment-259777119 | 1.0 | build: use Husky for Git hooks - It makes sense to run some checks when working with the develop branch.
We should use [husky](https://github.com/typicode/husky) for this.
* [ ] eslint
* [ ] sasslint / stylelint
* [x] unit tests
* [ ] ...
Can we adapt airbnb coding guidelines for Foundation Sites? https://github.com/zurb/foundation-sites/pull/9333#issuecomment-259777119 | non_main | build use husky for git hooks it makes sense to run some checks when working with the develop branch we should use for this eslint sasslint stylelint unit tests can we adapt airbnb coding guidelines for foundation sites | 0 |
647 | 2,594,473,775 | IssuesEvent | 2015-02-20 03:56:54 | BALL-Project/ball | https://api.github.com/repos/BALL-Project/ball | opened | Integrate CADDSuite in KNIME via GKN in the nightly builds | C: Buildsystem P: major T: task | **Reported by delagarza on 15 Dec 43714212 02:05 UTC**
Similar to what OpenMS is doing. Expose CADDSuite as a Community Plug-in in KNIME. | 1.0 | Integrate CADDSuite in KNIME via GKN in the nightly builds - **Reported by delagarza on 15 Dec 43714212 02:05 UTC**
Similar to what OpenMS is doing. Expose CADDSuite as a Community Plug-in in KNIME. | non_main | integrate caddsuite in knime via gkn in the nightly builds reported by delagarza on dec utc similar to what openms is doing expose caddsuite as a community plug in in knime | 0 |
1,936 | 6,609,884,145 | IssuesEvent | 2017-09-19 15:51:18 | Kristinita/Erics-Green-Room | https://api.github.com/repos/Kristinita/Erics-Green-Room | closed | [Feature request] Hiding comments | need-maintainer | ### 1. Request
It would be nice if all commented-out lines were not displayed during pack playback, even if they contain asterisks.
For example, if a line starts with `<!--`, it would be good if it were skipped regardless of its remaining content.
### 2. Rationale
Comments are meant for reading and for convenient navigation through the pack, not for playback.
### 3. Example
```markdown
<!-- 1984*Лос-Анджелес*-info-Организованы бизнесменьём без участия государства. В социалистических странах в разных городах прошли соревнования «Дружба-84». Не участвовала прыгунья в высоту Тамара Быкова. -->
1988 з*Калгари*-info-Самая дорогая, обошлась в миллиард.*-proof-159
<!-- 1988*Сеул*-info-3 судьи получили взятки, не дав победить Рою Джонсу. Выходец с Ямайки канадец Бен Джонсон выиграл 100 метров с результатом 9.79, но был дисквалифицирован за допинг. В 1993 его дисквалифицировали пожизненно. Тяжелоатлетов дисквалифицировали за употребление фуросемида, позволяющего быстро выводить из организма запрещённые вещества. -->
```
The first and third lines of this excerpt should not be displayed during pack playback.
Thank you. | True | [Feature request] Hiding comments - ### 1. Request
It would be nice if all commented-out lines were not displayed during pack playback, even if they contain asterisks.
For example, if a line starts with `<!--`, it would be good if it were skipped regardless of its remaining content.
### 2. Rationale
Comments are meant for reading and for convenient navigation through the pack, not for playback.
### 3. Example
```markdown
<!-- 1984*Лос-Анджелес*-info-Организованы бизнесменьём без участия государства. В социалистических странах в разных городах прошли соревнования «Дружба-84». Не участвовала прыгунья в высоту Тамара Быкова. -->
1988 з*Калгари*-info-Самая дорогая, обошлась в миллиард.*-proof-159
<!-- 1988*Сеул*-info-3 судьи получили взятки, не дав победить Рою Джонсу. Выходец с Ямайки канадец Бен Джонсон выиграл 100 метров с результатом 9.79, но был дисквалифицирован за допинг. В 1993 его дисквалифицировали пожизненно. Тяжелоатлетов дисквалифицировали за употребление фуросемида, позволяющего быстро выводить из организма запрещённые вещества. -->
```
The first and third lines of this excerpt should not be displayed during pack playback.
Thank you. | main | hiding comments request it would be nice if all commented out lines were not displayed during pack playback even if they contain asterisks for example if a line starts with it would be good if it were skipped regardless of its remaining content rationale comments are meant for reading and for convenient navigation through the pack not for playback example markdown calgary info the most expensive cost a billion proof the first and third lines of this excerpt should not be displayed during pack playback thank you | 1
138,993 | 11,224,011,659 | IssuesEvent | 2020-01-08 00:40:43 | servo/servo | https://api.github.com/repos/servo/servo | closed | WebGL2 program link error when using uniform blocks | A-content/webgl C-has-manual-testcase C-has-patch | The following code has been minimized from the WebGL conformance suite's uniform buffer test (tests/wpt/webgl/tests/conformance2/buffers/uniform-buffers.html).
https://pastebin.com/4xQysf6A
When running it with Servo (eg. `./mach run test.html --pref dom.webgl2.enabled --dev --headless`) it manages to compile the shaders, but unable to link them together into a program. The following log output is produced:
```
shader 'vshader' compiled? true
shader log for 'vshader':
shader 'fshadernamed' compiled? true
shader log for 'fshadernamed':
program linked? false
program log: error: linking with uncompiled/unspecialized shader
DONE
```
(ie. **error: linking with uncompiled/unspecialized shader**).
The code does work in Firefox (69) and Chromium (78), and produces the expected result:
```
shader 'vshader' compiled? true
shader log for 'vshader':
shader 'fshadernamed' compiled? true
shader log for 'fshadernamed':
program linked? true
program log:
DONE
```
(Tested on Ubuntu 18.04, in both headless and non-headless mode)
cc @jdm @zakorgy | 1.0 | WebGL2 program link error when using uniform blocks - The following code has been minimized from the WebGL conformance suite's uniform buffer test (tests/wpt/webgl/tests/conformance2/buffers/uniform-buffers.html).
https://pastebin.com/4xQysf6A
When running it with Servo (eg. `./mach run test.html --pref dom.webgl2.enabled --dev --headless`) it manages to compile the shaders, but unable to link them together into a program. The following log output is produced:
```
shader 'vshader' compiled? true
shader log for 'vshader':
shader 'fshadernamed' compiled? true
shader log for 'fshadernamed':
program linked? false
program log: error: linking with uncompiled/unspecialized shader
DONE
```
(ie. **error: linking with uncompiled/unspecialized shader**).
The code does work in Firefox (69) and Chromium (78), and produces the expected result:
```
shader 'vshader' compiled? true
shader log for 'vshader':
shader 'fshadernamed' compiled? true
shader log for 'fshadernamed':
program linked? true
program log:
DONE
```
(Tested on Ubuntu 18.04, in both headless and non-headless mode)
cc @jdm @zakorgy | non_main | program link error when using uniform blocks the following code has been minimized from the webgl conformance suite s uniform buffer test tests wpt webgl tests buffers uniform buffers html when running it with servo eg mach run test html pref dom enabled dev headless it manages to compile the shaders but unable to link them together into a program the following log output is produced shader vshader compiled true shader log for vshader shader fshadernamed compiled true shader log for fshadernamed program linked false program log error linking with uncompiled unspecialized shader done ie error linking with uncompiled unspecialized shader the code does work in firefox and chromium and produces the expected result shader vshader compiled true shader log for vshader shader fshadernamed compiled true shader log for fshadernamed program linked true program log done tested on ubuntu in both headless and non headless mode cc jdm zakorgy | 0 |
2,668 | 9,126,328,653 | IssuesEvent | 2019-02-24 20:43:41 | DynamoRIO/drmemory | https://api.github.com/repos/DynamoRIO/drmemory | closed | add end-user support for updating syscall #'s from pdb's | Hotlist-Release Maintainability OpSys-Windows Type-Feature | The goal is to future-proof Dr. Memory: make it more adaptive to avoid requiring manual updates to fix breakages on each new Windows change. Xref #1826.
The plan is:
- Detect unknown version by looking at particular syscall #'s (as we can't rely on PEB versions anymore): xref https://github.com/DynamoRIO/dynamorio/issues/1598
- Create utility that downloads pdb's for the core dll's, does something like what winsysnums does, and comes up with new syscall numbers. We should be able to automate everything except for the usercall stuff.
- Can we launch the helper process from our online client? Even if so, we'll need to cache the results, so we could ask the user to run the utility standalone?
- Cache the results and load them in.
Things can still break if the syscall wrappers change (xref https://github.com/DynamoRIO/dynamorio/issues/1854) or other things besides numbers change, but this would be an improvement and could help future-proof Dr. Memory.
| True | add end-user support for updating syscall #'s from pdb's - The goal is to future-proof Dr. Memory: make it more adaptive to avoid requiring manual updates to fix breakages on each new Windows change. Xref #1826.
The plan is:
- Detect unknown version by looking at particular syscall #'s (as we can't rely on PEB versions anymore): xref https://github.com/DynamoRIO/dynamorio/issues/1598
- Create utility that downloads pdb's for the core dll's, does something like what winsysnums does, and comes up with new syscall numbers. We should be able to automate everything except for the usercall stuff.
- Can we launch the helper process from our online client? Even if so, we'll need to cache the results, so we could ask the user to run the utility standalone?
- Cache the results and load them in.
Things can still break if the syscall wrappers change (xref https://github.com/DynamoRIO/dynamorio/issues/1854) or other things besides numbers change, but this would be an improvement and could help future-proof Dr. Memory.
| main | add end user support for updating syscall s from pdb s the goal is to future proof dr memory make it more adaptive to avoid requiring manual updates to fix breakages on each new windows change xref the plan is detect unknown version by looking at particular syscall s as we can t rely on peb versions anymore xref create utility that downloads pdb s for the core dll s does sthg like what winsysnums does and comes up with new syscall numbers we should be able to automate everything except for the usercall stuff can we launch the helper process from our online client even if so we ll need to cache the results so we could ask the user to run the utility standalone cache the results and load them in things can still break if the syscall wrappers change xref or other things besides numbers change but this would be an improvement and could help future proof dr memory | 1 |
56,483 | 3,080,028,888 | IssuesEvent | 2015-08-21 19:37:24 | open-learning-exchange/BeLL-Apps | https://api.github.com/repos/open-learning-exchange/BeLL-Apps | opened | Search Error: Old resources are "invisible" to new search feature | bug priority v0.11.57 | On a community (v0.11.57), we are not able to use the updated search features to search for old content that is in the BeLL. We "implanted" the resources.couch file that we used on the nation (via stopping couchdb, replacing file, restarting) and did not have success. We then uploaded new resources by using the usual library form interface and then also tried adding a new document in the _utils interface, and we WERE successful with searching for those resources. We then had a nation with v0.11.52 and successfully searched old resources but after updating to v0.11.57, the searching was broken - this makes the nation update a no-go since all of our resources in the BeLLs would not be "visible" to the new search features. | 1.0 | Search Error: Old resources are "invisible" to new search feature - On a community (v0.11.57), we are not able to use the updated search features to search for old content that is in the BeLL. We "implanted" the resources.couch file that we used on the nation (via stopping couchdb, replacing file, restarting) and did not have success. We then uploaded new resources by using the usual library form interface and then also tried adding a new document in the _utils interface, and we WERE successful with searching for those resources. We then had a nation with v0.11.52 and successfully searched old resources but after updating to v0.11.57, the searching was broken - this makes the nation update a no-go since all of our resources in the BeLLs would not be "visible" to the new search features. 
| non_main | search error old resources are invisible to new search feature on a community we are not able to use the updated search features to search for old content that is in the bell we implanted the resources couch file that we used on the nation via stopping couchdb replacing file restarting and did not have success we then uploaded new resources by using the usual library form interface and then also tried adding a new document in the utils interface and we were successful with searching for those resources we then had a nation with and successfully searched old resources but after updating to the searching was broken this makes the nation update a no go since all of our resources in the bells would not be visible to the new search features | 0 |
1,321 | 5,654,999,435 | IssuesEvent | 2017-04-09 14:05:54 | WhitestormJS/whitestorm.js | https://api.github.com/repos/WhitestormJS/whitestorm.js | opened | Cover CameraModule with unit tests | MAINTAINANCE MODULE | CameraModule needs more comprehensive unit tests
###### Version:
- [x] v2.x.x
- [ ] v1.x.x
###### Issue type:
- [ ] Bug
- [x] Proposal/Enhancement
- [ ] Question
- [ ] Discussion
------
<details>
<summary> <b>Tested on: </b> </summary>
###### Desktop
- [ ] Chrome
- [ ] Chrome Canary
- [ ] Chrome dev-channel
- [ ] Firefox
- [ ] Opera
- [ ] Microsoft IE
- [ ] Microsoft Edge
###### Android
- [ ] Chrome
- [ ] Firefox
- [ ] Opera
###### IOS
- [ ] Chrome
- [ ] Firefox
- [ ] Opera
</details>
| True | Cover CameraModule with unit tests - CameraModule needs more comprehensive unit tests
###### Version:
- [x] v2.x.x
- [ ] v1.x.x
###### Issue type:
- [ ] Bug
- [x] Proposal/Enhancement
- [ ] Question
- [ ] Discussion
------
<details>
<summary> <b>Tested on: </b> </summary>
###### Desktop
- [ ] Chrome
- [ ] Chrome Canary
- [ ] Chrome dev-channel
- [ ] Firefox
- [ ] Opera
- [ ] Microsoft IE
- [ ] Microsoft Edge
###### Android
- [ ] Chrome
- [ ] Firefox
- [ ] Opera
###### IOS
- [ ] Chrome
- [ ] Firefox
- [ ] Opera
</details>
| main | cover cameramodule with unit tests cameramodule needs more comprehensive unit tests version x x x x issue type bug proposal enhancement question discussion tested on desktop chrome chrome canary chrome dev channel firefox opera microsoft ie microsoft edge android chrome firefox opera ios chrome firefox opera | 1 |
119,355 | 25,512,132,070 | IssuesEvent | 2022-11-28 13:53:02 | dotnet/runtime | https://api.github.com/repos/dotnet/runtime | closed | Arm64: If Conversion can bypass overflow checks | area-CodeGen-coreclr in-pr | This issue was spotted as part of PR https://github.com/dotnet/runtime/pull/77728
```
[MethodImplAttribute(MethodImplOptions.NoInlining)]
static void lenfail(uint[] a, uint i)
{
if (i < a.Length)
{
i = a[i];
}
Consume(i,i+2);
}
static void Main(string[] args)
{
var arr = new uint[] { 1, 42, 3000 };
lenfail(arr,0xffffffff);
}
```
With the following set:
```
export DOTNET_TieredCompilation=0
export DOTNET_JitStressModeNames=STRESS_IF_CONVERSION_COST
export DOTNET_JITMinOpts=0
```
Get error:
```
Fatal error. System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
at CSharpTutorials.Program.lenfail(UInt32[], UInt32)
at CSharpTutorials.Program.Main(System.String[])
[1] 2099041 abort (core dumped) ./bin/release/net7.0/linux-arm64/conditional.dll
```
Why? The load from `a[i]` should only happen if the `if` condition holds. With If Conversion, the load is effectively hoisted outside `if` and is always executed.
This is the Assignment block the If Conversion sees. There are no flags here to indicate moving the `ASG` before the compare is unsafe. Even more worrying, the `IND` is marked as `GTF_IND_NONFAULTING`.
```
STMT00003 ( 0x008[E-] ... 0x00B )
N012 ( 10, 12) [000018] -A--G---R-- * ASG int $145
N011 ( 1, 1) [000017] D------N--- +--* LCL_VAR int V01 arg1 d:3 $VN.Void
N010 ( 10, 12) [000031] n---G--N--- \--* IND int <l:$183, c:$c2>
N009 ( 7, 10) [000029] ----------- \--* ARR_ADDR byref int[] $2c0
N008 ( 7, 10) [000028] -------N--- \--* ADD byref $201
N003 ( 3, 4) [000027] ----------- +--* ADD byref $200
N001 ( 1, 1) [000019] ----------- | +--* LCL_VAR ref V00 arg0 u:1 (last use) $80
N002 ( 1, 2) [000026] ----------- | \--* CNS_INT long 16 $1c0
N007 ( 4, 6) [000025] ----------- \--* LSH long $241
N005 ( 2, 3) [000023] ---------U- +--* CAST long <- uint $240
N004 ( 1, 1) [000020] ----------- | \--* LCL_VAR int V01 arg1 u:1 (last use) $c0
N006 ( 1, 2) [000024] -------N--- \--* CNS_INT long 2 $1c1
```
Not quite sure yet on what needs to be fixed earlier on | 1.0 | Arm64: If Conversion can bypass overflow checks - This issue was spotted as part of PR https://github.com/dotnet/runtime/pull/77728
```
[MethodImplAttribute(MethodImplOptions.NoInlining)]
static void lenfail(uint[] a, uint i)
{
if (i < a.Length)
{
i = a[i];
}
Consume(i,i+2);
}
static void Main(string[] args)
{
var arr = new uint[] { 1, 42, 3000 };
lenfail(arr,0xffffffff);
}
```
With the following set:
```
export DOTNET_TieredCompilation=0
export DOTNET_JitStressModeNames=STRESS_IF_CONVERSION_COST
export DOTNET_JITMinOpts=0
```
Get error:
```
Fatal error. System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
at CSharpTutorials.Program.lenfail(UInt32[], UInt32)
at CSharpTutorials.Program.Main(System.String[])
[1] 2099041 abort (core dumped) ./bin/release/net7.0/linux-arm64/conditional.dll
```
Why? The load from `a[i]` should only happen if the `if` condition holds. With If Conversion, the load is effectively hoisted outside `if` and is always executed.
This is the Assignment block the If Conversion sees. There are no flags here to indicate moving the `ASG` before the compare is unsafe. Even more worrying, the `IND` is marked as `GTF_IND_NONFAULTING`.
```
STMT00003 ( 0x008[E-] ... 0x00B )
N012 ( 10, 12) [000018] -A--G---R-- * ASG int $145
N011 ( 1, 1) [000017] D------N--- +--* LCL_VAR int V01 arg1 d:3 $VN.Void
N010 ( 10, 12) [000031] n---G--N--- \--* IND int <l:$183, c:$c2>
N009 ( 7, 10) [000029] ----------- \--* ARR_ADDR byref int[] $2c0
N008 ( 7, 10) [000028] -------N--- \--* ADD byref $201
N003 ( 3, 4) [000027] ----------- +--* ADD byref $200
N001 ( 1, 1) [000019] ----------- | +--* LCL_VAR ref V00 arg0 u:1 (last use) $80
N002 ( 1, 2) [000026] ----------- | \--* CNS_INT long 16 $1c0
N007 ( 4, 6) [000025] ----------- \--* LSH long $241
N005 ( 2, 3) [000023] ---------U- +--* CAST long <- uint $240
N004 ( 1, 1) [000020] ----------- | \--* LCL_VAR int V01 arg1 u:1 (last use) $c0
N006 ( 1, 2) [000024] -------N--- \--* CNS_INT long 2 $1c1
```
Not quite sure yet on what needs to be fixed earlier on | non_main | if conversion can bypass overflow checks this issue was spotted as part of pr static void lenfail uint a uint i if i a length i a consume i i static void main string args var arr new uint lenfail arr with the following set export dotnet tieredcompilation export dotnet jitstressmodenames stress if conversion cost export dotnet jitminopts get error fatal error system accessviolationexception attempted to read or write protected memory this is often an indication that other memory is corrupt at csharptutorials program lenfail at csharptutorials program main system string abort core dumped bin release linux conditional dll why the load from a should only happen if the if condition holds with if conversion the load is effectively hoisted outside if and is always executed this is the assignment block the if conversion sees there are no flags here to indicate moving the asg before the compare is unsafe even more worrying the ind is marked as gtf ind nonfaulting a g r asg int d n lcl var int d vn void n g n ind int arr addr byref int n add byref add byref lcl var ref u last use cns int long lsh long u cast long uint lcl var int u last use n cns int long not quite sure yet on what needs to be fixed earlier on | 0 |
805,080 | 29,507,284,206 | IssuesEvent | 2023-06-03 13:18:41 | i-penr/community-analyzer | https://api.github.com/repos/i-penr/community-analyzer | opened | Add "Add individual submission" page | priority:high frontend | Related to #11, the endpoint should go to a page in the front, that basically consists of an input where the user inputs an url. | 1.0 | Add "Add individual submission" page - Related to #11, the endpoint should go to a page in the front, that basically consists of an input where the user inputs an url. | non_main | add add individual submission page related to the endpoint should go to a page in the front that basically consists of an input where the user inputs an url | 0 |
1,265 | 5,368,114,609 | IssuesEvent | 2017-02-22 07:36:12 | antigenomics/vdjdb-db | https://api.github.com/repos/antigenomics/vdjdb-db | opened | V/J mapping-2 | maintainance | * Check old/new segments
* Ensure original (authors') V segment is not changed if 1) it is found in the segment DB, 2) it has a match of at least ``2`` AA, 3) It has a match of more than ``3`` AAs or alternative match is only ``1`` or ``2`` AA longer
> NB: Numbers subject to discussion | True | V/J mapping-2 - * Check old/new segments
* Ensure original (authors') V segment is not changed if 1) it is found in the segment DB, 2) it has a match of at least ``2`` AA, 3) It has a match of more than ``3`` AAs or alternative match is only ``1`` or ``2`` AA longer
> NB: Numbers subject to discussion | main | v j mapping check old new segments ensure original authors v segment is not changed if it is found in the segment db it has a match of at least aa it has a match of more than aas or alternative match is only or aa longer nb numbers subject to discussion | 1 |
2,931 | 10,492,773,380 | IssuesEvent | 2019-09-25 13:53:34 | chocolatey-community/chocolatey-package-requests | https://api.github.com/repos/chocolatey-community/chocolatey-package-requests | closed | RFP - npcap | Status: Available For Maintainer(s) | Someone can add the support for this package?
https://github.com/nmap/npcap
I ask it because the nmap package install npcap, but sometimes it is not the latest version.
Thanks in advance! | True | RFP - npcap - Someone can add the support for this package?
https://github.com/nmap/npcap
I ask it because the nmap package install npcap, but sometimes it is not the latest version.
Thanks in advance! | main | rfp npcap someone can add the support for this package i ask it because the nmap package install npcap but sometimes it is not the latest version thanks in advance | 1 |
5,096 | 26,007,541,012 | IssuesEvent | 2022-12-20 21:00:34 | aws/aws-sam-cli | https://api.github.com/repos/aws/aws-sam-cli | closed | [Feature Request] InlineCode support for sam local | area/lambda-invoke type/feature stage/pm-review maintainer/need-followup | Hello there,
I was thinking on taking some work on the topic as InlineCode is a great feature but a few iteration of sam local invoke would be much appreciated during code development.
thus I was planning something along the line of creating a tempfile with InlineCode string content and passing along the context instead the CodeUri path..
Not sure if you see any major issue with the approach (platform, filename (using handler)) otherwise I would happily start implement.
The hook for the switch case seems reasonably here:
https://github.com/awslabs/aws-sam-cli/blob/1d7df8fce701711963efc32507305c2fdd84a3aa/samcli/commands/local/lib/local_lambda.py#L104 | True | [Feature Request] InlineCode support for sam local - Hello there,
I was thinking on taking some work on the topic as InlineCode is a great feature but a few iteration of sam local invoke would be much appreciated during code development.
thus I was planning something along the line of creating a tempfile with InlineCode string content and passing along the context instead the CodeUri path..
Not sure if you see any major issue with the approach (platform, filename (using handler)) otherwise I would happily start implement.
The hook for the switch case seems reasonably here:
https://github.com/awslabs/aws-sam-cli/blob/1d7df8fce701711963efc32507305c2fdd84a3aa/samcli/commands/local/lib/local_lambda.py#L104 | main | inlinecode support for sam local hello there i was thinking on taking some work on the topic as inlinecode is a great feature but a few iteration of sam local invoke would be much appreciated during code development thus i was planning something along the line of creating a tempfile with inlinecode string content and passing along the context instead the codeuri path not sure if you see any major issue with the approach platform filename using handler otherwise i would happily start implement the hook for the switch case seems reasonably here | 1 |
135,853 | 18,722,147,841 | IssuesEvent | 2021-11-03 13:00:49 | KDWSS/dd-trace-java | https://api.github.com/repos/KDWSS/dd-trace-java | opened | CVE-2021-20328 (Medium) detected in multiple libraries | security vulnerability | ## CVE-2021-20328 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>mongodb-driver-sync-4.1.1.jar</b>, <b>mongodb-driver-sync-4.0.0.jar</b>, <b>mongodb-driver-sync-3.10.0.jar</b></p></summary>
<p>
<details><summary><b>mongodb-driver-sync-4.1.1.jar</b></p></summary>
<p>The MongoDB Synchronous Driver</p>
<p>Library home page: <a href="http://www.mongodb.org">http://www.mongodb.org</a></p>
<p>Path to dependency file: dd-trace-java/dd-smoke-tests/springboot-mongo/springboot-mongo.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.mongodb/mongodb-driver-sync/4.1.1/15e5e6961f0c1fc774413ee129754f8f6355fb6f/mongodb-driver-sync-4.1.1.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-data-mongodb-2.4.1.jar (Root Library)
- :x: **mongodb-driver-sync-4.1.1.jar** (Vulnerable Library)
</details>
<details><summary><b>mongodb-driver-sync-4.0.0.jar</b></p></summary>
<p>The MongoDB Synchronous Driver</p>
<p>Library home page: <a href="http://www.mongodb.org">http://www.mongodb.org</a></p>
<p>Path to dependency file: dd-trace-java/dd-java-agent/instrumentation/mongo/driver-4.0-test/driver-4.0-test.gradle</p>
<p>Path to vulnerable library: /caches/modules-2/files-2.1/org.mongodb/mongodb-driver-sync/4.0.0/456f4c365e1230f425831ea51ce0603cd788a4f/mongodb-driver-sync-4.0.0.jar</p>
<p>
Dependency Hierarchy:
- :x: **mongodb-driver-sync-4.0.0.jar** (Vulnerable Library)
</details>
<details><summary><b>mongodb-driver-sync-3.10.0.jar</b></p></summary>
<p>The MongoDB Synchronous Driver</p>
<p>Path to dependency file: dd-trace-java/dd-java-agent/instrumentation/mongo/driver-3.10-sync-test/driver-3.10-sync-test.gradle</p>
<p>Path to vulnerable library: /caches/modules-2/files-2.1/org.mongodb/mongodb-driver-sync/3.10.0/db73394463873ef1718f4322c87bfc470c755617/mongodb-driver-sync-3.10.0.jar</p>
<p>
Dependency Hierarchy:
- :x: **mongodb-driver-sync-3.10.0.jar** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/KDWSS/dd-trace-java/commit/2819174635979a19573ec0ce8e3e2b63a3848079">2819174635979a19573ec0ce8e3e2b63a3848079</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Specific versions of the Java driver that support client-side field level encryption (CSFLE) fail to perform correct host name verification on the KMS server’s certificate. This vulnerability in combination with a privileged network position active MITM attack could result in interception of traffic between the Java driver and the KMS service rendering Field Level Encryption ineffective. This issue was discovered during internal testing and affects all versions of the Java driver that support CSFLE. The Java async, Scala, and reactive streams drivers are not impacted. This vulnerability does not impact driver traffic payloads with CSFLE-supported key services originating from applications residing inside the AWS, GCP, and Azure network fabrics due to compensating controls in these environments. This issue does not impact driver workloads that don’t use Field Level Encryption.
<p>Publish Date: 2021-02-25
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-20328>CVE-2021-20328</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Adjacent
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://jira.mongodb.org/browse/JAVA-4017">https://jira.mongodb.org/browse/JAVA-4017</a></p>
<p>Release Date: 2021-02-25</p>
<p>Fix Resolution: org.mongodb:mongodb-driver-sync:3.12.8,3.11.3,4.1.2,4.2.1,4.0.6,org.mongodb:mongodb-driver-legacy:3.12.8,3.11.3,4.1.2,4.2.1,4.0.6,org.mongodb:mongodb-driver:3.12.8,3.11.3,org.mongodb:mongo-java-driver:3.12.8,3.11.3</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.mongodb","packageName":"mongodb-driver-sync","packageVersion":"4.1.1","packageFilePaths":["/dd-smoke-tests/springboot-mongo/springboot-mongo.gradle"],"isTransitiveDependency":true,"dependencyTree":"org.springframework.boot:spring-boot-starter-data-mongodb:2.4.1;org.mongodb:mongodb-driver-sync:4.1.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.mongodb:mongodb-driver-sync:3.12.8,3.11.3,4.1.2,4.2.1,4.0.6,org.mongodb:mongodb-driver-legacy:3.12.8,3.11.3,4.1.2,4.2.1,4.0.6,org.mongodb:mongodb-driver:3.12.8,3.11.3,org.mongodb:mongo-java-driver:3.12.8,3.11.3"},{"packageType":"Java","groupId":"org.mongodb","packageName":"mongodb-driver-sync","packageVersion":"4.0.0","packageFilePaths":["/dd-java-agent/instrumentation/mongo/driver-4.0-test/driver-4.0-test.gradle"],"isTransitiveDependency":false,"dependencyTree":"org.mongodb:mongodb-driver-sync:4.0.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.mongodb:mongodb-driver-sync:3.12.8,3.11.3,4.1.2,4.2.1,4.0.6,org.mongodb:mongodb-driver-legacy:3.12.8,3.11.3,4.1.2,4.2.1,4.0.6,org.mongodb:mongodb-driver:3.12.8,3.11.3,org.mongodb:mongo-java-driver:3.12.8,3.11.3"},{"packageType":"Java","groupId":"org.mongodb","packageName":"mongodb-driver-sync","packageVersion":"3.10.0","packageFilePaths":["/dd-java-agent/instrumentation/mongo/driver-3.10-sync-test/driver-3.10-sync-test.gradle"],"isTransitiveDependency":false,"dependencyTree":"org.mongodb:mongodb-driver-sync:3.10.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.mongodb:mongodb-driver-sync:3.12.8,3.11.3,4.1.2,4.2.1,4.0.6,org.mongodb:mongodb-driver-legacy:3.12.8,3.11.3,4.1.2,4.2.1,4.0.6,org.mongodb:mongodb-driver:3.12.8,3.11.3,org.mongodb:mongo-java-driver:3.12.8,3.11.3"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-20328","vulnerabilityDetails":"Specific versions of the Java driver 
that support client-side field level encryption (CSFLE) fail to perform correct host name verification on the KMS server’s certificate. This vulnerability in combination with a privileged network position active MITM attack could result in interception of traffic between the Java driver and the KMS service rendering Field Level Encryption ineffective. This issue was discovered during internal testing and affects all versions of the Java driver that support CSFLE. The Java async, Scala, and reactive streams drivers are not impacted. This vulnerability does not impact driver traffic payloads with CSFLE-supported key services originating from applications residing inside the AWS, GCP, and Azure network fabrics due to compensating controls in these environments. This issue does not impact driver workloads that don’t use Field Level Encryption.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-20328","cvss3Severity":"medium","cvss3Score":"6.8","cvss3Metrics":{"A":"None","AC":"High","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Adjacent","I":"High"},"extraData":{}}</REMEDIATE> --> | True | CVE-2021-20328 (Medium) detected in multiple libraries - ## CVE-2021-20328 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>mongodb-driver-sync-4.1.1.jar</b>, <b>mongodb-driver-sync-4.0.0.jar</b>, <b>mongodb-driver-sync-3.10.0.jar</b></p></summary>
<p>
<details><summary><b>mongodb-driver-sync-4.1.1.jar</b></p></summary>
<p>The MongoDB Synchronous Driver</p>
<p>Library home page: <a href="http://www.mongodb.org">http://www.mongodb.org</a></p>
<p>Path to dependency file: dd-trace-java/dd-smoke-tests/springboot-mongo/springboot-mongo.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.mongodb/mongodb-driver-sync/4.1.1/15e5e6961f0c1fc774413ee129754f8f6355fb6f/mongodb-driver-sync-4.1.1.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-data-mongodb-2.4.1.jar (Root Library)
- :x: **mongodb-driver-sync-4.1.1.jar** (Vulnerable Library)
</details>
<details><summary><b>mongodb-driver-sync-4.0.0.jar</b></p></summary>
<p>The MongoDB Synchronous Driver</p>
<p>Library home page: <a href="http://www.mongodb.org">http://www.mongodb.org</a></p>
<p>Path to dependency file: dd-trace-java/dd-java-agent/instrumentation/mongo/driver-4.0-test/driver-4.0-test.gradle</p>
<p>Path to vulnerable library: /caches/modules-2/files-2.1/org.mongodb/mongodb-driver-sync/4.0.0/456f4c365e1230f425831ea51ce0603cd788a4f/mongodb-driver-sync-4.0.0.jar</p>
<p>
Dependency Hierarchy:
- :x: **mongodb-driver-sync-4.0.0.jar** (Vulnerable Library)
</details>
<details><summary><b>mongodb-driver-sync-3.10.0.jar</b></p></summary>
<p>The MongoDB Synchronous Driver</p>
<p>Path to dependency file: dd-trace-java/dd-java-agent/instrumentation/mongo/driver-3.10-sync-test/driver-3.10-sync-test.gradle</p>
<p>Path to vulnerable library: /caches/modules-2/files-2.1/org.mongodb/mongodb-driver-sync/3.10.0/db73394463873ef1718f4322c87bfc470c755617/mongodb-driver-sync-3.10.0.jar</p>
<p>
Dependency Hierarchy:
- :x: **mongodb-driver-sync-3.10.0.jar** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/KDWSS/dd-trace-java/commit/2819174635979a19573ec0ce8e3e2b63a3848079">2819174635979a19573ec0ce8e3e2b63a3848079</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Specific versions of the Java driver that support client-side field level encryption (CSFLE) fail to perform correct host name verification on the KMS server’s certificate. This vulnerability in combination with a privileged network position active MITM attack could result in interception of traffic between the Java driver and the KMS service rendering Field Level Encryption ineffective. This issue was discovered during internal testing and affects all versions of the Java driver that support CSFLE. The Java async, Scala, and reactive streams drivers are not impacted. This vulnerability does not impact driver traffic payloads with CSFLE-supported key services originating from applications residing inside the AWS, GCP, and Azure network fabrics due to compensating controls in these environments. This issue does not impact driver workloads that don’t use Field Level Encryption.
<p>Publish Date: 2021-02-25
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-20328>CVE-2021-20328</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Adjacent
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://jira.mongodb.org/browse/JAVA-4017">https://jira.mongodb.org/browse/JAVA-4017</a></p>
<p>Release Date: 2021-02-25</p>
<p>Fix Resolution: org.mongodb:mongodb-driver-sync:3.12.8,3.11.3,4.1.2,4.2.1,4.0.6,org.mongodb:mongodb-driver-legacy:3.12.8,3.11.3,4.1.2,4.2.1,4.0.6,org.mongodb:mongodb-driver:3.12.8,3.11.3,org.mongodb:mongo-java-driver:3.12.8,3.11.3</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.mongodb","packageName":"mongodb-driver-sync","packageVersion":"4.1.1","packageFilePaths":["/dd-smoke-tests/springboot-mongo/springboot-mongo.gradle"],"isTransitiveDependency":true,"dependencyTree":"org.springframework.boot:spring-boot-starter-data-mongodb:2.4.1;org.mongodb:mongodb-driver-sync:4.1.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.mongodb:mongodb-driver-sync:3.12.8,3.11.3,4.1.2,4.2.1,4.0.6,org.mongodb:mongodb-driver-legacy:3.12.8,3.11.3,4.1.2,4.2.1,4.0.6,org.mongodb:mongodb-driver:3.12.8,3.11.3,org.mongodb:mongo-java-driver:3.12.8,3.11.3"},{"packageType":"Java","groupId":"org.mongodb","packageName":"mongodb-driver-sync","packageVersion":"4.0.0","packageFilePaths":["/dd-java-agent/instrumentation/mongo/driver-4.0-test/driver-4.0-test.gradle"],"isTransitiveDependency":false,"dependencyTree":"org.mongodb:mongodb-driver-sync:4.0.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.mongodb:mongodb-driver-sync:3.12.8,3.11.3,4.1.2,4.2.1,4.0.6,org.mongodb:mongodb-driver-legacy:3.12.8,3.11.3,4.1.2,4.2.1,4.0.6,org.mongodb:mongodb-driver:3.12.8,3.11.3,org.mongodb:mongo-java-driver:3.12.8,3.11.3"},{"packageType":"Java","groupId":"org.mongodb","packageName":"mongodb-driver-sync","packageVersion":"3.10.0","packageFilePaths":["/dd-java-agent/instrumentation/mongo/driver-3.10-sync-test/driver-3.10-sync-test.gradle"],"isTransitiveDependency":false,"dependencyTree":"org.mongodb:mongodb-driver-sync:3.10.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.mongodb:mongodb-driver-sync:3.12.8,3.11.3,4.1.2,4.2.1,4.0.6,org.mongodb:mongodb-driver-legacy:3.12.8,3.11.3,4.1.2,4.2.1,4.0.6,org.mongodb:mongodb-driver:3.12.8,3.11.3,org.mongodb:mongo-java-driver:3.12.8,3.11.3"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-20328","vulnerabilityDetails":"Specific versions of the Java driver 
that support client-side field level encryption (CSFLE) fail to perform correct host name verification on the KMS server's certificate. This vulnerability, in combination with a privileged network position (active MITM attack), could result in interception of traffic between the Java driver and the KMS service, rendering Field Level Encryption ineffective. This issue was discovered during internal testing and affects all versions of the Java driver that support CSFLE. The Java async, Scala, and reactive streams drivers are not impacted. This vulnerability does not impact driver traffic payloads with CSFLE-supported key services originating from applications residing inside the AWS, GCP, and Azure network fabrics, due to compensating controls in these environments. This issue does not impact driver workloads that don't use Field Level Encryption.

Vulnerability URL: https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-20328; CVSS 3 severity: medium, score 6.8 (Attack Vector: Adjacent, Attack Complexity: High, Privileges Required: None, User Interaction: None, Scope: Unchanged, Confidentiality: High, Integrity: High, Availability: None).
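The host-name check the vulnerable driver versions reportedly skipped is one TLS clients normally get for free. As a minimal illustration (this is not MongoDB driver code; it is just Python's standard `ssl` module, used here as a stand-in for any TLS client stack), a correctly configured client context verifies both the certificate chain and that the certificate matches the host name:

```python
import ssl

def make_strict_client_context():
    """Build a TLS client context that verifies the peer certificate chain
    AND checks that the certificate matches the host name: the check the
    vulnerable driver versions reportedly skipped for KMS endpoints."""
    ctx = ssl.create_default_context()   # CERT_REQUIRED out of the box
    ctx.check_hostname = True            # explicit, although it is the default
    ctx.verify_mode = ssl.CERT_REQUIRED
    return ctx

# A context like this would be handed to whatever socket layer talks to the
# KMS host; with check_hostname disabled, a MITM holding any valid
# certificate could impersonate the KMS.
strict_ctx = make_strict_client_context()
```

The fix described in the advisory is simply to upgrade the driver; the snippet only shows the class of verification involved.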
2,814 | 10,065,364,461 | IssuesEvent | 2019-07-23 10:43:21 | unoplatform/uno | https://api.github.com/repos/unoplatform/uno | opened | Building Uno.UI.sln from the command line does not download %TEMP%\mono-wasm-e894d683f9f | area/vswin kind/bug kind/contributor-experience kind/maintainer-experience

## Current behavior
```
empty %TEMP%\*.*
git clone master
cd src
msbuild /m /t:restore Uno.UI.sln
msbuild /m Uno.UI.sln
```
- Compilation fails with `%TEMP%\mono-wasm-e894d683f9f does not exist.`
- Opening Visual Studio and doing right click -> build SamplesApp.Wasm will create `%TEMP%\mono-wasm-e894d683f9f`.
- Afterwards msbuild from the command line will function.
## Expected behavior
```
git clone master
cd src
msbuild just works
```
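Until the root cause is fixed, a build preflight could fail fast with a clearer message than the missing-directory error. The helper below is hypothetical tooling, not part of the Uno repo: the folder name `mono-wasm-e894d683f9f` is taken from the error message above, and the function name and placement are assumptions.

```python
import os
import tempfile

# Hash-suffixed folder name taken from the error message above; treat it as
# an example value, since the suffix presumably changes between SDK versions.
MONO_WASM_SDK_DIR = "mono-wasm-e894d683f9f"

def mono_wasm_sdk_present(temp_root=None):
    """Return True if the mono-wasm SDK folder that Visual Studio provisions
    exists under %TEMP% (or under an explicit temp_root, for testing)."""
    root = temp_root or os.environ.get("TEMP") or tempfile.gettempdir()
    return os.path.isdir(os.path.join(root, MONO_WASM_SDK_DIR))
```

A restore-time target could call a check like this and print a pointed hint ("build SamplesApp.Wasm once in Visual Studio, or download the mono-wasm SDK") instead of failing later inside the build.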
## How to reproduce it (as minimally and precisely as possible)
Forgot to grab a binlog. Opening up early on the off-chance others come across it.
## Environment
Nuget Package:
Package Version(s):
Affected platform(s):
- [ ] iOS
- [ ] Android
- [ ] WebAssembly
- [ ] Windows
- [ ] Build tasks
Visual Studio
- [ ] 2017 (version: )
- [x] 2019 (version: )
- [ ] for Mac (version: )
Relevant plugins
- [ ] Resharper (version: )
## Anything else we need to know?
<details>
<summary>Microsoft Visual Studio Enterprise 2019 - Version 16.1.6 Details</summary>
Microsoft Visual Studio Enterprise 2019
Version 16.1.6
VisualStudio.16.Release/16.1.6+29102.190
Microsoft .NET Framework
Version 4.8.03752
Installed Version: Enterprise
Visual C++ 2019 00435-60000-00000-AA184
Microsoft Visual C++ 2019
Application Insights Tools for Visual Studio Package 9.1.00429.1
Application Insights Tools for Visual Studio
ASP.NET and Web Tools 2019 16.1.429.50124
ASP.NET and Web Tools 2019
ASP.NET Web Frameworks and Tools 2019 16.1.429.50124
For additional information, visit https://www.asp.net/
Azure App Service Tools v3.0.0 16.1.429.50124
Azure App Service Tools v3.0.0
Azure Functions and Web Jobs Tools 16.1.429.50124
Azure Functions and Web Jobs Tools
C# Tools 3.1.1-beta4-19281-06+58a4b1e79aea28115e66b06f850c83a3f1fcb6d3
C# components used in the IDE. Depending on your project type and settings, a different version of the compiler may be used.
Child Process Debugging Power Tool 1.0
Power tool to add child process debugging to Visual Studio.
Common Azure Tools 1.10
Provides common services for use by Azure Mobile Services and Microsoft Azure Tools.
Extensibility Message Bus 1.1.77 (master@24013d5)
Provides common messaging-based MEF services for loosely coupled Visual Studio extension components communication and integration.
FormatDocumentOnSave 1.0
Enables auto formatting of the code when you save a file. Visual Studio supports auto formatting of the code with the CTRL+E,D or CTRL+E,F key shortcuts but with this extension the command 'Format Document' is executed on Save.
You can find the source here: https://github.com/Elders/VSE-FormatDocumentOnSave
IntelliCode Extension 1.0
IntelliCode Visual Studio Extension Detailed Info
Microsoft Azure Tools 2.9
Microsoft Azure Tools for Microsoft Visual Studio 0x10 - v2.9.20419.2
Microsoft Continuous Delivery Tools for Visual Studio 0.4
Simplifying the configuration of Azure DevOps pipelines from within the Visual Studio IDE.
Microsoft JVM Debugger 1.0
Provides support for connecting the Visual Studio debugger to JDWP compatible Java Virtual Machines
Microsoft Library Manager 1.0
Install client-side libraries easily to any web project
Microsoft MI-Based Debugger 1.0
Provides support for connecting Visual Studio to MI compatible debuggers
Microsoft Visual C++ Wizards 1.0
Microsoft Visual C++ Wizards
Microsoft Visual Studio Tools for Containers 1.1
Develop, run, validate your ASP.NET Core applications in the target environment. F5 your application directly into a container with debugging, or CTRL + F5 to edit & refresh your app without having to rebuild the container.
Microsoft Visual Studio VC Package 1.0
Microsoft Visual Studio VC Package
Mono Debugging for Visual Studio 16.1.1 (2473f22)
Support for debugging Mono processes with Visual Studio.
Node.js Tools 1.5.10424.1 Commit Hash:c3ce0ae0b29c0b3a755ffc12f8a685fe7ddd3600
Adds support for developing and debugging Node.js apps in Visual Studio
NuGet Package Manager 5.1.0
NuGet Package Manager in Visual Studio. For more information about NuGet, visit https://docs.nuget.org/
OzCodePackage Extension 1.0
OzCodePackage Visual Studio Extension Detailed Info
ProjectServicesPackage Extension 1.0
ProjectServicesPackage Visual Studio Extension Detailed Info
ResourcePackage Extension 1.0
ResourcePackage Visual Studio Extension Detailed Info
ResourcePackage Extension 1.0
ResourcePackage Visual Studio Extension Detailed Info
Snapshot Debugging Extension 1.0
Snapshot Debugging Visual Studio Extension Detailed Info
SQL Server Data Tools 16.0.61904.23160
Microsoft SQL Server Data Tools
Syntax Visualizer 1.0
An extension for visualizing Roslyn SyntaxTrees.
Test Adapter for Boost.Test 1.0
Enables Visual Studio's testing tools with unit tests written for Boost.Test. The use terms and Third Party Notices are available in the extension installation directory.
Test Adapter for Google Test 1.0
Enables Visual Studio's testing tools with unit tests written for Google Test. The use terms and Third Party Notices are available in the extension installation directory.
TypeScript Tools 16.0.10506.2004
TypeScript Tools for Microsoft Visual Studio
Visual Basic Tools 3.1.1-beta4-19281-06+58a4b1e79aea28115e66b06f850c83a3f1fcb6d3
Visual Basic components used in the IDE. Depending on your project type and settings, a different version of the compiler may be used.
Visual F# Tools 10.4 for F# 4.6 16.1.0-beta.19253.3+42526fe359672a05fd562dc16a91a43d0fe047a7
Microsoft Visual F# Tools 10.4 for F# 4.6
Visual Studio Code Debug Adapter Host Package 1.0
Interop layer for hosting Visual Studio Code debug adapters in Visual Studio
Visual Studio Tools for CMake 1.0
Visual Studio Tools for CMake
Visual Studio Tools for CMake 1.0
Visual Studio Tools for CMake
Visual Studio Tools for Containers 1.0
Visual Studio Tools for Containers
VisualStudio.Mac 1.0
Mac Extension for Visual Studio
Xamarin 16.1.0.545 (d16-1@db7c858e8)
Visual Studio extension to enable development for Xamarin.iOS and Xamarin.Android.
Xamarin Designer 16.1.0.418 (remotes/origin/d16-1@5b958bb10)
Visual Studio extension to enable Xamarin Designer tools in Visual Studio.
Xamarin Templates 16.2.112 (4db4af4)
Templates for building iOS, Android, and Windows apps with Xamarin and Xamarin.Forms.
Xamarin.Android SDK 9.3.0.23 (HEAD/d0b48056f)
Xamarin.Android Reference Assemblies and MSBuild support.
Mono: mono/mono/2018-08@3a07bd426d3
Java.Interop: xamarin/java.interop/d16-1@5ddc3e3
LibZipSharp: grendello/LibZipSharp/d16-1@44de300
LibZip: nih-at/libzip/rel-1-5-1@b95cf3f
ProGuard: xamarin/proguard/master@905836d
SQLite: xamarin/sqlite/3.27.1@8212a2d
Xamarin.Android Tools: xamarin/xamarin-android-tools/d16-1@acabd26
Xamarin.iOS and Xamarin.Mac SDK 12.10.0.157 (6bd9475)
Xamarin.iOS and Xamarin.Mac Reference Assemblies and MSBuild support.
</details>
108,250 | 4,329,923,747 | IssuesEvent | 2016-07-26 18:21:08 | NuGet/Home | https://api.github.com/repos/NuGet/Home | closed | "Upgrade vailable" filter shows upgrades that violate the version constraint | Area:VS.Client ClosedAs:Duplicate Priority:1 Type:Bug

Hello. I'm using Visual Studio 2015 Prof RTM. When I select "Manage NuGet Packages" on my project and select the "Upgrade available" filter I see updates to my packages that violate the version constraint.
Specifically I have this entry in my packages.config:
<package id="Microsoft.AspNet.Mvc" version="4.0.40804.0" targetFramework="net40" allowedVersions="(,5.0.0.0)" />
NuGet shows me an upgrade is available to version 5.2.3.
The expected behavior would be to only show upgrades to versions within my specified constraint.
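The `allowedVersions` attribute uses NuGet's interval notation, where `(,5.0.0.0)` means "any version strictly below 5.0.0.0". The checker below is a simplified sketch (numeric dotted versions only, not NuGet's full range grammar) that shows why the offered 5.2.3 upgrade should have been filtered out:

```python
def parse_version(s):
    """'4.0.40804.0' -> (4, 0, 40804, 0); shorter versions are zero-padded."""
    parts = [int(p) for p in s.split(".")]
    return tuple(parts + [0] * (4 - len(parts)))

def in_allowed_range(version, allowed):
    """Check a version against a NuGet-style interval such as '(,5.0.0.0)'.
    Only the bracket/paren interval form with numeric parts is handled."""
    lo_inclusive = allowed[0] == "["
    hi_inclusive = allowed[-1] == "]"
    lo, _, hi = allowed[1:-1].partition(",")
    v = parse_version(version)
    if lo:  # empty lower bound means unbounded below
        bound = parse_version(lo)
        if v < bound or (v == bound and not lo_inclusive):
            return False
    if hi:  # empty upper bound means unbounded above
        bound = parse_version(hi)
        if v > bound or (v == bound and not hi_inclusive):
            return False
    return True

# The installed package satisfies the constraint; the offered upgrade does not.
assert in_allowed_range("4.0.40804.0", "(,5.0.0.0)")
assert not in_allowed_range("5.2.3", "(,5.0.0.0)")
```

With a check like this applied per package, the "Upgrade available" filter would only surface versions inside the declared range.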
19,012 | 10,289,304,589 | IssuesEvent | 2019-08-27 08:33:01 | ARM-software/ComputeLibrary | https://api.github.com/repos/ARM-software/ComputeLibrary | closed | Performance improvements with lws_hint() instead of default_ndrange() | Performance

**Output of 'strings libarm_compute.so | grep arm_compute_version':**
```
arm_compute_version=v19.02 Build options: {'arch': 'armv7a', 'opencl': '1', 'neon': '1', 'build_dir': 'armv7a-linux.release', 'examples': '0', 'os': 'linux', 'Werror': '0'} Git hash=d2de75135e3741b0613a651668d85a177762ebc4
```
**Platform:**
armv7l
**Operating System:**
Linux odroid
**Problem description:**
This issue is related to `default_ndrange()` in ACL.
When we use an ACL kernel, we always add it to the command queue using the `enqueue` function. If we don't pass the last parameter of this function, it calls `default_ndrange()` to set the `lws_hint` value by default. However, that function takes a long time because it accesses the device directly with `clGetDeviceInfo` to get the device name. `ICLKernel` has already stored the `lws_hint` value during the configure step via `default_ndrange()`, so if we pass the `lws_hint` stored in `ICLKernel` as the last parameter of `enqueue`, I expect the time can be reduced.
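The ACL code itself is C++, but the caching argument can be sketched language-agnostically. In the toy model below (purely illustrative: the class and function names mimic ACL's but the bodies are invented), the default path hits the "device" on every enqueue, while the value cached at configure time is free to reuse:

```python
DEVICE_QUERIES = 0  # counts simulated clGetDeviceInfo round-trips

def cl_get_device_info():
    """Stand-in for clGetDeviceInfo: the expensive per-call device query."""
    global DEVICE_QUERIES
    DEVICE_QUERIES += 1
    return "Mali-T628"          # invented device name

def default_ndrange():
    """Mimics the default path: queries the device name on every call."""
    device = cl_get_device_info()
    return (4, 4) if "Mali" in device else (8, 8)

class Kernel:
    def configure(self):
        # Compute the local work size ONCE, as ICLKernel does at configure time.
        self._lws_hint = default_ndrange()

    def lws_hint(self):
        return self._lws_hint

def enqueue(kernel, lws=None):
    """Enqueue stand-in: falls back to default_ndrange() when no hint is given."""
    return lws if lws is not None else default_ndrange()

k = Kernel()
k.configure()                   # 1 device query, paid once
for _ in range(100):
    enqueue(k)                  # default path: 100 more device queries
for _ in range(100):
    enqueue(k, k.lws_hint())    # proposed path: zero additional queries
```

After this runs, the counter sits at 101: one query at configure time plus one per default-path enqueue, and none for the hint-passing path, which is the saving the issue proposes.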
So, I tested two cases:
1. before: original
2. after: use `lws_hint()` instead of `default_ndrange()`
Some models show improved performance even though the actual time saved is very small, and by checking the data per operator I can clearly see the improvements.
* RESHAPE
* [SqueezeNet](https://storage.googleapis.com/download.tensorflow.org/models/tflite/model_zoo/upload_20180427/squeezenet_2018_04_27.tgz): 0.268ms -> 0.125ms (2.144 times faster)
* [ResNet v2 101](https://storage.googleapis.com/download.tensorflow.org/models/tflite/model_zoo/upload_20180427/resnet_v2_101_2018_04_27.tgz) : 0.230ms -> 0.127ms (1.811 times faster)
* [Inception V3](https://storage.googleapis.com/download.tensorflow.org/models/tflite/model_zoo/upload_20180427/inception_v3_2018_04_27.tgz) : 0.290ms -> 0.146ms (1.986 times faster)
* MUL
* [DenseNet](https://storage.googleapis.com/download.tensorflow.org/models/tflite/model_zoo/upload_20180427/densenet_2018_04_27.tgz) : 9.747ms -> 6.570ms (1.484 times faster)
* [ResNet v2 101](https://storage.googleapis.com/download.tensorflow.org/models/tflite/model_zoo/upload_20180427/resnet_v2_101_2018_04_27.tgz) : 5.407ms -> 3.485ms (1.552 times faster)
* SQUEEZE
* [Mobilenet v1.0 224](http://download.tensorflow.org/models/mobilenet_v1_2018_02_22/mobilenet_v1_1.0_224.tgz) : 0.134ms -> 0.054ms (2.481 times faster)
* [Mobilenet v1.0 224 quant](http://download.tensorflow.org/models/mobilenet_v1_2018_02_22/mobilenet_v1_1.0_224_quant.tgz) : 0.090ms -> 0.050ms (1.8 times faster)
* FULLY_CONNECTED
* [Inception ResNet V2](https://storage.googleapis.com/download.tensorflow.org/models/tflite/model_zoo/upload_20180427/inception_resnet_v2_2018_04_27.tgz) : 0.405ms -> 0.238ms (1.702 times faster)
* [Inception V4](https://storage.googleapis.com/download.tensorflow.org/models/tflite/model_zoo/upload_20180427/inception_v4_2018_04_27.tgz) : 0.380ms -> 0.228ms (1.667 times faster)
* Modified kernel
* CLPermuteKernel
* CLReductionOperationKernel
* CLPixelWiseMultiplicationKernel
* CLWeightsReshapeKernel
* CLWinogradFilterTransformKernel
* CLGEMMReshapeRHSMatrixKernel
* CLReshapeLayerKernel
So I suggest to apply `lws_hint()` in the case of the above kernels. | True | Performance improvements with lws_hint() instead of default_ndrange() - **Output of 'strings libarm_compute.so | grep arm_compute_version':**
```
arm_compute_version=v19.02 Build options: {'arch': 'armv7a', 'opencl': '1', 'neon': '1', 'build_dir': 'armv7a-linux.release', 'examples': '0', 'os': 'linux', 'Werror': '0'} Git hash=d2de75135e3741b0613a651668d85a177762ebc4
```
**Platform:**
armv7l
**Operating System:**
Linux odroid
**Problem description:**
This issue is related to `default_ndrange()` in ACL.
When we use acl kernel, we always add the acl kernel to the command queue using `enqueue` function. If we don't add the last paramter of this function, it calls `default_ndrange()` function to set `lws_hint` value by default. However, the function takes a long(?) time to access the device directly using `clGetDeviceInfo` function to get device name. The `ICLKernel` has already stored the `lws_hint` value in the configure step through the `default_ndrange()` function. So I think if we set the `lws_hint` in ICLKernel to the last parameter of `enqueue` function, I expect to be able to reduce the time.
So, I tested two cases:
1. before : original
2. after : Use `lws_hint()` instead of `default_ndrange()`
Some models have improved the performance even though the actual time taken is very small. And by checking the data by operator, I can clearly see the improvements.
* RESHAPE
* [SqueezeNet](https://storage.googleapis.com/download.tensorflow.org/models/tflite/model_zoo/upload_20180427/squeezenet_2018_04_27.tgz): 0.268ms -> 0.125ms (2.144 times faster)
* [ResNet v2 101](https://storage.googleapis.com/download.tensorflow.org/models/tflite/model_zoo/upload_20180427/resnet_v2_101_2018_04_27.tgz) : 0.230ms -> 0.127ms (1.811 times faster)
* [Inception V3](https://storage.googleapis.com/download.tensorflow.org/models/tflite/model_zoo/upload_20180427/inception_v3_2018_04_27.tgz) : 0.290ms -> 0.146ms (1.986 times faster)
* MUL
* [DenseNet](https://storage.googleapis.com/download.tensorflow.org/models/tflite/model_zoo/upload_20180427/densenet_2018_04_27.tgz) : 9.747ms -> 6.570ms (1.484 times faster)
* [ResNet v2 101](https://storage.googleapis.com/download.tensorflow.org/models/tflite/model_zoo/upload_20180427/resnet_v2_101_2018_04_27.tgz) : 5.407ms -> 3.485ms (1.552 times faster)
* SQUEEZE
* [Mobilenet v1.0 224](http://download.tensorflow.org/models/mobilenet_v1_2018_02_22/mobilenet_v1_1.0_224.tgz) : 0.134ms -> 0.054ms (2.481 times faster)
* [Mobilenet v1.0 224 quant](http://download.tensorflow.org/models/mobilenet_v1_2018_02_22/mobilenet_v1_1.0_224_quant.tgz) : 0.090ms -> 0.050ms (1.8 times faster)
* FULLY_CONNECTED
* [Inception ResNet V2](https://storage.googleapis.com/download.tensorflow.org/models/tflite/model_zoo/upload_20180427/inception_resnet_v2_2018_04_27.tgz) : 0.405ms -> 0.238ms (1.702 times faster)
* [Inception V4](https://storage.googleapis.com/download.tensorflow.org/models/tflite/model_zoo/upload_20180427/inception_v4_2018_04_27.tgz) : 0.380ms -> 0.228ms (1.667 times faster)
* Modified kernel
* CLPermuteKernel
* CLReductionOperationKernel
* CLPixelWiseMultiplicationKernel
* CLWeightsReshapeKernel
* CLWinogradFilterTransformKernel
* CLGEMMReshapeRHSMatrixKernel
* CLReshapeLayerKernel
So I suggest to apply `lws_hint()` in the case of the above kernels. | non_main | performance improvements with lws hint instead of default ndrange output of strings libarm compute so grep arm compute version arm compute version build options arch opencl neon build dir linux release examples os linux werror git hash platform operating system linux odroid problem description this issue is related to default ndrange in acl when we use acl kernel we always add the acl kernel to the command queue using enqueue function if we don t add the last paramter of this function it calls default ndrange function to set lws hint value by default however the function takes a long time to access the device directly using clgetdeviceinfo function to get device name the iclkernel has already stored the lws hint value in the configure step through the default ndrange function so i think if we set the lws hint in iclkernel to the last parameter of enqueue function i expect to be able to reduce the time so i tested two cases before original after use lws hint instead of default ndrange some models have improved the performance even though the actual time taken is very small and by checking the data by operator i can clearly see the improvements reshape times faster times faster times faster mul times faster times faster squeeze times faster times faster fully connected times faster times faster modified kernel clpermutekernel clreductionoperationkernel clpixelwisemultiplicationkernel clweightsreshapekernel clwinogradfiltertransformkernel clgemmreshaperhsmatrixkernel clreshapelayerkernel so i suggest to apply lws hint in the case of the above kernels | 0 |
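As a sanity check, the speedup factors quoted above follow directly from the raw timings (speedup = before / after, rounded to three decimals); a quick sketch:

```python
# Verify the quoted speedup factors from the per-operator timings (ms).
# Each tuple is (before_ms, after_ms, quoted_speedup), taken from the list above.
timings = [
    (0.268, 0.125, 2.144),  # RESHAPE, SqueezeNet
    (0.230, 0.127, 1.811),  # RESHAPE, ResNet v2 101
    (0.290, 0.146, 1.986),  # RESHAPE, Inception V3
    (9.747, 6.570, 1.484),  # MUL, DenseNet
    (5.407, 3.485, 1.552),  # MUL, ResNet v2 101
    (0.134, 0.054, 2.481),  # SQUEEZE, Mobilenet v1.0 224
    (0.090, 0.050, 1.800),  # SQUEEZE, Mobilenet v1.0 224 quant
    (0.405, 0.238, 1.702),  # FULLY_CONNECTED, Inception ResNet V2
    (0.380, 0.228, 1.667),  # FULLY_CONNECTED, Inception V4
]

for before, after, quoted in timings:
    assert round(before / after, 3) == quoted, (before, after, quoted)
print("all speedup factors check out")
```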
74,949 | 20,515,138,925 | IssuesEvent | 2022-03-01 10:56:16 | tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow | opened | AttributeError: module 'tensorflow' has no attribute 'reduce_sum' | type:build/install | Hi while running
python -c "import tensorflow as tf;print(tf.reduce_sum(tf.random.normal([1000, 1000])))"
to verify my TensorFlow installation, I get an AttributeError:
AttributeError: module 'tensorflow' has no attribute 'reduce_sum'
tensorflow version: Version: 2.5.0 | 1.0 | AttributeError: module 'tensorflow' has no attribute 'reduce_sum' - Hi while running
python -c "import tensorflow as tf;print(tf.reduce_sum(tf.random.normal([1000, 1000])))"
to verify my TensorFlow installation, I get an AttributeError:
AttributeError: module 'tensorflow' has no attribute 'reduce_sum'
tensorflow version: Version: 2.5.0 | non_main | attributeerror module tensorflow has no attribute reduce sum hi while running python c import tensorflow as tf print tf reduce sum tf random normal to verify my tensorflow installation i get attributeerror attributeerror module tensorflow has no attribute reduce sum tensorflow version version | 0 |
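An AttributeError like this usually means the `tensorflow` name resolves to something other than the real 2.x package — e.g. a stray local `tensorflow.py` or a broken/partial install shadowing it. A minimal sketch of the shadowing mechanism (using a stand-in module, not the real library):

```python
import sys
import types

# Stand-in for a stray/partial module that shadows the real package.
fake_tf = types.ModuleType("tensorflow")
sys.modules["tensorflow"] = fake_tf

import tensorflow as tf  # picks up the shadowing module from sys.modules

print(hasattr(tf, "reduce_sum"))  # False -> tf.reduce_sum raises AttributeError
print(tf.__name__)                # tensorflow (name is right, contents are not)
```

Checking `tf.__file__` in the failing environment shows which module actually got imported.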
538,755 | 15,777,601,198 | IssuesEvent | 2021-04-01 06:34:55 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | west: Question on `west flash --hex-file` behavior with build.dir-fmt | area: West bug priority: low | **Describe the bug**
Cf. related discussion [here](https://github.com/zephyrproject-rtos/zephyr/pull/31118/files#r596729811)
**To Reproduce**
Steps to reproduce the behavior:
0. Configure dir-fmt as follows
[build]
dir-fmt = build/{board}/
1. cd samples/tfm_integration/tfm_ipc
2. west build -b stm32l562e_dk_ns
3. west flash --hex-file build/stm32l562e_dk_ns/tfm_merged.hex
Reported error:
```
FATAL ERROR: --build-dir was not given, /local/mcu/zephyrproject/zephyr/samples/tfm_integration/tfm_ipc is not a build directory and the default build directory cannot be determined. Check your build.dir-fmt configuration option
```
The equivalent procedure works well w/o dir-fmt configuration
**Expected behavior**
`west flash --hex-file` has same behavior whatever dir-fmt configuration.
**Impact**
Annoying | 1.0 | west: Question on `west flash --hex-file` behavior with build.dir-fmt - **Describe the bug**
Cf. related discussion [here](https://github.com/zephyrproject-rtos/zephyr/pull/31118/files#r596729811)
**To Reproduce**
Steps to reproduce the behavior:
0. Configure dir-fmt as follows
[build]
dir-fmt = build/{board}/
1. cd samples/tfm_integration/tfm_ipc
2. west build -b stm32l562e_dk_ns
3. west flash --hex-file build/stm32l562e_dk_ns/tfm_merged.hex
Reported error:
```
FATAL ERROR: --build-dir was not given, /local/mcu/zephyrproject/zephyr/samples/tfm_integration/tfm_ipc is not a build directory and the default build directory cannot be determined. Check your build.dir-fmt configuration option
```
The equivalent procedure works well w/o dir-fmt configuration
**Expected behavior**
`west flash --hex-file` has same behavior whatever dir-fmt configuration.
**Impact**
Annoying | non_main | west question on west flash hex file behavior with build dir fmt describe the bug cf related dicussion to reproduce steps to reproduce the behavior configure dir fmt as follows dir fmt build board cd samples tfm integration tfm ipc west build b dk ns west flash hex file build dk ns tfm merged hex reported error fatal error build dir was not given local mcu zephyrproject zephyr samples tfm integration tfm ipc is not a build directory and the default build directory cannot be determined check your build dir fmt configuration option the equivalent procedure works well w o dir fmt configuration expected behavior west flash hex file has same behavior whatever dir fmt configuration impact annoying | 0 |
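For context, west derives the default build directory by formatting the `build.dir-fmt` template; when the current directory is not itself a build directory, any parameter it cannot resolve makes the lookup fail with the FATAL ERROR above. A much-simplified Python sketch of that lookup (not west's actual code):

```python
def default_build_dir(dir_fmt: str, **known) -> str:
    # west substitutes known parameters (board, app, ...) into dir-fmt;
    # if something needed is unknown, the default build dir is ambiguous.
    try:
        return dir_fmt.format(**known)
    except (KeyError, IndexError) as exc:
        raise RuntimeError("build directory cannot be determined") from exc

# With the board known, the configured template resolves fine:
print(default_build_dir("build/{board}/", board="stm32l562e_dk_ns"))
# -> build/stm32l562e_dk_ns/

# Without it, the lookup fails -- analogous to the reported error:
try:
    default_build_dir("build/{board}/")
except RuntimeError as exc:
    print(exc)  # build directory cannot be determined
```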
3,773 | 15,864,469,764 | IssuesEvent | 2021-04-08 13:49:56 | heroku/heroku-buildpack-python | https://api.github.com/repos/heroku/heroku-buildpack-python | closed | Only export BUILD_DIR and CACHE_DIR once on compile | maintainability-issue | Currently, we export BUILD_DIR and CACHE_DIR as build vars twice.
Once on `bin/compile` line 36:
https://github.com/heroku/heroku-buildpack-python/blob/master/bin/compile#L35-L36
And then again on line 141:
https://github.com/heroku/heroku-buildpack-python/blob/master/bin/compile#L140-L141
- [ ] Add test for presence of BUILD_DIR and CACHE_DIR on compile
- [ ] Remove second instance of BUILD_DIR and CACHE_DIR | True | Only export BUILD_DIR and CACHE_DIR once on compile - Currently, we export BUILD_DIR and CACHE_DIR as build vars twice.
Once on `bin/compile` line 36:
https://github.com/heroku/heroku-buildpack-python/blob/master/bin/compile#L35-L36
And then again on line 141:
https://github.com/heroku/heroku-buildpack-python/blob/master/bin/compile#L140-L141
- [ ] Add test for presence of BUILD_DIR and CACHE_DIR on compile
- [ ] Remove second instance of BUILD_DIR and CACHE_DIR | main | only export build dir and cache dir once on compile currently we export build dir and cache dir as build vars twice once on bin compile line and then again on line add test for presence of build dir and cache dir on compile remove second instance of build dir and cache dir | 1 |
50,338 | 26,588,402,504 | IssuesEvent | 2023-01-23 05:27:12 | redpanda-data/redpanda | https://api.github.com/repos/redpanda-data/redpanda | closed | bad_allocs (via BadLogLines) in `ManyClientsTest`.`test_many_clients` | kind/bug ci-failure performance | Module: **rptest.scale_tests.many_clients_test**
Class: **ManyClientsTest**
Method: **test_many_clients**
on (arm64, VM) https://buildkite.com/redpanda/vtools/builds/5007#01857e64-9b47-44f1-ade7-92fcfff06d73
```
<BadLogLines nodes=ip-172-31-41-82(1) example="ERROR 2023-01-05 02:16:40,047 [shard 1] seastar - Failed to allocate 131072 bytes">
Traceback (most recent call last):
File "/home/ubuntu/.local/lib/python3.10/site-packages/ducktape/tests/runner_client.py", line 135, in run
data = self.run_test()
File "/home/ubuntu/.local/lib/python3.10/site-packages/ducktape/tests/runner_client.py", line 227, in run_test
return self.test_context.function(self.test)
File "/home/ubuntu/redpanda/tests/rptest/services/cluster.py", line 67, in wrapped
self.redpanda.raise_on_bad_logs(allow_list=log_allow_list)
File "/home/ubuntu/redpanda/tests/rptest/services/redpanda.py", line 1621, in raise_on_bad_logs
raise BadLogLines(bad_lines)
rptest.services.utils.BadLogLines: <BadLogLines nodes=ip-172-31-41-82(1) example="ERROR 2023-01-05 02:16:40,047 [shard 1] seastar - Failed to allocate 131072 bytes">
``` | True | bad_allocs (via BadLogLines) in `ManyClientsTest`.`test_many_clients` - Module: **rptest.scale_tests.many_clients_test**
Class: **ManyClientsTest**
Method: **test_many_clients**
on (arm64, VM) https://buildkite.com/redpanda/vtools/builds/5007#01857e64-9b47-44f1-ade7-92fcfff06d73
```
<BadLogLines nodes=ip-172-31-41-82(1) example="ERROR 2023-01-05 02:16:40,047 [shard 1] seastar - Failed to allocate 131072 bytes">
Traceback (most recent call last):
File "/home/ubuntu/.local/lib/python3.10/site-packages/ducktape/tests/runner_client.py", line 135, in run
data = self.run_test()
File "/home/ubuntu/.local/lib/python3.10/site-packages/ducktape/tests/runner_client.py", line 227, in run_test
return self.test_context.function(self.test)
File "/home/ubuntu/redpanda/tests/rptest/services/cluster.py", line 67, in wrapped
self.redpanda.raise_on_bad_logs(allow_list=log_allow_list)
File "/home/ubuntu/redpanda/tests/rptest/services/redpanda.py", line 1621, in raise_on_bad_logs
raise BadLogLines(bad_lines)
rptest.services.utils.BadLogLines: <BadLogLines nodes=ip-172-31-41-82(1) example="ERROR 2023-01-05 02:16:40,047 [shard 1] seastar - Failed to allocate 131072 bytes">
``` | non_main | bad allocs via badloglines in manyclientstest test many clients module rptest scale tests many clients test class manyclientstest method test many clients on vm traceback most recent call last file home ubuntu local lib site packages ducktape tests runner client py line in run data self run test file home ubuntu local lib site packages ducktape tests runner client py line in run test return self test context function self test file home ubuntu redpanda tests rptest services cluster py line in wrapped self redpanda raise on bad logs allow list log allow list file home ubuntu redpanda tests rptest services redpanda py line in raise on bad logs raise badloglines bad lines rptest services utils badloglines | 0 |
979 | 4,731,746,485 | IssuesEvent | 2016-10-19 03:57:04 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | snmp_facts cannot import pysnmp | affects_2.1 bug_report networking waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
snmp_facts
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.1.0.0
config file = /home/yd_hzj/project/AnsibleCmdb/common/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
control node: redhat 7.0
remote hosts: switch
##### SUMMARY
<!--- Explain the problem briefly -->
I want to use the snmp_facts module, and I installed pysnmp,
but every time I execute the module, it returns the error "msg": "Missing required pysnmp module (check docs)"
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
(AnsibleCmdb) [yd_hzj@localhost ansible-snmp-facts-master]$ ansible-playbook snmp_facts-playbook.yml -i 10.223.53.3, -vvv
Using /home/yd_hzj/project/rango/ansible/ansible.cfg as config file
PLAYBOOK: snmp_facts-playbook.yml **********************************************
1 plays in snmp_facts-playbook.yml
PLAY [10.223.53.3] *************************************************************
TASK [Get SNMP information from switches] **************************************
task path: /home/yd_hzj/test/ansible-snmp-facts-master/snmp_facts-playbook.yml:6
<10.223.53.3> ESTABLISH LOCAL CONNECTION FOR USER: yd_hzj
<10.223.53.3> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python && sleep 0'
fatal: [10.223.53.3]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_args": {"authkey": null, "community": "aaabbb", "host": "10.223.53.3", "integrity": null, "level": null, "privacy": null, "privkey": null, "removeplaceholder": null, "username": null, "version": "v2"}, "module_name": "snmp_facts"}, "msg": "Missing required pysnmp module (check docs)"}
NO MORE HOSTS LEFT *************************************************************
to retry, use: --limit @snmp_facts-playbook.retry
PLAY RECAP *********************************************************************
10.223.53.3 : ok=0 changed=0 unreachable=0 failed=1
<!--- Paste example playbooks or commands between quotes below -->
```
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
I know it's not a snmp_facts problem, but I just cannot figure it out
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with high verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
I tried to import it manually from the command line; there is no error there:
(AnsibleCmdb) [yd_hzj@localhost ansible-snmp-facts-master]$ python
Python 2.7.5 (default, Jul 31 2016, 13:13:56)
[GCC 4.8.2 20140120 (Red Hat 4.8.2-16)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>>
>>> from pysnmp.entity.rfc3413.oneliner import cmdgen
>>>
>>> import pysnmp
>>> print pysnmp.version
(4, 3, 2)
>>>
```
| True | snmp_facts cannot import pysnmp - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
snmp_facts
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.1.0.0
config file = /home/yd_hzj/project/AnsibleCmdb/common/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
control node: redhat 7.0
remote hosts: switch
##### SUMMARY
<!--- Explain the problem briefly -->
I want to use the snmp_facts module, and I installed pysnmp,
but every time I execute the module, it returns the error "msg": "Missing required pysnmp module (check docs)"
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
(AnsibleCmdb) [yd_hzj@localhost ansible-snmp-facts-master]$ ansible-playbook snmp_facts-playbook.yml -i 10.223.53.3, -vvv
Using /home/yd_hzj/project/rango/ansible/ansible.cfg as config file
PLAYBOOK: snmp_facts-playbook.yml **********************************************
1 plays in snmp_facts-playbook.yml
PLAY [10.223.53.3] *************************************************************
TASK [Get SNMP information from switches] **************************************
task path: /home/yd_hzj/test/ansible-snmp-facts-master/snmp_facts-playbook.yml:6
<10.223.53.3> ESTABLISH LOCAL CONNECTION FOR USER: yd_hzj
<10.223.53.3> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python && sleep 0'
fatal: [10.223.53.3]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_args": {"authkey": null, "community": "aaabbb", "host": "10.223.53.3", "integrity": null, "level": null, "privacy": null, "privkey": null, "removeplaceholder": null, "username": null, "version": "v2"}, "module_name": "snmp_facts"}, "msg": "Missing required pysnmp module (check docs)"}
NO MORE HOSTS LEFT *************************************************************
to retry, use: --limit @snmp_facts-playbook.retry
PLAY RECAP *********************************************************************
10.223.53.3 : ok=0 changed=0 unreachable=0 failed=1
<!--- Paste example playbooks or commands between quotes below -->
```
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
I know it's not a snmp_facts problem, but I just cannot figure it out
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with high verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
I tried to import it manually from the command line; there is no error there:
(AnsibleCmdb) [yd_hzj@localhost ansible-snmp-facts-master]$ python
Python 2.7.5 (default, Jul 31 2016, 13:13:56)
[GCC 4.8.2 20140120 (Red Hat 4.8.2-16)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>>
>>> from pysnmp.entity.rfc3413.oneliner import cmdgen
>>>
>>> import pysnmp
>>> print pysnmp.version
(4, 3, 2)
>>>
```
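Note the EXEC line earlier: Ansible invokes `/usr/bin/python`, while pysnmp was verified inside the `AnsibleCmdb` virtualenv — a classic interpreter mismatch (typically fixed by setting `ansible_python_interpreter`). A generic sketch for checking what the *current* interpreter can actually import:

```python
import importlib.util
import sys

def importable(name: str) -> bool:
    # True only if *this* interpreter can locate the module; packages
    # installed into a different interpreter/virtualenv won't be found.
    return importlib.util.find_spec(name) is not None

print(sys.executable)                    # which python is really running
print(importable("json"))                # stdlib: True in any interpreter
print(importable("no_such_module_xyz"))  # a missing module: False
```

Running this with the same interpreter Ansible uses (`/usr/bin/python` in the log above) would show whether pysnmp is visible to it.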
| main | snmp facts cannot import pysnmp issue type bug report component name snmp facts ansible version ansible config file home yd hzj project ansiblecmdb common ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific control node redhat remote hosts switch summary want to use snmp facts module and i installed the pysnmp but everytime i execute the module returns the error msg missing required pysnmp module check docs steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used ansiblecmdb ansible playbook snmp facts playbook yml i vvv using home yd hzj project rango ansible ansible cfg as config file playbook snmp facts playbook yml plays in snmp facts playbook yml play task task path home yd hzj test ansible snmp facts master snmp facts playbook yml establish local connection for user yd hzj exec bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python sleep fatal failed changed false failed true invocation module args authkey null community aaabbb host integrity null level null privacy null privkey null removeplaceholder null username null version module name snmp facts msg missing required pysnmp module check docs no more hosts left to retry use limit snmp facts playbook retry play recap ok changed unreachable failed expected results i know it s not the snmp facts problem bu i just cannot figure it out actual results i tried to import manully by command line there is no error on that ansiblecmdb python python default jul on type help copyright credits or license for more information from pysnmp entity oneliner import cmdgen import pysnmp print pysnmp version | 1 |
4,460 | 23,224,109,436 | IssuesEvent | 2022-08-02 21:24:29 | rustsec/advisory-db | https://api.github.com/repos/rustsec/advisory-db | closed | `prettytables-rs` is abandoned | Waiting90d Unmaintained | The `prettytables-rs` [library](https://github.com/phsym/prettytable-rs) has been abandoned.
There's been no active development for roughly three years, and the author hasn't been reachable either.
There exists an [open PR](https://github.com/phsym/prettytable-rs/pull/138) and an [open ticket](https://github.com/phsym/prettytable-rs/issues/139) to mark the crate as deprecated/abandoned, but the author has not answered on either yet.
There haven't been any comments from the crate's maintainer on any other issue in several years either.
There's been no active development for roughly three years, and the author hasn't been reachable either.
There exists an [open PR](https://github.com/phsym/prettytable-rs/pull/138) and an [open ticket](https://github.com/phsym/prettytable-rs/issues/139) to mark the crate as deprecated/abandoned, but the author has not answered on either yet.
There haven't been any comments from the crate's maintainer on any other issue in several years either.
5,840 | 31,025,583,137 | IssuesEvent | 2023-08-10 08:55:10 | albertlauncher/plugins | https://api.github.com/repos/albertlauncher/plugins | closed | [QBM] Add background color for highlight_color | Maintainer wanted Feature request | - CHECK FOR EXISTING ISSUES, ALSO CLOSED ONES!
**yes**
**Description**
With the original `Widget Box Model`, the selected (highlighted) choice is given a different background color. It would be nice to see this feature with the `QML Box Model` too :+1:
**Expected behavior**
The selected item's background color changes.
**Steps to reproduce**
* Select `QML Box Model` in settings
* Open theme `PropertyEditor`
* Look for the following theme color setting: `highlight_color` - it exists! But that is the foreground text color
* The foreground text color alone is too hard to see from a distance
* Look for the following theme color setting: `highlight_color_bg` - it doesn't exist! ...but it would solve the problem
**Screenshots**
**Problem:**
QML Box Model (new style)

**Solution**:
Widget Box model (old style)

**Additional information (please provide the following information):**
`albert 0.17.2`
installed today from obs software repo (ubuntu 21.04) like this:
```sh
echo 'deb http://download.opensuse.org/repositories/home:/manuelschneid3r/xUbuntu_21.04/ /' | sudo tee /etc/apt/sources.list.d/home:manuelschneid3r.list
curl -fsSL https://download.opensuse.org/repositories/home:manuelschneid3r/xUbuntu_21.04/Release.key | gpg --dearmor | sudo tee /etc/apt/trusted.gpg.d/home_manuelschneid3r.gpg > /dev/null
sudo apt update
sudo apt install albert
```
<details>
```sh
$ albert --report
core: Albert version: 0.17.2
core: Build date: Oct 29 2017 00:00:00
core: Qt version: 5.15.2
core: QT_QPA_PLATFORMTHEME: qt5ct
core: Binary location: /usr/bin/albert
core: PWD: /home/id/.bin
core: SHELL: /bin/bash
core: LANG: en_GB.UTF-8
core: XDG_SESSION_TYPE: x11
core: XDG_CURRENT_DESKTOP: Budgie:GNOME
core: DESKTOP_SESSION: budgie-desktop
core: XDG_SESSION_DESKTOP: budgie-desktop
core: OS: Ubuntu 21.04
core: OS (type/version): ubuntu/21.04
core: Build ABI: x86_64-little_endian-lp64
core: Arch (build/current): x86_64/x86_64
core: Kernel (type/version): linux/5.11.14-051114-lowlatency
```
</details>
<details>
<summary>Output of `albert` when started in a terminal (stdout/stderr)</summary>
```sh
$ albert
19:28:03 [info:core] Initializing application
19:28:04 [info:default] Systems icon theme is: "Surfn-Evopop"
19:28:04 [debg:default] Icon theme "Surfn-Evopop" not found.
19:28:04 [info:core] Initializing mandatory paths
19:28:04 [info:core] Setup signal handlers
19:28:04 [info:core] Creating running indicator file
19:28:04 [info:core] Initializing core components
19:28:04 [warn:default] file:///usr/share/albert/org.albert.frontend.qmlboxmodel/styles/BoxModel/MainComponent.qml:275:5: QML Connections: Implicitly defined onFoo properties in Connections are deprecated. Use this syntax instead: function onFoo(<arguments>) { ... }
19:28:04 [warn:default] file:///usr/share/albert/org.albert.frontend.qmlboxmodel/styles/BoxModel/DesktopListView.qml:16:5: QML Connections: Implicitly defined onFoo properties in Connections are deprecated. Use this syntax instead: function onFoo(<arguments>) { ... }
19:28:04 [warn:default] file:///usr/share/albert/org.albert.frontend.qmlboxmodel/styles/BoxModel/DesktopListView.qml:16:5: QML Connections: Implicitly defined onFoo properties in Connections are deprecated. Use this syntax instead: function onFoo(<arguments>) { ... }
19:28:04 [info:core] Loading extension org.albert.extension.applications
19:28:04 [info:apps] Start indexing applications.
19:28:04 [debg:core] org.albert.extension.applications loaded in 0 milliseconds
19:28:04 [info:core] Loading extension org.albert.extension.calculator
19:28:04 [debg:core] org.albert.extension.calculator loaded in 3 milliseconds
19:28:04 [info:core] Loading extension org.albert.extension.files
19:28:04 [debg:applications] Loading file index from cache.
```
</details>
| True | [QBM] Add background color for highlight_color - - CHECK FOR EXISTING ISSUES, ALSO CLOSED ONES!
**yes**
**Description**
With the original `Widget Box Model`, the selected (highlighted) choice is given a different background color. It would be nice to see this feature with the `QML Box Model` too :+1:
**Expected behavior**
The selected item's background color changes.
**Steps to reproduce**
* Select `QML Box Model` in settings
* Open theme `PropertyEditor`
* Look for the following theme color setting: `highlight_color` - it exists! But that is the foreground text color
* The foreground text color alone is too hard to see from a distance
* Look for the following theme color setting: `highlight_color_bg` - it doesn't exist! ...but it would solve the problem
**Screenshots**
**Problem:**
QML Box Model (new style)

**Solution**:
Widget Box model (old style)

**Additional information (please provide the following information):**
`albert 0.17.2`
installed today from obs software repo (ubuntu 21.04) like this:
```sh
echo 'deb http://download.opensuse.org/repositories/home:/manuelschneid3r/xUbuntu_21.04/ /' | sudo tee /etc/apt/sources.list.d/home:manuelschneid3r.list
curl -fsSL https://download.opensuse.org/repositories/home:manuelschneid3r/xUbuntu_21.04/Release.key | gpg --dearmor | sudo tee /etc/apt/trusted.gpg.d/home_manuelschneid3r.gpg > /dev/null
sudo apt update
sudo apt install albert
```
<details>
```sh
$ albert --report
core: Albert version: 0.17.2
core: Build date: Oct 29 2017 00:00:00
core: Qt version: 5.15.2
core: QT_QPA_PLATFORMTHEME: qt5ct
core: Binary location: /usr/bin/albert
core: PWD: /home/id/.bin
core: SHELL: /bin/bash
core: LANG: en_GB.UTF-8
core: XDG_SESSION_TYPE: x11
core: XDG_CURRENT_DESKTOP: Budgie:GNOME
core: DESKTOP_SESSION: budgie-desktop
core: XDG_SESSION_DESKTOP: budgie-desktop
core: OS: Ubuntu 21.04
core: OS (type/version): ubuntu/21.04
core: Build ABI: x86_64-little_endian-lp64
core: Arch (build/current): x86_64/x86_64
core: Kernel (type/version): linux/5.11.14-051114-lowlatency
```
</details>
<details>
<summary>Output of `albert` when started in a terminal (stdout/stderr)</summary>
```sh
$ albert
19:28:03 [info:core] Initializing application
19:28:04 [info:default] Systems icon theme is: "Surfn-Evopop"
19:28:04 [debg:default] Icon theme "Surfn-Evopop" not found.
19:28:04 [info:core] Initializing mandatory paths
19:28:04 [info:core] Setup signal handlers
19:28:04 [info:core] Creating running indicator file
19:28:04 [info:core] Initializing core components
19:28:04 [warn:default] file:///usr/share/albert/org.albert.frontend.qmlboxmodel/styles/BoxModel/MainComponent.qml:275:5: QML Connections: Implicitly defined onFoo properties in Connections are deprecated. Use this syntax instead: function onFoo(<arguments>) { ... }
19:28:04 [warn:default] file:///usr/share/albert/org.albert.frontend.qmlboxmodel/styles/BoxModel/DesktopListView.qml:16:5: QML Connections: Implicitly defined onFoo properties in Connections are deprecated. Use this syntax instead: function onFoo(<arguments>) { ... }
19:28:04 [warn:default] file:///usr/share/albert/org.albert.frontend.qmlboxmodel/styles/BoxModel/DesktopListView.qml:16:5: QML Connections: Implicitly defined onFoo properties in Connections are deprecated. Use this syntax instead: function onFoo(<arguments>) { ... }
19:28:04 [info:core] Loading extension org.albert.extension.applications
19:28:04 [info:apps] Start indexing applications.
19:28:04 [debg:core] org.albert.extension.applications loaded in 0 milliseconds
19:28:04 [info:core] Loading extension org.albert.extension.calculator
19:28:04 [debg:core] org.albert.extension.calculator loaded in 3 milliseconds
19:28:04 [info:core] Loading extension org.albert.extension.files
19:28:04 [debg:applications] Loading file index from cache.
```
</details>
| main | add background color for highlight color check for existing issues also closed ones yes description with orignal widget box model then the selected highlighted choice is given a different background color it would be nice to see this feature also with qml box model too expected behavior the selected item has background color change steps to reproduce select qml box model in settings open theme propertyeditor look for the following theme color setting highlight color it exists but that is the foreground text color the foreground text color is too hard to see from a distance look for the following theme color setting highlight color bg doesnt exists but it would solve the problem screenshots problem qml box model new style solution widget box model old style additional information please provide the following information albert installed today from obs software repo ubuntu like this sh echo deb sudo tee etc apt sources list d home list curl fssl gpg dearmor sudo tee etc apt trusted gpg d home gpg dev null sudo apt update sudo apt install albert sh albert report core albert version core build date oct core qt version core qt qpa platformtheme core binary location usr bin albert core pwd home id bin core shell bin bash core lang en gb utf core xdg session type core xdg current desktop budgie gnome core desktop session budgie desktop core xdg session desktop budgie desktop core os ubuntu core os type version ubuntu core build abi little endian core arch build current core kernel type version linux lowlatency output of albert when started in a terminal stdout stderr sh albert initializing application systems icon theme is surfn evopop icon theme surfn evopop not found initializing mandatory paths setup signal handlers creating running indicator file initializing core components file usr share albert org albert frontend qmlboxmodel styles boxmodel maincomponent qml qml connections implicitly defined onfoo properties in connections are deprecated use this syntax 
instead function onfoo file usr share albert org albert frontend qmlboxmodel styles boxmodel desktoplistview qml qml connections implicitly defined onfoo properties in connections are deprecated use this syntax instead function onfoo file usr share albert org albert frontend qmlboxmodel styles boxmodel desktoplistview qml qml connections implicitly defined onfoo properties in connections are deprecated use this syntax instead function onfoo loading extension org albert extension applications start indexing applications org albert extension applications loaded in milliseconds loading extension org albert extension calculator org albert extension calculator loaded in milliseconds loading extension org albert extension files loading file index from cache | 1 |
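One way the requested key could behave — a background highlight value with a graceful fallback so existing themes keep working. The property name `highlight_color_bg` is the one proposed above; the default value here is a hypothetical choice, not albert's actual behavior:

```python
TRANSPARENT = "#00000000"  # hypothetical default: no bg change, old behavior

def highlight_bg(theme: dict) -> str:
    # Themes that don't define the new key fall back to the default.
    return theme.get("highlight_color_bg", TRANSPARENT)

old_theme = {"highlight_color": "#ffaa00"}
new_theme = {"highlight_color": "#ffaa00", "highlight_color_bg": "#334455"}

print(highlight_bg(old_theme))  # #00000000
print(highlight_bg(new_theme))  # #334455
```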
19,029 | 26,445,885,764 | IssuesEvent | 2023-01-16 07:13:06 | thehcginstitute-com/m1 | https://api.github.com/repos/thehcginstitute-com/m1 | closed | Make the `IWD_OrderManager` module compatible with the `Raveinfosys_Exporter` module | third-party-compatibility | A part of https://github.com/thehcginstitute-com/m1/issues/4

| True | Make the `IWD_OrderManager` module compatible with the `Raveinfosys_Exporter` module - A part of https://github.com/thehcginstitute-com/m1/issues/4

| non_main | make the iwd ordermanager module compatible with the raveinfosys exporter module a part of | 0 |
55,583 | 23,508,538,413 | IssuesEvent | 2022-08-18 14:33:21 | aws/aws-sdk | https://api.github.com/repos/aws/aws-sdk | closed | Add Allowed scope for generating AccessToken from initiate-auth action | feature-request service-api cognito-idp | **Is your feature request related to a problem? Please describe.**
It would be great to be able to generate an access token with some allowed scopes according to what we defined in the corresponding AppClient.
**Describe the solution you'd like**
Add a parameter called `[--scopes <value>]` type `string` which will represent the allowed scope to generate the Token. The scopes can be multiple and comma-separated.
**Additional context**
E.g.:
```
aws cognito-idp initiate-auth --region us-east-1 --scopes "openid,api-gtw/proxy" --cli-input-json file://auth_data.json
```
Current behavior:
Currently, when I get a token with the above command, it carries only the default scope `aws.cognito.signin.user.admin`. I'm using an API Gateway with some custom scopes defined in my AppClient, and I'm not able to connect with a token obtained via awscli because that token doesn't have the corresponding custom scope.
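For anyone hitting the same wall, one way to confirm the problem is to decode the access token's payload and look at its `scope` claim (the token is only inspected here, not verified). A minimal sketch; the token below is a hand-built stand-in for illustration, not a real Cognito token:

```python
import base64
import json

def jwt_payload(token: str) -> dict:
    """Decode the (unverified) payload segment of a JWT."""
    seg = token.split(".")[1]
    seg += "=" * (-len(seg) % 4)  # restore the base64 padding that JWTs strip
    return json.loads(base64.urlsafe_b64decode(seg))

# Hand-built stand-in for the token initiate-auth returns today
claims = {"scope": "aws.cognito.signin.user.admin"}
body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=")
token = f"header.{body}.signature"

print(jwt_payload(token)["scope"])  # aws.cognito.signin.user.admin
```

If the printed claim only ever shows the default scope, the API Gateway authorizer will keep rejecting the token no matter how the request is built.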

Thanks so much in advance | 1.0 | Add Allowed scope for generating AccessToken from initiate-auth action - **Is your feature request related to a problem? Please describe.**
It would be great to be able to generate an access token with some allowed scopes according to what we defined in the corresponding AppClient.
**Describe the solution you'd like**
Add a parameter called `[--scopes <value>]` type `string` which will represent the allowed scope to generate the Token. The scopes can be multiple and comma-separated.
**Additional context**
E.g:
```
aws cognito-idp initiate-auth --region us-east-1 --scopes "openid,api-gtw/proxy" --cli-input-json file://auth_data.json
```
Current behavior:
Currently, when I get a token with the above command, it carries only the default scope `aws.cognito.signin.user.admin`. I'm using an API Gateway with some custom scopes defined in my AppClient, and I'm not able to connect with a token obtained via awscli because that token doesn't have the corresponding custom scope.

Thanks so much in advance | non_main | add allowed scope for generating accesstoken from initiate auth action is your feature request related to a problem please describe it would be great to be able to generate an access token with some allowed scope according to what we defined in the corresponding appclient describe the solution you d like add a parameter called type string which will represent the allowed scope to generate the token the scopes can be multiple and comma separated additional context e g aws cognito idp initiate auth region us east scopes openid api gtw proxy cli input json file auth data json current behavior currently when i get a token with the above command i get a token with the default scope aws cognito signin user admin i m using an apigateway with some custom scopes defined in my appclient and i m not able to connect with a token got by awscli because that token doesn t have the corresponding custom scope thanks so much in advance | 0 |
5,137 | 26,196,752,307 | IssuesEvent | 2023-01-03 14:08:19 | precice/precice | https://api.github.com/repos/precice/precice | closed | Can we assume readData is also receiveData? | bug question maintainability | In #1526 I stumbled over a [configuration file](https://github.com/precice/precice/blob/2552a91041bd93aec0b2eb9d5da62287367ef7a0/tests/serial/watch-integral/WatchIntegralScaleAndNoScale.xml) for one of tests. The following part about the configuration looks odd: `DataTwo` is defined as `read-data` for `SolverTwo`, but there is no mapping or exchange defined for `DataTwo`. This looks to me like an invalid configuration that preCICE should be able to detect. From my perspective every `read-data` of a participant should appear in a corresponding `exchange` and therefore be part of the coupling scheme's receive data.
Main questions:
* Is the situation described above a valid use-case? I cannot imagine one.
* If we decide that the configuration above is invalid, how can we check it? What's the corrected version of the test in #1526?
Additional context:
* Assuming that every readData is part of receiveData makes the implementation simpler and removes some edge cases in #1523. | True | Can we assume readData is also receiveData? - In #1526 I stumbled over a [configuration file](https://github.com/precice/precice/blob/2552a91041bd93aec0b2eb9d5da62287367ef7a0/tests/serial/watch-integral/WatchIntegralScaleAndNoScale.xml) for one of tests. The following part about the configuration looks odd: `DataTwo` is defined as `read-data` for `SolverTwo`, but there is no mapping or exchange defined for `DataTwo`. This looks to me like an invalid configuration that preCICE should be able to detect. From my perspective every `read-data` of a participant should appear in a corresponding `exchange` and therefore be part of the coupling scheme's receive data.
Main questions:
* Is the situation described above a valid use-case? I cannot imagine one.
* If we decide that the configuration above is invalid, how can we check it? What's the corrected version of the test in #1526?
Additional context:
* Assuming that every readData is part of receiveData makes the implementation simpler and removes some edge cases in #1523. | main | can we assume readdata is also receivedata in i stumbled over a for one of tests the following part about the configuration looks odd datatwo is defined as read data for solvertwo but there is no mapping or exchange defined for datatwo this looks to me like an invalid configuration that precice should be able to detect from my perspective every read data of a participant should appear in a corresponding exchange and therefore be part of the coupling scheme s receive data main questions is the situation descibed above a valid use case i cannot imagine one if we decide that the configuration above is invalid how can we check it what s the corrected version of the test in additional context assuming that every readdata is part of receivedata makes the implementation simpler and removes some edge cases in | 1 |
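The check asked for in the preCICE record above (every `read-data` entry of a participant must be delivered by some `exchange`) can be sketched as a small validation pass. This is a hypothetical illustration over plain sets of data names, not preCICE's actual configuration code:

```python
def unexchanged_read_data(read_data: set, exchanged: set) -> list:
    """Return the read-data names that no exchange ever delivers to the participant."""
    return sorted(read_data - exchanged)

# Mirrors the situation described above: DataTwo is declared as read-data
# for SolverTwo, but no exchange carries it.
missing = unexchanged_read_data({"DataTwo"}, {"DataOne"})
if missing:
    print(f"invalid configuration: no exchange delivers {missing}")
```

An empty result would mean every read-data is also receive-data, which is exactly the invariant the issue proposes to enforce at configuration time.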
409,120 | 27,722,809,521 | IssuesEvent | 2023-03-14 22:16:48 | dev-Jaewon/Hello-market | https://api.github.com/repos/dev-Jaewon/Hello-market | opened | msw(mock-service-worker) | documentation | When writing test cases, I had been writing the mock data for each API call by hand with the `mockResolvedValue` method,
but I found it inconvenient that the mock data had to be set up once more for the browser in development mode.
So I looked into it and learned of a way to use the same data in both the browser environment and the Node Jest environment with a simple setup.
---------------------------
The most important part of the setup is that the mock data must be configured before the DOM is rendered.
In Python the import statement is basically `synchronous`, but in JavaScript the import statement is said to be handled `asynchronously`.
If so, `import must not be used when setting up msw.`
>If an import statement is used, the DOM is rendered before the msw setup has finished, the API calls fired by the lifecycle run, and an error occurs.
>That is why modules must not be pulled in with an import statement.
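The ordering constraint in this write-up (mock setup must complete before the first render fires an API call) can be sketched outside the JS ecosystem too. A hypothetical Python analogue with asyncio, where `start_mocks` stands in for msw's asynchronous worker start and `render_app` for the first render:

```python
import asyncio

events = []

async def start_mocks():
    # stands in for msw's asynchronous worker registration
    await asyncio.sleep(0)      # simulate the async setup step
    events.append("mocks ready")

async def render_app():
    # the first render immediately fires the API calls mentioned above
    events.append("render")

async def main():
    await start_mocks()   # awaiting setup first guarantees the order...
    await render_app()    # ...so the render never races the mock registration

asyncio.run(main())
print(events)  # ['mocks ready', 'render']
```

Swapping the two awaits (or kicking off `render_app` without awaiting `start_mocks`) reproduces the race the issue describes: the render runs against unmocked endpoints.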
| 1.0 | msw(mock-service-worker) - When writing test cases, I had been writing the mock data for each API call by hand with the `mockResolvedValue` method,
but I found it inconvenient that the mock data had to be set up once more for the browser in development mode.
So I looked into it and learned of a way to use the same data in both the browser environment and the Node Jest environment with a simple setup.
---------------------------
The most important part of the setup is that the mock data must be configured before the DOM is rendered.
In Python the import statement is basically `synchronous`, but in JavaScript the import statement is said to be handled `asynchronously`.
If so, `import must not be used when setting up msw.`
>If an import statement is used, the DOM is rendered before the msw setup has finished, the API calls fired by the lifecycle run, and an error occurs.
>That is why modules must not be pulled in with an import statement.
| non_main | msw mock service worker 테스트 케이스 작성시 mockresolvedvalue 메소드로 test 케이스에 api 호출에 대한 mcok data를 일일히 작성하였는데 브라우저를 개발 모드에서도 한번더 mock data 세팅이 필요한 불편함을 느꼈다 그래서 찾아보니 간단한 세팅으로 동일한 데이터를 브라우저 환경과 node jest 환경 양쪽에서 사용할 수 있는 방법을 알게되었다 세팅에서 가장 중요한거는 dom이 렌더링이 되기전에 mock data를 먼저 설정해주어야한다 파이썬에서 import문은 기본적으로 동기 이고 javascript에서 사용될 때는 import문은 비동기 처리라고 한다 그렇다면 msw 세팅할 떄는 import를 사용하면 안된다 import 문을 이용할경우 msw 세팅이 끝나기도 전에 dom이 렌더링 되고 라이프 사이클에 의해 호출 되는 api 가 실행이 되어 에러가 발생된다 그래서 import문을 이용해서 모듈을 가져오면 안된다 | 0 |
256,467 | 19,423,768,359 | IssuesEvent | 2021-12-21 00:55:20 | cc-tweaked/CC-Tweaked | https://api.github.com/repos/cc-tweaked/CC-Tweaked | closed | "Long-form" documentation | enhancement good first issue area-Documentation | As mentioned in #133, https://tweaked.cc is now in a state I'm pretty happy with. As a result, @Lemmmy and I are planning to phase out the wiki, as it's unlikely to be maintained.
However, there are some things currently that the wiki does do much better, which is "long form" articles. For instance:
- [This page on `require`](https://wiki.computercraft.cc/Require#Creating_a_library_for_require)
- [Or "Network Security"](https://wiki.computercraft.cc/Network_security#Rednet) (and similarly [Network Attacks](http://www.computercraft.info/wiki/Network_Attacks) on the original forums)
- Tutorials
- Information on blocks/items
It'd be good to host some of this on tweaked.cc, mostly so we've got documentation in one place.
That said, the wiki "format" is probably the right thing for these sorts of docs. ~I think the best solution would be to write the docs on this repo's wiki, and then build it as part of the main website.~ I lied. While I guess it is more easy to contribute to, I do still probably want these in version control. | 1.0 | "Long-form" documentation - As mentioned in #133, https://tweaked.cc is now in a state I'm pretty happy with. As a result, @Lemmmy and I are planning to phase out the wiki, as it's unlikely to be maintained.
However, there are some things currently that the wiki does do much better, which is "long form" articles. For instance:
- [This page on `require`](https://wiki.computercraft.cc/Require#Creating_a_library_for_require)
- [Or "Network Security"](https://wiki.computercraft.cc/Network_security#Rednet) (and similarly [Network Attacks](http://www.computercraft.info/wiki/Network_Attacks) on the original forums)
- Tutorials
- Information on blocks/items
It'd be good to host some of this on tweaked.cc, mostly so we've got documentation in one place.
That said, the wiki "format" is probably the right thing for these sorts of docs. ~I think the best solution would be to write the docs on this repo's wiki, and then build it as part of the main website.~ I lied. While I guess it is more easy to contribute to, I do still probably want these in version control. | non_main | long form documentation as mentioned in is now in a state i m pretty happy with as a result lemmmy and i are planning to phase out the wiki as it s unlikely to be maintained however there are some things currently that the wiki does do much better which is long form articles for instance and similarly on the original forums tutorials information on blocks items it d be good to host some of this on tweaked cc mostly so we ve got documentation in one place that said the wiki format is probably the right thing for these sorts of docs i think the best solution would be to write the docs on this repo s wiki and then build it as part of the main website i lied while i guess it is more easy to contribute to i do still probably want these in version control | 0 |
142,988 | 13,047,124,975 | IssuesEvent | 2020-07-29 10:08:42 | awslabs/djl | https://api.github.com/repos/awslabs/djl | closed | Exception during runtime TrainMnist.java | documentation | ## Description
### Expected Behavior
build success
### Error Message
`"C:\Program Files\Java\jdk1.8.0_251\bin\java.exe" "-javaagent:D:\JetBrains\IntelliJ IDEA 2020.1.2\lib\idea_rt.jar=54222:D:\JetBrains\IntelliJ IDEA 2020.1.2\bin" -Dfile.encoding=UTF-8 -classpath "C:\Program Files\Java\jdk1.8.0_251\jre\lib\charsets.jar;C:\Program Files\Java\jdk1.8.0_251\jre\lib\deploy.jar;C:\Program Files\Java\jdk1.8.0_251\jre\lib\ext\access-bridge-64.jar;C:\Program Files\Java\jdk1.8.0_251\jre\lib\ext\cldrdata.jar;C:\Program Files\Java\jdk1.8.0_251\jre\lib\ext\dnsns.jar;C:\Program Files\Java\jdk1.8.0_251\jre\lib\ext\jaccess.jar;C:\Program Files\Java\jdk1.8.0_251\jre\lib\ext\jfxrt.jar;C:\Program Files\Java\jdk1.8.0_251\jre\lib\ext\localedata.jar;C:\Program Files\Java\jdk1.8.0_251\jre\lib\ext\nashorn.jar;C:\Program Files\Java\jdk1.8.0_251\jre\lib\ext\sunec.jar;C:\Program Files\Java\jdk1.8.0_251\jre\lib\ext\sunjce_provider.jar;C:\Program Files\Java\jdk1.8.0_251\jre\lib\ext\sunmscapi.jar;C:\Program Files\Java\jdk1.8.0_251\jre\lib\ext\sunpkcs11.jar;C:\Program Files\Java\jdk1.8.0_251\jre\lib\ext\zipfs.jar;C:\Program Files\Java\jdk1.8.0_251\jre\lib\javaws.jar;C:\Program Files\Java\jdk1.8.0_251\jre\lib\jce.jar;C:\Program Files\Java\jdk1.8.0_251\jre\lib\jfr.jar;C:\Program Files\Java\jdk1.8.0_251\jre\lib\jfxswt.jar;C:\Program Files\Java\jdk1.8.0_251\jre\lib\jsse.jar;C:\Program Files\Java\jdk1.8.0_251\jre\lib\management-agent.jar;C:\Program Files\Java\jdk1.8.0_251\jre\lib\plugin.jar;C:\Program Files\Java\jdk1.8.0_251\jre\lib\resources.jar;C:\Program 
Files\Java\jdk1.8.0_251\jre\lib\rt.jar;D:\demo\target\classes;C:\Users\Administrator\.m2\repository\com\vmware\vijava\5.1\vijava-5.1.jar;C:\Users\Administrator\.m2\repository\dom4j\dom4j\1.6.1\dom4j-1.6.1.jar;C:\Users\Administrator\.m2\repository\xml-apis\xml-apis\1.0.b2\xml-apis-1.0.b2.jar;C:\Users\Administrator\.m2\repository\commons-cli\commons-cli\1.4\commons-cli-1.4.jar;C:\Users\Administrator\.m2\repository\org\apache\logging\log4j\log4j-slf4j-impl\2.12.1\log4j-slf4j-impl-2.12.1.jar;C:\Users\Administrator\.m2\repository\org\slf4j\slf4j-api\1.7.25\slf4j-api-1.7.25.jar;C:\Users\Administrator\.m2\repository\org\apache\logging\log4j\log4j-api\2.12.1\log4j-api-2.12.1.jar;C:\Users\Administrator\.m2\repository\org\apache\logging\log4j\log4j-core\2.12.1\log4j-core-2.12.1.jar;C:\Users\Administrator\.m2\repository\com\google\code\gson\gson\2.8.5\gson-2.8.5.jar;C:\Users\Administrator\.m2\repository\ai\djl\api\0.6.0\api-0.6.0.jar;C:\Users\Administrator\.m2\repository\net\java\dev\jna\jna\5.3.0\jna-5.3.0.jar;C:\Users\Administrator\.m2\repository\org\apache\commons\commons-compress\1.20\commons-compress-1.20.jar;C:\Users\Administrator\.m2\repository\ai\djl\basicdataset\0.6.0\basicdataset-0.6.0.jar;C:\Users\Administrator\.m2\repository\ai\djl\model-zoo\0.6.0\model-zoo-0.6.0.jar;C:\Users\Administrator\.m2\repository\ai\djl\mxnet\mxnet-model-zoo\0.6.0\mxnet-model-zoo-0.6.0.jar;C:\Users\Administrator\.m2\repository\ai\djl\mxnet\mxnet-engine\0.6.0\mxnet-engine-0.6.0.jar;C:\Users\Administrator\.m2\repository\ai\djl\mxnet\mxnet-native-auto\1.7.0-b\mxnet-native-auto-1.7.0-b.jar" com.zhaowei.training.TrainMnist
[INFO ] - Training on: cpu().
[INFO ] - Load MXNet Engine Version 1.7.0 in 0.211 ms.
Training: 17% |███████ | Accuracy: 0.86, SoftmaxCrossEntropyLoss: 0.50, speed: 1416.17 items/sec[INFO ] - train P50: 23.255 ms, P90: 30.021 ms
[INFO ] - forward P50: 0.874 ms, P90: 1.021 ms
[INFO ] - training-metrics P50: 0.027 ms, P90: 0.035 ms
[INFO ] - backward P50: 1.305 ms, P90: 1.576 ms
[INFO ] - step P50: 1.676 ms, P90: 2.138 ms
Exception in thread "main" ai.djl.engine.EngineException: MXNet engine call failed: MXNetError: can't alloc
at ai.djl.mxnet.jna.JnaUtils.checkCall(JnaUtils.java:1788)
at ai.djl.mxnet.jna.JnaUtils.syncCopyToCPU(JnaUtils.java:473)
at ai.djl.mxnet.engine.MxNDArray.toByteBuffer(MxNDArray.java:294)
at ai.djl.ndarray.NDArray.toLongArray(NDArray.java:300)
at ai.djl.ndarray.NDArray.getLong(NDArray.java:558)
at ai.djl.training.evaluator.AbstractAccuracy.lambda$updateAccumulator$1(AbstractAccuracy.java:85)
at java.util.concurrent.ConcurrentHashMap.compute(ConcurrentHashMap.java:1877)
at ai.djl.training.evaluator.AbstractAccuracy.updateAccumulator(AbstractAccuracy.java:85)
at ai.djl.training.listener.EvaluatorTrainingListener.updateEvaluators(EvaluatorTrainingListener.java:153)
at ai.djl.training.listener.EvaluatorTrainingListener.onTrainingBatch(EvaluatorTrainingListener.java:112)
at ai.djl.training.EasyTrain.lambda$trainBatch$1(EasyTrain.java:86)
at java.util.ArrayList.forEach(ArrayList.java:1257)
at ai.djl.training.Trainer.notifyListeners(Trainer.java:249)
at ai.djl.training.EasyTrain.trainBatch(EasyTrain.java:86)
at ai.djl.training.EasyTrain.fit(EasyTrain.java:39)
at com.zhaowei.training.TrainMnist.runExample(TrainMnist.java:84)
at com.zhaowei.training.TrainMnist.main(TrainMnist.java:49)
Suppressed: java.lang.NullPointerException
at com.zhaowei.training.TrainMnist.lambda$setupTrainingConfig$0(TrainMnist.java:98)
at ai.djl.training.listener.CheckpointsTrainingListener.saveModel(CheckpointsTrainingListener.java:144)
at ai.djl.training.listener.CheckpointsTrainingListener.onTrainingEnd(CheckpointsTrainingListener.java:102)
at ai.djl.training.Trainer.lambda$close$2(Trainer.java:295)
at java.util.ArrayList.forEach(ArrayList.java:1257)
at ai.djl.training.Trainer.notifyListeners(Trainer.java:249)
at ai.djl.training.Trainer.close(Trainer.java:295)
at com.zhaowei.training.TrainMnist.runExample(TrainMnist.java:87)
... 1 more
Suppressed: ai.djl.engine.EngineException: MXNet engine call failed: MXNetError: can't alloc
at ai.djl.mxnet.jna.JnaUtils.checkCall(JnaUtils.java:1788)
at ai.djl.mxnet.jna.JnaUtils.waitAll(JnaUtils.java:466)
at ai.djl.mxnet.engine.MxModel.close(MxModel.java:176)
at com.zhaowei.training.TrainMnist.runExample(TrainMnist.java:88)
... 1 more
Process finished with exit code 1
`
## Environment Info
JDK 8
Windows 10 X64 CPU I5 8G
Maven Compile
`<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.example</groupId>
<artifactId>demo</artifactId>
<version>0.0.1-SNAPSHOT</version>
<packaging>jar</packaging>
<name>demo</name>
<description>Demo project for Spring Boot</description>
<properties>
<java.version>8</java.version>
<maven.compiler.source>1.8</maven.compiler.source>
<maven.compiler.target>1.8</maven.compiler.target>
<djl.version>0.6.0</djl.version>
</properties>
<repositories>
<repository>
<id>djl.ai</id>
<url>https://oss.sonatype.org/content/repositories/snapshots/</url>
</repository>
</repositories>
<dependencies>
<dependency>
<groupId>com.vmware</groupId>
<artifactId>vijava</artifactId>
<version>5.1</version>
</dependency>
<dependency>
<groupId>commons-cli</groupId>
<artifactId>commons-cli</artifactId>
<version>1.4</version>
</dependency>
<dependency>
<groupId>org.apache.logging.log4j</groupId>
<artifactId>log4j-slf4j-impl</artifactId>
<version>2.12.1</version>
</dependency>
<dependency>
<groupId>com.google.code.gson</groupId>
<artifactId>gson</artifactId>
<version>2.8.5</version>
</dependency>
<dependency>
<groupId>ai.djl</groupId>
<artifactId>api</artifactId>
<version>${djl.version}</version>
</dependency>
<dependency>
<groupId>ai.djl</groupId>
<artifactId>basicdataset</artifactId>
<version>${djl.version}</version>
</dependency>
<dependency>
<groupId>ai.djl</groupId>
<artifactId>model-zoo</artifactId>
<version>${djl.version}</version>
</dependency>
<dependency>
<groupId>ai.djl.mxnet</groupId>
<artifactId>mxnet-model-zoo</artifactId>
<version>${djl.version}</version>
</dependency>
<dependency>
<groupId>ai.djl.mxnet</groupId>
<artifactId>mxnet-engine</artifactId>
<version>${djl.version}</version>
</dependency>
<dependency>
<groupId>ai.djl.mxnet</groupId>
<artifactId>mxnet-native-auto</artifactId>
<version>1.7.0-b</version>
<scope>runtime</scope>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
</plugin>
</plugins>
</build>
</project>`
`
package com.zhaowei.training;
import ai.djl.Device;
import ai.djl.Model;
import ai.djl.basicdataset.Mnist;
import ai.djl.basicmodelzoo.basic.Mlp;
import com.zhaowei.training.util.Arguments;
import ai.djl.metric.Metrics;
import ai.djl.ndarray.types.Shape;
import ai.djl.nn.Block;
import ai.djl.training.DefaultTrainingConfig;
import ai.djl.training.EasyTrain;
import ai.djl.training.Trainer;
import ai.djl.training.TrainingResult;
import ai.djl.training.dataset.Dataset;
import ai.djl.training.dataset.RandomAccessDataset;
import ai.djl.training.evaluator.Accuracy;
import ai.djl.training.listener.CheckpointsTrainingListener;
import ai.djl.training.listener.TrainingListener;
import ai.djl.training.loss.Loss;
import ai.djl.training.util.ProgressBar;
import java.io.IOException;
import org.apache.commons.cli.ParseException;
public final class TrainMnist {
private TrainMnist() {}
public static void main(String[] args) throws IOException, ParseException {
TrainMnist.runExample(args);
}
public static TrainingResult runExample(String[] args) throws IOException, ParseException {
Arguments arguments = Arguments.parseArgs(args);
// Construct neural network
Block block =
new Mlp(
Mnist.IMAGE_HEIGHT * Mnist.IMAGE_WIDTH,
Mnist.NUM_CLASSES,
new int[] {128, 64});
try (Model model = Model.newInstance("mlp")) {
model.setBlock(block);
// get training and validation dataset
RandomAccessDataset trainingSet = getDataset(Dataset.Usage.TRAIN, arguments);
RandomAccessDataset validateSet = getDataset(Dataset.Usage.TEST, arguments);
// setup training configuration
DefaultTrainingConfig config = setupTrainingConfig(arguments);
try (Trainer trainer = model.newTrainer(config)) {
trainer.setMetrics(new Metrics());
/*
* MNIST is 28x28 grayscale image and pre processed into 28 * 28 NDArray.
* 1st axis is batch axis, we can use 1 for initialization.
*/
Shape inputShape = new Shape(1, Mnist.IMAGE_HEIGHT * Mnist.IMAGE_WIDTH);
// initialize trainer with proper input shape
trainer.initialize(inputShape);
EasyTrain.fit(trainer, arguments.getEpoch(), trainingSet, validateSet);
return trainer.getTrainingResult();
}
}
}
private static DefaultTrainingConfig setupTrainingConfig(Arguments arguments) {
String outputDir = arguments.getOutputDir();
CheckpointsTrainingListener listener = new CheckpointsTrainingListener(outputDir);
listener.setSaveModelCallback(
trainer -> {
TrainingResult result = trainer.getTrainingResult();
Model model = trainer.getModel();
float accuracy = result.getValidateEvaluation("Accuracy");
model.setProperty("Accuracy", String.format("%.5f", accuracy));
model.setProperty("Loss", String.format("%.5f", result.getValidateLoss()));
});
return new DefaultTrainingConfig(Loss.softmaxCrossEntropyLoss())
.addEvaluator(new Accuracy())
.optDevices(Device.getDevices(arguments.getMaxGpus()))
.addTrainingListeners(TrainingListener.Defaults.logging(outputDir))
.addTrainingListeners(listener);
}
private static RandomAccessDataset getDataset(Dataset.Usage usage, Arguments arguments)
throws IOException {
Mnist mnist =
Mnist.builder()
.optUsage(usage)
.setSampling(arguments.getBatchSize(), true)
.optLimit(arguments.getLimit())
.build();
mnist.prepare(new ProgressBar());
return mnist;
}
}
`
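Two separate things appear to go wrong in the log above: the primary `MXNetError: can't alloc` looks like the native engine running out of memory on the 8 GB CPU machine, and the suppressed `NullPointerException` at `TrainMnist.java:98` comes from the save-model callback formatting validation metrics that were never produced because training aborted. The defensive shape of that callback, sketched in Python for brevity (names here are illustrative, not DJL's API):

```python
def save_model_properties(result: dict) -> dict:
    """Format only the metrics that actually exist; an aborted run has none."""
    props = {}
    accuracy = result.get("validate_accuracy")  # None when no epoch completed
    if accuracy is not None:
        props["Accuracy"] = f"{accuracy:.5f}"
    loss = result.get("validate_loss")
    if loss is not None:
        props["Loss"] = f"{loss:.5f}"
    return props

print(save_model_properties({}))  # {} -- no crash on an aborted run
print(save_model_properties({"validate_accuracy": 0.8642, "validate_loss": 0.5013}))
```

In the Java callback, the equivalent guard is to null-check the boxed value returned for `"Accuracy"` before unboxing it into a `float`, so the listener does not throw a second exception while the training failure is being reported.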
| 1.0 | Exception during runtime TrainMnist.java - ## Description
### Expected Behavior
build success
### Error Message
`"C:\Program Files\Java\jdk1.8.0_251\bin\java.exe" "-javaagent:D:\JetBrains\IntelliJ IDEA 2020.1.2\lib\idea_rt.jar=54222:D:\JetBrains\IntelliJ IDEA 2020.1.2\bin" -Dfile.encoding=UTF-8 -classpath "C:\Program Files\Java\jdk1.8.0_251\jre\lib\charsets.jar;C:\Program Files\Java\jdk1.8.0_251\jre\lib\deploy.jar;C:\Program Files\Java\jdk1.8.0_251\jre\lib\ext\access-bridge-64.jar;C:\Program Files\Java\jdk1.8.0_251\jre\lib\ext\cldrdata.jar;C:\Program Files\Java\jdk1.8.0_251\jre\lib\ext\dnsns.jar;C:\Program Files\Java\jdk1.8.0_251\jre\lib\ext\jaccess.jar;C:\Program Files\Java\jdk1.8.0_251\jre\lib\ext\jfxrt.jar;C:\Program Files\Java\jdk1.8.0_251\jre\lib\ext\localedata.jar;C:\Program Files\Java\jdk1.8.0_251\jre\lib\ext\nashorn.jar;C:\Program Files\Java\jdk1.8.0_251\jre\lib\ext\sunec.jar;C:\Program Files\Java\jdk1.8.0_251\jre\lib\ext\sunjce_provider.jar;C:\Program Files\Java\jdk1.8.0_251\jre\lib\ext\sunmscapi.jar;C:\Program Files\Java\jdk1.8.0_251\jre\lib\ext\sunpkcs11.jar;C:\Program Files\Java\jdk1.8.0_251\jre\lib\ext\zipfs.jar;C:\Program Files\Java\jdk1.8.0_251\jre\lib\javaws.jar;C:\Program Files\Java\jdk1.8.0_251\jre\lib\jce.jar;C:\Program Files\Java\jdk1.8.0_251\jre\lib\jfr.jar;C:\Program Files\Java\jdk1.8.0_251\jre\lib\jfxswt.jar;C:\Program Files\Java\jdk1.8.0_251\jre\lib\jsse.jar;C:\Program Files\Java\jdk1.8.0_251\jre\lib\management-agent.jar;C:\Program Files\Java\jdk1.8.0_251\jre\lib\plugin.jar;C:\Program Files\Java\jdk1.8.0_251\jre\lib\resources.jar;C:\Program 
Files\Java\jdk1.8.0_251\jre\lib\rt.jar;D:\demo\target\classes;C:\Users\Administrator\.m2\repository\com\vmware\vijava\5.1\vijava-5.1.jar;C:\Users\Administrator\.m2\repository\dom4j\dom4j\1.6.1\dom4j-1.6.1.jar;C:\Users\Administrator\.m2\repository\xml-apis\xml-apis\1.0.b2\xml-apis-1.0.b2.jar;C:\Users\Administrator\.m2\repository\commons-cli\commons-cli\1.4\commons-cli-1.4.jar;C:\Users\Administrator\.m2\repository\org\apache\logging\log4j\log4j-slf4j-impl\2.12.1\log4j-slf4j-impl-2.12.1.jar;C:\Users\Administrator\.m2\repository\org\slf4j\slf4j-api\1.7.25\slf4j-api-1.7.25.jar;C:\Users\Administrator\.m2\repository\org\apache\logging\log4j\log4j-api\2.12.1\log4j-api-2.12.1.jar;C:\Users\Administrator\.m2\repository\org\apache\logging\log4j\log4j-core\2.12.1\log4j-core-2.12.1.jar;C:\Users\Administrator\.m2\repository\com\google\code\gson\gson\2.8.5\gson-2.8.5.jar;C:\Users\Administrator\.m2\repository\ai\djl\api\0.6.0\api-0.6.0.jar;C:\Users\Administrator\.m2\repository\net\java\dev\jna\jna\5.3.0\jna-5.3.0.jar;C:\Users\Administrator\.m2\repository\org\apache\commons\commons-compress\1.20\commons-compress-1.20.jar;C:\Users\Administrator\.m2\repository\ai\djl\basicdataset\0.6.0\basicdataset-0.6.0.jar;C:\Users\Administrator\.m2\repository\ai\djl\model-zoo\0.6.0\model-zoo-0.6.0.jar;C:\Users\Administrator\.m2\repository\ai\djl\mxnet\mxnet-model-zoo\0.6.0\mxnet-model-zoo-0.6.0.jar;C:\Users\Administrator\.m2\repository\ai\djl\mxnet\mxnet-engine\0.6.0\mxnet-engine-0.6.0.jar;C:\Users\Administrator\.m2\repository\ai\djl\mxnet\mxnet-native-auto\1.7.0-b\mxnet-native-auto-1.7.0-b.jar" com.zhaowei.training.TrainMnist
[INFO ] - Training on: cpu().
[INFO ] - Load MXNet Engine Version 1.7.0 in 0.211 ms.
Training: 17% |███████ | Accuracy: 0.86, SoftmaxCrossEntropyLoss: 0.50, speed: 1416.17 items/sec[INFO ] - train P50: 23.255 ms, P90: 30.021 ms
[INFO ] - forward P50: 0.874 ms, P90: 1.021 ms
[INFO ] - training-metrics P50: 0.027 ms, P90: 0.035 ms
[INFO ] - backward P50: 1.305 ms, P90: 1.576 ms
[INFO ] - step P50: 1.676 ms, P90: 2.138 ms
Exception in thread "main" ai.djl.engine.EngineException: MXNet engine call failed: MXNetError: can't alloc
at ai.djl.mxnet.jna.JnaUtils.checkCall(JnaUtils.java:1788)
at ai.djl.mxnet.jna.JnaUtils.syncCopyToCPU(JnaUtils.java:473)
at ai.djl.mxnet.engine.MxNDArray.toByteBuffer(MxNDArray.java:294)
at ai.djl.ndarray.NDArray.toLongArray(NDArray.java:300)
at ai.djl.ndarray.NDArray.getLong(NDArray.java:558)
at ai.djl.training.evaluator.AbstractAccuracy.lambda$updateAccumulator$1(AbstractAccuracy.java:85)
at java.util.concurrent.ConcurrentHashMap.compute(ConcurrentHashMap.java:1877)
at ai.djl.training.evaluator.AbstractAccuracy.updateAccumulator(AbstractAccuracy.java:85)
at ai.djl.training.listener.EvaluatorTrainingListener.updateEvaluators(EvaluatorTrainingListener.java:153)
at ai.djl.training.listener.EvaluatorTrainingListener.onTrainingBatch(EvaluatorTrainingListener.java:112)
at ai.djl.training.EasyTrain.lambda$trainBatch$1(EasyTrain.java:86)
at java.util.ArrayList.forEach(ArrayList.java:1257)
at ai.djl.training.Trainer.notifyListeners(Trainer.java:249)
at ai.djl.training.EasyTrain.trainBatch(EasyTrain.java:86)
at ai.djl.training.EasyTrain.fit(EasyTrain.java:39)
at com.zhaowei.training.TrainMnist.runExample(TrainMnist.java:84)
at com.zhaowei.training.TrainMnist.main(TrainMnist.java:49)
Suppressed: java.lang.NullPointerException
at com.zhaowei.training.TrainMnist.lambda$setupTrainingConfig$0(TrainMnist.java:98)
at ai.djl.training.listener.CheckpointsTrainingListener.saveModel(CheckpointsTrainingListener.java:144)
at ai.djl.training.listener.CheckpointsTrainingListener.onTrainingEnd(CheckpointsTrainingListener.java:102)
at ai.djl.training.Trainer.lambda$close$2(Trainer.java:295)
at java.util.ArrayList.forEach(ArrayList.java:1257)
at ai.djl.training.Trainer.notifyListeners(Trainer.java:249)
at ai.djl.training.Trainer.close(Trainer.java:295)
at com.zhaowei.training.TrainMnist.runExample(TrainMnist.java:87)
... 1 more
Suppressed: ai.djl.engine.EngineException: MXNet engine call failed: MXNetError: can't alloc
at ai.djl.mxnet.jna.JnaUtils.checkCall(JnaUtils.java:1788)
at ai.djl.mxnet.jna.JnaUtils.waitAll(JnaUtils.java:466)
at ai.djl.mxnet.engine.MxModel.close(MxModel.java:176)
at com.zhaowei.training.TrainMnist.runExample(TrainMnist.java:88)
... 1 more
Process finished with exit code 1
`
## Environment Info
JDK 8
Windows 10 X64 CPU I5 8G
Maven Compile
`<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.example</groupId>
<artifactId>demo</artifactId>
<version>0.0.1-SNAPSHOT</version>
<packaging>jar</packaging>
<name>demo</name>
<description>Demo project for Spring Boot</description>
<properties>
<java.version>8</java.version>
<maven.compiler.source>1.8</maven.compiler.source>
<maven.compiler.target>1.8</maven.compiler.target>
<djl.version>0.6.0</djl.version>
</properties>
<repositories>
<repository>
<id>djl.ai</id>
<url>https://oss.sonatype.org/content/repositories/snapshots/</url>
</repository>
</repositories>
<dependencies>
<dependency>
<groupId>com.vmware</groupId>
<artifactId>vijava</artifactId>
<version>5.1</version>
</dependency>
<dependency>
<groupId>commons-cli</groupId>
<artifactId>commons-cli</artifactId>
<version>1.4</version>
</dependency>
<dependency>
<groupId>org.apache.logging.log4j</groupId>
<artifactId>log4j-slf4j-impl</artifactId>
<version>2.12.1</version>
</dependency>
<dependency>
<groupId>com.google.code.gson</groupId>
<artifactId>gson</artifactId>
<version>2.8.5</version>
</dependency>
<dependency>
<groupId>ai.djl</groupId>
<artifactId>api</artifactId>
<version>${djl.version}</version>
</dependency>
<dependency>
<groupId>ai.djl</groupId>
<artifactId>basicdataset</artifactId>
<version>${djl.version}</version>
</dependency>
<dependency>
<groupId>ai.djl</groupId>
<artifactId>model-zoo</artifactId>
<version>${djl.version}</version>
</dependency>
<dependency>
<groupId>ai.djl.mxnet</groupId>
<artifactId>mxnet-model-zoo</artifactId>
<version>${djl.version}</version>
</dependency>
<dependency>
<groupId>ai.djl.mxnet</groupId>
<artifactId>mxnet-engine</artifactId>
<version>${djl.version}</version>
</dependency>
<dependency>
<groupId>ai.djl.mxnet</groupId>
<artifactId>mxnet-native-auto</artifactId>
<version>1.7.0-b</version>
<scope>runtime</scope>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
</plugin>
</plugins>
</build>
</project>`
```java
package com.zhaowei.training;

import ai.djl.Device;
import ai.djl.Model;
import ai.djl.basicdataset.Mnist;
import ai.djl.basicmodelzoo.basic.Mlp;
import com.zhaowei.training.util.Arguments;
import ai.djl.metric.Metrics;
import ai.djl.ndarray.types.Shape;
import ai.djl.nn.Block;
import ai.djl.training.DefaultTrainingConfig;
import ai.djl.training.EasyTrain;
import ai.djl.training.Trainer;
import ai.djl.training.TrainingResult;
import ai.djl.training.dataset.Dataset;
import ai.djl.training.dataset.RandomAccessDataset;
import ai.djl.training.evaluator.Accuracy;
import ai.djl.training.listener.CheckpointsTrainingListener;
import ai.djl.training.listener.TrainingListener;
import ai.djl.training.loss.Loss;
import ai.djl.training.util.ProgressBar;
import java.io.IOException;
import org.apache.commons.cli.ParseException;

public final class TrainMnist {

    private TrainMnist() {}

    public static void main(String[] args) throws IOException, ParseException {
        TrainMnist.runExample(args);
    }

    public static TrainingResult runExample(String[] args) throws IOException, ParseException {
        Arguments arguments = Arguments.parseArgs(args);
        // Construct neural network
        Block block =
                new Mlp(
                        Mnist.IMAGE_HEIGHT * Mnist.IMAGE_WIDTH,
                        Mnist.NUM_CLASSES,
                        new int[] {128, 64});

        try (Model model = Model.newInstance("mlp")) {
            model.setBlock(block);

            // get training and validation dataset
            RandomAccessDataset trainingSet = getDataset(Dataset.Usage.TRAIN, arguments);
            RandomAccessDataset validateSet = getDataset(Dataset.Usage.TEST, arguments);

            // setup training configuration
            DefaultTrainingConfig config = setupTrainingConfig(arguments);

            try (Trainer trainer = model.newTrainer(config)) {
                trainer.setMetrics(new Metrics());

                /*
                 * MNIST is 28x28 grayscale image and pre processed into 28 * 28 NDArray.
                 * 1st axis is batch axis, we can use 1 for initialization.
                 */
                Shape inputShape = new Shape(1, Mnist.IMAGE_HEIGHT * Mnist.IMAGE_WIDTH);

                // initialize trainer with proper input shape
                trainer.initialize(inputShape);

                EasyTrain.fit(trainer, arguments.getEpoch(), trainingSet, validateSet);

                return trainer.getTrainingResult();
            }
        }
    }

    private static DefaultTrainingConfig setupTrainingConfig(Arguments arguments) {
        String outputDir = arguments.getOutputDir();
        CheckpointsTrainingListener listener = new CheckpointsTrainingListener(outputDir);
        listener.setSaveModelCallback(
                trainer -> {
                    TrainingResult result = trainer.getTrainingResult();
                    Model model = trainer.getModel();
                    float accuracy = result.getValidateEvaluation("Accuracy");
                    model.setProperty("Accuracy", String.format("%.5f", accuracy));
                    model.setProperty("Loss", String.format("%.5f", result.getValidateLoss()));
                });
        return new DefaultTrainingConfig(Loss.softmaxCrossEntropyLoss())
                .addEvaluator(new Accuracy())
                .optDevices(Device.getDevices(arguments.getMaxGpus()))
                .addTrainingListeners(TrainingListener.Defaults.logging(outputDir))
                .addTrainingListeners(listener);
    }

    private static RandomAccessDataset getDataset(Dataset.Usage usage, Arguments arguments)
            throws IOException {
        Mnist mnist =
                Mnist.builder()
                        .optUsage(usage)
                        .setSampling(arguments.getBatchSize(), true)
                        .optLimit(arguments.getLimit())
                        .build();
        mnist.prepare(new ProgressBar());
        return mnist;
    }
}
```
| non_main | 0
4,810 | 24,768,886,871 | IssuesEvent | 2022-10-22 22:18:35 | backdrop-ops/contrib | https://api.github.com/repos/backdrop-ops/contrib | closed | Contrib Group Application: Michael Bagnall (ElusiveMind) | Maintainer application | **Please indicate how you intend to help the Backdrop community by joining this group**
* Option #1: I would like to contribute a project
* Option #2: I would like to maintain a project, but have nothing to contribute at this time
* Option #3: I would like to update documentation and/or triage issue queues
Option: 1
## Based on your selection above, please provide the following information:
** Tweet Feed Module - My Contrib Module from Drupal 7 **
## (option #1) Please note these 3 requirements for new contrib projects:
- [x] Include a README.md file containing license and maintainer information.
You can use this example: https://raw.githubusercontent.com/backdrop-ops/contrib/master/examples/README.md
- [x] Include a LICENSE.txt file.
You can use this example: https://raw.githubusercontent.com/backdrop-ops/contrib/master/examples/LICENSE.txt.
- [x] If porting a Drupal 7 project, Maintain the Git history from Drupal.
GitHub Link To Backdrop Port: https://github.com/ElusiveMind/tweet_feed/tree/1.x-1.x
**If you have chosen option #2 or #1 above, do you agree to the [Backdrop Contributed Project Agreement]
YES
| True | main | 1
806,958 | 29,929,123,905 | IssuesEvent | 2023-06-22 08:13:37 | threefoldfoundation/www_threefold_io | https://api.github.com/repos/threefoldfoundation/www_threefold_io | closed | Let's get updated threefold.io live asap | priority_major | **Notes**
- Work in [branch 3.10](https://github.com/threefoldfoundation/www_threefold_io/tree/3.10.0)
- Can be viewed locally or on www3.threefold.io when changes are pushed
- Let's push something to production by the end of the day
- See assets below
**Assets**
https://drive.google.com/drive/folders/1ye3PjDj8XVqrIYrDaFoRyUrBvFxpOtj_?usp=drive_link
**CSS**
- [x] H1 should be Green color (see reference image of home page above, match that color)
- [x] H2 should be Black
- [x] H3 also same green as H1
- [x] Normal text should be dark grey (see reference image of home page again)
**All Pages: Headers**
- [x] As per reference images, please make sure all home pages have the same background graphic, see the assets folder. You can keep the circular images now being used in the header. Only background image needs to be created / added.
**Home Page**
- [x] Keep "Why Do We Need" the same for now
- [x] Evolving the Internet should get updated based on reference image for home page
- [x] Live & Operational block should be converted into black, white, and grey as per reference image
- [x] "The ThreeFold Grid is community-owned" should be updated based on reference image
- [x] Keep "Support the Project"
**About Page**
- [x] Please follow same directions from above. Use alternating backgrounds between blocks, taking from the image assets shared above. You may zoom in, rotate, or flip assets to add variations (e.g. the folded checkered background can be used in different ways) and you can also reposition text as you like.
**Expand Page**
- [x] "Join the Revolution" should be in black, white, and grey, same as on the home page.
- [x] Please go black, white, and grey for the "Simple for Everyone" images
- [x] Same for the "Valuable and In Demand" block at the bottom
**Build Page**
- [x] Same directions as the About Page
- [x] Add links to Titles in "Engage with the Community"
**Token Page**
- [x] Same directions as About and Build Page
- [x] Make sure all images towards the bottom of the page are black, white, and grey
**Other Pages**
- [x] Make sure Blog, Newsroom, Support, and Careers are aligned with updated branding (e.g. backgrounds, black white and grey images, etc. | 1.0 | non_main | 0
76,660 | 7,543,200,196 | IssuesEvent | 2018-04-17 14:55:46 | ODM2/ODM2DataSharingPortal | https://api.github.com/repos/ODM2/ODM2DataSharingPortal | closed | Changes to tiles in My sites page | in progress ready for testing tested | Rename the tile as the sampling feature code. The code is generally shorter, so the tiles don't get as big. And it's redundant to have the tile named as the sampling feature name and then to immediately list the same information below.

| 2.0 | non_main | 0
36,247 | 12,404,344,113 | IssuesEvent | 2020-05-21 15:22:57 | jgeraigery/beaker-notebook | https://api.github.com/repos/jgeraigery/beaker-notebook | opened | CVE-2016-7103 (Medium) detected in jquery-ui-1.10.3.js | security vulnerability | ## CVE-2016-7103 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-ui-1.10.3.js</b></p></summary>
<p>A curated set of user interface interactions, effects, widgets, and themes built on top of the jQuery JavaScript Library.</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jqueryui/1.10.3/jquery-ui.js">https://cdnjs.cloudflare.com/ajax/libs/jqueryui/1.10.3/jquery-ui.js</a></p>
<p>Path to dependency file: /tmp/ws-ua_20200521151458_RSRDSO/archiveExtraction_BOEWOR/20200521151458/ws-scm_depth_0/beaker-notebook/data/allDeps/src/vendor/DataTables-1.10.5/examples/api/tabs_and_scrolling.html</p>
<p>Path to vulnerable library: _depth_0/beaker-notebook/data/allDeps/src/vendor/DataTables-1.10.5/examples/api/tabs_and_scrolling.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-ui-1.10.3.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/jgeraigery/beaker-notebook/commit/e74341acf643e87bd21b092c7a9e9f6bb96fa7c4">e74341acf643e87bd21b092c7a9e9f6bb96fa7c4</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Cross-site scripting (XSS) vulnerability in jQuery UI before 1.12.0 might allow remote attackers to inject arbitrary web script or HTML via the closeText parameter of the dialog function.
<p>Publish Date: 2017-03-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-7103>CVE-2016-7103</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2016-7103">https://nvd.nist.gov/vuln/detail/CVE-2016-7103</a></p>
<p>Release Date: 2017-03-15</p>
<p>Fix Resolution: 1.12.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"JavaScript","packageName":"jqueryui","packageVersion":"1.10.3","isTransitiveDependency":false,"dependencyTree":"jqueryui:1.10.3","isMinimumFixVersionAvailable":true,"minimumFixVersion":"1.12.0"}],"vulnerabilityIdentifier":"CVE-2016-7103","vulnerabilityDetails":"Cross-site scripting (XSS) vulnerability in jQuery UI before 1.12.0 might allow remote attackers to inject arbitrary web script or HTML via the closeText parameter of the dialog function.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-7103","cvss3Severity":"medium","cvss3Score":"6.1","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> --> | True | non_main | 0
5,362 | 26,979,556,389 | IssuesEvent | 2023-02-09 12:06:33 | backdrop-ops/contrib | https://api.github.com/repos/backdrop-ops/contrib | closed | Application to join: dBonde (cacheexclude module) | Port in progress Maintainer application | I have ported the Drupal 7 Cache Exclude module and just wanted to contribute it somehow but I am unsure how.
I have attached the zipped module file.
[cacheexclude.zip](https://github.com/backdrop-ops/contrib/files/4803082/cacheexclude.zip)
| True | main | 1
53,238 | 6,712,828,777 | IssuesEvent | 2017-10-13 10:55:31 | tenders-exposed/elvis-ember | https://api.github.com/repos/tenders-exposed/elvis-ember | closed | sidebar entity circles dont display well with large numbers inside | bug design help wanted to review | if a "sum" is selected as a node size or connection width in the query builder, the numbers inside circles and squares introduced in the new sidebar design do not count with numbers with more than 2 digits. so sums like 167,7 (millions €) do not display correctly and mess up the look. | 1.0 | non_main | 0
3,454 | 13,216,424,954 | IssuesEvent | 2020-08-17 03:38:46 | ansible/ansible | https://api.github.com/repos/ansible/ansible | closed | LLDP module fails with a 'str' object does not support item assignment | affects_2.2 bot_closed bug collection collection:community.general module needs_collection_redirect needs_maintainer net_tools support:community traceback | <!---
Verify first that your issue/request is not already reported on GitHub.
Also test if the latest release, and master branch are affected too.
-->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the module/plugin/task/feature -->
network/lldp.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.2.2.0
config file =
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Ubuntu 16.04
-->
##### SUMMARY
<!--- Explain the problem briefly -->
LLDP module fails with a 'str' object does not support item assignment in some cases. Occasionally when looping through the lldpctl output, it runs across duplicate variables. When it tries to assign the new variable a previously assigned name, it throws an exception. I was able to get around this by using a try except.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem, using a minimal test-case.
For new features, show how the feature would be used.
-->
Full command output here for debugging:
https://gist.github.com/antonym/01e1663c8790c65063ba2d3177fd9457
Should be able to put that into a file, change up the module to cat that file out and you'll see the failure.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
---
- name: Ensure LLDP started
service:
name: lldpd
state: started
- name: Gather LLDP facts about server
lldp:
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
Expected LLDP info to be captured.
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: TypeError: 'str' object does not support item assignment
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_9j3yPm/ansible_module_lldp.py\", line 94, in <module>\n main()\n File \"/tmp/ansible_9j3yPm/ansible_module_lldp.py\", line 83, in main\n lldp_output = gather_lldp()\n File \"/tmp/ansible_9j3yPm/ansible_module_lldp.py\", line 76, in gather_lldp\n current_dict[final] = value\nTypeError: 'str' object does not support item assignment\n", "module_stdout": "", "msg": "MODULE FAILURE"}
<!--- Paste verbatim command output between quotes below -->
```
root@localhost:~# lldpctl -f keyvalue
lldp.eno1.via=LLDP
lldp.eno1.rid=2
lldp.eno1.age=0 day, 03:33:53
lldp.eno1.chassis.mac=54:7f:ee:a5:79:6c
lldp.eno1.chassis.name=servers-lab-1.sat6
lldp.eno1.chassis.descr=Cisco Nexus Operating System (NX-OS) Software 6.0(2)U5(1)
TAC support: http://www.cisco.com/tac
Copyright (c) 2002-2015, Cisco Systems, Inc. All rights reserved.
lldp.eno1.chassis.mgmt-ip=10.127.5.250
lldp.eno1.chassis.Bridge.enabled=on
lldp.eno1.chassis.Router.enabled=on
lldp.eno1.port.ifname=Ethernet1/5
lldp.eno1.port.descr=Ethernet1/5
lldp.eno1.vlan.vlan-id=906
lldp.eno1.vlan.pvid=yes
lldp.eno1.unknown-tlvs.unknown-tlv.oui=00,01,42
lldp.eno1.unknown-tlvs.unknown-tlv.subtype=1
lldp.eno1.unknown-tlvs.unknown-tlv.len=1
lldp.eno1.unknown-tlvs.unknown-tlv=01
lldp.eno1.unknown-tlvs.unknown-tlv.oui=00,01,42 <- fails parsing this line because four lines up, same variable
```
| True | LLDP module fails with a 'str' object does not support item assignment - <!---
Verify first that your issue/request is not already reported on GitHub.
Also test if the latest release, and master branch are affected too.
-->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the module/plugin/task/feature -->
network/lldp.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.2.2.0
config file =
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Ubuntu 16.04
-->
##### SUMMARY
<!--- Explain the problem briefly -->
LLDP module fails with a 'str' object does not support item assignment in some cases. Occasionally when looping through the lldpctl output, it runs across duplicate variables. When it tries to assign the new variable a previously assigned name, it throws an exception. I was able to get around this by using a try except.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem, using a minimal test-case.
For new features, show how the feature would be used.
-->
Full command output here for debugging:
https://gist.github.com/antonym/01e1663c8790c65063ba2d3177fd9457
Should be able to put that into a file, change up the module to cat that file out and you'll see the failure.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
---
- name: Ensure LLDP started
service:
name: lldpd
state: started
- name: Gather LLDP facts about server
lldp:
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
Expected LLDP info to be captured.
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: TypeError: 'str' object does not support item assignment
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_9j3yPm/ansible_module_lldp.py\", line 94, in <module>\n main()\n File \"/tmp/ansible_9j3yPm/ansible_module_lldp.py\", line 83, in main\n lldp_output = gather_lldp()\n File \"/tmp/ansible_9j3yPm/ansible_module_lldp.py\", line 76, in gather_lldp\n current_dict[final] = value\nTypeError: 'str' object does not support item assignment\n", "module_stdout": "", "msg": "MODULE FAILURE"}
<!--- Paste verbatim command output between quotes below -->
```
root@localhost:~# lldpctl -f keyvalue
lldp.eno1.via=LLDP
lldp.eno1.rid=2
lldp.eno1.age=0 day, 03:33:53
lldp.eno1.chassis.mac=54:7f:ee:a5:79:6c
lldp.eno1.chassis.name=servers-lab-1.sat6
lldp.eno1.chassis.descr=Cisco Nexus Operating System (NX-OS) Software 6.0(2)U5(1)
TAC support: http://www.cisco.com/tac
Copyright (c) 2002-2015, Cisco Systems, Inc. All rights reserved.
lldp.eno1.chassis.mgmt-ip=10.127.5.250
lldp.eno1.chassis.Bridge.enabled=on
lldp.eno1.chassis.Router.enabled=on
lldp.eno1.port.ifname=Ethernet1/5
lldp.eno1.port.descr=Ethernet1/5
lldp.eno1.vlan.vlan-id=906
lldp.eno1.vlan.pvid=yes
lldp.eno1.unknown-tlvs.unknown-tlv.oui=00,01,42
lldp.eno1.unknown-tlvs.unknown-tlv.subtype=1
lldp.eno1.unknown-tlvs.unknown-tlv.len=1
lldp.eno1.unknown-tlvs.unknown-tlv=01
lldp.eno1.unknown-tlvs.unknown-tlv.oui=00,01,42 <- fails parsing this line because four lines up, same variable
```
| main | lldp module fails with a str object does not support item assignment verify first that your issue request is not already reported on github also test if the latest release and master branch are affected too issue type bug report component name network lldp py ansible version ansible config file configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment ubuntu summary lldp module fails with a str object does not support item assignment in some cases occasionally when looping through the lldpctl output it runs across duplicate variables when it tries to assign the new variable a previously assigned name it throws an exception i was able to get around this by using a try except steps to reproduce for bugs show exactly how to reproduce the problem using a minimal test case for new features show how the feature would be used full command output here for debugging should be able to put that into a file change up the module to cat that file out and you ll see the failure yaml name ensure lldp started service name lldpd state started name gather lldp facts about server lldp expected results expected lldp info to be captured actual results an exception occurred during task execution to see the full traceback use vvv the error was typeerror str object does not support item assignment fatal failed changed false failed true module stderr traceback most recent call last n file tmp ansible ansible module lldp py line in n main n file tmp ansible ansible module lldp py line in main n lldp output gather lldp n file tmp ansible ansible module lldp py line in gather lldp n current dict value ntypeerror str object does not support item assignment n module stdout msg module failure root localhost lldpctl f keyvalue lldp via lldp lldp rid lldp age day lldp chassis mac ee lldp chassis name servers lab lldp chassis descr cisco nexus operating system 
nx os software tac support copyright c cisco systems inc all rights reserved lldp chassis mgmt ip lldp chassis bridge enabled on lldp chassis router enabled on lldp port ifname lldp port descr lldp vlan vlan id lldp vlan pvid yes lldp unknown tlvs unknown tlv oui lldp unknown tlvs unknown tlv subtype lldp unknown tlvs unknown tlv len lldp unknown tlvs unknown tlv lldp unknown tlvs unknown tlv oui fails parsing this line because four lines up same variable | 1 |
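The traceback in the record above fails at `current_dict[final] = value` because lldpctl can emit the same dotted path both as a leaf (`...unknown-tlv=01`) and as a prefix of deeper keys (`...unknown-tlv.oui=...`), so a naive nested-dict builder eventually tries to index into a string. A duplicate-tolerant parser could be sketched like this (a hypothetical helper, not the actual lldp module code):

```python
def parse_lldp_keyvalue(output):
    """Parse `lldpctl -f keyvalue` output into nested dicts,
    tolerating keys that are both a leaf and a prefix (e.g.
    `...unknown-tlv=01` followed by `...unknown-tlv.oui=...`)."""
    result = {}
    for line in output.splitlines():
        if "=" not in line:
            continue  # simplification: skip continuation lines of multi-line values
        path, _, value = line.partition("=")
        keys = path.split(".")
        node = result
        for key in keys[:-1]:
            child = node.get(key)
            if not isinstance(child, dict):
                # a plain value was stored at this prefix earlier: promote it
                child = {} if child is None else {"value": child}
                node[key] = child
            node = child
        final = keys[-1]
        if isinstance(node.get(final), dict):
            node[final]["value"] = value  # leaf arrives after deeper keys
        else:
            node[final] = value  # a repeated key simply overwrites
    return result
```

Promoting the earlier leaf into a `{'value': ...}` entry keeps both pieces of information instead of raising `TypeError`; an alternative, as the reporter did, is simply to wrap the assignment in a `try`/`except`.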
33,387 | 2,764,615,125 | IssuesEvent | 2015-04-29 16:17:54 | numenta/nupic | https://api.github.com/repos/numenta/nupic | opened | capnp installation failure during pip install [OS X Yosemite] | priority:2 type:bug type:installation | When running:
pip install nupic
During the `pycapnp` install step, the `capnp` library is not found, so `pycapnp` attempts to install it. But the following error occurs:
```
building 'capnp.lib.capnp' extension
creating build/temp.macosx-10.5-x86_64-2.7
creating build/temp.macosx-10.5-x86_64-2.7/capnp
creating build/temp.macosx-10.5-x86_64-2.7/capnp/lib
gcc -fno-strict-aliasing -I/Users/kyoungrok/anaconda/include -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -arch x86_64 -I. -I/Users/kyoungrok/anaconda/include/python2.7 -I/private/var/folders/r2/0598p0bj4s3fd66rdwnp57mr0000gn/T/pip-build-xp8xtv/pycapnp/build/include -c capnp/lib/capnp.cpp -o build/temp.macosx-10.5-x86_64-2.7/capnp/lib/capnp.o --std=c++11
In file included from capnp/lib/capnp.cpp:236:
./capnp/helpers/checkCompiler.h:4:8: warning: "Your compiler supports C++11 but your C++ standard library does not. If your system has libc++ installed (as should be the case on e.g. Mac OSX), try adding -stdlib=libc++ to your CFLAGS (ignore the other warning that says to use CXXFLAGS)." [-W#warnings]
#warning "Your compiler supports C++11 but your C++ standard library does not. If your system has libc++ installed (as should be the case on e.g. Mac OSX), try adding -stdlib=libc++ to your CFLAGS (ignore the other warning that says to use CXXFLAGS)."
^
In file included from capnp/lib/capnp.cpp:236:
In file included from ./capnp/helpers/checkCompiler.h:9:
In file included from /private/var/folders/r2/0598p0bj4s3fd66rdwnp57mr0000gn/T/pip-build-xp8xtv/pycapnp/build/include/capnp/dynamic.h:40:
In file included from /private/var/folders/r2/0598p0bj4s3fd66rdwnp57mr0000gn/T/pip-build-xp8xtv/pycapnp/build/include/capnp/schema.h:33:
In file included from /private/var/folders/r2/0598p0bj4s3fd66rdwnp57mr0000gn/T/pip-build-xp8xtv/pycapnp/build/include/capnp/schema.capnp.h:7:
In file included from /private/var/folders/r2/0598p0bj4s3fd66rdwnp57mr0000gn/T/pip-build-xp8xtv/pycapnp/build/include/capnp/generated-header-support.h:31:
In file included from /private/var/folders/r2/0598p0bj4s3fd66rdwnp57mr0000gn/T/pip-build-xp8xtv/pycapnp/build/include/capnp/layout.h:36:
/private/var/folders/r2/0598p0bj4s3fd66rdwnp57mr0000gn/T/pip-build-xp8xtv/pycapnp/build/include/kj/common.h:54:8: warning: "Your compiler supports C++11 but your C++ standard library does not. If your " "system has libc++ installed (as should be the case on e.g. Mac OSX), try adding " "-stdlib=libc++ to your CXXFLAGS." [-W#warnings]
#warning "Your compiler supports C++11 but your C++ standard library does not. If your "\
^
/private/var/folders/r2/0598p0bj4s3fd66rdwnp57mr0000gn/T/pip-build-xp8xtv/pycapnp/build/include/kj/common.h:78:10: fatal error: 'initializer_list' file not found
#include <initializer_list>
^
2 warnings and 1 error generated.
error: command 'gcc' failed with exit status 1
```
The relevant message is:
> Your compiler supports C++11 but your C++ standard library does not. If your system has libc++ installed (as should be the case on e.g. Mac OSX), try adding -stdlib=libc++ to your CFLAGS (ignore the other warning that says to use CXXFLAGS).
However, I don't think there is a way to add CFLAGS from the `pip` command line interface. | 1.0 | capnp installation failure during pip install [OS X Yosemite] - When running:
pip install nupic
During the `pycapnp` install step, the `capnp` library is not found, so `pycapnp` attempts to install it. But the following error occurs:
```
building 'capnp.lib.capnp' extension
creating build/temp.macosx-10.5-x86_64-2.7
creating build/temp.macosx-10.5-x86_64-2.7/capnp
creating build/temp.macosx-10.5-x86_64-2.7/capnp/lib
gcc -fno-strict-aliasing -I/Users/kyoungrok/anaconda/include -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -arch x86_64 -I. -I/Users/kyoungrok/anaconda/include/python2.7 -I/private/var/folders/r2/0598p0bj4s3fd66rdwnp57mr0000gn/T/pip-build-xp8xtv/pycapnp/build/include -c capnp/lib/capnp.cpp -o build/temp.macosx-10.5-x86_64-2.7/capnp/lib/capnp.o --std=c++11
In file included from capnp/lib/capnp.cpp:236:
./capnp/helpers/checkCompiler.h:4:8: warning: "Your compiler supports C++11 but your C++ standard library does not. If your system has libc++ installed (as should be the case on e.g. Mac OSX), try adding -stdlib=libc++ to your CFLAGS (ignore the other warning that says to use CXXFLAGS)." [-W#warnings]
#warning "Your compiler supports C++11 but your C++ standard library does not. If your system has libc++ installed (as should be the case on e.g. Mac OSX), try adding -stdlib=libc++ to your CFLAGS (ignore the other warning that says to use CXXFLAGS)."
^
In file included from capnp/lib/capnp.cpp:236:
In file included from ./capnp/helpers/checkCompiler.h:9:
In file included from /private/var/folders/r2/0598p0bj4s3fd66rdwnp57mr0000gn/T/pip-build-xp8xtv/pycapnp/build/include/capnp/dynamic.h:40:
In file included from /private/var/folders/r2/0598p0bj4s3fd66rdwnp57mr0000gn/T/pip-build-xp8xtv/pycapnp/build/include/capnp/schema.h:33:
In file included from /private/var/folders/r2/0598p0bj4s3fd66rdwnp57mr0000gn/T/pip-build-xp8xtv/pycapnp/build/include/capnp/schema.capnp.h:7:
In file included from /private/var/folders/r2/0598p0bj4s3fd66rdwnp57mr0000gn/T/pip-build-xp8xtv/pycapnp/build/include/capnp/generated-header-support.h:31:
In file included from /private/var/folders/r2/0598p0bj4s3fd66rdwnp57mr0000gn/T/pip-build-xp8xtv/pycapnp/build/include/capnp/layout.h:36:
/private/var/folders/r2/0598p0bj4s3fd66rdwnp57mr0000gn/T/pip-build-xp8xtv/pycapnp/build/include/kj/common.h:54:8: warning: "Your compiler supports C++11 but your C++ standard library does not. If your " "system has libc++ installed (as should be the case on e.g. Mac OSX), try adding " "-stdlib=libc++ to your CXXFLAGS." [-W#warnings]
#warning "Your compiler supports C++11 but your C++ standard library does not. If your "\
^
/private/var/folders/r2/0598p0bj4s3fd66rdwnp57mr0000gn/T/pip-build-xp8xtv/pycapnp/build/include/kj/common.h:78:10: fatal error: 'initializer_list' file not found
#include <initializer_list>
^
2 warnings and 1 error generated.
error: command 'gcc' failed with exit status 1
```
The relevant message is:
> Your compiler supports C++11 but your C++ standard library does not. If your system has libc++ installed (as should be the case on e.g. Mac OSX), try adding -stdlib=libc++ to your CFLAGS (ignore the other warning that says to use CXXFLAGS).
However, I don't think there is a way to add CFLAGS from the `pip` command line interface. | non_main | capnp installation failure during pip install when running pip install nupic during the pycapnp install step the capnp library is not found so pycapnp attempts to install it but the following error occurs building capnp lib capnp extension creating build temp macosx creating build temp macosx capnp creating build temp macosx capnp lib gcc fno strict aliasing i users kyoungrok anaconda include dndebug g fwrapv wall wstrict prototypes arch i i users kyoungrok anaconda include i private var folders t pip build pycapnp build include c capnp lib capnp cpp o build temp macosx capnp lib capnp o std c in file included from capnp lib capnp cpp capnp helpers checkcompiler h warning your compiler supports c but your c standard library does not if your system has libc installed as should be the case on e g mac osx try adding stdlib libc to your cflags ignore the other warning that says to use cxxflags warning your compiler supports c but your c standard library does not if your system has libc installed as should be the case on e g mac osx try adding stdlib libc to your cflags ignore the other warning that says to use cxxflags in file included from capnp lib capnp cpp in file included from capnp helpers checkcompiler h in file included from private var folders t pip build pycapnp build include capnp dynamic h in file included from private var folders t pip build pycapnp build include capnp schema h in file included from private var folders t pip build pycapnp build include capnp schema capnp h in file included from private var folders t pip build pycapnp build include capnp generated header support h in file included from private var folders t pip build pycapnp build include capnp layout h private var folders t pip build pycapnp build include kj common h warning your compiler supports c but your c standard library does not if your system has libc installed as should be the 
case on e g mac osx try adding stdlib libc to your cxxflags warning your compiler supports c but your c standard library does not if your private var folders t pip build pycapnp build include kj common h fatal error initializer list file not found include warnings and error generated error command gcc failed with exit status the relevant message is your compiler supports c but your c standard library does not if your system has libc installed as should be the case on e g mac osx try adding stdlib libc to your cflags ignore the other warning that says to use cxxflags however i don t think there is a way to add cflags from the pip command line interface | 0 |
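The warnings in the record above already point at the fix (`-stdlib=libc++`), and the reporter is right that pip's command line has no option for compiler flags: they have to be passed through the `CFLAGS`/`CXXFLAGS` environment variables, which the build reads. In a shell that is just `CFLAGS="-stdlib=libc++" pip install pycapnp`; the same idea can be scripted from Python (a sketch with a hypothetical helper name):

```python
import os
import sys

def pip_install_env(package, extra_flags):
    """Return the command and environment for installing `package`
    with extra compiler flags appended to CFLAGS and CXXFLAGS
    (pip exposes no CLI switch for this; the build reads the
    environment variables instead)."""
    env = dict(os.environ)
    for var in ("CFLAGS", "CXXFLAGS"):
        env[var] = (env.get(var, "") + " " + extra_flags).strip()
    cmd = [sys.executable, "-m", "pip", "install", package]
    return cmd, env

cmd, env = pip_install_env("pycapnp", "-stdlib=libc++")
# subprocess.run(cmd, env=env, check=True)  # actually perform the install
```

Appending to any existing value rather than overwriting preserves flags the user may already have exported.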
14,788 | 5,791,330,344 | IssuesEvent | 2017-05-02 05:20:08 | tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow | closed | configure does not work on macOS if sed is GNU sed | stat:awaiting tensorflower type:build/install | ### System information
- **Have I written custom code (as opposed to using a stock example script provided in TensorFlow)**: no
- **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: macOS 10.12.4
- **TensorFlow installed from (source or binary)**: source
- **TensorFlow version (use command below)**: Current head: 3ce228e
### Describe the problem
If sed is GNU sed on macOS, configure fails with `sed: can't read : No such file or directory`.
| 1.0 | configure does not work on macOS if sed is GNU sed - ### System information
- **Have I written custom code (as opposed to using a stock example script provided in TensorFlow)**: no
- **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: macOS 10.12.4
- **TensorFlow installed from (source or binary)**: source
- **TensorFlow version (use command below)**: Current head: 3ce228e
### Describe the problem
If sed is GNU sed on macOS, configure fails with `sed: can't read : No such file or directory`.
| non_main | configure does not work on macos if sed is gnu sed system information have i written custom code as opposed to using a stock example script provided in tensorflow no os platform and distribution e g linux ubuntu macos tensorflow installed from source or binary source tensorflow version use command below current head describe the problem if sed is gnu sed on macos configure fails with sed can t read no such file or directory | 0 |
708 | 4,287,481,785 | IssuesEvent | 2016-07-16 20:05:27 | caskroom/homebrew-cask | https://api.github.com/repos/caskroom/homebrew-cask | closed | ohai command outputting wrong color | awaiting maintainer feedback core | For some reason, the change in #22762 makes all cask `ohai` commands display as white for me - instead of bold black. Reverting this change fixes the problem.
The `ohai` commands issued by homebrew formulas are still the correct color. Just the cask ones. You can see this in action by running `brew cask info zulu` (or any other cask that has "Caveats") and notice the incorrect color of the `Caveats` line. | True | ohai command outputting wrong color - For some reason, the change in #22762 makes all cask `ohai` commands display as white for me - instead of bold black. Reverting this change fixes the problem.
The `ohai` commands issued by homebrew formulas are still the correct color. Just the cask ones. You can see this in action by running `brew cask info zulu` (or any other cask that has "Caveats") and notice the incorrect color of the `Caveats` line. | main | ohai command outputting wrong color for some reason the change in change makes all cask ohai commands displayed as white for me instead of bold black reverting this change fixes the problem the ohai commands issued by homebrew formulas are still the correct color just the cask ones you can see this in action by running brew cask info zulu or any other cask that has caveats and notice the incorrect color of the caveats line | 1 |
5,693 | 29,999,424,890 | IssuesEvent | 2023-06-26 08:19:06 | precice/precice | https://api.github.com/repos/precice/precice | closed | Replace std::vector with std::map for storing meshes and data (and other?) | maintainability | Currently we use `std::vector` very often as a container for storing meshes and data. The mesh or data is then queried via its ID. I think using a `std::map` would be more appropriate for this use case, because we do not have to reserve memory beforehand, keys are still unique by definition of the data structure, and we need fewer checks ([example](https://github.com/precice/precice/blob/8a4f4845f544a6c2a11f45e4b92ebd96874f4456/src/precice/impl/Participant.cpp#L244-L245)).
### Current use of `std::vector`
1. Initialize an `std::vector` as a container:
https://github.com/precice/precice/blob/8a4f4845f544a6c2a11f45e4b92ebd96874f4456/src/precice/impl/Participant.cpp#L29
2. Store something at the corresponding ID:
https://github.com/precice/precice/blob/8a4f4845f544a6c2a11f45e4b92ebd96874f4456/src/precice/impl/Participant.cpp#L95
3. Query via ID:
https://github.com/precice/precice/blob/8a4f4845f544a6c2a11f45e4b92ebd96874f4456/src/precice/impl/Participant.cpp#L242
### Proposed use of `std::map`
I'm currently working on https://github.com/precice/precice/pull/1159, where `_readDataContext` and `_writeDataContext` will be stored in a `std::map`:
https://github.com/precice/precice/blob/8d17471d1fa1c1f42075971d1b398b86c0d28d33/src/precice/impl/Participant.hpp#L298-L300
Previously, a `std::vector` (see also #1164) was used:
https://github.com/precice/precice/blob/8a4f4845f544a6c2a11f45e4b92ebd96874f4456/src/precice/impl/Participant.hpp#L311-L315 | True | Replace std::vector with std::map for storing meshes and data (and other?) - Currently we use `std::vector` very often as a container for storing meshes and data. The mesh or data is then queried via its ID. I think using a `std::map` would be more appropriate for this use case, because we do not have to reserve memory beforehand, keys are still unique by definition of the data structure, and we need fewer checks ([example](https://github.com/precice/precice/blob/8a4f4845f544a6c2a11f45e4b92ebd96874f4456/src/precice/impl/Participant.cpp#L244-L245)).
### Current use of `std::vector`
1. Initialize an `std::vector` as a container:
https://github.com/precice/precice/blob/8a4f4845f544a6c2a11f45e4b92ebd96874f4456/src/precice/impl/Participant.cpp#L29
2. Store something at the corresponding ID:
https://github.com/precice/precice/blob/8a4f4845f544a6c2a11f45e4b92ebd96874f4456/src/precice/impl/Participant.cpp#L95
3. Query via ID:
https://github.com/precice/precice/blob/8a4f4845f544a6c2a11f45e4b92ebd96874f4456/src/precice/impl/Participant.cpp#L242
### Proposed use of `std::map`
I'm currently working on https://github.com/precice/precice/pull/1159, where `_readDataContext` and `_writeDataContext` will be stored in a `std::map`:
https://github.com/precice/precice/blob/8d17471d1fa1c1f42075971d1b398b86c0d28d33/src/precice/impl/Participant.hpp#L298-L300
Previously, a `std::vector` (see also #1164) was used:
https://github.com/precice/precice/blob/8a4f4845f544a6c2a11f45e4b92ebd96874f4456/src/precice/impl/Participant.hpp#L311-L315 | main | replace std vector with std map for storing meshes and data and other currently we use std vector very often as a container for storing meshes and data the mesh or data is then queried via its id i think using a std map would be more appropriate for this use case because we do not have to reserve memory beforehand keys are still unique by definition of the data structure and we need less checks current use of std vector initialize an std vector as a container store something at the corresponding id query via id proposed use of std map i m currently working on where readdatacontext and writedatacontext will be stored in a std map previously a std vector see also was used | 1 |
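The trade-off described in this record is not C++-specific; a short sketch of the ID-keyed pattern, written here in Python for brevity (illustrative only, not preCICE code), shows the properties the issue lists: no up-front sizing, keys unique by construction, and fewer checks at the lookup site.

```python
class DataStorage:
    """ID-keyed storage in the spirit of the proposed std::map:
    no memory reserved ahead of time, duplicate IDs rejected at
    insertion, and unknown IDs fail loudly on lookup (like map::at)."""

    def __init__(self):
        self._contexts = {}  # id -> stored object

    def add(self, data_id, context):
        if data_id in self._contexts:
            raise ValueError(f"duplicate id {data_id}")
        self._contexts[data_id] = context

    def get(self, data_id):
        try:
            return self._contexts[data_id]
        except KeyError:
            raise KeyError(f"unknown id {data_id}") from None
```

With an index-addressed `std::vector`, the same guarantees require sizing the container to the largest ID and validity checks on every access, which is exactly the extra checking the issue wants to remove.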
642,286 | 20,883,483,865 | IssuesEvent | 2022-03-23 00:38:49 | apcountryman/picolibrary | https://api.github.com/repos/apcountryman/picolibrary | closed | Add Microchip facilities namespace | priority-normal status-awaiting_review type-feature | Add Microchip facilities namespace (`::picolibrary::Microchip`).
- [x] The `Microchip` namespace should be defined in the `include/picolibrary/microchip.h`/`source/picolibrary/microchip.cc` header/source file pair | 1.0 | Add Microchip facilities namespace - Add Microchip facilities namespace (`::picolibrary::Microchip`).
- [x] The `Microchip` namespace should be defined in the `include/picolibrary/microchip.h`/`source/picolibrary/microchip.cc` header/source file pair | non_main | add microchip facilities namespace add microchip facilities namespace picolibrary microchip the microchip namespace should be defined in the include picolibrary microchip h source picolibrary microchip cc header source file pair | 0 |
122,886 | 4,846,630,772 | IssuesEvent | 2016-11-10 12:28:32 | bounswe/bounswe2016group7 | https://api.github.com/repos/bounswe/bounswe2016group7 | opened | Create Topic Front End | priority: medium | A form to create a topic must be implemented, and a registered creator shall reach this form via a button or a link on the homepage | 1.0 | Create Topic Front End - A form to create a topic must be implemented, and a registered creator shall reach this form via a button or a link on the homepage | non_main | create topic front end a form to create a topic must be implemented and a registered creator shall reach this form via a button or a link on the homepage | 0
4,768 | 24,554,546,837 | IssuesEvent | 2022-10-12 14:55:41 | centerofci/mathesar | https://api.github.com/repos/centerofci/mathesar | closed | Unable to modify Money cell value | type: bug work: frontend restricted: maintainers status: started | ## Reproduce
I can edit most cells, but when I try to edit an "Items"."Acquisition Price" cell, no request is made to the server and the display value of the cell immediately reverts to its value before I entered edit mode.
| True | Unable to modify Money cell value - ## Reproduce
I can edit most cells, but when I try to edit an "Items"."Acquisition Price" cell, no request is made to the server and the display value of the cell immediately reverts to its value before I entered edit mode.
| main | unable to modify money cell value reproduce i can edit most cells but when i try to edit an items acquisition price cell no request is made to the server and the display value of the cell immediately reverts to its value before i entered edit mode | 1 |
81,289 | 3,588,395,675 | IssuesEvent | 2016-01-31 00:12:45 | docker/swarm | https://api.github.com/repos/docker/swarm | reopened | There should be documentation on how to remove a node from the swarm | area/doc kind/enhancement priority/P2 | Per @abronan's request, opening an issue to track lack of docs on this topic. Node removal is kind of tricky, and users need to be aware of ways it can fail, as well.
see #197 for reference | 1.0 | There should be documentation on how to remove a node from the swarm - Per @abronan's request, opening an issue to track lack of docs on this topic. Node removal is kind of tricky, and users need to be aware of ways it can fail, as well.
see #197 for reference | non_main | there should be documentation on how to remove a node from the swarm per abronan s request opening an issue to track lack of docs on this topic node removal is kind of tricky and users need to be aware of ways it can fail as well see for reference | 0
1,302 | 5,542,078,195 | IssuesEvent | 2017-03-22 14:20:07 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | Volumes do not get dropped when removing the docker container | affects_2.0 bug_report cloud docker waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
_docker module
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.0.1.0
```
##### CONFIGURATION
```
[defaults]
host_key_checking = False
forks = 10
var_compression_level=9
retries=5
fact_caching = jsonfile
fact_caching_connection = ~/.ansible/tmp
```
##### OS / ENVIRONMENT
OS From: OS X, Ubuntu 14.04
OS To: Ubuntu 14.04
##### SUMMARY
<!--- Explain the problem briefly -->
Volumes do not get dropped when removing the docker container
##### STEPS TO REPRODUCE
1. Create the docker container with attached volumes
2. fill the volume
3. drop the container
4. repeat several times
5. `docker volume ls -f dangling=true` will show you the unused volumes
See [Docker manual](https://docs.docker.com/v1.10/engine/userguide/containers/dockervolumes/):
> Note: Docker will not warn you when removing a container without providing the `-v` option to delete its volumes. If you remove containers without using the `-v` option, you may end up with “dangling” volumes; volumes that are no longer referenced by a container. You can use `docker volume ls -f dangling=true` to find dangling volumes, and use `docker volume rm <volume name>` to remove a volume that’s no longer needed.
I spin up the container with something like this:
<!--- Paste example playbooks or commands between quotes below -->
```
- name: Spinning up an {{ persistcomponent }}-platform container
  docker:
    registry: "{{ docker_registry_url }}"
    email: "{{ docker_registry_email }}"
    username: "{{ docker_registry_user }}"
    password: "{{ docker_registry_password }}"
    docker_api_version: "{{ docker_api_version }}"
    image: "{{ docker_registry_url }}/arena/{{ docker_images[persistcomponent] }}:{{ builds['platform'][persistcomponent] }}"
    pull: always
    state: reloaded
    restart_policy: always
    restart_policy_retry: 5
    name: "{{ persistcomponent }}-platform"
    net: bridge
    volumes:
      - /srv/gsn/arenasettings.json:/srv/gsn/arenasettings.json:ro
    log_driver: fluentd
    log_opt:
      fluentd-address: 127.0.0.1:24224
      fluentd-tag: "docker.{{ '{{' }}.Name{{ '}}' }}"
      fluentd-async-connect: "true"
    #log_driver: syslog
    #log_opt:
    #  syslog-address: udp://127.0.0.1:15140
    #  syslog-tag: "docker.{{ '{{' }}.Name{{ '}}' }}"
    env:
      ARENA_COMPONENT: "{{ persistcomponent }}"
  tags:
    - "{{ persistcomponent }}_platform_spinup"
    - platform_spinup
    - "{{ persistcomponent }}_platform"
  when: builds['platform'][persistcomponent] not in [ 'Keep', 'None']
  notify:
    - "drop {{ persistcomponent }}-nginx container"
    - "restart {{ persistcomponent }}-nginx container"
```
The container has `VOLUME` in its Dockerfile which we mount to the nginx container (we put static files there, collected with Django's `manage.py collectstatic`).
##### EXPECTED RESULTS
not to have those dangling volumes
##### ACTUAL RESULTS
we have dangling volumes when we renew the container
| True | Volumes do not get dropped when removing the docker container - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
_docker module
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.0.1.0
```
##### CONFIGURATION
```
[defaults]
host_key_checking = False
forks = 10
var_compression_level=9
retries=5
fact_caching = jsonfile
fact_caching_connection = ~/.ansible/tmp
```
##### OS / ENVIRONMENT
OS From: OS X, Ubuntu 14.04
OS To: Ubuntu 14.04
##### SUMMARY
<!--- Explain the problem briefly -->
Volumes do not get dropped when removing the docker container
##### STEPS TO REPRODUCE
1. Create the docker container with attached volumes
2. fill the volume
3. drop the container
4. repeat several times
5. `docker volume ls -f dangling=true` will show you the unused volumes
See [Docker manual](https://docs.docker.com/v1.10/engine/userguide/containers/dockervolumes/):
> Note: Docker will not warn you when removing a container without providing the `-v` option to delete its volumes. If you remove containers without using the `-v` option, you may end up with “dangling” volumes; volumes that are no longer referenced by a container. You can use `docker volume ls -f dangling=true` to find dangling volumes, and use `docker volume rm <volume name>` to remove a volume that’s no longer needed.
I spin up the container with something like this:
<!--- Paste example playbooks or commands between quotes below -->
```
- name: Spinning up an {{ persistcomponent }}-platform container
  docker:
    registry: "{{ docker_registry_url }}"
    email: "{{ docker_registry_email }}"
    username: "{{ docker_registry_user }}"
    password: "{{ docker_registry_password }}"
    docker_api_version: "{{ docker_api_version }}"
    image: "{{ docker_registry_url }}/arena/{{ docker_images[persistcomponent] }}:{{ builds['platform'][persistcomponent] }}"
    pull: always
    state: reloaded
    restart_policy: always
    restart_policy_retry: 5
    name: "{{ persistcomponent }}-platform"
    net: bridge
    volumes:
      - /srv/gsn/arenasettings.json:/srv/gsn/arenasettings.json:ro
    log_driver: fluentd
    log_opt:
      fluentd-address: 127.0.0.1:24224
      fluentd-tag: "docker.{{ '{{' }}.Name{{ '}}' }}"
      fluentd-async-connect: "true"
    #log_driver: syslog
    #log_opt:
    #  syslog-address: udp://127.0.0.1:15140
    #  syslog-tag: "docker.{{ '{{' }}.Name{{ '}}' }}"
    env:
      ARENA_COMPONENT: "{{ persistcomponent }}"
  tags:
    - "{{ persistcomponent }}_platform_spinup"
    - platform_spinup
    - "{{ persistcomponent }}_platform"
  when: builds['platform'][persistcomponent] not in [ 'Keep', 'None']
  notify:
    - "drop {{ persistcomponent }}-nginx container"
    - "restart {{ persistcomponent }}-nginx container"
```
The container has `VOLUME` in its Dockerfile which we mount to the nginx container (we put static files there, collected with Django's `manage.py collectstatic`).
##### EXPECTED RESULTS
not to have those dangling volumes
##### ACTUAL RESULTS
we have dangling volumes when we renew the container
| main | volumes do not get dropped when removing the docker container issue type bug report component name docker module ansible version ansible configuration host key checking false forks var compression level retries fact caching jsonfile fact caching connection ansible tmp os environment os from os x ubuntu os to ubuntu summary volumes do not get dropped when removing the docker container steps to reproduce create the docker container with attached volumes fill the volume drop the container repeat several times docker volume ls f dangling true will show you the not used containers see note docker will not warn you when removing a container without providing the v option to delete its volumes if you remove containers without using the v option you may end up with “dangling” volumes volumes that are no longer referenced by a container you can use docker volume ls f dangling true to find dangling volumes and use docker volume rm to remove a volume that’s no longer needed i spin up the container with something like this name spinning up an persistcomponent platform container docker registry docker registry url email docker registry email username docker registry user password docker registry password docker api version docker api version image docker registry url arena docker images builds pull always state reloaded restart policy always restart policy retry name persistcomponent platform net bridge volumes srv gsn arenasettings json srv gsn arenasettings json ro log driver fluentd log opt fluentd address fluentd tag docker name fluentd async connect true log driver syslog log opt syslog address udp syslog tag docker name env arena component persistcomponent tags persistcomponent platform spinup platform spinup persistcomponent platform when builds not in notify drop persistcomponent nginx container restart persistcomponent nginx container the container has volume in its dockerfile which we mount to nginx container we put static files there collected with django s 
manage py collectstatic expected results not to have those dangling volumes actual results we have dangling volumes when we renew the container | 1 |
5,743 | 30,385,916,854 | IssuesEvent | 2023-07-13 00:40:26 | MozillaFoundation/foundation.mozilla.org | https://api.github.com/repos/MozillaFoundation/foundation.mozilla.org | closed | Fix npm errors shown in `docker-compose up` | bug engineering maintain | ### Describe the bug
Getting npm errors from running `docker-compose up`
### To Reproduce
Steps to reproduce the behavior:
1. check out `main` locally
2. run `inv new-env`
3. run `docker-compose up` and wait
4. notice the npm errors towards the end of the log
### Expected behavior
No npm errors.
### Screenshots
```bash
foundation-2022-nov-watch-static-files-1 | > @ optimize:css:clean /app
foundation-2022-nov-watch-static-files-1 | > shx rm -rf network-api/networkapi/frontend/_css/temp
foundation-2022-nov-watch-static-files-1 |
foundation-2022-nov-watch-static-files-1 | [Error: ENOENT: no such file or directory, open '/app/network-api/networkapi/frontend/_css/temp/buyers-guide.compiled.css'] {
foundation-2022-nov-watch-static-files-1 | errno: -2,
foundation-2022-nov-watch-static-files-1 | code: 'ENOENT',
foundation-2022-nov-watch-static-files-1 | syscall: 'open',
foundation-2022-nov-watch-static-files-1 | path: '/app/network-api/networkapi/frontend/_css/temp/buyers-guide.compiled.css'
foundation-2022-nov-watch-static-files-1 | }
foundation-2022-nov-watch-static-files-1 | npm ERR! code ELIFECYCLE
foundation-2022-nov-watch-static-files-1 | npm ERR! errno 1
foundation-2022-nov-watch-static-files-1 | npm ERR! @ optimize:css:run: `postcss network-api/networkapi/frontend/_css/*.css --dir network-api/networkapi/frontend/_css/temp`
foundation-2022-nov-watch-static-files-1 | npm ERR! Exit status 1
foundation-2022-nov-watch-static-files-1 | npm ERR!
foundation-2022-nov-watch-static-files-1 | npm ERR! Failed at the @ optimize:css:run script.
foundation-2022-nov-watch-static-files-1 | npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
foundation-2022-nov-watch-static-files-1 |
foundation-2022-nov-watch-static-files-1 | npm ERR! A complete log of this run can be found in:
foundation-2022-nov-watch-static-files-1 | npm ERR! /root/.npm/_logs/2022-11-15T22_22_35_954Z-debug.log
foundation-2022-nov-watch-static-files-1 | ERROR: "optimize:css:run" exited with 1.
foundation-2022-nov-watch-static-files-1 | npm ERR! code ELIFECYCLE
foundation-2022-nov-watch-static-files-1 | npm ERR! errno 1
foundation-2022-nov-watch-static-files-1 | npm ERR! @ optimize:css: `run-s optimize:css:*`
foundation-2022-nov-watch-static-files-1 | npm ERR! Exit status 1
foundation-2022-nov-watch-static-files-1 | npm ERR!
foundation-2022-nov-watch-static-files-1 | npm ERR! Failed at the @ optimize:css script.
foundation-2022-nov-watch-static-files-1 | npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
foundation-2022-nov-watch-static-files-1 |
foundation-2022-nov-watch-static-files-1 | npm ERR! A complete log of this run can be found in:
foundation-2022-nov-watch-static-files-1 | npm ERR! /root/.npm/_logs/2022-11-15T22_22_35_988Z-debug.log
foundation-2022-nov-watch-static-files-1 | ERROR: "optimize:css" exited with 1.
foundation-2022-nov-watch-static-files-1 | npm ERR! code ELIFECYCLE
foundation-2022-nov-watch-static-files-1 | npm ERR! errno 1
foundation-2022-nov-watch-static-files-1 | npm ERR! @ build:sass: `run-s build:sass:clean && run-p build:sass:main build:sass:bg && run-s optimize:css`
foundation-2022-nov-watch-static-files-1 | npm ERR! Exit status 1
foundation-2022-nov-watch-static-files-1 | npm ERR!
foundation-2022-nov-watch-static-files-1 | npm ERR! Failed at the @ build:sass script.
foundation-2022-nov-watch-static-files-1 | npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
foundation-2022-nov-watch-static-files-1 |
foundation-2022-nov-watch-static-files-1 | npm ERR! A complete log of this run can be found in:
foundation-2022-nov-watch-static-files-1 | npm ERR! /root/.npm/_logs/2022-11-15T22_22_36_108Z-debug.log
```
| True | Fix npm errors shown in `docker-compose up` - ### Describe the bug
Getting npm errors from running `docker-compose up`
### To Reproduce
Steps to reproduce the behavior:
1. check out `main` locally
2. run `inv new-env`
3. run `docker-compose up` and wait
4. notice the npm errors towards the end of the log
### Expected behavior
No npm errors.
### Screenshots
```bash
foundation-2022-nov-watch-static-files-1 | > @ optimize:css:clean /app
foundation-2022-nov-watch-static-files-1 | > shx rm -rf network-api/networkapi/frontend/_css/temp
foundation-2022-nov-watch-static-files-1 |
foundation-2022-nov-watch-static-files-1 | [Error: ENOENT: no such file or directory, open '/app/network-api/networkapi/frontend/_css/temp/buyers-guide.compiled.css'] {
foundation-2022-nov-watch-static-files-1 | errno: -2,
foundation-2022-nov-watch-static-files-1 | code: 'ENOENT',
foundation-2022-nov-watch-static-files-1 | syscall: 'open',
foundation-2022-nov-watch-static-files-1 | path: '/app/network-api/networkapi/frontend/_css/temp/buyers-guide.compiled.css'
foundation-2022-nov-watch-static-files-1 | }
foundation-2022-nov-watch-static-files-1 | npm ERR! code ELIFECYCLE
foundation-2022-nov-watch-static-files-1 | npm ERR! errno 1
foundation-2022-nov-watch-static-files-1 | npm ERR! @ optimize:css:run: `postcss network-api/networkapi/frontend/_css/*.css --dir network-api/networkapi/frontend/_css/temp`
foundation-2022-nov-watch-static-files-1 | npm ERR! Exit status 1
foundation-2022-nov-watch-static-files-1 | npm ERR!
foundation-2022-nov-watch-static-files-1 | npm ERR! Failed at the @ optimize:css:run script.
foundation-2022-nov-watch-static-files-1 | npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
foundation-2022-nov-watch-static-files-1 |
foundation-2022-nov-watch-static-files-1 | npm ERR! A complete log of this run can be found in:
foundation-2022-nov-watch-static-files-1 | npm ERR! /root/.npm/_logs/2022-11-15T22_22_35_954Z-debug.log
foundation-2022-nov-watch-static-files-1 | ERROR: "optimize:css:run" exited with 1.
foundation-2022-nov-watch-static-files-1 | npm ERR! code ELIFECYCLE
foundation-2022-nov-watch-static-files-1 | npm ERR! errno 1
foundation-2022-nov-watch-static-files-1 | npm ERR! @ optimize:css: `run-s optimize:css:*`
foundation-2022-nov-watch-static-files-1 | npm ERR! Exit status 1
foundation-2022-nov-watch-static-files-1 | npm ERR!
foundation-2022-nov-watch-static-files-1 | npm ERR! Failed at the @ optimize:css script.
foundation-2022-nov-watch-static-files-1 | npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
foundation-2022-nov-watch-static-files-1 |
foundation-2022-nov-watch-static-files-1 | npm ERR! A complete log of this run can be found in:
foundation-2022-nov-watch-static-files-1 | npm ERR! /root/.npm/_logs/2022-11-15T22_22_35_988Z-debug.log
foundation-2022-nov-watch-static-files-1 | ERROR: "optimize:css" exited with 1.
foundation-2022-nov-watch-static-files-1 | npm ERR! code ELIFECYCLE
foundation-2022-nov-watch-static-files-1 | npm ERR! errno 1
foundation-2022-nov-watch-static-files-1 | npm ERR! @ build:sass: `run-s build:sass:clean && run-p build:sass:main build:sass:bg && run-s optimize:css`
foundation-2022-nov-watch-static-files-1 | npm ERR! Exit status 1
foundation-2022-nov-watch-static-files-1 | npm ERR!
foundation-2022-nov-watch-static-files-1 | npm ERR! Failed at the @ build:sass script.
foundation-2022-nov-watch-static-files-1 | npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
foundation-2022-nov-watch-static-files-1 |
foundation-2022-nov-watch-static-files-1 | npm ERR! A complete log of this run can be found in:
foundation-2022-nov-watch-static-files-1 | npm ERR! /root/.npm/_logs/2022-11-15T22_22_36_108Z-debug.log
```
| main | fix npm errors shown in docker compose up describe the bug getting npm errors from running docker compose up to reproduce steps to reproduce the behavior check out main locally run inv new env run docker compose up and wait notice the npm errors towards the end of the log expected behavior no npm errors screenshots bash foundation nov watch static files optimize css clean app foundation nov watch static files shx rm rf network api networkapi frontend css temp foundation nov watch static files foundation nov watch static files foundation nov watch static files errno foundation nov watch static files code enoent foundation nov watch static files syscall open foundation nov watch static files path app network api networkapi frontend css temp buyers guide compiled css foundation nov watch static files foundation nov watch static files npm err code elifecycle foundation nov watch static files npm err errno foundation nov watch static files npm err optimize css run postcss network api networkapi frontend css css dir network api networkapi frontend css temp foundation nov watch static files npm err exit status foundation nov watch static files npm err foundation nov watch static files npm err failed at the optimize css run script foundation nov watch static files npm err this is probably not a problem with npm there is likely additional logging output above foundation nov watch static files foundation nov watch static files npm err a complete log of this run can be found in foundation nov watch static files npm err root npm logs debug log foundation nov watch static files error optimize css run exited with foundation nov watch static files npm err code elifecycle foundation nov watch static files npm err errno foundation nov watch static files npm err optimize css run s optimize css foundation nov watch static files npm err exit status foundation nov watch static files npm err foundation nov watch static files npm err failed at the optimize css script foundation 
nov watch static files npm err this is probably not a problem with npm there is likely additional logging output above foundation nov watch static files foundation nov watch static files npm err a complete log of this run can be found in foundation nov watch static files npm err root npm logs debug log foundation nov watch static files error optimize css exited with foundation nov watch static files npm err code elifecycle foundation nov watch static files npm err errno foundation nov watch static files npm err build sass run s build sass clean run p build sass main build sass bg run s optimize css foundation nov watch static files npm err exit status foundation nov watch static files npm err foundation nov watch static files npm err failed at the build sass script foundation nov watch static files npm err this is probably not a problem with npm there is likely additional logging output above foundation nov watch static files foundation nov watch static files npm err a complete log of this run can be found in foundation nov watch static files npm err root npm logs debug log | 1 |
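For reference, the `ENOENT` failures in the log above are the class of error you get when a build step writes into an output directory that an earlier clean step has removed. A hedged Python sketch of the usual guard — create the output directory before writing — with hypothetical paths and function names, not the project's actual build script:

```python
import os
import shutil

def compile_css(src_paths, out_dir):
    """Copy 'compiled' CSS files into out_dir, creating the directory first
    so writes cannot fail with ENOENT after a prior clean step removed it."""
    os.makedirs(out_dir, exist_ok=True)  # the guard the failing script lacks
    written = []
    for src in src_paths:
        dest = os.path.join(out_dir, os.path.basename(src))
        shutil.copyfile(src, dest)
        written.append(dest)
    return written
```

In an npm pipeline the equivalent is typically an explicit `mkdir -p` (e.g. via `shx mkdir -p`) before the step that writes into the directory.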
737,490 | 25,518,487,045 | IssuesEvent | 2022-11-28 18:19:07 | googleapis/nodejs-datastore-session | https://api.github.com/repos/googleapis/nodejs-datastore-session | closed | unexpired session is returned: Should destroy a session failed | priority: p1 type: bug api: datastore flakybot: issue flakybot: flaky | Note: #323 was also for this test, but it was closed more than 10 days ago. So, I didn't mark it flaky.
----
commit: a7f10f0af89100abfdac1b97399e94991ec5b5b5
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/8f16ae74-d3da-4ba1-b05e-34b716b6f5b3), [Sponge](http://sponge2/8f16ae74-d3da-4ba1-b05e-34b716b6f5b3)
status: failed
<details><summary>Test output</summary><br><pre>Timeout of 10000ms exceeded. For async tests and hooks, ensure "done()" is called; if returning a Promise, ensure it resolves. (/workspace/build/system-test/session.js)
Error: Timeout of 10000ms exceeded. For async tests and hooks, ensure "done()" is called; if returning a Promise, ensure it resolves. (/workspace/build/system-test/session.js)
at listOnTimeout (internal/timers.js:554:17)
at processTimers (internal/timers.js:497:7)</pre></details> | 1.0 | unexpired session is returned: Should destroy a session failed - Note: #323 was also for this test, but it was closed more than 10 days ago. So, I didn't mark it flaky.
----
commit: a7f10f0af89100abfdac1b97399e94991ec5b5b5
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/8f16ae74-d3da-4ba1-b05e-34b716b6f5b3), [Sponge](http://sponge2/8f16ae74-d3da-4ba1-b05e-34b716b6f5b3)
status: failed
<details><summary>Test output</summary><br><pre>Timeout of 10000ms exceeded. For async tests and hooks, ensure "done()" is called; if returning a Promise, ensure it resolves. (/workspace/build/system-test/session.js)
Error: Timeout of 10000ms exceeded. For async tests and hooks, ensure "done()" is called; if returning a Promise, ensure it resolves. (/workspace/build/system-test/session.js)
at listOnTimeout (internal/timers.js:554:17)
at processTimers (internal/timers.js:497:7)</pre></details> | non_main | unexpired session is returned should destroy a session failed note was also for this test but it was closed more than days ago so i didn t mark it flaky commit buildurl status failed test output timeout of exceeded for async tests and hooks ensure done is called if returning a promise ensure it resolves workspace build system test session js error timeout of exceeded for async tests and hooks ensure done is called if returning a promise ensure it resolves workspace build system test session js at listontimeout internal timers js at processtimers internal timers js | 0 |
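The mocha hint in the log above ("ensure \"done()\" is called; if returning a Promise, ensure it resolves") boils down to bounding async work with an explicit timeout so a hung cleanup fails fast instead of stalling the suite. A rough Python analog of that pattern (hypothetical helper, not the repository's actual test code):

```python
import asyncio

async def destroy_session(store, sid):
    # Stand-in for the session-destroy step the flaky test exercises.
    await store.delete(sid)

def run_with_timeout(coro, seconds=10.0):
    """Run a coroutine, raising asyncio.TimeoutError if it does not finish
    in time -- mirroring mocha's 'Timeout of 10000ms exceeded' behaviour."""
    return asyncio.run(asyncio.wait_for(coro, timeout=seconds))
```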
198,402 | 15,708,369,566 | IssuesEvent | 2021-03-26 20:25:11 | UnB-KnEDLe/DODFMiner | https://api.github.com/repos/UnB-KnEDLe/DODFMiner | closed | Encontrar amostra do ato no DODF e Consolidar atos semelhantes (19-24) | documentation | Aviso de reabertura
Aviso de suspensão
Aviso de nova abertura –
Aviso de suspensão –
AVISO SORTEIO SUBCOMISSÃO TÉCNICA
AVISO SUSPENSÃO LICITAÇÃO | 1.0 | Encontrar amostra do ato no DODF e Consolidar atos semelhantes (19-24) - Aviso de reabertura
Aviso de suspensão
Aviso de nova abertura –
Aviso de suspensão –
AVISO SORTEIO SUBCOMISSÃO TÉCNICA
AVISO SUSPENSÃO LICITAÇÃO | non_main | encontrar amostra do ato no dodf e consolidar atos semelhantes aviso de reabertura aviso de suspensão aviso de nova abertura – aviso de suspensão – aviso sorteio subcomissão técnica aviso suspensão licitação | 0 |
10,436 | 6,724,823,785 | IssuesEvent | 2017-10-17 01:03:44 | gahansen/Albany | https://api.github.com/repos/gahansen/Albany | closed | Remove stale src/LCM/doc directory | developer usability LCM | The files in this directory are very old and appear to be stale. The build scripts have been superseded by the module files. Not sure what the use is for the *.i files. In any case, any file that needs to be preserved should go to the doc/LCM directory.
Pinging @jwfoulk and @jtostie since they probably used this back in the day. | True | Remove stale src/LCM/doc directory - The files in this directory are very old and appear to be stale. The build scripts have been superseded by the module files. Not sure what the use is for the *.i files. In any case, any file that needs to be preserved should go to the doc/LCM directory.
Pinging @jwfoulk and @jtostie since they probably used this back in the day. | non_main | remove stale src lcm doc directory the files in this directory are very old and appear to be stale the build scripts have been superseded by the module files not sure what the use is for the i files in any case any file that needs to be preserved should go to the doc lcm directory pinging jwfoulk and jtostie since they probably used this back in the day | 0 |
267 | 3,028,180,177 | IssuesEvent | 2015-08-04 02:10:39 | AKST/akst.io-backend | https://api.github.com/repos/AKST/akst.io-backend | opened | remove children api from Dependency injection | future proofing maintainablity | Currently the DI api isn't compatible with the use of a new operator, unless the class doesn't specify any children. So it's worth removing the children api for this reason. | True | remove children api from Dependency injection - Currently the DI api isn't compatible with the use of a new operator, unless the class doesn't specify any children. So it's worth removing the children api for this reason. | main | remove children api from dependency injection currently the di api isn t compatible with the use of a new operator unless the class doesn t specify any children so it s worth removing the children api for this reason | 1 |
91,777 | 10,728,433,721 | IssuesEvent | 2019-10-28 13:54:04 | redhat-cop/casl-ansible | https://api.github.com/repos/redhat-cop/casl-ansible | closed | Dead links | Documentation bug | We've got some dead links throughout the repo that could use some cleaning up. Low hanging fruit for :ghost: :jack_o_lantern: Hacktoberfest :ghost: :jack_o_lantern: | 1.0 | Dead links - We've got some dead links throughout the repo that could use some cleaning up. Low hanging fruit for :ghost: :jack_o_lantern: Hacktoberfest :ghost: :jack_o_lantern: | non_main | dead links we ve got some dead links throughout the repo that could use some cleaning up low hanging fruit for ghost jack o lantern hacktoberfest ghost jack o lantern | 0 |
447,267 | 31,654,099,861 | IssuesEvent | 2023-09-07 02:31:10 | fga-eps-mds/2023-2-GEROcuidado-Doc | https://api.github.com/repos/fga-eps-mds/2023-2-GEROcuidado-Doc | reopened | Desenvolver Plano de Comunicação | documentation EPS easy | # Descrição
Desenvolver plano de comunicação, ferramentas, horários e reuniões fixas.
# Tarefas
Aba responsável por inserir as demandas desta issue.
- [ ] Estabelecer as ferramentas;
- [ ] Estabelecer os horários das reuniões:
- [x] Estabelecer o horário das reuniões de EPS;
- [x] Estabelecer o horário das reuniões de EPS e MDS;
- [ ] Estabelecer o horário das reuniões com o cliente;
- [ ] Tarefa 2
# Critérios de Aceitação
- [ ] Ter o plano de comunicação adicionado ao mkdocs
- [ ] Ter o plano de comunicação avaliado pelos membros
# Informações Adicionais
- As reuniões de EPS ocorrerão todas às segundas às 20:40;
- As reuniões com todos os membros, EPS e MDS, ocorrerão todas as terças às 20:40; | 1.0 | Desenvolver Plano de Comunicação - # Descrição
Desenvolver plano de comunicação, ferramentas, horários e reuniões fixas.
# Tarefas
Aba responsável por inserir as demandas desta issue.
- [ ] Estabelecer as ferramentas;
- [ ] Estabelecer os horários das reuniões:
- [x] Estabelecer o horário das reuniões de EPS;
- [x] Estabelecer o horário das reuniões de EPS e MDS;
- [ ] Estabelecer o horário das reuniões com o cliente;
- [ ] Tarefa 2
# Critérios de Aceitação
- [ ] Ter o plano de comunicação adicionado ao mkdocs
- [ ] Ter o plano de comunicação avaliado pelos membros
# Informações Adicionais
- As reuniões de EPS ocorrerão todas às segundas às 20:40;
- As reuniões com todos os membros, EPS e MDS, ocorrerão todas as terças às 20:40; | non_main | desenvolver plano de comunicação descrição desenvolver plano de comunicação ferramentas horários e reuniões fixas tarefas aba responsável por inserir as demandas desta issue estabelecer as ferramentas estabelecer os horários das reuniões estabelecer o horário das reuniões de eps estabelecer o horário das reuniões de eps e mds estabelecer o horário das reuniões com o cliente tarefa critérios de aceitação ter o plano de comunicação adicionado ao mkdocs ter o plano de comunicação avaliado pelos membros informações adicionais as reuniões de eps ocorrerão todas às segundas às as reuniões com todos os membros eps e mds ocorrerão todas as terças às | 0 |
15,287 | 5,093,021,082 | IssuesEvent | 2017-01-03 01:59:24 | OData/model-first | https://api.github.com/repos/OData/model-first | closed | [QE] Poli Check for Model-First | 3 - Resolved (code ready) | Poli Check for Model-First and Model-First-APIDesign
Fix Sev1 and Sev2 issues
| 1.0 | [QE] Poli Check for Model-First - Poli Check for Model-First and Model-First-APIDesign
Fix Sev1 and Sev2 issues
| non_main | poli check for model first poli check for model first and model first apidesign fix and issues | 0 |
5,077 | 25,974,044,891 | IssuesEvent | 2022-12-19 13:33:47 | NIAEFEUP/website-niaefeup-backend | https://api.github.com/repos/NIAEFEUP/website-niaefeup-backend | opened | Remove shared cache mode property | maintainability | As can be seen in https://github.com/NIAEFEUP/website-niaefeup-backend/pull/69#discussion_r1052153534, we currently have, in `application.properties`, a cache mode property that simply overrides the deprecated setting used by Hibernate:
https://github.com/NIAEFEUP/website-niaefeup-backend/blob/c07586e68695735ad1b1ee8a4a8f6c3c6ac44e12/src/main/resources/application.properties#L28-L30
You can read more about this issue [here](https://github.com/spring-projects/spring-data-jpa/issues/2717). This property should be removed whenever the problem with Hibernate is fixed. | True | Remove shared cache mode property - As can be seen in https://github.com/NIAEFEUP/website-niaefeup-backend/pull/69#discussion_r1052153534, we currently have, in `application.properties`, a cache mode property that simply overrides the deprecated setting used by Hibernate:
https://github.com/NIAEFEUP/website-niaefeup-backend/blob/c07586e68695735ad1b1ee8a4a8f6c3c6ac44e12/src/main/resources/application.properties#L28-L30
You can read more about this issue [here](https://github.com/spring-projects/spring-data-jpa/issues/2717). This property should be removed whenever the problem with Hibernate is fixed. | main | remove shared cache mode property as can be seen in we currently have in application properties a cache mode property that simply overrides the deprecated setting used by hibernate you can read more about this issue this property should be removed whenever the problem with hibernate is fixed | 1 |
182,058 | 14,100,382,969 | IssuesEvent | 2020-11-06 04:01:35 | tgstation/tgstation | https://api.github.com/repos/tgstation/tgstation | closed | buckled walking mushrooms are able to move their chair around if they're aggroed | Bug Tested/Reproduced | rogue shrooms | 1.0 | buckled walking mushrooms are able to move their chair around if they're aggroed - rogue shrooms | non_main | buckled walking mushrooms are able to move their chair around if they re aggroed rogue shrooms | 0 |
29,150 | 8,301,067,346 | IssuesEvent | 2018-09-21 10:06:07 | GoogleCloudPlatform/agones | https://api.github.com/repos/GoogleCloudPlatform/agones | closed | Agones helm repo | area/build-tools kind/feature | We could setup a chart repository to streamline installation.
See:
https://github.com/kubernetes/helm/blob/master/docs/chart_repository.md
Possible options:
- Google Cloud Storage.
- GitHub Pages.
Once this is done, the installation steps would be:
```
helm repo add agones our_repo_url
helm install agones/agones
``` | 1.0 | Agones helm repo - We could setup a chart repository to streamline installation.
See:
https://github.com/kubernetes/helm/blob/master/docs/chart_repository.md
Possible options:
- Google Cloud Storage.
- GitHub Pages.
Once this is done, the installation steps would be:
```
helm repo add agones our_repo_url
helm install agones/agones
``` | non_main | agones helm repo we could setup a chart repository to streamline installation see possible options google cloud storage github pages once this is done installation step would be helm repo add agones our repo url helm install agones agones | 0 |
2,875 | 10,280,800,344 | IssuesEvent | 2019-08-26 06:45:56 | KazDragon/terminalpp | https://api.github.com/repos/KazDragon/terminalpp | closed | Inconsistent use of std::size_t in string | Compatibility Maintainability | terminalpp::string uses std::size_t for the return type of size(), for the size constructor and for the UDS functions. It then uses string::size_type for other functions.
It should be size_type for all of these. | True | Inconsistent use of std::size_t in string - terminalpp::string uses std::size_t for the return type of size(), for the size constructor and for the UDS functions. It then uses string::size_type for other functions.
It should be size_type for all of these. | main | inconsistent use of std size t in string terminalpp string uses std size t for the return type of size for the size constructor and for the uds functions it then uses string size type for other functions it should be size type for all of these | 1 |
4,000 | 18,672,271,138 | IssuesEvent | 2021-10-30 23:27:06 | centerofci/mathesar | https://api.github.com/repos/centerofci/mathesar | closed | Create implementation issues for Working with Views milestone | type: enhancement status: blocked restricted: maintainers work: product | Once the design specs for the "Working with Views" milestone are complete, we will need to create (or update existing) frontend and backend implementation issues to track the work.
Related issues:
- #442
- #263
Marking as blocked until design work is complete:
- #243
- #443
- #466
- #454
- #455
- #456
| True | Create implementation issues for Working with Views milestone - Once the design specs for the "Working with Views" milestone are complete, we will need to create (or update existing) frontend and backend implementation issues to track the work.
Related issues:
- #442
- #263
Marking as blocked until design work is complete:
- #243
- #443
- #466
- #454
- #455
- #456
| main | create implementation issues for working with views milestone once the design specs for the working with views milestone are complete we will need to create or update existing frontend and backend implementation issues to track the work related issues marking as blocked until design work is complete | 1 |
113,914 | 11,826,473,940 | IssuesEvent | 2020-03-21 18:03:16 | coatk1/playground | https://api.github.com/repos/coatk1/playground | opened | [DOCUMENTATION] | documentation | **Does the project need documentation**
Explain what the project need further explanation on (i.e. what does the project do, examples on usage, etc).
**Does the source code need documentation**
Explain what the source code need further explanation on (i.e. what does a module or class do, examples on usage, etc).
https://help.github.com/en/github/setting-up-and-managing-organizations-and-teams/managing-default-labels-for-repositories-in-your-organization | 1.0 | [DOCUMENTATION] - **Does the project need documentation**
Explain what the project need further explanation on (i.e. what does the project do, examples on usage, etc).
**Does the source code need documentation**
Explain what the source code need further explanation on (i.e. what does a module or class do, examples on usage, etc).
https://help.github.com/en/github/setting-up-and-managing-organizations-and-teams/managing-default-labels-for-repositories-in-your-organization | non_main | does the project need documentation explain what the project need further explanation on i e what does the project do examples on usage etc does the source code need documentation explain what the source code need further explanation on i e what does a module or class do examples on usage etc | 0
152,535 | 23,986,172,316 | IssuesEvent | 2022-09-13 19:16:39 | GovAlta/ui-components | https://api.github.com/repos/GovAlta/ui-components | closed | Card sort - Storybook | Service design Stale | ### Background:
If we continue to utilize Storybook to communicate our technical documentation externally, we can invest time to ensure that the site structure more closely matches the mental models of our users.
**As a:** Design system team
**I want to:** Ensure that the Storybook website architecture aligns with the mental model of its users
**so that:** Users of the Storybook website will be able to find the content and resources they need
### Acceptance criteria
- [ ] Find some good storybook benchmarks to compare our design system Storybook site to.
- [ ] Recruit 2 groups with a mix of 15-20 designers and developers each
**Group 1**
- [ ] Present an existing site architecture (as found during discovery) and ask them to complete a set of meaningful design system tasks based on their role (front end design, service designer, or developer)
**Group 2**
- [ ] To create a site architecture using an "open" card sort
Next steps
- [ ] As time permits teams to switch assignments to help verify insights.
### Presentable outcomes
- [ ] Findings presented in sprint review
### Sprint Ready Checklist
- Acceptance criteria defined
- Team has defined steps to satisfy acceptance criteria
- Acceptance criteria is verifiable / testable
- External / 3rd Party dependencies identified
### Related issues:
### Resources:
| 1.0 | Card sort - Storybook - ### Background:
If we continue to utilize Storybook to communicate our technical documentation externally, we can invest time to ensure that the site structure more closely matches the mental models of our users.
**As a:** Design system team
**I want to:** Ensure that the Storybook website architecture aligns with the mental model of its users
**so that:** Users of the Storybook website will be able to find the content and resources they need
### Acceptance criteria
- [ ] Find some good storybook benchmarks to compare our design system Storybook site to.
- [ ] Recruit 2 groups with a mix of 15-20 designers and developers each
**Group 1**
- [ ] Present an existing site architecture (as found during discovery) and ask them to complete a set of meaningful design system tasks based on their role (front end design, service designer, or developer)
**Group 2**
- [ ] To create a site architecture using an "open" card sort
Next steps
- [ ] As time permits teams to switch assignments to help verify insights.
### Presentable outcomes
- [ ] Findings presented in sprint review
### Sprint Ready Checklist
- Acceptance criteria defined
- Team has defined steps to satisfy acceptance criteria
- Acceptance criteria is verifiable / testable
- External / 3rd Party dependencies identified
### Related issues:
### Resources:
| non_main | card sort storybook background if we continue to utilize storybook to communicate our technical documentation externally we can invest time to ensure that the site structure more closely matches the mental models of our users as a design system team i want to ensure that the storybook website architecture aligns with the mental model of its users so that users of the storybook website will be able to find the content and resources they need acceptance criteria find some good storybook benchmarks to compare our design system storybook site to recruit groups with a mix of designers and developers each group present an existing site architecture as found during discovery and ask them to complete a set of meaningful design system tasks based on their role front end design service designer or developer group to create a site architecture using an open card sort next steps as time permits teams to switch assignments to help verify insights presentable outcomes findings presented in sprint review sprint ready checklist acceptance criteria defined team has defined steps to satisfy acceptance criteria acceptance criteria is verifiable testable external party dependencies identified related issues resources | 0
3,822 | 16,619,439,100 | IssuesEvent | 2021-06-02 21:33:58 | walbourn/directx-vs-templates | https://api.github.com/repos/walbourn/directx-vs-templates | closed | WS_POPUP recommended but not used | maintainence | The comment next to the CreateWindowExW call says to use WS_POPUP for fullscreen, but the WM_SYSKEYDOWN handler seems to set GWL_STYLE to 0 instead. Is this a mistake? | True | WS_POPUP recommended but not used - The comment next to the CreateWindowExW call says to use WS_POPUP for fullscreen, but the WM_SYSKEYDOWN handler seems to set GWL_STYLE to 0 instead. Is this a mistake? | main | ws popup recommended but not used the comment next to the createwindowexw call says to use ws popup for fullscreen but the wm syskeydown handler seems to set gwl style to instead is this a mistake | 1 |
4,226 | 20,909,855,088 | IssuesEvent | 2022-03-24 08:14:38 | coq/platform | https://api.github.com/repos/coq/platform | closed | Add relation-algebra to Coq Platform | package inclusion has maintainer agreement opam package missing | A very useful library with great automation: https://github.com/damien-pous/relation-algebra. (/CC @damien-pous) | True | Add relation-algebra to Coq Platform - A very useful library with great automation: https://github.com/damien-pous/relation-algebra. (/CC @damien-pous) | main | add relation algebra to coq platform a very useful library with great automation cc damien pous | 1 |
4,594 | 5,191,138,024 | IssuesEvent | 2017-01-21 17:16:33 | gsomix/skalarprodukt | https://api.github.com/repos/gsomix/skalarprodukt | closed | Add CI | kind:infrastructure | I recommend to use [Appveyor](https://www.appveyor.com/) and [Travis](https://travis-ci.org/) both. Travis is especially usable for testing on Mono.
- [x] Appveyor
- [x] Travis
| 1.0 | Add CI - I recommend to use [Appveyor](https://www.appveyor.com/) and [Travis](https://travis-ci.org/) both. Travis is especially usable for testing on Mono.
- [x] Appveyor
- [x] Travis
| non_main | add ci i recommend to use and both travis is especially usable for testing on mono appveyor travis | 0 |
5,515 | 27,560,940,577 | IssuesEvent | 2023-03-07 21:58:10 | arcticicestudio/nord-vim | https://api.github.com/repos/arcticicestudio/nord-vim | closed | `nordtheme` organization migration | context-workflow scope-maintainability | <p align="center">
<a href="https://www.nordtheme.com" target="_blank">
<picture>
<source srcset="https://raw.githubusercontent.com/nordtheme/assets/main/static/images/logos/heroes/logo-typography/dark/frostic/nord3/spaced.svg?sanitize=true" width="40%" media="(prefers-color-scheme: light), (prefers-color-scheme: no-preference)" />
<source srcset="https://raw.githubusercontent.com/nordtheme/assets/main/static/images/logos/heroes/logo-typography/light/frostic/nord6/spaced.svg?sanitize=true" width="40%" media="(prefers-color-scheme: dark)" />
<img src="https://raw.githubusercontent.com/nordtheme/assets/main/static/images/logos/heroes/logo-typography/dark/frostic/nord3/spaced.svg?sanitize=true" width="40%" />
</picture>
</a>
</p>
As part of the [“Northern Post — The state and roadmap of Nord“][1] announcement, this repository will be migrated to [the `nordtheme` GitHub organization][2].
This issue is a task of nordtheme/nord#185 epic ([_tasklist_][3]).
[1]: https://github.com/orgs/nordtheme/discussions/183
[2]: https://github.com/nordtheme
[3]: https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/about-task-lists
| True | `nordtheme` organization migration - <p align="center">
<a href="https://www.nordtheme.com" target="_blank">
<picture>
<source srcset="https://raw.githubusercontent.com/nordtheme/assets/main/static/images/logos/heroes/logo-typography/dark/frostic/nord3/spaced.svg?sanitize=true" width="40%" media="(prefers-color-scheme: light), (prefers-color-scheme: no-preference)" />
<source srcset="https://raw.githubusercontent.com/nordtheme/assets/main/static/images/logos/heroes/logo-typography/light/frostic/nord6/spaced.svg?sanitize=true" width="40%" media="(prefers-color-scheme: dark)" />
<img src="https://raw.githubusercontent.com/nordtheme/assets/main/static/images/logos/heroes/logo-typography/dark/frostic/nord3/spaced.svg?sanitize=true" width="40%" />
</picture>
</a>
</p>
As part of the [“Northern Post — The state and roadmap of Nord“][1] announcement, this repository will be migrated to [the `nordtheme` GitHub organization][2].
This issue is a task of nordtheme/nord#185 epic ([_tasklist_][3]).
[1]: https://github.com/orgs/nordtheme/discussions/183
[2]: https://github.com/nordtheme
[3]: https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/about-task-lists
| main | nordtheme organization migration as part of the announcement this repository will be migrated to this issue is a task of nordtheme nord epic | 1 |
1,388 | 6,017,788,086 | IssuesEvent | 2017-06-07 10:32:20 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | win_robocopy: The variable '$changed' cannot be retrieved because it has not been set. | affects_2.2 bug_report waiting_on_maintainer windows | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
win_robocopy
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.2.0.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
[defaults]
host_key_checking = no
[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=60s -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes
##### OS / ENVIRONMENT
Running ansible from ubuntu bash on windows 10 anniversary
Running win_robocopy on host windows 2012 R2
```
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=14.04
DISTRIB_CODENAME=trusty
DISTRIB_DESCRIPTION="Ubuntu 14.04.5 LTS"
NAME="Ubuntu"
VERSION="14.04.5 LTS, Trusty Tahr"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 14.04.5 LTS"
VERSION_ID="14.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
```
##### SUMMARY
<!--- Explain the problem briefly -->
I get "The variable '$changed' cannot be retrieved because it has not been set." error when I use win_robocopy
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes below -->
```
- name: Backup files
win_robocopy:
src: "{{install_root}}"
dest: "{{install_root}}.bak"
purge: yes
recurse: yes
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
win_robocopy works :)
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with high verbosity (-vvvv) -->
win_robocopy fails on
```
Set-Attr $result.win_robocopy "changed" $changed
```
If $changed is initialized to $false, the script works
at line 112 :
```
Set-Attr $result.win_robocopy "return_code" $rc
Set-Attr $result.win_robocopy "output" $robocopy_output
$cmd_msg = "Success"
$changed = $false
If ($rc -eq 0) {
$cmd_msg = "No files copied."
}
```
**Error stack**
<!--- Paste verbatim command output between quotes below -->
```
TASK [server : Backup files] ***********************************
task path: server/tasks/main.yml:2
Using module file /usr/lib/python2.7/dist-packages/ansible/modules/extras/windows/win_robocopy.ps1
<192.168.33.20> ESTABLISH WINRM CONNECTION FOR USER: vagrant on PORT 5986 TO 192.168.33.20
<192.168.33.20> EXEC Set-StrictMode -Version Latest
(New-Item -Type Directory -Path $env:temp -Name "ansible-tmp-1479487487.3-247510806404649").FullName | Write-Host -Separator '';
<192.168.33.20> PUT "/tmp/tmpAzBWvQ" TO "C:\Users\vagrant\AppData\Local\Temp\ansible-tmp-1479487487.3-247510806404649\win_robocopy.ps1"
<192.168.33.20> EXEC Set-StrictMode -Version Latest
Try
{
& 'C:\Users\vagrant\AppData\Local\Temp\ansible-tmp-1479487487.3-247510806404649\win_robocopy.ps1'
}
Catch
{
$_obj = @{ failed = $true }
If ($_.Exception.GetType)
{
$_obj.Add('msg', $_.Exception.Message)
}
Else
{
$_obj.Add('msg', $_.ToString())
}
If ($_.InvocationInfo.PositionMessage)
{
$_obj.Add('exception', $_.InvocationInfo.PositionMessage)
}
ElseIf ($_.ScriptStackTrace)
{
$_obj.Add('exception', $_.ScriptStackTrace)
}
Try
{
$_obj.Add('error_record', ($_ | ConvertTo-Json | ConvertFrom-Json))
}
Catch
{
}
Echo $_obj | ConvertTo-Json -Compress -Depth 99
Exit 1
}
Finally { Remove-Item "C:\Users\vagrant\AppData\Local\Temp\ansible-tmp-1479487487.3-247510806404649" -Force -Recurse -ErrorAction SilentlyContinue }
An exception occurred during task execution. The full traceback is:
At C:\Users\vagrant\AppData\Local\Temp\ansible-tmp-1479487487.3-247510806404649\win_robocopy.ps1:392 char:41
+ Set-Attr $result.win_robocopy "changed" $changed
+ ~~~~~~~~
fatal: [ids-server-local]: FAILED! => {
"changed": false,
"failed": true,
"invocation": {
"module_name": "win_robocopy"
},
"msg": "The variable '$changed' cannot be retrieved because it has not been set."
```
**I can do the PR if you want** | True | win_robocopy: The variable '$changed' cannot be retrieved because it has not been set. - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
win_robocopy
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.2.0.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
[defaults]
host_key_checking = no
[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=60s -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes
##### OS / ENVIRONMENT
Running ansible from ubuntu bash on windows 10 anniversary
Running win_robocopy on host windows 2012 R2
```
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=14.04
DISTRIB_CODENAME=trusty
DISTRIB_DESCRIPTION="Ubuntu 14.04.5 LTS"
NAME="Ubuntu"
VERSION="14.04.5 LTS, Trusty Tahr"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 14.04.5 LTS"
VERSION_ID="14.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
```
##### SUMMARY
<!--- Explain the problem briefly -->
I get "The variable '$changed' cannot be retrieved because it has not been set." error when I use win_robocopy
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes below -->
```
- name: Backup files
win_robocopy:
src: "{{install_root}}"
dest: "{{install_root}}.bak"
purge: yes
recurse: yes
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
win_robocopy works :)
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with high verbosity (-vvvv) -->
win_robocopy fails on
```
Set-Attr $result.win_robocopy "changed" $changed
```
If $changed is initialized to $false, the script works
at line 112 :
```
Set-Attr $result.win_robocopy "return_code" $rc
Set-Attr $result.win_robocopy "output" $robocopy_output
$cmd_msg = "Success"
$changed = $false
If ($rc -eq 0) {
$cmd_msg = "No files copied."
}
```
**Error stack**
<!--- Paste verbatim command output between quotes below -->
```
TASK [server : Backup files] ***********************************
task path: server/tasks/main.yml:2
Using module file /usr/lib/python2.7/dist-packages/ansible/modules/extras/windows/win_robocopy.ps1
<192.168.33.20> ESTABLISH WINRM CONNECTION FOR USER: vagrant on PORT 5986 TO 192.168.33.20
<192.168.33.20> EXEC Set-StrictMode -Version Latest
(New-Item -Type Directory -Path $env:temp -Name "ansible-tmp-1479487487.3-247510806404649").FullName | Write-Host -Separator '';
<192.168.33.20> PUT "/tmp/tmpAzBWvQ" TO "C:\Users\vagrant\AppData\Local\Temp\ansible-tmp-1479487487.3-247510806404649\win_robocopy.ps1"
<192.168.33.20> EXEC Set-StrictMode -Version Latest
Try
{
& 'C:\Users\vagrant\AppData\Local\Temp\ansible-tmp-1479487487.3-247510806404649\win_robocopy.ps1'
}
Catch
{
$_obj = @{ failed = $true }
If ($_.Exception.GetType)
{
$_obj.Add('msg', $_.Exception.Message)
}
Else
{
$_obj.Add('msg', $_.ToString())
}
If ($_.InvocationInfo.PositionMessage)
{
$_obj.Add('exception', $_.InvocationInfo.PositionMessage)
}
ElseIf ($_.ScriptStackTrace)
{
$_obj.Add('exception', $_.ScriptStackTrace)
}
Try
{
$_obj.Add('error_record', ($_ | ConvertTo-Json | ConvertFrom-Json))
}
Catch
{
}
Echo $_obj | ConvertTo-Json -Compress -Depth 99
Exit 1
}
Finally { Remove-Item "C:\Users\vagrant\AppData\Local\Temp\ansible-tmp-1479487487.3-247510806404649" -Force -Recurse -ErrorAction SilentlyContinue }
An exception occurred during task execution. The full traceback is:
At C:\Users\vagrant\AppData\Local\Temp\ansible-tmp-1479487487.3-247510806404649\win_robocopy.ps1:392 char:41
+ Set-Attr $result.win_robocopy "changed" $changed
+ ~~~~~~~~
fatal: [ids-server-local]: FAILED! => {
"changed": false,
"failed": true,
"invocation": {
"module_name": "win_robocopy"
},
"msg": "The variable '$changed' cannot be retrieved because it has not been set."
```
**I can do the PR if you want** | main | win robocopy the variable changed cannot be retrieved because it has not been set issue type bug report component name win robocopy ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration host key checking no ssh args o controlmaster auto o controlpersist o userknownhostsfile dev null o identitiesonly yes os environment running ansible from ubuntu bash on windows anniversary running win robocopy on host windows distrib id ubuntu distrib release distrib codename trusty distrib description ubuntu lts name ubuntu version lts trusty tahr id ubuntu id like debian pretty name ubuntu lts version id home url support url bug report url summary i get the variable changed cannot be retrieved because it has not been set error when i use win robocopy steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used name backup files win robocopy src install root dest install root bak purge yes recurse yes expected results win robocopy works actual results win robocopy fails on set attr result win robocopy changed changed if changed is initialized to false the script works at line set attr result win robocopy return code rc set attr result win robocopy output robocopy output cmd msg success changed false if rc eq cmd msg no files copied error stack task task path server tasks main yml using module file usr lib dist packages ansible modules extras windows win robocopy establish winrm connection for user vagrant on port to exec set strictmode version latest new item type directory path env temp name ansible tmp fullname write host separator put tmp tmpazbwvq to c users vagrant appdata local temp ansible tmp win robocopy exec set strictmode version latest try c users vagrant appdata local temp ansible tmp win robocopy catch obj failed true if exception gettype obj add msg exception message else obj add msg tostring if invocationinfo positionmessage obj add exception invocationinfo positionmessage elseif scriptstacktrace obj add exception scriptstacktrace try obj add error record convertto json convertfrom json catch echo obj convertto json compress depth exit finally remove item c users vagrant appdata local temp ansible tmp force recurse erroraction silentlycontinue an exception occurred during task execution the full traceback is at c users vagrant appdata local temp ansible tmp win robocopy char set attr result win robocopy changed changed fatal failed changed false failed true invocation module name win robocopy msg the variable changed cannot be retrieved because it has not been set i can do the pr if you want | 1
3,984 | 18,358,667,981 | IssuesEvent | 2021-10-08 22:52:48 | pmqueiroz/mask-wizard | https://api.github.com/repos/pmqueiroz/mask-wizard | opened | Create mask wizard repl poc | enhancement Maintainers Only | ### Preliminary checks
- [X] I've checked that there aren't [**other open issues**](https://github.com/pmqueiroz/mask-wizard/issues?q=is%3Aissue) on the same topic.
- [X] I want to work on this.
### Describe the problem requiring a solution

### Describe the possible solution
Create a live code playground poc to put on index page on mask wizard website
references:
* https://babeljs.io/
* https://ramdajs.com/repl/
### Additional info
might help:
* https://pusher.com/tutorials/code-playground-react/
* https://nodejs.org/api/repl.html | True | Create mask wizard repl poc - ### Preliminary checks
- [X] I've checked that there aren't [**other open issues**](https://github.com/pmqueiroz/mask-wizard/issues?q=is%3Aissue) on the same topic.
- [X] I want to work on this.
### Describe the problem requiring a solution

### Describe the possible solution
Create a live code playground poc to put on index page on mask wizard website
references:
* https://babeljs.io/
* https://ramdajs.com/repl/
### Additional info
might help:
* https://pusher.com/tutorials/code-playground-react/
* https://nodejs.org/api/repl.html | main | create mask wizard repl poc preliminary checks i ve checked that there aren t on the same topic i want to work on this describe the problem requiring a solution describe the possible solution create a live code playground poc to put on index page on mask wizard website references additional info might help | 1 |
5,599 | 28,041,771,597 | IssuesEvent | 2023-03-28 19:06:30 | OpenRefine/OpenRefine | https://api.github.com/repos/OpenRefine/OpenRefine | opened | Seemingly dead frontend code referring to create-project-from-upload command | enhancement UI maintainability | The following reference to the `create-project-from-upload` command seems to be dead code.
https://github.com/OpenRefine/OpenRefine/blob/76152500a5af8c38b88d34e15b8b01fcf6d0da6e/main/webapp/modules/core/scripts/index/open-project-ui.js#L316-L344
This request is bound to an event listener on ` #upload-file-button`, but I cannot find any such element in our code base.
https://github.com/OpenRefine/OpenRefine/blob/76152500a5af8c38b88d34e15b8b01fcf6d0da6e/main/webapp/modules/core/scripts/index/open-project-ui.js#L42-L58
The other elements whose values are encoded in the request also do not seem to exist anymore.
One should double check that this is not needed and remove this code (and any other parts that are also never executed anymore).
| True | Seemingly dead frontend code refering to create-project-from-upload command - The following reference to the `create-project-from-upload` command seems to be dead code.
https://github.com/OpenRefine/OpenRefine/blob/76152500a5af8c38b88d34e15b8b01fcf6d0da6e/main/webapp/modules/core/scripts/index/open-project-ui.js#L316-L344
This request is bound to an event listener on ` #upload-file-button`, but I cannot find any such element in our code base.
https://github.com/OpenRefine/OpenRefine/blob/76152500a5af8c38b88d34e15b8b01fcf6d0da6e/main/webapp/modules/core/scripts/index/open-project-ui.js#L42-L58
The other elements whose values are encoded in the request also do not seem to exist anymore.
One should double check that this is not needed and remove this code (and any other parts that are also never executed anymore).
| main | seemingly dead frontend code referring to create project from upload command the following reference to the create project from upload command seems to be dead code this request is bound to an event listener on upload file button but i cannot find any such element in our code base the other elements whose values are encoded in the request also do not seem to exist anymore one should double check that this is not needed and remove this code and any other parts that are also never executed anymore | 1
4,382 | 22,301,909,149 | IssuesEvent | 2022-06-13 09:28:46 | mozilla/foundation.mozilla.org | https://api.github.com/repos/mozilla/foundation.mozilla.org | opened | Datalayer changes | engineering Maintain GA4 | ### Describe the feature request
A clear and concise description of your feature request
### Screenshots / mock-ups
If you like, you can copy-paste, or drag-and-drop, a screenshot of the thing you'd like to see improved. Bonus points for faking what you'd like it to look like!
### Additional context
Add any other context that explains about your use-case that made you go "I really wish we had..."
| True | Datalayer changes - ### Describe the feature request
A clear and concise description of your feature request
### Screenshots / mock-ups
If you like, you can copy-paste, or drag-and-drop, a screenshot of the thing you'd like to see improved. Bonus points for faking what you'd like it to look like!
### Additional context
Add any other context that explains about your use-case that made you go "I really wish we had..."
| main | datalayer changes describe the feature request a clear and concise description of your feature request screenshots mock ups if you like you can copy paste or drag and drop a screenshot of the thing you d like to see improved bonus points for faking what you d like it to look like additional context add any other context that explains about your use case that made you go i really wish we had | 1