Unnamed: 0 int64 1 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 7 112 | repo_url stringlengths 36 141 | action stringclasses 3 values | title stringlengths 3 438 | labels stringlengths 4 308 | body stringlengths 7 254k | index stringclasses 7 values | text_combine stringlengths 96 254k | label stringclasses 2 values | text stringlengths 96 246k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
52,484 | 6,258,606,211 | IssuesEvent | 2017-07-14 15:58:24 | Microsoft/vscode | https://api.github.com/repos/Microsoft/vscode | opened | Test: Multi Root Workspaces | testplan-item | Test for: Multi Root Workspaces
Complexity: 5
- [ ] Windows
- [ ] Linux
- [ ] macOS
In this milestone we rewrote how multi root workspaces surface in VS Code. The previous solution with having a `workspace` setting in user settings is obsolete (there is no migration). See https://github.com/Microsoft/vscode/issues/396#issuecomment-315079618 for more details on our approach.
`Basics`
Most workspace related operations center around a new submenu under the file menu:

Try to work with multi root workspaces and play around with the available actions. Transition between empty workspaces, single folder workspaces and multi-root workspaces. Some things to keep an eye on:
* explorer and search operations work as before in any of the contexts
* you can save "Untitled Workspace" to some location on disk and open them from there
* you can switch workspaces via `File > Open Recent` as well as the recently opened picker (F1 `>open recent`)
* you can add and remove root folders from a multi-root workspace
* you see that you are inside a workspace by a new status bar color as well as the workspace name showing up in the explorer section for folders
* workspaces that are opened will restore in the same way as folders do (you can set `window.restoreWindows`: all to restore multiple windows)
* debugging (node.js and extension host debugging) works as before
`Data`
Once you are in a workspace context, we use the workspace identifier to associate:
* UI state (e.g. the files you have opened as tabs)
* hot-exit state (e.g. dirty files you left dirty when quitting)
* extension storage (a location on disk where extensions can store data via the [`ExtensionContext.storagePath`](https://github.com/Microsoft/vscode/blob/master/src/vs/vscode.d.ts#L3513) API)
Verify:
* UI state you have inside a workspace is restored next time you open it
* dirty files are restored when you quit and reopen the workspace
* extensions have a stable `ExtensionContext.storagePath` location per workspace
`Settings`
Once you are in a workspace context, workspace settings are no longer stored within the `.vscode` folder, but within the workspace file. Verify that you can still define workspace settings when you are in a workspace context and that settings apply as usual. Also verify that folder settings (the ones we do support, e.g. editor settings) still apply per resource you open of that folder.
| 1.0 | Test: Multi Root Workspaces - Test for: Multi Root Workspaces
Complexity: 5
- [ ] Windows
- [ ] Linux
- [ ] macOS
In this milestone we rewrote how multi root workspaces surface in VS Code. The previous solution with having a `workspace` setting in user settings is obsolete (there is no migration). See https://github.com/Microsoft/vscode/issues/396#issuecomment-315079618 for more details on our approach.
`Basics`
Most workspace related operations center around a new submenu under the file menu:

Try to work with multi root workspaces and play around with the available actions. Transition between empty workspaces, single folder workspaces and multi-root workspaces. Some things to keep an eye on:
* explorer and search operations work as before in any of the contexts
* you can save "Untitled Workspace" to some location on disk and open them from there
* you can switch workspaces via `File > Open Recent` as well as the recently opened picker (F1 `>open recent`)
* you can add and remove root folders from a multi-root workspace
* you see that you are inside a workspace by a new status bar color as well as the workspace name showing up in the explorer section for folders
* workspaces that are opened will restore in the same way as folders do (you can set `window.restoreWindows`: all to restore multiple windows)
* debugging (node.js and extension host debugging) works as before
`Data`
Once you are in a workspace context, we use the workspace identifier to associate:
* UI state (e.g. the files you have opened as tabs)
* hot-exit state (e.g. dirty files you left dirty when quitting)
* extension storage (a location on disk where extensions can store data via the [`ExtensionContext.storagePath`](https://github.com/Microsoft/vscode/blob/master/src/vs/vscode.d.ts#L3513) API)
Verify:
* UI state you have inside a workspace is restored next time you open it
* dirty files are restored when you quit and reopen the workspace
* extensions have a stable `ExtensionContext.storagePath` location per workspace
`Settings`
Once you are in a workspace context, workspace settings are no longer stored within the `.vscode` folder, but within the workspace file. Verify that you can still define workspace settings when you are in a workspace context and that settings apply as usual. Also verify that folder settings (the ones we do support, e.g. editor settings) still apply per resource you open of that folder.
| non_main | test multi root workspaces test for multi root workspaces complexity windows linux macos in this milestone we rewrote how multi root workspaces surface in vs code the previous solution with having a workspace setting in user settings is obsolete there is no migration see for more details on our approach basics most workspace related operations center around a new submenu under the file menu try to work with multi root workspaces and play around with the available actions transition between empty workspaces single folder workspaces and multi root workspaces some things to keep an eye on explorer and search operations work as before in any of the contexts you can save untitled workspace to some location on disk and open them from there you can switch workspaces via file open recent as well as the recently opened picker open recent you can add and remove root folders from a multi root workspace you see that you are inside a workspace by a new status bar color as well as the workspace name showing up in the explorer section for folders workspaces that are opened will restore in the same way as folder do you can set window restorewindows all to restore multiple windows debugging node js and extension host debugging work as before data once you are in a workspace context we use the workspaces identifier to associate ui state e g the files you have opened as tabs hot exit state e g dirty files you left dirty when quitting extension storage a location on disk where extensions can store data via the api verify ui state you have inside a workspace is restored next time you open it dirty files are restored when you quit and reopen the workspace extensions have a stable extensioncontext storagepath location per workspace settings once you are in a workspace context workspace settings are no longer stored within the vscode folder but within the workspace file verify that you can still define workspace settings when you are in a workspace context and that settings apply as usual also verify that folder settings the ones we do support e g editor settings still apply per resource you open of that folder | 0 |
23,598 | 4,958,082,952 | IssuesEvent | 2016-12-02 08:22:59 | Freeyourgadget/Gadgetbridge | https://api.github.com/repos/Freeyourgadget/Gadgetbridge | closed | "Acquire location" doesn't work | documentation not a bug | I'm on a Samsung Galaxy Note N7000 running CM 11-20160815-nightly (CM11 was never fully released for this phone) - Android 4.4.4. GPS works fine in other apps (e.g. maps.me, osmtracker etc.) But in the pebble settings section of GB, pressing the "acquire location" button doesn't seem to do anything - lat and lon stayed at 0 and 0, and I don't see the GPS icon showing up in the notification area, like it does when any other app accesses the GPS.
I worked around it by entering my lat/lon manually. It didn't seem to affect sunrise/sunset times right away (not sure about that though) so I rebooted. Then it affected tomorrow's sunrise/sunset times on the timeline, but not today's sunset time. | 1.0 | "Acquire location" doesn't work - I'm on a Samsung Galaxy Note N7000 running CM 11-20160815-nightly (CM11 was never fully released for this phone) - Android 4.4.4. GPS works fine in other apps (e.g. maps.me, osmtracker etc.) But in the pebble settings section of GB, pressing the "acquire location" button doesn't seem to do anything - lat and lon stayed at 0 and 0, and I don't see the GPS icon showing up in the notification area, like it does when any other app accesses the GPS.
I worked around it by entering my lat/lon manually. It didn't seem to affect sunrise/sunset times right away (not sure about that though) so I rebooted. Then it affected tomorrow's sunrise/sunset times on the timeline, but not today's sunset time. | non_main | acquire location doesn t work i m on a samsung galaxy note running cm nightly was never fully released for this phone android gps works fine in other apps e g maps me osmtracker etc but in the pebble settings section of gb pressing the acquire location button doesn t seem to do anything lat and lon stayed at and and i don t see the gps icon showing up in the notification area like it does when any other app accesses the gps i worked around it by entering my lat lon manually it didn t seem to affect sunrise sunset times right away not sure about that though so i rebooted then it affected tomorrow s sunrise sunset times on the timeline but not today s sunset time | 0 |
548 | 3,984,358,139 | IssuesEvent | 2016-05-07 04:22:30 | duckduckgo/zeroclickinfo-goodies | https://api.github.com/repos/duckduckgo/zeroclickinfo-goodies | opened | Conversions: No teaspoon/tablespoon support | Maintainer Input Requested | Queries like `teaspoons in a tablespoon` or `6 tsp in tbsp` return no Instant Answer.
I'm not sure where in `Conversions.pm` the possible units of measurement are defined, except for temperatures in `sub convert_temperatures`. Would be glad to help implement this if someone could help me follow the breadcrumbs.
------
IA Page: http://duck.co/ia/view/conversions
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @mintsoft | True | Conversions: No teaspoon/tablespoon support - Queries like `teaspoons in a tablespoon` or `6 tsp in tbsp` return no Instant Answer.
I'm not sure where in `Conversions.pm` the possible units of measurement are defined, except for temperatures in `sub convert_temperatures`. Would be glad to help implement this if someone could help me follow the breadcrumbs.
------
IA Page: http://duck.co/ia/view/conversions
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @mintsoft | main | conversions no teaspoon tablespoon support queries like teaspoons in a tablespoon or tsp in tbsp return no instant answer i m not sure where in conversions pm the possible units of measurement are defined except for temperatures in sub convert temperatures would be glad to help implement this if someone could help me follow the breadcrumbs ia page mintsoft | 1 |
1,473 | 6,396,817,982 | IssuesEvent | 2017-08-04 16:24:44 | duckduckgo/zeroclickinfo-goodies | https://api.github.com/repos/duckduckgo/zeroclickinfo-goodies | closed | Tips: Ensure a currency is always specified | Low-Hanging Fruit Maintainer Input Requested Triggering | Currently the Tips IA can handle queries such as `25% of 500`, which it shouldn't.
We should make sure that a currency is always present so we are only doing tip calculations, not arbitrary percentages. If the word 'tip' is present we can probably assume this is what they want.
---
IA Page: http://duck.co/ia/view/tips
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @mattlehning
| True | Tips: Ensure a currency is always specified - Currently the Tips IA can handle queries such as `25% of 500`, which it shouldn't.
We should make sure that a currency is always present so we are only doing tip calculations, not arbitrary percentages. If the word 'tip' is present we can probably assume this is what they want.
---
IA Page: http://duck.co/ia/view/tips
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @mattlehning
| main | tips ensure a currency is always specified currently the tips ia can handle queries such as of which it shouldn t we should make sure that a currency is always present so we are only doing tip calculations not arbitrary percentages if the word tip is present we can probably assume this is what they want ia page mattlehning | 1 |
2,582 | 8,774,513,708 | IssuesEvent | 2018-12-18 20:02:49 | arcticicestudio/nord-docs | https://api.github.com/repos/arcticicestudio/nord-docs | closed | Google Analytics | context-workflow scope-maintainability scope-quality scope-stability type-feature | <p align="center"><img src="https://user-images.githubusercontent.com/7836623/50167256-14bbd380-02e9-11e9-8aca-a31baf745cd8.png" width="20%"/></p>
> Associated epic: #86
This issue documents the implementation of [Google Analytics][ga-mark] like documented in the [“Analytics & Statistics” design concept][gh-86].
<p align="center"><img src="https://user-images.githubusercontent.com/7836623/50167593-c824c800-02e9-11e9-9b70-84b6fc40c05f.png " width="20%"/></p>
The main tool to collect and analyze data will be [Google Analytics][ga-mark]. It is a stable and proven service with a lot of useful configurable features and a reliable persistence.
_Nord Docs_ will use the latest and recommended [gtag.js][gdev-ga-gtag] library that optionally allows, next to Google Analytics itself, the integration of almost all Google Marketing services like e.g. [Google Tag Manager][gdev-tm].
The library will be integrated through [gatsby-plugin-google-gtag][gh-gb-p-ga-tag].
## Tasks
- [x] Install required packages:
- [gatsby-plugin-google-gtag][npm-gp-gtag]
- [x] Implement required internal constants.
- [x] Implement the plugin configuration.
[g-sup-anonip]: https://support.google.com/analytics/answer/2763052
[gh-gb-p-ga-tag]: https://github.com/gatsbyjs/gatsby/tree/master/packages/gatsby-plugin-google-gtag
[gh-86]: https://github.com/arcticicestudio/nord-docs/issues/86
[ga-mark]: https://marketingplatform.google.com/about/analytics
[gdev-ga-gtag]: https://developers.google.com/analytics/devguides/collection/gtagjs
[gdev-tm]: https://developers.google.com/tag-manager
[wiki-a]: https://en.wikipedia.org/wiki/Analytics
[wiki-s]: https://en.wikipedia.org/wiki/Statistics
[wiki-dnt]: https://en.wikipedia.org/wiki/Do_Not_Track
[npm-gp-gtag]: https://www.npmjs.com/package/gatsby-plugin-google-gtag
| True | Google Analytics - <p align="center"><img src="https://user-images.githubusercontent.com/7836623/50167256-14bbd380-02e9-11e9-8aca-a31baf745cd8.png" width="20%"/></p>
> Associated epic: #86
This issue documents the implementation of [Google Analytics][ga-mark] like documented in the [“Analytics & Statistics” design concept][gh-86].
<p align="center"><img src="https://user-images.githubusercontent.com/7836623/50167593-c824c800-02e9-11e9-9b70-84b6fc40c05f.png " width="20%"/></p>
The main tool to collect and analyze data will be [Google Analytics][ga-mark]. It is a stable and proven service with a lot of useful configurable features and a reliable persistence.
_Nord Docs_ will use the latest and recommended [gtag.js][gdev-ga-gtag] library that optionally allows, next to Google Analytics itself, the integration of almost all Google Marketing services like e.g. [Google Tag Manager][gdev-tm].
The library will be integrated through [gatsby-plugin-google-gtag][gh-gb-p-ga-tag].
## Tasks
- [x] Install required packages:
- [gatsby-plugin-google-gtag][npm-gp-gtag]
- [x] Implement required internal constants.
- [x] Implement the plugin configuration.
[g-sup-anonip]: https://support.google.com/analytics/answer/2763052
[gh-gb-p-ga-tag]: https://github.com/gatsbyjs/gatsby/tree/master/packages/gatsby-plugin-google-gtag
[gh-86]: https://github.com/arcticicestudio/nord-docs/issues/86
[ga-mark]: https://marketingplatform.google.com/about/analytics
[gdev-ga-gtag]: https://developers.google.com/analytics/devguides/collection/gtagjs
[gdev-tm]: https://developers.google.com/tag-manager
[wiki-a]: https://en.wikipedia.org/wiki/Analytics
[wiki-s]: https://en.wikipedia.org/wiki/Statistics
[wiki-dnt]: https://en.wikipedia.org/wiki/Do_Not_Track
[npm-gp-gtag]: https://www.npmjs.com/package/gatsby-plugin-google-gtag
| main | google analytics associated epic this issue documents the implementation of like documented in the the main tool to collect and analyze data will be it is a stable and proven service with a lot of useful configurable features and a reliable persistence nord docs will use the latest and recommended library that optionally allows next to google analytics itself the integration of almost all google marketing services like e g the library will be integrated through tasks install required packages implement required internal constants implement the plugin configuration | 1 |
169,726 | 20,841,888,408 | IssuesEvent | 2022-03-21 01:46:24 | ekediala/inventory | https://api.github.com/repos/ekediala/inventory | opened | CVE-2022-24772 (High) detected in node-forge-0.8.2.tgz | security vulnerability | ## CVE-2022-24772 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>node-forge-0.8.2.tgz</b></p></summary>
<p>JavaScript implementations of network transports, cryptography, ciphers, PKI, message digests, and various utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-forge/-/node-forge-0.8.2.tgz">https://registry.npmjs.org/node-forge/-/node-forge-0.8.2.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/node-forge/package.json</p>
<p>
Dependency Hierarchy:
- laravel-mix-4.1.4.tgz (Root Library)
- webpack-dev-server-3.8.1.tgz
- selfsigned-1.10.6.tgz
- :x: **node-forge-0.8.2.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Forge (also called `node-forge`) is a native implementation of Transport Layer Security in JavaScript. Prior to version 1.3.0, RSA PKCS#1 v1.5 signature verification code does not check for tailing garbage bytes after decoding a `DigestInfo` ASN.1 structure. This can allow padding bytes to be removed and garbage data added to forge a signature when a low public exponent is being used. The issue has been addressed in `node-forge` version 1.3.0. There are currently no known workarounds.
<p>Publish Date: 2022-03-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-24772>CVE-2022-24772</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-24772">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-24772</a></p>
<p>Release Date: 2022-03-18</p>
<p>Fix Resolution: node-forge - 1.3.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2022-24772 (High) detected in node-forge-0.8.2.tgz - ## CVE-2022-24772 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>node-forge-0.8.2.tgz</b></p></summary>
<p>JavaScript implementations of network transports, cryptography, ciphers, PKI, message digests, and various utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-forge/-/node-forge-0.8.2.tgz">https://registry.npmjs.org/node-forge/-/node-forge-0.8.2.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/node-forge/package.json</p>
<p>
Dependency Hierarchy:
- laravel-mix-4.1.4.tgz (Root Library)
- webpack-dev-server-3.8.1.tgz
- selfsigned-1.10.6.tgz
- :x: **node-forge-0.8.2.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Forge (also called `node-forge`) is a native implementation of Transport Layer Security in JavaScript. Prior to version 1.3.0, RSA PKCS#1 v1.5 signature verification code does not check for tailing garbage bytes after decoding a `DigestInfo` ASN.1 structure. This can allow padding bytes to be removed and garbage data added to forge a signature when a low public exponent is being used. The issue has been addressed in `node-forge` version 1.3.0. There are currently no known workarounds.
<p>Publish Date: 2022-03-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-24772>CVE-2022-24772</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-24772">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-24772</a></p>
<p>Release Date: 2022-03-18</p>
<p>Fix Resolution: node-forge - 1.3.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve high detected in node forge tgz cve high severity vulnerability vulnerable library node forge tgz javascript implementations of network transports cryptography ciphers pki message digests and various utilities library home page a href path to dependency file package json path to vulnerable library node modules node forge package json dependency hierarchy laravel mix tgz root library webpack dev server tgz selfsigned tgz x node forge tgz vulnerable library vulnerability details forge also called node forge is a native implementation of transport layer security in javascript prior to version rsa pkcs signature verification code does not check for tailing garbage bytes after decoding a digestinfo asn structure this can allow padding bytes to be removed and garbage data added to forge a signature when a low public exponent is being used the issue has been addressed in node forge version there are currently no known workarounds publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution node forge step up your open source security game with whitesource | 0 |
101,867 | 31,717,746,432 | IssuesEvent | 2023-09-10 03:31:33 | TABConf/2023.tabconf.com | https://api.github.com/repos/TABConf/2023.tabconf.com | closed | TABconf hackathon by emeralize x PlebLab | Builder Days Project Accepted SM |
`author: ThrillerX` `author: Santos Hernandez`
# Description
### What is this project about? Please give us as many details as possible.
The purpose of the Hackathon is to promote innovation in the fields of Bitcoin, Lightning Network, Nostr, and AI. Our goal is to facilitate an environment that encourages collaboration, education, and the creation of new and exciting projects. We aim to grow the Lightning Network and Nostr communities by providing a platform for developers to showcase their skills and share their work. Above all, we want to have fun and celebrate bleeding-edge technology!
### Is it a FOSS or Open Source project?
Open Source, we want to work with the community on this to get their feedback and incorporate it.
### What would an attendee learn from visiting this projects table at Builder Days?
How to build Lightning Apps and then building a brand new project!
### Is there anything people should read up on before finding the table at Builder Day?
Yes, we can hook up folks with lots of course content to prepare them from nocode to writing a LN script to a full blown Lightning or Nostr App. It all just depends on how much time you want to put into it and what you want to do as well as how much experience you have.
### Relevant Links
Courses: https://emeralize.app/marketplace/
Docs: https://docs.zebedee.io
Book: https://book.pleblab.com
# Project Details
### Who will run the table? Are those people current contributors or maintainers who can answer questions and get people onboarded?
- Car Gonzalez (https://twitter.com/ThrillerX_)
- Santos Hernandez (https://twitter.com/5antoshernandez)
# Prize Pool
10K in total prizes. TBD on specifics.
# Strategy
- Focused categories for the participants. This will ensure that there is a fluid theme throughout the hackathon.
- Bitcoin
- Lightning Network
- AI (ChatGPT)
- Nostr
- Educational resources to prepare and market in advance to adequately prepare folks.
- Buy-in: 25,000 sats.
- Only new projects being worked on, the true essence and purpose of a hackathon.
# Agenda / Schedule
1. Start at the beginning of the conference - Friday
2. Pitch ideas
3. Team selection
4. Execution
5. Pitches
6. Judging
7. Announcements of the top 3 winners on mainstage.
# Value Proposition
- Promotion of the teams entering
- Promotion of the winners
- Interviews
- Trophy
- Nostr badge
- Podcasts
- Articles
- Promotion of projects
# What else you'll win
- Trophy
- Nostr badge issuance
## People
- Supertestnet
- Austin
- Car
- Santos
- Jure Grahek
## Company Sponsors
We're currently looking for sponsorship
## Space
- Full room
- Mainstage for announcement of winners
# Questions
- 25,000 sats buy-in — Would you be interested in this? The total could be included as a bonus earnings. It puts some skin in the game.
So what does everyone think? We'd love to make this happen! Let us know below!
| 1.0 | TABconf hackathon by emeralize x PlebLab -
`author: ThrillerX` `author: Santos Hernandez`
# Description
### What is this project about? Please give us as many details as possible.
The purpose of the Hackathon is to promote innovation in the fields of Bitcoin, Lightning Network, Nostr, and AI. Our goal is to facilitate an environment that encourages collaboration, education, and the creation of new and exciting projects. We aim to grow the Lightning Network and Nostr communities by providing a platform for developers to showcase their skills and share their work. Above all, we want to have fun and celebrate bleeding-edge technology!
### Is it a FOSS or Open Source project?
Open Source, we want to work with the community on this to get their feedback and incorporate it.
### What would an attendee learn from visiting this projects table at Builder Days?
How to build Lightning Apps and then building a brand new project!
### Is there anything people should read up on before finding the table at Builder Day?
Yes, we can hook up folks with lots of course content to prepare them from nocode to writing a LN script to a full blown Lightning or Nostr App. It all just depends on how much time you want to put into it and what you want to do as well as how much experience you have.
### Relevant Links
Courses: https://emeralize.app/marketplace/
Docs: https://docs.zebedee.io
Book: https://book.pleblab.com
# Project Details
### Who will run the table? Are those people current contributors or maintainers who can answer questions and get people onboarded?
- Car Gonzalez (https://twitter.com/ThrillerX_)
- Santos Hernandez (https://twitter.com/5antoshernandez)
# Prize Pool
10K in total prizes. TBD on specifics.
# Strategy
- Focused categories for the participants. This will ensure that there is a fluid theme throughout the hackathon.
- Bitcoin
- Lightning Network
- AI (ChatGPT)
- Nostr
- Educational resources to prepare and market in advance to adequately prepare folks.
- Buy-in: 25,000 sats.
- Only new projects being worked on, the true essence and purpose of a hackathon.
# Agenda / Schedule
1. Start at the beginning of the conference - Friday
2. Pitch ideas
3. Team selection
4. Execution
5. Pitches
6. Judging
7. Announcements of the top 3 winners on mainstage.
# Value Proposition
- Promotion of the teams entering
- Promotion of the winners
- Interviews
- Trophy
- Nostr badge
- Podcasts
- Articles
- Promotion of projects
# What else you'll win
- Trophy
- Nostr badge issuance
## People
- Supertestnet
- Austin
- Car
- Santos
- Jure Grahek
## Company Sponsors
We're currently looking for sponsorship
## Space
- Full room
- Mainstage for announcement of winners
# Questions
- 25,000 sats buy-in — Would you be interested in this? The total could be included as a bonus earnings. It puts some skin in the game.
So what does everyone think? We'd love to make this happen! Let us know below!
| non_main | tabconf hackathon by emeralize x pleblab author thrillerx author santos hernandez description what is this project about please give us as many details as possible the purpose of the hackathon is to promote innovation in the fields of bitcoin lightning network nostr and ai our goal is to facilitate an environment that encourages collaboration education and the creation of new and exciting projects we aim to grow the lightning network and nostr communities by providing a platform for developers to showcase their skills and share their work above all we want to have fun and celebrate bleeding edge technology is it a foss or open source project open source we want to work with the community on this to get their feedback and incorporate it what would an attendee learn from visiting this projects table at builder days how to build lightning apps and then building a brand new project is there anything people should read up on before finding the table at builder day yes we can hook up folks with lots of course content to prepare them from nocode to writing a ln script to a full blown lightning or nostr app it all just depends on how much time you want to put into it and what you want to do as well as how much experience you have relevant links courses docs book project details who will run the table are those people current contributors or maintainers who can answer questions and get people onboarded car gonzalez santos hernandez prize pool in total prizes tbd on specifics strategy focused categories for the participants this will ensure that there is a fluid theme throughout the hackathon bitcoin lightning network ai chatgpt nostr educational resources to prepare and market in advance to adequately prepare folks buy in sats only new projects being worked on the true essence and purpose of a hackathon agenda schedule start at the beginning of the conference friday pitch ideas team selection execution pitches judging announcements of the top winners on mainstage value proposition promotion of the teams entering promotion of the winners interviews trophy nostr badge podcasts articles promotion of projects what else you ll win trophy nostr badge issuance people supertestnet austin car santos jure grahek company sponsors we re currently looking for sponsorship space full room mainstage for announcement of winners questions sats buy in — would you be interested in this the total could be included as a bonus earnings it puts some skin in the game so what does everyone think we d love to make this happen let us know below | 0 |
1,871 | 6,577,493,724 | IssuesEvent | 2017-09-12 01:18:10 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | ec2_group check mode is inaccurate | affects_2.0 aws bug_report cloud feature_idea waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
ec2_group
##### ANSIBLE VERSION
```
ansible 2.0.1.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
N/A
##### SUMMARY
When running ansible with `--check`, all security groups are listed as having changes.
##### STEPS TO REPRODUCE
You should be able to reproduce this with [the example task](https://docs.ansible.com/ansible/ec2_group_module.html#examples), or anything simpler.
##### EXPECTED RESULTS
I expected the comparison to be made between the local declarations and the currently existing definitions in AWS, and only those that would normally be changed would show changes. Additionally, it'd be really nice if `--diff` produced any sort of output indicating what the diff between them is.
##### ACTUAL RESULTS
Ansible reports changes to every security group, with no additional information. Reading through [the module](https://github.com/ansible/ansible-modules-core/blob/devel/cloud/amazon/ec2_group.py), there are a bunch of places where `check_mode` just causes a conditional to be skipped, and so probably some of that logic needs to be moved out of the conditional.
| True | ec2_group check mode is inaccurate - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
ec2_group
##### ANSIBLE VERSION
```
ansible 2.0.1.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
N/A
##### SUMMARY
When running ansible with `--check`, all security groups are listed as having changes.
##### STEPS TO REPRODUCE
You should be able to reproduce this with [the example task](https://docs.ansible.com/ansible/ec2_group_module.html#examples), or anything simpler.
##### EXPECTED RESULTS
I expected the comparison to be made between the local declarations and the currently existing definitions in AWS, and only those that would normally be changed would show changes. Additionally, it'd be really nice if `--diff` produced any sort of output indicating what the diff between them is.
##### ACTUAL RESULTS
Ansible reports changes to every security group, with no additional information. Reading through [the module](https://github.com/ansible/ansible-modules-core/blob/devel/cloud/amazon/ec2_group.py), there are a bunch of places where `check_mode` just causes a conditional to be skipped, and so probably some of that logic needs to be moved out of the conditional.
| main | group check mode is inaccurate issue type feature idea component name group ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment n a summary when running ansible with check all security groups are listed as having changes steps to reproduce you should be able to reproduce this with or anything simpler expected results i expected the comparison to be made between the local declarations and the currently existing definitions in aws and only those that would normally be changed would show changes additionally it d be really nice if diff produced any sort of output indicating what the diff between them is actual results ansible reports changes to every security group with no additional information reading through there are a bunch of places where check mode just causes a conditional to be skipped and so probably some of that logic needs to be moved out of the conditional | 1 |
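The ec2_group report above comes down to a general pattern for Ansible modules that declare `supports_check_mode=True`: the comparison between current and desired state has to run in every mode, and only the mutating calls should be guarded by `module.check_mode`. Below is a minimal, hedged sketch of that pattern; the argument spec and the `describe_group`/`apply_changes` helpers are invented placeholders for illustration, not code from the actual ec2_group module.

```python
#!/usr/bin/python
# Minimal sketch of check-mode handling in a custom Ansible module.
# describe_group() and apply_changes() are hypothetical stand-ins for the
# AWS calls a real module such as ec2_group would make.
from ansible.module_utils.basic import AnsibleModule


def describe_group(name):
    # Placeholder: a real module would query the EC2 API here (read-only).
    return {"name": name, "rules": []}


def apply_changes(name, rules):
    # Placeholder: a real module would create or modify the group here.
    pass


def main():
    module = AnsibleModule(
        argument_spec=dict(
            name=dict(type="str", required=True),
            rules=dict(type="list", elements="dict", default=[]),
        ),
        supports_check_mode=True,  # advertise --check support
    )

    current = describe_group(module.params["name"])
    desired = module.params["rules"]
    changed = current["rules"] != desired  # the diff runs in every mode

    # Only the mutating call is skipped under --check; the comparison above
    # is what keeps the changed/unchanged report accurate.
    if changed and not module.check_mode:
        apply_changes(module.params["name"], desired)

    module.exit_json(changed=changed)


if __name__ == "__main__":
    main()
```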
2,966 | 10,651,165,243 | IssuesEvent | 2019-10-17 09:47:55 | valbergconsulting/bitcore-abc | https://api.github.com/repos/valbergconsulting/bitcore-abc | opened | Remove all use of CAmount | maintainance | Upstream refactored `CAmount` into a class (`Amount`). Replacing all use of CAmount with Amount would save time and reduce risk of bugs when merging upstream changes. | True | Remove all use of CAmount - Upstream refactored `CAmount` into a class (`Amount`). Replacing all use of CAmount with Amount would save time and reduce risk of bugs when merging upstream changes. | main | remove all use of camount upstream refactored camount into a class amount replacing all use of camount with amount would save time and reduce risk of bugs when merging upstream changes | 1 |
1,099 | 4,970,653,581 | IssuesEvent | 2016-12-05 16:33:55 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | ec2_elb_facts should support check mode | affects_2.2 aws bug_report cloud waiting_on_maintainer | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ec2_elb_facts
##### ANSIBLE VERSION
```
ansible 2.2.0.0
config file =
configured module search path = Default w/o overrides
```
##### CONFIGURATION
*N/A*
##### OS / ENVIRONMENT
*N/A*
##### SUMMARY
Since the `ec2_elb_facts` is strictly a read-only operation, it should support running with `--check`
##### STEPS TO REPRODUCE
```sh
ansible-playbook \
-i hosts \
-l my-elb-host \
ec2_elb_facts_check.yml \
-vv \
--check
```
```yaml
- hosts: all
connection: local
gather_facts: no
tasks:
- name: Collect ELB facts
ec2_elb_facts:
names: "my-elb"
region: "us-east-1"
register: elbfacts
tags: always
```
##### EXPECTED RESULTS
It would be expected that `ec2_elb_facts` would still fetch the instance information. This being omitted, prevents the ability to enumerate ELB instance hosts, dynamically add them to the inventory, and then conduct `--check` mode against what would *actually* be getting done.
##### ACTUAL RESULTS
```
TASK [Collect ELB facts] ***********************************************
task path: /Projects/ec2_elb_facts_check.yml:6
skipping: [my-elb-host] => {
"changed": false,
"skipped": true
}
MSG:
remote module (ec2_elb_facts) does not support check mode
``` | True | ec2_elb_facts should support check mode - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ec2_elb_facts
##### ANSIBLE VERSION
```
ansible 2.2.0.0
config file =
configured module search path = Default w/o overrides
```
##### CONFIGURATION
*N/A*
##### OS / ENVIRONMENT
*N/A*
##### SUMMARY
Since the `ec2_elb_facts` is strictly a read-only operation, it should support running with `--check`
##### STEPS TO REPRODUCE
```sh
ansible-playbook \
-i hosts \
-l my-elb-host \
ec2_elb_facts_check.yml \
-vv \
--check
```
```yaml
- hosts: all
connection: local
gather_facts: no
tasks:
- name: Collect ELB facts
ec2_elb_facts:
names: "my-elb"
region: "us-east-1"
register: elbfacts
tags: always
```
##### EXPECTED RESULTS
It would be expected that `ec2_elb_facts` would still fetch the instance information. This being omitted, prevents the ability to enumerate ELB instance hosts, dynamically add them to the inventory, and then conduct `--check` mode against what would *actually* be getting done.
##### ACTUAL RESULTS
```
TASK [Collect ELB facts] ***********************************************
task path: /Projects/ec2_elb_facts_check.yml:6
skipping: [my-elb-host] => {
"changed": false,
"skipped": true
}
MSG:
remote module (ec2_elb_facts) does not support check mode
``` | main | elb facts should support check mode issue type bug report component name elb facts ansible version ansible config file configured module search path default w o overrides configuration n a os environment n a summary since the elb facts is strictly a read only operation it should support running with check steps to reproduce sh ansible playbook i hosts l my elb host elb facts check yml vv check yaml hosts all connection local gather facts no tasks name collect elb facts elb facts names my elb region us east register elbfacts tags always expected results it would be expected that elb facts would still fetch the instance information this being omitted prevents the ability to enumerate elb instance hosts dynamically add them to the inventory and then conduct check mode against what would actually be getting done actual results task task path projects elb facts check yml skipping changed false skipped true msg remote module elb facts does not support check mode | 1 |
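The ec2_elb_facts request above is the simpler case: a facts module never mutates anything, so it can opt in to check mode unconditionally and always report `changed=False`. A short sketch of that shape follows; `lookup_elbs` and its return value are invented for illustration, not the real module's code.

```python
#!/usr/bin/python
# Sketch of a read-only "facts" module: nothing is mutated, so it behaves
# identically with and without --check.
from ansible.module_utils.basic import AnsibleModule


def lookup_elbs(names):
    # Placeholder for the read-only describe call a real module would make.
    return [{"name": n, "instances": []} for n in names]


def main():
    module = AnsibleModule(
        argument_spec=dict(names=dict(type="list", elements="str", default=[])),
        supports_check_mode=True,  # safe for a module that only reads
    )
    module.exit_json(changed=False, elbs=lookup_elbs(module.params["names"]))


if __name__ == "__main__":
    main()
```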
1,591 | 6,572,373,105 | IssuesEvent | 2017-09-11 01:48:39 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | bigip_selfip fails when using traffic_group parameter | affects_2.2 bug_report networking waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
F5 bigip (bigip_selfip)
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.2.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### F5 BIGIP LTM VERSION
```
Sys::Version
Main Package
Product BIG-IP
Version 12.1.1
Build 0.0.184
Edition Final
Date Thu Aug 11 17:09:01 PDT 2016
```
##### PYTHON VERSION
```
Python 2.7.12
```
##### CONFIGURATION
```
retry_files_enabled = False
host_key_checking=False
```
##### OS / ENVIRONMENT
Alpine 3.4 (Docker Container)
##### SUMMARY
<!--- Explain the problem briefly -->
If the `traffic_group` parameter in the `bigip_selfip` module is set to a valid traffic group that exists on the remote device such as `traffic-group-local-only` or `traffic-group-1`, the `bigip_selfip` module will always return the error: `The specified traffic group was not found`.
##### STEPS TO REPRODUCE
Create a task that uses the `bigip_selfip` module and specify a valid `traffic_group` parameter.
_NOTE: The default traffic group `traffic-group-local-only` will always exist, and the traffic group `traffic-group-1` will be automatically created during the process of configuring device HA through the HA Wizard._
```
- name: Assign Floating IP to external
bigip_selfip:
address: "1.1.1.1"
name: "external_floating"
netmask: "255.255.255.0"
password: "{{ bigip_password }}"
server: "{{ inventory_hostname }}"
traffic_group: "traffic-group-1"
user: "{{ bigip_username }}"
validate_certs: "{{ validate_certs }}"
vlan: "external"
```
##### EXPECTED RESULTS
It should set the self ip's traffic group to the valid traffic group specified.
##### ACTUAL RESULTS
The Ansible task fails with the following error: `The specified traffic group was not found`.
```
TASK [Assign Floating IP to internal] ******************************************
task path: /site/site.yaml:96
Using module file /site/library/bigip_selfip.py
<lbl11.example.com> ESTABLISH LOCAL CONNECTION FOR USER: root
<lbl11.example.com> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1474482164.33-122340317119314 `" && echo ansible-tmp-1474482164.33-122340317119314="` echo $HOME/.ansibl
e/tmp/ansible-tmp-1474482164.33-122340317119314 `" ) && sleep 0'
<lbl11.example.com> PUT /tmp/tmpo34T9a TO /root/.ansible/tmp/ansible-tmp-1474482164.33-122340317119314/bigip_selfip.py
<lbl11.example.com> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1474482164.33-122340317119314/ /root/.ansible/tmp/ansible-tmp-1474482164.33-122340317119314/bigip_selfip.py && sleep 0'
<lbl11.example.com> EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1474482164.33-122340317119314/bigip_selfip.py; rm -rf "/root/.ansible/tmp/ansible-tmp-1474482164.33-122340317119314/" >
/dev/null 2>&1 && sleep 0'
fatal: [lbl11.example.com]: FAILED! => {
"changed": false,
"failed": true,
"invocation": {
"module_args": {
"address": "10.11.50.9",
"allow_service": null,
"name": "internal_floating",
"netmask": "255.255.254.0",
"partition": "Common",
"password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"server": "lbl11.example.com",
"server_port": 443,
"state": "present",
"traffic_group": "traffic-group-1",
"user": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"validate_certs": false,
"vlan": "internal"
},
"module_name": "bigip_selfip"
},
"msg": "The specified traffic group was not found"
}
to retry, use: --limit @/site/site.retry
```
| True | bigip_selfip fails when using traffic_group parameter - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
F5 bigip (bigip_selfip)
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.2.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### F5 BIGIP LTM VERSION
```
Sys::Version
Main Package
Product BIG-IP
Version 12.1.1
Build 0.0.184
Edition Final
Date Thu Aug 11 17:09:01 PDT 2016
```
##### PYTHON VERSION
```
Python 2.7.12
```
##### CONFIGURATION
```
retry_files_enabled = False
host_key_checking=False
```
##### OS / ENVIRONMENT
Alpine 3.4 (Docker Container)
##### SUMMARY
<!--- Explain the problem briefly -->
If the `traffic_group` parameter in the `bigip_selfip` module is set to a valid traffic group that exists on the remote device such as `traffic-group-local-only` or `traffic-group-1`, the `bigip_selfip` module will always return the error: `The specified traffic group was not found`.
##### STEPS TO REPRODUCE
Create a task that uses the `bigip_selfip` module and specify a valid `traffic_group` parameter.
_NOTE: The default traffic group `traffic-group-local-only` will always exist, and the traffic group `traffic-group-1` will be automatically created during the process of configuring device HA through the HA Wizard._
```
- name: Assign Floating IP to external
bigip_selfip:
address: "1.1.1.1"
name: "external_floating"
netmask: "255.255.255.0"
password: "{{ bigip_password }}"
server: "{{ inventory_hostname }}"
traffic_group: "traffic-group-1"
user: "{{ bigip_username }}"
validate_certs: "{{ validate_certs }}"
vlan: "external"
```
##### EXPECTED RESULTS
It should set the self ip's traffic group to the valid traffic group specified.
##### ACTUAL RESULTS
The Ansible task fails with the following error: `The specified traffic group was not found`.
```
TASK [Assign Floating IP to internal] ******************************************
task path: /site/site.yaml:96
Using module file /site/library/bigip_selfip.py
<lbl11.example.com> ESTABLISH LOCAL CONNECTION FOR USER: root
<lbl11.example.com> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1474482164.33-122340317119314 `" && echo ansible-tmp-1474482164.33-122340317119314="` echo $HOME/.ansibl
e/tmp/ansible-tmp-1474482164.33-122340317119314 `" ) && sleep 0'
<lbl11.example.com> PUT /tmp/tmpo34T9a TO /root/.ansible/tmp/ansible-tmp-1474482164.33-122340317119314/bigip_selfip.py
<lbl11.example.com> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1474482164.33-122340317119314/ /root/.ansible/tmp/ansible-tmp-1474482164.33-122340317119314/bigip_selfip.py && sleep 0'
<lbl11.example.com> EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1474482164.33-122340317119314/bigip_selfip.py; rm -rf "/root/.ansible/tmp/ansible-tmp-1474482164.33-122340317119314/" >
/dev/null 2>&1 && sleep 0'
fatal: [lbl11.example.com]: FAILED! => {
"changed": false,
"failed": true,
"invocation": {
"module_args": {
"address": "10.11.50.9",
"allow_service": null,
"name": "internal_floating",
"netmask": "255.255.254.0",
"partition": "Common",
"password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"server": "lbl11.example.com",
"server_port": 443,
"state": "present",
"traffic_group": "traffic-group-1",
"user": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"validate_certs": false,
"vlan": "internal"
},
"module_name": "bigip_selfip"
},
"msg": "The specified traffic group was not found"
}
to retry, use: --limit @/site/site.retry
```
| main | bigip selfip fails when using traffic group parameter issue type bug report component name bigip bigip selfip ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides bigip ltm version sys version main package product big ip version build edition final date thu aug pdt python version python configuration retry files enabled false host key checking false os environment alpine docker container summary if the traffic group parameter in the bigip selfip module is set to a valid traffic group that exists on the remote device such as traffic group local only or traffic group the bigip selfip module will always return the error the specified traffic group was not found steps to reproduce create a task that uses the bigip selfip module and specify a valid traffic group parameter note the default traffic group traffic group local only will always exist and the traffic group traffic group will be automatically created during the process of configuring device ha through the ha wizard name assign floating ip to external bigip selfip address name external floating netmask password bigip password server inventory hostname traffic group traffic group user bigip username validate certs validate certs vlan external expected results it should set the self ip s traffic group to the valid traffic group specified actual results the ansible task fails with the following error the specified traffic group was not found task task path site site yaml using module file site library bigip selfip py establish local connection for user root exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansibl e tmp ansible tmp sleep put tmp to root ansible tmp ansible tmp bigip selfip py exec bin sh c chmod u x root ansible tmp ansible tmp root ansible tmp ansible tmp bigip selfip py sleep exec bin sh c usr bin python root ansible tmp ansible tmp bigip selfip py rm rf root ansible tmp ansible tmp dev null sleep fatal failed changed false failed true invocation module args address allow service null name internal floating netmask partition common password value specified in no log parameter server example com server port state present traffic group traffic group user value specified in no log parameter validate certs false vlan internal module name bigip selfip msg the specified traffic group was not found to retry use limit site site retry | 1 |
5,229 | 26,517,024,708 | IssuesEvent | 2023-01-18 21:45:40 | aws/aws-lambda-builders | https://api.github.com/repos/aws/aws-lambda-builders | closed | Bug: recursively copies artifact dir when it's in source dir | type/feature maintainer/need-followup | Recursively copies artifact dir when it's in source dir until it errors out:
JSON-RPC input:
```json
{
...
"source_dir": "path/to/source",
"artifacts_dir": "path/to/source/.build/artifact",
...
}
```
Then running `lambda-builders <JSON INPUT>`
Results in
```bash
PythonPipBuilder:CopySource - [Errno 63] File name too long: '/path/to/source/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/python_dateutil-2.8.2.dist-info/top_level.txt'
```
Setting `artifact_dir` outside the `source_dir` solves this issue.
----
**Is this a bug, or is this expected? Perhaps we should add `ignore: {}` interface? I can contribute a PR for this if you guys can point me in the right direction.**
----
#### Additional environment details (Ex: Windows, Mac, Amazon Linux etc)
1. OS: Mac
2. If using SAM CLI, `sam --version`:
3. AWS region: us-east-1 (shouldn't matter)
`Add --debug flag to any SAM CLI commands you are running`
| True | Bug: recursively copies artifact dir when it's in source dir - Recursively copies artifact dir when it's in source dir until it errors out:
JSON-RPC input:
```json
{
...
"source_dir": "path/to/source",
"artifacts_dir": "path/to/source/.build/artifact",
...
}
```
Then running `lambda-builders <JSON INPUT>`
Results in
```bash
PythonPipBuilder:CopySource - [Errno 63] File name too long: '/path/to/source/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/.build/artifact/python_dateutil-2.8.2.dist-info/top_level.txt'
```
Setting `artifact_dir` outside the `source_dir` solves this issue.
----
**Is this a bug, or is this expected? Perhaps we should add `ignore: {}` interface? I can contribute a PR for this if you guys can point me in the right direction.**
----
#### Additional environment details (Ex: Windows, Mac, Amazon Linux etc)
1. OS: Mac
2. If using SAM CLI, `sam --version`:
3. AWS region: us-east-1 (shouldn't matter)
`Add --debug flag to any SAM CLI commands you are running`
| main | bug recursively copies artifact dir when it s in source dir recursively copies artifact dir when it s in source dir until it errors out json rpc input json source dir path to source artifacts dir path to source build artifact then running lambda builders results in bash pythonpipbuilder copysource file name too long path to source build artifact build artifact build artifact build artifact build artifact build artifact build artifact build artifact build artifact build artifact build artifact build artifact build artifact build artifact build artifact build artifact build artifact build artifact build artifact build artifact build artifact build artifact build artifact build artifact build artifact build artifact build artifact build artifact build artifact build artifact build artifact build artifact build artifact build artifact build artifact build artifact build artifact build artifact build artifact build artifact build artifact build artifact build artifact build artifact build artifact build artifact build artifact build artifact build artifact build artifact build artifact build artifact python dateutil dist info top level txt setting artifact dir outside the source dir solves this issue is this a bug or is this expected perhaps we should add ignore interface i can contribute a pr for this if you guys can point me in the right direction additional environment details ex windows mac amazon linux etc os mac if using sam cli sam version aws region us east shouldn t matter add debug flag to any sam cli commands you are running | 1 |
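The failure described above — `artifacts_dir` nested inside `source_dir`, so the copy keeps descending into its own output — is the classic recursive-copy trap, and an `ignore` hook of the kind the reporter suggests is the usual fix. Here is a standard-library sketch of that idea; the function and the example paths are illustrative assumptions, not the actual aws-lambda-builders implementation.

```python
import os
import shutil


def copy_source(source_dir, artifacts_dir):
    """Copy source_dir into artifacts_dir, never descending into
    artifacts_dir itself if it happens to live inside source_dir."""
    artifacts_real = os.path.realpath(artifacts_dir)

    def ignore(directory, names):
        # Drop any entry that resolves to the artifacts directory, so the
        # copy cannot recurse into its own output.
        return {
            name
            for name in names
            if os.path.realpath(os.path.join(directory, name)) == artifacts_real
        }

    # dirs_exist_ok needs Python 3.8+; the destination usually already exists.
    shutil.copytree(source_dir, artifacts_dir, ignore=ignore, dirs_exist_ok=True)


# Hypothetical usage mirroring the report above:
# copy_source("path/to/source", "path/to/source/.build/artifact")
```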
119,987 | 4,778,758,126 | IssuesEvent | 2016-10-27 20:18:57 | easydigitaldownloads/easy-digital-downloads | https://api.github.com/repos/easydigitaldownloads/easy-digital-downloads | closed | problems with multiple instances of the cart widget | Bug Frontend Priority: Medium | So it appears that if more than one cart widget is on a page, the second (and beyond) widget in the HTML flow will not display or behave properly. Here's what I see so far:
* The second widget [specifically] does not display a title: http://glui.me/?i=ypc8f0vqffffcgv/2015-07-09_at_12.56_PM.png/
* Once you add an item to the cart and the widget updates (NO page refresh), the cart items are missing from the second widget and beyond: http://glui.me/?i=ga7qhlotrlt6zo9/2015-07-09_at_12.58_PM.png/
* Refresh the page and the cart items show up (still not second widget title): http://glui.me/?i=ztc92e627eq3pwo/2015-07-09_at_12.59_PM.png/ | 1.0 | problems with multiple instances of the cart widget - So it appears that if more than one cart widget is on a page, the second (and beyond) widget in the HTML flow will not display or behave properly. Here's what I see so far:
* The second widget [specifically] does not display a title: http://glui.me/?i=ypc8f0vqffffcgv/2015-07-09_at_12.56_PM.png/
* Once you add an item to the cart and the widget updates (NO page refresh), the cart items are missing from the second widget and beyond: http://glui.me/?i=ga7qhlotrlt6zo9/2015-07-09_at_12.58_PM.png/
* Refresh the page and the cart items show up (still not second widget title): http://glui.me/?i=ztc92e627eq3pwo/2015-07-09_at_12.59_PM.png/ | non_main | problems with multiple instances of the cart widget so it appears that if more than one cart widget is on a page the second and beyond widget in the html flow will not display or behave properly here s what i see so far the second widget does not display a title once you add an item to the cart and the widget updates no page refresh the cart items are missing from the second widget and beyond refresh the page and the cart items show up still not second widget title | 0 |
99,060 | 30,268,069,549 | IssuesEvent | 2023-07-07 13:23:20 | cms-sw/cmssw | https://api.github.com/repos/cms-sw/cmssw | closed | Build CMSSW_13_0_10 | release-notes-requested release-announced release-build-request slc7_amd64_gcc11-finished el8_amd64_gcc11-finished el8_aarch64_gcc11-finished el8_ppc64le_gcc11-finished el9_amd64_gcc11-finished | To start the MC production campaign for Run3 2023
The build will go in parallel with the IB tests in CMSSW_13_0_X_2023-07-05-1100, to speed up the procedure: the release will get uploaded only if those tests show no issues. | 1.0 | Build CMSSW_13_0_10 - To start the MC production campaign for Run3 2023
The build will go in parallel with the IB tests in CMSSW_13_0_X_2023-07-05-1100, to speed up the procedure: the release will get uploaded only if those tests show no issues. | non_main | build cmssw to start the mc production campaign for the build will go in parallel with the ib tests in cmssw x to speed up the procedure the release will get uploaded only if those tests show no issues | 0 |
2,368 | 8,470,346,938 | IssuesEvent | 2018-10-24 03:45:47 | AllAlgorithms/cpp | https://api.github.com/repos/AllAlgorithms/cpp | opened | Looking for a new C++ maintainer. | Hacktoberfest help wanted looking for maintainers 🙈 | Since we are a small team reviewing pull requests, we are looking for a C++ maintainer to resolve the outstanding pull requests:
Requirements:
- At least 6 months on GitHub (experience reviewing code, etc.)
- Previous C++ knowledge.
- Committed to reviewing and working on issues at least every 3 days.
- Open source enthusiast
- Must start this project :)
How to apply? Please join our [Gitter Chat](https://gitter.im/allalgorithms/cpp), and let me know in private or here! | True | Looking for a new C++ maintainer. - Since we are a small team reviewing pull requests, we are looking for a C++ maintainer to resolve the outstanding pull requests:
Requirements:
- At least 6 months on GitHub (experience reviewing code, etc.)
- Previous C++ knowledge.
- Committed to reviewing and working on issues at least every 3 days.
- Open source enthusiast
- Must start this project :)
How to apply? Please Join our [Gitter Chat](https://gitter.im/allalgorithms/cpp), and let me know in private or here! | main | looking for a new c maintainer since we are a small team reviewing pull requests we are looking a c maintainer to resolve the outstanding pull requests requirements at least month on github experience reviewig code etc previus c knowladge decided to review and work on issues at least every days open source enthusiast must start this project how to apply please join our and let me know in private or here | 1 |
98,112 | 11,045,380,040 | IssuesEvent | 2019-12-09 15:01:08 | 18F/dtmo-ei | https://api.github.com/repos/18F/dtmo-ei | closed | DTMO – Mid-Point Check-In | Epic documentation | As ```a representative of DTMO``` I want to know ```what the 18F has been and is up to``` in order to ```assess if they are providing value to our organization and fulfilling the scope.``` | 1.0 | DTMO – Mid-Point Check-In - As ```a representative of DTMO``` I want to know ```what the 18F has been and is up to``` in order to ```assess if they are providing value to our organization and fulfilling the scope.``` | non_main | dtmo – mid point check in as a representative of dtmo i want to know what the has been and is up to in order to assess if they are providing value to our organization and fulfilling the scope | 0 |
3,559 | 14,237,307,961 | IssuesEvent | 2020-11-18 17:03:14 | backdrop-ops/contrib | https://api.github.com/repos/backdrop-ops/contrib | closed | Permissions change request: Search API | Maintainer change request | Can someone with appropriate permissions adjust the Github settings for `search_api`? @earlyburg is listed as a maintainer at this point but does not have write access. See:
https://github.com/backdrop-contrib/search_api/pull/11#issuecomment-727858612
Also, less urgently, he is not a maintainer on Entity Plus but indicates he does have write access there. | True | Permissions change request: Search API - Can someone with appropriate permissions adjust the Github settings for `search_api`? @earlyburg is listed as a maintainer at this point but does not have write access. See:
https://github.com/backdrop-contrib/search_api/pull/11#issuecomment-727858612
Also, less urgently, he is not a maintainer on Entity Plus but indicates he does have write access there. | main | permissions change request search api can someone with appropriate permissions adjust the github settings for search api earlyburg is listed as a maintainer at this point but does not have write access see also less urgently he is not a maintainer on entity plus but indicates he does have write access there | 1 |
4,729 | 24,411,872,431 | IssuesEvent | 2022-10-05 13:03:11 | coq/platform | https://api.github.com/repos/coq/platform | closed | Add MathComp Word to the Coq Platform | kind: package inclusion approval: has maintainer agreement | [MathComp Word](https://github.com/jasmin-lang/coqword) is a Coq library on machine words based on Mathematical Components. It is a core dependency of [SSProve](https://github.com/SSProve/ssprove) (a [candidate](https://github.com/coq/platform/issues/177) to join the Platform) and other projects such as the [Jasmin compiler](https://github.com/jasmin-lang/jasmin). Machine words are a frequently formalized concept useful in many verification projects.
To allow SSProve and other projects to more easily use MathComp Word, and based on [this discussion on Zulip](https://coq.zulipchat.com/#narrow/stream/237977-Coq-users/topic/Word.20libraries.20and.20duplication/near/289583415), I propose that MathComp Word is added to the Coq Platform.
The primary maintainers of MathComp Word are @vbgl and @strub. In accordance with the [Platform package inclusion process](https://github.com/coq/platform/blob/main/charter.md#package-inclusion-process), we would like for them to comment here that they agree on including the library. In practice, this means committing to making a Git tag for every major Coq release in the GitHub repository (this tag is then ideally packaged in Coq opam repository).
cc: @spitters @gares (can this package use the `coq-math-comp-` prefix?) | True | Add MathComp Word to the Coq Platform - [MathComp Word](https://github.com/jasmin-lang/coqword) is a Coq library on machine words based on Mathematical Components. It is a core dependency of [SSProve](https://github.com/SSProve/ssprove) (a [candidate](https://github.com/coq/platform/issues/177) to join the Platform) and other projects such as the [Jasmin compiler](https://github.com/jasmin-lang/jasmin). Machine words are a frequently formalized concept useful in many verification projects.
To allow SSProve and other projects to more easily use MathComp Word, and based on [this discussion on Zulip](https://coq.zulipchat.com/#narrow/stream/237977-Coq-users/topic/Word.20libraries.20and.20duplication/near/289583415), I propose that MathComp Word is added to the Coq Platform.
The primary maintainers of MathComp Word are @vbgl and @strub. In accordance with the [Platform package inclusion process](https://github.com/coq/platform/blob/main/charter.md#package-inclusion-process), we would like for them to comment here that they agree on including the library. In practice, this means committing to making a Git tag for every major Coq release in the GitHub repository (this tag is then ideally packaged in Coq opam repository).
cc: @spitters @gares (can this package use the `coq-math-comp-` prefix?) | main | add mathcomp word to the coq platform is a coq library on machine words based on mathematical components it is a core dependency of a to join the platform and other projects such as the machine words are a frequently formalized concept useful in many verification projects to allow ssprove and other projects to more easily use mathcomp word and based on i propose that mathcomp word is added to the coq platform the primary maintainers of mathcomp word are vbgl and strub in accordance with the we would like for them to comment here that they agree on including the library in practice this means committing to making a git tag for every major coq release in the github repository this tag is then ideally packaged in coq opam repository cc spitters gares can this package use the coq math comp prefix | 1 |
20,229 | 3,317,720,833 | IssuesEvent | 2015-11-06 23:16:42 | spockframework/spock | https://api.github.com/repos/spockframework/spock | closed | Cannot resolve symbol from where: section after instanceof | Module-Core not a bug Status-New Type-Defect | Originally reported on Google Code with ID 343
```
I want to do something like this:
@Unroll
def " should contain #clazz.getSimpleName() bean"() {
given:
builder.build()
when:
def component = builder.getContainerComponent(clazz)
then:
component instanceof clazz
where:
clazz << [Javers, EntityManager, TypeMapper, DiffFactory]
}
but on line 10 I have a compilation problem. IntelliJ tells me that it cannot resolve
symbol clazz, but when I use AssertJ and do something like this (on line 10):
assertThat(component).isInstanceOf(clazz)
it works ;)
I try in Intellij 12 and 13.
What version of Spock and Groovy are you using?
0.7-groovy-2.0
```
Reported by `pawel.szymczyk90` on 2014-01-31 18:50:06
| 1.0 | Cannot resolve symbol from where: section after instanceof - Originally reported on Google Code with ID 343
```
I want to do something like this:
@Unroll
def " should contain #clazz.getSimpleName() bean"() {
given:
builder.build()
when:
def component = builder.getContainerComponent(clazz)
then:
component instanceof clazz
where:
clazz << [Javers, EntityManager, TypeMapper, DiffFactory]
}
but on line 10 I have a compilation problem. IntelliJ tells me that it cannot resolve
symbol clazz, but when I use AssertJ and do something like this (on line 10):
assertThat(component).isInstanceOf(clazz)
it works ;)
I try in Intellij 12 and 13.
What version of Spock and Groovy are you using?
0.7-groovy-2.0
```
Reported by `pawel.szymczyk90` on 2014-01-31 18:50:06
| non_main | cannot resolve symbol from where section after instanceof originally reported on google code with id i want to do something like this unroll def should contain clazz getsimplename bean given builder build when def component builder getcontainercomponent clazz then component instanceof clazz where clazz but in line i have compilation problem the intellij tell s me that cannot resolve symbol clazz but when i use assertj and do something like this in line assertthat component isinstanceof clazz it works i try in intellij and what version of spock and groovy are you using groovy reported by pawel on | 0 |
1,499 | 6,488,377,257 | IssuesEvent | 2017-08-20 16:29:32 | ocaml/opam-repository | https://api.github.com/repos/ocaml/opam-repository | closed | ocamlfind fails to compile with jocaml switch | bug needs admin action needs maintainer action | When installing oasis with the jocaml switch, I get the following error message:
~~~~
[pkl@phi ocamlec]$ opam switch 4.01.0+jocaml
[pkl@phi ocamlec]$ opam install oasis
The following actions will be performed:
∗ install ocamlfind 1.7.1 [required by oasis]
∗ install ocamlmod 0.0.8 [required by oasis]
∗ install ocamlify 0.0.1 [required by oasis]
∗ install oasis 0.4.8
===== ∗ 4 =====
Do you want to continue ? [Y/n] y
=-=- Gathering sources =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
[oasis] Archive in cache
[ocamlfind] Archive in cache
[ocamlify] Archive in cache
[ocamlmod] Archive in cache
=-=- Processing actions -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
[ERROR] The compilation of ocamlfind failed at "make install".
Processing 1/4: [ocamlfind: make uninstall]
#=== ERROR while installing ocamlfind.1.7.1 ===================================#
# opam-version 1.2.2
# os linux
# command make install
# path /home/pkl/.opam/4.01.0+jocaml/build/ocamlfind.1.7.1
# compiler 4.01.0+jocaml
# exit-code 2
# env-file /home/pkl/.opam/4.01.0+jocaml/build/ocamlfind.1.7.1/ocamlfind-13726-a00279.env
# stdout-file /home/pkl/.opam/4.01.0+jocaml/build/ocamlfind.1.7.1/ocamlfind-13726-a00279.out
# stderr-file /home/pkl/.opam/4.01.0+jocaml/build/ocamlfind.1.7.1/ocamlfind-13726-a00279.err
### stdout ###
# [...]
# make[1]: Leaving directory '/home/pkl/.opam/4.01.0+jocaml/build/ocamlfind.1.7.1'
# for p in findlib; do ( cd src/$p; make install ); done
# make[1]: Entering directory '/home/pkl/.opam/4.01.0+jocaml/build/ocamlfind.1.7.1/src/findlib'
# ocamldep *.ml *.mli >depend
# mkdir -p "/home/pkl/.opam/4.01.0+jocaml/lib/findlib"
# mkdir -p "/home/pkl/.opam/4.01.0+jocaml/bin"
# test 1 -eq 0 || cp topfind "/usr/lib/ocaml"
# Makefile:122: recipe for target 'install' failed
# make[1]: Leaving directory '/home/pkl/.opam/4.01.0+jocaml/build/ocamlfind.1.7.1/src/findlib'
# Makefile:20: recipe for target 'install' failed
### stderr ###
# cp: cannot create regular file '/usr/lib/ocaml/topfind': Permission denied
# make[1]: *** [install] Error 1
# make: *** [install] Error 2
=-=- Error report -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
The following actions were aborted
∗ install oasis 0.4.8
∗ install ocamlify 0.0.1
∗ install ocamlmod 0.0.8
The following actions failed
∗ install ocamlfind 1.7.1
No changes have been performed
~~~~ | True | ocamlfind fails to compile with jocaml switch - When installing oasis with the jocaml switch, I get the following error message:
~~~~
[pkl@phi ocamlec]$ opam switch 4.01.0+jocaml
[pkl@phi ocamlec]$ opam install oasis
The following actions will be performed:
∗ install ocamlfind 1.7.1 [required by oasis]
∗ install ocamlmod 0.0.8 [required by oasis]
∗ install ocamlify 0.0.1 [required by oasis]
∗ install oasis 0.4.8
===== ∗ 4 =====
Do you want to continue ? [Y/n] y
=-=- Gathering sources =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
[oasis] Archive in cache
[ocamlfind] Archive in cache
[ocamlify] Archive in cache
[ocamlmod] Archive in cache
=-=- Processing actions -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
[ERROR] The compilation of ocamlfind failed at "make install".
Processing 1/4: [ocamlfind: make uninstall]
#=== ERROR while installing ocamlfind.1.7.1 ===================================#
# opam-version 1.2.2
# os linux
# command make install
# path /home/pkl/.opam/4.01.0+jocaml/build/ocamlfind.1.7.1
# compiler 4.01.0+jocaml
# exit-code 2
# env-file /home/pkl/.opam/4.01.0+jocaml/build/ocamlfind.1.7.1/ocamlfind-13726-a00279.env
# stdout-file /home/pkl/.opam/4.01.0+jocaml/build/ocamlfind.1.7.1/ocamlfind-13726-a00279.out
# stderr-file /home/pkl/.opam/4.01.0+jocaml/build/ocamlfind.1.7.1/ocamlfind-13726-a00279.err
### stdout ###
# [...]
# make[1]: Leaving directory '/home/pkl/.opam/4.01.0+jocaml/build/ocamlfind.1.7.1'
# for p in findlib; do ( cd src/$p; make install ); done
# make[1]: Entering directory '/home/pkl/.opam/4.01.0+jocaml/build/ocamlfind.1.7.1/src/findlib'
# ocamldep *.ml *.mli >depend
# mkdir -p "/home/pkl/.opam/4.01.0+jocaml/lib/findlib"
# mkdir -p "/home/pkl/.opam/4.01.0+jocaml/bin"
# test 1 -eq 0 || cp topfind "/usr/lib/ocaml"
# Makefile:122: recipe for target 'install' failed
# make[1]: Leaving directory '/home/pkl/.opam/4.01.0+jocaml/build/ocamlfind.1.7.1/src/findlib'
# Makefile:20: recipe for target 'install' failed
### stderr ###
# cp: cannot create regular file '/usr/lib/ocaml/topfind': Permission denied
# make[1]: *** [install] Error 1
# make: *** [install] Error 2
=-=- Error report -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
The following actions were aborted
∗ install oasis 0.4.8
∗ install ocamlify 0.0.1
∗ install ocamlmod 0.0.8
The following actions failed
∗ install ocamlfind 1.7.1
No changes have been performed
~~~~ | main | ocamlfind fails to compile with jocaml switch when installing oasis with the jocaml switch i get the following error message opam switch jocaml opam install oasis the following actions will be performed ∗ install ocamlfind ∗ install ocamlmod ∗ install ocamlify ∗ install oasis ∗ do you want to continue y gathering sources archive in cache archive in cache archive in cache archive in cache processing actions the compilation of ocamlfind failed at make install processing error while installing ocamlfind opam version os linux command make install path home pkl opam jocaml build ocamlfind compiler jocaml exit code env file home pkl opam jocaml build ocamlfind ocamlfind env stdout file home pkl opam jocaml build ocamlfind ocamlfind out stderr file home pkl opam jocaml build ocamlfind ocamlfind err stdout make leaving directory home pkl opam jocaml build ocamlfind for p in findlib do cd src p make install done make entering directory home pkl opam jocaml build ocamlfind src findlib ocamldep ml mli depend mkdir p home pkl opam jocaml lib findlib mkdir p home pkl opam jocaml bin test eq cp topfind usr lib ocaml makefile recipe for target install failed make leaving directory home pkl opam jocaml build ocamlfind src findlib makefile recipe for target install failed stderr cp cannot create regular file usr lib ocaml topfind permission denied make error make error error report the following actions were aborted ∗ install oasis ∗ install ocamlify ∗ install ocamlmod the following actions failed ∗ install ocamlfind no changes have been performed | 1 |
3,068 | 11,493,772,154 | IssuesEvent | 2020-02-11 23:50:22 | alacritty/alacritty | https://api.github.com/repos/alacritty/alacritty | closed | Failure to detect full URL containing unclosed single quote | A - deps C - waiting on maintainer enhancement | ### Reproduction Steps
Here is an example link that demonstrates this bug:
```
echo "https://example.rs/alacritty's_thing.html"
```
The URL mouseover highlight ends at the single quote and when clicking on it only the URL up to that point is opened in the browser.
Closing the single quote does fix things as can be seen by adding one to the end of the link:
```
echo "https://example.rs/alacritty's_thing.html'"
```
### System
OS: Linux
Version: 0.4.1
Linux/BSD: Wayland (sway)
| True | Failure to detect full URL containing unclosed single quote - ### Reproduction Steps
Here is an example link that demonstrates this bug:
```
echo "https://example.rs/alacritty's_thing.html"
```
The URL mouseover highlight ends at the single quote and when clicking on it only the URL up to that point is opened in the browser.
Closing the single quote does fix things as can be seen by adding one to the end of the link:
```
echo "https://example.rs/alacritty's_thing.html'"
```
### System
OS: Linux
Version: 0.4.1
Linux/BSD: Wayland (sway)
| main | failure to detect full url containing unclosed single quote reproduction steps here an example link to display this bug echo the url mouseover highlight ends at the single quote and when clicking on it only the url up to that point is opened in the browser closing the single quote does fix things as can be seen by adding one to the end of the link echo system os linux version linux bsd wayland sway | 1 |
74,745 | 15,368,451,569 | IssuesEvent | 2021-03-02 05:35:15 | iUoB/help.iuob.uk | https://api.github.com/repos/iUoB/help.iuob.uk | closed | CVE-2019-14863 (Medium) detected in angular-1.4.2.min.js | security vulnerability | ## CVE-2019-14863 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>angular-1.4.2.min.js</b></p></summary>
<p>AngularJS is an MVC framework for building web applications. The core features include HTML enhanced with custom component and data-binding capabilities, dependency injection and strong focus on simplicity, testability, maintainability and boiler-plate reduction.</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/angular.js/1.4.2/angular.min.js">https://cdnjs.cloudflare.com/ajax/libs/angular.js/1.4.2/angular.min.js</a></p>
<p>Path to dependency file: help.iuob.uk/node_modules/autocomplete.js/examples/basic_angular.html</p>
<p>Path to vulnerable library: help.iuob.uk/node_modules/autocomplete.js/examples/basic_angular.html,help.iuob.uk/node_modules/autocomplete.js/test/playground_angular.html</p>
<p>
Dependency Hierarchy:
- :x: **angular-1.4.2.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/iUoB/help.iuob.uk/commit/bcc75f2ab9e6cb9d1e223057420d85c615ce7619">bcc75f2ab9e6cb9d1e223057420d85c615ce7619</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
There is a vulnerability in all angular versions before 1.5.0-beta.0, where after escaping the context of the web application, the web application delivers data to its users along with other trusted dynamic content, without validating it.
<p>Publish Date: 2020-01-02
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-14863>CVE-2019-14863</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/angular/angular.js/pull/12524">https://github.com/angular/angular.js/pull/12524</a></p>
<p>Release Date: 2020-01-02</p>
<p>Fix Resolution: angular - v1.5.0-beta.1;org.webjars:angularjs:1.5.0-rc.0 </p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2019-14863 (Medium) detected in angular-1.4.2.min.js - ## CVE-2019-14863 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>angular-1.4.2.min.js</b></p></summary>
<p>AngularJS is an MVC framework for building web applications. The core features include HTML enhanced with custom component and data-binding capabilities, dependency injection and strong focus on simplicity, testability, maintainability and boiler-plate reduction.</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/angular.js/1.4.2/angular.min.js">https://cdnjs.cloudflare.com/ajax/libs/angular.js/1.4.2/angular.min.js</a></p>
<p>Path to dependency file: help.iuob.uk/node_modules/autocomplete.js/examples/basic_angular.html</p>
<p>Path to vulnerable library: help.iuob.uk/node_modules/autocomplete.js/examples/basic_angular.html,help.iuob.uk/node_modules/autocomplete.js/test/playground_angular.html</p>
<p>
Dependency Hierarchy:
- :x: **angular-1.4.2.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/iUoB/help.iuob.uk/commit/bcc75f2ab9e6cb9d1e223057420d85c615ce7619">bcc75f2ab9e6cb9d1e223057420d85c615ce7619</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
There is a vulnerability in all angular versions before 1.5.0-beta.0, where after escaping the context of the web application, the web application delivers data to its users along with other trusted dynamic content, without validating it.
<p>Publish Date: 2020-01-02
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-14863>CVE-2019-14863</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/angular/angular.js/pull/12524">https://github.com/angular/angular.js/pull/12524</a></p>
<p>Release Date: 2020-01-02</p>
<p>Fix Resolution: angular - v1.5.0-beta.1;org.webjars:angularjs:1.5.0-rc.0 </p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve medium detected in angular min js cve medium severity vulnerability vulnerable library angular min js angularjs is an mvc framework for building web applications the core features include html enhanced with custom component and data binding capabilities dependency injection and strong focus on simplicity testability maintainability and boiler plate reduction library home page a href path to dependency file help iuob uk node modules autocomplete js examples basic angular html path to vulnerable library help iuob uk node modules autocomplete js examples basic angular html help iuob uk node modules autocomplete js test playground angular html dependency hierarchy x angular min js vulnerable library found in head commit a href found in base branch master vulnerability details there is a vulnerability in all angular versions before beta where after escaping the context of the web application the web application delivers data to its users along with other trusted dynamic content without validating it publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution angular beta org webjars angularjs rc step up your open source security game with whitesource | 0 |
3,531 | 13,911,578,268 | IssuesEvent | 2020-10-20 17:35:50 | grey-software/LinkedIn-Focus | https://api.github.com/repos/grey-software/LinkedIn-Focus | opened | 🚀 Feature Request: Add Chrome and Firefox web store links | Domain: User Experience Role: Maintainer Type: Enhancement hacktoberfest-accepted | ### Problem Overview 👁️🗨️
Users should be able to see the links to the LinkedIn-Focus extension (on the Chrome and Firefox web stores) in README.md.
### What would you like? 🧰
Add the links to the LinkedIn-Focus web extension that has been published on the Chrome and Firefox web stores to the README.md file.
### What alternatives have you considered? 🔍
N/A
### Additional details ℹ️
Links to LinkedIn-Focus on the web stores:
Chrome Web Store: https://chrome.google.com/webstore/detail/linkedin-focus/cmafljjdkloacahjddlpaognhjpacdff?hl=en
Mozilla Firefox Web Store: https://addons.mozilla.org/en-US/firefox/addon/linkedinfocus/
| True | 🚀 Feature Request: Add Chrome and Firefox web store links - ### Problem Overview 👁️🗨️
Users should be able to see the links to the LinkedIn-Focus extension (on the Chrome and Firefox web stores) in README.md.
### What would you like? 🧰
Add the links to the LinkedIn-Focus web extension that has been published on the Chrome and Firefox web stores to the README.md file.
### What alternatives have you considered? 🔍
N/A
### Additional details ℹ️
Links to LinkedIn-Focus on the web stores:
Chrome Web Store: https://chrome.google.com/webstore/detail/linkedin-focus/cmafljjdkloacahjddlpaognhjpacdff?hl=en
Mozilla Firefox Web Store: https://addons.mozilla.org/en-US/firefox/addon/linkedinfocus/
| main | 🚀 feature request add chrome and firefox web store links problem overview 👁️🗨️ users should be able to see the links to the linkedin focus extension on the chrome and firefox web store in readme md what would you like 🧰 add the links to the linkedin focus web extension that has been published on the chrome and firefox web stores to the readme md file what alternatives have you considered 🔍 n a additional details ℹ️ links to linkedin focus on the web stores chrome web store mozilla firefox web store | 1 |
672 | 4,214,089,113 | IssuesEvent | 2016-06-29 21:11:02 | duckduckgo/zeroclickinfo-spice | https://api.github.com/repos/duckduckgo/zeroclickinfo-spice | closed | Zanran: 'US Corporate Tax Rates' triggers on weak relevance | Bug Maintainer Timeout Relevancy | The results for 'US Corporate Tax Rates' (see attached) seem worse than the links, and so arguably this shouldn't pass relevancy checks.

------
IA Page: http://duck.co/ia/view/zanran
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @taw | True | Zanran: 'US Corporate Tax Rates' triggers on weak relevance - The results for 'US Corporate Tax Rates' (see attached) seem worse than the links, and so arguably this shouldn't pass relevancy checks.

------
IA Page: http://duck.co/ia/view/zanran
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @taw | main | zanran us corporate tax rates triggers on weak relevance the results for us corporate tax rates see attached seem worse than the links and so arguably this shouldn t pass relevancy checks ia page taw | 1 |
3,589 | 14,480,916,217 | IssuesEvent | 2020-12-10 11:52:04 | grey-software/org | https://api.github.com/repos/grey-software/org | opened | 🥅 Initiative: Create a dashboard for open source organizations | Domain: User Experience Role: Maintainer Role: Product Owner | ### Motivation 🏁
<!--
A clear and concise motivation for this initiative? How will this help execute the vision of the org?
-->
As the technical lead for an open-source organization, I have found it challenging to manage multiple software repositories and keep myself informed of all the events occurring across the various platforms I'm on.
At the moment, if I'd like an overview of the analytics and discussions for all my repositories, I have to click through multiple web pages and parse out the valuable information myself.
If I get an insight from a high-level look at the repositories and I want to create an issue, I'll have to once again navigate to the repo's page and create the issue.
### Initiative Overview 👁️🗨️
<!--
A clear and concise description of what the initiative is.
-->
I propose creating a dashboard to help open-source organization teams get relevant information and act quickly.
**Implementation Details 🛠️**
<!--- Please share a plan to help realize this initiative -->
Here are some early ideas I have:
- I should be able to view the project boards, analytics, issues, and PRs for all/pinned repositories
- I should be able to create an issue without having to make multiple clicks
- I should be able to view a feed of the community discussions
- I should have relevant notifications enter my feed | True | 🥅 Initiative: Create a dashboard for open source organizations - ### Motivation 🏁
<!--
A clear and concise motivation for this initiative? How will this help execute the vision of the org?
-->
As the technical lead for an open-source organization, I have found it challenging to manage multiple software repositories and keep myself informed of all the events occurring across the various platforms I'm on.
At the moment, if I'd like an overview of the analytics and discussions for all my repositories, I have to click through multiple web pages and parse out the valuable information myself.
If I get an insight from a high-level look at the repositories and I want to create an issue, I'll have to once again navigate to the repo's page and create the issue.
### Initiative Overview 👁️🗨️
<!--
A clear and concise description of what the initiative is.
-->
I propose creating a dashboard to help open-source organization teams get relevant information and act quickly.
**Implementation Details 🛠️**
<!--- Please share a plan to help realize this initiative -->
Here are some early ideas I have:
- I should be able to view the project boards, analytics, issues, and PRs for all/pinned repositories
- I should be able to create an issue without having to make multiple clicks
- I should be able to view a feed of the community discussions
- I should have relevant notifications enter my feed | main | 🥅 initiative create a dashboard for open source organizations motivation 🏁 a clear and concise motivation for this initiative how will this help execute the vision of the org as the technical lead for an open source organization i have found managing multiple software repositories and informing myself of all the events occurring throughout the various platforms i m on at the moment if i d like an overview of the analytics discussions for all my repositories i have to click through multiple web pages and parse the valuable information myself if i get an insight from a high level look at the repositories and i want to create an issue i ll have to once again navigate to the repo s page and create the issue initiative overview 👁️🗨️ a clear and concise description of what the initiative is i propose creating a dashboard to help open source organization teams get relevant information and act quickly implementation details 🛠️ here are some early ideas i have i should be able to view the project boards analytics issues and prs for all pinned repositories i should be able to create an issue without having to make multiple clicks i should be able to view a feed of the community discussions i should have relevant notifications enter my feed | 1 |
3,094 | 11,744,538,505 | IssuesEvent | 2020-03-12 07:58:15 | PointCloudLibrary/pcl | https://api.github.com/repos/PointCloudLibrary/pcl | opened | Ambiguous comment by codebase to developer | kind: question module: common needs: maintainer feedback | <!--- WARNING: This is an issue tracker. Before opening a new issue make sure you read https://github.com/PointCloudLibrary/pcl/blob/master/CONTRIBUTING.md#using-the-issue-tracker. -->
<!--- Provide a general summary of the issue in the Title above -->
## Context
https://github.com/PointCloudLibrary/pcl/blob/master/common/include/pcl/point_traits.h#L200
`point_traits.h`:200 refers to a bug #821 but that doesn't explain anything.
## Possible Solution
What's the actual reason?
It makes sense from a C++ perspective that a container with 0 fields will still have a size of 1 byte, but I'm not sure that's what's happening here.
| True | Ambiguous comment by codebase to developer - <!--- WARNING: This is an issue tracker. Before opening a new issue make sure you read https://github.com/PointCloudLibrary/pcl/blob/master/CONTRIBUTING.md#using-the-issue-tracker. -->
<!--- Provide a general summary of the issue in the Title above -->
## Context
https://github.com/PointCloudLibrary/pcl/blob/master/common/include/pcl/point_traits.h#L200
`point_traits.h`:200 refers to a bug #821 but that doesn't explain anything.
## Possible Solution
What's the actual reason?
It makes sense from a C++ perspective that a container with 0 fields will still have a size of 1 byte, but I'm not sure that's what's happening here.
| main | ambiguous comment by codebase to developer context point traits h refers to a bug but that doesn t explain anything possible solution what s the actual reason it makes sense from c perspective that the container of fields will have a byte memory usage but i m not sure that s what s happening here | 1 |
24,506 | 12,306,725,801 | IssuesEvent | 2020-05-12 02:20:53 | apache/incubator-doris | https://api.github.com/repos/apache/incubator-doris | closed | optimize cross join performance when where clause is or predicate and has common equal predicate exprs | Performance SQL Improvement | Queries like the one below cannot finish in an acceptable time. `store_sales` has 2800w (28 million) rows and `customer_address` has 5w (50,000) rows, and for now Doris will create only one cross join node to execute this SQL.
Evaluating the where clause takes about 200-300 ns per row pair, and the total number of evaluations will be 2800w * 5w ≈ 1.4 trillion, which is extremely large: at ~250 ns each this costs roughly 350,000 seconds (about 4 days);
```
select avg(ss_quantity)
,avg(ss_ext_sales_price)
,avg(ss_ext_wholesale_cost)
,sum(ss_ext_wholesale_cost)
from store_sales, customer_address
where ((ss_addr_sk = ca_address_sk
and ca_country = 'United States'
and ca_state in ('CO', 'IL', 'MN')
and ss_net_profit between 100 and 200
) or
(ss_addr_sk = ca_address_sk
and ca_country = 'United States'
and ca_state in ('OH', 'MT', 'NM')
and ss_net_profit between 150 and 300
) or
(ss_addr_sk = ca_address_sk
and ca_country = 'United States'
and ca_state in ('TX', 'MO', 'MI')
and ss_net_profit between 50 and 250
))
```
but this SQL can be rewritten to
```
select avg(ss_quantity)
,avg(ss_ext_sales_price)
,avg(ss_ext_wholesale_cost)
,sum(ss_ext_wholesale_cost)
from store_sales, customer_address
where ss_addr_sk = ca_address_sk
and ca_country = 'United States' and (((ca_state in ('CO', 'IL', 'MN')
and ss_net_profit between 100 and 200
) or
(ca_state in ('OH', 'MT', 'NM')
and ss_net_profit between 150 and 300
) or
(ca_state in ('TX', 'MO', 'MI')
and ss_net_profit between 50 and 250
))
)
```
therefore we can do a hash join first and then use
```
(((ca_state in ('CO', 'IL', 'MN')
and ss_net_profit between 100 and 200
) or
(ca_state in ('OH', 'MT', 'NM')
and ss_net_profit between 150 and 300
) or
(ca_state in ('TX', 'MO', 'MI')
and ss_net_profit between 50 and 250
))
)
```
to filter the rows.
On the TPC-DS 10 GB dataset, the rewritten SQL only costs about 1 second.

so we should implement this optimization. | True | optimize cross join performance when where clause is or predicate and has common equal predicate exprs - Queries like the one below cannot finish in an acceptable time. `store_sales` has 2800w (28 million) rows and `customer_address` has 5w (50,000) rows, and for now Doris will create only one cross join node to execute this SQL.
Evaluating the where clause takes about 200-300 ns per row pair, and the total number of evaluations will be 2800w * 5w ≈ 1.4 trillion, which is extremely large: at ~250 ns each this costs roughly 350,000 seconds (about 4 days);
```
select avg(ss_quantity)
,avg(ss_ext_sales_price)
,avg(ss_ext_wholesale_cost)
,sum(ss_ext_wholesale_cost)
from store_sales, customer_address
where ((ss_addr_sk = ca_address_sk
and ca_country = 'United States'
and ca_state in ('CO', 'IL', 'MN')
and ss_net_profit between 100 and 200
) or
(ss_addr_sk = ca_address_sk
and ca_country = 'United States'
and ca_state in ('OH', 'MT', 'NM')
and ss_net_profit between 150 and 300
) or
(ss_addr_sk = ca_address_sk
and ca_country = 'United States'
and ca_state in ('TX', 'MO', 'MI')
and ss_net_profit between 50 and 250
))
```
but this SQL can be rewritten to
```
select avg(ss_quantity)
,avg(ss_ext_sales_price)
,avg(ss_ext_wholesale_cost)
,sum(ss_ext_wholesale_cost)
from store_sales, customer_address
where ss_addr_sk = ca_address_sk
and ca_country = 'United States' and (((ca_state in ('CO', 'IL', 'MN')
and ss_net_profit between 100 and 200
) or
(ca_state in ('OH', 'MT', 'NM')
and ss_net_profit between 150 and 300
) or
(ca_state in ('TX', 'MO', 'MI')
and ss_net_profit between 50 and 250
))
)
```
therefore we can do a hash join first and then use
```
(((ca_state in ('CO', 'IL', 'MN')
and ss_net_profit between 100 and 200
) or
(ca_state in ('OH', 'MT', 'NM')
and ss_net_profit between 150 and 300
) or
(ca_state in ('TX', 'MO', 'MI')
and ss_net_profit between 50 and 250
))
)
```
to filter the rows.
On the TPC-DS 10 GB dataset, the rewritten SQL only costs about 1 second.

so we should implements this optimize | non_main | optimize cross join performance when where clause is or predicate and has common equal predicate exprs queries like below cannot finish in a acceptable time store sales has rows customer address has rows for now doris will create only one cross join node to execute this sql the time of eval the where clause is about ns the total count of eval will be this is extremely large and this will cost ns billion seconds; select avg ss quantity avg ss ext sales price avg ss ext wholesale cost sum ss ext wholesale cost from store sales customer address where ss addr sk ca address sk and ca country united states and ca state in co il mn and ss net profit between and or ss addr sk ca address sk and ca country united states and ca state in oh mt nm and ss net profit between and or ss addr sk ca address sk and ca country united states and ca state in tx mo mi and ss net profit between and but this sql can be rewrite to select avg ss quantity avg ss ext sales price avg ss ext wholesale cost sum ss ext wholesale cost from store sales customer address where ss addr sk ca address sk and ca country united states and ca state in co il mn and ss net profit between and or ca state in oh mt nm and ss net profit between and or ca state in tx mo mi and ss net profit between and there for we can do a hash join first and then use ca state in co il mn and ss net profit between and or ca state in oh mt nm and ss net profit between and or ca state in tx mo mi and ss net profit between and to filter the value in tpcds dataset the rewritten sql only cost about seconds so we should implements this optimize | 0 |
3,449 | 13,213,144,088 | IssuesEvent | 2020-08-16 11:14:56 | ansible/ansible | https://api.github.com/repos/ansible/ansible | closed | Terraform module can't handle list/dict variables | affects_2.7 bot_closed bug cloud collection collection:community.general has_pr module needs_collection_redirect needs_maintainer python3 support:community | ##### SUMMARY
The Terraform module does not correctly escape list/dict variables, so they can't be passed to the Terraform command.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
terraform
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.7.6
config file = None
configured module search path = ['/home/my_user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/my_user/.virtualenvs/my_project/lib/python3.6/site-packages/ansible
executable location = /home/my_user/.virtualenvs/my_project/bin/ansible
python version = 3.6.8 (default, Dec 24 2018, 19:24:27) [GCC 5.4.0 20160609]
```
##### CONFIGURATION
```
(no output for "ansible-config dump --only-changed")
```
##### OS / ENVIRONMENT
Ubuntu 16.04.5 LTS
##### STEPS TO REPRODUCE
- Terraform vars file:
```
variable "vms" {
type = "list"
}
```
- Terraform main file:
```
resource "null_resource" "debug" {
provisioner "local-exec" {
interpreter = ["/bin/bash", "-c"]
command = "echo '${jsonencode(var.vms)}' > /home/my_user/tmp/out"
}
}
```
- Ansible playbook
```yaml
- hosts: localhost
vars:
vms:
- "asdf"
- "qwer"
tasks:
- register: terraform_output
terraform:
project_path: "{{ playbook_dir }}/terraform/"
state: present
force_init: true
variables:
vms: "{{ vms }}"
```
##### EXPECTED RESULTS
Variable being passed to Terraform as you can do with "-var 'vms=["asdf", "qwer"]' "
##### ACTUAL RESULTS
```
TASK [Create resources] ************************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Failed to validate Terraform configuration files:\r\n\u001b[31mUsage: terraform validate [options] [dir]\n\n Validate the terraform files in a directory. Validation includes a\n basic check of syntax as well as checking that all variables declared\n in the configuration are specified in one of the possible ways:\n\n -var foo=...\n -var-file=foo.vars\n TF_VAR_foo environment variable\n terraform.tfvars\n default value\n\n If dir is not specified, then the current directory will be used.\n\nOptions:\n\n -check-variables=true If set to true (default), the command will check\n whether all required variables have been specified.\n\n -no-color If specified, output won't contain any color.\n\n -var 'foo=bar' Set a variable in the Terraform configuration. This\n flag can be set multiple times.\n\n -var-file=foo Set variables in the Terraform configuration from\n a file. If \"terraform.tfvars\" is present, it will be\n automatically loaded if this flag is not specified.\u001b[0m\u001b[0m\n"}
```
| True | Terraform module can't handle list/dict variables - ##### SUMMARY
The Terraform module does not correctly escape list/dict variables, so they can't be passed to the Terraform command.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
terraform
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.7.6
config file = None
configured module search path = ['/home/my_user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/my_user/.virtualenvs/my_project/lib/python3.6/site-packages/ansible
executable location = /home/my_user/.virtualenvs/my_project/bin/ansible
python version = 3.6.8 (default, Dec 24 2018, 19:24:27) [GCC 5.4.0 20160609]
```
##### CONFIGURATION
```
(no output for "ansible-config dump --only-changed")
```
##### OS / ENVIRONMENT
Ubuntu 16.04.5 LTS
##### STEPS TO REPRODUCE
- Terraform vars file:
```
variable "vms" {
type = "list"
}
```
- Terraform main file:
```
resource "null_resource" "debug" {
provisioner "local-exec" {
interpreter = ["/bin/bash", "-c"]
command = "echo '${jsonencode(var.vms)}' > /home/my_user/tmp/out"
}
}
```
- Ansible playbook
```yaml
- hosts: localhost
vars:
vms:
- "asdf"
- "qwer"
tasks:
- register: terraform_output
terraform:
project_path: "{{ playbook_dir }}/terraform/"
state: present
force_init: true
variables:
vms: "{{ vms }}"
```
##### EXPECTED RESULTS
Variable being passed to Terraform as you can do with "-var 'vms=["asdf", "qwer"]' "
##### ACTUAL RESULTS
```
TASK [Create resources] ************************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Failed to validate Terraform configuration files:\r\n\u001b[31mUsage: terraform validate [options] [dir]\n\n Validate the terraform files in a directory. Validation includes a\n basic check of syntax as well as checking that all variables declared\n in the configuration are specified in one of the possible ways:\n\n -var foo=...\n -var-file=foo.vars\n TF_VAR_foo environment variable\n terraform.tfvars\n default value\n\n If dir is not specified, then the current directory will be used.\n\nOptions:\n\n -check-variables=true If set to true (default), the command will check\n whether all required variables have been specified.\n\n -no-color If specified, output won't contain any color.\n\n -var 'foo=bar' Set a variable in the Terraform configuration. This\n flag can be set multiple times.\n\n -var-file=foo Set variables in the Terraform configuration from\n a file. If \"terraform.tfvars\" is present, it will be\n automatically loaded if this flag is not specified.\u001b[0m\u001b[0m\n"}
```
| main | terraform module can t handle list dict variables summary terraform module does not escape correctly list dict variables so they can t be passed to terraform command issue type bug report component name terraform ansible version paste below ansible config file none configured module search path ansible python module location home my user virtualenvs my project lib site packages ansible executable location home my user virtualenvs my project bin ansible python version default dec configuration no output for ansible config dump only changed os environment ubuntu lts steps to reproduce terraform vars file variable vms type list terraform main file resource null resource debug provisioner local exec interpreter command echo jsonencode var vms home my user tmp out ansible playbook yaml hosts localhost vars vms asdf qwer tasks register terraform output terraform project path playbook dir terraform state present force init true variables vms vms expected results variable being passed to terraform as you can do with var vms actual results task fatal failed changed false msg failed to validate terraform configuration files r n n n validate the terraform files in a directory validation includes a n basic check of syntax as well as checking that all variables declared n in the configuration are specified in one of the possible ways n n var foo n var file foo vars n tf var foo environment variable n terraform tfvars n default value n n if dir is not specified then the current directory will be used n noptions n n check variables true if set to true default the command will check n whether all required variables have been specified n n no color if specified output won t contain any color n n var foo bar set a variable in the terraform configuration this n flag can be set multiple times n n var file foo set variables in the terraform configuration from n a file if terraform tfvars is present it will be n automatically loaded if this flag is not specified n | 1 |
28,233 | 23,098,395,533 | IssuesEvent | 2022-07-26 22:17:55 | ampproject/amp-wp | https://api.github.com/repos/ampproject/amp-wp | closed | Adding continuous performance testing with Blackfire | Task P1 Infrastructure Performance WS:Perf Groomed | The proposed solution for adding robust manual and continuous profiling to monitor and improve the performance of the plugin is to go with the [Blackfire Profiler](https://blackfire.io/). This offers very robust and easy to analyse profiling as shown during a brief screenshare in our recent Plugin Sync meeting.
Any developer can run manual Blackfire tests at any time as needed. This can even be done with the free "Hack" plan, albeit with fewer features.
The license required for integrating Blackfire directly with GitHub and having it run profiling for every PR is the Enterprise license. This comes at $289 / month billed yearly, so a total of $3,468 per year.
## How to run Blackfire continuously
There are three main ways of running Blackfire continuously:
a.) Using HTTP access and the [Blackfire Player](https://blackfire.io/docs/player/index) to run Blackfire on a website we have deployed in some way.
b.) Using the [Blackfire PHP SDK](https://blackfire.io/docs/reference-guide/php-sdk) to run performance tests against the source code.
c.) Integrating Blackfire directly into the [PHPUnit tests](https://blackfire.io/docs/integrations/phpunit) to use assertions based on Blackfire metrics (like asserting that fewer than 10 SQL queries are run).
For a first iteration, I suggest concentrating on b.) and c.) only, as this is much more straightforward to implement and maintain, and will provide us with a large chunk of the benefits.
Once we're in a good place with our performance tests using the PHP SDK, we can discuss what site(s) to deploy, and where to deploy them, so we can run Blackfire against entire sites. This is then similar to the e2e tests, with the difference that they will profile the backend performance while controlling the frontend.
## Steps needed to integrate Blackfire continuous profiling into this plugin using the PHP SDK:
- [ ] Add a separate env to Travis for performance testing.
- [ ] Commit an encrypted file `.blackfire.travis.ini.enc` to the repository that contains the encrypted Blackfire credentials (see [Travis integration](https://blackfire.io/docs/integrations/travis) docs)
- [ ] Adapt the Travis config file to download and configure Blackfire on `before_install` and to disable Xdebug and launch the Blackfire agent on `before_script` (see [Travis integration](https://blackfire.io/docs/integrations/travis) docs)
- [ ] Write one or more scenario(s) that regroup multiple profile tests and assemble them into a build (see [Scenarios & Builds](https://blackfire.io/docs/reference-guide/php-sdk#php-sdk-builds) docs).
- [ ] When creating the build, the `'external_id'` should be the SHA1 of the pull request, and the `'external_parent_id'` should be the SHA1 of the base branch of the pull request. This is needed so that we can send a notification about the build status back to GitHub (see [Enabling the Update of Git Commit Statuses](https://blackfire.io/docs/reference-guide/php-sdk#php-sdk-commit-status) docs).
- [ ] Hook up the Blackfire Build configuration to the GitHub notification channel (see [Setting up the GitHub Notification Channel](https://blackfire.io/docs/integrations/github#setting-up-the-github-notification-channel) docs).
## What will the scenarios look like?
Here's an example of what scenarios and builds look like. Note that this is PHP code, and can therefore be made as DRY as we want.
We'd have at least 1 build that gets triggered by pull requests, and that build should test multiple scenarios.
> Note: the following code is untested.
```php
$blackfire = new Blackfire\Client();
$build = $blackfire->startBuild( 'AMP WP Plugin', [
'title' => 'Build from Travis',
'trigger_name' => 'pull-request',
'external_id' => getenv( 'TRAVIS_COMMIT' ),
'external_parent_id' => getenv( 'TRAVIS_PULL_REQUEST_SHA' ) . ':' . getenv( 'TRAVIS_BRANCH' ),
] );
// Number of profile samples per scenario (illustrative value).
$samples = 10;
$config = ( new Blackfire\Profile\Configuration() )
// We can define how many samples to profile to average out fluctuations.
->setSamples( $samples )
// We can have multiple environments to store the results in.
->setEnv('amp-wp');
// For each scenario, we adapt the configuration object.
$scenario = $blackfire->startScenario( $build, [
'title' => 'Tag & Attribute Sanitizer',
'metadata' => [
'pull-request' => getenv( 'TRAVIS_PULL_REQUEST' ),
'category' => 'sanitizer',
],
] );
$config->setScenario( $scenario );
// In PHP, we can manually control the probe and only enable it
// for the parts of the code we want to profile.
$probe = $blackfire->createProbe( $config, false );
for ( $sample = 1; $sample <= $samples; $sample++ ) {
// Start the actual profile run.
$probe->enable();
foo(); // The code we want to profile.
// Finish the profile run.
$probe->close();
}
// Send the results back to Blackfire.
$profile = $blackfire->endProbe( $probe );
// We need to close the scenario now to start the next.
// This returns the report, in case we want to act on it here.
$report = $blackfire->closeScenario( $scenario );
// After we went through all scenarios, we can close the build.
$blackfire->closeBuild( $build );
```
## What about the PHPUnit integration?
Within PHPUnit, we can use Blackfire for assertions. We can assert against the dimensions of any metric. The available dimensions are:
* `count`
* `wall_time`
* `cpu_time`
* `memory`
* `peak_memory`
* `network_in`
* `network_out`
* `io`
For comparisons, the following two functions can be used as well:
* `percent()` - e.g. `percent(main.wall_time) < 10%`
* `diff()` - e.g. `diff(metrics.sql.queries.count) < 2`
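To make this concrete, here is a minimal sketch of how such expressions could be attached to a profile configuration via `assert()` (the same call the PHPUnit example further down uses). The thresholds and assertion names are placeholders for illustration, not proposed budgets.
```php
use Blackfire\Profile\Configuration;

$config = new Configuration();

// Absolute thresholds on built-in metric dimensions (placeholder values).
$config->assert( 'main.wall_time < 50ms', 'Total wall time' );
$config->assert( 'main.peak_memory < 10MB', 'Peak memory' );

// Relative comparisons against a previous profile in the same environment.
$config->assert( 'percent(main.wall_time) < 10%', 'Wall time regression' );
$config->assert( 'diff(metrics.sql.queries.count) < 2', 'Additional SQL queries' );
```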
Apart from the [built-in metrics](https://blackfire.io/docs/reference-guide/metrics#built-in-metrics), we can define our own custom metrics that we assert the dimensions against. Here's an example of how that could work:
```php
use Blackfire\Profile\Metric;
$metric = new Metric( 'content_sanitizer.sanitize', '=AMP_Content_Sanitizer::sanitize' );
// Then we can add this custom metric to our profile's config object.
$config->defineMetric( $metric );
```
Now, let's see how we could use this metric in an assertion when running PHPUnit tests.
> Note: the following code is untested.
```php
use Blackfire\Bridge\PhpUnit\TestCaseTrait;
use Blackfire\Profile;
use Blackfire\Profile\Metric;

class AMP_Img_Sanitizer_Test extends WP_UnitTestCase
{
    use TestCaseTrait;

    /** @var Blackfire\Profile\Configuration */
    private $config;

    public function setUp() {
        parent::setUp();

        $this->config = new Blackfire\Profile\Configuration();

        $metric = new Metric(
            'content_sanitizer.sanitize',
            '=AMP_Content_Sanitizer::sanitize'
        );
        $this->config->defineMetric( $metric );
    }

    /**
     * @group blackfire
     * @requires extension blackfire
     */
    public function testSomething()
    {
        // First we need to define our assertions.
        $this->config
            ->assert( 'content_sanitizer.sanitize.wall_time < 200ms', 'Content sanitization time' )
            ->assert( 'content_sanitizer.sanitize.memory < 2MB', 'Content sanitization memory' )
            ->assert( 'content_sanitizer.sanitize.io < 5ms', 'Content sanitization I/O' );

        // Then we can do a profile run to see whether they hold true.
        $profile = $this->assertBlackfire( $this->config, function () {
            // Here we run the code that needs to be profiled.
        } );
    }
}
```
One way of using these asserts is to define performance budgets for the different subsystems and then make sure we can actually hit these budgets and enforce them.
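As a rough sketch of how that could look inside the test case above (the subsystem metrics and limits below are placeholders, not agreed budgets), the budgets could live in one array and be turned into assertions in a loop:
```php
// Placeholder budgets per custom metric; the real values still need to be agreed on.
$budgets = [
    'content_sanitizer.sanitize' => [ 'wall_time' => '200ms', 'memory' => '2MB' ],
    'embed_handler.sanitize'     => [ 'wall_time' => '50ms', 'memory' => '1MB' ],
];

foreach ( $budgets as $metric => $limits ) {
    foreach ( $limits as $dimension => $limit ) {
        $this->config->assert(
            sprintf( '%s.%s < %s', $metric, $dimension, $limit ),
            sprintf( 'Budget: %s %s', $metric, $dimension )
        );
    }
}
```
Keeping the budgets in one place like this makes tightening or relaxing a limit a one-line change.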
*Nice tip I gathered from the docs:*
When defining custom metrics, you can also reason about the argument that is being passed in. This is most useful if we have a place in the code where multiple code paths flow through based on differing arguments. You can define the metric to create separate nodes for the differing arguments that were passed in. This lets us verify whether we run a method multiple times for a given argument (which could then be cached) and whether there are very slow instances of doing so. Additionally, it lets us filter to only take said method into account for specific arguments, like getting a detailed profile for all actions/filters where the first argument starts with `'amp_'`.
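For illustration only, a metric along these lines could split hook dispatching into per-argument nodes. The `wp.hooks.do_action` name is made up, and I have not verified the exact argument-capture selector syntax, so the selector string below is an assumption that needs to be checked against the Blackfire metrics docs before use:
```php
use Blackfire\Profile\Metric;

// Hypothetical selector: capture the first argument of do_action() so every
// distinct hook name shows up as its own node in the profile. The exact
// argument-capture syntax must be confirmed against the Blackfire docs.
$metric = new Metric( 'wp.hooks.do_action', '=do_action($hook_name)' );
$config->defineMetric( $metric );
```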
## What about HTTP access using the Blackfire Player (option a. above)?
For this to work, we'd need to deploy a site in such a way that it is accessible via HTTP to Blackfire. This could mean a docker container we prepare within Travis (not 100% sure on timing here), or an external hosting we deploy to.
Blackfire comes with a built-in integration for [platform.sh](https://platform.sh), which has a "development-only" plan at $10 / month. However, I would prefer to concentrate on options b.) and c.) above, and to discuss in parallel what the "reference site(s)" for AMP should be that we want to run the tests against. Without reference sites to run the Blackfire Player tests against, it makes no sense to invest time and money into this side of the infrastructure.
Fixes #1017
148,479 | 5,683,144,064 | IssuesEvent | 2017-04-13 11:50:51 | openaddresses/openaddresses | https://api.github.com/repos/openaddresses/openaddresses | reopened | North Dakota Addresses | data-priority-2 | North Dakota is just empty. Let's fill in the gaps!
Day one - reached out to the following counties: Bottineau, Burke, Cavalier, Divide, Pembina, Renville, Rolette, and Towner. I am very optimistic about Pembina (http://pembinacountynd.gov/county/departments/it-gis-and-maps/) but no other county has a GIS website, so hopefully everyone I emailed can help us out somehow.
I'm going to email these in groups, a row a day - total project time would be five days.
day 1 progress:

2,935 | 10,514,369,678 | IssuesEvent | 2019-09-28 00:13:35 | Homebrew/homebrew-cask | https://api.github.com/repos/Homebrew/homebrew-cask | closed | zap trash: elevated permissions | awaiting maintainer feedback core help wanted | Currently it launches a popup even if `uninstall` prompts for sudo.
Also causes CI failures.
https://github.com/caskroom/homebrew-cask/pull/45540#issuecomment-378613552
https://travis-ci.org/caskroom/homebrew-cask/builds/355729073#L1929
```
$ brew cask zap virtualbox
==> Implied "brew cask uninstall virtualbox"
==> Running uninstall process for virtualbox; your password may be necessary
==> Running uninstall script VirtualBox_Uninstall.tool
Password:
==>
==> Welcome to the VirtualBox uninstaller script.
==>
==> The following files and directories (bundles) will be removed:
==> /Users/commitay/Library/LaunchAgents/org.virtualbox.vboxwebsrv.plist
==> /usr/local/bin/VirtualBox
==> /usr/local/bin/VBoxManage
==> /usr/local/bin/VBoxVRDP
==> /usr/local/bin/VBoxHeadless
==> /usr/local/bin/vboxwebsrv
==> /usr/local/bin/VBoxBugReport
==> /usr/local/bin/VBoxBalloonCtrl
==> /usr/local/bin/VBoxAutostart
==> /usr/local/bin/VBoxDTrace
==> /usr/local/bin/vbox-img
==> /Library/LaunchDaemons/org.virtualbox.startup.plist
==> /Library/Python/2.7/site-packages/vboxapi/VirtualBox_constants.py
==> /Library/Python/2.7/site-packages/vboxapi/VirtualBox_constants.pyc
==> /Library/Python/2.7/site-packages/vboxapi/__init__.py
==> /Library/Python/2.7/site-packages/vboxapi/__init__.pyc
==> /Library/Python/2.7/site-packages/vboxapi-1.0-py2.7.egg-info
==> /Library/Application Support/VirtualBox/LaunchDaemons/
==> /Library/Application Support/VirtualBox/VBoxDrv.kext/
==> /Library/Application Support/VirtualBox/VBoxUSB.kext/
==> /Library/Application Support/VirtualBox/VBoxNetFlt.kext/
==> /Library/Application Support/VirtualBox/VBoxNetAdp.kext/
==> /Applications/VirtualBox.app/
==> /Library/Python/2.7/site-packages/vboxapi/
==>
==> And the following KEXTs will be unloaded:
==> org.virtualbox.kext.VBoxUSB
==> org.virtualbox.kext.VBoxNetFlt
==> org.virtualbox.kext.VBoxNetAdp
==> org.virtualbox.kext.VBoxDrv
==>
==> And the traces of following packages will be removed:
==> org.virtualbox.pkg.vboxkexts
==> org.virtualbox.pkg.virtualbox
==> org.virtualbox.pkg.virtualboxcli
==>
==> The uninstallation processes requires administrative privileges
==> because some of the installed files cannot be removed by a normal
==> user. You may be prompted for your password now...
==>
==> unloading org.virtualbox.kext.VBoxUSB
==> unloading org.virtualbox.kext.VBoxNetFlt
==> unloading org.virtualbox.kext.VBoxNetAdp
==> unloading org.virtualbox.kext.VBoxDrv
==> Successfully unloaded VirtualBox kernel extensions.
==> Forgot package 'org.virtualbox.pkg.vboxkexts' on '/'.
==> Forgot package 'org.virtualbox.pkg.virtualbox' on '/'.
==> Forgot package 'org.virtualbox.pkg.virtualboxcli' on '/'.
==> Done.
==> Uninstalling packages:
==> Dispatching zap stanza
==> Running zap process for virtualbox; your password may be necessary
==> Trashing files:
/Library/Application Support/VirtualBox
**NEEDS PERMISSIONS**
~/Library/Application Support/com.apple.sharedfilelist/com.apple.LSSharedFileList.ApplicationRecentDocuments/org.virtualbox.app.virtualbox.sfl*
~/Library/Application Support/com.apple.sharedfilelist/com.apple.LSSharedFileList.ApplicationRecentDocuments/org.virtualbox.app.virtualboxvm.sfl*
~/Library/VirtualBox
~/Library/Preferences/org.virtualbox.app.VirtualBox.plist
~/Library/Preferences/org.virtualbox.app.VirtualBoxVM.plist
~/Library/Saved Application State/org.virtualbox.app.VirtualBox.savedState
~/Library/Saved Application State/org.virtualbox.app.VirtualBoxVM.savedState
**REQUESTS PERMISSIONS**
==> Removing directories if empty:
~/VirtualBox VMs
==> Removing all staged versions of Cask 'virtualbox'
```
 | True | zap trash: elevated permissions - Currently it launches a popup even if `uninstall` prompts for sudo.
4,283 | 21,553,727,185 | IssuesEvent | 2022-04-30 03:50:40 | Numble-challenge-Team/client | https://api.github.com/repos/Numble-challenge-Team/client | opened | Frontend development environment setup | maintain | ### ISSUE
- Type: chore
- Page: -
### Changes
- Add lint and prettier fix commands
- Add next.config.js to the tsconfig include value
74,788 | 25,329,165,655 | IssuesEvent | 2022-11-18 11:49:23 | scipy/scipy | https://api.github.com/repos/scipy/scipy | opened | BUG: spatial.ConvexHull hangs | defect | ### Describe your issue.
I have encountered an issue with spatial.ConvexHull, where the code simply hangs on the creation of the hull. It doesn't give any errors and cannot even be interrupted, it just hangs.
I am not using spatial.ConvexHull directly, but rather through the trimesh package; but have reproduced the issue in direct usage of spatial.ConvexHull with the same arguments used by trimesh.
I have only encountered this issue on one specific set of points, in many many calls to it in my project. I was unable to reduce these points to a minimal hardcoded set that reproduces the issue, so am attaching them as a csv file (it's not huge, there are 46 points).
[bad_points.csv](https://github.com/scipy/scipy/files/10040961/bad_points.csv)
### Reproducing Code Example
```python
import os
import numpy
from scipy import spatial
def try_bad_points():
    dir = os.path.dirname(__file__)
    points = numpy.loadtxt(dir + '/bad_points.csv', delimiter=',')
    hull = spatial.ConvexHull(points)  # this is fine
    hull = spatial.ConvexHull(points, qhull_options='QbB Pp Qt')  # usage as in trimesh - this hangs
    hull = spatial.ConvexHull(points, qhull_options='QbB')  # this also hangs
    return


if __name__ == '__main__':
    try_bad_points()
```
### Error message
```shell
-
```
### SciPy/NumPy/Python version information
1.9.3 1.21.6 sys.version_info(major=3, minor=10, micro=6, releaselevel='final', serial=0)
271,370 | 29,477,934,988 | IssuesEvent | 2023-06-02 01:05:36 | samq-ghdemo/SEARCH-NCJIS-nibrs | https://api.github.com/repos/samq-ghdemo/SEARCH-NCJIS-nibrs | opened | CVE-2023-20861 (Medium) detected in multiple libraries | Mend: dependency security vulnerability | ## CVE-2023-20861 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>spring-expression-4.3.11.RELEASE.jar</b>, <b>spring-expression-5.0.9.RELEASE.jar</b>, <b>spring-expression-5.1.7.RELEASE.jar</b>, <b>spring-expression-3.2.16.RELEASE.jar</b></p></summary>
<p>
<details><summary><b>spring-expression-4.3.11.RELEASE.jar</b></p></summary>
<p>Spring Expression Language (SpEL)</p>
<p>Library home page: <a href="https://github.com/spring-projects/spring-framework">https://github.com/spring-projects/spring-framework</a></p>
<p>Path to dependency file: /tools/nibrs-fbi-service/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/springframework/spring-expression/4.3.11.RELEASE/spring-expression-4.3.11.RELEASE.jar,/tools/nibrs-fbi-service/target/nibrs-fbi-service-1.0.0/WEB-INF/lib/spring-expression-4.3.11.RELEASE.jar</p>
<p>
Dependency Hierarchy:
- :x: **spring-expression-4.3.11.RELEASE.jar** (Vulnerable Library)
</details>
<details><summary><b>spring-expression-5.0.9.RELEASE.jar</b></p></summary>
<p>Spring Expression Language (SpEL)</p>
<p>Library home page: <a href="https://github.com/spring-projects/spring-framework">https://github.com/spring-projects/spring-framework</a></p>
<p>Path to dependency file: /tools/nibrs-summary-report/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/springframework/spring-expression/5.0.9.RELEASE/spring-expression-5.0.9.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-expression/5.0.9.RELEASE/spring-expression-5.0.9.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-expression/5.0.9.RELEASE/spring-expression-5.0.9.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-expression/5.0.9.RELEASE/spring-expression-5.0.9.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-expression/5.0.9.RELEASE/spring-expression-5.0.9.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-expression/5.0.9.RELEASE/spring-expression-5.0.9.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-expression/5.0.9.RELEASE/spring-expression-5.0.9.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-expression/5.0.9.RELEASE/spring-expression-5.0.9.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-expression/5.0.9.RELEASE/spring-expression-5.0.9.RELEASE.jar,/web/nibrs-web/target/nibrs-web/WEB-INF/lib/spring-expression-5.0.9.RELEASE.jar</p>
<p>
Dependency Hierarchy:
- :x: **spring-expression-5.0.9.RELEASE.jar** (Vulnerable Library)
</details>
<details><summary><b>spring-expression-5.1.7.RELEASE.jar</b></p></summary>
<p>Spring Expression Language (SpEL)</p>
<p>Library home page: <a href="https://github.com/spring-projects/spring-framework">https://github.com/spring-projects/spring-framework</a></p>
<p>Path to dependency file: /tools/nibrs-summary-report-common/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/springframework/spring-expression/5.1.7.RELEASE/spring-expression-5.1.7.RELEASE.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.5.RELEASE.jar (Root Library)
- spring-webmvc-5.1.7.RELEASE.jar
- :x: **spring-expression-5.1.7.RELEASE.jar** (Vulnerable Library)
</details>
<details><summary><b>spring-expression-3.2.16.RELEASE.jar</b></p></summary>
<p>Spring Expression Language (SpEL)</p>
<p>Library home page: <a href="https://github.com/SpringSource/spring-framework">https://github.com/SpringSource/spring-framework</a></p>
<p>Path to dependency file: /tools/nibrs-common/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/springframework/spring-expression/3.2.16.RELEASE/spring-expression-3.2.16.RELEASE.jar</p>
<p>
Dependency Hierarchy:
- tika-parsers-1.18.jar (Root Library)
- uimafit-core-2.2.0.jar
- spring-context-3.2.16.RELEASE.jar
- :x: **spring-expression-3.2.16.RELEASE.jar** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/samq-ghdemo/SEARCH-NCJIS-nibrs/commit/2643373aa9a184ff4ea81e98caf4009bf2ee8e91">2643373aa9a184ff4ea81e98caf4009bf2ee8e91</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
In Spring Framework versions 6.0.0 - 6.0.6, 5.3.0 - 5.3.25, 5.2.0.RELEASE - 5.2.22.RELEASE, and older unsupported versions, it is possible for a user to provide a specially crafted SpEL expression that may cause a denial-of-service (DoS) condition.
<p>Publish Date: 2023-03-23
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-20861>CVE-2023-20861</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://spring.io/security/cve-2023-20861">https://spring.io/security/cve-2023-20861</a></p>
<p>Release Date: 2023-03-23</p>
<p>Fix Resolution (org.springframework:spring-expression): 5.2.23.RELEASE</p>
<p>Direct dependency fix Resolution (org.springframework.boot:spring-boot-starter-web): 2.4.0</p><p>Fix Resolution (org.springframework:spring-expression): 5.2.23.RELEASE</p>
<p>Direct dependency fix Resolution (org.apache.tika:tika-parsers): 1.21</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
5,321 | 26,876,613,789 | IssuesEvent | 2023-02-05 04:47:17 | Homebrew/homebrew-core | https://api.github.com/repos/Homebrew/homebrew-core | closed | CI: stop using workaround for cache issue | help wanted maintainer feedback | We have this line in our CI scripts:
https://github.com/Homebrew/homebrew-core/blob/master/.github/workflows/publish-commit-bottles.yml#L41-L42
Unfortunately, that PR was rejected, so we'll need to find a better way to resolve the issue that prompted it.
1,895 | 6,577,538,836 | IssuesEvent | 2017-09-12 01:37:05 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | user/group not accepting an actual array | affects_2.0 bug_report waiting_on_maintainer | ##### Issue Type:
- Bug Report
##### Plugin Name:
user
##### Ansible Version:
ansible 2.0.1.0
config file =
configured module search path = Default w/o overrides
##### Ansible Configuration:
##### Environment:
raspbian
##### Summary:
user module group param is expected to accept an array
##### Steps To Reproduce:
user:
  name: foo
  groups:
    - a
    - b
    - c
  append: true
##### Expected Results:
user foo added to groups a, b and c
##### Actual Results:
group does not exist ['a','b','c']
3,927 | 2,938,384,432 | IssuesEvent | 2015-07-01 10:24:18 | joomla/joomla-cms | https://api.github.com/repos/joomla/joomla-cms | closed | Broken Batch modal layout | No Code Attached Yet | Go to Com_content. You need more than 5 categories and at least one article.
Activate the Batch icon.
In 3.4.1 the dropdown of categories is visible in full length.
In 3.4.2 the dropdown is cut off.

See the screenshots. A small patch resolves the problem.
 | 1.0 | Broken Batch modal layout - Go to Com_content. You need > 5 several categories and alt least one article.
Activate the Batch icon
In 3.4.1 the dropdown of categories is visible in ful length
In 3.4.2 the Dropdown is cut

See the screenshots. A small patch resolves the problem.
 | non_main | broken batch modal layout go to com content you need several categories and alt least one article activate the batch icon in the dropdown of categories is visible in ful length in the dropdown is cut see the screens a small patch resolves the problem | 0 |
3,421 | 13,182,099,383 | IssuesEvent | 2020-08-12 15:15:54 | duo-labs/cloudmapper | https://api.github.com/repos/duo-labs/cloudmapper | closed | Feature: Add AWS VPN Gateway and Connections | map unmaintained_functionality | We have a few AWS VPN connections to external data centers and it would be nice to see the gateway IP and which VPCs have VPN connections to that gateway. All of our traffic is outbound over the vpn and not inbound so the normal process of Security Group rules showing connections doesn't work for this use case. | True | Feature: Add AWS VPN Gateway and Connections - We have a few AWS VPN connections to external data centers and it would be nice to see the gateway IP and which VPCs have VPN connections to that gateway. All of our traffic is outbound over the vpn and not inbound so the normal process of Security Group rules showing connections doesn't work for this use case. | main | feature add aws vpn gateway and connections we have a few aws vpn connections to external data centers and it would be nice to see the gateway ip and which vpcs have vpn connections to that gateway all of our traffic is outbound over the vpn and not inbound so the normal process of security group rules showing connections doesn t work for this use case | 1 |
385,206 | 26,624,287,186 | IssuesEvent | 2023-01-24 13:32:24 | Giveth/giveth-planning | https://api.github.com/repos/Giveth/giveth-planning | closed | Deprecate Giveth TRACE documentation | documentation | Update information about the shutdown of Giveth TRACE in https://docs.giveth.io/dapps/introTrace
It would be nice to create some content about the experience that was Trace and why we decided to move away from it, while still leaving a record in the docs of the useful information about how it worked.
Maybe someone is interested in writing this? @c0ric0ri | 1.0 | Deprecate Giveth TRACE documentation - Update information about the shutdown of Giveth TRACE in https://docs.giveth.io/dapps/introTrace
It would be nice to create some content about the experience that was Trace and why we decided to move away from it, while still leaving a record in the docs of the useful information about how it worked.
Maybe someone is interested in writing this? @c0ric0ri | non_main | deprecate giveth trace documentation update information about the shutdown of giveth trace in it would be nice to create some content about the experience that was trace and why we decided to move away from it and still leave the record of the useful information about how it worked in the docs maybe someone is interested in writing this | 0 |
247,344 | 20,973,496,845 | IssuesEvent | 2022-03-28 13:27:46 | IloveDev-Crew/anonymous-server | https://api.github.com/repos/IloveDev-Crew/anonymous-server | closed | feat: add Board CRUD | enhancement good first issue v1 Unit Test Integration Test | **Describe the issue**
Write bulletin board CRUD and write unit tests.
- [ ] : board crud
- [ ] : board unit test
| 2.0 | feat: add Board CRUD - **Describe the issue**
Write bulletin board CRUD and write unit tests.
- [ ] : board crud
- [ ] : board unit test
| non_main | feat add board crud describe the issue write bulletin board curd and write unit tests board crud board unit test | 0 |
724,998 | 24,948,204,748 | IssuesEvent | 2022-11-01 03:22:39 | akvo/akvo-rsr | https://api.github.com/repos/akvo/akvo-rsr | closed | Feature Request: Cumulative updates hint in Enumerator web form | Feature request Priority: Medium | ### What are you trying to do?
I'm trying to pull all previous / latest updates from the current enumerator to show it on the web form
### Describe the solution you'd like
Currently, the previous update shown comes from the last user who submitted. But when the cumulative feature is introduced, we need to show the previous update from the current enumerator as a hint as well.
### Have you considered alternatives?
_No response_
### Additional context
_No response_ | 1.0 | Feature Request: Cumulative updates hint in Enumerator web form - ### What are you trying to do?
I'm trying to pull all previous / latest updates from the current enumerator to show it on the web form
### Describe the solution you'd like
Currently, the previous update shown comes from the last user who submitted. But when the cumulative feature is introduced, we need to show the previous update from the current enumerator as a hint as well.
### Have you considered alternatives?
_No response_
### Additional context
_No response_ | non_main | feature request cumulative updates hint in enumerator web form what are you trying to do i m trying to pull all previous latest updates from the current enumerator to show it on the web form describe the solution you d like currently the previous update showed from the last submitted user but when the cumulative feature comes up we need to show the previous update from the current enumerator as well as a hint have you consider alternatives no response additional context no response | 0 |
178,508 | 14,671,867,002 | IssuesEvent | 2020-12-30 09:15:10 | CohenArthur/jinko | https://api.github.com/repos/CohenArthur/jinko | opened | Fix old documentation in REPL module | documentation good first issue | Right now, the documentation of the function `Repl::parse_instruction` is still the old one. The function used to take a reference
on an interpreter and add the newly-parsed function to it. Now, it returns an instruction if it has found one. | 1.0 | Fix old documentation in REPL module - Right now, the documentation of the function `Repl::parse_instruction` is still the old one. The function used to take a reference
on an interpreter and add the newly-parsed function to it. Now, it returns an instruction if it has found one. | non_main | fix old documentation in repl module right now the documentation of the function repl parse instruction is still of the old one the function used to take a reference on a interpreter and add the newly parsed function to it now it returns an instruction if it has found one | 0
209,599 | 7,177,719,065 | IssuesEvent | 2018-01-31 14:32:05 | containous/traefik | https://api.github.com/repos/containous/traefik | closed | Buffering client requests. | kind/enhancement priority/P3 | ### Do you want to request a *feature* or report a *bug*?
Feature
### What did you do?
We have a Traefik instance set up as ingress controller in our Kubernetes cluster. Before we used Nginx as a proxy for our services.
It was configured to buffer client' requests before sending them to upstream/destination.
Based on access metrics from Nginx and self-reported ones from microservices we set up alerts to trigger when response time passes the defined threshold. It was working fine.
However, Traefik doesn't buffer clients' requests; it pipes them straight through.
In the case of a client with a slow connection (e.g. a file upload), the request between Traefik and the microservice/destination can take very long (tens of seconds).
That is generally fine, as we don't have any control over users' connections. The problem is with metrics, as we are no longer able to tell whether the microservice behaves correctly based on request time.
Our Apdex alerts don't work.
Another drawback of not buffering requests is using more resources on microservices behind Traefik which work in a thread-per-request model.
### What did you expect to see?
We can base view on microservice healthiness on metrics we get from Traefik / self-reported by microservice.
### What did you see instead?
Metrics are highly affected by clients connection speed.
### Proposed solutions
* allow requests buffering (AFAIK it is supported by https://github.com/vulcand/oxy)
* provide more information about request processing time in logs. (e.g. last byte sent to destination to last byte received from destination)
| 1.0 | Buffering client requests. - ### Do you want to request a *feature* or report a *bug*?
Feature
### What did you do?
We have a Traefik instance set up as ingress controller in our Kubernetes cluster. Before we used Nginx as a proxy for our services.
It was configured to buffer client' requests before sending them to upstream/destination.
Based on access metrics from Nginx and self-reported ones from microservices we set up alerts to trigger when response time passes the defined threshold. It was working fine.
However, Traefik doesn't buffer clients' requests; it pipes them straight through.
In the case of a client with a slow connection (e.g. a file upload), the request between Traefik and the microservice/destination can take very long (tens of seconds).
That is generally fine, as we don't have any control over users' connections. The problem is with metrics, as we are no longer able to tell whether the microservice behaves correctly based on request time.
Our Apdex alerts don't work.
Another drawback of not buffering requests is using more resources on microservices behind Traefik which work in a thread-per-request model.
### What did you expect to see?
We can base view on microservice healthiness on metrics we get from Traefik / self-reported by microservice.
### What did you see instead?
Metrics are highly affected by clients connection speed.
### Proposed solutions
* allow requests buffering (AFAIK it is supported by https://github.com/vulcand/oxy)
* provide more information about request processing time in logs. (e.g. last byte sent to destination to last byte received from destination)
| non_main | buffering client requests do you want to request a feature or report a bug feature what did you do we have a traefik instance set up as ingress controller in our kubernetes cluster before we used nginx as a proxy for our services it was configured to buffer client requests before sending them to upstream destination based on access metrics from nginx and self reported ones from microservices we set up alerts to trigger when response time passes the defined threshold it was working fine however traefik doesn t buffer clients requests and pipes them straight to client in case of clients with slow user connection and e g file upload request between traefik and microservice destination can take very long tens of seconds it is generally fine as we don t have any control over users connection problem is with metrics as we are no longer able to tell whether microservice behaves correctly based on request time our apdex alerts don t work another drawback of not buffering requests is using more resources on microservices behind traefik which work in thread per request model what did you expect to see we can base view on microservice healthiness on metrics we get from traefik self reported by microservice what did you see instead metrics are highly affected by clients connection speed proposed solutions allow requests buffering afaik it is supported by provide more information about request processing time in logs e g last byte sent to destination to last byte received from destination | 0 |
826 | 4,461,295,829 | IssuesEvent | 2016-08-24 04:35:21 | duckduckgo/zeroclickinfo-goodies | https://api.github.com/repos/duckduckgo/zeroclickinfo-goodies | opened | Help Line: Add links to HelpLine entries | Maintainer Input Requested | For New Zealand there is just one helpline listed (Lifeline NZ); however there is no way to access their website given it doesn't appear in the search results.
I would like to make one or more of the following changes:
- Make the name link to the organisation's website.
- Add a (i) icon to the name or phone number.
- Display the url underneath each entry.
------
IA Page: http://duck.co/ia/view/help_line
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @conorfl | True | Help Line: Add links to HelpLine entries - For New Zealand there is just one helpline listed (Lifeline NZ); however there is no way to access their website given it doesn't appear in the search results.
I would like to make one or more of the following changes:
- Make the name link to the organisation's website.
- Add a (i) icon to the name or phone number.
- Display the url underneath each entry.
------
IA Page: http://duck.co/ia/view/help_line
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @conorfl | main | help line add links to helpline entries for new zealand there is just one helpline listed lifeline nz however there is no way to access their website given it doesn t appear in the search results i would like to make one or more of the following changes make the name link to the organisation s website add a i icon to the name or phone number display the url underneath each entry ia page conorfl | 1 |
72,062 | 18,984,995,359 | IssuesEvent | 2021-11-21 15:12:16 | Seddryck/NBi | https://api.github.com/repos/Seddryck/NBi | opened | Using roapi to serve tests dependant of REST calls | build dependencies | For integration and acceptance tests, we're using some external websites to respond to our API calls. Unfortunately, these external APIs are evolving, or the content returned is also evolving, making it difficult to test. Usage of [Roapi](http://github.com/roapi/roapi) would allow us to define our datafiles for testing. | 1.0 | Using roapi to serve tests dependant of REST calls - For integration and acceptance tests, we're using some external websites to respond to our API calls. Unfortunately, these external APIs are evolving, or the content returned is also evolving, making it difficult to test. Usage of [Roapi](http://github.com/roapi/roapi) would allow us to define our datafiles for testing. | non_main | using roapi to serve tests dependant of rest calls for integration and acceptance tests we re using some external websites to respond to our api calls unfortunately these external api are evolving or the content returned is also evolving making it difficult to test usage of would allow us to define our datafiles for testing | 0
1,751 | 6,574,956,800 | IssuesEvent | 2017-09-11 14:36:33 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | git module always fails on update if local has modification | affects_2.2 bug_report waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
git
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
2.2.0
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
debian:8.0 jessie
##### SUMMARY
<!--- Explain the problem briefly -->
If the local git repository has modifications, an update attempt always fails with Local modifications exist, even if force=yes was given.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes below -->
```
tasks:
- name: update project dependency
git: dest={{item.location|quote}} repo={{item.scm_url|quote}} version={{item.scm_revision|quote}} force=yes refspec={{item.scm_refspec}} accept_hostkey=yes
with_items: "{{ deps }}"
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
failed: [127.0.0.1] => {
"failed": true,
"invocation": {
"module_name": "git"
},
"item": {
"location": "/opt/tiger/neihan/conf",
"name": "neihan/conf",
"scm_refspec": "refs/heads/master",
"scm_revision": "master",
"scm_url": "ssh://*********/neihan/conf"
},
"module_stderr": "Shared connection to 127.0.0.1 closed.\r\n",
"module_stdout": "Traceback (most recent call last):\r\n File \"/tmp/ansible_U00Fwd/ansible_module_git.py\", line 1023, in <module>\r\n main()\r\n File \"/tmp/ansible_U00Fwd/ansible_module_git.py\", line 974, in main\r\n result.update(changed=True, after=remote_head, msg='Local modifications exist')\r\nUnboundLocalError: local variable 'remote_head' referenced before assignment\r\n",
"msg": "MODULE FAILURE"
}
```
| True | git module always fails on update if local has modification - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
git
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
2.2.0
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
debian:8.0 jessie
##### SUMMARY
<!--- Explain the problem briefly -->
If the local git repository has modifications, an update attempt always fails with Local modifications exist, even if force=yes was given.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes below -->
```
tasks:
- name: update project dependency
git: dest={{item.location|quote}} repo={{item.scm_url|quote}} version={{item.scm_revision|quote}} force=yes refspec={{item.scm_refspec}} accept_hostkey=yes
with_items: "{{ deps }}"
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
failed: [127.0.0.1] => {
"failed": true,
"invocation": {
"module_name": "git"
},
"item": {
"location": "/opt/tiger/neihan/conf",
"name": "neihan/conf",
"scm_refspec": "refs/heads/master",
"scm_revision": "master",
"scm_url": "ssh://*********/neihan/conf"
},
"module_stderr": "Shared connection to 127.0.0.1 closed.\r\n",
"module_stdout": "Traceback (most recent call last):\r\n File \"/tmp/ansible_U00Fwd/ansible_module_git.py\", line 1023, in <module>\r\n main()\r\n File \"/tmp/ansible_U00Fwd/ansible_module_git.py\", line 974, in main\r\n result.update(changed=True, after=remote_head, msg='Local modifications exist')\r\nUnboundLocalError: local variable 'remote_head' referenced before assignment\r\n",
"msg": "MODULE FAILURE"
}
```
| main | git module always fails on update if local has modification issue type bug report component name git ansible version configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific debian jessie summary if local git repository has modification an update attempt of always fails with local modifications exist even if force yes was given steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used tasks name update project dependency git dest item location quote repo item scm url quote version item scm revision quote force yes refspec item scm refspec accept hostkey yes with items deps expected results actual results failed failed true invocation module name git item location opt tiger neihan conf name neihan conf scm refspec refs heads master scm revision master scm url ssh neihan conf module stderr shared connection to closed r n module stdout traceback most recent call last r n file tmp ansible ansible module git py line in r n main r n file tmp ansible ansible module git py line in main r n result update changed true after remote head msg local modifications exist r nunboundlocalerror local variable remote head referenced before assignment r n msg module failure | 1 |
5,023 | 25,781,809,733 | IssuesEvent | 2022-12-09 16:34:22 | carbon-design-system/gatsby-theme-carbon | https://api.github.com/repos/carbon-design-system/gatsby-theme-carbon | closed | can i make navigation search engine with default open | status: needs triage 🕵️♀️ status: waiting for maintainer response 💬 |
### Specific timeline issues / requests
I want to make the search button in the navigation open when the application is first run, because my client wants the search to be easier to see and use
| True | can i make navigation search engine with default open -
### Specific timeline issues / requests
I want to make the search button in the navigation open when the application is first run, because my client wants the search to be easier to see and use
| main | can i make navigation search engine with default open specific timeline issues requests i want to make the search button in the navigation open when the application is first run because my client wants the search to be easier to see and use | 1 |
726 | 4,318,961,692 | IssuesEvent | 2016-07-24 11:08:00 | gogits/gogs | https://api.github.com/repos/gogits/gogs | closed | Mention fails, when username contains dash | kind/bug status/assigned to maintainer status/needs feedback |
- Gogs version (or commit ref): 0.9.25.0506 0a78d99
- Git version: 2.8.1
- Operating system: FreeBSD 10.1-RELEASE-p34
Reproduced at https://try.gogs.io/jeppech-aaa/testtestesetsetset/issues/1
| True | Mention fails, when username contains dash -
- Gogs version (or commit ref): 0.9.25.0506 0a78d99
- Git version: 2.8.1
- Operating system: FreeBSD 10.1-RELEASE-p34
Reproduced at https://try.gogs.io/jeppech-aaa/testtestesetsetset/issues/1
| main | mention fails when username contains dash gogs version or commit ref git version operating system freebsd release reproduced at | 1 |
435,438 | 12,535,574,618 | IssuesEvent | 2020-06-04 21:39:42 | olivertwistor/rtm-tools | https://api.github.com/repos/olivertwistor/rtm-tools | closed | Config file to store API credentials | priority: must | This config file should be outside of the build, so users can provide their own credentials without having to build the tools themselves. | 1.0 | Config file to store API credentials - This config file should be outside of the build, so users can provide their own credentials without having to build the tools themselves. | non_main | config file to store api credentials this config file should be outside of the build so users can provide their own credentials without having to build the tools themselves | 0 |
309,028 | 26,648,373,982 | IssuesEvent | 2023-01-25 11:47:51 | spring-projects/spring-framework | https://api.github.com/repos/spring-projects/spring-framework | closed | Investigate Kotlin DSL options for `expectAll()` in `WebTestClient` | in: test in: web status: declined in: kotlin | ## Overview
Commit 3c2dfebf4ec5e9d791c1f3c9fa0ad35a5a9fcd6b introduced a new `expectAll()` method in `WebTestClient` in order to support _soft assertions_.
We should investigate options for improving the developer experience with a Kotlin DSL.
## Related Issues
- #27317 | 1.0 | Investigate Kotlin DSL options for `expectAll()` in `WebTestClient` - ## Overview
Commit 3c2dfebf4ec5e9d791c1f3c9fa0ad35a5a9fcd6b introduced a new `expectAll()` method in `WebTestClient` in order to support _soft assertions_.
We should investigate options for improving the developer experience with a Kotlin DSL.
## Related Issues
- #27317 | non_main | investigate kotlin dsl options for expectall in webtestclient overview commit introduced a new expectall method in webtestclient in order to support soft assertions we should investigate options for improving the developer experience with a kotlin dsl related issues | 0 |
55,846 | 14,707,593,023 | IssuesEvent | 2021-01-04 21:53:42 | idaholab/moose | https://api.github.com/repos/idaholab/moose | reopened | Unit tests don't run in Ubuntu | C: MOOSEUnit T: defect | ## Bug Description
<!--A clear and concise description of the problem (Note: A missing feature is not a bug).-->
One of our users wishes to compile and run individual unit tests for a module. Our standard container distro is Ubuntu 18.04 with CentOS 8 as a fall-back. As shown below, what we've found is that unit tests indicate zero tests and zero test cases specifically on Ubuntu. Everything works in a CentOS container, and we get zero tests and zero test cases on non-container Ubuntu. From my testing, the problem lies with Ubuntu somehow.

## Steps to Reproduce
<!--Steps to reproduce the behavior (input file, or modifications to an existing input file, etc.)-->
Spin up Ubuntu in some fashion, and attempt to build and run a module's unit tests. @milljm, you'll find more detail in the email chain from a while ago with the subject `Branches and Building Unit Tests in Containers`.
## Impact
<!--Does this prevent you from getting your work done, or is it more of an annoyance?-->
I believe we'll need this working for SQA, but it's not preventing much work from being done. | 1.0 | Unit tests don't run in Ubuntu - ## Bug Description
<!--A clear and concise description of the problem (Note: A missing feature is not a bug).-->
One of our users wishes to compile and run individual unit tests for a module. Our standard container distro is Ubuntu 18.04 with CentOS 8 as a fall-back. As shown below, what we've found is that unit tests indicate zero tests and zero test cases specifically on Ubuntu. Everything works in a CentOS container, and we get zero tests and zero test cases on non-container Ubuntu. From my testing, the problem lies with Ubuntu somehow.

## Steps to Reproduce
<!--Steps to reproduce the behavior (input file, or modifications to an existing input file, etc.)-->
Spin up Ubuntu in some fashion, and attempt to build and run a module's unit tests. @milljm, you'll find more detail in the email chain from a while ago with the subject `Branches and Building Unit Tests in Containers`.
## Impact
<!--Does this prevent you from getting your work done, or is it more of an annoyance?-->
I believe we'll need this working for SQA, but it's not preventing much work from being done. | non_main | unit tests don t run in ubuntu bug description one of our users wishes to compile and run individual unit tests for a module our standard container distro is ubuntu with centos as a fall back as shown below what we ve found is that unit tests indicate zero tests and zero test cases specifically on ubuntu everything works in a centos container and we get zero tests and zero test cases on non container ubuntu from my testing the problem lies with ubuntu somehow steps to reproduce spin up ubuntu in some fashion and attempt to build and run a module s unit tests milljm you ll find more detail in the email chain from a while ago with the subject branches and building unit tests in containers impact i believe we ll need this working for sqa but it s not preventing much work from being done | 0 |
224,702 | 17,198,009,070 | IssuesEvent | 2021-07-16 20:46:33 | theislab/scanpy | https://api.github.com/repos/theislab/scanpy | closed | Sphinx 4.1.0 doesn't like ScanpyConfig | Area - Documentation 📒 Bug 🐛 | Update:
Docs don't build with sphinx 4.1.0 due to an error triggered by `scanpydoc`. Sphinx will be pinned until this is solved (which is when this issue should be closed). It's not obvious to me at the moment whether sphinx or scanpydoc is at fault.
---------------
Trying to build the docs with Sphinx 4.1.0 fails with the following output:
<details>
<summary> </summary>
```sh
$ make html
Running Sphinx v4.1.0
loading intersphinx inventory from https://anndata.readthedocs.io/en/stable/objects.inv...
loading intersphinx inventory from https://bbknn.readthedocs.io/en/latest/objects.inv...
loading intersphinx inventory from https://matplotlib.org/cycler/objects.inv...
loading intersphinx inventory from http://docs.h5py.org/en/stable/objects.inv...
loading intersphinx inventory from https://ipython.readthedocs.io/en/stable/objects.inv...
loading intersphinx inventory from https://leidenalg.readthedocs.io/en/latest/objects.inv...
loading intersphinx inventory from https://louvain-igraph.readthedocs.io/en/latest/objects.inv...
loading intersphinx inventory from https://matplotlib.org/objects.inv...
loading intersphinx inventory from https://networkx.github.io/documentation/networkx-1.10/objects.inv...
loading intersphinx inventory from https://docs.scipy.org/doc/numpy/objects.inv...
loading intersphinx inventory from https://pandas.pydata.org/pandas-docs/stable/objects.inv...
loading intersphinx inventory from https://docs.pytest.org/en/latest/objects.inv...
loading intersphinx inventory from https://docs.python.org/3/objects.inv...
loading intersphinx inventory from https://docs.scipy.org/doc/scipy/reference/objects.inv...
loading intersphinx inventory from https://seaborn.pydata.org/objects.inv...
loading intersphinx inventory from https://scikit-learn.org/stable/objects.inv...
loading intersphinx inventory from https://scanpy-tutorials.readthedocs.io/en/latest/objects.inv...
intersphinx inventory has moved: https://networkx.github.io/documentation/networkx-1.10/objects.inv -> https://networkx.org/documentation/networkx-1.10/objects.inv
intersphinx inventory has moved: https://docs.scipy.org/doc/numpy/objects.inv -> https://numpy.org/doc/stable/objects.inv
intersphinx inventory has moved: http://docs.h5py.org/en/stable/objects.inv -> https://docs.h5py.org/en/stable/objects.inv
[autosummary] generating autosummary for: _key_contributors.rst, api.rst, basic_usage.rst, community.rst, contributors.rst, dev/ci.rst, dev/code.rst, dev/documentation.rst, dev/external-tools.rst, dev/getting-set-up.rst, ..., release-notes/1.7.1.rst, release-notes/1.7.2.rst, release-notes/1.8.0.rst, release-notes/1.8.1.rst, release-notes/1.8.2.rst, release-notes/1.9.0.rst, release-notes/index.rst, release-notes/release-latest.rst, tutorials.rst, usage-principles.rst
Error in github_url('scanpy._settings.ScanpyConfig.N_PCS'):
Extension error (sphinx.ext.autosummary):
Handler <function process_generate_options at 0x139c4a940> for event 'builder-inited' threw an exception (exception: type object 'ScanpyConfig' has no attribute 'N_PCS')
make: *** [html] Error 2
```
</details>
However, I'm not entirely sure whether this is Sphinx's fault, or our own. Currently the [N_PCS parameter isn't in the rendered documentation](https://scanpy.readthedocs.io/en/stable/generated/scanpy._settings.ScanpyConfig.html#scanpy._settings.ScanpyConfig). I think it should be, and am not sure why it's not showing up here.
To summarize:
* Previous versions of our doc builds didn't seem to be including attribute docstrings for `ScanpyConfig`.
* Sphinx 4.1.0 raises an error when it hits this attribute | 1.0 | Sphinx 4.1.0 doesn't like ScanpyConfig - Update:
Docs don't build with sphinx 4.1.0 due to an error triggered by `scanpydoc`. Sphinx will be pinned until this is solved (which is when this issue should be closed). It's not obvious to me at the moment whether sphinx or scanpydoc is at fault.
---------------
Trying to build the docs with Sphinx 4.1.0 fails with the following output:
<details>
<summary> </summary>
```sh
$ make html
Running Sphinx v4.1.0
loading intersphinx inventory from https://anndata.readthedocs.io/en/stable/objects.inv...
loading intersphinx inventory from https://bbknn.readthedocs.io/en/latest/objects.inv...
loading intersphinx inventory from https://matplotlib.org/cycler/objects.inv...
loading intersphinx inventory from http://docs.h5py.org/en/stable/objects.inv...
loading intersphinx inventory from https://ipython.readthedocs.io/en/stable/objects.inv...
loading intersphinx inventory from https://leidenalg.readthedocs.io/en/latest/objects.inv...
loading intersphinx inventory from https://louvain-igraph.readthedocs.io/en/latest/objects.inv...
loading intersphinx inventory from https://matplotlib.org/objects.inv...
loading intersphinx inventory from https://networkx.github.io/documentation/networkx-1.10/objects.inv...
loading intersphinx inventory from https://docs.scipy.org/doc/numpy/objects.inv...
loading intersphinx inventory from https://pandas.pydata.org/pandas-docs/stable/objects.inv...
loading intersphinx inventory from https://docs.pytest.org/en/latest/objects.inv...
loading intersphinx inventory from https://docs.python.org/3/objects.inv...
loading intersphinx inventory from https://docs.scipy.org/doc/scipy/reference/objects.inv...
loading intersphinx inventory from https://seaborn.pydata.org/objects.inv...
loading intersphinx inventory from https://scikit-learn.org/stable/objects.inv...
loading intersphinx inventory from https://scanpy-tutorials.readthedocs.io/en/latest/objects.inv...
intersphinx inventory has moved: https://networkx.github.io/documentation/networkx-1.10/objects.inv -> https://networkx.org/documentation/networkx-1.10/objects.inv
intersphinx inventory has moved: https://docs.scipy.org/doc/numpy/objects.inv -> https://numpy.org/doc/stable/objects.inv
intersphinx inventory has moved: http://docs.h5py.org/en/stable/objects.inv -> https://docs.h5py.org/en/stable/objects.inv
[autosummary] generating autosummary for: _key_contributors.rst, api.rst, basic_usage.rst, community.rst, contributors.rst, dev/ci.rst, dev/code.rst, dev/documentation.rst, dev/external-tools.rst, dev/getting-set-up.rst, ..., release-notes/1.7.1.rst, release-notes/1.7.2.rst, release-notes/1.8.0.rst, release-notes/1.8.1.rst, release-notes/1.8.2.rst, release-notes/1.9.0.rst, release-notes/index.rst, release-notes/release-latest.rst, tutorials.rst, usage-principles.rst
Error in github_url('scanpy._settings.ScanpyConfig.N_PCS'):
Extension error (sphinx.ext.autosummary):
Handler <function process_generate_options at 0x139c4a940> for event 'builder-inited' threw an exception (exception: type object 'ScanpyConfig' has no attribute 'N_PCS')
make: *** [html] Error 2
```
</details>
However, I'm not entirely sure whether this is Sphinx's fault, or our own. Currently the [N_PCS parameter isn't in the rendered documentation](https://scanpy.readthedocs.io/en/stable/generated/scanpy._settings.ScanpyConfig.html#scanpy._settings.ScanpyConfig). I think it should be, and am not sure why it's not showing up here.
To summarize:
* Previous versions of our doc builds didn't seem to be including attribute docstrings for `ScanpyConfig`.
* Sphinx 4.1.0 raises an error when it hits this attribute | non_main | sphinx doesn t like scanpyconfig update docs don t build with sphinx due to a error triggered by scanpydoc sphinx will be pinned until this is solved which is when this issue should be closed it s not obvious to me at the moment whether sphinx or scanpydoc is at fault trying to build the docs with sphinx fails with the following output sh make html running sphinx loading intersphinx inventory from loading intersphinx inventory from loading intersphinx inventory from loading intersphinx inventory from loading intersphinx inventory from loading intersphinx inventory from loading intersphinx inventory from loading intersphinx inventory from loading intersphinx inventory from loading intersphinx inventory from loading intersphinx inventory from loading intersphinx inventory from loading intersphinx inventory from loading intersphinx inventory from loading intersphinx inventory from loading intersphinx inventory from loading intersphinx inventory from intersphinx inventory has moved intersphinx inventory has moved intersphinx inventory has moved generating autosummary for key contributors rst api rst basic usage rst community rst contributors rst dev ci rst dev code rst dev documentation rst dev external tools rst dev getting set up rst release notes rst release notes rst release notes rst release notes rst release notes rst release notes rst release notes index rst release notes release latest rst tutorials rst usage principles rst error in github url scanpy settings scanpyconfig n pcs extension error sphinx ext autosummary handler for event builder inited threw an exception exception type object scanpyconfig has no attribute n pcs make error however i m entirely sure if this is sphinx s fault or our own currently the i think it should be and am not sure why it s not showing up here to summarize previous versions of our doc builds didn t seem to be including attribute docstrings for scanpyconfig sphinx raises an error when it hits this attribute | 0 |
29,010 | 2,712,810,757 | IssuesEvent | 2015-04-09 15:45:08 | mavoine/tarsius | https://api.github.com/repos/mavoine/tarsius | closed | tag photos by drag and drop | auto-migrated Priority-Medium Type-Enhancement | ```
tag photos by drag and drop in both directions (drag tags on photos OR drag
photos on tags)
```
Original issue reported on code.google.com by `avoin...@gmail.com` on 11 Dec 2009 at 6:27 | 1.0 | tag photos by drag and drop - ```
tag photos by drag and drop in both directions (drag tags on photos OR drag
photos on tags)
```
Original issue reported on code.google.com by `avoin...@gmail.com` on 11 Dec 2009 at 6:27 | non_main | tag photos by drag and drop tag photos by drag and drop in both directions drag tags on photos or drag photos on tags original issue reported on code google com by avoin gmail com on dec at | 0 |
3,338 | 12,951,567,701 | IssuesEvent | 2020-07-19 17:15:32 | Homebrew/homebrew-cask | https://api.github.com/repos/Homebrew/homebrew-cask | opened | How to deal with mpv | awaiting maintainer feedback discussion help wanted | This is open to anyone to offer a suggestion.
[mpv](https://mpv.io/) is a popular media player. It has both [a formula](https://github.com/Homebrew/homebrew-core/blob/master/Formula/mpv.rb) and [a cask](https://github.com/Homebrew/homebrew-cask/blob/master/Casks/mpv.rb). The former does not provide an .app, but the latter does, because we want to have a clear separation of formulae and casks and avoid shipping GUIs in formulae.
The issue is that many users want the app bundle but there’s no official source to get it. In these cases, if upstream recommends a third-party build we’ll accept it as officially sanctioned. But in this instance they’re calling the builds they link to—and that we use—“[Unofficial third-party builds](https://mpv.io/installation/)”.
In other words, by providing mpv as a cask, we’re not following our own rules. How should we handle this?
---
#### Extra notes
As a fan of mpv, the way I handle it for myself is with [a custom formula](https://github.com/vitorgalvao/homebrew-mpv), based on the Homebrew/core one with `head` and the commands to build the bundle. I do it not because I don’t trust the source in the cask, but because mpv releases are arbitrary and rare; all they care about is HEAD, so that’s what I want.
If you want to take this matter up with upstream, I’ll request we discuss it here first, as a few of their maintainers are *extremely* biased and openly hostile towards macOS. I’ve experienced some of them who were not even part of a discussion join in just to insult macOS users.
If you intend to engage with upstream on your own, *please be nice* and respectful. Do not take it personally and keep the focus on technical terms for the conversation to be productive. Asking them to provide compiled app bundle would be a waste of time, but if you’re willing to do the leg work—make it fit into their build system and maintain it—you may have a shot. | True | How to deal with mpv - This is open to anyone to offer a suggestion.
[mpv](https://mpv.io/) is a popular media player. It has both [a formula](https://github.com/Homebrew/homebrew-core/blob/master/Formula/mpv.rb) and [a cask](https://github.com/Homebrew/homebrew-cask/blob/master/Casks/mpv.rb). The former does not provide an .app, but the latter does, because we want to have a clear separation of formulae and casks and avoid shipping GUIs in formulae.
The issue is that many users want the app bundle but there’s no official source to get it. In these cases, if upstream recommends a third-party build we’ll accept it as officially sanctioned. But in this instance they’re calling the builds they link to—and that we use—“[Unofficial third-party builds](https://mpv.io/installation/)”.
In other words, by providing mpv as a cask, we’re not following our own rules. How should we handle this?
---
#### Extra notes
As a fan of mpv, the way I handle it for myself is with [a custom formula](https://github.com/vitorgalvao/homebrew-mpv), based on the Homebrew/core one with `head` and the commands to build the bundle. I do it not because I don’t trust the source in the cask, but because mpv releases are arbitrary and rare; all they care about is HEAD, so that’s what I want.
If you want to take this matter up with upstream, I’ll request we discuss it here first, as a few of their maintainers are *extremely* biased and openly hostile towards macOS. I’ve experienced some of them who were not even part of a discussion join in just to insult macOS users.
If you intend to engage with upstream on your own, *please be nice* and respectful. Do not take it personally and keep the focus on technical terms for the conversation to be productive. Asking them to provide compiled app bundle would be a waste of time, but if you’re willing to do the leg work—make it fit into their build system and maintain it—you may have a shot. | main | how to deal with mpv this is open to anyone to offer a suggestion is a popular media player it has both and the former does not provide an app but the latter does because we want to have a clear separation of formulae and casks and avoid shipping guis in formulae the issue is that many users want the app bundle but there’s no official source to get it in these cases if upstream recommends a third party build we’ll accept it as officially sanctioned but in this instance they’re calling the builds they link to—and that we use—“ in other words by providing mpv as a cask we’re not following our own rules how should we handle this extra notes as a fan of mpv the way i handle it for myself is with based on the homebrew core one with head and the commands to build the bundle i do it not because i don’t trust the source in the cask but because mpv releases are arbitrary and rare all they care about is head so that’s what i want if you want to take this matter up with upstream i’ll request we discuss it here first as a few of their maintainers are extremely biased and openly hostile towards macos i’ve experienced some of them who were not even part of a discussion join in just to insult macos users if you intend to engage with upstream on your own please be nice and respectful do not take it personally and keep the focus on technical terms for the conversation to be productive asking them to provide compiled app bundle would be a waste of time but if you’re willing to do the leg work—make it fit into their build system and maintain it—you may have a shot | 1 |
34 | 2,576,424,015 | IssuesEvent | 2015-02-12 09:59:17 | daisy/pipeline-issues | https://api.github.com/repos/daisy/pipeline-issues | opened | Move px:dtbook-validator.select-schema from scripts to scripts-utils | 0 - Backlog enhancement Maintainability | see also #454
<!---
@huboard:{"order":1.7462298274040222e-10}
-->
| True | Move px:dtbook-validator.select-schema from scripts to scripts-utils - see also #454
<!---
@huboard:{"order":1.7462298274040222e-10}
-->
| main | move px dtbook validator select schema from scripts to scripts utils see also huboard order | 1 |
2,788 | 9,998,010,740 | IssuesEvent | 2019-07-12 06:56:08 | RalfKoban/MiKo-Analyzers | https://api.github.com/repos/RalfKoban/MiKo-Analyzers | opened | Do not use TimeSpan ctors directly | Area: analyzer Area: maintainability feature | When it comes to code readability, the creation of `TimeSpan` values is hard to read.
This is due to the nature of the ctors that have a lot of parameters - it cannot be easily detected which value is for which parameter.
Example:
```C#
var interval = new TimeSpan(42, 08, 15);
var interval = new TimeSpan(08, 15, 47, 11);
var interval = new TimeSpan(42, 08, 15, 47, 11);
```
The `TimeSpan` type provides static methods, such as `FromMinutes`, `FromMilliseconds`, etc.
So using them would be better because now the value can be easily spotted.
```C#
Thread.Sleep(new TimeSpan(0, 3, 0));
vs.
Thread.Sleep(TimeSpan.FromMinutes(3));
```
However, it still is cumbersome to read. Therefore, extension methods could be used.
```C#
Thread.Sleep(3.Minutes());
vs.
Thread.Sleep(TimeSpan.FromMinutes(3));
```
The extension method itself could look like
```C#
public static TimeSpan Minutes(this int value) => TimeSpan.FromMinutes(value);
``` | True | Do not use TimeSpan ctors directly - When it comes to code readability, the creation of `TimeSpan` values is hard to read.
This is due to the nature of the ctors that have a lot of parameters - it cannot be easily detected which value is for which parameter.
Example:
```C#
var interval = new TimeSpan(42, 08, 15);
var interval = new TimeSpan(08, 15, 47, 11);
var interval = new TimeSpan(42, 08, 15, 47, 11);
```
The `TimeSpan` type provides static methods, such as `FromMinutes`, `FromMilliseconds`, etc.
So using them would be better because now the value can be easily spotted.
```C#
Thread.Sleep(new TimeSpan(0, 3, 0));
vs.
Thread.Sleep(TimeSpan.FromMinutes(3));
```
However, it still is cumbersome to read. Therefore, extension methods could be used.
```C#
Thread.Sleep(3.Minutes());
vs.
Thread.Sleep(TimeSpan.FromMinutes(3));
```
The extension method itself could look like
```C#
public static TimeSpan Minutes(this int value) => TimeSpan.FromMinutes(value);
``` | main | do not use timespan ctors directly when it comes to code readability the creation of timespan values is hard to read this is due to the nature of the ctors that have a lot of parameters it cannot be easily detected which value is for which parameter example c var interval new timespan var interval new timespan var interval new timespan the timespan type provides static methods such as fromminutes frommilliseconds etc so using them would be better because now the value can be easily spot c thread sleep new timespan vs thread sleep timespan fromminutes however it still is cumbersome to read therefore extension methods could be used c thread sleep minutes vs thread sleep timespan fromminutes the extension method itself could look like c public static timespan minutes this int value timespan fromminutes value | 1 |
2,184 | 7,696,404,452 | IssuesEvent | 2018-05-18 15:13:11 | tgstation/tgstation | https://api.github.com/repos/tgstation/tgstation | closed | Can we please return the roundend report to the chatbox | Maintainability/Hinders improvements | 1. You can't access it after the round ends
2. The pop-up window gets in the way of the post-round murder you're either committing or trying to avoid | True | Can we please return the roundend report to the chatbox - 1. You can't access it after the round ends
2. The pop-up window gets in the way of the post-round murder you're either committing or trying to avoid | main | can we please return the roundend report to the chatbox you can t access it after the round ends the pop up window gets in the way of the post round murder you re either committing or trying to avoid | 1 |
1,104 | 4,981,606,644 | IssuesEvent | 2016-12-07 08:34:55 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | Implement sysctl reload functionality | affects_2.3 feature_idea waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Feature Idea
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
sysctl
##### ANSIBLE VERSION
N/A
##### SUMMARY
<!--- Explain the problem briefly -->
As part of transforming all command/shell actions into proper modules, we have a need to reload sysctl (`command: sysctl -p`). The main use-case is to use this as a notification handler when we template/assemble the sysctl.conf file.
| True | Implement sysctl reload functionality - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Feature Idea
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
sysctl
##### ANSIBLE VERSION
N/A
##### SUMMARY
<!--- Explain the problem briefly -->
As part of transforming all command/shell actions into proper modules, we have a need to reload sysctl (`command: sysctl -p`). The main use-case is to use this as a notification handler when we template/assemble the sysctl.conf file.
| main | implement sysctl reload functionality issue type feature idea component name sysctl ansible version n a summary as part of transforming all command shell actions into proper modules we have a need to reload sysctl command sysctl p the main use case is to use this as a notification handler when we template assemble the sysctl conf file | 1 |
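The request above already names the ingredients (`template`/`assemble` plus a notification handler running `sysctl -p`). A minimal sketch of that handler wiring; the play, template, and handler names are assumptions rather than anything taken from the issue:

```yaml
# Sketch only: template sysctl.conf and reload it via a notification handler.
- hosts: all
  tasks:
    - name: Deploy sysctl.conf
      template:
        src: sysctl.conf.j2        # hypothetical template name
        dest: /etc/sysctl.conf
      notify: reload sysctl

  handlers:
    - name: reload sysctl
      command: sysctl -p
```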
35,621 | 12,365,444,975 | IssuesEvent | 2020-05-18 08:50:52 | NatalyaDalid/NatRepository | https://api.github.com/repos/NatalyaDalid/NatRepository | closed | CVE-2020-11023 (Medium) detected in jquery-3.3.1.tgz | security vulnerability | ## CVE-2020-11023 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-3.3.1.tgz</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://registry.npmjs.org/jquery/-/jquery-3.3.1.tgz">https://registry.npmjs.org/jquery/-/jquery-3.3.1.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/NatRepository/docs/package.json</p>
<p>Path to vulnerable library: /NatRepository/docs/node_modules/jquery/package.json</p>
<p>
Dependency Hierarchy:
- :x: **jquery-3.3.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/NatalyaDalid/NatRepository/commit/d5855b917e28b880e479d9131093e8937cf1b61c">d5855b917e28b880e479d9131093e8937cf1b61c</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In jQuery versions greater than or equal to 1.0.3 and before 3.5.0, passing HTML containing <option> elements from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.
<p>Publish Date: 2020-04-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11023>CVE-2020-11023</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-jpcq-cgw6-v4j6">https://github.com/advisories/GHSA-jpcq-cgw6-v4j6</a></p>
<p>Release Date: 2020-04-29</p>
<p>Fix Resolution: 3.5.0</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"jquery","packageVersion":"3.3.1","isTransitiveDependency":false,"dependencyTree":"jquery:3.3.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"3.5.0"}],"vulnerabilityIdentifier":"CVE-2020-11023","vulnerabilityDetails":"In jQuery versions greater than or equal to 1.0.3 and before 3.5.0, passing HTML containing \u003coption\u003e elements from untrusted sources - even after sanitizing it - to one of jQuery\u0027s DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11023","cvss3Severity":"medium","cvss3Score":"6.1","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> --> | True | CVE-2020-11023 (Medium) detected in jquery-3.3.1.tgz - ## CVE-2020-11023 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-3.3.1.tgz</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://registry.npmjs.org/jquery/-/jquery-3.3.1.tgz">https://registry.npmjs.org/jquery/-/jquery-3.3.1.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/NatRepository/docs/package.json</p>
<p>Path to vulnerable library: /NatRepository/docs/node_modules/jquery/package.json</p>
<p>
Dependency Hierarchy:
- :x: **jquery-3.3.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/NatalyaDalid/NatRepository/commit/d5855b917e28b880e479d9131093e8937cf1b61c">d5855b917e28b880e479d9131093e8937cf1b61c</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In jQuery versions greater than or equal to 1.0.3 and before 3.5.0, passing HTML containing <option> elements from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.
<p>Publish Date: 2020-04-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11023>CVE-2020-11023</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-jpcq-cgw6-v4j6">https://github.com/advisories/GHSA-jpcq-cgw6-v4j6</a></p>
<p>Release Date: 2020-04-29</p>
<p>Fix Resolution: 3.5.0</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"jquery","packageVersion":"3.3.1","isTransitiveDependency":false,"dependencyTree":"jquery:3.3.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"3.5.0"}],"vulnerabilityIdentifier":"CVE-2020-11023","vulnerabilityDetails":"In jQuery versions greater than or equal to 1.0.3 and before 3.5.0, passing HTML containing \u003coption\u003e elements from untrusted sources - even after sanitizing it - to one of jQuery\u0027s DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11023","cvss3Severity":"medium","cvss3Score":"6.1","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> --> | non_main | cve medium detected in jquery tgz cve medium severity vulnerability vulnerable library jquery tgz javascript library for dom operations library home page a href path to dependency file tmp ws scm natrepository docs package json path to vulnerable library natrepository docs node modules jquery package json dependency hierarchy x jquery tgz vulnerable library found in head commit a href vulnerability details in jquery versions greater than or equal to and before passing html containing elements from untrusted sources even after sanitizing it to one of jquery s dom manipulation methods i e html append and others may execute untrusted code this problem is patched in jquery publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution check this box to open an automated fix pr isopenpronvulnerability false ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails in jquery versions greater than or equal to and before passing html containing elements from untrusted sources even after sanitizing it to one of jquery dom manipulation methods i e html append and others may execute untrusted code this problem is patched in jquery vulnerabilityurl | 0 |
3,702 | 15,108,728,084 | IssuesEvent | 2021-02-08 16:58:23 | ajour/ajour | https://api.github.com/repos/ajour/ajour | closed | GitHub releases perpetually show as update available | B - bug C - waiting on maintainer | **Describe the bug**
Certain GitHub releases with a version number mismatch compared to the actual add-on will forever be stuck as having updates available.
**To Reproduce**
Steps to reproduce the behavior:
1. Install either of these two add-ons via GitHub URLs: https://github.com/Kruithne/TransmogTokens - https://github.com/Vicious-wow/XIV_Databar
2. Refresh the add-on list
3. See issue
**Expected behavior**
Ajour should either store a time stamp of the last time the add-on was updated and compare that time stamp with the GitHub release, or potentially store the hash of the zip file (assuming the hash is available from the GitHub API when querying for the latest tagged release).
That way, the add-on will only show up as having a mismatched version if there's a material difference in the release zip file.
**Screenshots**

**Software involved**
Please complete the following information:
- OS: Windows 10
- Ajour version: 0.6.3
- Add-ons: TransmogTokens, XIV_Databar, potentially others
**Additional context**
I realise that this could be solved by the repository owners fixing their version numbering, but both of those repositories have authors who are either combative (XIV_Databar) or not very present (TransmogTokens). It would be preferable if this could be handled on Ajour's end.
**Log Output**
```
14:11:08.292 [ajour][DEBUG] Ajour updated successfully
14:11:08.292 [ajour][INFO] Ajour 0.6.3 has started.
14:11:08.293 [ajour::gui][DEBUG] config loaded:
Config {
wow: Wow {
directory: Some(
"X:\\World of Warcraft",
),
flavor: Retail,
},
addons: Addons {
global_release_channel: Stable,
ignored: {
Retail: [
"TradeSkillMaster",
"TradeSkillMaster_AppHelper",
"RaiderIO",
"+Wowhead_Looter",
],
},
release_channels: {
Retail: {},
},
delete_saved_variables: false,
},
theme: None,
column_config: V3 {
my_addons_columns: [
ColumnConfigV2 {
key: "title",
width: None,
hidden: false,
},
ColumnConfigV2 {
key: "local",
width: Some(
150,
),
hidden: false,
},
ColumnConfigV2 {
key: "remote",
width: Some(
150,
),
hidden: false,
},
ColumnConfigV2 {
key: "status",
width: Some(
85,
),
hidden: false,
},
ColumnConfigV2 {
key: "channel",
width: Some(
85,
),
hidden: false,
},
ColumnConfigV2 {
key: "author",
width: Some(
85,
),
hidden: false,
},
ColumnConfigV2 {
key: "game_version",
width: Some(
110,
),
hidden: false,
},
ColumnConfigV2 {
key: "date_released",
width: Some(
110,
),
hidden: false,
},
ColumnConfigV2 {
key: "source",
width: Some(
110,
),
hidden: false,
},
],
catalog_columns: [
ColumnConfigV2 {
key: "addon",
width: None,
hidden: false,
},
ColumnConfigV2 {
key: "description",
width: Some(
150,
),
hidden: false,
},
ColumnConfigV2 {
key: "source",
width: Some(
110,
),
hidden: false,
},
ColumnConfigV2 {
key: "num_downloads",
width: Some(
105,
),
hidden: false,
},
ColumnConfigV2 {
key: "game_version",
width: Some(
105,
),
hidden: false,
},
ColumnConfigV2 {
key: "date_released",
width: Some(
105,
),
hidden: false,
},
ColumnConfigV2 {
key: "install",
width: Some(
85,
),
hidden: false,
},
],
aura_columns: [
ColumnConfigV2 {
key: "title",
width: None,
hidden: false,
},
ColumnConfigV2 {
key: "local",
width: Some(
120,
),
hidden: false,
},
ColumnConfigV2 {
key: "remote",
width: Some(
120,
),
hidden: false,
},
ColumnConfigV2 {
key: "author",
width: Some(
85,
),
hidden: false,
},
ColumnConfigV2 {
key: "status",
width: Some(
110,
),
hidden: false,
},
],
},
window_size: Some(
(
2048,
1089,
),
),
scale: None,
backup_directory: None,
backup_addons: false,
backup_wtf: false,
hide_ignored_addons: false,
self_update_channel: Stable,
weak_auras_account: {
Retail: "<>",
},
alternating_row_colors: true,
language: English,
}
14:11:08.293 [ajour::gui][DEBUG] antialiasing: true
14:11:08.575 [ajour_core::utility][DEBUG] checking for application update
14:11:08.575 [ajour_core::theme][DEBUG] loading user themes
14:11:08.577 [ajour_core::fs::theme][DEBUG] loaded 0 user themes
14:11:08.642 [ajour::gui::update][DEBUG] Message::ThemesLoaded(0 themes)
14:11:08.643 [ajour::gui::update][DEBUG] Message::CachesLoaded(error: false)
14:11:08.647 [ajour::gui::update][DEBUG] Message::Parse
14:11:08.647 [ajour::gui::update][DEBUG] preparing to parse addons in "X:\\World of Warcraft\\_retail_\\Interface/AddOns"
14:11:08.647 [ajour::gui::update][DEBUG] preparing to parse addons in "X:\\World of Warcraft\\_ptr_\\Interface/AddOns"
14:11:08.648 [ajour::gui::update][DEBUG] preparing to parse addons in "X:\\World of Warcraft\\_beta_\\Interface/AddOns"
14:11:08.648 [ajour::gui::update][DEBUG] preparing to parse addons in "X:\\World of Warcraft\\_classic_\\Interface/AddOns"
14:11:08.648 [ajour::gui::update][DEBUG] preparing to parse addons in "X:\\World of Warcraft\\_classic_ptr_\\Interface/AddOns"
14:11:08.648 [ajour_core::parse][DEBUG] Retail PTR - parsing addons folder
14:11:08.648 [ajour_core::parse][DEBUG] Retail Beta - parsing addons folder
14:11:08.648 [ajour_core::parse][DEBUG] Retail - parsing addons folder
14:11:08.648 [ajour_core::parse][DEBUG] Classic PTR - parsing addons folder
14:11:08.648 [ajour_core::parse][DEBUG] Classic - parsing addons folder
14:11:08.649 [ajour_core::parse][DEBUG] Retail - 261 folders in AddOns directory to parse
14:11:08.651 [ajour::gui::update][DEBUG] Message::CheckWeakAurasInstalled(Classic PTR, is_installed: false)
14:11:08.651 [ajour::gui::update][DEBUG] Message::CheckWeakAurasInstalled(Retail, is_installed: true)
14:11:08.651 [ajour::gui::update][DEBUG] Message::CheckWeakAurasInstalled(Retail Beta, is_installed: false)
14:11:08.651 [ajour::gui::update][DEBUG] Message::CheckWeakAurasInstalled(Classic, is_installed: false)
14:11:08.651 [ajour::gui::update][DEBUG] Message::CheckWeakAurasInstalled(Retail PTR, is_installed: false)
14:11:08.653 [ajour::gui::update][DEBUG] Message::ListWeakAurasAccounts(Retail, num_accounts: 1)
14:11:08.656 [ajour][ERROR] Failed to parse addons
14:11:08.656 [ajour][ERROR] caused by: Addon directory not found: "X:\\World of Warcraft\\_ptr_\\Interface/AddOns"
14:11:08.663 [ajour][ERROR] Failed to parse addons
14:11:08.663 [ajour][ERROR] caused by: Addon directory not found: "X:\\World of Warcraft\\_beta_\\Interface/AddOns"
14:11:08.669 [ajour][ERROR] Failed to parse addons
14:11:08.669 [ajour][ERROR] caused by: Addon directory not found: "X:\\World of Warcraft\\_classic_ptr_\\Interface/AddOns"
14:11:08.676 [ajour][ERROR] Failed to parse addons
14:11:08.676 [ajour][ERROR] caused by: Addon directory not found: "X:\\World of Warcraft\\_classic_\\Interface/AddOns"
14:11:08.677 [ajour_core::parse][DEBUG] Retail - 261 fingerprints: 261 cached, 0 calculated, 0 added, 0 removed
14:11:08.692 [ajour_core::parse][DEBUG] Retail - 261 addon folders successfully parsed from '.toc'
14:11:08.692 [ajour_core::parse][DEBUG] Retail - 5 valid cache entries retrieved
14:11:08.692 [ajour_core::parse][DEBUG] Retail - 256 unique fingerprints to check against curse api
14:11:08.818 [ajour::gui::update][DEBUG] Message::LatestRelease(Some("0.6.3"))
14:11:09.413 [ajour_core::catalog][DEBUG] Successfully fetched and parsed https://github.com/casperstorm/ajour-catalog/releases/latest/download/tukui.json
14:11:09.796 [ajour::gui::update][DEBUG] Message::ParsedAuras(Retail, num_auras: 23)
14:11:10.227 [ajour_core::catalog][DEBUG] Successfully fetched and parsed https://github.com/casperstorm/ajour-catalog/releases/latest/download/wowi.json
14:11:10.419 [ajour_core::catalog][DEBUG] Successfully fetched and parsed https://github.com/casperstorm/ajour-catalog/releases/latest/download/curse.json
14:11:10.459 [ajour::gui::update][DEBUG] Message::CatalogDownloaded(15822 addons in catalog)
14:11:11.857 [ajour_core::parse][DEBUG] Retail - 103 curse packages fetched
14:11:12.031 [ajour_core::parse][DEBUG] Retail - 3 tukui packages fetched
14:11:12.566 [ajour_core::parse][DEBUG] Retail - 27 wowi packages fetched
14:11:13.138 [ajour_core::parse][DEBUG] Retail - 3 git packages fetched
14:11:13.139 [ajour_core::parse][DEBUG] Retail - 103 addons built from curse packages
14:11:13.139 [ajour_core::parse][DEBUG] Retail - 3 addons built from tukui packages
14:11:13.139 [ajour_core::parse][DEBUG] Retail - 0 addons built from wowi packages
14:11:13.139 [ajour_core::parse][DEBUG] Retail - 3 addons built from git packages
14:11:13.139 [ajour_core::parse][DEBUG] Retail - 9 unknown addon folders
14:11:13.139 [ajour_core::parse][DEBUG] Retail - 118 addons successfully parsed
14:11:13.142 [ajour::gui::update][DEBUG] Message::ParsedAddons(Retail, 118 addons)
14:11:21.691 [ajour::gui::update][DEBUG] Interaction::ModeSelected(MyWeakAuras(Retail))
14:11:25.757 [ajour::gui::update][DEBUG] Interaction::Refresh(My WeakAuras)
14:11:27.213 [ajour::gui::update][DEBUG] Message::ParsedAuras(Retail, num_auras: 23)
14:11:27.819 [ajour::gui::update][DEBUG] Interaction::Refresh(My WeakAuras)
14:11:29.203 [ajour::gui::update][DEBUG] Message::ParsedAuras(Retail, num_auras: 23)
14:11:29.623 [ajour::gui::update][DEBUG] Interaction::Refresh(My WeakAuras)
14:11:33.673 [ajour::gui::update][DEBUG] Message::ParsedAuras(Retail, num_auras: 23)
14:11:50.646 [ajour::gui::update][DEBUG] Interaction::ModeSelected(MyAddons(Retail))
14:12:04.205 [ajour::gui::update][DEBUG] Interaction::ModeSelected(MyWeakAuras(Retail))
14:13:22.576 [ajour::gui::update][DEBUG] Interaction::ModeSelected(MyAddons(Retail))
14:13:27.715 [ajour::gui::update][DEBUG] Interaction::Expand(Details("XIV_Databar"))
14:13:30.600 [ajour::gui::update][DEBUG] Interaction::OpenLink(https://github.com/Vicious-wow/XIV_Databar)
14:13:37.043 [ajour::gui::update][DEBUG] Interaction::Update(XIV_Databar)
14:13:37.043 [ajour_core::network][DEBUG] downloading remote version v9.02.001 for XIV_Databar
14:13:38.372 [ajour::gui::update][DEBUG] Message::DownloadedAddon((Retail, XIV_Databar, error: false))
14:13:39.080 [ajour::gui::update][DEBUG] Message::UnpackedAddon((XIV_Databar, error: false))
14:13:39.080 [ajour_core::parse][DEBUG] Retail - updating fingerprint for XIV_Databar
14:13:39.089 [ajour::gui::update][DEBUG] Message::AddonCacheUpdated(XIV_Databar)
14:13:39.109 [ajour::gui::update][DEBUG] Message::UpdateFingerprint((Retail, XIV_Databar, error: false))
14:13:42.034 [ajour::gui::update][DEBUG] Interaction::Refresh(My Addons)
14:13:42.037 [ajour::gui::update][DEBUG] Message::Parse
14:13:42.037 [ajour::gui::update][DEBUG] preparing to parse addons in "X:\\World of Warcraft\\_retail_\\Interface/AddOns"
14:13:42.037 [ajour::gui::update][DEBUG] preparing to parse addons in "X:\\World of Warcraft\\_ptr_\\Interface/AddOns"
14:13:42.037 [ajour::gui::update][DEBUG] preparing to parse addons in "X:\\World of Warcraft\\_beta_\\Interface/AddOns"
14:13:42.037 [ajour::gui::update][DEBUG] preparing to parse addons in "X:\\World of Warcraft\\_classic_\\Interface/AddOns"
14:13:42.037 [ajour::gui::update][DEBUG] preparing to parse addons in "X:\\World of Warcraft\\_classic_ptr_\\Interface/AddOns"
14:13:42.037 [ajour_core::parse][DEBUG] Retail PTR - parsing addons folder
14:13:42.037 [ajour_core::parse][DEBUG] Classic - parsing addons folder
14:13:42.037 [ajour_core::parse][DEBUG] Retail Beta - parsing addons folder
14:13:42.037 [ajour_core::parse][DEBUG] Classic PTR - parsing addons folder
14:13:42.037 [ajour_core::parse][DEBUG] Retail - parsing addons folder
14:13:42.038 [ajour_core::parse][DEBUG] Retail - 261 folders in AddOns directory to parse
14:13:42.038 [ajour::gui::update][DEBUG] Message::CheckWeakAurasInstalled(Retail, is_installed: true)
14:13:42.038 [ajour::gui::update][DEBUG] Message::CheckWeakAurasInstalled(Classic PTR, is_installed: false)
14:13:42.038 [ajour::gui::update][DEBUG] Message::CheckWeakAurasInstalled(Retail PTR, is_installed: false)
14:13:42.038 [ajour::gui::update][DEBUG] Message::CheckWeakAurasInstalled(Classic, is_installed: false)
14:13:42.038 [ajour::gui::update][DEBUG] Message::CheckWeakAurasInstalled(Retail Beta, is_installed: false)
14:13:42.039 [ajour::gui::update][DEBUG] Message::ListWeakAurasAccounts(Retail, num_accounts: 1)
14:13:42.043 [ajour][ERROR] Failed to parse addons
14:13:42.043 [ajour][ERROR] caused by: Addon directory not found: "X:\\World of Warcraft\\_ptr_\\Interface/AddOns"
14:13:42.049 [ajour][ERROR] Failed to parse addons
14:13:42.049 [ajour][ERROR] caused by: Addon directory not found: "X:\\World of Warcraft\\_classic_\\Interface/AddOns"
14:13:42.055 [ajour][ERROR] Failed to parse addons
14:13:42.055 [ajour][ERROR] caused by: Addon directory not found: "X:\\World of Warcraft\\_beta_\\Interface/AddOns"
14:13:42.060 [ajour][ERROR] Failed to parse addons
14:13:42.060 [ajour][ERROR] caused by: Addon directory not found: "X:\\World of Warcraft\\_classic_ptr_\\Interface/AddOns"
14:13:42.061 [ajour_core::parse][DEBUG] Retail - 261 fingerprints: 261 cached, 0 calculated, 0 added, 0 removed
14:13:42.072 [ajour_core::parse][DEBUG] Retail - 261 addon folders successfully parsed from '.toc'
14:13:42.072 [ajour_core::parse][DEBUG] Retail - 5 valid cache entries retrieved
14:13:42.072 [ajour_core::parse][DEBUG] Retail - 256 unique fingerprints to check against curse api
14:13:44.642 [ajour::gui::update][DEBUG] Message::ParsedAuras(Retail, num_auras: 23)
14:13:45.411 [ajour_core::parse][DEBUG] Retail - 103 curse packages fetched
14:13:45.581 [ajour_core::parse][DEBUG] Retail - 3 tukui packages fetched
14:13:45.901 [ajour_core::parse][DEBUG] Retail - 27 wowi packages fetched
14:13:46.509 [ajour_core::parse][DEBUG] Retail - 3 git packages fetched
14:13:46.510 [ajour_core::parse][DEBUG] Retail - 103 addons built from curse packages
14:13:46.510 [ajour_core::parse][DEBUG] Retail - 3 addons built from tukui packages
14:13:46.510 [ajour_core::parse][DEBUG] Retail - 0 addons built from wowi packages
14:13:46.510 [ajour_core::parse][DEBUG] Retail - 3 addons built from git packages
14:13:46.511 [ajour_core::parse][DEBUG] Retail - 9 unknown addon folders
14:13:46.511 [ajour_core::parse][DEBUG] Retail - 118 addons successfully parsed
14:13:46.513 [ajour::gui::update][DEBUG] Message::ParsedAddons(Retail, 118 addons)
14:16:15.619 [ajour::gui::update][DEBUG] Interaction::ModeSelected(About)
14:16:25.579 [ajour::gui::update][DEBUG] Interaction::OpenLink(https://getajour.com)
14:16:31.415 [ajour::gui::update][DEBUG] Interaction::ModeSelected(MyAddons(Retail))
14:16:42.680 [ajour::gui::update][DEBUG] Interaction::ModeSelected(MyWeakAuras(Retail))
14:16:50.530 [ajour::gui::update][DEBUG] Interaction::OpenLink(https://wago.io/Afenar_DK/104)
14:17:17.588 [ajour::gui::update][DEBUG] Interaction::ModeSelected(MyAddons(Retail))
14:22:24.253 [ajour::gui::update][DEBUG] Interaction::Expand(Details("TransmogTokens"))
14:22:25.714 [ajour::gui::update][DEBUG] Interaction::OpenLink(https://github.com/Kruithne/TransmogTokens)
14:22:50.888 [ajour::gui::update][DEBUG] Interaction::Expand(Details("XIV_Databar"))
14:22:52.907 [ajour::gui::update][DEBUG] Interaction::OpenLink(https://github.com/Vicious-wow/XIV_Databar)
14:23:06.491 [ajour::gui::update][DEBUG] Interaction::Expand(Details("XIV_Databar"))
```
| True | GitHub releases perpetually show as update available - **Describe the bug**
Certain GitHub releases with a version number mismatch compared to the actual add-on will forever be stuck as having updates available.
**To Reproduce**
Steps to reproduce the behavior:
1. Install either of these two add-ons via GitHub URLs: https://github.com/Kruithne/TransmogTokens - https://github.com/Vicious-wow/XIV_Databar
2. Refresh the add-on list
3. See issue
**Expected behavior**
Ajour should either store a time stamp of the last time the add-on was updated and compare that time stamp with the GitHub release, or potentially store the hash of the zip file (assuming the hash is available from the GitHub API when querying for the latest tagged release).
That way, the add-on will only show up as having a mismatched version if there's a material difference in the release zip file.
**Screenshots**

**Software involved**
Please complete the following information:
- OS: Windows 10
- Ajour version: 0.6.3
- Add-ons: TransmogTokens, XIV_Databar, potentially others
**Additional context**
I realise that this could be solved by the repository owners fixing their version numbering, but both of those repositories have authors who are either combative (XIV_Databar) or not very present (TransmogTokens). It would be preferable if this could be handled on Ajour's end.
**Log Output**
```
14:11:08.292 [ajour][DEBUG] Ajour updated successfully
14:11:08.292 [ajour][INFO] Ajour 0.6.3 has started.
14:11:08.293 [ajour::gui][DEBUG] config loaded:
Config {
wow: Wow {
directory: Some(
"X:\\World of Warcraft",
),
flavor: Retail,
},
addons: Addons {
global_release_channel: Stable,
ignored: {
Retail: [
"TradeSkillMaster",
"TradeSkillMaster_AppHelper",
"RaiderIO",
"+Wowhead_Looter",
],
},
release_channels: {
Retail: {},
},
delete_saved_variables: false,
},
theme: None,
column_config: V3 {
my_addons_columns: [
ColumnConfigV2 {
key: "title",
width: None,
hidden: false,
},
ColumnConfigV2 {
key: "local",
width: Some(
150,
),
hidden: false,
},
ColumnConfigV2 {
key: "remote",
width: Some(
150,
),
hidden: false,
},
ColumnConfigV2 {
key: "status",
width: Some(
85,
),
hidden: false,
},
ColumnConfigV2 {
key: "channel",
width: Some(
85,
),
hidden: false,
},
ColumnConfigV2 {
key: "author",
width: Some(
85,
),
hidden: false,
},
ColumnConfigV2 {
key: "game_version",
width: Some(
110,
),
hidden: false,
},
ColumnConfigV2 {
key: "date_released",
width: Some(
110,
),
hidden: false,
},
ColumnConfigV2 {
key: "source",
width: Some(
110,
),
hidden: false,
},
],
catalog_columns: [
ColumnConfigV2 {
key: "addon",
width: None,
hidden: false,
},
ColumnConfigV2 {
key: "description",
width: Some(
150,
),
hidden: false,
},
ColumnConfigV2 {
key: "source",
width: Some(
110,
),
hidden: false,
},
ColumnConfigV2 {
key: "num_downloads",
width: Some(
105,
),
hidden: false,
},
ColumnConfigV2 {
key: "game_version",
width: Some(
105,
),
hidden: false,
},
ColumnConfigV2 {
key: "date_released",
width: Some(
105,
),
hidden: false,
},
ColumnConfigV2 {
key: "install",
width: Some(
85,
),
hidden: false,
},
],
aura_columns: [
ColumnConfigV2 {
key: "title",
width: None,
hidden: false,
},
ColumnConfigV2 {
key: "local",
width: Some(
120,
),
hidden: false,
},
ColumnConfigV2 {
key: "remote",
width: Some(
120,
),
hidden: false,
},
ColumnConfigV2 {
key: "author",
width: Some(
85,
),
hidden: false,
},
ColumnConfigV2 {
key: "status",
width: Some(
110,
),
hidden: false,
},
],
},
window_size: Some(
(
2048,
1089,
),
),
scale: None,
backup_directory: None,
backup_addons: false,
backup_wtf: false,
hide_ignored_addons: false,
self_update_channel: Stable,
weak_auras_account: {
Retail: "<>",
},
alternating_row_colors: true,
language: English,
}
14:11:08.293 [ajour::gui][DEBUG] antialiasing: true
14:11:08.575 [ajour_core::utility][DEBUG] checking for application update
14:11:08.575 [ajour_core::theme][DEBUG] loading user themes
14:11:08.577 [ajour_core::fs::theme][DEBUG] loaded 0 user themes
14:11:08.642 [ajour::gui::update][DEBUG] Message::ThemesLoaded(0 themes)
14:11:08.643 [ajour::gui::update][DEBUG] Message::CachesLoaded(error: false)
14:11:08.647 [ajour::gui::update][DEBUG] Message::Parse
14:11:08.647 [ajour::gui::update][DEBUG] preparing to parse addons in "X:\\World of Warcraft\\_retail_\\Interface/AddOns"
14:11:08.647 [ajour::gui::update][DEBUG] preparing to parse addons in "X:\\World of Warcraft\\_ptr_\\Interface/AddOns"
14:11:08.648 [ajour::gui::update][DEBUG] preparing to parse addons in "X:\\World of Warcraft\\_beta_\\Interface/AddOns"
14:11:08.648 [ajour::gui::update][DEBUG] preparing to parse addons in "X:\\World of Warcraft\\_classic_\\Interface/AddOns"
14:11:08.648 [ajour::gui::update][DEBUG] preparing to parse addons in "X:\\World of Warcraft\\_classic_ptr_\\Interface/AddOns"
14:11:08.648 [ajour_core::parse][DEBUG] Retail PTR - parsing addons folder
14:11:08.648 [ajour_core::parse][DEBUG] Retail Beta - parsing addons folder
14:11:08.648 [ajour_core::parse][DEBUG] Retail - parsing addons folder
14:11:08.648 [ajour_core::parse][DEBUG] Classic PTR - parsing addons folder
14:11:08.648 [ajour_core::parse][DEBUG] Classic - parsing addons folder
14:11:08.649 [ajour_core::parse][DEBUG] Retail - 261 folders in AddOns directory to parse
14:11:08.651 [ajour::gui::update][DEBUG] Message::CheckWeakAurasInstalled(Classic PTR, is_installed: false)
14:11:08.651 [ajour::gui::update][DEBUG] Message::CheckWeakAurasInstalled(Retail, is_installed: true)
14:11:08.651 [ajour::gui::update][DEBUG] Message::CheckWeakAurasInstalled(Retail Beta, is_installed: false)
14:11:08.651 [ajour::gui::update][DEBUG] Message::CheckWeakAurasInstalled(Classic, is_installed: false)
14:11:08.651 [ajour::gui::update][DEBUG] Message::CheckWeakAurasInstalled(Retail PTR, is_installed: false)
14:11:08.653 [ajour::gui::update][DEBUG] Message::ListWeakAurasAccounts(Retail, num_accounts: 1)
14:11:08.656 [ajour][ERROR] Failed to parse addons
14:11:08.656 [ajour][ERROR] caused by: Addon directory not found: "X:\\World of Warcraft\\_ptr_\\Interface/AddOns"
14:11:08.663 [ajour][ERROR] Failed to parse addons
14:11:08.663 [ajour][ERROR] caused by: Addon directory not found: "X:\\World of Warcraft\\_beta_\\Interface/AddOns"
14:11:08.669 [ajour][ERROR] Failed to parse addons
14:11:08.669 [ajour][ERROR] caused by: Addon directory not found: "X:\\World of Warcraft\\_classic_ptr_\\Interface/AddOns"
14:11:08.676 [ajour][ERROR] Failed to parse addons
14:11:08.676 [ajour][ERROR] caused by: Addon directory not found: "X:\\World of Warcraft\\_classic_\\Interface/AddOns"
14:11:08.677 [ajour_core::parse][DEBUG] Retail - 261 fingerprints: 261 cached, 0 calculated, 0 added, 0 removed
14:11:08.692 [ajour_core::parse][DEBUG] Retail - 261 addon folders successfully parsed from '.toc'
14:11:08.692 [ajour_core::parse][DEBUG] Retail - 5 valid cache entries retrieved
14:11:08.692 [ajour_core::parse][DEBUG] Retail - 256 unique fingerprints to check against curse api
14:11:08.818 [ajour::gui::update][DEBUG] Message::LatestRelease(Some("0.6.3"))
14:11:09.413 [ajour_core::catalog][DEBUG] Successfully fetched and parsed https://github.com/casperstorm/ajour-catalog/releases/latest/download/tukui.json
14:11:09.796 [ajour::gui::update][DEBUG] Message::ParsedAuras(Retail, num_auras: 23)
14:11:10.227 [ajour_core::catalog][DEBUG] Successfully fetched and parsed https://github.com/casperstorm/ajour-catalog/releases/latest/download/wowi.json
14:11:10.419 [ajour_core::catalog][DEBUG] Successfully fetched and parsed https://github.com/casperstorm/ajour-catalog/releases/latest/download/curse.json
14:11:10.459 [ajour::gui::update][DEBUG] Message::CatalogDownloaded(15822 addons in catalog)
14:11:11.857 [ajour_core::parse][DEBUG] Retail - 103 curse packages fetched
14:11:12.031 [ajour_core::parse][DEBUG] Retail - 3 tukui packages fetched
14:11:12.566 [ajour_core::parse][DEBUG] Retail - 27 wowi packages fetched
14:11:13.138 [ajour_core::parse][DEBUG] Retail - 3 git packages fetched
14:11:13.139 [ajour_core::parse][DEBUG] Retail - 103 addons built from curse packages
14:11:13.139 [ajour_core::parse][DEBUG] Retail - 3 addons built from tukui packages
14:11:13.139 [ajour_core::parse][DEBUG] Retail - 0 addons built from wowi packages
14:11:13.139 [ajour_core::parse][DEBUG] Retail - 3 addons built from git packages
14:11:13.139 [ajour_core::parse][DEBUG] Retail - 9 unknown addon folders
14:11:13.139 [ajour_core::parse][DEBUG] Retail - 118 addons successfully parsed
14:11:13.142 [ajour::gui::update][DEBUG] Message::ParsedAddons(Retail, 118 addons)
14:11:21.691 [ajour::gui::update][DEBUG] Interaction::ModeSelected(MyWeakAuras(Retail))
14:11:25.757 [ajour::gui::update][DEBUG] Interaction::Refresh(My WeakAuras)
14:11:27.213 [ajour::gui::update][DEBUG] Message::ParsedAuras(Retail, num_auras: 23)
14:11:27.819 [ajour::gui::update][DEBUG] Interaction::Refresh(My WeakAuras)
14:11:29.203 [ajour::gui::update][DEBUG] Message::ParsedAuras(Retail, num_auras: 23)
14:11:29.623 [ajour::gui::update][DEBUG] Interaction::Refresh(My WeakAuras)
14:11:33.673 [ajour::gui::update][DEBUG] Message::ParsedAuras(Retail, num_auras: 23)
14:11:50.646 [ajour::gui::update][DEBUG] Interaction::ModeSelected(MyAddons(Retail))
14:12:04.205 [ajour::gui::update][DEBUG] Interaction::ModeSelected(MyWeakAuras(Retail))
14:13:22.576 [ajour::gui::update][DEBUG] Interaction::ModeSelected(MyAddons(Retail))
14:13:27.715 [ajour::gui::update][DEBUG] Interaction::Expand(Details("XIV_Databar"))
14:13:30.600 [ajour::gui::update][DEBUG] Interaction::OpenLink(https://github.com/Vicious-wow/XIV_Databar)
14:13:37.043 [ajour::gui::update][DEBUG] Interaction::Update(XIV_Databar)
14:13:37.043 [ajour_core::network][DEBUG] downloading remote version v9.02.001 for XIV_Databar
14:13:38.372 [ajour::gui::update][DEBUG] Message::DownloadedAddon((Retail, XIV_Databar, error: false))
14:13:39.080 [ajour::gui::update][DEBUG] Message::UnpackedAddon((XIV_Databar, error: false))
14:13:39.080 [ajour_core::parse][DEBUG] Retail - updating fingerprint for XIV_Databar
14:13:39.089 [ajour::gui::update][DEBUG] Message::AddonCacheUpdated(XIV_Databar)
14:13:39.109 [ajour::gui::update][DEBUG] Message::UpdateFingerprint((Retail, XIV_Databar, error: false))
14:13:42.034 [ajour::gui::update][DEBUG] Interaction::Refresh(My Addons)
14:13:42.037 [ajour::gui::update][DEBUG] Message::Parse
14:13:42.037 [ajour::gui::update][DEBUG] preparing to parse addons in "X:\\World of Warcraft\\_retail_\\Interface/AddOns"
14:13:42.037 [ajour::gui::update][DEBUG] preparing to parse addons in "X:\\World of Warcraft\\_ptr_\\Interface/AddOns"
14:13:42.037 [ajour::gui::update][DEBUG] preparing to parse addons in "X:\\World of Warcraft\\_beta_\\Interface/AddOns"
14:13:42.037 [ajour::gui::update][DEBUG] preparing to parse addons in "X:\\World of Warcraft\\_classic_\\Interface/AddOns"
14:13:42.037 [ajour::gui::update][DEBUG] preparing to parse addons in "X:\\World of Warcraft\\_classic_ptr_\\Interface/AddOns"
14:13:42.037 [ajour_core::parse][DEBUG] Retail PTR - parsing addons folder
14:13:42.037 [ajour_core::parse][DEBUG] Classic - parsing addons folder
14:13:42.037 [ajour_core::parse][DEBUG] Retail Beta - parsing addons folder
14:13:42.037 [ajour_core::parse][DEBUG] Classic PTR - parsing addons folder
14:13:42.037 [ajour_core::parse][DEBUG] Retail - parsing addons folder
14:13:42.038 [ajour_core::parse][DEBUG] Retail - 261 folders in AddOns directory to parse
14:13:42.038 [ajour::gui::update][DEBUG] Message::CheckWeakAurasInstalled(Retail, is_installed: true)
14:13:42.038 [ajour::gui::update][DEBUG] Message::CheckWeakAurasInstalled(Classic PTR, is_installed: false)
14:13:42.038 [ajour::gui::update][DEBUG] Message::CheckWeakAurasInstalled(Retail PTR, is_installed: false)
14:13:42.038 [ajour::gui::update][DEBUG] Message::CheckWeakAurasInstalled(Classic, is_installed: false)
14:13:42.038 [ajour::gui::update][DEBUG] Message::CheckWeakAurasInstalled(Retail Beta, is_installed: false)
14:13:42.039 [ajour::gui::update][DEBUG] Message::ListWeakAurasAccounts(Retail, num_accounts: 1)
14:13:42.043 [ajour][ERROR] Failed to parse addons
14:13:42.043 [ajour][ERROR] caused by: Addon directory not found: "X:\\World of Warcraft\\_ptr_\\Interface/AddOns"
14:13:42.049 [ajour][ERROR] Failed to parse addons
14:13:42.049 [ajour][ERROR] caused by: Addon directory not found: "X:\\World of Warcraft\\_classic_\\Interface/AddOns"
14:13:42.055 [ajour][ERROR] Failed to parse addons
14:13:42.055 [ajour][ERROR] caused by: Addon directory not found: "X:\\World of Warcraft\\_beta_\\Interface/AddOns"
14:13:42.060 [ajour][ERROR] Failed to parse addons
14:13:42.060 [ajour][ERROR] caused by: Addon directory not found: "X:\\World of Warcraft\\_classic_ptr_\\Interface/AddOns"
14:13:42.061 [ajour_core::parse][DEBUG] Retail - 261 fingerprints: 261 cached, 0 calculated, 0 added, 0 removed
14:13:42.072 [ajour_core::parse][DEBUG] Retail - 261 addon folders successfully parsed from '.toc'
14:13:42.072 [ajour_core::parse][DEBUG] Retail - 5 valid cache entries retrieved
14:13:42.072 [ajour_core::parse][DEBUG] Retail - 256 unique fingerprints to check against curse api
14:13:44.642 [ajour::gui::update][DEBUG] Message::ParsedAuras(Retail, num_auras: 23)
14:13:45.411 [ajour_core::parse][DEBUG] Retail - 103 curse packages fetched
14:13:45.581 [ajour_core::parse][DEBUG] Retail - 3 tukui packages fetched
14:13:45.901 [ajour_core::parse][DEBUG] Retail - 27 wowi packages fetched
14:13:46.509 [ajour_core::parse][DEBUG] Retail - 3 git packages fetched
14:13:46.510 [ajour_core::parse][DEBUG] Retail - 103 addons built from curse packages
14:13:46.510 [ajour_core::parse][DEBUG] Retail - 3 addons built from tukui packages
14:13:46.510 [ajour_core::parse][DEBUG] Retail - 0 addons built from wowi packages
14:13:46.510 [ajour_core::parse][DEBUG] Retail - 3 addons built from git packages
14:13:46.511 [ajour_core::parse][DEBUG] Retail - 9 unknown addon folders
14:13:46.511 [ajour_core::parse][DEBUG] Retail - 118 addons successfully parsed
14:13:46.513 [ajour::gui::update][DEBUG] Message::ParsedAddons(Retail, 118 addons)
14:16:15.619 [ajour::gui::update][DEBUG] Interaction::ModeSelected(About)
14:16:25.579 [ajour::gui::update][DEBUG] Interaction::OpenLink(https://getajour.com)
14:16:31.415 [ajour::gui::update][DEBUG] Interaction::ModeSelected(MyAddons(Retail))
14:16:42.680 [ajour::gui::update][DEBUG] Interaction::ModeSelected(MyWeakAuras(Retail))
14:16:50.530 [ajour::gui::update][DEBUG] Interaction::OpenLink(https://wago.io/Afenar_DK/104)
14:17:17.588 [ajour::gui::update][DEBUG] Interaction::ModeSelected(MyAddons(Retail))
14:22:24.253 [ajour::gui::update][DEBUG] Interaction::Expand(Details("TransmogTokens"))
14:22:25.714 [ajour::gui::update][DEBUG] Interaction::OpenLink(https://github.com/Kruithne/TransmogTokens)
14:22:50.888 [ajour::gui::update][DEBUG] Interaction::Expand(Details("XIV_Databar"))
14:22:52.907 [ajour::gui::update][DEBUG] Interaction::OpenLink(https://github.com/Vicious-wow/XIV_Databar)
14:23:06.491 [ajour::gui::update][DEBUG] Interaction::Expand(Details("XIV_Databar"))
```
| main | github releases perpetually show as update available describe the bug certain github releases with a version number mismatch compared to the actual add on will forever be stuck as having updates available to reproduce steps to reproduce the behavior install either of these two add ons via github urls refresh the add on list see issue expected behavior ajour should either store a time stamp of the last time the add on was updated and that this time stamp would be compared with the github release or potentially store the hash of the zip file assuming the hash is available from the github api when querying for the latest tagged release that way the add on will only show up as mismatched version if there s a material difference in the release zip file screenshots software involved please complete the following information os windows ajour version add ons transmogtokens xiv databar potentially others additional context i realise that this could be solved by the repository owners fixing their version numbering but both of those repositories have authors who are either combative xiv databar or not very present transmogtokens it would be preferable if this could be handled on ajour s end log output ajour updated successfully ajour has started config loaded config wow wow directory some x world of warcraft flavor retail addons addons global release channel stable ignored retail tradeskillmaster tradeskillmaster apphelper raiderio wowhead looter release channels retail delete saved variables false theme none column config my addons columns key title width none hidden false key local width some hidden false key remote width some hidden false key status width some hidden false key channel width some hidden false key author width some hidden false key game version width some hidden false key date released width some hidden false key source width some hidden false catalog columns key addon width none hidden false key description width some hidden false key source width some hidden false key num downloads width some hidden false key game version width some hidden false key date released width some hidden false key install width some hidden false aura columns key title width none hidden false key local width some hidden false key remote width some hidden false key author width some hidden false key status width some hidden false window size some scale none backup directory none backup addons false backup wtf false hide ignored addons false self update channel stable weak auras account retail alternating row colors true language english antialiasing true checking for application update loading user themes loaded user themes message themesloaded themes message cachesloaded error false message parse preparing to parse addons in x world of warcraft retail interface addons preparing to parse addons in x world of warcraft ptr interface addons preparing to parse addons in x world of warcraft beta interface addons preparing to parse addons in x world of warcraft classic interface addons preparing to parse addons in x world of warcraft classic ptr interface addons retail ptr parsing addons folder retail beta parsing addons folder retail parsing addons folder classic ptr parsing addons folder classic parsing addons folder retail folders in addons directory to parse message checkweakaurasinstalled classic ptr is installed false message checkweakaurasinstalled retail is installed true message checkweakaurasinstalled retail beta is installed false message checkweakaurasinstalled classic is installed false 
message checkweakaurasinstalled retail ptr is installed false message listweakaurasaccounts retail num accounts failed to parse addons caused by addon directory not found x world of warcraft ptr interface addons failed to parse addons caused by addon directory not found x world of warcraft beta interface addons failed to parse addons caused by addon directory not found x world of warcraft classic ptr interface addons failed to parse addons caused by addon directory not found x world of warcraft classic interface addons retail fingerprints cached calculated added removed retail addon folders successfully parsed from toc retail valid cache entries retrieved retail unique fingerprints to check against curse api message latestrelease some successfully fetched and parsed message parsedauras retail num auras successfully fetched and parsed successfully fetched and parsed message catalogdownloaded addons in catalog retail curse packages fetched retail tukui packages fetched retail wowi packages fetched retail git packages fetched retail addons built from curse packages retail addons built from tukui packages retail addons built from wowi packages retail addons built from git packages retail unknown addon folders retail addons successfully parsed message parsedaddons retail addons interaction modeselected myweakauras retail interaction refresh my weakauras message parsedauras retail num auras interaction refresh my weakauras message parsedauras retail num auras interaction refresh my weakauras message parsedauras retail num auras interaction modeselected myaddons retail interaction modeselected myweakauras retail interaction modeselected myaddons retail interaction expand details xiv databar interaction openlink interaction update xiv databar downloading remote version for xiv databar message downloadedaddon retail xiv databar error false message unpackedaddon xiv databar error false retail updating fingerprint for xiv databar message addoncacheupdated xiv databar message updatefingerprint retail xiv databar error false interaction refresh my addons message parse preparing to parse addons in x world of warcraft retail interface addons preparing to parse addons in x world of warcraft ptr interface addons preparing to parse addons in x world of warcraft beta interface addons preparing to parse addons in x world of warcraft classic interface addons preparing to parse addons in x world of warcraft classic ptr interface addons retail ptr parsing addons folder classic parsing addons folder retail beta parsing addons folder classic ptr parsing addons folder retail parsing addons folder retail folders in addons directory to parse message checkweakaurasinstalled retail is installed true message checkweakaurasinstalled classic ptr is installed false message checkweakaurasinstalled retail ptr is installed false message checkweakaurasinstalled classic is installed false message checkweakaurasinstalled retail beta is installed false message listweakaurasaccounts retail num accounts failed to parse addons caused by addon directory not found x world of warcraft ptr interface addons failed to parse addons caused by addon directory not found x world of warcraft classic interface addons failed to parse addons caused by addon directory not found x world of warcraft beta interface addons failed to parse addons caused by addon directory not found x world of warcraft classic ptr interface addons retail fingerprints cached calculated added removed retail addon folders successfully parsed from toc retail valid cache 
entries retrieved retail unique fingerprints to check against curse api message parsedauras retail num auras retail curse packages fetched retail tukui packages fetched retail wowi packages fetched retail git packages fetched retail addons built from curse packages retail addons built from tukui packages retail addons built from wowi packages retail addons built from git packages retail unknown addon folders retail addons successfully parsed message parsedaddons retail addons interaction modeselected about interaction openlink interaction modeselected myaddons retail interaction modeselected myweakauras retail interaction openlink interaction modeselected myaddons retail interaction expand details transmogtokens interaction openlink interaction expand details xiv databar interaction openlink interaction expand details xiv databar | 1 |
1,843 | 6,577,379,701 | IssuesEvent | 2017-09-12 00:30:16 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | ec2_vpc module vpc creation fails sometimes with invalidvpcid.notfound | affects_2.0 aws bug_report cloud waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
ec2_vpc module
##### ANSIBLE VERSION
```
ansible 2.0.2.0
config file = /home/arlindo/projects/ts/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
##### OS / ENVIRONMENT
Ansible is running on Ubuntu 14.04 managing AWS
##### SUMMARY
ec2_vpc module vpc creation fails sometimes with invalidvpcid.notfound
##### STEPS TO REPRODUCE
Issue is sporadic, therefore can't reproduce at will.
```
- name: VPC | Creating an AWS VPC inside mentioned Region
local_action:
module: ec2_vpc
region: "{{ vpc_region }}"
state: present
cidr_block: "{{ vpc_cidr_block }}"
resource_tags: { "Name":"{{ vpc_name }}" }
subnets: "{{ vpc_subnets }}"
internet_gateway: yes
route_tables: "{{ public_subnet_rt }}"
register: vpc
```
##### EXPECTED RESULTS
For a new VPC to be created in AWS
##### ACTUAL RESULTS
```
TASK [VPC | Creating an AWS VPC inside mentioned Region] ***********************
task path: /home/arlindo/projects/ts/playbooks/aws/tasks/vpc.yml:12
ESTABLISH LOCAL CONNECTION FOR USER: arlindo
localhost EXEC /bin/sh -c '( umask 22 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1463535811.57-271206026537001 `" && echo "` echo $HOME/.ansible/tmp/ansible-tmp-1463535811.57-271206026537001 `" )'
localhost PUT /tmp/tmpJj1mmV TO /home/arlindo/.ansible/tmp/ansible-tmp-1463535811.57-271206026537001/ec2_vpc
localhost EXEC /bin/sh -c 'LANG=en_CA.UTF-8 LC_ALL=en_CA.UTF-8 LC_MESSAGES=en_CA.UTF-8 /usr/bin/python /home/arlindo/.ansible/tmp/ansible-tmp-1463535811.57-271206026537001/ec2_vpc; rm -rf "/home/arlindo/.ansible/tmp/ansible-tmp-1463535811.57-271206026537001/" > /dev/null 2>&1'
An exception occurred during task execution. The full traceback is:
Traceback (most recent call last):
File "/home/arlindo/.ansible/tmp/ansible-tmp-1463535811.57-271206026537001/ec2_vpc", line 2944, in <module>
main()
File "/home/arlindo/.ansible/tmp/ansible-tmp-1463535811.57-271206026537001/ec2_vpc", line 731, in main
(vpc_dict, new_vpc_id, subnets_changed, igw_id, changed) = create_vpc(module, vpc_conn)
File "/home/arlindo/.ansible/tmp/ansible-tmp-1463535811.57-271206026537001/ec2_vpc", line 387, in create_vpc
vpc_conn.create_tags(vpc.id, new_tags)
File "/usr/local/lib/python2.7/dist-packages/boto-2.39.0-py2.7.egg/boto/ec2/connection.py", line 4219, in create_tags
return self.get_status('CreateTags', params, verb='POST')
File "/usr/local/lib/python2.7/dist-packages/boto-2.39.0-py2.7.egg/boto/connection.py", line 1227, in get_status
raise self.ResponseError(response.status, response.reason, body)
boto.exception.EC2ResponseError: EC2ResponseError: 400 Bad Request
<?xml version="1.0" encoding="UTF-8"?>
<Response><Errors><Error><Code>InvalidVpcID.NotFound</Code><Message>The vpc ID 'vpc-7a82401d' does not exist</Message></Error></Errors><RequestID>a0ffcb87-f56e-495f-94ea-893746b8dba8</RequestID></Response>
fatal: [localhost -> localhost]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_name": "ec2_vpc"}, "module_stderr": "Traceback (most recent call last):\n File \"/home/arlindo/.ansible/tmp/ansible-tmp-1463535811.57-271206026537001/ec2_vpc\", line 2944, in <module>\n main()\n File \"/home/arlindo/.ansible/tmp/ansible-tmp-1463535811.57-271206026537001/ec2_vpc\", line 731, in main\n (vpc_dict, new_vpc_id, subnets_changed, igw_id, changed) = create_vpc(module, vpc_conn)\n File \"/home/arlindo/.ansible/tmp/ansible-tmp-1463535811.57-271206026537001/ec2_vpc\", line 387, in create_vpc\n vpc_conn.create_tags(vpc.id, new_tags)\n File \"/usr/local/lib/python2.7/dist-packages/boto-2.39.0-py2.7.egg/boto/ec2/connection.py\", line 4219, in create_tags\n return self.get_status('CreateTags', params, verb='POST')\n File \"/usr/local/lib/python2.7/dist-packages/boto-2.39.0-py2.7.egg/boto/connection.py\", line 1227, in get_status\n raise self.ResponseError(response.status, response.reason, body)\nboto.exception.EC2ResponseError: EC2ResponseError: 400 Bad Request\n<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<Response><Errors><Error><Code>InvalidVpcID.NotFound</Code><Message>The vpc ID 'vpc-7a82401d' does not exist</Message></Error></Errors><RequestID>a0ffcb87-f56e-495f-94ea-893746b8dba8</RequestID></Response>\n", "module_stdout": "", "msg": "MODULE FAILURE", "parsed": false}
to retry, use: --limit @playbooks/provisionaws.retry
```
| True | ec2_vpc module vpc creation fails sometimes with invalidvpcid.notfound - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
ec2_vpc module
##### ANSIBLE VERSION
```
ansible 2.0.2.0
config file = /home/arlindo/projects/ts/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
##### OS / ENVIRONMENT
Ansible is running on Ubuntu 14.04 managing AWS
##### SUMMARY
ec2_vpc module vpc creation fails sometimes with invalidvpcid.notfound
##### STEPS TO REPRODUCE
Issue is sporadic, therefore can't reproduce at will.
```
- name: VPC | Creating an AWS VPC inside mentioned Region
local_action:
module: ec2_vpc
region: "{{ vpc_region }}"
state: present
cidr_block: "{{ vpc_cidr_block }}"
resource_tags: { "Name":"{{ vpc_name }}" }
subnets: "{{ vpc_subnets }}"
internet_gateway: yes
route_tables: "{{ public_subnet_rt }}"
register: vpc
```
##### EXPECTED RESULTS
For a new VPC to be created in AWS
##### ACTUAL RESULTS
```
TASK [VPC | Creating an AWS VPC inside mentioned Region] ***********************
task path: /home/arlindo/projects/ts/playbooks/aws/tasks/vpc.yml:12
ESTABLISH LOCAL CONNECTION FOR USER: arlindo
localhost EXEC /bin/sh -c '( umask 22 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1463535811.57-271206026537001 `" && echo "` echo $HOME/.ansible/tmp/ansible-tmp-1463535811.57-271206026537001 `" )'
localhost PUT /tmp/tmpJj1mmV TO /home/arlindo/.ansible/tmp/ansible-tmp-1463535811.57-271206026537001/ec2_vpc
localhost EXEC /bin/sh -c 'LANG=en_CA.UTF-8 LC_ALL=en_CA.UTF-8 LC_MESSAGES=en_CA.UTF-8 /usr/bin/python /home/arlindo/.ansible/tmp/ansible-tmp-1463535811.57-271206026537001/ec2_vpc; rm -rf "/home/arlindo/.ansible/tmp/ansible-tmp-1463535811.57-271206026537001/" > /dev/null 2>&1'
An exception occurred during task execution. The full traceback is:
Traceback (most recent call last):
File "/home/arlindo/.ansible/tmp/ansible-tmp-1463535811.57-271206026537001/ec2_vpc", line 2944, in <module>
main()
File "/home/arlindo/.ansible/tmp/ansible-tmp-1463535811.57-271206026537001/ec2_vpc", line 731, in main
(vpc_dict, new_vpc_id, subnets_changed, igw_id, changed) = create_vpc(module, vpc_conn)
File "/home/arlindo/.ansible/tmp/ansible-tmp-1463535811.57-271206026537001/ec2_vpc", line 387, in create_vpc
vpc_conn.create_tags(vpc.id, new_tags)
File "/usr/local/lib/python2.7/dist-packages/boto-2.39.0-py2.7.egg/boto/ec2/connection.py", line 4219, in create_tags
return self.get_status('CreateTags', params, verb='POST')
File "/usr/local/lib/python2.7/dist-packages/boto-2.39.0-py2.7.egg/boto/connection.py", line 1227, in get_status
raise self.ResponseError(response.status, response.reason, body)
boto.exception.EC2ResponseError: EC2ResponseError: 400 Bad Request
<?xml version="1.0" encoding="UTF-8"?>
<Response><Errors><Error><Code>InvalidVpcID.NotFound</Code><Message>The vpc ID 'vpc-7a82401d' does not exist</Message></Error></Errors><RequestID>a0ffcb87-f56e-495f-94ea-893746b8dba8</RequestID></Response>
fatal: [localhost -> localhost]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_name": "ec2_vpc"}, "module_stderr": "Traceback (most recent call last):\n File \"/home/arlindo/.ansible/tmp/ansible-tmp-1463535811.57-271206026537001/ec2_vpc\", line 2944, in <module>\n main()\n File \"/home/arlindo/.ansible/tmp/ansible-tmp-1463535811.57-271206026537001/ec2_vpc\", line 731, in main\n (vpc_dict, new_vpc_id, subnets_changed, igw_id, changed) = create_vpc(module, vpc_conn)\n File \"/home/arlindo/.ansible/tmp/ansible-tmp-1463535811.57-271206026537001/ec2_vpc\", line 387, in create_vpc\n vpc_conn.create_tags(vpc.id, new_tags)\n File \"/usr/local/lib/python2.7/dist-packages/boto-2.39.0-py2.7.egg/boto/ec2/connection.py\", line 4219, in create_tags\n return self.get_status('CreateTags', params, verb='POST')\n File \"/usr/local/lib/python2.7/dist-packages/boto-2.39.0-py2.7.egg/boto/connection.py\", line 1227, in get_status\n raise self.ResponseError(response.status, response.reason, body)\nboto.exception.EC2ResponseError: EC2ResponseError: 400 Bad Request\n<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<Response><Errors><Error><Code>InvalidVpcID.NotFound</Code><Message>The vpc ID 'vpc-7a82401d' does not exist</Message></Error></Errors><RequestID>a0ffcb87-f56e-495f-94ea-893746b8dba8</RequestID></Response>\n", "module_stdout": "", "msg": "MODULE FAILURE", "parsed": false}
to retry, use: --limit @playbooks/provisionaws.retry
```
| main | vpc module vpc creation fails sometimes with invalidvpcid notfound issue type bug report component name vpc module ansible version ansible config file home arlindo projects ts ansible cfg configured module search path default w o overrides configuration os environment ansible is running on ubuntu managing aws summary vpc module vpc creation fails sometimes with invalidvpcid notfound steps to reproduce issue is sporadic therefore can t reproduce at will name vpc creating an aws vpc inside mentioned region local action module vpc region vpc region state present cidr block vpc cidr block resource tags name vpc name subnets vpc subnets internet gateway yes route tables public subnet rt register vpc expected results for a new vpc to be created in aws actual results task task path home arlindo projects ts playbooks aws tasks vpc yml establish local connection for user arlindo localhost exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo echo home ansible tmp ansible tmp localhost put tmp to home arlindo ansible tmp ansible tmp vpc localhost exec bin sh c lang en ca utf lc all en ca utf lc messages en ca utf usr bin python home arlindo ansible tmp ansible tmp vpc rm rf home arlindo ansible tmp ansible tmp dev null an exception occurred during task execution the full traceback is traceback most recent call last file home arlindo ansible tmp ansible tmp vpc line in main file home arlindo ansible tmp ansible tmp vpc line in main vpc dict new vpc id subnets changed igw id changed create vpc module vpc conn file home arlindo ansible tmp ansible tmp vpc line in create vpc vpc conn create tags vpc id new tags file usr local lib dist packages boto egg boto connection py line in create tags return self get status createtags params verb post file usr local lib dist packages boto egg boto connection py line in get status raise self responseerror response status response reason body boto exception bad request invalidvpcid notfound the vpc id vpc does not exist fatal failed changed false failed true invocation module name vpc module stderr traceback most recent call last n file home arlindo ansible tmp ansible tmp vpc line in n main n file home arlindo ansible tmp ansible tmp vpc line in main n vpc dict new vpc id subnets changed igw id changed create vpc module vpc conn n file home arlindo ansible tmp ansible tmp vpc line in create vpc n vpc conn create tags vpc id new tags n file usr local lib dist packages boto egg boto connection py line in create tags n return self get status createtags params verb post n file usr local lib dist packages boto egg boto connection py line in get status n raise self responseerror response status response reason body nboto exception bad request n n invalidvpcid notfound the vpc id vpc does not exist n module stdout msg module failure parsed false to retry use limit playbooks provisionaws retry | 1 |
284,785 | 8,750,575,817 | IssuesEvent | 2018-12-13 19:37:45 | angular-klingon/klingon | https://api.github.com/repos/angular-klingon/klingon | closed | Prepopulate the Project Root directory field with a 'default' value | effort1: easy (hours) help wanted package: @klingon/server package: @klingon/ui priority: medium type: enhancement | **Describe the solution you'd like**
We could prepopulate the Project Root directory field in the UI with the current directory allocated to the server (i.e., process.cwd()) or another value (maybe from config?).
This can prevent issue #81
@sumitparakh what do you think? | 1.0 | Prepopulate the Project Root directory field with a 'default' value - **Describe the solution you'd like**
We could prepopulate the Project Root directory field in the UI with the current directory allocated to the server (i.e., process.cwd()) or another value (maybe from config?).
This can prevent issue #81
@sumitparakh what do you think? | non_main | prepopulate the project root directory field with a default value describe the solution you d like we could prepopulate the project root directory field in the ui with the current directory allocated to the server ie process cwd another value maybe from config this can prevent issue sumitparakh what do you think | 0 |
3,347 | 12,974,075,182 | IssuesEvent | 2020-07-21 14:55:39 | short-d/short | https://api.github.com/repos/short-d/short | opened | [Refactor] Search API Request & Response | maintainability | **What is frustrating you?**
Clients of the Search API cannot directly determine whether the returned link was matched on the Short or the Long link. Also, clients do not have control over the type of links searched for a particular request.
**Your solution**
Accept a match-type parameter (Short/Long/All) in the request and return the matched links in the response as separate arrays (ShortLinkMatches / LongLinkMatches).
**Additional context**
Explored as part of #866
| True | [Refactor] Search API Request & Response - **What is frustrating you?**
Clients of the Search API cannot directly determine whether the returned link was matched on the Short or the Long link. Also, clients do not have control over the type of links searched for a particular request.
**Your solution**
Accept a match-type parameter (Short/Long/All) in the request and return the matched links in the response as separate arrays (ShortLinkMatches / LongLinkMatches).
**Additional context**
Explored as part of #866
| main | search api request response what is frustrating you clients of search api cannot determine directly whether the returned link is matched for short or long link also clients do not have control over the type of links searched for a particular request your solution accept a match type parameter short long all in the request and return the matched links in the response as separate arrays shortlinkmatches longlinkmatches additional context explored as part of | 1 |
5,060 | 25,919,288,336 | IssuesEvent | 2022-12-15 20:15:35 | centerofci/mathesar | https://api.github.com/repos/centerofci/mathesar | opened | Include linking constraint detail in the table widget header | type: enhancement work: frontend status: ready restricted: maintainers | ## Current behavior
- On the Record Page, when the table is referenced twice from the same linked table (via two separate columns), the list of table widgets appears to contain duplicates

## Desired behavior
- Alongside the table name, we should show some information about the FK column.
- We should be able to be smart enough to show this extra detail only when needed to disambiguate the otherwise seemingly duplicate widgets (in order not to clutter the UI).
| True | Include linking constraint detail in the table widget header - ## Current behavior
- On the Record Page, when the table is referenced twice from the same linked table (via two separate columns), the list of table widgets appears to contain duplicates

## Desired behavior
- Alongside the table name, we should show some information about the FK column.
- We should be able to be smart enough to show this extra detail only when needed to disambiguate the otherwise seemingly duplicate widgets (in order not to clutter the UI).
| main | include linking constraint detail in the table widget header current behavior on the record page when the table is referenced twice from the same linked table via two separate columns the list of table widgets appears to contain duplicates desired behavior aside the table name we should show some information about the fk column we should be able to be smart enough to show this extra detail only when needed to disambiguate the otherwise seemingly duplicate widgets in order not to clutter the ui | 1 |
3,150 | 12,155,116,478 | IssuesEvent | 2020-04-25 11:39:12 | precice/precice | https://api.github.com/repos/precice/precice | closed | Refactor type alias Bounding Box into class | good first issue maintainability | What once started as a small addition
https://github.com/precice/precice/blob/6cb67dd1478b79aff87cfcc12e63df19ad9dca9f/src/mesh/Mesh.hpp#L38
now actually has a lot of associated functionality:
(0) computation in
https://github.com/precice/precice/blob/6cb67dd1478b79aff87cfcc12e63df19ad9dca9f/src/mesh/Mesh.cpp#L210
(1) merging in
https://github.com/precice/precice/blob/6cb67dd1478b79aff87cfcc12e63df19ad9dca9f/src/partition/ReceivedPartition.cpp#L461
(2) intersection in
https://github.com/precice/precice/blob/6cb67dd1478b79aff87cfcc12e63df19ad9dca9f/src/partition/ReceivedPartition.cpp#L448
(3) testing whether it contains a specific vertex
https://github.com/precice/precice/blob/6cb67dd1478b79aff87cfcc12e63df19ad9dca9f/src/partition/ReceivedPartition.cpp#L507
(4) an own communication wrapper class
https://github.com/precice/precice/blob/develop/src/com/CommunicateBoundingBox.hpp
It's time to bundle most of this functionality ( (1)-(3) yes, (4) no, (0)? my first suggestion) in a separate class within the `mesh` package.
Which relation to implement to `Mesh` needs careful consideration, meaning does a `BoundingBox` have a reference to a `Mesh`?
| True | Refactor type alias Bounding Box into class - What once started as a small addition
https://github.com/precice/precice/blob/6cb67dd1478b79aff87cfcc12e63df19ad9dca9f/src/mesh/Mesh.hpp#L38
now actually has a lot of associated functionality:
(0) computation in
https://github.com/precice/precice/blob/6cb67dd1478b79aff87cfcc12e63df19ad9dca9f/src/mesh/Mesh.cpp#L210
(1) merging in
https://github.com/precice/precice/blob/6cb67dd1478b79aff87cfcc12e63df19ad9dca9f/src/partition/ReceivedPartition.cpp#L461
(2) intersection in
https://github.com/precice/precice/blob/6cb67dd1478b79aff87cfcc12e63df19ad9dca9f/src/partition/ReceivedPartition.cpp#L448
(3) testing whether it contains a specific vertex
https://github.com/precice/precice/blob/6cb67dd1478b79aff87cfcc12e63df19ad9dca9f/src/partition/ReceivedPartition.cpp#L507
(4) its own communication wrapper class
https://github.com/precice/precice/blob/develop/src/com/CommunicateBoundingBox.hpp
It's time to bundle most of this functionality ( (1)-(3) yes, (4) no, (0)? my first suggestion) in a separate class within the `mesh` package.
Which relation to implement to `Mesh` needs careful consideration, meaning does a `BoundingBox` have a reference to a `Mesh`?
| main | refactor type alias bounding box into class what once started as a small addition has now actually much associated functionality computation in merging in intersection in testing whether it contains a specific vertex an own communication wrapper class it s time to bundle most of this functionality yes no my first suggestion in a separate class within the mesh package which relation to implement to mesh needs careful consideration meaning does a boundingbox have a reference to a mesh | 1 |
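For illustration, the interface the issue above proposes to bundle — merging, intersection, and vertex containment — could look roughly like the following. This is a language-agnostic concept sketch written in Python only for brevity; the actual preCICE implementation is C++ and would live in the `mesh` package.

```python
# Concept sketch of an axis-aligned bounding box bundling merge, intersection
# and point-containment checks in one class (not preCICE code).
from dataclasses import dataclass

@dataclass
class BoundingBox:
    mins: tuple  # per-dimension lower bounds
    maxs: tuple  # per-dimension upper bounds

    def merge(self, other):
        return BoundingBox(
            tuple(min(a, b) for a, b in zip(self.mins, other.mins)),
            tuple(max(a, b) for a, b in zip(self.maxs, other.maxs)),
        )

    def intersects(self, other):
        return all(lo <= ohi and olo <= hi
                   for lo, hi, olo, ohi in zip(self.mins, self.maxs, other.mins, other.maxs))

    def contains(self, vertex):
        return all(lo <= x <= hi for x, lo, hi in zip(vertex, self.mins, self.maxs))

a = BoundingBox((0, 0), (2, 2))
b = BoundingBox((1, 1), (3, 3))
print(a.merge(b), a.intersects(b), a.contains((0.5, 1.5)))  # merged box, True, True
```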
839 | 4,479,326,075 | IssuesEvent | 2016-08-27 14:51:18 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | git module SHA version not work for clone | docs_report P3 waiting_on_maintainer | In documentation:
What version of the repository to check out. This can be the full 40-character SHA-1 hash, the literal string HEAD, a branch name, or a tag name.
But it does not work for clone, because `git clone` does not support this.
So in this case it would be better to note this in the documentation.
| True | git module SHA version not work for clone - In documentation:
What version of the repository to check out. This can be the full 40-character SHA-1 hash, the literal string HEAD, a branch name, or a tag name.
But it does not work for clone, because `git clone` does not support this.
So in this case it would be better to note this in the documentation.
| main | git module sha version not work for clone in documentation what version of the repository to check out this can be the full character sha hash the literal string head a branch name or a tag name but it not work for clone because git clone not support this so in this case better note about this in documentation | 1 |
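The limitation described in the issue above is usually worked around by cloning first and then checking out the commit in a second step. Below is a rough sketch of that workaround; the repository URL, SHA, and helper name are placeholders, not anything from the Ansible module.

```python
# Sketch of the usual workaround: `git clone` cannot take a commit SHA,
# so clone the repository first and then check the SHA out explicitly.
import subprocess

def clone_at_commit(repo_url, sha, dest):
    subprocess.run(["git", "clone", repo_url, dest], check=True)
    subprocess.run(["git", "-C", dest, "checkout", sha], check=True)

# Example (requires network access, placeholders only):
# clone_at_commit("https://github.com/ansible/ansible.git", "<40-char sha>", "/tmp/ansible")
```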
51,556 | 6,177,764,007 | IssuesEvent | 2017-07-02 04:53:29 | nix-rust/nix | https://api.github.com/repos/nix-rust/nix | closed | Failed mount tests on x86_64 Ubuntu 16.04.4 | A-bug A-testing | ```
thread 'main' panicked at 'write failed: Value too large for defined data type (os error 75)', test/test_mount.rs:53
stack backtrace:
0: std::sys::imp::backtrace::tracing::imp::unwind_backtrace
at /checkout/src/libstd/sys/unix/backtrace/tracing/gcc_s.rs:49
1: std::sys_common::backtrace::_print
at /checkout/src/libstd/sys_common/backtrace.rs:71
2: std::panicking::default_hook::{{closure}}
at /checkout/src/libstd/sys_common/backtrace.rs:60
at /checkout/src/libstd/panicking.rs:355
3: std::panicking::default_hook
at /checkout/src/libstd/panicking.rs:371
4: std::panicking::rust_panic_with_hook
at /checkout/src/libstd/panicking.rs:549
5: std::panicking::begin_panic
at /checkout/src/libstd/panicking.rs:511
6: std::panicking::begin_panic_fmt
at /checkout/src/libstd/panicking.rs:495
7: test_mount::test_mount::test_mount_tmpfs_without_flags_allows_rwx::{{closure}}
at ./test/test_mount.rs:53
8: <core::result::Result<T, E>>::unwrap_or_else
at /checkout/src/libcore/result.rs:706
9: test_mount::test_mount::test_mount_tmpfs_without_flags_allows_rwx
at ./test/test_mount.rs:47
10: test_mount::main
at ./test/test_mount.rs:217
11: std::panicking::try::do_call
at /checkout/src/libstd/panicking.rs:454
12: __rust_maybe_catch_panic
at /checkout/src/libpanic_unwind/lib.rs:98
13: std::rt::lang_start
at /checkout/src/libstd/panicking.rs:433
at /checkout/src/libstd/panic.rs:361
at /checkout/src/libstd/rt.rs:57
14: main
15: __libc_start_main
16: _start
test test_mount::test_mount_tmpfs_without_flags_allows_rwx ... error: test failed
```
| 1.0 | Failed mount tests on x86_64 Ubuntu 16.04.4 - ```
thread 'main' panicked at 'write failed: Value too large for defined data type (os error 75)', test/test_mount.rs:53
stack backtrace:
0: std::sys::imp::backtrace::tracing::imp::unwind_backtrace
at /checkout/src/libstd/sys/unix/backtrace/tracing/gcc_s.rs:49
1: std::sys_common::backtrace::_print
at /checkout/src/libstd/sys_common/backtrace.rs:71
2: std::panicking::default_hook::{{closure}}
at /checkout/src/libstd/sys_common/backtrace.rs:60
at /checkout/src/libstd/panicking.rs:355
3: std::panicking::default_hook
at /checkout/src/libstd/panicking.rs:371
4: std::panicking::rust_panic_with_hook
at /checkout/src/libstd/panicking.rs:549
5: std::panicking::begin_panic
at /checkout/src/libstd/panicking.rs:511
6: std::panicking::begin_panic_fmt
at /checkout/src/libstd/panicking.rs:495
7: test_mount::test_mount::test_mount_tmpfs_without_flags_allows_rwx::{{closure}}
at ./test/test_mount.rs:53
8: <core::result::Result<T, E>>::unwrap_or_else
at /checkout/src/libcore/result.rs:706
9: test_mount::test_mount::test_mount_tmpfs_without_flags_allows_rwx
at ./test/test_mount.rs:47
10: test_mount::main
at ./test/test_mount.rs:217
11: std::panicking::try::do_call
at /checkout/src/libstd/panicking.rs:454
12: __rust_maybe_catch_panic
at /checkout/src/libpanic_unwind/lib.rs:98
13: std::rt::lang_start
at /checkout/src/libstd/panicking.rs:433
at /checkout/src/libstd/panic.rs:361
at /checkout/src/libstd/rt.rs:57
14: main
15: __libc_start_main
16: _start
test test_mount::test_mount_tmpfs_without_flags_allows_rwx ... error: test failed
```
| non_main | failed mount tests on ubuntu thread main panicked at write failed value too large for defined data type os error test test mount rs stack backtrace std sys imp backtrace tracing imp unwind backtrace at checkout src libstd sys unix backtrace tracing gcc s rs std sys common backtrace print at checkout src libstd sys common backtrace rs std panicking default hook closure at checkout src libstd sys common backtrace rs at checkout src libstd panicking rs std panicking default hook at checkout src libstd panicking rs std panicking rust panic with hook at checkout src libstd panicking rs std panicking begin panic at checkout src libstd panicking rs std panicking begin panic fmt at checkout src libstd panicking rs test mount test mount test mount tmpfs without flags allows rwx closure at test test mount rs unwrap or else at checkout src libcore result rs test mount test mount test mount tmpfs without flags allows rwx at test test mount rs test mount main at test test mount rs std panicking try do call at checkout src libstd panicking rs rust maybe catch panic at checkout src libpanic unwind lib rs std rt lang start at checkout src libstd panicking rs at checkout src libstd panic rs at checkout src libstd rt rs main libc start main start test test mount test mount tmpfs without flags allows rwx error test failed | 0 |
5,606 | 28,066,349,240 | IssuesEvent | 2023-03-29 15:40:50 | albertlauncher/plugins | https://api.github.com/repos/albertlauncher/plugins | closed | No media control plugin available? | Maintainer wanted | In previous versions there was a plugin, I'm not sure how it was called, but it existed to control Spotify/Chrome actions like play/pause/next.
In the current version (0.18.3+377), I can't see it in the list:

and when I have Spotify playing music, the "next" option doesn't appear:

| True | No media control plugin available? - In previous versions there was a plugin, I'm not sure how it was called, but it existed to control Spotify/Chrome actions like play/pause/next.
In the current version (0.18.3+377), I can't see it in the list:

and when I have Spotify playing music, the "next" option doesn't appear:

| main | no media control plugin available in previous versions there was a plugin i m not sure how it was called but it existed to control spotify chrome actions like play pause next in the current version i can t see it in the list and when i have spotify playing music the next options doesn t appear | 1 |
108,684 | 4,349,222,312 | IssuesEvent | 2016-07-30 12:16:30 | siteorigin/siteorigin-north | https://api.github.com/repos/siteorigin/siteorigin-north | closed | WooCommerce: Adjust archive button spacing when no sidebar present | bug priority-1 | Sidebar present, all ok:

No sidebar, buttons need some love:

| 1.0 | WooCommerce: Adjust archive button spacing when no sidebar present - Sidebar present, all ok:

No sidebar, buttons need some love:

| non_main | woocommerce adjust archive button spacing when no sidebar present sidebar present all ok no sidebar buttons need some love | 0 |
5,462 | 27,315,575,237 | IssuesEvent | 2023-02-24 15:26:30 | PowerShell/PowerShell | https://api.github.com/repos/PowerShell/PowerShell | opened | Use correct exceptions | Issue-Enhancement Review - Maintainer Needs-Triage | ### Summary of the new feature / enhancement
In the PowerShell code base there are some places where we are not accurate with exceptions.
Typical example: https://github.com/PowerShell/PowerShell/blob/4314e634cadde65a743d359106def376ed1c59ee/src/System.Management.Automation/engine/CommandInfo.cs#L286-L294
Here we throw ArgumentNullException, which is not correct if the argument is empty. (There are examples with other exceptions.)
My proposal is to fix such code and throw ArgumentNullException if the argument is null and ArgumentException if the argument is empty.
This is not even a breaking change, as it is not a functional exception, and it is worth correcting it to the right exception type. If someone actually relies on this behavior in their code, it is catastrophically bad code.
The proposal comes from the fact that we are blocked from using new .NET APIs like ArgumentException.ThrowIfNullOrEmpty(). The API makes code more friendly for .NET Runtime optimizations (like inlining). We already use this kind of API in a lot of places and it's worth updating the rest.
### Proposed technical implementation details (optional)
_No response_ | True | Use correct exceptions - ### Summary of the new feature / enhancement
In the PowerShell code base there are some places where we are not accurate with exceptions.
Typical example: https://github.com/PowerShell/PowerShell/blob/4314e634cadde65a743d359106def376ed1c59ee/src/System.Management.Automation/engine/CommandInfo.cs#L286-L294
Here we throw ArgumentNullException, which is not correct if the argument is empty. (There are examples with other exceptions.)
My proposal is to fix such code and throw ArgumentNullException if the argument is null and ArgumentException if the argument is empty.
This is not even a breaking change, as it is not a functional exception, and it is worth correcting it to the right exception type. If someone actually relies on this behavior in their code, it is catastrophically bad code.
The proposal comes from the fact that we are blocked from using new .NET APIs like ArgumentException.ThrowIfNullOrEmpty(). The API makes code more friendly for .NET Runtime optimizations (like inlining). We already use this kind of API in a lot of places and it's worth updating the rest.
### Proposed technical implementation details (optional)
_No response_ | main | use correct exceptions summary of the new feature enhancement in powershell code base there are some places where we are not accurate with exceptions typical example here we throw argumentnullexception that is not correct if the argument is empty there are examples with other exceptions my proposal is to fix such code and throw argumentnullexception if argument is null and argumentexception is argument is empty this is not even a breaking change as it is not a functional exception and it is worth correcting it for the correct one if someone even uses this behavior in their code it is catastrophically bad code the proposal comes from the fact that we are blocked from using new net api like argumentexception throwifnullorempty the api makes code more friendly for net rumtime optimizations like inlining we already use this kind of api in a lot of places and it s worth updating the rest proposed technical implementation details optional no response | 1 |
1,964 | 6,694,161,050 | IssuesEvent | 2017-10-10 00:00:26 | duckduckgo/zeroclickinfo-spice | https://api.github.com/repos/duckduckgo/zeroclickinfo-spice | closed | Last.fm Artist: Query "IT band" is triggering the IA | Maintainer Submitted | Last.fm Artist triggers on the word "band" which can surface unrelated results such as "IT band" (if you notice the links at the bottom it's all related to one's anatomy). Find a way to reject these types of irrelevant results from coming up. It may be the case where we'd have to implement this on our end by using the result links as triggers, i.e., if last.fm comes up as one of the links, this IA should show up)
<img width="838" alt="screen shot 2016-06-15 at 12 22 51 am" src="https://cloud.githubusercontent.com/assets/81969/16067971/ab97d510-328f-11e6-90e9-668af956d85b.png">
---
Maintainer: @jagtalon
IA Page: https://duck.co/ia/view/lastfm_artist
| True | Last.fm Artist: Query "IT band" is triggering the IA - Last.fm Artist triggers on the word "band" which can surface unrelated results such as "IT band" (if you notice the links at the bottom it's all related to one's anatomy). Find a way to reject these types of irrelevant results from coming up. It may be the case where we'd have to implement this on our end by using the result links as triggers, i.e., if last.fm comes up as one of the links, this IA should show up)
<img width="838" alt="screen shot 2016-06-15 at 12 22 51 am" src="https://cloud.githubusercontent.com/assets/81969/16067971/ab97d510-328f-11e6-90e9-668af956d85b.png">
---
Maintainer: @jagtalon
IA Page: https://duck.co/ia/view/lastfm_artist
| main | last fm artist query it band is triggering the ia last fm artist triggers on the word band which can surface unrelated results such as it band if you notice the links at the bottom it s all related to one s anatomy find a way to reject these types of irrelevant results from coming up it may be the case where we d have to implement this on our end by using the result links as triggers i e if last fm comes up as one of the links this ia should show up img width alt screen shot at am src maintainer jagtalon ia page | 1 |
4,967 | 25,520,082,311 | IssuesEvent | 2022-11-28 19:40:34 | centerofci/mathesar | https://api.github.com/repos/centerofci/mathesar | closed | Implement new design for Table Inspector | type: enhancement work: frontend status: ready restricted: maintainers | Figma: https://www.figma.com/file/xHb5oIqye3fnXtb2heRH34/Styling?node-id=611%3A3416
## Tasks
### General Items
- [x] Update the designs for the table inspector tabs.
- [x] Update the designs for `Collapsible` components
### Table Mode
- [x] Update the designs for table `properties` section.
- [x] Update the designs for table `links` section.
- [x] Update the designs for table `actions` section.
- [x] Update the designs for table `constraints` section not including the constraints modal.
### Column Mode
- [x] Update the design of `No Cell selected` view.
- [x] Update the designs for column `properties` section.
- [ ] #1941
- [x] Update the designs for column `default value` section.
- [x] Update the designs for column `actions` section.
### Row Mode
- [x] Update the design of `No Row selected` view.
- [x] Update the designs for row `actions` section. | True | Implement new design for Table Inspector - Figma: https://www.figma.com/file/xHb5oIqye3fnXtb2heRH34/Styling?node-id=611%3A3416
## Tasks
### General Items
- [x] Update the designs for the table inspector tabs.
- [x] Update the designs for `Collapsible` components
### Table Mode
- [x] Update the designs for table `properties` section.
- [x] Update the designs for table `links` section.
- [x] Update the designs for table `actions` section.
- [x] Update the designs for table `constraints` section not including the constraints modal.
### Column Mode
- [x] Update the design of `No Cell selected` view.
- [x] Update the designs for column `properties` section.
- [ ] #1941
- [x] Update the designs for column `default value` section.
- [x] Update the designs for column `actions` section.
### Row Mode
- [x] Update the design of `No Row selected` view.
- [x] Update the designs for row `actions` section. | main | implement new design for table inspector figma tasks general items update the designs for the table inspector tabs update the designs for collapsible components table mode update the designs for table properties section update the designs for table links section update the designs for table actions section update the designs for table constraints section not including the constraints modal column mode update the design of no cell selected view update the designs for column properties section update the designs for column default value section update the designs for column actions section row mode update the design of no row selected view update the designs for row actions section | 1 |
633 | 4,148,716,025 | IssuesEvent | 2016-06-15 12:09:15 | duckduckgo/zeroclickinfo-spice | https://api.github.com/repos/duckduckgo/zeroclickinfo-spice | opened | Bible: Add feature for random verse | Maintainer Input Requested | The API currently being used supports random bible verses using this URL:
http://labs.bible.org/api/?passage=random&type=json
It shouldn't be too difficult to add this so that it's triggered by the search queries `bible verse` or `random bible verse`.
(Idea by @pnpninja and @gaulrobe)
------
IA Page: http://duck.co/ia/view/bible
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @hunterlang | True | Bible: Add feature for random verse - The API currently being used supports random bible verses using this URL:
http://labs.bible.org/api/?passage=random&type=json
It shouldn't be too difficult to add this so that it's triggered by the search queries `bible verse` or `random bible verse`.
(Idea by @pnpninja and @gaulrobe)
------
IA Page: http://duck.co/ia/view/bible
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @hunterlang | main | bible add feature for random verse the api currently being used supports random bible verses using this url it shouldn t be too difficult to add this so that it s triggered by the search queries bible verse or random bible verse idea by pnpninja and gaulrobe ia page hunterlang | 1 |
132,424 | 10,746,705,964 | IssuesEvent | 2019-10-30 11:37:25 | AdoptOpenJDK/openjdk-infrastructure | https://api.github.com/repos/AdoptOpenJDK/openjdk-infrastructure | closed | openjdk_openj9_jdk11 jtreg tests are failing throughout the platforms - causes infrastructure isues | bug testFail | Here is AdoptOpenJDK openjdk_openj9_jdk11 builds for all platform: https://ci.adoptopenjdk.net/view/Test_openjdk/
**openjdk_ppc64le_linux**
Test Name | Duration | Age
-- | -- | --
java/util/logging/CheckZombieLockTest.java.CheckZombieLockTest | 7.2 sec | 1
**openjdk_s390x_linux**
Test Name | Duration | Age
-- | -- | --
java/net/httpclient/ConnectTimeoutNoProxyAsync.java.ConnectTimeoutNoProxyAsync | 10 sec | 22
java/net/httpclient/ConnectTimeoutNoProxyAsync.java.ConnectTimeoutNoProxyAsync | 12 sec | 22
java/net/httpclient/ConnectTimeoutNoProxyAsync.java.ConnectTimeoutNoProxyAsync | |
**openjdk_x86-64_linux**
Test Name | Duration | Age
-- | -- | --
java/util/stream/test/org/openjdk/tests/java/util/SplittableRandomTest.java.SplittableRandomTest | 56 sec | 4
**openjdk_x86-64_mac_xl**
Test Name | Duration | Age
-- | -- | --
java/util/UUID/UUIDTest.java.UUIDTest | 16 min | 1
**openjdk_x86-64_windows**
Test Name | Duration | Age
-- | -- | --
java/net/DatagramSocket/ReuseAddressTest.java.ReuseAddressTest | 3 sec | 2
**openjdk_x86-64_windows_xl**
Test Name | Duration | Age
-- | -- | --
java/net/DatagramSocket/ReuseAddressTest.java.ReuseAddressTest | 2.7 sec | 2
I have also run the above tests on my local machine (Ubuntu 16.04), which has the same configuration and build as Adopt, and all tests passed.
I used this document to create the same setup as Adopt https://github.com/eclipse/openj9/wiki/Reproducing-Test-Failures-Locally
| 1.0 | openjdk_openj9_jdk11 jtreg tests are failing throughout the platforms - causes infrastructure isues - Here is AdoptOpenJDK openjdk_openj9_jdk11 builds for all platform: https://ci.adoptopenjdk.net/view/Test_openjdk/
**openjdk_ppc64le_linux**
Test Name | Duration | Age
-- | -- | --
java/util/logging/CheckZombieLockTest.java.CheckZombieLockTest | 7.2 sec | 1
**openjdk_s390x_linux**
Test Name | Duration | Age
-- | -- | --
java/net/httpclient/ConnectTimeoutNoProxyAsync.java.ConnectTimeoutNoProxyAsync | 10 sec | 22
java/net/httpclient/ConnectTimeoutNoProxyAsync.java.ConnectTimeoutNoProxyAsync | 12 sec | 22
java/net/httpclient/ConnectTimeoutNoProxyAsync.java.ConnectTimeoutNoProxyAsync | |
**openjdk_x86-64_linux**
Test Name | Duration | Age
-- | -- | --
java/util/stream/test/org/openjdk/tests/java/util/SplittableRandomTest.java.SplittableRandomTest | 56 sec | 4
**openjdk_x86-64_mac_xl**
Test Name | Duration | Age
-- | -- | --
java/util/UUID/UUIDTest.java.UUIDTest | 16 min | 1
**openjdk_x86-64_windows**
Test Name | Duration | Age
-- | -- | --
java/net/DatagramSocket/ReuseAddressTest.java.ReuseAddressTest | 3 sec | 2
**openjdk_x86-64_windows_xl**
Test Name | Duration | Age
-- | -- | --
java/net/DatagramSocket/ReuseAddressTest.java.ReuseAddressTest | 2.7 sec | 2
I have also run the above tests on my local machine (Ubuntu 16.04), which has the same configuration and build as Adopt, and all tests passed.
I used this document to create the same setup as Adopt https://github.com/eclipse/openj9/wiki/Reproducing-Test-Failures-Locally
| non_main | openjdk jtreg tests are failing throughout the platforms causes infrastructure isues here is adoptopenjdk openjdk builds for all platform openjdk linux test name duration age java util logging checkzombielocktest java checkzombielocktest sec openjdk linux java net httpclient connecttimeoutnoproxyasync java connecttimeoutnoproxyasync sec java net httpclient connecttimeoutnoproxyasync java connecttimeoutnoproxyasync sec java net httpclient connecttimeoutnoproxyasync java connecttimeoutnoproxyasync openjdk linux test name duration age java util stream test org openjdk tests java util splittablerandomtest java splittablerandomtest sec openjdk mac xl test name duration age java util uuid uuidtest java uuidtest min openjdk windows test name duration age java net datagramsocket reuseaddresstest java reuseaddresstest sec openjdk windows xl java net datagramsocket reuseaddresstest java reuseaddresstest sec i have also tested these above tests in my local machine ubuntu having same configuration and build as adopt and all test passed i used this document to create the same setup as adopt | 0 |
5,760 | 30,532,774,550 | IssuesEvent | 2023-07-19 15:14:41 | MozillaFoundation/foundation.mozilla.org | https://api.github.com/repos/MozillaFoundation/foundation.mozilla.org | closed | Set up `django-pattern-library` for easier template development | engineering maintain | # Description
> The [django-pattern-library](https://pypi.org/project/django-pattern-library/) package automates the maintenance of UI pattern libraries or styleguides for Django projects, and allows developers to experiment with Django templates without having to create Django views and models.
> * Create reusable patterns by creating Django templates files as usual.
> * All patterns automatically show up in the pattern library’s interface.
> * Define data as YAML files for the templates to render with the relevant Django context.
> * Override Django templates tags as needed to mock the template’s dependencies.
> * Document your patterns with Markdown.
This allows us to decouple the development of frontend and backend. It supports us in keeping the templates structured and organized. It also serves as a library of existing template components, which should improve the reuse of templates.
Also:
> As we discussed in chapter 1, the benefits of pattern libraries are many:
>
> They promote consistency and cohesion across the entire experience.
> They speed up your team’s workflow, saving time and money.
> They establish a more collaborative workflow between all disciplines involved in a project.
> They establish a shared vocabulary between everyone in an organization, including outside vendors.
> They provide helpful documentation to help educate stakeholders, colleagues, and even third parties.
> They make cross-browser/device, performance, and accessibility testing easier.
> They serve as a future-friendly foundation for teams to modify, extend, and improve on over time.
>
— [Brad Frost, Atomic Design](https://atomicdesign.bradfrost.com/chapter-3/)
# Acceptance criteria
- [ ] Django pattern library is installed and configured in the project.
- [ ] A couple of templates are added as examples to the pattern library.
- [ ] The dev team had a small workshop to get introduced to working with the pattern library. | True | Set up `django-pattern-library` for easier template development - # Description
> The [django-pattern-library](https://pypi.org/project/django-pattern-library/) package automates the maintenance of UI pattern libraries or styleguides for Django projects, and allows developers to experiment with Django templates without having to create Django views and models.
> * Create reusable patterns by creating Django templates files as usual.
> * All patterns automatically show up in the pattern library’s interface.
> * Define data as YAML files for the templates to render with the relevant Django context.
> * Override Django templates tags as needed to mock the template’s dependencies.
> * Document your patterns with Markdown.
This allows us to decouple the development of frontend and backend. It supports us in keeping the templates structured and organized. It also serves as a library of existing template components, which should improve the reuse of templates.
Also:
> As we discussed in chapter 1, the benefits of pattern libraries are many:
>
> They promote consistency and cohesion across the entire experience.
> They speed up your team’s workflow, saving time and money.
> They establish a more collaborative workflow between all disciplines involved in a project.
> They establish a shared vocabulary between everyone in an organization, including outside vendors.
> They provide helpful documentation to help educate stakeholders, colleagues, and even third parties.
> They make cross-browser/device, performance, and accessibility testing easier.
> They serve as a future-friendly foundation for teams to modify, extend, and improve on over time.
>
— [Brad Frost, Atomic Design](https://atomicdesign.bradfrost.com/chapter-3/)
# Acceptance criteria
- [ ] Django pattern library is installed and configured in the project.
- [ ] A couple of templates are added as examples to the pattern library.
- [ ] The dev team had a small workshop to get introduced to working with the pattern library. | main | set up django pattern library for easier template development description the package automates the maintenance of ui pattern libraries or styleguides for django projects and allows developers to experiment with django templates without having to create django views and models create reusable patterns by creating django templates files as usual all patterns automatically show up in the pattern library’s interface define data as yaml files for the templates to render with the relevant django context override django templates tags as needed to mock the template’s dependencies document your patterns with markdown this allows us to decouple the development of frontend and backend it supports us in keeping the templates structured and organized it also serves as a library of existing template components which should improve the reuse of templates also as we discussed in chapter the benefits of pattern libraries are many they promote consistency and cohesion across the entire experience they speed up your team’s workflow saving time and money they establish a more collaborative workflow between all disciplines involved in a project they establish a shared vocabulary between everyone in an organization including outside vendors they provide helpful documentation to help educate stakeholders colleagues and even third parties they make cross browser device performance and accessibility testing easier they serve as a future friendly foundation for teams to modify extend and improve on over time — acceptance criteria django pattern library is installed and configured in the project a couple of templates are added as examples to the pattern library the dev team had a small workshop to get introduced to working with the pattern library | 1 |
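The core idea quoted in the issue above — rendering Django templates with mock context data instead of real views and models — can be sketched with plain Django in standalone mode. The template string, context, and "Donate" label below are invented for illustration; django-pattern-library itself reads the context from YAML files kept next to the template.

```python
# Minimal sketch (assumed setup, not donate-wagtail code): render a Django
# template directly with fake context data, without any view or model.
import django
from django.conf import settings

if not settings.configured:
    settings.configure(TEMPLATES=[{"BACKEND": "django.template.backends.django.DjangoTemplates"}])
    django.setup()

from django.template import Context, Template

pattern = Template('<button class="btn">{{ label }}</button>')
print(pattern.render(Context({"label": "Donate"})))
# -> <button class="btn">Donate</button>
```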
661,372 | 22,051,864,640 | IssuesEvent | 2022-05-30 09:21:51 | quickwit-oss/quickwit | https://api.github.com/repos/quickwit-oss/quickwit | closed | Rest API Ingest triggers infinite loop if there is a document error. | bug high-priority | The Rest API ingest endpoint seems to trigger an infinite loop with error logging if any document in the request has any kind of issue. This is on main.
This request is missing fields in the document.
```
curl 'localhost:7280/api/v1/noaa-1m/ingest' -d '{"station_id":"01088099999","name":"VADSO, NO","temperature_c":-7.2}'
{
"num_ingested_docs": 1
}
```
The server logs this error endlessly and will continue logging it even if you restart the process. Clearing the queues is required to make it stop.
```
2022-05-25T20:10:22.236Z WARN {actor=quickwit_indexing::actors::indexing_service::IndexingService}:{msg_id=1}::{index=noaa-1m gen=0}:{actor=Indexer}:{msg_id=127451}: quickwit_indexing::actors::indexer: err=RequiredFastField("wind_direction_deg")
2022-05-25T20:10:22.236Z WARN {actor=quickwit_indexing::actors::indexing_service::IndexingService}:{msg_id=1}::{index=noaa-1m gen=0}:{actor=Indexer}:{msg_id=127452}: quickwit_indexing::actors::indexer: err=RequiredFastField("wind_direction_deg")
2022-05-25T20:10:22.237Z WARN {actor=quickwit_indexing::actors::indexing_service::IndexingService}:{msg_id=1}::{index=noaa-1m gen=0}:{actor=Indexer}:{msg_id=127453}: quickwit_indexing::actors::indexer: err=RequiredFastField("wind_direction_deg")
2022-05-25T20:10:22.237Z WARN {actor=quickwit_indexing::actors::indexing_service::IndexingService}:{msg_id=1}::{index=noaa-1m gen=0}:{actor=Indexer}:{msg_id=127454}: quickwit_indexing::actors::indexer: err=RequiredFastField("wind_direction_deg")
2022-05-25T20:10:22.237Z WARN {actor=quickwit_indexing::actors::indexing_service::IndexingService}:{msg_id=1}::{index=noaa-1m gen=0}:{actor=Indexer}:{msg_id=127455}: quickwit_indexing::actors::indexer: err=RequiredFastField("wind_direction_deg")
2022-05-25T20:10:22.237Z WARN {actor=quickwit_indexing::actors::indexing_service::IndexingService}:{msg_id=1}::{index=noaa-1m gen=0}:{actor=Indexer}:{msg_id=127456}: quickwit_indexing::actors::indexer: err=RequiredFastField("wind_direction_deg")
2022-05-25T20:10:22.237Z WARN {actor=quickwit_indexing::actors::indexing_service::IndexingService}:{msg_id=1}::{index=noaa-1m gen=0}:{actor=Indexer}:{msg_id=127457}: quickwit_indexing::actors::indexer: err=RequiredFastField("wind_direction_deg")
``` | 1.0 | Rest API Ingest triggers infinite loop if there is a document error. - The Rest API ingest endpoint seems to trigger an infinite loop with error logging if any document in the request has any kind of issue. This is on main.
This request is missing fields in the document.
```
curl 'localhost:7280/api/v1/noaa-1m/ingest' -d '{"station_id":"01088099999","name":"VADSO, NO","temperature_c":-7.2}'
{
"num_ingested_docs": 1
}
```
The server logs this error endlessly and will continue logging it even if you restart the process. Clearing the queues is required to make it stop.
```
2022-05-25T20:10:22.236Z WARN {actor=quickwit_indexing::actors::indexing_service::IndexingService}:{msg_id=1}::{index=noaa-1m gen=0}:{actor=Indexer}:{msg_id=127451}: quickwit_indexing::actors::indexer: err=RequiredFastField("wind_direction_deg")
2022-05-25T20:10:22.236Z WARN {actor=quickwit_indexing::actors::indexing_service::IndexingService}:{msg_id=1}::{index=noaa-1m gen=0}:{actor=Indexer}:{msg_id=127452}: quickwit_indexing::actors::indexer: err=RequiredFastField("wind_direction_deg")
2022-05-25T20:10:22.237Z WARN {actor=quickwit_indexing::actors::indexing_service::IndexingService}:{msg_id=1}::{index=noaa-1m gen=0}:{actor=Indexer}:{msg_id=127453}: quickwit_indexing::actors::indexer: err=RequiredFastField("wind_direction_deg")
2022-05-25T20:10:22.237Z WARN {actor=quickwit_indexing::actors::indexing_service::IndexingService}:{msg_id=1}::{index=noaa-1m gen=0}:{actor=Indexer}:{msg_id=127454}: quickwit_indexing::actors::indexer: err=RequiredFastField("wind_direction_deg")
2022-05-25T20:10:22.237Z WARN {actor=quickwit_indexing::actors::indexing_service::IndexingService}:{msg_id=1}::{index=noaa-1m gen=0}:{actor=Indexer}:{msg_id=127455}: quickwit_indexing::actors::indexer: err=RequiredFastField("wind_direction_deg")
2022-05-25T20:10:22.237Z WARN {actor=quickwit_indexing::actors::indexing_service::IndexingService}:{msg_id=1}::{index=noaa-1m gen=0}:{actor=Indexer}:{msg_id=127456}: quickwit_indexing::actors::indexer: err=RequiredFastField("wind_direction_deg")
2022-05-25T20:10:22.237Z WARN {actor=quickwit_indexing::actors::indexing_service::IndexingService}:{msg_id=1}::{index=noaa-1m gen=0}:{actor=Indexer}:{msg_id=127457}: quickwit_indexing::actors::indexer: err=RequiredFastField("wind_direction_deg")
``` | non_main | rest api ingest triggers infinite loop if there is a document error the rest api ingest endpoint seems to trigger an infinite loop with error logging if any document in the request has any kind of issue this is on main this request is missing fields in the document curl localhost api noaa ingest d station id name vadso no temperature c num ingested docs the server logs this error endlessly and will continue logging it even if you restart the process clearing the queues is required to make it stop warn actor quickwit indexing actors indexing service indexingservice msg id index noaa gen actor indexer msg id quickwit indexing actors indexer err requiredfastfield wind direction deg warn actor quickwit indexing actors indexing service indexingservice msg id index noaa gen actor indexer msg id quickwit indexing actors indexer err requiredfastfield wind direction deg warn actor quickwit indexing actors indexing service indexingservice msg id index noaa gen actor indexer msg id quickwit indexing actors indexer err requiredfastfield wind direction deg warn actor quickwit indexing actors indexing service indexingservice msg id index noaa gen actor indexer msg id quickwit indexing actors indexer err requiredfastfield wind direction deg warn actor quickwit indexing actors indexing service indexingservice msg id index noaa gen actor indexer msg id quickwit indexing actors indexer err requiredfastfield wind direction deg warn actor quickwit indexing actors indexing service indexingservice msg id index noaa gen actor indexer msg id quickwit indexing actors indexer err requiredfastfield wind direction deg warn actor quickwit indexing actors indexing service indexingservice msg id index noaa gen actor indexer msg id quickwit indexing actors indexer err requiredfastfield wind direction deg | 0 |
4,783 | 24,607,428,236 | IssuesEvent | 2022-10-14 17:37:45 | MozillaFoundation/donate-wagtail | https://api.github.com/repos/MozillaFoundation/donate-wagtail | closed | Delete old "Master" branch from git history | engineering Maintain | Comment below is outdated:
_Depends on #1568
Once we make the change to "Main" we would like to remove the "Master" branch from git history. However, there may be some time between these two tickets as it might be a good idea to have the "master" branch in the history for some time just in case._
### Updated task:
The work for this ticket would be to remove the Master branch from github. | True | Delete old "Master" branch from git history - Comment below is outdated:
_Depends on #1568
Once we make the change to "Main" we would like to remove the "Master" branch from git history. However, there may be some time between these two tickets as it might be a good idea to have the "master" branch in the history for some time just in case._
### Updated task:
The work for this ticket would be to remove the Master branch from github. | main | delete old master branch from git history comment below is outdated depends on once we make the change to main we would like to remove the master branch from git history however there may be some time between these two tickets as it might be a good idea to have the master branch in the history for some time just in case updated task the work for this ticket would be to remove the master branch from github | 1 |
23,366 | 11,873,782,614 | IssuesEvent | 2020-03-26 17:51:34 | neontribe/gbptm | https://api.github.com/repos/neontribe/gbptm | closed | Identify and implement project credential stores for 3rd party services | 3rd party services | We will need secure credential and artefact stores for the project.
## Adobe Phonegap Build
Developer Account
## Apple App Store
Developer account
Product Owner account
Developer Certificate
Production Certificate
## Google Play Store
Product Owner Account
Developer account
Certificate | 1.0 | Identify and implement project credential stores for 3rd party services - We will need secure credential and artefact stores for the project.
## Adobe Phonegap Build
Developer Account
## Apple App Store
Developer account
Product Owner account
Developer Certificate
Production Certificate
## Google Play Store
Product Owner Account
Developer account
Certificate | non_main | identify and implement project credential stores for party services we will need secure credential and artefact stores for the project adobe phonegap build developer account apple app store developer account product owner account developer certificate production certificate google play store product owner account developer account certificate | 0 |
3,491 | 13,631,903,338 | IssuesEvent | 2020-09-24 18:45:55 | amyjko/faculty | https://api.github.com/repos/amyjko/faculty | closed | Setup Jest tests | maintainability | Check out: https://jestjs.io/
Some of the super basic tests I might want to write:
* The React App component mounts
• There are no console errors on any page | True | Setup Jest tests - Check out: https://jestjs.io/
Some of the super basic tests I might want to write:
* The React App component mounts
• There are no console errors on any page | main | setup jest tests check out some of the super basic tests i might want to write the react app component mounts • there are no console errors on any page | 1 |
1,835 | 6,577,363,973 | IssuesEvent | 2017-09-12 00:23:39 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | user module fails on SLES11 SP1-SP3 | affects_2.0 bug_report waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
user
##### ANSIBLE VERSION
```
ansible 2.0.1.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
deprecation_warnings=False
-->
##### OS / ENVIRONMENT
Source: N/A, happens on RHEL, OSX
Target: SLES11 SP1-SP3
##### SUMMARY
user module fails when user does not exist but group does.
##### STEPS TO REPRODUCE
```
- name: configure usergroup
group:
name: usergroup
gid: 60003
state: present
- name: configure user account
user:
name: user
shell: /bin/bash
skeleton: /etc/skel
password: "{{ password }}"
groups: usergroup
append: true
no_log: true
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
User is created and added to the indicated group
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with high verbosity (-vvvv) -->
```
fatal: [targethost]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_args": {"append": true, "comment": null, "createhome": true, "expires": null, "force": false, "generate_ssh_key": null, "group": null, "groups": "sysadm", "home": null, "login_class": null, "move_home": false, "name": "user", "non_unique": false, "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "remove": false, "shell": "/bin/bash", "skeleton": "/etc/skel", "ssh_key_bits": "2048", "ssh_key_comment": "ansible-generated on sonata", "ssh_key_file": null, "ssh_key_passphrase": null, "ssh_key_type": "rsa", "state": "present", "system": false, "uid": "60004", "update_password": "always"}, "module_name": "user"}, "msg": "/usr/sbin/useradd: invalid option -- 'N'\nTry `useradd --help' or `useradd --usage' for more information.\n", "name": "sysadm", "rc": 2}
```
see: https://github.com/ansible/ansible-modules-core/blob/76b7de943b065a831fe8639aa0348ebceee1ae02/system/user.py#L345
It looks like Ansible defaults to appending -N to the useradd command when the system is not Red Hat, but SLES 11 SP1-SP3 do not support the -N flag.
| True | user module fails on SLES11 SP1-SP3 - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
user
##### ANSIBLE VERSION
```
ansible 2.0.1.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
deprecation_warnings=False
-->
##### OS / ENVIRONMENT
Source: N/A, happens on RHEL, OSX
Target: SLES11 SP1-SP3
##### SUMMARY
user module fails when user does not exist but group does.
##### STEPS TO REPRODUCE
```
- name: configure usergroup
group:
name: usergroup
gid: 60003
state: present
- name: configure user account
user:
name: user
shell: /bin/bash
skeleton: /etc/skel
password: "{{ password }}"
groups: usergroup
append: true
no_log: true
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
User is created and added to the indicated group
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with high verbosity (-vvvv) -->
```
fatal: [targethost]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_args": {"append": true, "comment": null, "createhome": true, "expires": null, "force": false, "generate_ssh_key": null, "group": null, "groups": "sysadm", "home": null, "login_class": null, "move_home": false, "name": "user", "non_unique": false, "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "remove": false, "shell": "/bin/bash", "skeleton": "/etc/skel", "ssh_key_bits": "2048", "ssh_key_comment": "ansible-generated on sonata", "ssh_key_file": null, "ssh_key_passphrase": null, "ssh_key_type": "rsa", "state": "present", "system": false, "uid": "60004", "update_password": "always"}, "module_name": "user"}, "msg": "/usr/sbin/useradd: invalid option -- 'N'\nTry `useradd --help' or `useradd --usage' for more information.\n", "name": "sysadm", "rc": 2}
```
see: https://github.com/ansible/ansible-modules-core/blob/76b7de943b065a831fe8639aa0348ebceee1ae02/system/user.py#L345
It looks like Ansible defaults to appending -N to the useradd command when the system is not Red Hat, but SLES 11 SP1-SP3 do not support the -N flag.
| main | user module fails on issue type bug report component name user ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration deprecation warnings false os environment source n a happens on rhel osx target summary user module fails when user does not exist but group does steps to reproduce name configure usergroup group name usergroup gid state present name configure user account user name user shell bin bash skeleton etc skel password password groups usergroup append true no log true expected results user is created and added to the indicated group actual results fatal failed changed false failed true invocation module args append true comment null createhome true expires null force false generate ssh key null group null groups sysadm home null login class null move home false name user non unique false password value specified in no log parameter remove false shell bin bash skeleton etc skel ssh key bits ssh key comment ansible generated on sonata ssh key file null ssh key passphrase null ssh key type rsa state present system false uid update password always module name user msg usr sbin useradd invalid option n ntry useradd help or useradd usage for more information n name sysadm rc see looks like ansible defaults to appending n to the useradd command when the system is not redhat but do not support the n flag | 1 |
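For illustration only — this is not the actual `user.py` code — the kind of fix the report above points toward is to make the `-N` flag conditional on the target platform, since the SLES 11 useradd does not know that option. The function name, parameters, and the distribution/version check below are all assumptions made for the sketch.

```python
# Hypothetical sketch of platform-conditional argument building for useradd.
def build_useradd_args(name, distribution, version, create_group=False):
    # Assumed rule: SuSE/SLES 11 useradd has no -N (--no-user-group) option.
    supports_no_user_group = not (distribution.lower().startswith("suse") and version.startswith("11"))
    args = ["/usr/sbin/useradd"]
    if not create_group and supports_no_user_group:
        args.append("-N")
    args.append(name)
    return args

print(build_useradd_args("user", "SuSE", "11.3"))    # ['/usr/sbin/useradd', 'user']
print(build_useradd_args("user", "Ubuntu", "16.04")) # ['/usr/sbin/useradd', '-N', 'user']
```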
564,855 | 16,743,357,505 | IssuesEvent | 2021-06-11 12:43:52 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | idmsa.apple.com - site is not usable | browser-firefox-ios os-ios priority-critical | <!-- @browser: Firefox iOS 34.0 -->
<!-- @ua_header: Mozilla/5.0 (iPhone; CPU iPhone OS 14_6 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) FxiOS/34.0 Mobile/15E148 Safari/605.1.15 -->
<!-- @reported_with: mobile-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/76221 -->
<!-- @extra_labels: browser-firefox-ios -->
**URL**: https://idmsa.apple.com/IDMSWebAuth/signin?path=%2F%2Fcreate%2Fquestion%3FarticleId%3DHT204306%26title%3DConfig%2Bfile%2Bquestion%26login%3Dtrue
**Browser / Version**: Firefox iOS 34.0
**Operating System**: iOS 14.6
**Tested Another Browser**: No
**Problem type**: Site is not usable
**Description**: Unable to login
**Steps to Reproduce**:
Apple ID doesn’t load after opening the page idmsa.apple.com
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2021/6/b46b0e57-7219-4c7a-ac34-13f9051237cf.jpg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | idmsa.apple.com - site is not usable - <!-- @browser: Firefox iOS 34.0 -->
<!-- @ua_header: Mozilla/5.0 (iPhone; CPU iPhone OS 14_6 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) FxiOS/34.0 Mobile/15E148 Safari/605.1.15 -->
<!-- @reported_with: mobile-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/76221 -->
<!-- @extra_labels: browser-firefox-ios -->
**URL**: https://idmsa.apple.com/IDMSWebAuth/signin?path=%2F%2Fcreate%2Fquestion%3FarticleId%3DHT204306%26title%3DConfig%2Bfile%2Bquestion%26login%3Dtrue
**Browser / Version**: Firefox iOS 34.0
**Operating System**: iOS 14.6
**Tested Another Browser**: No
**Problem type**: Site is not usable
**Description**: Unable to login
**Steps to Reproduce**:
Apple ID doesn’t load after opening the page idmsa.apple.com
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2021/6/b46b0e57-7219-4c7a-ac34-13f9051237cf.jpg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | non_main | idmsa apple com site is not usable url browser version firefox ios operating system ios tested another browser no problem type site is not usable description unable to login steps to reproduce don’t loading apple id after open page idmsa apple com view the screenshot img alt screenshot src browser configuration none from with ❤️ | 0 |
1,219 | 9,695,259,517 | IssuesEvent | 2019-05-24 21:43:41 | MakeAWishFoundation/SwiftyMocky | https://api.github.com/repos/MakeAWishFoundation/SwiftyMocky | opened | Extract integration tests into separate repository | automation enhancement | We are now testing generation and test suite for swift 4.2 and 5.0, on iOS and tvOS.
But we are missing integration tests for SwiftyMocky:
- as a unit tests pod
- as a UI tests pod
- as a prototyping pod in main app
- carthage integration for test targets
- carthage integration for app targets
We could extract additional repository and link it to the Travis, in order to maintain test projects with separate (integration only) test suite. | 1.0 | Extract integration tests into separate repository - We are now testing generation and test suite for swift 4.2 and 5.0, on iOS and tvOS.
But we are missing integration tests for SwiftyMocky:
- as a unit tests pod
- as a UI tests pod
- as a prototyping pod in main app
- carthage integration for test targets
- carthage integration for app targets
We could extract additional repository and link it to the Travis, in order to maintain test projects with separate (integration only) test suite. | non_main | extract integration tests into separate repository we are now testing generation and test suite for swift and on ios and tvos but we are missing integration tests for swiftymocky as a unit tests pod as a ui tests pod as a prototyping pod in main app carthage integration for test targets carthage integration for app targets we could extract additional repository and link it to the travis in order to maintain test projects with separate integration only test suite | 0 |
993 | 4,756,825,059 | IssuesEvent | 2016-10-24 15:00:15 | Particular/NServiceBus.AzureStorageQueues | https://api.github.com/repos/Particular/NServiceBus.AzureStorageQueues | opened | Consider providing migration tool from AzureStorage based timeouts to Native Delayed Delivery | Tag: Maintainer Prio | /cc @SeanFeldman | True | Consider providing migration tool from AzureStorage based timeouts to Native Delayed Delivery - /cc @SeanFeldman | main | consider providing migration tool from azurestorage based timeouts to native delayed delivery cc seanfeldman | 1 |
5,816 | 30,792,425,568 | IssuesEvent | 2023-07-31 17:10:25 | jupyter-naas/awesome-notebooks | https://api.github.com/repos/jupyter-naas/awesome-notebooks | closed | JSON - Convert Python Objects to JSON | templates maintainer | This notebook will show how to convert Python objects to JSON and how to deserialize JSON back into Python objects. It is useful for organizations that need to store data in a structured format.
| True | JSON - Convert Python Objects to JSON - This notebook will show how to convert Python objects to JSON and how to deserialize JSON back into Python objects. It is useful for organizations that need to store data in a structured format.
| main | json convert python objects to json this notebook will show how to convert python objects to json and how to deserialize json back into python objects it is usefull for organizations that need to store data in a structured format | 1 |
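As a quick illustration of what the notebook in the record above covers, the standard-library `json` module handles both directions; the sample object below is made up for the example.

```python
import json

record = {"name": "Ada", "tags": ["json", "demo"], "active": True}

encoded = json.dumps(record, indent=2)  # Python object -> JSON text
decoded = json.loads(encoded)           # JSON text -> Python object

assert decoded == record
print(encoded)
```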
832,316 | 32,077,494,389 | IssuesEvent | 2023-09-25 12:01:04 | googleapis/google-cloud-ruby | https://api.github.com/repos/googleapis/google-cloud-ruby | closed | [Nightly CI Failures] Failures detected for google-cloud-dataform | type: bug priority: p1 nightly failure | At 2023-09-09 09:39:17 UTC, detected failures in google-cloud-dataform for: test.
The CI logs can be found [here](https://github.com/googleapis/google-cloud-ruby/actions/runs/6129868852)
report_key_0c1d77c9c06abfcf4876163de8ffa295 | 1.0 | [Nightly CI Failures] Failures detected for google-cloud-dataform - At 2023-09-09 09:39:17 UTC, detected failures in google-cloud-dataform for: test.
The CI logs can be found [here](https://github.com/googleapis/google-cloud-ruby/actions/runs/6129868852)
report_key_0c1d77c9c06abfcf4876163de8ffa295 | non_main | failures detected for google cloud dataform at utc detected failures in google cloud dataform for test the ci logs can be found report key | 0 |
173,469 | 21,165,423,052 | IssuesEvent | 2022-04-07 13:13:10 | metao1/springboot-redis-lettuce | https://api.github.com/repos/metao1/springboot-redis-lettuce | opened | CVE-2021-36090 (High) detected in commons-compress-1.20.jar | security vulnerability | ## CVE-2021-36090 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-compress-1.20.jar</b></p></summary>
<p>Apache Commons Compress software defines an API for working with
compression and archive formats. These include: bzip2, gzip, pack200,
lzma, xz, Snappy, traditional Unix Compress, DEFLATE, DEFLATE64, LZ4,
Brotli, Zstandard and ar, cpio, jar, tar, zip, dump, 7z, arj.</p>
<p>Library home page: <a href="https://commons.apache.org/proper/commons-compress/">https://commons.apache.org/proper/commons-compress/</a></p>
<p>Path to dependency file: /build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.commons/commons-compress/1.20/b8df472b31e1f17c232d2ad78ceb1c84e00c641b/commons-compress-1.20.jar</p>
<p>
Dependency Hierarchy:
- testcontainers-1.16.0.jar (Root Library)
- :x: **commons-compress-1.20.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/metao1/springboot-redis-lettuce/commit/0965ce53a268e17f5e792de48449cce9155ba4b0">0965ce53a268e17f5e792de48449cce9155ba4b0</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
When reading a specially crafted ZIP archive, Compress can be made to allocate large amounts of memory that finally leads to an out of memory error even for very small inputs. This could be used to mount a denial of service attack against services that use Compress' zip package.
<p>Publish Date: 2021-07-13
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-36090>CVE-2021-36090</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://commons.apache.org/proper/commons-compress/security-reports.html">https://commons.apache.org/proper/commons-compress/security-reports.html</a></p>
<p>Release Date: 2021-07-13</p>
<p>Fix Resolution: org.apache.commons:commons-compress:1.21</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-36090 (High) detected in commons-compress-1.20.jar - ## CVE-2021-36090 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-compress-1.20.jar</b></p></summary>
<p>Apache Commons Compress software defines an API for working with
compression and archive formats. These include: bzip2, gzip, pack200,
lzma, xz, Snappy, traditional Unix Compress, DEFLATE, DEFLATE64, LZ4,
Brotli, Zstandard and ar, cpio, jar, tar, zip, dump, 7z, arj.</p>
<p>Library home page: <a href="https://commons.apache.org/proper/commons-compress/">https://commons.apache.org/proper/commons-compress/</a></p>
<p>Path to dependency file: /build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.commons/commons-compress/1.20/b8df472b31e1f17c232d2ad78ceb1c84e00c641b/commons-compress-1.20.jar</p>
<p>
Dependency Hierarchy:
- testcontainers-1.16.0.jar (Root Library)
- :x: **commons-compress-1.20.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/metao1/springboot-redis-lettuce/commit/0965ce53a268e17f5e792de48449cce9155ba4b0">0965ce53a268e17f5e792de48449cce9155ba4b0</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
When reading a specially crafted ZIP archive, Compress can be made to allocate large amounts of memory that finally leads to an out of memory error even for very small inputs. This could be used to mount a denial of service attack against services that use Compress' zip package.
<p>Publish Date: 2021-07-13
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-36090>CVE-2021-36090</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://commons.apache.org/proper/commons-compress/security-reports.html">https://commons.apache.org/proper/commons-compress/security-reports.html</a></p>
<p>Release Date: 2021-07-13</p>
<p>Fix Resolution: org.apache.commons:commons-compress:1.21</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve high detected in commons compress jar cve high severity vulnerability vulnerable library commons compress jar apache commons compress software defines an api for working with compression and archive formats these include gzip lzma xz snappy traditional unix compress deflate brotli zstandard and ar cpio jar tar zip dump arj library home page a href path to dependency file build gradle path to vulnerable library home wss scanner gradle caches modules files org apache commons commons compress commons compress jar dependency hierarchy testcontainers jar root library x commons compress jar vulnerable library found in head commit a href found in base branch master vulnerability details when reading a specially crafted zip archive compress can be made to allocate large amounts of memory that finally leads to an out of memory error even for very small inputs this could be used to mount a denial of service attack against services that use compress zip package publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache commons commons compress step up your open source security game with whitesource | 0 |
1,045 | 4,858,106,704 | IssuesEvent | 2016-11-12 23:33:58 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | lambda module doesn't upload code from S3 | affects_2.3 aws bug_report cloud waiting_on_maintainer | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lambda
##### ANSIBLE VERSION
```
# ansible --version
ansible 2.3.0 (devel 4549604cc7) last updated 2016/10/19 21:51:04 (GMT +200)
lib/ansible/modules/core: (detached HEAD 7a7ff3ebca) last updated 2016/10/19 21:51:11 (GMT +200)
lib/ansible/modules/extras: (detached HEAD accb04b867) last updated 2016/10/19 21:51:17 (GMT +200)
config file =
configured module search path = Default w/o overrides
```
##### CONFIGURATION
n/a
##### OS / ENVIRONMENT
The controller machine is a Docker container running Debian Wheezy. I will gladly share the contents of the Dockerfile, if required.
##### SUMMARY
When trying to create or update a lambda function by using the code from an S3 bucket, the module crashes.
I believe the same issue was brought up in the comments of this PR: https://github.com/ansible/ansible-modules-extras/pull/1270#issuecomment-248341279
##### STEPS TO REPRODUCE
```
---
- hosts: localhost
connection: local
any_errors_fatal: True
vars:
lambda_functions:
- name: testlambda
description: 'Testing the lambda module'
s3_bucket: 'bucket_name'
s3_key: 'codefor-lambda.zip'
runtime: 'python2.7'
timeout: 3
handler: 'codefor.run'
memory_size: 128
role_arn: 'arn:aws:iam::123456789012:role/codefor'
region: 'eu-central-1'
tasks:
- name: Lambda function code should be updated
lambda:
name: "{{ item.name }}"
description: "{{ item.description }}"
s3_bucket: "{{ item.s3_bucket }}"
s3_key: "{{ item.s3_key }}"
s3_object_version: Null
runtime: "{{ item.runtime }}"
timeout: "{{ item.timeout }}"
handler: "{{ item.handler }}"
memory_size: "{{ item.memory_size }}"
role: "{{ item.role_arn }}"
region: "{{ item.region }}"
state: present
register: _lambda_functions
with_items: "{{ lambda_functions }}"
tags:
- lambda
- debug:
var: _lambda_functions
when: deploy_debug
tags:
- lambda
```
##### EXPECTED RESULTS
The lambda function should be created if it doesn't exist. If it exists, its code should be updated with the one from the S3 bucket.
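For reference, here is a minimal boto3 sketch of the kind of S3-based `create_function` call that should succeed with the values from the playbook above. This is only an illustration of the expected behaviour, not the module's actual code path; bucket, key, role and region are the placeholders used earlier.
```python
# Illustrative only: the S3-backed CreateFunction call the module is expected to perform.
# All values are the placeholders from the playbook above.
import boto3

client = boto3.client("lambda", region_name="eu-central-1")

response = client.create_function(
    FunctionName="testlambda",
    Description="Testing the lambda module",
    Runtime="python2.7",
    Role="arn:aws:iam::123456789012:role/codefor",
    Handler="codefor.run",
    Timeout=3,
    MemorySize=128,
    Code={
        "S3Bucket": "bucket_name",
        "S3Key": "codefor-lambda.zip",
        # "S3ObjectVersion": "..."  # optional, only when pinning a specific object version
    },
)
print(response["FunctionArn"])
```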
##### ACTUAL RESULTS
```
# ansible-playbook lambda.yml --extra-vars deploy_debug=True --inventory-file=/dev/null -vvvv
Using /playbook/ansible.cfg as config file
[WARNING]: provided hosts list is empty, only localhost is available
Loading callback plugin default of type stdout, v2.0 from /ansible/lib/ansible/plugins/callback/__init__.pyc
PLAYBOOK: lambda.yml ***********************************************************
1 plays in lambda.yml
PLAY [localhost] ***************************************************************
TASK [setup] *******************************************************************
Using module file /ansible/lib/ansible/modules/core/system/setup.py
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1477046665.89-211253865119603 `" && echo ansible-tmp-1477046665.89-211253865119603="` echo $HOME/.ansible/tmp/ansible-tmp-1477046665.89-211253865119603 `" ) && sleep 0'
<127.0.0.1> PUT /tmp/tmpYkD9WE TO /root/.ansible/tmp/ansible-tmp-1477046665.89-211253865119603/setup.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1477046665.89-211253865119603/ /root/.ansible/tmp/ansible-tmp-1477046665.89-211253865119603/setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1477046665.89-211253865119603/setup.py; rm -rf "/root/.ansible/tmp/ansible-tmp-1477046665.89-211253865119603/" > /dev/null 2>&1 && sleep 0'
ok: [localhost]
TASK [Lambda function code should be updated] **********************************
task path: /playbook/lambda.yml:19
Using module file /ansible/lib/ansible/modules/extras/cloud/amazon/lambda.py
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1477046667.88-263609977357544 `" && echo ansible-tmp-1477046667.88-263609977357544="` echo $HOME/.ansible/tmp/ansible-tmp-1477046667.88-263609977357544 `" ) && sleep 0'
<127.0.0.1> PUT /tmp/tmphRcHhL TO /root/.ansible/tmp/ansible-tmp-1477046667.88-263609977357544/lambda.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1477046667.88-263609977357544/ /root/.ansible/tmp/ansible-tmp-1477046667.88-263609977357544/lambda.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1477046667.88-263609977357544/lambda.py; rm -rf "/root/.ansible/tmp/ansible-tmp-1477046667.88-263609977357544/" > /dev/null 2>&1 && sleep 0'
An exception occurred during task execution. The full traceback is:
Traceback (most recent call last):
File "/tmp/ansible_7XhV3c/ansible_module_lambda.py", line 467, in <module>
main()
File "/tmp/ansible_7XhV3c/ansible_module_lambda.py", line 436, in main
response = client.create_function(**func_kwargs)
File "/usr/local/lib/python2.7/dist-packages/botocore/client.py", line 159, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/usr/local/lib/python2.7/dist-packages/botocore/client.py", line 483, in _make_api_call
operation_model, request_dict)
File "/usr/local/lib/python2.7/dist-packages/botocore/endpoint.py", line 141, in make_request
return self._send_request(request_dict, operation_model)
File "/usr/local/lib/python2.7/dist-packages/botocore/endpoint.py", line 168, in _send_request
request, operation_model, attempts)
File "/usr/local/lib/python2.7/dist-packages/botocore/endpoint.py", line 233, in _get_response
response_dict, operation_model.output_shape)
File "/usr/local/lib/python2.7/dist-packages/botocore/parsers.py", line 209, in parse
parsed = self._do_error_parse(response, shape)
File "/usr/local/lib/python2.7/dist-packages/botocore/parsers.py", line 687, in _do_error_parse
error = super(RestJSONParser, self)._do_error_parse(response, shape)
File "/usr/local/lib/python2.7/dist-packages/botocore/parsers.py", line 542, in _do_error_parse
body = self._parse_body_as_json(response['body'])
File "/usr/local/lib/python2.7/dist-packages/botocore/parsers.py", line 574, in _parse_body_as_json
original_parsed = json.loads(body)
File "/usr/lib/python2.7/json/__init__.py", line 326, in loads
return _default_decoder.decode(s)
File "/usr/lib/python2.7/json/decoder.py", line 365, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib/python2.7/json/decoder.py", line 383, in raw_decode
raise ValueError("No JSON object could be decoded")
ValueError: No JSON object could be decoded
failed: [localhost] (item={u'role_arn': u'arn:aws:iam::123456789012:role/codefor', u'name': u'testlambda', u's3_key': u'codefor-lambda.zip', u's3_bucket': u'bucket_name', u'handler': u'codefor.run', u'memory_size': 128, u'timeout': 3, u'runtime': u'python2.7', u'region': u'eu-central-1', u'description': u"Testing the lambda module'"}) => {
"failed": true,
"invocation": {
"module_name": "lambda"
},
"item": {
"description": "Testing the lambda module'",
"handler": "codefor.run",
"memory_size": 128,
"name": "testlambda",
"region": "eu-central-1",
"role_arn": "arn:aws:iam::123456789012:role/codefor",
"runtime": "python2.7",
"s3_bucket": "bucket_name",
"s3_key": "codefor-lambda.zip",
"timeout": 3
},
"module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_7XhV3c/ansible_module_lambda.py\", line 467, in <module>\n main()\n File \"/tmp/ansible_7XhV3c/ansible_module_lambda.py\", line 436, in main\n response = client.create_function(**func_kwargs)\n File \"/usr/local/lib/python2.7/dist-packages/botocore/client.py\", line 159, in _api_call\n return self._make_api_call(operation_name, kwargs)\n File \"/usr/local/lib/python2.7/dist-packages/botocore/client.py\", line 483, in _make_api_call\n operation_model, request_dict)\n File \"/usr/local/lib/python2.7/dist-packages/botocore/endpoint.py\", line 141, in make_request\n return self._send_request(request_dict, operation_model)\n File \"/usr/local/lib/python2.7/dist-packages/botocore/endpoint.py\", line 168, in _send_request\n request, operation_model, attempts)\n File \"/usr/local/lib/python2.7/dist-packages/botocore/endpoint.py\", line 233, in _get_response\n response_dict, operation_model.output_shape)\n File \"/usr/local/lib/python2.7/dist-packages/botocore/parsers.py\", line 209, in parse\n parsed = self._do_error_parse(response, shape)\n File \"/usr/local/lib/python2.7/dist-packages/botocore/parsers.py\", line 687, in _do_error_parse\n error = super(RestJSONParser, self)._do_error_parse(response, shape)\n File \"/usr/local/lib/python2.7/dist-packages/botocore/parsers.py\", line 542, in _do_error_parse\n body = self._parse_body_as_json(response['body'])\n File \"/usr/local/lib/python2.7/dist-packages/botocore/parsers.py\", line 574, in _parse_body_as_json\n original_parsed = json.loads(body)\n File \"/usr/lib/python2.7/json/__init__.py\", line 326, in loads\n return _default_decoder.decode(s)\n File \"/usr/lib/python2.7/json/decoder.py\", line 365, in decode\n obj, end = self.raw_decode(s, idx=_w(s, 0).end())\n File \"/usr/lib/python2.7/json/decoder.py\", line 383, in raw_decode\n raise ValueError(\"No JSON object could be decoded\")\nValueError: No JSON object could be decoded\n",
"module_stdout": "",
"msg": "MODULE FAILURE"
}
NO MORE HOSTS LEFT *************************************************************
PLAY RECAP *********************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=1
```
| True | lambda module doesn't upload code from S3 - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lambda
##### ANSIBLE VERSION
```
# ansible --version
ansible 2.3.0 (devel 4549604cc7) last updated 2016/10/19 21:51:04 (GMT +200)
lib/ansible/modules/core: (detached HEAD 7a7ff3ebca) last updated 2016/10/19 21:51:11 (GMT +200)
lib/ansible/modules/extras: (detached HEAD accb04b867) last updated 2016/10/19 21:51:17 (GMT +200)
config file =
configured module search path = Default w/o overrides
```
##### CONFIGURATION
n/a
##### OS / ENVIRONMENT
The controller machine is a Docker container running Debian Wheezy. I will gladly share the contents of the Dockerfile, if required.
##### SUMMARY
When trying to create or update a lambda function by using the code from an S3 bucket, the module crashes.
I believe the same issue was brought up in the comments of this PR: https://github.com/ansible/ansible-modules-extras/pull/1270#issuecomment-248341279
##### STEPS TO REPRODUCE
```
---
- hosts: localhost
connection: local
any_errors_fatal: True
vars:
lambda_functions:
- name: testlambda
description: 'Testing the lambda module'
s3_bucket: 'bucket_name'
s3_key: 'codefor-lambda.zip'
runtime: 'python2.7'
timeout: 3
handler: 'codefor.run'
memory_size: 128
role_arn: 'arn:aws:iam::123456789012:role/codefor'
region: 'eu-central-1'
tasks:
- name: Lambda function code should be updated
lambda:
name: "{{ item.name }}"
description: "{{ item.description }}"
s3_bucket: "{{ item.s3_bucket }}"
s3_key: "{{ item.s3_key }}"
s3_object_version: Null
runtime: "{{ item.runtime }}"
timeout: "{{ item.timeout }}"
handler: "{{ item.handler }}"
memory_size: "{{ item.memory_size }}"
role: "{{ item.role_arn }}"
region: "{{ item.region }}"
state: present
register: _lambda_functions
with_items: "{{ lambda_functions }}"
tags:
- lambda
- debug:
var: _lambda_functions
when: deploy_debug
tags:
- lambda
```
##### EXPECTED RESULTS
The lambda function should be created if it doesn't exist. If it exists, its code should be updated with the one from the S3 bucket.
##### ACTUAL RESULTS
```
# ansible-playbook lambda.yml --extra-vars deploy_debug=True --inventory-file=/dev/null -vvvv
Using /playbook/ansible.cfg as config file
[WARNING]: provided hosts list is empty, only localhost is available
Loading callback plugin default of type stdout, v2.0 from /ansible/lib/ansible/plugins/callback/__init__.pyc
PLAYBOOK: lambda.yml ***********************************************************
1 plays in lambda.yml
PLAY [localhost] ***************************************************************
TASK [setup] *******************************************************************
Using module file /ansible/lib/ansible/modules/core/system/setup.py
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1477046665.89-211253865119603 `" && echo ansible-tmp-1477046665.89-211253865119603="` echo $HOME/.ansible/tmp/ansible-tmp-1477046665.89-211253865119603 `" ) && sleep 0'
<127.0.0.1> PUT /tmp/tmpYkD9WE TO /root/.ansible/tmp/ansible-tmp-1477046665.89-211253865119603/setup.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1477046665.89-211253865119603/ /root/.ansible/tmp/ansible-tmp-1477046665.89-211253865119603/setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1477046665.89-211253865119603/setup.py; rm -rf "/root/.ansible/tmp/ansible-tmp-1477046665.89-211253865119603/" > /dev/null 2>&1 && sleep 0'
ok: [localhost]
TASK [Lambda function code should be updated] **********************************
task path: /playbook/lambda.yml:19
Using module file /ansible/lib/ansible/modules/extras/cloud/amazon/lambda.py
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1477046667.88-263609977357544 `" && echo ansible-tmp-1477046667.88-263609977357544="` echo $HOME/.ansible/tmp/ansible-tmp-1477046667.88-263609977357544 `" ) && sleep 0'
<127.0.0.1> PUT /tmp/tmphRcHhL TO /root/.ansible/tmp/ansible-tmp-1477046667.88-263609977357544/lambda.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1477046667.88-263609977357544/ /root/.ansible/tmp/ansible-tmp-1477046667.88-263609977357544/lambda.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1477046667.88-263609977357544/lambda.py; rm -rf "/root/.ansible/tmp/ansible-tmp-1477046667.88-263609977357544/" > /dev/null 2>&1 && sleep 0'
An exception occurred during task execution. The full traceback is:
Traceback (most recent call last):
File "/tmp/ansible_7XhV3c/ansible_module_lambda.py", line 467, in <module>
main()
File "/tmp/ansible_7XhV3c/ansible_module_lambda.py", line 436, in main
response = client.create_function(**func_kwargs)
File "/usr/local/lib/python2.7/dist-packages/botocore/client.py", line 159, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/usr/local/lib/python2.7/dist-packages/botocore/client.py", line 483, in _make_api_call
operation_model, request_dict)
File "/usr/local/lib/python2.7/dist-packages/botocore/endpoint.py", line 141, in make_request
return self._send_request(request_dict, operation_model)
File "/usr/local/lib/python2.7/dist-packages/botocore/endpoint.py", line 168, in _send_request
request, operation_model, attempts)
File "/usr/local/lib/python2.7/dist-packages/botocore/endpoint.py", line 233, in _get_response
response_dict, operation_model.output_shape)
File "/usr/local/lib/python2.7/dist-packages/botocore/parsers.py", line 209, in parse
parsed = self._do_error_parse(response, shape)
File "/usr/local/lib/python2.7/dist-packages/botocore/parsers.py", line 687, in _do_error_parse
error = super(RestJSONParser, self)._do_error_parse(response, shape)
File "/usr/local/lib/python2.7/dist-packages/botocore/parsers.py", line 542, in _do_error_parse
body = self._parse_body_as_json(response['body'])
File "/usr/local/lib/python2.7/dist-packages/botocore/parsers.py", line 574, in _parse_body_as_json
original_parsed = json.loads(body)
File "/usr/lib/python2.7/json/__init__.py", line 326, in loads
return _default_decoder.decode(s)
File "/usr/lib/python2.7/json/decoder.py", line 365, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib/python2.7/json/decoder.py", line 383, in raw_decode
raise ValueError("No JSON object could be decoded")
ValueError: No JSON object could be decoded
failed: [localhost] (item={u'role_arn': u'arn:aws:iam::123456789012:role/codefor', u'name': u'testlambda', u's3_key': u'codefor-lambda.zip', u's3_bucket': u'bucket_name', u'handler': u'codefor.run', u'memory_size': 128, u'timeout': 3, u'runtime': u'python2.7', u'region': u'eu-central-1', u'description': u"Testing the lambda module'"}) => {
"failed": true,
"invocation": {
"module_name": "lambda"
},
"item": {
"description": "Testing the lambda module'",
"handler": "codefor.run",
"memory_size": 128,
"name": "testlambda",
"region": "eu-central-1",
"role_arn": "arn:aws:iam::123456789012:role/codefor",
"runtime": "python2.7",
"s3_bucket": "bucket_name",
"s3_key": "codefor-lambda.zip",
"timeout": 3
},
"module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_7XhV3c/ansible_module_lambda.py\", line 467, in <module>\n main()\n File \"/tmp/ansible_7XhV3c/ansible_module_lambda.py\", line 436, in main\n response = client.create_function(**func_kwargs)\n File \"/usr/local/lib/python2.7/dist-packages/botocore/client.py\", line 159, in _api_call\n return self._make_api_call(operation_name, kwargs)\n File \"/usr/local/lib/python2.7/dist-packages/botocore/client.py\", line 483, in _make_api_call\n operation_model, request_dict)\n File \"/usr/local/lib/python2.7/dist-packages/botocore/endpoint.py\", line 141, in make_request\n return self._send_request(request_dict, operation_model)\n File \"/usr/local/lib/python2.7/dist-packages/botocore/endpoint.py\", line 168, in _send_request\n request, operation_model, attempts)\n File \"/usr/local/lib/python2.7/dist-packages/botocore/endpoint.py\", line 233, in _get_response\n response_dict, operation_model.output_shape)\n File \"/usr/local/lib/python2.7/dist-packages/botocore/parsers.py\", line 209, in parse\n parsed = self._do_error_parse(response, shape)\n File \"/usr/local/lib/python2.7/dist-packages/botocore/parsers.py\", line 687, in _do_error_parse\n error = super(RestJSONParser, self)._do_error_parse(response, shape)\n File \"/usr/local/lib/python2.7/dist-packages/botocore/parsers.py\", line 542, in _do_error_parse\n body = self._parse_body_as_json(response['body'])\n File \"/usr/local/lib/python2.7/dist-packages/botocore/parsers.py\", line 574, in _parse_body_as_json\n original_parsed = json.loads(body)\n File \"/usr/lib/python2.7/json/__init__.py\", line 326, in loads\n return _default_decoder.decode(s)\n File \"/usr/lib/python2.7/json/decoder.py\", line 365, in decode\n obj, end = self.raw_decode(s, idx=_w(s, 0).end())\n File \"/usr/lib/python2.7/json/decoder.py\", line 383, in raw_decode\n raise ValueError(\"No JSON object could be decoded\")\nValueError: No JSON object could be decoded\n",
"module_stdout": "",
"msg": "MODULE FAILURE"
}
NO MORE HOSTS LEFT *************************************************************
PLAY RECAP *********************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=1
```
| main | lambda module doesn t upload code from issue type bug report component name lambda ansible version ansible version ansible devel last updated gmt lib ansible modules core detached head last updated gmt lib ansible modules extras detached head last updated gmt config file configured module search path default w o overrides configuration n a os environment controller machine is a docker container running debian wheezy will gladly share the contents of the dockerfile if required summary when trying to create or update a lambda function by using the code from an bucket the module crashes i believe the same issue was brought up in the comments of this pr steps to reproduce hosts localhost connection local any errors fatal true vars lambda functions name testlambda description testing the lambda module bucket bucket name key codefor lambda zip runtime timeout handler codefor run memory size role arn arn aws iam role codefor region eu central tasks name lambda function code should be updated lambda name item name description item description bucket item bucket key item key object version null runtime item runtime timeout item timeout handler item handler memory size item memory size role item role arn region item region state present register lambda functions with items lambda functions tags lambda debug var lambda functions when deploy debug tags lambda expected results the lambda function would be created if it doesn t exist if it it exists its code should be updated with the one from the bucket actual results ansible playbook lambda yml extra vars deploy debug true inventory file dev null vvvv using playbook ansible cfg as config file provided hosts list is empty only localhost is available loading callback plugin default of type stdout from ansible lib ansible plugins callback init pyc playbook lambda yml plays in lambda yml play task using module file ansible lib ansible modules core system setup py establish local connection for user root exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to root ansible tmp ansible tmp setup py exec bin sh c chmod u x root ansible tmp ansible tmp root ansible tmp ansible tmp setup py sleep exec bin sh c usr bin python root ansible tmp ansible tmp setup py rm rf root ansible tmp ansible tmp dev null sleep ok task task path playbook lambda yml using module file ansible lib ansible modules extras cloud amazon lambda py establish local connection for user root exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp tmphrchhl to root ansible tmp ansible tmp lambda py exec bin sh c chmod u x root ansible tmp ansible tmp root ansible tmp ansible tmp lambda py sleep exec bin sh c usr bin python root ansible tmp ansible tmp lambda py rm rf root ansible tmp ansible tmp dev null sleep an exception occurred during task execution the full traceback is traceback most recent call last file tmp ansible ansible module lambda py line in main file tmp ansible ansible module lambda py line in main response client create function func kwargs file usr local lib dist packages botocore client py line in api call return self make api call operation name kwargs file usr local lib dist packages botocore client py line in make api call operation model request dict file usr local lib dist packages botocore endpoint py line in make request return self send request request dict operation model file usr local lib dist packages 
botocore endpoint py line in send request request operation model attempts file usr local lib dist packages botocore endpoint py line in get response response dict operation model output shape file usr local lib dist packages botocore parsers py line in parse parsed self do error parse response shape file usr local lib dist packages botocore parsers py line in do error parse error super restjsonparser self do error parse response shape file usr local lib dist packages botocore parsers py line in do error parse body self parse body as json response file usr local lib dist packages botocore parsers py line in parse body as json original parsed json loads body file usr lib json init py line in loads return default decoder decode s file usr lib json decoder py line in decode obj end self raw decode s idx w s end file usr lib json decoder py line in raw decode raise valueerror no json object could be decoded valueerror no json object could be decoded failed item u role arn u arn aws iam role codefor u name u testlambda u key u codefor lambda zip u bucket u bucket name u handler u codefor run u memory size u timeout u runtime u u region u eu central u description u testing the lambda module failed true invocation module name lambda item description testing the lambda module handler codefor run memory size name testlambda region eu central role arn arn aws iam role codefor runtime bucket bucket name key codefor lambda zip timeout module stderr traceback most recent call last n file tmp ansible ansible module lambda py line in n main n file tmp ansible ansible module lambda py line in main n response client create function func kwargs n file usr local lib dist packages botocore client py line in api call n return self make api call operation name kwargs n file usr local lib dist packages botocore client py line in make api call n operation model request dict n file usr local lib dist packages botocore endpoint py line in make request n return self send request request dict operation model n file usr local lib dist packages botocore endpoint py line in send request n request operation model attempts n file usr local lib dist packages botocore endpoint py line in get response n response dict operation model output shape n file usr local lib dist packages botocore parsers py line in parse n parsed self do error parse response shape n file usr local lib dist packages botocore parsers py line in do error parse n error super restjsonparser self do error parse response shape n file usr local lib dist packages botocore parsers py line in do error parse n body self parse body as json response n file usr local lib dist packages botocore parsers py line in parse body as json n original parsed json loads body n file usr lib json init py line in loads n return default decoder decode s n file usr lib json decoder py line in decode n obj end self raw decode s idx w s end n file usr lib json decoder py line in raw decode n raise valueerror no json object could be decoded nvalueerror no json object could be decoded n module stdout msg module failure no more hosts left play recap localhost ok changed unreachable failed | 1 |
197,731 | 22,606,074,512 | IssuesEvent | 2022-06-29 13:22:37 | crouchr/learnage | https://api.github.com/repos/crouchr/learnage | closed | CVE-2014-0191 (Low) detected in clamav-develclamav-0.98.4 - autoclosed | security vulnerability | ## CVE-2014-0191 - Low Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>clamav-develclamav-0.98.4</b></p></summary>
<p>
<p>ClamAV Development - FAQ is here: https://github.com/Cisco-Talos/clamav-faq</p>
<p>Library home page: <a href=https://github.com/vrtadmin/clamav-devel.git>https://github.com/vrtadmin/clamav-devel.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/crouchr/learnage/commit/a5f2b4a6eb346dbe0def97e83877b169dc4b8f8c">a5f2b4a6eb346dbe0def97e83877b169dc4b8f8c</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (3)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/blackrain2020/original-sources-3rd-party/clamav-0.98.4.tar/clamav-0.98.4/win32/3rdparty/libxml2/parser.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/blackrain2020/original-sources-3rd-party/clamav-0.98.4.tar/clamav-0.98.4/win32/3rdparty/libxml2/parser.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/blackrain2020/original-sources-3rd-party/clamav-0.98.4.tar/clamav-0.98.4/win32/3rdparty/libxml2/parser.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The xmlParserHandlePEReference function in parser.c in libxml2 before 2.9.2, as used in Web Listener in Oracle HTTP Server in Oracle Fusion Middleware 11.1.1.7.0, 12.1.2.0, and 12.1.3.0 and other products, loads external parameter entities regardless of whether entity substitution or validation is enabled, which allows remote attackers to cause a denial of service (resource consumption) via a crafted XML document.
<p>Publish Date: 2015-01-21
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2014-0191>CVE-2014-0191</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>3.7</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2014-0191">https://nvd.nist.gov/vuln/detail/CVE-2014-0191</a></p>
<p>Release Date: 2015-01-21</p>
<p>Fix Resolution: 2.9.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2014-0191 (Low) detected in clamav-develclamav-0.98.4 - autoclosed - ## CVE-2014-0191 - Low Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>clamav-develclamav-0.98.4</b></p></summary>
<p>
<p>ClamAV Development - FAQ is here: https://github.com/Cisco-Talos/clamav-faq</p>
<p>Library home page: <a href=https://github.com/vrtadmin/clamav-devel.git>https://github.com/vrtadmin/clamav-devel.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/crouchr/learnage/commit/a5f2b4a6eb346dbe0def97e83877b169dc4b8f8c">a5f2b4a6eb346dbe0def97e83877b169dc4b8f8c</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (3)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/blackrain2020/original-sources-3rd-party/clamav-0.98.4.tar/clamav-0.98.4/win32/3rdparty/libxml2/parser.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/blackrain2020/original-sources-3rd-party/clamav-0.98.4.tar/clamav-0.98.4/win32/3rdparty/libxml2/parser.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/blackrain2020/original-sources-3rd-party/clamav-0.98.4.tar/clamav-0.98.4/win32/3rdparty/libxml2/parser.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The xmlParserHandlePEReference function in parser.c in libxml2 before 2.9.2, as used in Web Listener in Oracle HTTP Server in Oracle Fusion Middleware 11.1.1.7.0, 12.1.2.0, and 12.1.3.0 and other products, loads external parameter entities regardless of whether entity substitution or validation is enabled, which allows remote attackers to cause a denial of service (resource consumption) via a crafted XML document.
<p>Publish Date: 2015-01-21
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2014-0191>CVE-2014-0191</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>3.7</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2014-0191">https://nvd.nist.gov/vuln/detail/CVE-2014-0191</a></p>
<p>Release Date: 2015-01-21</p>
<p>Fix Resolution: 2.9.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve low detected in clamav develclamav autoclosed cve low severity vulnerability vulnerable library clamav develclamav clamav development faq is here library home page a href found in head commit a href found in base branch master vulnerable source files original sources party clamav tar clamav parser c original sources party clamav tar clamav parser c original sources party clamav tar clamav parser c vulnerability details the xmlparserhandlepereference function in parser c in before as used in web listener in oracle http server in oracle fusion middleware and and other products loads external parameter entities regardless of whether entity substitution or validation is enabled which allows remote attackers to cause a denial of service resource consumption via a crafted xml document publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend | 0 |
154,139 | 24,252,892,160 | IssuesEvent | 2022-09-27 15:24:14 | grafana/grafana | https://api.github.com/repos/grafana/grafana | closed | GLDS- Storybook; Add Controls — Epic | help wanted area/frontend type/epic area/design-system | Storybook at the moment is quite inconsistent in the level of documentation of components and the existence of controls. This needs to be cleaned up to become consistent within itself and also with Figma.
Tasks:
- [x] Compile list of issues around storybook components documentation/controls
Stories missing controls; some of these might also need cleaning up so that the controls can work properly:
- [x] #54161
- [x] #54166
- [x] #54167
- [x] #52617
- [x] #54168
- [x] #54957
- [x] #54170
- [x] #54958
- [x] #54171
- [x] #54173
- [x] #54210
- [x] Add controls to `Legend` story
- [x] #51622
- [x] #51782
- [x] #54221
- [x] #54798
- [x] #54799
- [x] #52868
- [x] #54222
- [x] #53039
- [x] #54631
- [x] #53458
- [x] #53375
- [x] #53218
- [x] #53453
- [x] #54630
- [x] #54628
When adding controls, if you find a story that has a lot of different examples of the component in the same view (i.e. https://developers.grafana.com/ui/latest/index.html?path=/story/overlays-alert--examples), please leave it there and make sure it is called "Examples" and has no controls attached to it. You can then create a different story that shows only a single instance of that component and has controls attached to it.
To create an issue based on this look at the example of #51622, just add some similar description, add it to the same project as this epic and the tags `area/storybook`, `area/design-system`, `type/chore` and `good-first-issue` if you won't be picking it up immediately. | 1.0 | GLDS- Storybook; Add Controls — Epic - Storybook at the moment is quite inconsistent in the level of documentation of components and the existence of controls. This needs to be cleaned up to become consistent within itself and also with Figma.
Tasks:
- [x] Compile list of issues around storybook components documentation/controls
Stories missing controls; some of these might also need cleaning up so that the controls can work properly:
- [x] #54161
- [x] #54166
- [x] #54167
- [x] #52617
- [x] #54168
- [x] #54957
- [x] #54170
- [x] #54958
- [x] #54171
- [x] #54173
- [x] #54210
- [x] Add controls to `Legend` story
- [x] #51622
- [x] #51782
- [x] #54221
- [x] #54798
- [x] #54799
- [x] #52868
- [x] #54222
- [x] #53039
- [x] #54631
- [x] #53458
- [x] #53375
- [x] #53218
- [x] #53453
- [x] #54630
- [x] #54628
When adding controls, if you find a story that has a lot of different examples of the component in the same view (i.e. https://developers.grafana.com/ui/latest/index.html?path=/story/overlays-alert--examples), please leave it there and make sure it is called "Examples" and has no controls attached to it. You can then create a different story that shows only a single instance of that component and has controls attached to it.
To create an issue based on this look at the example of #51622, just add some similar description, add it to the same project as this epic and the tags `area/storybook`, `area/design-system`, `type/chore` and `good-first-issue` if you won't be picking it up immediately. | non_main | glds storybook add controls — epic storybook at the moment is quite inconsistent in the level of documentation of components and the existence of controls this needs to be cleaned up to become consistent within itself and also with figma tasks compile list of issues around storybook components documentation controls stories missing controls some of these might also need cleaning up so that the controls can work properly add controls to legend story when adding controls if you find a story that has a lot of different examples of the component in the same view i e please leave it there and make sure it is called examples and has no controls attached to it you can then create a different story that shows only a single instance of that component and has controls attached to it to create an issue based on this look at the example of just add some similar description add it to the same project as this epic and the tags area storybook area design system type chore and good first issue if you won t be picking it up immediately | 0 |
4,704 | 24,270,825,435 | IssuesEvent | 2022-09-28 10:07:17 | mozilla/foundation.mozilla.org | https://api.github.com/repos/mozilla/foundation.mozilla.org | closed | SPIKE - Investigate errors in SEO report | engineering Maintain | # Description
Investigate errors (cells B12 to B22) in the Grassriots report
Link to report: https://docs.google.com/spreadsheets/d/15HwgpxSYc4Zl809kcebAhLfLYXFuIk8ZP-Qvk3yVV8Q/edit#gid=627737737
# Acceptance criteria
- [ ] Look into the errors in the report
- [ ] Create a ticket to fix each of the errors
- [ ] Flesh out each ticket with: description and dev tasks
# List of errors:
- URLs with a temporary redirect
- issues with unminified JavaScript and CSS files
- pages have too much text within the title tags
- images don't have alt attributes
- external links are broken
- pages have low text-HTML ratio
- pages have a low word count
- links on HTTPS pages leads to HTTP page
- pages don't have an h1 heading
- pages have too many parameters in their URLs
- Sitemap.xml not indicated in robots.txt
| True | SPIKE - Investigate errors in SEO report - # Description
Investigate errors (cell b12 to b22) in the Grassriots report
Link to report [https://docs.google.com/spreadsheets/d/15HwgpxSYc4Zl809kcebAhLfLYXFuIk8ZP-Qvk3yVV8Q/edit#gid=627737737](url).
# Acceptance criteria
- [ ] Look into the errors in the report
- [ ] Create a ticket to fix each of the errors
- [ ] Flesh out each ticket with: description and dev tasks
# List of errors:
- URLs with a temporary redirect
- issues with unminified JavaScript and CSS files
- pages have too much text within the title tags
- images don't have alt attributes
- external links are broken
- pages have low text-HTML ratio
- pages have a low word count
- links on HTTPS pages leads to HTTP page
- pages don't have an h1 heading
- pages have too many parameters in their URLs
- Sitemap.xml not indicated in robots.txt
| main | spike investigate errors in seo report description investigate errors cell to in the grassriots report link to report url acceptance criteria look into the errors in the report create a ticket to fix each of the errors flesh out each ticket with description and dev tasks list of errors urls with a temporary redirect issues with unminified javascript and css files pages have too much text within the title tags images don t have alt attributes external links are broken pages have low text html ratio pages have a low word count links on https pages leads to http page pages don t have an heading pages have too many parameters in their urls sitemap xml not indicated in robots txt | 1 |
48,020 | 12,135,038,773 | IssuesEvent | 2020-04-23 11:50:32 | matrix-construct/construct | https://api.github.com/repos/matrix-construct/construct | closed | Configure fails inside nix-shell --pure | bug build | Using the following `shell.nix`
```
with import <nixpkgs> {};
stdenv.mkDerivation {
name = "construct";
buildInputs = [
pkgs.git
pkgs.automake
pkgs.autoconf
pkgs.libtool
pkgs.libsodium
pkgs.openssl
pkgs.file
pkgs.boost
];
}
``` | 1.0 | Configure fails inside nix-shell --pure - Using the following `shell.nix`
```
with import <nixpkgs> {};
stdenv.mkDerivation {
name = "construct";
buildInputs = [
pkgs.git
pkgs.automake
pkgs.autoconf
pkgs.libtool
pkgs.libsodium
pkgs.openssl
pkgs.file
pkgs.boost
];
}
``` | non_main | configure fails inside nix shell pure using the following shell nix with import stdenv mkderivation name construct buildinputs pkgs git pkgs automake pkgs autoconf pkgs libtool pkgs libsodium pkgs openssl pkgs file pkgs boost | 0 |
4,081 | 19,269,087,758 | IssuesEvent | 2021-12-10 01:38:20 | aws/aws-sam-cli | https://api.github.com/repos/aws/aws-sam-cli | closed | SAM deploy doesn't detect Type change for Step Function | area/deploy stage/needs-investigation maintainer/need-response | <!-- Make sure we don't have an existing Issue that reports the bug you are seeing (both open and closed).
If you do find an existing Issue, re-open or add a comment to that Issue instead of creating a new one. -->
### Description:
<!-- Briefly describe the bug you are facing.-->
When changing the `Type` property of an `AWS::Serverless::StateMachine` resource from `EXPRESS` to `STANDARD` or vice versa, the change is not detected or deployed to AWS when `sam deploy` is called.
### Steps to reproduce:
<!-- Provide steps to replicate.-->
Create a template with an `AWS::Serverless::StateMachine` resource whose `Type` property is set to either `EXPRESS` or `STANDARD`, and deploy it to AWS. Then update `Type` to the opposite value and deploy again.
### Observed result:
<!-- Please provide command output with `--debug` flag set.-->
Error: No changes to deploy. Stack _Name_ is up to date
State machine is still defined with previous type
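As a sanity check, the type that actually ended up deployed can be read back with boto3 (a hedged sketch; the state machine ARN below is a placeholder taken from the deployed stack's resources):
```python
# Hedged sketch: read back the deployed state machine type.
# The ARN is a placeholder; use the one from the deployed stack's resources.
import boto3

sfn = boto3.client("stepfunctions", region_name="eu-central-1")
info = sfn.describe_state_machine(
    stateMachineArn="arn:aws:states:eu-central-1:123456789012:stateMachine:Example"
)
print(info["type"])  # stays at the old value ("STANDARD" or "EXPRESS") after the no-op deploy
```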
### Expected result:
<!-- Describe what you expected.-->
State machine defined with new type
### Additional environment details (Ex: Windows, Mac, Amazon Linux etc)
1. OS: macOS Catalina 10.15.7 (19H15)
2. `sam --version`: SAM CLI, version 1.11.0
`Add --debug flag to command you are running`
| True | SAM deploy doesn't detect Type change for Step Function - <!-- Make sure we don't have an existing Issue that reports the bug you are seeing (both open and closed).
If you do find an existing Issue, re-open or add a comment to that Issue instead of creating a new one. -->
### Description:
<!-- Briefly describe the bug you are facing.-->
When changing the `Type` property of an `AWS::Serverless::StateMachine` resource from `EXPRESS` to `STANDARD` or vice versa, the change is not detected or deployed to AWS when `sam deploy` is called.
### Steps to reproduce:
<!-- Provide steps to replicate.-->
Create a template with an `AWS::Serverless::StateMachine` resource whose `Type` property is set to either `EXPRESS` or `STANDARD`, and deploy it to AWS. Then update `Type` to the opposite value and deploy again.
### Observed result:
<!-- Please provide command output with `--debug` flag set.-->
Error: No changes to deploy. Stack _Name_ is up to date
State machine is still defined with previous type
### Expected result:
<!-- Describe what you expected.-->
State machine defined with new type
### Additional environment details (Ex: Windows, Mac, Amazon Linux etc)
1. OS: macOS Catalina 10.15.7 (19H15)
2. `sam --version`: SAM CLI, version 1.11.0
`Add --debug flag to command you are running`
| main | sam deploy doesn t detect type change for step function make sure we don t have an existing issue that reports the bug you are seeing both open and closed if you do find an existing issue re open or add a comment to that issue instead of creating a new one description when changing the type parameter for type aws serverless statemachine from express to standard or vice versa the change is not detected or deployed to aws when sam deploy is called steps to reproduce create template with type aws serverless statemachine and type parameter set to either express or standard and deploy to aws then update the type to the opposite value and deploy again observed result error no changes to deploy stack name is up to date state machine is still defined with previous type expected result state machine defined with new type additional environment details ex windows mac amazon linux etc os macos catalina sam version sam cli version add debug flag to command you are running | 1 |
65,190 | 19,253,871,294 | IssuesEvent | 2021-12-09 09:13:04 | vector-im/element-web | https://api.github.com/repos/vector-im/element-web | closed | Poll Create dialog in high contrast theme: delete answer button is a circle | T-Defect S-Minor A-Appearance A-Themes-Official O-Occasional A-Polls Z-Labs | ### Steps to reproduce
1. Turn on Polls in Labs settings
2. Choose the high contrast theme
3. Create a poll
### Outcome
#### What did you expect?
The delete answer buttons should look like "X".
#### What happened instead?
Instead they are filled circles:

### Operating system
Ubuntu 21.10
### Browser information
Firefox 94.0 (64-bit)
### URL for webapp
https://develop.element.io
### Application version
Element version: 10e121a5143f-react-6d3865bdd542-js-b33b01df0f32 Olm version: 3.2.3
### Homeserver
matrix.org
### Will you send logs?
No | 1.0 | Poll Create dialog in high contrast theme: delete answer button is a circle - ### Steps to reproduce
1. Turn on Polls in Labs settings
2. Choose the high contrast theme
3. Create a poll
### Outcome
#### What did you expect?
The delete answer buttons should look like "X".
#### What happened instead?
Instead they are filled circles:

### Operating system
Ubuntu 21.10
### Browser information
Firefox 94.0 (64-bit)
### URL for webapp
https://develop.element.io
### Application version
Element version: 10e121a5143f-react-6d3865bdd542-js-b33b01df0f32 Olm version: 3.2.3
### Homeserver
matrix.org
### Will you send logs?
No | non_main | poll create dialog in high contrast theme delete answer button is a circle steps to reproduce turn on polls in labs settings choose the high contrast theme create a poll outcome what did you expect the delete answer buttons should look like x what happened instead instead they are filled circles operating system ubuntu browser information firefox bit url for webapp application version element version react js olm version homeserver matrix org will you send logs no | 0 |
424,439 | 12,311,244,371 | IssuesEvent | 2020-05-12 12:05:09 | jenkins-x/jx | https://api.github.com/repos/jenkins-x/jx | closed | Should the initial `jx boot` be run incluster? | area/boot kind/enhancement lifecycle/rotten priority/important-soon | ### Summary
We have a number of issues around helm/git/jx versions being out of sync, as well as issues connecting to vault when the letsencrypt staging server is selected. So I'd like to open a discussion on whether the initial `jx boot` command could/should be run as a Job inside the cluster?
* Would need to work out what to do with secrets
* jx-requirements.yml
* would remove the interactive nature of jx boot
* could stop exposing vault to the public
| 1.0 | Should the initial `jx boot` be run incluster? - ### Summary
We have a number of issues around helm/git/jx versions being out of sync, as well as issues connecting to vault when the letsencrypt staging server is selected. So I'd like to open a discussion on whether the initial `jx boot` command could/should be run as a Job inside the cluster?
* Would need to work out what to do with secrets
* jx-requirements.yml
* would remove the interactive nature of jx boot
* could stop exposing vault to the public
| non_main | should the initial jx boot be run incluster summary we have a number of issues around helm git jx versions being out of sync as well as issues connecting to vault when the letsencrypt staging server is selected so i d like to open a discussion on whether the initial jx boot command could should be run as a job inside the cluster would need to work out what to do with secrets jx requirements yml would remove the interactive nature of jx boot could stop exposing vault to the public | 0 |
3,446 | 13,212,224,178 | IssuesEvent | 2020-08-16 05:31:41 | ansible/ansible | https://api.github.com/repos/ansible/ansible | closed | GitHub Enterprise support | affects_2.9 bot_closed collection collection:community.general feature has_pr module needs_collection_redirect needs_maintainer source_control support:community | <!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Describe the new feature/improvement briefly below -->
Add GitHub Enterprise support to the GitHub modules that don't currently support it by adding a single optional parameter for the GitHub API URL which defaults to non-enterprise GitHub, similar to how this is handled in the `github_webhook` module.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
- `github_deploy_key`
- `github_key`
- `github_issue`
- `github_release`
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
`github_issue` and `github_release` are written with `github3` library, which does not support GitHub Enterprise. Proposing a refactor or re-implementation to use `PyGithub` or `requests` API.
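As an illustration of the proposed direction, a minimal PyGithub sketch that points at an Enterprise instance; the hostname and token are placeholders, and Enterprise REST endpoints conventionally live under `/api/v3`:
```python
# Illustrative sketch of a PyGithub call against GitHub Enterprise.
# Hostname, token and issue number are placeholders.
from github import Github

# For github.com the default base_url is "https://api.github.com".
gh = Github(base_url="https://github.company.com/api/v3", login_or_token="my-token")

repo = gh.get_repo("ansible/ansible")
issue = repo.get_issue(number=59560)
print(issue.state)  # e.g. "open" or "closed"
```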
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: get issue status from github repository
github_issue:
action: get_status
organization: ansible
repo: ansible
issue: 59560
github_url: https://github.company.com
```
<!--- HINT: You can also paste gist.github.com links for larger files -->
| True | GitHub Enterprise support - <!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Describe the new feature/improvement briefly below -->
Add GitHub Enterprise support to the GitHub modules that don't currently support it by adding a single optional parameter for the GitHub API URL which defaults to non-enterprise GitHub, similar to how this is handled in the `github_webhook` module.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
- `github_deploy_key`
- `github_key`
- `github_issue`
- `github_release`
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
`github_issue` and `github_release` are written with `github3` library, which does not support GitHub Enterprise. Proposing a refactor or re-implementation to use `PyGithub` or `requests` API.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: get issue status from github repository
github_issue:
action: get_status
organization: ansible
repo: ansible
issue: 59560
github_url: https://github.company.com
```
<!--- HINT: You can also paste gist.github.com links for larger files -->
| main | github enterprise support summary add github enterprise support to the github modules that don t currently support it by adding a single optional parameter for the github api url which defaults to non enterprise github similar to how this is handled in the github webhook module issue type feature idea component name github deploy key github key github issue github release additional information github issue and github release are written with library which does not support github enterprise proposing a refactor or re implementation to use pygithub or requests api yaml name get issue status from github repository github issue action get status organization ansible repo ansible issue github url | 1 |
3,908 | 3,604,630,806 | IssuesEvent | 2016-02-03 23:43:54 | sul-dlss/SearchWorks | https://api.github.com/repos/sul-dlss/SearchWorks | closed | Add sidebar top/bottom nav buttons to search results | in progress usability | Same as on the record view, but only top/bottom links. They should be in the DOM somewhere around the pagination.

| True | Add sidebar top/bottom nav buttons to search results - Same as on the record view, but only top/bottom links. They should be in the DOM somewhere around the pagination.

| non_main | add sidebar top bottom nav buttons to search results same as on the record view but only top bottom links they should be in the dom somewhere around the pagination | 0 |
79,028 | 15,108,196,318 | IssuesEvent | 2021-02-08 16:21:27 | spcl/dace | https://api.github.com/repos/spcl/dace | opened | Pipeline scope generates wrong code if ceiling is used in one of the pipeline range | bug codegen fpga | **Describe the bug**
If ceiling is used in one of the pipeline ranges, then the FPGA backend generates code without replacing `ceiling` with `int_ceil`. This causes compilation to fail.
**Expected behavior**
Generate code with `int_ceil` instead of `ceiling` | 1.0 | Pipeline scope generates wrong code if ceiling is used in one of the pipeline range - **Describe the bug**
If ceiling is used in one of the pipeline ranges, then the FPGA backend generates code without replacing `ceiling` with `int_ceil`. This causes compilation to fail.
**Expected behavior**
Generate code with `int_ceil` instead of `ceiling` | non_main | pipeline scope generates wrong code if ceiling is used in one of the pipeline range describe the bug if ceiling is used in one of the pipeline ranges then the fpga backend will generate code without replacing ceiling with int ceil this will let compilation fail expected behavior generate code with int ceil instead of ceiling | 0 |
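For context on why the emitted symbol matters in the row above: `ceiling` is a symbolic function with no definition in the generated FPGA kernel, whereas integer ceiling division can be expressed directly in the target code. A minimal sketch of the equivalence — generic integer math, not DaCe's actual `int_ceil` implementation:

```python
import math

def int_ceil(a: int, b: int) -> int:
    """Ceiling of a / b using only integer arithmetic, as generated code would need."""
    return -(-a // b)

# Both agree for positive operands, but only the integer form survives code
# generation, since a bare `ceiling(...)` has no counterpart in the emitted kernel.
assert int_ceil(7, 2) == math.ceil(7 / 2) == 4
```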
186,848 | 21,992,937,384 | IssuesEvent | 2022-05-26 01:12:36 | coffeehorn/MaxwellBurdick | https://api.github.com/repos/coffeehorn/MaxwellBurdick | reopened | WS-2022-0008 (Medium) detected in node-forge-0.10.0.tgz | security vulnerability | ## WS-2022-0008 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>node-forge-0.10.0.tgz</b></p></summary>
<p>JavaScript implementations of network transports, cryptography, ciphers, PKI, message digests, and various utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-forge/-/node-forge-0.10.0.tgz">https://registry.npmjs.org/node-forge/-/node-forge-0.10.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/node-forge/package.json</p>
<p>
Dependency Hierarchy:
- gatsby-2.29.1.tgz (Root Library)
- webpack-dev-server-3.11.0.tgz
- selfsigned-1.10.8.tgz
- :x: **node-forge-0.10.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/coffeehorn/MaxwellBurdick/commit/9e989a0a71bab377b13ec7c3e36d92b3b550f1d4">9e989a0a71bab377b13ec7c3e36d92b3b550f1d4</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The forge.debug API had a potential prototype pollution issue if called with untrusted input. The API was only used for internal debug purposes in a safe way and never documented or advertised. It is suspected that uses of this API, if any exist, would likely not have used untrusted inputs in a vulnerable way.
<p>Publish Date: 2022-01-08
<p>URL: <a href=https://github.com/digitalbazaar/forge/commit/51228083550dde97701ac8e06c629a5184117562>WS-2022-0008</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-5rrq-pxf6-6jx5">https://github.com/advisories/GHSA-5rrq-pxf6-6jx5</a></p>
<p>Release Date: 2022-01-08</p>
<p>Fix Resolution (node-forge): 1.0.0</p>
<p>Direct dependency fix Resolution (gatsby): 4.12.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | WS-2022-0008 (Medium) detected in node-forge-0.10.0.tgz - ## WS-2022-0008 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>node-forge-0.10.0.tgz</b></p></summary>
<p>JavaScript implementations of network transports, cryptography, ciphers, PKI, message digests, and various utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-forge/-/node-forge-0.10.0.tgz">https://registry.npmjs.org/node-forge/-/node-forge-0.10.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/node-forge/package.json</p>
<p>
Dependency Hierarchy:
- gatsby-2.29.1.tgz (Root Library)
- webpack-dev-server-3.11.0.tgz
- selfsigned-1.10.8.tgz
- :x: **node-forge-0.10.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/coffeehorn/MaxwellBurdick/commit/9e989a0a71bab377b13ec7c3e36d92b3b550f1d4">9e989a0a71bab377b13ec7c3e36d92b3b550f1d4</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The forge.debug API had a potential prototype pollution issue if called with untrusted input. The API was only used for internal debug purposes in a safe way and never documented or advertised. It is suspected that uses of this API, if any exist, would likely not have used untrusted inputs in a vulnerable way.
<p>Publish Date: 2022-01-08
<p>URL: <a href=https://github.com/digitalbazaar/forge/commit/51228083550dde97701ac8e06c629a5184117562>WS-2022-0008</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-5rrq-pxf6-6jx5">https://github.com/advisories/GHSA-5rrq-pxf6-6jx5</a></p>
<p>Release Date: 2022-01-08</p>
<p>Fix Resolution (node-forge): 1.0.0</p>
<p>Direct dependency fix Resolution (gatsby): 4.12.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | ws medium detected in node forge tgz ws medium severity vulnerability vulnerable library node forge tgz javascript implementations of network transports cryptography ciphers pki message digests and various utilities library home page a href path to dependency file package json path to vulnerable library node modules node forge package json dependency hierarchy gatsby tgz root library webpack dev server tgz selfsigned tgz x node forge tgz vulnerable library found in head commit a href found in base branch main vulnerability details the forge debug api had a potential prototype pollution issue if called with untrusted input the api was only used for internal debug purposes in a safe way and never documented or advertised it is suspected that uses of this api if any exist would likely not have used untrusted inputs in a vulnerable way publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required high user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution node forge direct dependency fix resolution gatsby step up your open source security game with whitesource | 0 |
5,436 | 27,245,882,265 | IssuesEvent | 2023-02-22 01:56:25 | VA-Explorer/va_explorer | https://api.github.com/repos/VA-Explorer/va_explorer | opened | Change date used to report VAs | Type: Maintainance good first issue Domain: Frontend | **What is the expected state?**
When I view VAs as any type of user I expect to see VAs dated via the date of interview field (`Id10012`).
**What is the actual state?**
When I view VAs as any type of user I see VAs dated via the `submissiondate` field
**Relevant context**
End users report that `submissiondate` is unreliable as it is
1. generated automatically via VA collection software and sometimes that process fails
2. can be inaccurate due to batch submission of multiple VAs at the same time that were conducted on days other than `submissiondate`
and that date of interview `Id10012` is required to be filled out before submission and reflects the actual date of the VA being filled out. Due to 1, we also observe 'dk' as the `submissiondate` sometimes and this issue would fix that.
| True | Change date used to report VAs - **What is the expected state?**
When I view VAs as any type of user I expect to see VAs dated via the date of interview field (`Id10012`).
**What is the actual state?**
When I view VAs as any type of user I see VAs dated via the `submissiondate` field
**Relevant context**
End users report that `submissiondate` is unreliable as it is
1. generated automatically via VA collection software and sometimes that process fails
2. can be inaccurate due to batch submission of multiple VAs at the same time that were conducted on days other than `submissiondate`
and that date of interview `Id10012` is required to be filled out before submission and reflects the actual date of the VA being filled out. Due to 1, we also observe 'dk' as the `submissiondate` sometimes and this issue would fix that.
| main | change date used to report vas what is the expected state when i view vas as any type of user i expect to see vas dated via the date of interview field what is the actual state when i view vas as any type of user i see vas dated via the submissiondate field relevant context endusers report that submissiondate is unreliable as it is generated automatically via va collection software and sometimes that process fails can be inaccurate due to batch submission of multiple vas at the same time that were conducted on days other than submissiondate and that date of interview is required to be filled out before submission and reflects the actual date of the va being filled out due to we also observe dk as the submissiondate sometimes and this issue would fix that | 1 |
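A minimal sketch of the date-selection logic the row above asks for, assuming a VA record object exposing the two fields named in the issue; the helper and its fallback behaviour are hypothetical and not VA Explorer's actual model code:

```python
def reporting_date(va):
    """Prefer the interview date (Id10012) over the auto-generated submissiondate."""
    interview = getattr(va, "Id10012", None)
    if interview and interview != "dk":
        return interview
    # Fall back only when the interview date is missing or unusable,
    # since submissiondate can itself be 'dk' or reflect a batch upload date.
    submitted = getattr(va, "submissiondate", None)
    return submitted if submitted and submitted != "dk" else None
```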
5,345 | 26,961,579,260 | IssuesEvent | 2023-02-08 18:37:24 | aws/aws-sam-cli | https://api.github.com/repos/aws/aws-sam-cli | closed | Container ID 1230228 cannot be mapped to a host ID; aws-sam-cli-build-image-python3.7 | area/local/invoke maintainer/need-followup | ### Description
The initial error happens during `sam build --use-container` inside the Bitbucket pipeline:
```
RuntimeError: Container does not exist. Cannot get logs for this container
```
In the Bitbucket pipeline I get following error output, when I run the command `docker run amazon/aws-sam-cli-build-image-python3.7 echo 'hello world'`.
```
6b6e3df282b0: Verifying Checksum
6b6e3df282b0: Download complete
6b6e3df282b0: Pull complete
failed to register layer: Error processing tar file(exit status 1): Container ID 1230228 cannot be mapped to a host ID
```
### Steps to reproduce
Run `docker run amazon/aws-sam-cli-build-image-python3.7 echo 'hello world'` inside the bitbucket pipeline.
### Observed result
```
6b6e3df282b0: Verifying Checksum
6b6e3df282b0: Download complete
6b6e3df282b0: Pull complete
failed to register layer: Error processing tar file(exit status 1): Container ID 1230228 cannot be mapped to a host ID
```
From my initial analysis it seems to be related to this error and solution: https://circleci.com/docs/2.0/high-uid-error/.
>The error is caused by a userns remapping failure. CircleCI runs Docker containers with userns enabled in order to securely run customers’ containers. The host machine is configured with a valid UID/GID for remapping. This UID/GID must be in the range of 0 - 65535.
Though this can only be applied by the docker image owner.
### Expected result
A working docker pull.
### Additional environment details (Ex: Windows, Mac, Amazon Linux etc)
1. OS: python:3.7-buster (https://hub.docker.com/_/python)
2. `sam --version`: 0.53.0 (inside aws-sam-cli-build-image-python3.7), 'samcliVersion': '1.0.0' (inside python:3.7-buster)
```
docker history amazon/aws-sam-cli-build-image-python3.7
IMAGE CREATED CREATED BY SIZE COMMENT
9afb7fbe0b02 4 weeks ago /bin/sh -c #(nop) COPY file:ca68a30f0c2f97ee… 101MB
<missing> 4 weeks ago |1 SAM_CLI_VERSION=0.53.0 /bin/sh -c pip3 in… 457MB
<missing> 4 weeks ago /bin/sh -c #(nop) ENV LANG=en_US.UTF-8 0B
<missing> 4 weeks ago /bin/sh -c #(nop) ENV PATH=/var/lang/bin:/u… 0B
<missing> 4 weeks ago |1 SAM_CLI_VERSION=0.53.0 /bin/sh -c rm samc… 57.4MB
<missing> 4 weeks ago |1 SAM_CLI_VERSION=0.53.0 /bin/sh -c curl -L… 172MB
<missing> 4 weeks ago /bin/sh -c #(nop) ARG SAM_CLI_VERSION 0B
<missing> 4 weeks ago /bin/sh -c curl "https://awscli.amazonaws.co… 298kB
<missing> 4 weeks ago /bin/sh -c chmod 1777 /tmp && /usr/bin/pyt… 644MB
```
`Add --debug flag to command you are running`
```
Fetching amazon/aws-sam-cli-build-image-python3.7 Docker container image..............................................................................................................................................................................................................................................................................................................................................................................................................................................
Mounting /opt/atlassian/pipelines/agent/build/build/**reducted** as /tmp/samcli/source:ro,delegated inside runtime container
Container was not created. Skipping deletion
Sending Telemetry: {'metrics': [{'commandRun': {'awsProfileProvided': False, 'debugFlagProvided': True, 'region': '', 'commandName': 'sam build', 'duration': 35126, 'exitReason': 'RuntimeError', 'exitCode': 255, 'requestId': '0d920e92-1049-402b-a78d-6d2140eed13c', 'installationId': '1e1cbc8e-2ae3-42a0-95f8-d96eb5680fb6', 'sessionId': '8c001b5a-94b8-4456-a433-d566899baddc', 'executionEnvironment': 'CLI', 'pyversion': '3.7.8', 'samcliVersion': '1.0.0'}}]}
HTTPSConnectionPool(host='aws-serverless-tools-telemetry.us-west-2.amazonaws.com', port=443): Read timed out. (read timeout=0.1)
Traceback (most recent call last):
File "/usr/local/bin/sam", line 8, in <module>
sys.exit(cli())
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/click/decorators.py", line 73, in new_func
return ctx.invoke(f, obj, *args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/samcli/lib/telemetry/metrics.py", line 96, in wrapped
raise exception # pylint: disable=raising-bad-type
File "/usr/local/lib/python3.7/site-packages/samcli/lib/telemetry/metrics.py", line 62, in wrapped
return_value = func(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/samcli/commands/build/command.py", line 129, in cli
mode,
File "/usr/local/lib/python3.7/site-packages/samcli/commands/build/command.py", line 194, in do_cli
artifacts = builder.build()
File "/usr/local/lib/python3.7/site-packages/samcli/lib/build/app_builder.py", line 117, in build
function.metadata)
File "/usr/local/lib/python3.7/site-packages/samcli/lib/build/app_builder.py", line 271, in _build_function
options)
File "/usr/local/lib/python3.7/site-packages/samcli/lib/build/app_builder.py", line 369, in _build_function_on_container
container.wait_for_logs(stdout=stdout_stream, stderr=stderr_stream)
File "/usr/local/lib/python3.7/site-packages/samcli/local/docker/container.py", line 197, in wait_for_logs
raise RuntimeError("Container does not exist. Cannot get logs for this container")
RuntimeError: Container does not exist. Cannot get logs for this container
``` | True | Container ID 1230228 cannot be mapped to a host ID; aws-sam-cli-build-image-python3.7 - ### Description
The initial error happens during `sam build --use-container` inside the Bitbucket pipeline:
```
RuntimeError: Container does not exist. Cannot get logs for this container
```
In the Bitbucket pipeline I get following error output, when I run the command `docker run amazon/aws-sam-cli-build-image-python3.7 echo 'hello world'`.
```
6b6e3df282b0: Verifying Checksum
6b6e3df282b0: Download complete
6b6e3df282b0: Pull complete
failed to register layer: Error processing tar file(exit status 1): Container ID 1230228 cannot be mapped to a host ID
```
### Steps to reproduce
Run `docker run amazon/aws-sam-cli-build-image-python3.7 echo 'hello world'` inside the bitbucket pipeline.
### Observed result
```
6b6e3df282b0: Verifying Checksum
6b6e3df282b0: Download complete
6b6e3df282b0: Pull complete
failed to register layer: Error processing tar file(exit status 1): Container ID 1230228 cannot be mapped to a host ID
```
From my initial analysis it seems to be related to this error and solution: https://circleci.com/docs/2.0/high-uid-error/.
>The error is caused by a userns remapping failure. CircleCI runs Docker containers with userns enabled in order to securely run customers’ containers. The host machine is configured with a valid UID/GID for remapping. This UID/GID must be in the range of 0 - 65535.
Though this can only be applied by the docker image owner.
### Expected result
A working docker pull.
### Additional environment details (Ex: Windows, Mac, Amazon Linux etc)
1. OS: python:3.7-buster (https://hub.docker.com/_/python)
2. `sam --version`: 0.53.0 (inside aws-sam-cli-build-image-python3.7), 'samcliVersion': '1.0.0' (inside python:3.7-buster)
```
docker history amazon/aws-sam-cli-build-image-python3.7
IMAGE CREATED CREATED BY SIZE COMMENT
9afb7fbe0b02 4 weeks ago /bin/sh -c #(nop) COPY file:ca68a30f0c2f97ee… 101MB
<missing> 4 weeks ago |1 SAM_CLI_VERSION=0.53.0 /bin/sh -c pip3 in… 457MB
<missing> 4 weeks ago /bin/sh -c #(nop) ENV LANG=en_US.UTF-8 0B
<missing> 4 weeks ago /bin/sh -c #(nop) ENV PATH=/var/lang/bin:/u… 0B
<missing> 4 weeks ago |1 SAM_CLI_VERSION=0.53.0 /bin/sh -c rm samc… 57.4MB
<missing> 4 weeks ago |1 SAM_CLI_VERSION=0.53.0 /bin/sh -c curl -L… 172MB
<missing> 4 weeks ago /bin/sh -c #(nop) ARG SAM_CLI_VERSION 0B
<missing> 4 weeks ago /bin/sh -c curl "https://awscli.amazonaws.co… 298kB
<missing> 4 weeks ago /bin/sh -c chmod 1777 /tmp && /usr/bin/pyt… 644MB
```
`Add --debug flag to command you are running`
```
Fetching amazon/aws-sam-cli-build-image-python3.7 Docker container image..............................................................................................................................................................................................................................................................................................................................................................................................................................................
Mounting /opt/atlassian/pipelines/agent/build/build/**reducted** as /tmp/samcli/source:ro,delegated inside runtime container
Container was not created. Skipping deletion
Sending Telemetry: {'metrics': [{'commandRun': {'awsProfileProvided': False, 'debugFlagProvided': True, 'region': '', 'commandName': 'sam build', 'duration': 35126, 'exitReason': 'RuntimeError', 'exitCode': 255, 'requestId': '0d920e92-1049-402b-a78d-6d2140eed13c', 'installationId': '1e1cbc8e-2ae3-42a0-95f8-d96eb5680fb6', 'sessionId': '8c001b5a-94b8-4456-a433-d566899baddc', 'executionEnvironment': 'CLI', 'pyversion': '3.7.8', 'samcliVersion': '1.0.0'}}]}
HTTPSConnectionPool(host='aws-serverless-tools-telemetry.us-west-2.amazonaws.com', port=443): Read timed out. (read timeout=0.1)
Traceback (most recent call last):
File "/usr/local/bin/sam", line 8, in <module>
sys.exit(cli())
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/click/decorators.py", line 73, in new_func
return ctx.invoke(f, obj, *args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/samcli/lib/telemetry/metrics.py", line 96, in wrapped
raise exception # pylint: disable=raising-bad-type
File "/usr/local/lib/python3.7/site-packages/samcli/lib/telemetry/metrics.py", line 62, in wrapped
return_value = func(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/samcli/commands/build/command.py", line 129, in cli
mode,
File "/usr/local/lib/python3.7/site-packages/samcli/commands/build/command.py", line 194, in do_cli
artifacts = builder.build()
File "/usr/local/lib/python3.7/site-packages/samcli/lib/build/app_builder.py", line 117, in build
function.metadata)
File "/usr/local/lib/python3.7/site-packages/samcli/lib/build/app_builder.py", line 271, in _build_function
options)
File "/usr/local/lib/python3.7/site-packages/samcli/lib/build/app_builder.py", line 369, in _build_function_on_container
container.wait_for_logs(stdout=stdout_stream, stderr=stderr_stream)
File "/usr/local/lib/python3.7/site-packages/samcli/local/docker/container.py", line 197, in wait_for_logs
raise RuntimeError("Container does not exist. Cannot get logs for this container")
RuntimeError: Container does not exist. Cannot get logs for this container
``` | main | container id cannot be mapped to a host id aws sam cli build image description the initial error happens during sam build use container inside the bitbucket pipeline runtimeerror container does not exist cannot get logs for this container in the bitbucket pipeline i get following error output when i run the command docker run amazon aws sam cli build image echo hello world verifying checksum download complete pull complete failed to register layer error processing tar file exit status container id cannot be mapped to a host id steps to reproduce run docker run amazon aws sam cli build image echo hello world inside the bitbucket pipeline observed result verifying checksum download complete pull complete failed to register layer error processing tar file exit status container id cannot be mapped to a host id from my initial analysis it seems to be related to this error and solution the error is caused by a userns remapping failure circleci runs docker containers with userns enabled in order to securely run customers’ containers the host machine is configured with a valid uid gid for remapping this uid gid must be in the range of though this can only be applied by the docker image owner expected result a working docker pull additional environment details ex windows mac amazon linux etc os python buster sam version inside aws sam cli build image samcliversion inside python buster docker history amazon aws sam cli build image image created created by size comment weeks ago bin sh c nop copy file … weeks ago sam cli version bin sh c in… weeks ago bin sh c nop env lang en us utf weeks ago bin sh c nop env path var lang bin u… weeks ago sam cli version bin sh c rm samc… weeks ago sam cli version bin sh c curl l… weeks ago bin sh c nop arg sam cli version weeks ago bin sh c curl weeks ago bin sh c chmod tmp usr bin pyt… add debug flag to command you are running fetching amazon aws sam cli build image docker container image mounting opt atlassian pipelines agent build build reducted as tmp samcli source ro delegated inside runtime container container was not created skipping deletion sending telemetry metrics httpsconnectionpool host aws serverless tools telemetry us west amazonaws com port read timed out read timeout traceback most recent call last file usr local bin sam line in sys exit cli file usr local lib site packages click core py line in call return self main args kwargs file usr local lib site packages click core py line in main rv self invoke ctx file usr local lib site packages click core py line in invoke return process result sub ctx command invoke sub ctx file usr local lib site packages click core py line in invoke return ctx invoke self callback ctx params file usr local lib site packages click core py line in invoke return callback args kwargs file usr local lib site packages click decorators py line in new func return ctx invoke f obj args kwargs file usr local lib site packages click core py line in invoke return callback args kwargs file usr local lib site packages samcli lib telemetry metrics py line in wrapped raise exception pylint disable raising bad type file usr local lib site packages samcli lib telemetry metrics py line in wrapped return value func args kwargs file usr local lib site packages samcli commands build command py line in cli mode file usr local lib site packages samcli commands build command py line in do cli artifacts builder build file usr local lib site packages samcli lib build app builder py line in build function metadata file usr local lib 
site packages samcli lib build app builder py line in build function options file usr local lib site packages samcli lib build app builder py line in build function on container container wait for logs stdout stdout stream stderr stderr stream file usr local lib site packages samcli local docker container py line in wait for logs raise runtimeerror container does not exist cannot get logs for this container runtimeerror container does not exist cannot get logs for this container | 1 |
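The userns-remapping failure cited in the row above occurs when a file inside an image layer is owned by a UID or GID above 65535 (here 1230228), which the host cannot remap. A small diagnostic sketch for locating such files from a shell inside the image; this is an illustration under that assumption, not an official fix from the image maintainers:

```python
import os

def files_with_high_ids(root="/", limit=65535):
    """Walk a container filesystem and list files whose owner UID or GID
    falls outside the range that userns remapping can handle."""
    offenders = []
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.lstat(path)
            except OSError:
                continue  # skip files that vanish or are unreadable
            if st.st_uid > limit or st.st_gid > limit:
                offenders.append((path, st.st_uid, st.st_gid))
    return offenders

if __name__ == "__main__":
    for path, uid, gid in files_with_high_ids():
        print(f"{uid}:{gid}  {path}")
```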