| Unnamed: 0 (int64, 1 to 832k) | id (float64, 2.49B to 32.1B) | type (string, 1 class) | created_at (string, length 19) | repo (string, length 7 to 112) | repo_url (string, length 36 to 141) | action (string, 3 classes) | title (string, length 3 to 438) | labels (string, length 4 to 308) | body (string, length 7 to 254k) | index (string, 7 classes) | text_combine (string, length 96 to 254k) | label (string, 2 classes) | text (string, length 96 to 246k) | binary_label (int64, 0 to 1) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 333,295 | 10,120,174,226 | IssuesEvent | 2019-07-31 13:12:58 | prisma/prisma2 | https://api.github.com/repos/prisma/prisma2 | closed | Studio goes blank | bug/2-confirmed kind/bug priority/mid |
For some reason I can't replicate again, the studio view was going blank when I pressed the Tab key to go to the next field either in the "view" or "table list" when creating a new record. I ended the dev session and started again and it didn't happen.
But this one kept on happening, I think it's related:
try typing a query this way:
```js
photon.
```
the moment you press the period, the screen goes all white, am not sure why.
| 1.0 |
Studio goes blank - For some reason I can't replicate again, the studio view was going blank when I pressed the Tab key to go to the next field either in the "view" or "table list" when creating a new record. I ended the dev session and started again and it didn't happen.
But this one kept on happening, I think it's related:
try typing a query this way:
```js
photon.
```
the moment you press the period, the screen goes all white, am not sure why.
| non_main |
studio goes blank for some reason i can t replicate again the studio view was going blank when i pressed the tab key to go to the next field either in the view or table list when creating a new record i ended the dev session and started again and it didn t happen but this one kept on happening i think it s related try typing a query this way js photon the moment you press the period the screen goes all white am not sure why
| 0 |
| 29,201 | 11,726,314,100 | IssuesEvent | 2020-03-10 14:20:32 | jhdcruz/BlackPearl-website | https://api.github.com/repos/jhdcruz/BlackPearl-website | closed | WS-2020-0042 (Medium) detected in acorn-6.4.0.tgz | security vulnerability |
## WS-2020-0042 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>acorn-6.4.0.tgz</b></p></summary>
<p>ECMAScript parser</p>
<p>Library home page: <a href="https://registry.npmjs.org/acorn/-/acorn-6.4.0.tgz">https://registry.npmjs.org/acorn/-/acorn-6.4.0.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/BlackPearl-website/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/BlackPearl-website/node_modules/webpack/node_modules/acorn/package.json</p>
<p>
Dependency Hierarchy:
- webpack-4.42.0.tgz (Root Library)
- :x: **acorn-6.4.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/jhdcruz/BlackPearl-website/commit/41e2942adf6889619dc058aa72bf0ca9b176bb42">41e2942adf6889619dc058aa72bf0ca9b176bb42</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
acorn is vulnerable to REGEX DoS. A regex of the form /[x-\ud800]/u causes the parser to enter an infinite loop. attackers may leverage the vulnerability leading to a Denial of Service since the string is not valid UTF16 and it results in it being sanitized before reaching the parser.
<p>Publish Date: 2020-03-08
<p>URL: <a href=https://github.com/acornjs/acorn/commit/b5c17877ac0511e31579ea31e7650ba1a5871e51>WS-2020-0042</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: N/A
- Attack Complexity: N/A
- Privileges Required: N/A
- User Interaction: N/A
- Scope: N/A
- Impact Metrics:
- Confidentiality Impact: N/A
- Integrity Impact: N/A
- Availability Impact: N/A
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1488">https://www.npmjs.com/advisories/1488</a></p>
<p>Release Date: 2020-03-08</p>
<p>Fix Resolution: 7.1.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
| True |
WS-2020-0042 (Medium) detected in acorn-6.4.0.tgz - ## WS-2020-0042 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>acorn-6.4.0.tgz</b></p></summary>
<p>ECMAScript parser</p>
<p>Library home page: <a href="https://registry.npmjs.org/acorn/-/acorn-6.4.0.tgz">https://registry.npmjs.org/acorn/-/acorn-6.4.0.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/BlackPearl-website/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/BlackPearl-website/node_modules/webpack/node_modules/acorn/package.json</p>
<p>
Dependency Hierarchy:
- webpack-4.42.0.tgz (Root Library)
- :x: **acorn-6.4.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/jhdcruz/BlackPearl-website/commit/41e2942adf6889619dc058aa72bf0ca9b176bb42">41e2942adf6889619dc058aa72bf0ca9b176bb42</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
acorn is vulnerable to REGEX DoS. A regex of the form /[x-\ud800]/u causes the parser to enter an infinite loop. attackers may leverage the vulnerability leading to a Denial of Service since the string is not valid UTF16 and it results in it being sanitized before reaching the parser.
<p>Publish Date: 2020-03-08
<p>URL: <a href=https://github.com/acornjs/acorn/commit/b5c17877ac0511e31579ea31e7650ba1a5871e51>WS-2020-0042</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: N/A
- Attack Complexity: N/A
- Privileges Required: N/A
- User Interaction: N/A
- Scope: N/A
- Impact Metrics:
- Confidentiality Impact: N/A
- Integrity Impact: N/A
- Availability Impact: N/A
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1488">https://www.npmjs.com/advisories/1488</a></p>
<p>Release Date: 2020-03-08</p>
<p>Fix Resolution: 7.1.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
| non_main |
ws medium detected in acorn tgz ws medium severity vulnerability vulnerable library acorn tgz ecmascript parser library home page a href path to dependency file tmp ws scm blackpearl website package json path to vulnerable library tmp ws scm blackpearl website node modules webpack node modules acorn package json dependency hierarchy webpack tgz root library x acorn tgz vulnerable library found in head commit a href vulnerability details acorn is vulnerable to regex dos a regex of the form u causes the parser to enter an infinite loop attackers may leverage the vulnerability leading to a denial of service since the string is not valid and it results in it being sanitized before reaching the parser publish date url a href cvss score details base score metrics exploitability metrics attack vector n a attack complexity n a privileges required n a user interaction n a scope n a impact metrics confidentiality impact n a integrity impact n a availability impact n a for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0 |
| 322,034 | 23,886,230,710 | IssuesEvent | 2022-09-08 07:54:00 | puppeteer/puppeteer | https://api.github.com/repos/puppeteer/puppeteer | closed | How to select the current element using `$eval` and/or `$$eval`? | needs-feedback documentation |
I know I can do this by [$()](https://github.com/puppeteer/puppeteer/blob/v16.2.0/docs/api/puppeteer.elementhandle._.md) and [$$()](https://github.com/puppeteer/puppeteer/blob/v16.2.0/docs/api/puppeteer.elementhandle.__.md). But do I have to specify something other than the current element as the `selector` parameter of [$eval()](https://github.com/puppeteer/puppeteer/blob/v16.2.0/docs/api/puppeteer.elementhandle._eval.md) and [$$eval()](https://github.com/puppeteer/puppeteer/blob/v16.2.0/docs/api/puppeteer.elementhandle.__eval.md)?
I haven't found the _css selector_ that represents current element.
It'd be great if puppeteer could provide a workaround.
For example, making `selector` optional and thus selecting the current element by default.
```ts
tweetHandle.$eval(node => node.innerText);
```
| 1.0 |
How to select the current element using `$eval` and/or `$$eval`? - I know I can do this by [$()](https://github.com/puppeteer/puppeteer/blob/v16.2.0/docs/api/puppeteer.elementhandle._.md) and [$$()](https://github.com/puppeteer/puppeteer/blob/v16.2.0/docs/api/puppeteer.elementhandle.__.md). But do I have to specify something other than the current element as the `selector` parameter of [$eval()](https://github.com/puppeteer/puppeteer/blob/v16.2.0/docs/api/puppeteer.elementhandle._eval.md) and [$$eval()](https://github.com/puppeteer/puppeteer/blob/v16.2.0/docs/api/puppeteer.elementhandle.__eval.md)?
I haven't found the _css selector_ that represents current element.
It'd be great if puppeteer could provide a workaround.
For example, making `selector` optional and thus selecting the current element by default.
```ts
tweetHandle.$eval(node => node.innerText);
```
| non_main |
how to select the current element using eval and or eval i know i can do this by and but do i have to specify something other than the current element as the selector parameter of and i haven t found the css selector that represents current element it d be great if puppeteer could provide a workaround for example making selector optional and thus selecting the current element by default ts tweethandle eval node node innertext
| 0 |
| 543,486 | 15,882,978,451 | IssuesEvent | 2021-04-09 16:45:10 | mozilla/addons-server | https://api.github.com/repos/mozilla/addons-server | closed | Developer dashboard page is slow for most users since the switch to MySQL 8.0.x | component: operations component: performance priority: p3 |
We've not tried to particularly optimize https://addons.mozilla.org/developers/addons (it was never fast, but it was fast enough for it's purpose) but recently (from around January 25th, according to stats) has become very slow (~15 seconds).
We should investigate if any code changes caused the slowdown (possible, but pushes were on January 21st and February 4th) or something else in the infrastructure.
| 1.0 |
Developer dashboard page is slow for most users since the switch to MySQL 8.0.x - We've not tried to particularly optimize https://addons.mozilla.org/developers/addons (it was never fast, but it was fast enough for it's purpose) but recently (from around January 25th, according to stats) has become very slow (~15 seconds).
We should investigate if any code changes caused the slowdown (possible, but pushes were on January 21st and February 4th) or something else in the infrastructure.
| non_main |
developer dashboard page is slow for most users since the switch to mysql x we ve not tried to particularly optimize it was never fast but it was fast enough for it s purpose but recently from around january according to stats has become very slow seconds we should investigate if any code changes caused the slowdown possible but pushes were on january and february or something else in the infrastructure
| 0 |
| 3,567 | 14,273,323,281 | IssuesEvent | 2020-11-21 21:05:28 | geolexica/geolexica-server | https://api.github.com/repos/geolexica/geolexica-server | opened | Build JavaScripts somewhere in this project | javascript maintainability |
Right now every site has its very own `package.json`. That isn't good, because we can't specify common requirements in one central place. For example, I'd really like to add dependency on Babel's [plugin for transpiling Unicode regexps](https://www.npmjs.com/package/@babel/plugin-proposal-unicode-property-regex), which is important for us (#135), even though it's now bundled with Babel as one of its default plugins.
Hence, I'm considering building all the required JavaScripts here in this very project. This will make overriding whole scripts more difficult, but perhaps it's not a big deal. Anyway, for maintainability sake, we prefer to configure common scripts rather than override them. Alternatively, maybe this project should produce NPM packages in addition to gems.
I don't know which way to go yet. I'm leaving this ticket open not to forget about this thing, and to open it for discussion.
| True |
Build JavaScripts somewhere in this project - Right now every site has its very own `package.json`. That isn't good, because we can't specify common requirements in one central place. For example, I'd really like to add dependency on Babel's [plugin for transpiling Unicode regexps](https://www.npmjs.com/package/@babel/plugin-proposal-unicode-property-regex), which is important for us (#135), even though it's now bundled with Babel as one of its default plugins.
Hence, I'm considering building all the required JavaScripts here in this very project. This will make overriding whole scripts more difficult, but perhaps it's not a big deal. Anyway, for maintainability sake, we prefer to configure common scripts rather than override them. Alternatively, maybe this project should produce NPM packages in addition to gems.
I don't know which way to go yet. I'm leaving this ticket open not to forget about this thing, and to open it for discussion.
| main |
build javascripts somewhere in this project right now every site has its very own package json that isn t good because we can t specify common requirements in one central place for example i d really like to add dependency on babel s which is important for us even though it s now bundled with babel as one of its default plugins hence i m considering building all the required javascripts here in this very project this will make overriding whole scripts more difficult but perhaps it s not a big deal anyway for maintainability sake we prefer to configure common scripts rather than override them alternatively maybe this project should produce npm packages in addition to gems i don t know which way to go yet i m leaving this ticket open not to forget about this thing and to open it for discussion
| 1 |
| 1,253 | 5,316,712,187 | IssuesEvent | 2017-02-13 20:36:59 | christoff-buerger/racr | https://api.github.com/repos/christoff-buerger/racr | closed | replace which in Bash scripts by command -v | low maintainability |
The `list-scheme-systems.bash` script is using `which` to find installed _R6RS Scheme_ systems that are officially supported by _RACR_. To use `which` in shell scripts is problematic however, since it is not a built-in command:
* External command calls are more expensive than built-ins.
* The semantics of the actual `which` executable/script called is operating system dependent (including different search strategies to find the command, exit codes and output behaviours like formatting and the device printed to).
A good overview about the problem gives http://unix.stackexchange.com/questions/85249/why-not-use-which-what-to-use-then.
A more portable -- in terms of operating system -- alternative is to use _Bash's_ `command -v` built-in; after all, _RACR's_ scripts are for _Bash_.
Scripts using `which` are:
* `list-scheme-systems.bash`
* `profiling/atomic-petrinets/print-system-configuration.bash`
The issue has been reported in pull request #73 by @rene-schoene.
| True |
replace which in Bash scripts by command -v - The `list-scheme-systems.bash` script is using `which` to find installed _R6RS Scheme_ systems that are officially supported by _RACR_. To use `which` in shell scripts is problematic however, since it is not a built-in command:
* External command calls are more expensive than built-ins.
* The semantics of the actual `which` executable/script called is operating system dependent (including different search strategies to find the command, exit codes and output behaviours like formatting and the device printed to).
A good overview about the problem gives http://unix.stackexchange.com/questions/85249/why-not-use-which-what-to-use-then.
A more portable -- in terms of operating system -- alternative is to use _Bash's_ `command -v` built-in; after all, _RACR's_ scripts are for _Bash_.
Scripts using `which` are:
* `list-scheme-systems.bash`
* `profiling/atomic-petrinets/print-system-configuration.bash`
The issue has been reported in pull request #73 by @rene-schoene.
| main |
replace which in bash scripts by command v the list scheme systems bash script is using which to find installed scheme systems that are officially supported by racr to use which in shell scripts is problematic however since it is not a built in command external command calls are more expensive than built ins the semantics of the actual which executable script called is operating system dependent including different search strategies to find the command exit codes and output behaviours like formatting and the device printed to a good overview about the problem gives a more portable in terms of operating system alternative is to use bash s command v built in after all racr s scripts are for bash scripts using which are list scheme systems bash profiling atomic petrinets print system configuration bash the issue has been reported in pull request by rene schoene
| 1 |
| 1,913 | 6,577,578,665 | IssuesEvent | 2017-09-12 01:53:27 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | Create VM with vsphere_guest from template and change IP address | affects_2.3 cloud feature_idea vmware waiting_on_maintainer |
##### ISSUE TYPE
Feature Idea
##### COMPONENT NAME
vsphere_guest
##### ANSIBLE VERSION
N/A
##### SUMMARY
I need a way to create virtual machine (VM) from template and change ip address in the creation process.
| True |
Create VM with vsphere_guest from template and change IP address - ##### ISSUE TYPE
Feature Idea
##### COMPONENT NAME
vsphere_guest
##### ANSIBLE VERSION
N/A
##### SUMMARY
I need a way to create virtual machine (VM) from template and change ip address in the creation process.
| main |
create vm with vsphere guest from template and change ip address issue type feature idea component name vsphere guest ansible version n a summary i need a way to create virtual machine vm from template and change ip address in the creation process
| 1 |
| 160,803 | 20,118,880,275 | IssuesEvent | 2022-02-07 22:52:52 | TreyM-WSS/whitesource-demo-1 | https://api.github.com/repos/TreyM-WSS/whitesource-demo-1 | opened | CVE-2021-23364 (Medium) detected in browserslist-4.7.0.tgz | security vulnerability |
## CVE-2021-23364 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>browserslist-4.7.0.tgz</b></p></summary>
<p>Share target browsers between different front-end tools, like Autoprefixer, Stylelint and babel-env-preset</p>
<p>Library home page: <a href="https://registry.npmjs.org/browserslist/-/browserslist-4.7.0.tgz">https://registry.npmjs.org/browserslist/-/browserslist-4.7.0.tgz</a></p>
<p>
Dependency Hierarchy:
- preset-env-7.6.2.tgz (Root Library)
- :x: **browserslist-4.7.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/TreyM-WSS/whitesource-demo-1/commit/6b3ad4a94fc12cfb9895be6288cc1855734988e1">6b3ad4a94fc12cfb9895be6288cc1855734988e1</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package browserslist from 4.0.0 and before 4.16.5 are vulnerable to Regular Expression Denial of Service (ReDoS) during parsing of queries.
<p>Publish Date: 2021-04-28
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23364>CVE-2021-23364</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23364">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23364</a></p>
<p>Release Date: 2021-04-28</p>
<p>Fix Resolution: browserslist - 4.16.5</p>
</p>
</details>
<p></p>
| True |
CVE-2021-23364 (Medium) detected in browserslist-4.7.0.tgz - ## CVE-2021-23364 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>browserslist-4.7.0.tgz</b></p></summary>
<p>Share target browsers between different front-end tools, like Autoprefixer, Stylelint and babel-env-preset</p>
<p>Library home page: <a href="https://registry.npmjs.org/browserslist/-/browserslist-4.7.0.tgz">https://registry.npmjs.org/browserslist/-/browserslist-4.7.0.tgz</a></p>
<p>
Dependency Hierarchy:
- preset-env-7.6.2.tgz (Root Library)
- :x: **browserslist-4.7.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/TreyM-WSS/whitesource-demo-1/commit/6b3ad4a94fc12cfb9895be6288cc1855734988e1">6b3ad4a94fc12cfb9895be6288cc1855734988e1</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package browserslist from 4.0.0 and before 4.16.5 are vulnerable to Regular Expression Denial of Service (ReDoS) during parsing of queries.
<p>Publish Date: 2021-04-28
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23364>CVE-2021-23364</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23364">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23364</a></p>
<p>Release Date: 2021-04-28</p>
<p>Fix Resolution: browserslist - 4.16.5</p>
</p>
</details>
<p></p>
| non_main |
cve medium detected in browserslist tgz cve medium severity vulnerability vulnerable library browserslist tgz share target browsers between different front end tools like autoprefixer stylelint and babel env preset library home page a href dependency hierarchy preset env tgz root library x browserslist tgz vulnerable library found in head commit a href vulnerability details the package browserslist from and before are vulnerable to regular expression denial of service redos during parsing of queries publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution browserslist
| 0 |
| 143,246 | 13,056,800,695 | IssuesEvent | 2020-07-30 05:49:45 | aws-samples/aws-cdk-intro-workshop | https://api.github.com/repos/aws-samples/aws-cdk-intro-workshop | closed | Workshop (Python) should advise about RetentionPolicy | documentation effort/small feature-request |
_Originally posted by @telenieko in https://github.com/aws/aws-cdk/issues/7403
Hi,
I just got bit by #3476, it is stated that this is intentional and documented but...
Given it's a workshop, one would expect that the last step would destroy all created resources, which does not happen and there's no warning about it.
The Workshop should either:
In the Congrats page, when telling you to cdk destroy advise that the DynamoDB table will not be deleted (extra points for a link to the RetentionPolicy docs)
When creating the table, set removalPolicy: RemovalPolicy.DESTROY and explain it
Option two would be preferable as it explains something quite useful.
| 1.0 |
Workshop (Python) should advise about RetentionPolicy - _Originally posted by @telenieko in https://github.com/aws/aws-cdk/issues/7403
Hi,
I just got bit by #3476, it is stated that this is intentional and documented but...
Given it's a workshop, one would expect that the last step would destroy all created resources, which does not happen and there's no warning about it.
The Workshop should either:
In the Congrats page, when telling you to cdk destroy advise that the DynamoDB table will not be deleted (extra points for a link to the RetentionPolicy docs)
When creating the table, set removalPolicy: RemovalPolicy.DESTROY and explain it
Option two would be preferable as it explains something quite useful.
| non_main |
workshop python should advise about retentionpolicy originally posted by telenieko in hi i just got bit by it is stated that this is intentional and documented but given it s a workshop one would expect that the last step would destroy all created resources which does not happen and there s no warning about it the workshop should either in the congrats page when telling you to cdk destroy advise that the dynamodb table will not be deleted extra points for a link to the retentionpolicy docs when creating the table set removalpolicy removalpolicy destroy and explain it option two would be preferable as it explains something quite useful
| 0 |
| 5,251 | 26,576,817,573 | IssuesEvent | 2023-01-21 23:09:03 | Lissy93/dashy | https://api.github.com/repos/Lissy93/dashy | closed | [BUG] Vulnerabilities Widget Not Loading | 🐛 Bug 👤 Awaiting Maintainer Response |
### Environment
Self-Hosted (Docker)
### System
Docker version 20.10.22, build 3a2c30b
### Version
2.1.1
### Describe the problem
The vulnerabilities widget does not load with any combination of configuration, even with options specified. The message on the widget and in the logs is "Unable to fetch data".
Other widgets like the domain monitor widget are working as expected.
Current non working widget configuration:
` widgets:
- type: cve-vulnerabilities
id: 0_1586_cvevulnerabilities
`
### Additional info

### Please tick the boxes
- [X] You have explained the issue clearly, and included all relevant info
- [X] You are using a [supported](https://github.com/Lissy93/dashy/blob/master/.github/SECURITY.md#supported-versions) version of Dashy
- [X] You've checked that this [issue hasn't already been raised](https://github.com/Lissy93/dashy/issues?q=is%3Aissue)
- [X] You've checked the [docs](https://github.com/Lissy93/dashy/tree/master/docs#readme) and [troubleshooting](https://github.com/Lissy93/dashy/blob/master/docs/troubleshooting.md#troubleshooting) guide 
- [X] You agree to the [code of conduct](https://github.com/Lissy93/dashy/blob/master/.github/CODE_OF_CONDUCT.md#contributor-covenant-code-of-conduct)
| True |
[BUG] Vulnerabilities Widget Not Loading - ### Environment
Self-Hosted (Docker)
### System
Docker version 20.10.22, build 3a2c30b
### Version
2.1.1
### Describe the problem
The vulnerabilities widget does not load with any combination of configuration, even with options specified. The message on the widget and in the logs is "Unable to fetch data".
Other widgets like the domain monitor widget are working as expected.
Current non working widget configuration:
` widgets:
- type: cve-vulnerabilities
id: 0_1586_cvevulnerabilities
`
### Additional info

### Please tick the boxes
- [X] You have explained the issue clearly, and included all relevant info
- [X] You are using a [supported](https://github.com/Lissy93/dashy/blob/master/.github/SECURITY.md#supported-versions) version of Dashy
- [X] You've checked that this [issue hasn't already been raised](https://github.com/Lissy93/dashy/issues?q=is%3Aissue)
- [X] You've checked the [docs](https://github.com/Lissy93/dashy/tree/master/docs#readme) and [troubleshooting](https://github.com/Lissy93/dashy/blob/master/docs/troubleshooting.md#troubleshooting) guide 
- [X] You agree to the [code of conduct](https://github.com/Lissy93/dashy/blob/master/.github/CODE_OF_CONDUCT.md#contributor-covenant-code-of-conduct)
| main |
vulnerabilities widget not loading environment self hosted docker system docker version build version describe the problem the vulnerabilities widget does not load with any combination of configuration even with options specified the message on the widget and in the logs is unable to fetch data other widgets like the domain monitor widget are working as expected current non working widget configuration widgets type cve vulnerabilities id cvevulnerabilities additional info please tick the boxes you have explained the issue clearly and included all relevant info you are using a version of dashy you ve checked that this you ve checked the and guide you agree to the
| 1 |
| 335,961 | 10,169,178,869 | IssuesEvent | 2019-08-07 23:18:15 | AugurProject/augur | https://api.github.com/repos/AugurProject/augur | closed | Implement getAccountTimeRangedStats getter & tests | Feature Priority: High Product Critical |
```ts
export interface AccountTimeRangedStatsResult {
// Yea. The ProfitLossChanged event then
// Sum of unique entries (defined by market + outcome) with non-zero netPosition
positions: number;
// OrderEvent table for fill events (eventType == 3) where they are the orderCreator or orderFiller address
// if multiple fills in the same tx count as one trade then also couldting just the unique tradeGroupId from those
numberOfTrades: number;
marketsCreated: number;
// Trades? uniq the market
marketsTraded: number;
// DisputeCrowdsourcerRedeemed where the payoutNumerators match the MarketFinalized winningPayoutNumerators
successfulDisputes: number;
// For getAccountTimeRangedStats.redeemedPositions use the InitialReporterRedeemed and DisputeCrowdsourcerRedeemed log?
redeemedPositions: number;
}
```
|
1.0
|
Implement getAccountTimeRangedStats getter & tests - ```ts
export interface AccountTimeRangedStatsResult {
// Yea. The ProfitLossChanged event then
// Sum of unique entries (defined by market + outcome) with non-zero netPosition
positions: number;
// OrderEvent table for fill events (eventType == 3) where they are the orderCreator or orderFiller address
// if multiple fills in the same tx count as one trade then also counting just the unique tradeGroupId from those
numberOfTrades: number;
marketsCreated: number;
// Trades? uniq the market
marketsTraded: number;
// DisputeCrowdsourcerRedeemed where the payoutNumerators match the MarketFinalized winningPayoutNumerators
successfulDisputes: number;
// For getAccountTimeRangedStats.redeemedPositions use the InitialReporterRedeemed and DisputeCrowdsourcerRedeemed log?
redeemedPositions: number;
}
```
|
non_main
|
implement getaccounttimerangedstats getter tests ts export interface accounttimerangedstatsresult yea the profitlosschanged event then sum of unique entries defined by market outcome with non zero netposition positions number orderevent table for fill events eventtype where they are the ordercreator or orderfiller address if multiple fills in the same tx count as one trade then also couldting just the unique tradegroupid from those numberoftrades number marketscreated number trades uniq the market marketstraded number disputecrowdsourcerredeemed where the payoutnumerators match the marketfinalized winningpayoutnumerators successfuldisputes number for getaccounttimerangedstats redeemedpositions use the initialreporterredeemed and disputecrowdsourcerredeemed log redeemedpositions number
| 0
|
42,568
| 5,476,560,222
|
IssuesEvent
|
2017-03-11 21:47:51
|
ceylon/ceylon.ast
|
https://api.github.com/repos/ceylon/ceylon.ast
|
opened
|
Use ModuleSpecifier in ModuleImport
|
API design
|
The culmination of #128 and #129, in a way – when I implemented #128, we didn’t have a spec for #129 and `ModuleSpecifier` yet, and when I implemented #129, I didn’t notice that `ModuleImport` had also been updated again to use `ModuleSpecifier` too.
|
1.0
|
Use ModuleSpecifier in ModuleImport - The culmination of #128 and #129, in a way – when I implemented #128, we didn’t have a spec for #129 and `ModuleSpecifier` yet, and when I implemented #129, I didn’t notice that `ModuleImport` had also been updated again to use `ModuleSpecifier` too.
|
non_main
|
use modulespecifier in moduleimport the culmination of and in a way – when i implemented we didn’t have a spec for and modulespecifier yet and when i implemented i didn’t notice that moduleimport had also been updated again to use modulespecifier too
| 0
|
79,805
| 23,044,142,226
|
IssuesEvent
|
2022-07-23 16:22:11
|
Tombodil/Trope22
|
https://api.github.com/repos/Tombodil/Trope22
|
closed
|
Ongoing updates to launch - Notes and feedback
|
enhancement Building
|
### Ongoing changes and updates
Changes per 7/17 meeting and other assorted changes
===================================
HEADER
-"Go" shortened, text centered, alignment adjusted to new 140px area margin (even margins r/l).
--Dropdown adjusted to match tagline alignment.
--Full logic for header adjusted, now arrows highlight when category hovered or selected.
--'Go' has correct hover state.
CAROUSELS
-carousel background area black
--arrows stronger needs rollover
--- FIXED squarespace margin calc: page + 3vw padder + internal component padding. That was a surprise geometry problem!
CONTACT FORM / FOOTER
-adjusted margins to new menu size and position, internal margins and layout adjusted.
-removed the bottom border on the social media links
-left text line break standardized
-Fields now stay white with user input
-Submit button becomes blue when all fields are filled
--actual data validation is handled by squarespace, this is visual, but all fields ARE required and type is validated
-Footer text is now grey and aligns with form content
HOME PAGE
-Footer text made grey and aligned with page layout
-content margins for content blocks hard coded to 140px;
-Left alignment adjusted
--overlay aligns with main content, everything flush left
--choices are width of 'trope collaborative'
---carousel dot replaced with a good icon
QUOTES FUNCTION -logic done
-quote white and reduced leading to 1.2em
-kept italic
--title of quoted person - changed to blue, can be removed
LANDING PAGE SORTING
-tabs are white on blue
-reset is black with white text
-reset is hidden until a choice is made
PUBLISHED PAGE - Agile
-removed space between title and subtitle
PUBLISHING LANDING PAGE
--Blurbs added
INTEREST AREAS
-Sub Title removed and text brought in alignment with left padding
description copy is now grey
-Description is flush left on the sub cat line ONLY if over 1300px screen width. otherwise it's the original size (for mobile readability).
--right margin to arrows added
--arrows are correctly aligned and spaced based on monospace type
INTERVIEWS
-Kept leader lines
-Interview link on line break and formatted
-Publication title is bold
--Sub Titles given line breaks where needed.
-All img icons given specific css formatting to keep aspect ratio, currently 100px wide.
--Added new interview with link
CASE STORIES LANDING
CASE STORIES (Current Example Aurora)
-Sub cat are H3 and no leader lines
--back chevron on left, right chevron on right
--Chevrons now change color with the hover state
INSPIRATION (LINKS LANDING)
-Broken Links removed, correct links checked and links added
--Added new blubs
INSTAGRAM
BLOG LINK
|
1.0
|
Ongoing updates to launch - Notes and feedback - ### Ongoing changes and updates
Changes per 7/17 meeting and other assorted changes
===================================
HEADER
-"Go" shortened, text centered, alignment adjusted to new 140px area margin (even margins r/l).
--Dropdown adjusted to match tagline alignment.
--Full logic for header adjusted, now arrows highlight when category hovered or selected.
--'Go' has correct hover state.
CAROUSELS
-carousel background area black
--arrows stronger needs rollover
--- FIXED squarespace margin calc: page + 3vw padder + internal component padding. That was a surprise geometry problem!
CONTACT FORM / FOOTER
-adjusted margins to new menu size and position, internal margins and layout adjusted.
-removed the bottom border on the social media links
-left text line break standardized
-Fields now stay white with user input
-Submit button becomes blue when all fields are filled
--actual data validation is handled by squarespace, this is visual, but all fields ARE required and type is validated
-Footer text is now grey and aligns with form content
HOME PAGE
-Footer text made grey and aligned with page layout
-content margins for content blocks hard coded to 140px;
-Left alignment adjusted
--overlay aligns with main content, everything flush left
--choices are width of 'trope collaborative'
---carousel dot replaced with a good icon
QUOTES FUNCTION -logic done
-quote white and reduced leading to 1.2em
-kept italic
--title of quoted person - changed to blue, can be removed
LANDING PAGE SORTING
-tabs are white on blue
-reset is black with white text
-reset is hidden until a choice is made
PUBLISHED PAGE - Agile
-removed space between title and subtitle
PUBLISHING LANDING PAGE
--Blurbs added
INTEREST AREAS
-Sub Title removed and text brought in alignment with left padding
description copy is now grey
-Description is flush left on the sub cat line ONLY if over 1300px screen width. otherwise it's the original size (for mobile readability).
--right margin to arrows added
--arrows are correctly aligned and spaced based on monospace type
INTERVIEWS
-Kept leader lines
-Interview link on line break and formatted
-Publication title is bold
--Sub Titles given line breaks where needed.
-All img icons given specific css formatting to keep aspect ratio, currently 100px wide.
--Added new interview with link
CASE STORIES LANDING
CASE STORIES (Current Example Aurora)
-Sub cat are H3 and no leader lines
--back chevron on left, right chevron on right
--Chevrons now change color with the hover state
INSPIRATION (LINKS LANDING)
-Broken Links removed, correct links checked and links added
--Added new blubs
INSTAGRAM
BLOG LINK
|
non_main
|
ongoing updates to launch notes and feedback ongoing changes and updates changes per meeting and other assorted changes header go shortened text centered alignment adjusted to new area margin even margins r l dropdown adjusted to match tagline alignment full logic for header adjusted now arrows highlight when category hovered or selected go has correct hover state carousels carousel background area black arrows stronger needs rollover fixed sqaurespace margin calc page padder internal component padding that was a surprise geometry problem contact form footer adjusted margins to new menu size and position internal margins and layout adjusted removed the bottom border on the social media links left text line break standardized fields now stay white with user input submit button becomes blue when all fields are filled actual data validation is handled by squarespace this is visual but all fields are required and type is validated footer text is now grey and aligns with form content home page footer test made grey and aligned with page layout content margins for content blocks hard coded to left alignment adjusted overlay aligns with main content everything flush left choices are width of trope collaborative carousel dot replaced with a good icon quotes function logic done quote white and reduced leading to kept italic title of quoted person changed to blue can be removed landing page sorting tabs are white on blue reset is black with white text reset is hidden until a choice is made published page agile removed space between title and subtitle publishing landing page blurbs added interest areas sub title removed and text brought in alignment with left padding description copy is now grey description is flush left on the sub cat line only if over screen width otherwise it s the original size for mobile readability right margin to arrows added arrows are correctly aligned and spaced based on monospace type interviews kept leader lines interview link on line break and 
formatted publication title is bold sub titles given line breaks where needed all img icons given specific css formatting to keep aspect ratio currently wide added new interview with link case stories landing case stories current example aurora sub cat are and no leader lines back chevron on left right chevron on right chevrons now change color with the hover state inspiration links landing broken links removed correct links checked and links added added new blubs instagram blog link
| 0
|
83,974
| 7,886,274,780
|
IssuesEvent
|
2018-06-27 14:49:19
|
hazelcast/hazelcast-nodejs-client
|
https://api.github.com/repos/hazelcast/hazelcast-nodejs-client
|
closed
|
Map Partition Aware put
|
Type: Test-Failure
|
https://hazelcast-l337.ci.cloudbees.com/view/Official%20Builds/job/NodeJS-4/431/console
```
06:51:12 Map Partition Aware
06:51:28 [DefaultLogger] INFO at ClusterService: Members received.
06:51:28 [ Member {
06:51:28 address: Address { host: 'localhost', port: 5701, type: 0 },
06:51:28 uuid: 'f690d1ea-de83-4339-8a38-d5a698bad1db',
06:51:28 isLiteMember: false,
06:51:28 attributes: {} },
06:51:28 Member {
06:51:28 address: Address { host: 'localhost', port: 5702, type: 0 },
06:51:28 uuid: '64f7d39f-62d1-4961-9af3-6f72da39845d',
06:51:28 isLiteMember: false,
06:51:28 attributes: {} },
06:51:28 Member {
06:51:28 address: Address { host: 'localhost', port: 5703, type: 0 },
06:51:28 uuid: 'a03d6fd6-145f-48d1-bedd-73ddcf3dc8e9',
06:51:28 isLiteMember: false,
06:51:28 attributes: {} } ]
06:51:29 [DefaultLogger] INFO at HazelcastClient: Client started
06:51:36 1) put
...
1) Map Partition Aware put:
06:54:54
06:54:54 One member should have all of the entries. The rest will have 0 entries.
06:54:54 + expected - actual
06:54:54
06:54:54 [
06:54:54 - 9999
06:54:54 + 10000
06:54:54 0
06:54:54 0
06:54:54 ]
06:54:54
06:54:54 at test/map/MapPartitionAwareTest.js:103:129
06:54:54 at tryCatcher (node_modules/bluebird/js/release/util.js:16:23)
06:54:54 at Promise._settlePromiseFromHandler (node_modules/bluebird/js/release/promise.js:512:31)
06:54:54 at Promise._settlePromise (node_modules/bluebird/js/release/promise.js:569:18)
06:54:54 at Promise._settlePromise0 (node_modules/bluebird/js/release/promise.js:614:10)
06:54:54 at Promise._settlePromises (node_modules/bluebird/js/release/promise.js:693:18)
06:54:54 at Promise._fulfill (node_modules/bluebird/js/release/promise.js:638:18)
06:54:54 at PromiseArray._resolve (node_modules/bluebird/js/release/promise_array.js:126:19)
06:54:54 at PromiseArray._promiseFulfilled (node_modules/bluebird/js/release/promise_array.js:144:14)
06:54:54 at Promise._settlePromise (node_modules/bluebird/js/release/promise.js:574:26)
06:54:54 at Promise._settlePromise0 (node_modules/bluebird/js/release/promise.js:614:10)
06:54:54 at Promise._settlePromises (node_modules/bluebird/js/release/promise.js:693:18)
06:54:54 at Async._drainQueue (node_modules/bluebird/js/release/async.js:133:16)
06:54:54 at Async._drainQueues (node_modules/bluebird/js/release/async.js:143:10)
06:54:54 at Immediate.Async.drainQueues [as _onImmediate] (node_modules/bluebird/js/release/async.js:17:14)
```
|
1.0
|
Map Partition Aware put - https://hazelcast-l337.ci.cloudbees.com/view/Official%20Builds/job/NodeJS-4/431/console
```
06:51:12 Map Partition Aware
06:51:28 [DefaultLogger] INFO at ClusterService: Members received.
06:51:28 [ Member {
06:51:28 address: Address { host: 'localhost', port: 5701, type: 0 },
06:51:28 uuid: 'f690d1ea-de83-4339-8a38-d5a698bad1db',
06:51:28 isLiteMember: false,
06:51:28 attributes: {} },
06:51:28 Member {
06:51:28 address: Address { host: 'localhost', port: 5702, type: 0 },
06:51:28 uuid: '64f7d39f-62d1-4961-9af3-6f72da39845d',
06:51:28 isLiteMember: false,
06:51:28 attributes: {} },
06:51:28 Member {
06:51:28 address: Address { host: 'localhost', port: 5703, type: 0 },
06:51:28 uuid: 'a03d6fd6-145f-48d1-bedd-73ddcf3dc8e9',
06:51:28 isLiteMember: false,
06:51:28 attributes: {} } ]
06:51:29 [DefaultLogger] INFO at HazelcastClient: Client started
06:51:36 1) put
...
1) Map Partition Aware put:
06:54:54
06:54:54 One member should have all of the entries. The rest will have 0 entries.
06:54:54 + expected - actual
06:54:54
06:54:54 [
06:54:54 - 9999
06:54:54 + 10000
06:54:54 0
06:54:54 0
06:54:54 ]
06:54:54
06:54:54 at test/map/MapPartitionAwareTest.js:103:129
06:54:54 at tryCatcher (node_modules/bluebird/js/release/util.js:16:23)
06:54:54 at Promise._settlePromiseFromHandler (node_modules/bluebird/js/release/promise.js:512:31)
06:54:54 at Promise._settlePromise (node_modules/bluebird/js/release/promise.js:569:18)
06:54:54 at Promise._settlePromise0 (node_modules/bluebird/js/release/promise.js:614:10)
06:54:54 at Promise._settlePromises (node_modules/bluebird/js/release/promise.js:693:18)
06:54:54 at Promise._fulfill (node_modules/bluebird/js/release/promise.js:638:18)
06:54:54 at PromiseArray._resolve (node_modules/bluebird/js/release/promise_array.js:126:19)
06:54:54 at PromiseArray._promiseFulfilled (node_modules/bluebird/js/release/promise_array.js:144:14)
06:54:54 at Promise._settlePromise (node_modules/bluebird/js/release/promise.js:574:26)
06:54:54 at Promise._settlePromise0 (node_modules/bluebird/js/release/promise.js:614:10)
06:54:54 at Promise._settlePromises (node_modules/bluebird/js/release/promise.js:693:18)
06:54:54 at Async._drainQueue (node_modules/bluebird/js/release/async.js:133:16)
06:54:54 at Async._drainQueues (node_modules/bluebird/js/release/async.js:143:10)
06:54:54 at Immediate.Async.drainQueues [as _onImmediate] (node_modules/bluebird/js/release/async.js:17:14)
```
|
non_main
|
map partition aware put map partition aware info at clusterservice members received member address address host localhost port type uuid islitemember false attributes member address address host localhost port type uuid islitemember false attributes member address address host localhost port type uuid bedd islitemember false attributes info at hazelcastclient client started put map partition aware put one member should have all of the entries the rest will have entries expected actual at test map mappartitionawaretest js at trycatcher node modules bluebird js release util js at promise settlepromisefromhandler node modules bluebird js release promise js at promise settlepromise node modules bluebird js release promise js at promise node modules bluebird js release promise js at promise settlepromises node modules bluebird js release promise js at promise fulfill node modules bluebird js release promise js at promisearray resolve node modules bluebird js release promise array js at promisearray promisefulfilled node modules bluebird js release promise array js at promise settlepromise node modules bluebird js release promise js at promise node modules bluebird js release promise js at promise settlepromises node modules bluebird js release promise js at async drainqueue node modules bluebird js release async js at async drainqueues node modules bluebird js release async js at immediate async drainqueues node modules bluebird js release async js
| 0
|
1,542
| 6,572,233,227
|
IssuesEvent
|
2017-09-11 00:23:23
|
ansible/ansible-modules-extras
|
https://api.github.com/repos/ansible/ansible-modules-extras
|
closed
|
using NVM: npm module throws "ValueError: No JSON object could be decoded" the second time it's run
|
affects_2.0 bug_report waiting_on_maintainer
|
##### ISSUE TYPE
- Bug Report
##### ANSIBLE VERSION
```
ansible 2.0.0.2
config file = /Users/oby/cermati-deployment/ansible.cfg
configured module search path = /usr/share/ansible
```
##### CONFIGURATION
Nothing changed, I use the default configuration.
##### OS / ENVIRONMENT
Running ansible on Mac Darwin Kernel Version 13.4.0
This for deploying on Ubuntu 14.04
##### SUMMARY
I used nvm (node version manager https://www.npmjs.com/package/nvm) to install different versions of npm on my server. So after installing node version 0.10.44 using nvm, I tried doing `npm install` using ansible's npm module. The first time my playbook is run, everything went well. **But the second time** I run the playbook, the npm module throws an error (this behavior is pretty consistent):
```
TASK [do npm install from package.json for npm version 2.15.0] *****************
fatal: [128.199.225.57]: FAILED! => {"changed": false, "failed": true, "module_stderr": "", "module_stdout": "Traceback (most recent call last):\r\n File \"/home/appRunner/.ansible/tmp/ansible-tmp-1461802965.66-211488871785680/npm\", line 2198, in <module>\r\n main()\r\n File \"/home/appRunner/.ansible/tmp/ansible-tmp-1461802965.66-211488871785680/npm\", line 245, in main\r\n installed, missing = npm.list()\r\n File \"/home/appRunner/.ansible/tmp/ansible-tmp-1461802965.66-211488871785680/npm\", line 169, in list\r\n data = json.loads(self._exec(cmd, True, False))\r\n File \"/usr/lib/python2.7/json/__init__.py\", line 338, in loads\r\n return _default_decoder.decode(s)\r\n File \"/usr/lib/python2.7/json/decoder.py\", line 366, in decode\r\n obj, end = self.raw_decode(s, idx=_w(s, 0).end())\r\n File \"/usr/lib/python2.7/json/decoder.py\", line 384, in raw_decode\r\n raise ValueError(\"No JSON object could be decoded\")\r\nValueError: No JSON object could be decoded\r\n", "msg": "MODULE FAILURE", "parsed": false}
```
Here's the ansible command that caused the problem:
```
- name: do npm install from package.json for npm version 2.15.0
npm: executable=/home/appRunner/.nvm/v0.10.44/bin/npm path={{ workspace_dir }}
remote_user: "appRunner"
```
NOTE: I can run npm install multiple times if I ssh-ed directly to the server.
For now, my workaround is using `command` module and issuing command `npm install` directly. But it'd be nice if npm module is also working.
Here's the ansible command that I use for the workaround:
```
- name: do npm install from package.json for npm version 2.15.0
command: chdir={{ workspace_dir }} /bin/bash -c "{{nvm_bin_dir}}/npm install"
remote_user: "appRunner"
```
##### STEPS TO REPRODUCE
Create a playbook that
1.) Install nvm
2.) Install npm version 0.10.44
3.) run npm install
Run the playbook twice. The second time, it will throw the error.
See example playbook
```
- hosts: stg2-server
remote_user: root
vars:
workspace_dir: /home/{{ server.name }}/workspace
nvm_dir: /home/{{ server.name }}/.nvm
nvm_bin_dir: "{{nvm_dir}}/v{{node_version}}/bin"
vars_files:
- ./vars/secure_vars.yml
roles:
# setup user and install gcc, make, etc.
- role: common
tasks:
- name: install nvm
shell: curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.31.0/install.sh | bash
remote_user: "appRunner"
- name: install node v0.10.44
shell: export NVM_DIR="$HOME/.nvm" && . "$NVM_DIR/nvm.sh" && nvm install v{{node_version}} && nvm use v{{node_version}}
remote_user: "appRunner"
- name: create a workspace directory in the user home directory
file: path={{ workspace_dir }} state=directory
remote_user: "appRunner"
- name: do a git clone to the workspace from master branch
git: accept_hostkey=yes repo=<fill your own repo> dest={{ workspace_dir }}
remote_user: "appRunner"
- name: do npm install from package.json for npm version 2.15.0
npm: executable=/home/appRunner/.nvm/v0.10.44/bin/npm path={{ workspace_dir }}
remote_user: "appRunner"
```
##### EXPECTED RESULTS
```
TASK [do npm install from package.json for npm version 2.15.0] *****************
changed: [128.199.225.57]
```
##### ACTUAL RESULTS
```
TASK [do npm install from package.json for npm version 2.15.0] *****************
task path: /Users/oby/cermati-deployment/test.yml:31
<128.199.225.57> ESTABLISH SSH CONNECTION FOR USER: appRunner
<128.199.225.57> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=22 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=appRunner -o ConnectTimeout=10 -o ControlPath=/Users/oby/.ansible/cp/ansible-ssh-%h-%p-%r -tt 128.199.225.57 '( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1461811178.63-221498059846058 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1461811178.63-221498059846058 )" )'
<128.199.225.57> PUT /var/folders/jd/srwxscyj62j3q8gx1_h37djh0000gn/T/tmpt08bdv TO /home/appRunner/.ansible/tmp/ansible-tmp-1461811178.63-221498059846058/npm
<128.199.225.57> SSH: EXEC sftp -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=22 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=appRunner -o ConnectTimeout=10 -o ControlPath=/Users/oby/.ansible/cp/ansible-ssh-%h-%p-%r '[128.199.225.57]'
<128.199.225.57> ESTABLISH SSH CONNECTION FOR USER: appRunner
<128.199.225.57> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=22 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=appRunner -o ConnectTimeout=10 -o ControlPath=/Users/oby/.ansible/cp/ansible-ssh-%h-%p-%r -tt 128.199.225.57 'LANG=C LC_ALL=C LC_MESSAGES=C /usr/bin/python /home/appRunner/.ansible/tmp/ansible-tmp-1461811178.63-221498059846058/npm; rm -rf "/home/appRunner/.ansible/tmp/ansible-tmp-1461811178.63-221498059846058/" > /dev/null 2>&1'
fatal: [128.199.225.57]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_name": "npm"}, "module_stderr": "OpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011\ndebug1: Reading configuration data /Users/oby/.ssh/config\r\ndebug1: /Users/oby/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh_config\r\ndebug1: /etc/ssh_config line 20: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 20565\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\nShared connection to 128.199.225.57 closed.\r\n", "module_stdout": "Traceback (most recent call last):\r\n File \"/home/appRunner/.ansible/tmp/ansible-tmp-1461811178.63-221498059846058/npm\", line 2198, in <module>\r\n main()\r\n File \"/home/appRunner/.ansible/tmp/ansible-tmp-1461811178.63-221498059846058/npm\", line 245, in main\r\n installed, missing = npm.list()\r\n File \"/home/appRunner/.ansible/tmp/ansible-tmp-1461811178.63-221498059846058/npm\", line 169, in list\r\n data = json.loads(self._exec(cmd, True, False))\r\n File \"/usr/lib/python2.7/json/__init__.py\", line 338, in loads\r\n return _default_decoder.decode(s)\r\n File \"/usr/lib/python2.7/json/decoder.py\", line 366, in decode\r\n obj, end = self.raw_decode(s, idx=_w(s, 0).end())\r\n File \"/usr/lib/python2.7/json/decoder.py\", line 384, in raw_decode\r\n raise ValueError(\"No JSON object could be decoded\")\r\nValueError: No JSON object could be decoded\r\n", "msg": "MODULE FAILURE", "parsed": false}
```
|
True
|
using NVM: npm module throws "ValueError: No JSON object could be decoded" the second time it's run - ##### ISSUE TYPE
- Bug Report
##### ANSIBLE VERSION
```
ansible 2.0.0.2
config file = /Users/oby/cermati-deployment/ansible.cfg
configured module search path = /usr/share/ansible
```
##### CONFIGURATION
Nothing changed, I use the default configuration.
##### OS / ENVIRONMENT
Running ansible on Mac Darwin Kernel Version 13.4.0
This for deploying on Ubuntu 14.04
##### SUMMARY
I used nvm (node version manager https://www.npmjs.com/package/nvm) to install different versions of npm on my server. So after installing node version 0.10.44 using nvm, I tried doing `npm install` using ansible's npm module. The first time my playbook is run, everything went well. **But the second time** I run the playbook, the npm module throws an error (this behavior is pretty consistent):
```
TASK [do npm install from package.json for npm version 2.15.0] *****************
fatal: [128.199.225.57]: FAILED! => {"changed": false, "failed": true, "module_stderr": "", "module_stdout": "Traceback (most recent call last):\r\n File \"/home/appRunner/.ansible/tmp/ansible-tmp-1461802965.66-211488871785680/npm\", line 2198, in <module>\r\n main()\r\n File \"/home/appRunner/.ansible/tmp/ansible-tmp-1461802965.66-211488871785680/npm\", line 245, in main\r\n installed, missing = npm.list()\r\n File \"/home/appRunner/.ansible/tmp/ansible-tmp-1461802965.66-211488871785680/npm\", line 169, in list\r\n data = json.loads(self._exec(cmd, True, False))\r\n File \"/usr/lib/python2.7/json/__init__.py\", line 338, in loads\r\n return _default_decoder.decode(s)\r\n File \"/usr/lib/python2.7/json/decoder.py\", line 366, in decode\r\n obj, end = self.raw_decode(s, idx=_w(s, 0).end())\r\n File \"/usr/lib/python2.7/json/decoder.py\", line 384, in raw_decode\r\n raise ValueError(\"No JSON object could be decoded\")\r\nValueError: No JSON object could be decoded\r\n", "msg": "MODULE FAILURE", "parsed": false}
```
Here's the ansible command that caused the problem:
```
- name: do npm install from package.json for npm version 2.15.0
npm: executable=/home/appRunner/.nvm/v0.10.44/bin/npm path={{ workspace_dir }}
remote_user: "appRunner"
```
NOTE: I can run npm install multiple times if I ssh-ed directly to the server.
For now, my workaround is using `command` module and issuing command `npm install` directly. But it'd be nice if npm module is also working.
Here's the ansible command that I use for the workaround:
```
- name: do npm install from package.json for npm version 2.15.0
command: chdir={{ workspace_dir }} /bin/bash -c "{{nvm_bin_dir}}/npm install"
remote_user: "appRunner"
```
##### STEPS TO REPRODUCE
Create a playbook that
1.) Install nvm
2.) Install npm version 0.10.44
3.) run npm install
Run the playbook twice. The second time, it will throw the error.
See example playbook
```
- hosts: stg2-server
remote_user: root
vars:
workspace_dir: /home/{{ server.name }}/workspace
nvm_dir: /home/{{ server.name }}/.nvm
nvm_bin_dir: "{{nvm_dir}}/v{{node_version}}/bin"
vars_files:
- ./vars/secure_vars.yml
roles:
# setup user and install gcc, make, etc.
- role: common
tasks:
- name: install nvm
shell: curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.31.0/install.sh | bash
remote_user: "appRunner"
- name: install node v0.10.44
shell: export NVM_DIR="$HOME/.nvm" && . "$NVM_DIR/nvm.sh" && nvm install v{{node_version}} && nvm use v{{node_version}}
remote_user: "appRunner"
- name: create a workspace directory in the user home directory
file: path={{ workspace_dir }} state=directory
remote_user: "appRunner"
- name: do a git clone to the workspace from master branch
git: accept_hostkey=yes repo=<fill your own repo> dest={{ workspace_dir }}
remote_user: "appRunner"
- name: do npm install from package.json for npm version 2.15.0
npm: executable=/home/appRunner/.nvm/v0.10.44/bin/npm path={{ workspace_dir }}
remote_user: "appRunner"
```
##### EXPECTED RESULTS
```
TASK [do npm install from package.json for npm version 2.15.0] *****************
changed: [128.199.225.57]
```
##### ACTUAL RESULTS
```
TASK [do npm install from package.json for npm version 2.15.0] *****************
task path: /Users/oby/cermati-deployment/test.yml:31
<128.199.225.57> ESTABLISH SSH CONNECTION FOR USER: appRunner
<128.199.225.57> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=22 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=appRunner -o ConnectTimeout=10 -o ControlPath=/Users/oby/.ansible/cp/ansible-ssh-%h-%p-%r -tt 128.199.225.57 '( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1461811178.63-221498059846058 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1461811178.63-221498059846058 )" )'
<128.199.225.57> PUT /var/folders/jd/srwxscyj62j3q8gx1_h37djh0000gn/T/tmpt08bdv TO /home/appRunner/.ansible/tmp/ansible-tmp-1461811178.63-221498059846058/npm
<128.199.225.57> SSH: EXEC sftp -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=22 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=appRunner -o ConnectTimeout=10 -o ControlPath=/Users/oby/.ansible/cp/ansible-ssh-%h-%p-%r '[128.199.225.57]'
<128.199.225.57> ESTABLISH SSH CONNECTION FOR USER: appRunner
<128.199.225.57> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=22 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=appRunner -o ConnectTimeout=10 -o ControlPath=/Users/oby/.ansible/cp/ansible-ssh-%h-%p-%r -tt 128.199.225.57 'LANG=C LC_ALL=C LC_MESSAGES=C /usr/bin/python /home/appRunner/.ansible/tmp/ansible-tmp-1461811178.63-221498059846058/npm; rm -rf "/home/appRunner/.ansible/tmp/ansible-tmp-1461811178.63-221498059846058/" > /dev/null 2>&1'
fatal: [128.199.225.57]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_name": "npm"}, "module_stderr": "OpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011\ndebug1: Reading configuration data /Users/oby/.ssh/config\r\ndebug1: /Users/oby/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh_config\r\ndebug1: /etc/ssh_config line 20: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 20565\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\nShared connection to 128.199.225.57 closed.\r\n", "module_stdout": "Traceback (most recent call last):\r\n File \"/home/appRunner/.ansible/tmp/ansible-tmp-1461811178.63-221498059846058/npm\", line 2198, in <module>\r\n main()\r\n File \"/home/appRunner/.ansible/tmp/ansible-tmp-1461811178.63-221498059846058/npm\", line 245, in main\r\n installed, missing = npm.list()\r\n File \"/home/appRunner/.ansible/tmp/ansible-tmp-1461811178.63-221498059846058/npm\", line 169, in list\r\n data = json.loads(self._exec(cmd, True, False))\r\n File \"/usr/lib/python2.7/json/__init__.py\", line 338, in loads\r\n return _default_decoder.decode(s)\r\n File \"/usr/lib/python2.7/json/decoder.py\", line 366, in decode\r\n obj, end = self.raw_decode(s, idx=_w(s, 0).end())\r\n File \"/usr/lib/python2.7/json/decoder.py\", line 384, in raw_decode\r\n raise ValueError(\"No JSON object could be decoded\")\r\nValueError: No JSON object could be decoded\r\n", "msg": "MODULE FAILURE", "parsed": false}
```
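The `ValueError: No JSON object could be decoded` in the traceback above comes from the npm module feeding `npm list` output straight to `json.loads`; an nvm-managed npm can print banner text (e.g. `Now using node v5.x`) before the JSON body. A minimal sketch of a more tolerant parse (an assumption about the failure mode, not the module's actual code; `parse_npm_list` is a hypothetical helper):

```python
import json

def parse_npm_list(raw_output):
    """Decode `npm list --json` output, tolerating non-JSON noise
    printed before the JSON body (as some nvm-managed npms do)."""
    # Skip to the first '{'; the decoder never sees a JSON object at
    # position 0 in the failing case, which matches the traceback.
    start = raw_output.find("{")
    if start == -1:
        return None
    try:
        return json.loads(raw_output[start:])
    except ValueError:  # json.JSONDecodeError subclasses ValueError
        return None

# Example: npm output preceded by an nvm banner line.
noisy = 'Now using node v5.0.0\n{"dependencies": {"express": {"version": "4.13.0"}}}'
deps = parse_npm_list(noisy)
```

Skipping to the first `{` is a blunt heuristic, but it illustrates why piping the raw stdout into `json.loads` fails here while running `npm install` by hand succeeds.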
|
main
|
using nvm npm module throws valueerror no json object could be decoded the second time it s run issue type bug report ansible version ansible config file users oby cermati deployment ansible cfg configured module search path usr share ansible configuration nothing changed i use the default configuration os environment running ansible on mac darwin kernel version this for deploying on ubuntu summary i used nvm node version manager to install different version of npm on my server so after installing node version using nvm i tried doing npm install using ansible s npm module the first time my playbook is run everything went well but the second time i run the playbook the npm module throw error this behavior is pretty consistent task fatal failed changed false failed true module stderr module stdout traceback most recent call last r n file home apprunner ansible tmp ansible tmp npm line in r n main r n file home apprunner ansible tmp ansible tmp npm line in main r n installed missing npm list r n file home apprunner ansible tmp ansible tmp npm line in list r n data json loads self exec cmd true false r n file usr lib json init py line in loads r n return default decoder decode s r n file usr lib json decoder py line in decode r n obj end self raw decode s idx w s end r n file usr lib json decoder py line in raw decode r n raise valueerror no json object could be decoded r nvalueerror no json object could be decoded r n msg module failure parsed false here s the ansible command that caused the problem name do npm install from package json for npm version npm executable home apprunner nvm bin npm path workspace dir remote user apprunner note i can run npm install multiple times if i ssh ed directly to the server for now my workaround is using command module and issuing command npm install directly but it d be nice if npm module is also working here s the ansible command that i use for the workaround name do npm install from package json for npm version command chdir 
workspace dir bin bash c nvm bin dir npm install remote user apprunner steps to reproduce create a playbook that install nvm install npm version run npm install run the playbook twice the second time it will throw the error see example playbook hosts server remote user root vars workspace dir home server name workspace nvm dir home server name nvm nvm bin dir nvm dir v node version bin vars files vars secure vars yml roles setup user and install gcc make etc role common tasks name install nvm shell curl o bash remote user apprunner name install node shell export nvm dir home nvm nvm dir nvm sh nvm install v node version nvm use v node version remote user apprunner name create a workspace directory in the user home directory file path workspace dir state directory remote user apprunner name do a git clone to the workspace from master branch git accept hostkey yes repo dest workspace dir remote user apprunner name do npm install from package json for npm version npm executable home apprunner nvm bin npm path workspace dir remote user apprunner expected results task changed actual results task task path users oby cermati deployment test yml establish ssh connection for user apprunner ssh exec ssh c vvv o controlmaster auto o controlpersist o stricthostkeychecking no o port o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user apprunner o connecttimeout o controlpath users oby ansible cp ansible ssh h p r tt umask mkdir p echo home ansible tmp ansible tmp echo echo home ansible tmp ansible tmp put var folders jd t to home apprunner ansible tmp ansible tmp npm ssh exec sftp b c vvv o controlmaster auto o controlpersist o stricthostkeychecking no o port o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user apprunner o connecttimeout o controlpath users oby ansible cp ansible ssh h p r 
establish ssh connection for user apprunner ssh exec ssh c vvv o controlmaster auto o controlpersist o stricthostkeychecking no o port o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user apprunner o connecttimeout o controlpath users oby ansible cp ansible ssh h p r tt lang c lc all c lc messages c usr bin python home apprunner ansible tmp ansible tmp npm rm rf home apprunner ansible tmp ansible tmp dev null fatal failed changed false failed true invocation module name npm module stderr openssh osslshim dec reading configuration data users oby ssh config r users oby ssh config line applying options for r reading configuration data etc ssh config r etc ssh config line applying options for r auto mux trying existing master r fd setting o nonblock r mux client hello exchange master version r mux client forwards request forwardings local remote r mux client request session entering r mux client request alive entering r mux client request alive done pid r mux client request session session request sent r mux client request session master session id r mux client read packet read header failed broken pipe r received exit status from master r nshared connection to closed r n module stdout traceback most recent call last r n file home apprunner ansible tmp ansible tmp npm line in r n main r n file home apprunner ansible tmp ansible tmp npm line in main r n installed missing npm list r n file home apprunner ansible tmp ansible tmp npm line in list r n data json loads self exec cmd true false r n file usr lib json init py line in loads r n return default decoder decode s r n file usr lib json decoder py line in decode r n obj end self raw decode s idx w s end r n file usr lib json decoder py line in raw decode r n raise valueerror no json object could be decoded r nvalueerror no json object could be decoded r n msg module failure parsed false
| 1
|
4,181
| 20,115,586,148
|
IssuesEvent
|
2022-02-07 19:09:41
|
backdrop-ops/contrib
|
https://api.github.com/repos/backdrop-ops/contrib
|
opened
|
Contrib Group Application:
|
Maintainer application
|
Hello and welcome to the contrib application process! We're happy to have you :)
**Please indicate how you intend to help the Backdrop community by joining this group**
Option 1
Option 2
My team would like to start the process of providing initial ports of certain modules for the Backdrop community. My team would also like to contribute to the Backdrop community by submitting fixes, etc. For example, we have already contributed to the coder_upgrade module and made posts regarding initial port requests for a wealth of Drupal 7 modules.
## Based on your selection above, please provide the following information:
**(option #1) The name of your module, theme, or layout**
cas, coder_upgrade, other modules my team has planned.
## (option #1) Please note these 3 requirements for new contrib projects:
- [ ] Include a README.md file containing license and maintainer information.
You can use this example: https://raw.githubusercontent.com/backdrop-ops/contrib/master/examples/README.md
- [ ] Include a LICENSE.txt file.
You can use this example: https://raw.githubusercontent.com/backdrop-ops/contrib/master/examples/LICENSE.txt.
- [ ] If porting a Drupal 7 project, Maintain the Git history from Drupal.
**(option #1 -- optional) Post a link here to an issue in the drupal.org queue notifying the Drupal 7 maintainers that you are working on a Backdrop port of their project**
- https://www.drupal.org/project/cas/issues/2914520
- https://www.drupal.org/project/menu_token/issues/3261166
- https://www.drupal.org/project/rules_conditional/issues/3261174
- https://www.drupal.org/project/seckit/issues/3261178
- https://www.drupal.org/project/views_php/issues/3261182
**Post a link to your new Backdrop project under your own GitHub account (option #1)**
https://github.com/rbargerhuff/cas
**(option #2) If you have already contributed code to Backdrop core or contrib projects, please provide 1-3 links to pull requests or commits**
- https://github.com/backdrop-contrib/coder_upgrade/issues/75
- https://github.com/backdrop-contrib/coder_upgrade/pull/74 (smaiorana is part of my team)
- https://github.com/backdrop-contrib/coder_upgrade/pull/57
**If you have chosen option #2 or #1 above, do you agree to the [Backdrop Contributed Project Agreement](https://github.com/backdrop-ops/contrib#backdrop-contributed-project-agreement)**
YES
_Once we have a chance to review your project, we will check for the 3 requirements at the top of this issue. If those requirements are met, you will be invited to the @backdrop-contrib group. At that point you will be able to transfer the project._
OK
_Please note that we may also include additional feedback in the code review, but anything else is only intended to be helpful, and is NOT a requirement for joining the contrib group._
OK
Cheers!
|
True
|
Contrib Group Application: - Hello and welcome to the contrib application process! We're happy to have you :)
**Please indicate how you intend to help the Backdrop community by joining this group**
Option 1
Option 2
My team would like to start the process of providing initial ports of certain modules for the Backdrop community. My team would also like to contribute to the Backdrop community by submitting fixes, etc. For example, we have already contributed to the coder_upgrade module and made posts regarding initial port requests for a wealth of Drupal 7 modules.
## Based on your selection above, please provide the following information:
**(option #1) The name of your module, theme, or layout**
cas, coder_upgrade, other modules my team has planned.
## (option #1) Please note these 3 requirements for new contrib projects:
- [ ] Include a README.md file containing license and maintainer information.
You can use this example: https://raw.githubusercontent.com/backdrop-ops/contrib/master/examples/README.md
- [ ] Include a LICENSE.txt file.
You can use this example: https://raw.githubusercontent.com/backdrop-ops/contrib/master/examples/LICENSE.txt.
- [ ] If porting a Drupal 7 project, Maintain the Git history from Drupal.
**(option #1 -- optional) Post a link here to an issue in the drupal.org queue notifying the Drupal 7 maintainers that you are working on a Backdrop port of their project**
- https://www.drupal.org/project/cas/issues/2914520
- https://www.drupal.org/project/menu_token/issues/3261166
- https://www.drupal.org/project/rules_conditional/issues/3261174
- https://www.drupal.org/project/seckit/issues/3261178
- https://www.drupal.org/project/views_php/issues/3261182
**Post a link to your new Backdrop project under your own GitHub account (option #1)**
https://github.com/rbargerhuff/cas
**(option #2) If you have already contributed code to Backdrop core or contrib projects, please provide 1-3 links to pull requests or commits**
- https://github.com/backdrop-contrib/coder_upgrade/issues/75
- https://github.com/backdrop-contrib/coder_upgrade/pull/74 (smaiorana is part of my team)
- https://github.com/backdrop-contrib/coder_upgrade/pull/57
**If you have chosen option #2 or #1 above, do you agree to the [Backdrop Contributed Project Agreement](https://github.com/backdrop-ops/contrib#backdrop-contributed-project-agreement)**
YES
_Once we have a chance to review your project, we will check for the 3 requirements at the top of this issue. If those requirements are met, you will be invited to the @backdrop-contrib group. At that point you will be able to transfer the project._
OK
_Please note that we may also include additional feedback in the code review, but anything else is only intended to be helpful, and is NOT a requirement for joining the contrib group._
OK
Cheers!
|
main
|
contrib group application hello and welcome to the contrib application process we re happy to have you please indicate how you intend to help the backdrop community by joining this group option option my team would like to start the process of providing initial ports of certain modules for the backdrop community my team would also like to contribute to backdrop community by submitting fixes etc for example we have already contributed to the coder upgrade module and made posts regarding initial port requests for a wealth of drupal modules based on your selection above please provide the following information option the name of your module theme or layout cas coder upgrade other modules my team has planned option please note these requirements for new contrib projects include a readme md file containing license and maintainer information you can use this example include a license txt file you can use this example if porting a drupal project maintain the git history from drupal option optional post a link here to an issue in the drupal org queue notifying the drupal maintainers that you are working on a backdrop port of their project post a link to your new backdrop project under your own github account option option if you have already contributed code to backdrop core or contrib projects please provide links to pull requests or commits smaiorana is part of my team if you have chosen option or above do you agree to the yes once we have a chance to review your project we will check for the requirements at the top of this issue if those requirements are met you will be invited to the backdrop contrib group at that point you will be able to transfer the project ok please note that we may also include additional feedback in the code review but anything else is only intended to be helpful and is not a requirement for joining the contrib group ok cheers
| 1
|
1,023
| 4,818,339,513
|
IssuesEvent
|
2016-11-04 16:06:11
|
ansible/ansible-modules-extras
|
https://api.github.com/repos/ansible/ansible-modules-extras
|
closed
|
Ansible 2.2.0.0 DNF module cannot manage package groups
|
bug_report in progress waiting_on_maintainer
|
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
dnf
##### ANSIBLE VERSION
```console
$ ansible --version
ansible 2.2.0.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
`/etc/ansible/ansible.cfg` is untouched. It is as provided by the Arch Linux package. See: https://www.archlinux.org/packages/community/any/ansible/
No Ansible-related variables are set. Verified with:
```console
$ env | grep -i ansible
$
```
##### OS / ENVIRONMENT
My control host is an up-to-date Arch Linux system. It was updated yesterday evening, which is less than 24 hours ago.
My managed hosts are Fedora 23 and Fedora 24 VMs, "server" spin. These VMs are pretty bare-bones. Ignoring snapshots and clones, each system is built by doing the following:
1. Download the "Server" spin of Fedora 24 from [here](https://getfedora.org/en/server/download/) (and from a similar page for F23).
2. Create a VM, and install an up-to-date system from the images.
3. Set the system's hostname and install SSH keys.
That's about it. The important bit is not that I'm managing Fedora systems per se, but rather that I'm using the DNF package managers on said systems. Here's the version of DNF installed on the F23 system:
```console
# dnf --version
1.1.10
Installed: dnf-0:1.1.10-1.fc23.noarch at 2016-10-10 16:03
Built : Fedora Project at 2016-08-18 14:43
Installed: rpm-0:4.13.0-0.rc1.13.fc23.x86_64 at 2016-10-10 16:01
Built : Fedora Project at 2016-04-25 13:50
```
Here's the version of DNF installed on the F24 system:
```console
# dnf --version
1.1.10
Installed: dnf-0:1.1.10-1.fc24.noarch at 2016-10-10 19:55
Built : Fedora Project at 2016-08-18 14:39
Installed: rpm-0:4.13.0-0.rc1.27.fc24.x86_64 at 2016-10-10 19:55
Built : Fedora Project at 2016-04-25 13:45
```
##### SUMMARY
<!--- Explain the problem briefly -->
Ansible 2.2.0.0's `dnf` module fails with `AttributeError: 'libcomps.Group' object has no attribute 'strip'` when asked to install a package group (e.g. `name=@LibreOffice`). The same task succeeds under Ansible 2.1.2.0.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
To reproduce the issue, use Ansible's DNF module to install a package group. For example:
```console
$ ansible all -i fedora-23-pulp-2-11, -m dnf -a 'name=@LibreOffice state=present'
fedora-23-pulp-2-11 | FAILED! => {
"changed": false,
"failed": true,
"module_stderr": "Shared connection to fedora-23-pulp-2-11 closed.\r\n",
    "module_stdout": "Traceback (most recent call last):\r\n  File \"/tmp/ansible_fMteBp/ansible_module_dnf.py\", line 375, in <module>\r\n    main()\r\n  File \"/tmp/ansible_fMteBp/ansible_module_dnf.py\", line 369, in main\r\n    ensure(module, base, params['state'], params['name'])\r\n  File \"/tmp/ansible_fMteBp/ansible_module_dnf.py\", line 280, in ensure\r\n    for group in (g.strip() for g in groups):\r\n  File \"/tmp/ansible_fMteBp/ansible_module_dnf.py\", line 280, in <genexpr>\r\n    for group in (g.strip() for g in groups):\r\n  File \"/usr/lib/python2.7/site-packages/dnf/comps.py\", line 183, in __getattr__\r\n    return getattr(self._i, name)\r\nAttributeError: 'libcomps.Group' object has no attribute 'strip'\r\n",
"msg": "MODULE FAILURE"
}
```
Downgrading to Ansible 2.1 solves the issue:
```console
$ ls /var/cache/pacman/pkg/ | grep -i ansible
ansible-2.1.2.0-1-any.pkg.tar.xz
ansible-2.2.0.0-1-any.pkg.tar.xz
$ sudo pacman -U /var/cache/pacman/pkg/ansible-2.1.2.0-1-any.pkg.tar.xz
loading packages...
warning: downgrading package ansible (2.2.0.0-1 => 2.1.2.0-1)
resolving dependencies...
looking for conflicting packages...
Packages (1) ansible-2.1.2.0-1
Total Installed Size: 22.09 MiB
Net Upgrade Size: -9.07 MiB
:: Proceed with installation? [Y/n] y
(1/1) checking keys in keyring [########################################################################################] 100%
(1/1) checking package integrity [########################################################################################] 100%
(1/1) loading package files [########################################################################################] 100%
(1/1) checking for file conflicts [########################################################################################] 100%
(1/1) checking available disk space [########################################################################################] 100%
:: Processing package changes...
(1/1) downgrading ansible [########################################################################################] 100%
$ ansible all -i fedora-23-pulp-2-11, -m dnf -a 'name=@LibreOffice state=present'
fedora-23-pulp-2-11 | SUCCESS => {
"changed": true,
"results": [
"Installed: cairo-1.14.2-2.fc23.x86_64",
"Installed: google-crosextra-caladea-fonts-1.002-0.6.20130214.fc23.noarch",
snip!
"Installed: mesa-libgbm-11.1.0-4.20151218.fc23.x86_64",
"Installed: mesa-libglapi-11.1.0-4.20151218.fc23.x86_64"
]
}
```
This issue is not specific to the LibreOffice package group. The error also occurs when using Ansible to install nightly builds of [Pulp](http://pulpproject.org/). See a full log [here](http://pastebin.com/FhtXkBaZ). Importantly, `ansible all -i fedora-23-pulp-2-11, -m dnf -a 'name=@pulp-server-qpid state=present'` fails when Ansible 2.2.0.0 is installed and succeeds when Ansible 2.1.2.0 is installed.
##### EXPECTED RESULTS
Ansible's DNF module should be able to install package groups.
##### ACTUAL RESULTS
Here's another demo of what happens. This time, I'm using a Fedora 24 VM. (This ensures that the issue isn't specific to Fedora 23.)
```console
$ ansible --version
ansible 2.2.0.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
$ ansible all -i fedora-24-test, -m ping
fedora-24-test | SUCCESS => {
"changed": false,
"ping": "pong"
}
$ ansible all -i fedora-24-test, -m raw -a 'dnf -y install python2-dnf'
fedora-24-test | SUCCESS | rc=0 >>
Last metadata expiration check: 5:31:59 ago on Thu Nov 3 09:54:51 2016.
Dependencies resolved.
================================================================================
Package Arch Version Repository Size
================================================================================
Installing:
pyliblzma x86_64 0.5.3-15.fc24 fedora 53 k
python-six noarch 1.10.0-2.fc24 fedora 34 k
python2-dnf noarch 1.1.10-1.fc24 updates 444 k
python2-hawkey x86_64 0.6.3-6.fc24 updates 46 k
python2-iniparse noarch 0.4-19.fc24 fedora 45 k
python2-libcomps x86_64 0.1.7-4.fc24 fedora 47 k
python2-librepo x86_64 1.7.18-2.fc24 fedora 56 k
python2-pygpgme x86_64 0.3-18.fc24 updates 90 k
rpm-python x86_64 4.13.0-0.rc1.27.fc24 fedora 102 k
Transaction Summary
================================================================================
Install 9 Packages
Total download size: 919 k
Installed size: 3.2 M
Downloading Packages:
(1/9): pyliblzma-0.5.3-15.fc24.x86_64.rpm 299 kB/s | 53 kB 00:00
(2/9): python2-iniparse-0.4-19.fc24.noarch.rpm 226 kB/s | 45 kB 00:00
(3/9): python2-libcomps-0.1.7-4.fc24.x86_64.rpm 1.0 MB/s | 47 kB 00:00
(4/9): rpm-python-4.13.0-0.rc1.27.fc24.x86_64.r 1.0 MB/s | 102 kB 00:00
(5/9): python2-dnf-1.1.10-1.fc24.noarch.rpm 1.1 MB/s | 444 kB 00:00
(6/9): python2-pygpgme-0.3-18.fc24.x86_64.rpm 1.0 MB/s | 90 kB 00:00
(7/9): python-six-1.10.0-2.fc24.noarch.rpm 567 kB/s | 34 kB 00:00
(8/9): python2-hawkey-0.6.3-6.fc24.x86_64.rpm 305 kB/s | 46 kB 00:00
(9/9): python2-librepo-1.7.18-2.fc24.x86_64.rpm 122 kB/s | 56 kB 00:00
--------------------------------------------------------------------------------
Total 427 kB/s | 919 kB 00:02
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
Installing : python2-hawkey-0.6.3-6.fc24.x86_64 1/9
Installing : python-six-1.10.0-2.fc24.noarch 2/9
Installing : python2-iniparse-0.4-19.fc24.noarch 3/9
Installing : python2-pygpgme-0.3-18.fc24.x86_64 4/9
Installing : rpm-python-4.13.0-0.rc1.27.fc24.x86_64 5/9
Installing : python2-librepo-1.7.18-2.fc24.x86_64 6/9
Installing : python2-libcomps-0.1.7-4.fc24.x86_64 7/9
Installing : pyliblzma-0.5.3-15.fc24.x86_64 8/9
Installing : python2-dnf-1.1.10-1.fc24.noarch 9/9
Verifying : python2-dnf-1.1.10-1.fc24.noarch 1/9
Verifying : pyliblzma-0.5.3-15.fc24.x86_64 2/9
Verifying : python2-iniparse-0.4-19.fc24.noarch 3/9
Verifying : python2-libcomps-0.1.7-4.fc24.x86_64 4/9
Verifying : python2-librepo-1.7.18-2.fc24.x86_64 5/9
Verifying : rpm-python-4.13.0-0.rc1.27.fc24.x86_64 6/9
Verifying : python2-pygpgme-0.3-18.fc24.x86_64 7/9
Verifying : python-six-1.10.0-2.fc24.noarch 8/9
Verifying : python2-hawkey-0.6.3-6.fc24.x86_64 9/9
Installed:
pyliblzma.x86_64 0.5.3-15.fc24 python-six.noarch 1.10.0-2.fc24
python2-dnf.noarch 1.1.10-1.fc24 python2-hawkey.x86_64 0.6.3-6.fc24
python2-iniparse.noarch 0.4-19.fc24 python2-libcomps.x86_64 0.1.7-4.fc24
python2-librepo.x86_64 1.7.18-2.fc24 python2-pygpgme.x86_64 0.3-18.fc24
rpm-python.x86_64 4.13.0-0.rc1.27.fc24
Complete!
Shared connection to fedora-24-test closed.
$ ansible all -vvvv -i fedora-24-test, -m dnf -a 'name=@LibreOffice state=present'
Using /etc/ansible/ansible.cfg as config file
Loading callback plugin minimal of type stdout, v2.0 from /usr/lib/python2.7/site-packages/ansible/plugins/callback/__init__.pyc
Using module file /usr/lib/python2.7/site-packages/ansible/modules/extras/packaging/os/dnf.py
<fedora-24-test> ESTABLISH SSH CONNECTION FOR USER: None
<fedora-24-test> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/ichimonji10/.ansible/cp/ansible-ssh-%h-%p-%r fedora-24-test '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1478201327.35-116696318694544 `" && echo ansible-tmp-1478201327.35-116696318694544="` echo $HOME/.ansible/tmp/ansible-tmp-1478201327.35-116696318694544 `" ) && sleep 0'"'"''
<fedora-24-test> PUT /tmp/tmp5wtflb TO /root/.ansible/tmp/ansible-tmp-1478201327.35-116696318694544/dnf.py
<fedora-24-test> SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/ichimonji10/.ansible/cp/ansible-ssh-%h-%p-%r '[fedora-24-test]'
<fedora-24-test> ESTABLISH SSH CONNECTION FOR USER: None
<fedora-24-test> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/ichimonji10/.ansible/cp/ansible-ssh-%h-%p-%r fedora-24-test '/bin/sh -c '"'"'chmod u+x /root/.ansible/tmp/ansible-tmp-1478201327.35-116696318694544/ /root/.ansible/tmp/ansible-tmp-1478201327.35-116696318694544/dnf.py && sleep 0'"'"''
<fedora-24-test> ESTABLISH SSH CONNECTION FOR USER: None
<fedora-24-test> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/ichimonji10/.ansible/cp/ansible-ssh-%h-%p-%r -tt fedora-24-test '/bin/sh -c '"'"'/usr/bin/python /root/.ansible/tmp/ansible-tmp-1478201327.35-116696318694544/dnf.py; rm -rf "/root/.ansible/tmp/ansible-tmp-1478201327.35-116696318694544/" > /dev/null 2>&1 && sleep 0'"'"''
fedora-24-test | FAILED! => {
"changed": false,
"failed": true,
"invocation": {
"module_name": "dnf"
},
"module_stderr": "OpenSSH_7.3p1, OpenSSL 1.0.2j 26 Sep 2016\r\ndebug1: Reading configuration data /home/ichimonji10/.ssh/config\r\ndebug1: /home/ichimonji10/.ssh/config line 21: Applying options for fedora-24-test\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 6986\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\nShared connection to fedora-24-test closed.\r\n",
"module_stdout": "Traceback (most recent call last):\r\n File \"/tmp/ansible_lGlW4D/ansible_module_dnf.py\", line 375, in <module>\r\n main()\r\n File \"/tmp/ansible_lGlW4D/ansible_module_dnf.py\", line 369, in main\r\n ensure(module, base, params['state'], params['name'])\r\n File \"/tmp/ansible_lGlW4D/ansible_module_dnf.py\", line 280, in ensure\r\n for group in (g.strip() for g in groups):\r\n File \"/tmp/ansible_lGlW4D/ansible_module_dnf.py\", line 280, in <genexpr>\r\n for group in (g.strip() for g in groups):\r\n File \"/usr/lib/python2.7/site-packages/dnf/comps.py\", line 183, in __getattr__\r\n return getattr(self._i, name)\r\nAttributeError: 'libcomps.Group' object has no attribute 'strip'\r\n",
"msg": "MODULE FAILURE"
}
```
Here's what the `module_stderr` line says:
```
OpenSSH_7.3p1, OpenSSL 1.0.2j 26 Sep 2016
debug1: Reading configuration data /home/ichimonji10/.ssh/config
debug1: /home/ichimonji10/.ssh/config line 21: Applying options for fedora-24-test
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: auto-mux: Trying existing master
debug2: fd 3 setting O_NONBLOCK
debug2: mux_client_hello_exchange: master version 4
debug3: mux_client_forwards: request forwardings: 0 local, 0 remote
debug3: mux_client_request_session: entering
debug3: mux_client_request_alive: entering
debug3: mux_client_request_alive: done pid = 6986
debug3: mux_client_request_session: session request sent
debug1: mux_client_request_session: master session id: 2
debug3: mux_client_read_packet: read header failed: Broken pipe
debug2: Received exit status from master 0
Shared connection to fedora-24-test closed.
```
Here's what the `module_stdout` line says:
```
Traceback (most recent call last):
File "/tmp/ansible_lGlW4D/ansible_module_dnf.py", line 375, in <module>
main()
File "/tmp/ansible_lGlW4D/ansible_module_dnf.py", line 369, in main
ensure(module, base, params['state'], params['name'])
File "/tmp/ansible_lGlW4D/ansible_module_dnf.py", line 280, in ensure
for group in (g.strip() for g in groups):
File "/tmp/ansible_lGlW4D/ansible_module_dnf.py", line 280, in <genexpr>
for group in (g.strip() for g in groups):
File "/usr/lib/python2.7/site-packages/dnf/comps.py", line 183, in __getattr__
return getattr(self._i, name)
AttributeError: 'libcomps.Group' object has no attribute 'strip'
```
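The traceback pins the failure to `ensure()` at `dnf.py` line 280, where `g.strip()` is applied to every entry in `groups`; when an entry is already a `libcomps.Group` object rather than a plain string, there is no `.strip()` to call. A minimal sketch of the failure mode and the kind of type guard that would avoid it (the `Group` stand-in and `normalize_groups` helper here are hypothetical, not the upstream patch):

```python
class Group:
    """Stand-in for libcomps.Group: carries a name but has no .strip()."""
    def __init__(self, name):
        self.name = name

def normalize_groups(groups):
    # The failing expression was `(g.strip() for g in groups)`; guarding
    # on the type handles both raw strings and resolved group objects.
    normalized = []
    for g in groups:
        if isinstance(g, str):
            normalized.append(g.strip())
        else:
            normalized.append(g.name)
    return normalized

names = normalize_groups([" LibreOffice ", Group("pulp-server-qpid")])
```

An unguarded `[g.strip() for g in groups]` over the same mixed list raises exactly the `AttributeError` shown in the traceback.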
|
True
|
Ansible 2.2.0.0 DNF module cannot manage package groups - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
dnf
##### ANSIBLE VERSION
```console
$ ansible --version
ansible 2.2.0.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
`/etc/ansible/ansible.cfg` is untouched. It is as provided by the Arch Linux package. See: https://www.archlinux.org/packages/community/any/ansible/
No Ansible-related variables are set. Verified with:
```console
$ env | grep -i ansible
$
```
##### OS / ENVIRONMENT
My control host is an up-to-date Arch Linux system. It was updated yesterday evening, which is less than 24 hours ago.
My managed hosts are Fedora 23 and Fedora 24 VMs, "server" spin. These VMs are pretty bare-bones. Ignoring snapshots and clones, each system is built by doing the following:
1. Download the "Server" spin of Fedora 24 from [here](https://getfedora.org/en/server/download/) (and from a similar page for F23).
2. Create a VM, and install an up-to-date system from the images.
3. Set the system's hostname and install SSH keys.
That's about it. The important bit is not that I'm managing Fedora systems per se, but rather that I'm using the DNF package managers on said systems. Here's the version of DNF installed on the F23 system:
```console
# dnf --version
1.1.10
Installed: dnf-0:1.1.10-1.fc23.noarch at 2016-10-10 16:03
Built : Fedora Project at 2016-08-18 14:43
Installed: rpm-0:4.13.0-0.rc1.13.fc23.x86_64 at 2016-10-10 16:01
Built : Fedora Project at 2016-04-25 13:50
```
Here's the version of DNF installed on the F24 system:
```console
# dnf --version
1.1.10
Installed: dnf-0:1.1.10-1.fc24.noarch at 2016-10-10 19:55
Built : Fedora Project at 2016-08-18 14:39
Installed: rpm-0:4.13.0-0.rc1.27.fc24.x86_64 at 2016-10-10 19:55
Built : Fedora Project at 2016-04-25 13:45
```
##### SUMMARY
<!--- Explain the problem briefly -->
Ansible 2.2.0.0's `dnf` module fails with `AttributeError: 'libcomps.Group' object has no attribute 'strip'` when asked to install a package group (e.g. `name=@LibreOffice`). The same task succeeds under Ansible 2.1.2.0.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
To reproduce the issue, use Ansible's DNF module to install a package group. For example:
```console
$ ansible all -i fedora-23-pulp-2-11, -m dnf -a 'name=@LibreOffice state=present'
fedora-23-pulp-2-11 | FAILED! => {
"changed": false,
"failed": true,
"module_stderr": "Shared connection to fedora-23-pulp-2-11 closed.\r\n",
    "module_stdout": "Traceback (most recent call last):\r\n  File \"/tmp/ansible_fMteBp/ansible_module_dnf.py\", line 375, in <module>\r\n    main()\r\n  File \"/tmp/ansible_fMteBp/ansible_module_dnf.py\", line 369, in main\r\n    ensure(module, base, params['state'], params['name'])\r\n  File \"/tmp/ansible_fMteBp/ansible_module_dnf.py\", line 280, in ensure\r\n    for group in (g.strip() for g in groups):\r\n  File \"/tmp/ansible_fMteBp/ansible_module_dnf.py\", line 280, in <genexpr>\r\n    for group in (g.strip() for g in groups):\r\n  File \"/usr/lib/python2.7/site-packages/dnf/comps.py\", line 183, in __getattr__\r\n    return getattr(self._i, name)\r\nAttributeError: 'libcomps.Group' object has no attribute 'strip'\r\n",
"msg": "MODULE FAILURE"
}
```
Downgrading to Ansible 2.1 solves the issue:
```console
$ ls /var/cache/pacman/pkg/ | grep -i ansible
ansible-2.1.2.0-1-any.pkg.tar.xz
ansible-2.2.0.0-1-any.pkg.tar.xz
$ sudo pacman -U /var/cache/pacman/pkg/ansible-2.1.2.0-1-any.pkg.tar.xz
loading packages...
warning: downgrading package ansible (2.2.0.0-1 => 2.1.2.0-1)
resolving dependencies...
looking for conflicting packages...
Packages (1) ansible-2.1.2.0-1
Total Installed Size: 22.09 MiB
Net Upgrade Size: -9.07 MiB
:: Proceed with installation? [Y/n] y
(1/1) checking keys in keyring [########################################################################################] 100%
(1/1) checking package integrity [########################################################################################] 100%
(1/1) loading package files [########################################################################################] 100%
(1/1) checking for file conflicts [########################################################################################] 100%
(1/1) checking available disk space [########################################################################################] 100%
:: Processing package changes...
(1/1) downgrading ansible [########################################################################################] 100%
$ ansible all -i fedora-23-pulp-2-11, -m dnf -a 'name=@LibreOffice state=present'
fedora-23-pulp-2-11 | SUCCESS => {
"changed": true,
"results": [
"Installed: cairo-1.14.2-2.fc23.x86_64",
"Installed: google-crosextra-caladea-fonts-1.002-0.6.20130214.fc23.noarch",
snip!
"Installed: mesa-libgbm-11.1.0-4.20151218.fc23.x86_64",
"Installed: mesa-libglapi-11.1.0-4.20151218.fc23.x86_64"
]
}
```
This issue is not specific to the LibreOffice package group. The error also occurs when using Ansible to install nightly builds of [Pulp](http://pulpproject.org/). See a full log [here](http://pastebin.com/FhtXkBaZ). Importantly, `ansible all -i fedora-23-pulp-2-11, -m dnf -a 'name=@pulp-server-qpid state=present'` fails when Ansible 2.2.0.0 is installed and succeeds when Ansible 2.1.2.0 is installed.
##### EXPECTED RESULTS
Ansible's DNF module should be able to install package groups.
##### ACTUAL RESULTS
Here's another demo of what happens. This time, I'm using a Fedora 24 VM. (This ensures that the issue isn't specific to Fedora 23.)
```console
$ ansible --version
ansible 2.2.0.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
$ ansible all -i fedora-24-test, -m ping
fedora-24-test | SUCCESS => {
"changed": false,
"ping": "pong"
}
$ ansible all -i fedora-24-test, -m raw -a 'dnf -y install python2-dnf'
fedora-24-test | SUCCESS | rc=0 >>
Last metadata expiration check: 5:31:59 ago on Thu Nov 3 09:54:51 2016.
Dependencies resolved.
================================================================================
Package Arch Version Repository Size
================================================================================
Installing:
pyliblzma x86_64 0.5.3-15.fc24 fedora 53 k
python-six noarch 1.10.0-2.fc24 fedora 34 k
python2-dnf noarch 1.1.10-1.fc24 updates 444 k
python2-hawkey x86_64 0.6.3-6.fc24 updates 46 k
python2-iniparse noarch 0.4-19.fc24 fedora 45 k
python2-libcomps x86_64 0.1.7-4.fc24 fedora 47 k
python2-librepo x86_64 1.7.18-2.fc24 fedora 56 k
python2-pygpgme x86_64 0.3-18.fc24 updates 90 k
rpm-python x86_64 4.13.0-0.rc1.27.fc24 fedora 102 k
Transaction Summary
================================================================================
Install 9 Packages
Total download size: 919 k
Installed size: 3.2 M
Downloading Packages:
(1/9): pyliblzma-0.5.3-15.fc24.x86_64.rpm 299 kB/s | 53 kB 00:00
(2/9): python2-iniparse-0.4-19.fc24.noarch.rpm 226 kB/s | 45 kB 00:00
(3/9): python2-libcomps-0.1.7-4.fc24.x86_64.rpm 1.0 MB/s | 47 kB 00:00
(4/9): rpm-python-4.13.0-0.rc1.27.fc24.x86_64.r 1.0 MB/s | 102 kB 00:00
(5/9): python2-dnf-1.1.10-1.fc24.noarch.rpm 1.1 MB/s | 444 kB 00:00
(6/9): python2-pygpgme-0.3-18.fc24.x86_64.rpm 1.0 MB/s | 90 kB 00:00
(7/9): python-six-1.10.0-2.fc24.noarch.rpm 567 kB/s | 34 kB 00:00
(8/9): python2-hawkey-0.6.3-6.fc24.x86_64.rpm 305 kB/s | 46 kB 00:00
(9/9): python2-librepo-1.7.18-2.fc24.x86_64.rpm 122 kB/s | 56 kB 00:00
--------------------------------------------------------------------------------
Total 427 kB/s | 919 kB 00:02
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
Installing : python2-hawkey-0.6.3-6.fc24.x86_64 1/9
Installing : python-six-1.10.0-2.fc24.noarch 2/9
Installing : python2-iniparse-0.4-19.fc24.noarch 3/9
Installing : python2-pygpgme-0.3-18.fc24.x86_64 4/9
Installing : rpm-python-4.13.0-0.rc1.27.fc24.x86_64 5/9
Installing : python2-librepo-1.7.18-2.fc24.x86_64 6/9
Installing : python2-libcomps-0.1.7-4.fc24.x86_64 7/9
Installing : pyliblzma-0.5.3-15.fc24.x86_64 8/9
Installing : python2-dnf-1.1.10-1.fc24.noarch 9/9
Verifying : python2-dnf-1.1.10-1.fc24.noarch 1/9
Verifying : pyliblzma-0.5.3-15.fc24.x86_64 2/9
Verifying : python2-iniparse-0.4-19.fc24.noarch 3/9
Verifying : python2-libcomps-0.1.7-4.fc24.x86_64 4/9
Verifying : python2-librepo-1.7.18-2.fc24.x86_64 5/9
Verifying : rpm-python-4.13.0-0.rc1.27.fc24.x86_64 6/9
Verifying : python2-pygpgme-0.3-18.fc24.x86_64 7/9
Verifying : python-six-1.10.0-2.fc24.noarch 8/9
Verifying : python2-hawkey-0.6.3-6.fc24.x86_64 9/9
Installed:
pyliblzma.x86_64 0.5.3-15.fc24 python-six.noarch 1.10.0-2.fc24
python2-dnf.noarch 1.1.10-1.fc24 python2-hawkey.x86_64 0.6.3-6.fc24
python2-iniparse.noarch 0.4-19.fc24 python2-libcomps.x86_64 0.1.7-4.fc24
python2-librepo.x86_64 1.7.18-2.fc24 python2-pygpgme.x86_64 0.3-18.fc24
rpm-python.x86_64 4.13.0-0.rc1.27.fc24
Complete!
Shared connection to fedora-24-test closed.
$ ansible all -vvvv -i fedora-24-test, -m dnf -a 'name=@LibreOffice state=present'
Using /etc/ansible/ansible.cfg as config file
Loading callback plugin minimal of type stdout, v2.0 from /usr/lib/python2.7/site-packages/ansible/plugins/callback/__init__.pyc
Using module file /usr/lib/python2.7/site-packages/ansible/modules/extras/packaging/os/dnf.py
<fedora-24-test> ESTABLISH SSH CONNECTION FOR USER: None
<fedora-24-test> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/ichimonji10/.ansible/cp/ansible-ssh-%h-%p-%r fedora-24-test '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1478201327.35-116696318694544 `" && echo ansible-tmp-1478201327.35-116696318694544="` echo $HOME/.ansible/tmp/ansible-tmp-1478201327.35-116696318694544 `" ) && sleep 0'"'"''
<fedora-24-test> PUT /tmp/tmp5wtflb TO /root/.ansible/tmp/ansible-tmp-1478201327.35-116696318694544/dnf.py
<fedora-24-test> SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/ichimonji10/.ansible/cp/ansible-ssh-%h-%p-%r '[fedora-24-test]'
<fedora-24-test> ESTABLISH SSH CONNECTION FOR USER: None
<fedora-24-test> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/ichimonji10/.ansible/cp/ansible-ssh-%h-%p-%r fedora-24-test '/bin/sh -c '"'"'chmod u+x /root/.ansible/tmp/ansible-tmp-1478201327.35-116696318694544/ /root/.ansible/tmp/ansible-tmp-1478201327.35-116696318694544/dnf.py && sleep 0'"'"''
<fedora-24-test> ESTABLISH SSH CONNECTION FOR USER: None
<fedora-24-test> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/ichimonji10/.ansible/cp/ansible-ssh-%h-%p-%r -tt fedora-24-test '/bin/sh -c '"'"'/usr/bin/python /root/.ansible/tmp/ansible-tmp-1478201327.35-116696318694544/dnf.py; rm -rf "/root/.ansible/tmp/ansible-tmp-1478201327.35-116696318694544/" > /dev/null 2>&1 && sleep 0'"'"''
fedora-24-test | FAILED! => {
"changed": false,
"failed": true,
"invocation": {
"module_name": "dnf"
},
"module_stderr": "OpenSSH_7.3p1, OpenSSL 1.0.2j 26 Sep 2016\r\ndebug1: Reading configuration data /home/ichimonji10/.ssh/config\r\ndebug1: /home/ichimonji10/.ssh/config line 21: Applying options for fedora-24-test\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 6986\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\nShared connection to fedora-24-test closed.\r\n",
"module_stdout": "Traceback (most recent call last):\r\n File \"/tmp/ansible_lGlW4D/ansible_module_dnf.py\", line 375, in <module>\r\n main()\r\n File \"/tmp/ansible_lGlW4D/ansible_module_dnf.py\", line 369, in main\r\n ensure(module, base, params['state'], params['name'])\r\n File \"/tmp/ansible_lGlW4D/ansible_module_dnf.py\", line 280, in ensure\r\n for group in (g.strip() for g in groups):\r\n File \"/tmp/ansible_lGlW4D/ansible_module_dnf.py\", line 280, in <genexpr>\r\n for group in (g.strip() for g in groups):\r\n File \"/usr/lib/python2.7/site-packages/dnf/comps.py\", line 183, in __getattr__\r\n return getattr(self._i, name)\r\nAttributeError: 'libcomps.Group' object has no attribute 'strip'\r\n",
"msg": "MODULE FAILURE"
}
```
Here's what the `module_stderr` line says:
```
OpenSSH_7.3p1, OpenSSL 1.0.2j 26 Sep 2016
debug1: Reading configuration data /home/ichimonji10/.ssh/config
debug1: /home/ichimonji10/.ssh/config line 21: Applying options for fedora-24-test
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: auto-mux: Trying existing master
debug2: fd 3 setting O_NONBLOCK
debug2: mux_client_hello_exchange: master version 4
debug3: mux_client_forwards: request forwardings: 0 local, 0 remote
debug3: mux_client_request_session: entering
debug3: mux_client_request_alive: entering
debug3: mux_client_request_alive: done pid = 6986
debug3: mux_client_request_session: session request sent
debug1: mux_client_request_session: master session id: 2
debug3: mux_client_read_packet: read header failed: Broken pipe
debug2: Received exit status from master 0
Shared connection to fedora-24-test closed.
```
Here's what the `module_stdout` line says:
```
Traceback (most recent call last):
File "/tmp/ansible_lGlW4D/ansible_module_dnf.py", line 375, in <module>
main()
File "/tmp/ansible_lGlW4D/ansible_module_dnf.py", line 369, in main
ensure(module, base, params['state'], params['name'])
File "/tmp/ansible_lGlW4D/ansible_module_dnf.py", line 280, in ensure
for group in (g.strip() for g in groups):
File "/tmp/ansible_lGlW4D/ansible_module_dnf.py", line 280, in <genexpr>
for group in (g.strip() for g in groups):
File "/usr/lib/python2.7/site-packages/dnf/comps.py", line 183, in __getattr__
return getattr(self._i, name)
AttributeError: 'libcomps.Group' object has no attribute 'strip'
```
|
main
|
ansible dnf module cannot manage package groups issue type bug report component name dnf ansible version console ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables etc ansible cfg is untouched it is as provided by the arch linux package see no ansible related variables are set verified with console env grep i ansible os environment my control host is an up to date arch linux system it was updated yesterday evening which is less than hours ago my managed hosts are fedora and fedora vms server spin these vms are pretty bare bones ignoring snapshots and clones each system is built by doing the following download the server spin of fedora from and from a similar page for create a vm and install an up to date system from the images set the system s hostname and install ssh keys that s about it the important bit is not that i m managing fedora systems per se but rather that i m using the dnf package managers on said systems here s the version of dnf installed on the system console dnf version installed dnf noarch at built fedora project at installed rpm at built fedora project at here s the version of dnf installed on the system console dnf version installed dnf noarch at built fedora project at installed rpm at built fedora project at summary steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used to reproduce the issue use ansible s dnf module to install a package group for example console ansible all i fedora pulp m dnf a name libreoffice state present fedora pulp failed changed false failed true module stderr shared connection to fedora pulp closed r n module stdout traceback most recent call last r n file tmp ansible fmtebp ansible module dnf py line in r n main r n file tmp ansible fmtebp ansible module dnf py line in 
main r n ensure module base params params r n file tmp ansible fmtebp ansible module dnf py line in ensure r n for group in g strip for g in groups r n file tmp ansible fmtebp ansible module dnf py line in r n for group in g strip for g in groups r n file usr lib site packages dnf comps py line in getattr r n return getattr self i name r nattributeerror libcomps group object has no att ribute strip r n msg module failure downgrading to ansible solves the issue console ls var cache pacman pkg grep i ansible ansible any pkg tar xz ansible any pkg tar xz sudo pacman u var cache pacman pkg ansible any pkg tar xz loading packages warning downgrading package ansible resolving dependencies looking for conflicting packages packages ansible total installed size mib net upgrade size mib proceed with installation y checking keys in keyring checking package integrity loading package files checking for file conflicts checking available disk space processing package changes downgrading ansible ansible all i fedora pulp m dnf a name libreoffice state present fedora pulp success changed true results installed cairo installed google crosextra caladea fonts noarch snip installed mesa libgbm installed mesa libglapi this issue is not specific to the libreoffice package group the error also occurs when using ansible to install nightly builds of see a full log importantly ansible all i fedora pulp m dnf a name pulp server qpid state present fails when ansible is installed and succeeds when ansible is installed expected results ansible s dnf module should be able to install package groups actual results here s another demo of what happens this time i m using a fedora vm this ensures that the issue isn t specific to fedora console ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides ansible all i fedora test m ping fedora test success changed false ping pong ansible all i fedora test m raw a dnf y install dnf fedora test success 
rc last metadata expiration check ago on thu nov dependencies resolved package arch version repository size installing pyliblzma fedora k python six noarch fedora k dnf noarch updates k hawkey updates k iniparse noarch fedora k libcomps fedora k librepo fedora k pygpgme updates k rpm python fedora k transaction summary install packages total download size k installed size m downloading packages pyliblzma rpm kb s kb iniparse noarch rpm kb s kb libcomps rpm mb s kb rpm python r mb s kb dnf noarch rpm mb s kb pygpgme rpm mb s kb python six noarch rpm kb s kb hawkey rpm kb s kb librepo rpm kb s kb total kb s kb running transaction check transaction check succeeded running transaction test transaction test succeeded running transaction installing hawkey installing python six noarch installing iniparse noarch installing pygpgme installing rpm python installing librepo installing libcomps installing pyliblzma installing dnf noarch verifying dnf noarch verifying pyliblzma verifying iniparse noarch verifying libcomps verifying librepo verifying rpm python verifying pygpgme verifying python six noarch verifying hawkey installed pyliblzma python six noarch dnf noarch hawkey iniparse noarch libcomps librepo pygpgme rpm python complete shared connection to fedora test closed ansible all vvvv i fedora test m dnf a name libreoffice state present using etc ansible ansible cfg as config file loading callback plugin minimal of type stdout from usr lib site packages ansible plugins callback init pyc using module file usr lib site packages ansible modules extras packaging os dnf py establish ssh connection for user none ssh exec ssh vvv c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o connecttimeout o controlpath home ansible cp ansible ssh h p r fedora test bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp 
ansible tmp sleep put tmp to root ansible tmp ansible tmp dnf py ssh exec sftp b vvv c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o connecttimeout o controlpath home ansible cp ansible ssh h p r establish ssh connection for user none ssh exec ssh vvv c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o connecttimeout o controlpath home ansible cp ansible ssh h p r fedora test bin sh c chmod u x root ansible tmp ansible tmp root ansible tmp ansible tmp dnf py sleep establish ssh connection for user none ssh exec ssh vvv c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o connecttimeout o controlpath home ansible cp ansible ssh h p r tt fedora test bin sh c usr bin python root ansible tmp ansible tmp dnf py rm rf root ansible tmp ansible tmp dev null sleep fedora test failed changed false failed true invocation module name dnf module stderr openssh openssl sep r reading configuration data home ssh config r home ssh config line applying options for fedora test r reading configuration data etc ssh ssh config r auto mux trying existing master r fd setting o nonblock r mux client hello exchange master version r mux client forwards request forwardings local remote r mux client request session entering r mux client request alive entering r mux client request alive done pid r mux client request session session request sent r mux client request session master session id r mux client read packet read header failed broken pipe r received exit status from master r nshared connection to fedora test closed r n module stdout traceback most recent call last r n file tmp ansible ansible module dnf py 
line in r n main r n file tmp ansible ansible module dnf py line in main r n ensure module base params params r n file tmp ansible ansible module dnf py line in ensure r n for group in g strip for g in groups r n file tmp ansible ansible module dnf py line in r n for group in g strip for g in groups r n file usr lib site packages dnf comps py line in getattr r n return getattr self i name r nattributeerror libcomps group object has no attribute strip r n msg module failure here s what the module stderr line says openssh openssl sep reading configuration data home ssh config home ssh config line applying options for fedora test reading configuration data etc ssh ssh config auto mux trying existing master fd setting o nonblock mux client hello exchange master version mux client forwards request forwardings local remote mux client request session entering mux client request alive entering mux client request alive done pid mux client request session session request sent mux client request session master session id mux client read packet read header failed broken pipe received exit status from master shared connection to fedora test closed here s what the module stout line says traceback most recent call last file tmp ansible ansible module dnf py line in main file tmp ansible ansible module dnf py line in main ensure module base params params file tmp ansible ansible module dnf py line in ensure for group in g strip for g in groups file tmp ansible ansible module dnf py line in for group in g strip for g in groups file usr lib site packages dnf comps py line in getattr return getattr self i name attributeerror libcomps group object has no attribute strip
| 1
|
1,199
| 5,133,072,787
|
IssuesEvent
|
2017-01-11 01:40:20
|
ansible/ansible-modules-core
|
https://api.github.com/repos/ansible/ansible-modules-core
|
closed
|
s3 module is not idempotent for bucket creation (fails when bucket already exists)
|
affects_2.0 aws bug_report cloud waiting_on_maintainer
|
<!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
s3
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.0.1.0
config file =
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
##### SUMMARY
<!--- Explain the problem briefly -->
s3 bucket creation fails when bucket already exists
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
1. create bucket with s3 module
2. attempt to create same bucket with s3 module
<!--- Paste example playbooks or commands between quotes below -->
```
- name: create s3 buckets
s3:
bucket: $BUCKET_NAME
mode: create
become: false
delegate_to: localhost
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
The bucket should be created the first time. On the second run, the bucket should remain and no error should be thrown.
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
Bucket creation failed on 2nd run.
<!--- Paste verbatim command output between quotes below -->
```
20:47:39 An exception occurred during task execution. The full traceback is:
20:47:39 Traceback (most recent call last):
20:47:39 File "$PATH//ansible-tmp-1476132487.47-226397347317836/s3", line 2846, in <module>
20:47:39 main()
20:47:39 File "$PATH/ansible-tmp-1476132487.47-226397347317836/s3", line 610, in main
20:47:39 module.exit_json(msg="Bucket created successfully", changed=create_bucket(module, s3, bucket, location))
20:47:39 File "$PATH//ansible-tmp-1476132487.47-226397347317836/s3", line 244, in create_bucket
20:47:39 bucket = s3.create_bucket(bucket, location=location)
20:47:39 File "/usr/local/lib/python2.7/dist-packages/boto/s3/connection.py", line 616, in create_bucket
20:47:39 response.status, response.reason, body)
20:47:39 boto.exception.S3CreateError: S3CreateError: 409 Conflict
20:47:39 <?xml version="1.0" encoding="UTF-8"?>
20:47:39 <Error><Code>BucketAlreadyExists</Code><Message>The requested bucket name is not available. The bucket namespace is shared by all users of the system. Please select a different name and try again.</Message><BucketName>blend</BucketName><RequestId>5FB9B826861513DD</RequestId><HostId>$HOSTID</HostId></Error>
```
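The idempotent pattern the module is missing is to look the bucket up before creating it and report `changed: false` when it already exists. A hedged sketch with a fake connection (`FakeS3` is a stand-in; boto 2's `S3Connection` exposes a similar `lookup`/`create_bucket` pair, but nothing here is the module's actual code):

```python
class FakeS3:
    """Stand-in for a boto S3 connection, enough to show the pattern."""
    def __init__(self):
        self.buckets = set()

    def lookup(self, name):
        # Returns the bucket if it exists, else None (like boto 2's lookup).
        return name if name in self.buckets else None

    def create_bucket(self, name):
        if name in self.buckets:
            raise RuntimeError("409 Conflict: BucketAlreadyExists")
        self.buckets.add(name)
        return name

def ensure_bucket(conn, name):
    """Create the bucket only if missing; report whether anything changed."""
    if conn.lookup(name) is not None:
        return {"changed": False, "msg": "Bucket already exists"}
    conn.create_bucket(name)
    return {"changed": True, "msg": "Bucket created successfully"}

conn = FakeS3()
print(ensure_bucket(conn, "blend"))  # changed: True
print(ensure_bucket(conn, "blend"))  # changed: False, and no 409 on the rerun
```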
|
True
|
s3 module is not idempotent for bucket creation (fails when bucket already exists) - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
s3
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.0.1.0
config file =
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
##### SUMMARY
<!--- Explain the problem briefly -->
s3 bucket creation fails when bucket already exists
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
1. create bucket with s3 module
2. attempt to create same bucket with s3 module
<!--- Paste example playbooks or commands between quotes below -->
```
- name: create s3 buckets
s3:
bucket: $BUCKET_NAME
mode: create
become: false
delegate_to: localhost
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
The bucket should be created the first time. On the second run, the bucket should remain and no error should be thrown.
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
Bucket creation failed on 2nd run.
<!--- Paste verbatim command output between quotes below -->
```
20:47:39 An exception occurred during task execution. The full traceback is:
20:47:39 Traceback (most recent call last):
20:47:39 File "$PATH//ansible-tmp-1476132487.47-226397347317836/s3", line 2846, in <module>
20:47:39 main()
20:47:39 File "$PATH/ansible-tmp-1476132487.47-226397347317836/s3", line 610, in main
20:47:39 module.exit_json(msg="Bucket created successfully", changed=create_bucket(module, s3, bucket, location))
20:47:39 File "$PATH//ansible-tmp-1476132487.47-226397347317836/s3", line 244, in create_bucket
20:47:39 bucket = s3.create_bucket(bucket, location=location)
20:47:39 File "/usr/local/lib/python2.7/dist-packages/boto/s3/connection.py", line 616, in create_bucket
20:47:39 response.status, response.reason, body)
20:47:39 boto.exception.S3CreateError: S3CreateError: 409 Conflict
20:47:39 <?xml version="1.0" encoding="UTF-8"?>
20:47:39 <Error><Code>BucketAlreadyExists</Code><Message>The requested bucket name is not available. The bucket namespace is shared by all users of the system. Please select a different name and try again.</Message><BucketName>blend</BucketName><RequestId>5FB9B826861513DD</RequestId><HostId>$HOSTID</HostId></Error>
```
|
main
|
module is not idempotent for bucket creation fails when bucket already exists issue type bug report component name ansible version ansible config file configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific summary bucket creation fails when bucket already exists steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used create bucket with module attempt to create same bucket with module name create buckets bucket bucket name mode create become false delegate to localhost expected results bucket should create the first time the second time bucket should remain and not throw an error actual results bucket creation failed on run an exception occurred during task execution the full traceback is traceback most recent call last file path ansible tmp line in main file path ansible tmp line in main module exit json msg bucket created successfully changed create bucket module bucket location file path ansible tmp line in create bucket bucket create bucket bucket location location file usr local lib dist packages boto connection py line in create bucket response status response reason body boto exception conflict bucketalreadyexists the requested bucket name is not available the bucket namespace is shared by all users of the system please select a different name and try again blend hostid
| 1
|
60,544
| 3,130,516,127
|
IssuesEvent
|
2015-09-09 09:50:18
|
evanplaice/jquery-csv
|
https://api.github.com/repos/evanplaice/jquery-csv
|
closed
|
CSV values with embedded newlines do not work.
|
1 star bug imported Priority-Medium
|
_From [r...@acm.org](https://code.google.com/u/108598473804003844552/) on September 04, 2012 02:38:02_
Per the RFC (paragraph 2.6), this should be legal:
"aaa","b CRLF
bb","ccc" CRLF
zzz,yyy,xxx
I think the way to fix this might be to check to see if reValue consumed the entire line (check reValue.lastIndex). If not, see if it ended before a (?:^|,)" (or maybe you have to use lookbehind). If so, splice the next line onto this one, and pick up where we left off.
But you'll have to get it past the reValid check. Is that really needed? I think a CSV line is not well-formed IFF it can't be parsed as a sequence of values. I don't see that the validator regex is accomplishing anything useful?
_Original issue: http://code.google.com/p/jquery-csv/issues/detail?id=7_
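For reference, Python's built-in `csv` module already implements the RFC 4180 behaviour described here, so it makes a handy oracle for the parser fix. Feeding it the exact input from the report (with literal CRLFs):

```python
import csv
import io

# The example from the report: a quoted field spanning two lines.
data = '"aaa","b\r\nbb","ccc"\r\nzzz,yyy,xxx\r\n'

# newline='' disables newline translation, as the csv docs require.
rows = list(csv.reader(io.StringIO(data, newline='')))

# Two records; the embedded line break stays inside the middle field of row one.
for row in rows:
    print(row)
```

A validator that rejects this input is rejecting well-formed CSV, which supports the reporter's doubt about the `reValid` check.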
|
1.0
|
CSV values with embedded newlines do not work. - _From [r...@acm.org](https://code.google.com/u/108598473804003844552/) on September 04, 2012 02:38:02_
Per the RFC (paragraph 2.6), this should be legal:
"aaa","b CRLF
bb","ccc" CRLF
zzz,yyy,xxx
I think the way to fix this might be to check to see if reValue consumed the entire line (check reValue.lastIndex). If not, see if it ended before a (?:^|,)" (or maybe you have to use lookbehind). If so, splice the next line onto this one, and pick up where we left off.
But you'll have to get it past the reValid check. Is that really needed? I think a CSV line is not well-formed IFF it can't be parsed as a sequence of values. I don't see that the validator regex is accomplishing anything useful?
_Original issue: http://code.google.com/p/jquery-csv/issues/detail?id=7_
|
non_main
|
csv values with embedded newlines do not work from on september per the rfc paragraph this should be legal aaa b crlf bb ccc crlf zzz yyy xxx i think the way to fix this might be to check to see if revalue consumed the entire line check revalue lastindex if not see if it ended before a or maybe you have to use lookbehind if so splice the next line onto this one and pick up where we left off but you ll have to get it past the revalid check is that really needed i think a csv line is not well formed iff it can t be parsed as a sequence of values i don t see that the validator regex is accomplishing anything useful original issue
| 0
|
2,796
| 10,018,719,348
|
IssuesEvent
|
2019-07-16 08:29:11
|
ipfs/package-managers
|
https://api.github.com/repos/ipfs/package-managers
|
closed
|
Package signing
|
Audience: Package manager maintainers Focus: Identity/security Type: Discussion
|
Many of the newer language package managers and registries have little or no support for package signing, and the ones that do don't always enforce signing of new packages, so the percentage of signed packages in a registry is often small.
As IPFS becomes a viable mirror for package managers, some security-conscious users are going to want to be able to verify that the package content and/or its metadata really did come from the upstream registry and haven't been tampered with.
Having a mirror on IPFS potentially offers registries verifiable backups in case of data loss or security compromises.
Inspired by an [episode of The Manifest](https://manifest.fm/7) with [The Update Framework](https://theupdateframework.github.io/security.html), one approach we might be able to help with when bootstrapping mirrors (like registry.js.ipfs.io) is to sign the packages and metadata added on behalf of the registries.
I'm sure there are other methods that can be helpful in both giving the users confidence in the validity of the data and supporting registries and communities to become more aware of the security issues involved with package management.
|
True
|
Package signing - Many of the newer language package managers and registries have little or no support for package signing, and the ones that do don't always enforce signing of new packages, so the percentage of signed packages in a registry is often small.
As IPFS becomes a viable mirror for package managers, some security-conscious users are going to want to be able to verify that the package content and/or its metadata really did come from the upstream registry and haven't been tampered with.
Having a mirror on IPFS potentially offers registries verifiable backups in case of data loss or security compromises.
Inspired by an [episode of The Manifest](https://manifest.fm/7) with [The Update Framework](https://theupdateframework.github.io/security.html), one approach we might be able to help with when bootstrapping mirrors (like registry.js.ipfs.io) is to sign the packages and metadata added on behalf of the registries.
I'm sure there are other methods that can be helpful in both giving the users confidence in the validity of the data and supporting registries and communities to become more aware of the security issues involved with package management.
|
main
|
package signing many of the newer language package managers and registries have little or no support for package signing and the ones that do don t always enforce signing of new packages so the percentage of signed packages in a registry is often small as ipfs becomes a viable mirror to package managers some security conscious users are going to want to be able to verify that the package content and or it s metadata really did come from the upstream registry and haven t been tampered with having a mirror on ipfs potentially offers registries verifiable backups in case of data loss or security compromises inspired by an with one approach we might be able to help with when bootstrapping mirrors like registry js ipfs io is to sign the packages and metadata added on behalf of the registries i m sure there are other methods that can be helpful in both giving the users confidence in the validity of the data and supporting registries and communities to become more aware of the security issues involved with package management
| 1
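The integrity check described in the package-signing row above — verifying that mirrored bytes match what the upstream registry published — can be sketched with plain content hashing, which is the same mechanism IPFS content addressing relies on. This is a minimal illustration, not the TUF approach mentioned in the row; all function names and the sample payload are hypothetical.

```python
import hashlib

def content_digest(data: bytes) -> str:
    """Hex SHA-256 of the package payload (a stand-in for an IPFS CID check)."""
    return hashlib.sha256(data).hexdigest()

def verify_package(data: bytes, expected_digest: str) -> bool:
    """True only if the mirrored bytes match the digest published upstream."""
    return content_digest(data) == expected_digest

payload = b"fake-package-tarball-bytes"
published = content_digest(payload)                    # what the registry would publish
assert verify_package(payload, published)              # untampered mirror copy passes
assert not verify_package(payload + b"!", published)   # tampered copy is rejected
```

Asymmetric signing (as in The Update Framework) additionally proves *who* published the digest, which a bare hash cannot.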
|
1,677
| 6,574,117,379
|
IssuesEvent
|
2017-09-11 11:33:54
|
ansible/ansible-modules-core
|
https://api.github.com/repos/ansible/ansible-modules-core
|
closed
|
yum: different results on CentOS vs RHEL
|
affects_2.2 bug_report waiting_on_maintainer
|
<!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
yum
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.2.0.0
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
I am calling the ANSIBLE_ROLES_PATH environment variable during execution.
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
Running from: **Fedora release 23 (Twenty Three)**
Managing:
1. **CentOS Linux release 7.2.1511 (Core)**
2. **Red Hat Enterprise Linux Server release 7.3 (Maipo)**
##### SUMMARY
<!--- Explain the problem briefly -->
A playbook, which installs some yum packages, succeeds on **CentOS Linux release 7.2.1511 (Core)** but fails on **Red Hat Enterprise Linux Server release 7.3 (Maipo)**.
I have confirmed that if I run `yum install <packages>` manually on both the CentOS and RHEL machines, the installation works on both machines as expected.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
1. Create the playbook as shown below
2. Run the playbook on a CentOS & RHEL machine
3. Installation will run flawlessly on the CentOS machine
4. Installation will fail on the RHEL machine due to `No package matching <package_name>`
If I remove the following packages from the with_items list
1. libsemanage-python
2. policycoreutils-python
3. setroubleshoot
4. tmux
And run the playbook again on CentOS & RHEL, it will run flawlessly on both of the machines.
Another scenario that works around the problem is splitting the yum task into four separate tasks, one for each of the four packages listed above. Installing them individually works on both CentOS & RHEL.
<!--- Paste example playbooks or commands between quotes below -->
```
---
- hosts: redhat
  tasks:
    - name: Install base applications
      yum: name={{ item }} state=present
      with_items:
        - vim-enhanced
        - gcc
        - wget
        - curl
        - chrony
        - bzip2
        - libsemanage-python
        - policycoreutils-python
        - setroubleshoot
        - firewalld
        - tmux
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
```
changed: [CentOS7] => (item=[u'epel-release', u'vim-enhanced', u'gcc', u'wget', u'curl', u'chrony', u'bzip2', u'libsemanage-python', u'policycoreutils-python', u'setroubleshoot', u'firewalld', u'tmux'])
changed: [RHEL7] => (item=[u'epel-release', u'vim-enhanced', u'gcc', u'wget', u'curl', u'chrony', u'bzip2', u'libsemanage-python', u'policycoreutils-python', u'setroubleshoot', u'firewalld', u'tmux'])
```
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
failed: [RHEL7] (item=[u'vim-enhanced', u'gcc', u'wget', u'curl', u'chrony', u'bzip2', u'libsemanage-python', u'policycoreutils-python', u'setroubleshoot', u'firewalld', u'tmux']) =>
{
"changed": false,
"failed": true,
"invocation": {
"module_args": {
"conf_file": null,
"disable_gpg_check": false,
"disablerepo": null,
"enablerepo": null,
"exclude": null,
"install_repoquery": true,
"list": null,
"name": [
"vim-enhanced",
"gcc",
"wget",
"curl",
"chrony",
"bzip2",
"libsemanage-python",
"policycoreutils-python",
"setroubleshoot",
"firewalld",
"tmux"
],
"state": "present",
"update_cache": false,
"validate_certs": true
},
"module_name": "yum"
},
"item": [
"vim-enhanced",
"gcc",
"wget",
"curl",
"chrony",
"bzip2",
"libsemanage-python",
"policycoreutils-python",
"setroubleshoot",
"firewalld",
"tmux"
],
"msg": "No package matching 'libsemanage-python' found available, installed or updated",
"rc": 126,
"results": [
"vim-enhanced-7.4.160-1.el7.x86_64 providing vim-enhanced is already installed",
"gcc-4.8.5-11.el7.x86_64 providing gcc is already installed",
"wget-1.14-13.el7.x86_64 providing wget is already installed",
"curl-7.29.0-35.el7.x86_64 providing curl is already installed",
"chrony-2.1.1-3.el7.x86_64 providing chrony is already installed",
"bzip2-1.0.6-13.el7.x86_64 providing bzip2 is already installed",
"No package matching 'libsemanage-python' found available, installed or updated"
]
}
```
|
True
|
yum: different results on CentOS vs RHEL - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
yum
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.2.0.0
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
I am calling the ANSIBLE_ROLES_PATH environment variable during execution.
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
Running from: **Fedora release 23 (Twenty Three)**
Managing:
1. **CentOS Linux release 7.2.1511 (Core)**
2. **Red Hat Enterprise Linux Server release 7.3 (Maipo)**
##### SUMMARY
<!--- Explain the problem briefly -->
A playbook, which installs some yum packages, succeeds on **CentOS Linux release 7.2.1511 (Core)** but fails on **Red Hat Enterprise Linux Server release 7.3 (Maipo)**.
I have confirmed that if I run `yum install <packages>` manually on both the CentOS and RHEL machines, the installation works on both machines as expected.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
1. Create the playbook as shown below
2. Run the playbook on a CentOS & RHEL machine
3. Installation will run flawlessly on the CentOS machine
4. Installation will fail on the RHEL machine due to `No package matching <package_name>`
If I remove the following packages from the with_items list
1. libsemanage-python
2. policycoreutils-python
3. setroubleshoot
4. tmux
And run the playbook again on CentOS & RHEL, it will run flawlessly on both of the machines.
Another scenario that works around the problem is splitting the yum task into four separate tasks, one for each of the four packages listed above. Installing them individually works on both CentOS & RHEL.
<!--- Paste example playbooks or commands between quotes below -->
```
---
- hosts: redhat
  tasks:
    - name: Install base applications
      yum: name={{ item }} state=present
      with_items:
        - vim-enhanced
        - gcc
        - wget
        - curl
        - chrony
        - bzip2
        - libsemanage-python
        - policycoreutils-python
        - setroubleshoot
        - firewalld
        - tmux
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
```
changed: [CentOS7] => (item=[u'epel-release', u'vim-enhanced', u'gcc', u'wget', u'curl', u'chrony', u'bzip2', u'libsemanage-python', u'policycoreutils-python', u'setroubleshoot', u'firewalld', u'tmux'])
changed: [RHEL7] => (item=[u'epel-release', u'vim-enhanced', u'gcc', u'wget', u'curl', u'chrony', u'bzip2', u'libsemanage-python', u'policycoreutils-python', u'setroubleshoot', u'firewalld', u'tmux'])
```
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
failed: [RHEL7] (item=[u'vim-enhanced', u'gcc', u'wget', u'curl', u'chrony', u'bzip2', u'libsemanage-python', u'policycoreutils-python', u'setroubleshoot', u'firewalld', u'tmux']) =>
{
"changed": false,
"failed": true,
"invocation": {
"module_args": {
"conf_file": null,
"disable_gpg_check": false,
"disablerepo": null,
"enablerepo": null,
"exclude": null,
"install_repoquery": true,
"list": null,
"name": [
"vim-enhanced",
"gcc",
"wget",
"curl",
"chrony",
"bzip2",
"libsemanage-python",
"policycoreutils-python",
"setroubleshoot",
"firewalld",
"tmux"
],
"state": "present",
"update_cache": false,
"validate_certs": true
},
"module_name": "yum"
},
"item": [
"vim-enhanced",
"gcc",
"wget",
"curl",
"chrony",
"bzip2",
"libsemanage-python",
"policycoreutils-python",
"setroubleshoot",
"firewalld",
"tmux"
],
"msg": "No package matching 'libsemanage-python' found available, installed or updated",
"rc": 126,
"results": [
"vim-enhanced-7.4.160-1.el7.x86_64 providing vim-enhanced is already installed",
"gcc-4.8.5-11.el7.x86_64 providing gcc is already installed",
"wget-1.14-13.el7.x86_64 providing wget is already installed",
"curl-7.29.0-35.el7.x86_64 providing curl is already installed",
"chrony-2.1.1-3.el7.x86_64 providing chrony is already installed",
"bzip2-1.0.6-13.el7.x86_64 providing bzip2 is already installed",
"No package matching 'libsemanage-python' found available, installed or updated"
]
}
```
|
main
|
yum different results on centos vs rhel issue type bug report component name yum ansible version ansible configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables i am calling the ansible roles path environment variable during execution os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific running from fedora release twenty three managing centos linux release core red hat enterprise linux server release maipo summary a playbook which installs some yum packages succeeds on centos linux release core but fails on red hat enterprise linux server release maipo i have confirmed that if i run yum install manually on both the centos and rhel machines the installation works on both machines as expected steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used create the playbook as shown below run the playbook on a centos rhel machine installation will run flawlessly on the centos machine installation will fail on the rhel machine due to no package matching if i remove the following packages from the with items list libsemanage python policycoreutils python setroubleshoot tmux and run the playbook again on centos rhel it will run flawlessly on both of the machines another scenario that works around the problem is is breaking writing the yum module four times one time for each of the four packages stated above installing them individually works on both centos rhel name install base applications hosts redhat yum name item state present with items vim enhanced gcc wget curl chrony libsemanage python policycoreutils python setroubleshoot firewalld tmux expected results changed item changed item actual results failed item changed false failed true invocation module args conf file null disable gpg check false disablerepo null enablerepo null exclude null install 
repoquery true list null name vim enhanced gcc wget curl chrony libsemanage python policycoreutils python setroubleshoot firewalld tmux state present update cache false validate certs true module name yum item vim enhanced gcc wget curl chrony libsemanage python policycoreutils python setroubleshoot firewalld tmux msg no package matching libsemanage python found available installed or updated rc results vim enhanced providing vim enhanced is already installed gcc providing gcc is already installed wget providing wget is already installed curl providing curl is already installed chrony providing chrony is already installed providing is already installed no package matching libsemanage python found available installed or updated
| 1
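The failure mode in the yum row above — one unresolvable name aborting the whole `with_items` install with `No package matching '<name>'` — can be illustrated by partitioning a requested package list against the names a host's repos actually provide. This is a hypothetical helper for diagnosis, not the yum module's implementation; the sample repo set is made up.

```python
def partition_packages(requested, available):
    """Split a requested list into (installable, missing).

    If `missing` is non-empty, a single combined yum task for the whole
    list fails with "No package matching '<name>'", even for the names
    that would have installed fine on their own.
    """
    installable = [p for p in requested if p in available]
    missing = [p for p in requested if p not in available]
    return installable, missing

# Hypothetical set of names resolvable from the host's enabled repos:
rhel_repo = {"vim-enhanced", "gcc", "wget", "curl", "chrony", "bzip2", "firewalld"}
requested = ["vim-enhanced", "gcc", "libsemanage-python", "tmux"]
ok, missing = partition_packages(requested, rhel_repo)
```

Running such a check per host quickly shows whether a name resolves differently on CentOS versus RHEL (for example because a repo/channel is enabled on one and not the other).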
|
1,077
| 4,893,649,207
|
IssuesEvent
|
2016-11-19 00:12:42
|
ohmlabs/ohm
|
https://api.github.com/repos/ohmlabs/ohm
|
closed
|
make config params optional with defaults
|
maintainability
|
* `host`
* `port`
* `MONGO_DB`
* `MONGO_PORT`
* `MONGO_HOST`
* `MONGODB_INSTANCE`
* `REDIS_PORT`
* `REDIS_HOST`
* `SESSION_KEY`
* `PARSE_PATH`
* `PARSE_DASHBOARD`
* `PARSE_SERVER_URL`
|
True
|
make config params optional with defaults - * `host`
* `port`
* `MONGO_DB`
* `MONGO_PORT`
* `MONGO_HOST`
* `MONGODB_INSTANCE`
* `REDIS_PORT`
* `REDIS_HOST`
* `SESSION_KEY`
* `PARSE_PATH`
* `PARSE_DASHBOARD`
* `PARSE_SERVER_URL`
|
main
|
make config params optional with defaults host port mongo db mongo port mongo host mongodb instance redis port redis host session key parse path parse dashboard parse server url
| 1
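The "optional with defaults" pattern requested in the config-params row above is usually a lookup that falls back to a default when the environment variable is unset. A minimal Python sketch follows; the parameter names come from the row, but every default value here is a hypothetical placeholder, not the project's real configuration.

```python
import os

# Hypothetical defaults for a subset of the listed parameters.
DEFAULTS = {
    "host": "127.0.0.1",
    "port": "3000",
    "MONGO_HOST": "localhost",
    "MONGO_PORT": "27017",
    "REDIS_HOST": "localhost",
    "REDIS_PORT": "6379",
}

def load_config(env=None):
    """Return the full config; each parameter falls back to its default
    when the corresponding environment variable is unset."""
    env = os.environ if env is None else env
    return {key: env.get(key, default) for key, default in DEFAULTS.items()}

cfg = load_config(env={"port": "8080"})  # only `port` is overridden here
```

Keys like `SESSION_KEY` deliberately have no safe default and should stay required.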
|
349,858
| 24,959,070,342
|
IssuesEvent
|
2022-11-01 14:12:12
|
equinor/dcd
|
https://api.github.com/repos/equinor/dcd
|
closed
|
API Structure
|
documentation
|
### Project
- GET `/projects/` Get all Projects and all data
- GET `/projects/{ProjectID}` Get one project
- POST `/projects/` create new project
- PATCH `/projects/{ProjectID}` modify project
- DELETE `/projects/{ProjectID}` delete project
### Case
- POST `/cases/` Create new case
- PATCH `/cases/{CaseID}/` Modify case
- DELETE `cases/{CaseID}/` Delete a case
### Drainage Strategy
- POST `/drainage-strategies/` Create new DrainageStrategy
- PATCH `/drainage-strategies/{DrainageStrategyID}/` Modify DrainageStrategy
- DELETE `drainage-strategies/{DrainageStrategyID}/` Delete a DrainageStrategy
### Exploration
- POST `/exploration/` Create new Exploration
- PATCH `/exploration/{ExplorationID}/` Modify Exploration
- DELETE `exploration/{ExplorationID}/` Delete a Exploration
### WellProject
- POST `/well-projects/` Create new WellProject
- PATCH `/well-projects/{WellProjectID}/` Modify WellProject
- DELETE `well-projects/{WellProjectID}/` Delete a WellProject
### SURF
- POST `/surfs/` Create new SURF
- PATCH `/surfs/{SurfID}/` Modify SURF
- DELETE `surfs/{SurfID}/` Delete a SURF
### Topside Facilities
- POST `/topside-facilities/` Create new TopsideFacilities
- PATCH `/topside-facilities/{TopsideFacilitiesID}/` Modify TopsideFacilities
- DELETE `topside-facilities/{TopsideFacilitiesID}/` Delete a TopsideFacilities
### Substructure
- POST `/substructures/` Create new Substructure
- PATCH `/substructures/{SubstructureID}/` Modify Substructure
- DELETE `substructures/{SubstructureID}/` Delete a Substructure
### Transport
- POST `/transports/` Create new Transport
- PATCH `/transports/{TransportID}/` Modify Transport
- DELETE `transports/{TransportID}/` Delete a Transport
|
1.0
|
API Structure - ### Project
- GET `/projects/` Get all Projects and all data
- GET `/projects/{ProjectID}` Get one project
- POST `/projects/` create new project
- PATCH `/projects/{ProjectID}` modify project
- DELETE `/projects/{ProjectID}` delete project
### Case
- POST `/cases/` Create new case
- PATCH `/cases/{CaseID}/` Modify case
- DELETE `cases/{CaseID}/` Delete a case
### Drainage Strategy
- POST `/drainage-strategies/` Create new DrainageStrategy
- PATCH `/drainage-strategies/{DrainageStrategyID}/` Modify DrainageStrategy
- DELETE `drainage-strategies/{DrainageStrategyID}/` Delete a DrainageStrategy
### Exploration
- POST `/exploration/` Create new Exploration
- PATCH `/exploration/{ExplorationID}/` Modify Exploration
- DELETE `exploration/{ExplorationID}/` Delete a Exploration
### WellProject
- POST `/well-projects/` Create new WellProject
- PATCH `/well-projects/{WellProjectID}/` Modify WellProject
- DELETE `well-projects/{WellProjectID}/` Delete a WellProject
### SURF
- POST `/surfs/` Create new SURF
- PATCH `/surfs/{SurfID}/` Modify SURF
- DELETE `surfs/{SurfID}/` Delete a SURF
### Topside Facilities
- POST `/topside-facilities/` Create new TopsideFacilities
- PATCH `/topside-facilities/{TopsideFacilitiesID}/` Modify TopsideFacilities
- DELETE `topside-facilities/{TopsideFacilitiesID}/` Delete a TopsideFacilities
### Substructure
- POST `/substructures/` Create new Substructure
- PATCH `/substructures/{SubstructureID}/` Modify Substructure
- DELETE `substructures/{SubstructureID}/` Delete a Substructure
### Transport
- POST `/transports/` Create new Transport
- PATCH `/transports/{TransportID}/` Modify Transport
- DELETE `transports/{TransportID}/` Delete a Transport
|
non_main
|
api structure project get projects get all projects and all data get projects projectid get one project post projects create new project patch projects projectid modify project delete projects projectid delete project case post cases create new case patch cases caseid modify case delete cases caseid delete a case drainage strategy post drainage strategies create new drainagestrategy patch drainage strategies drainagestrategyid modify drainagestrategy delete drainage strategies drainagestrategyid delete a drainagestrategy exploration post exploration create new exploration patch exploration explorationid modify exploration delete exploration explorationid delete a exploration wellproject post well projects create new wellproject patch well projects wellprojectid modify wellproject delete well projects wellprojectid delete a wellproject surf post surfs create new surf patch surfs surfid modify surf delete surfs surfid delete a surf topside facilities post topside facilities create new topsidefacilities patch topside facilities topsidefacilitiesid modify topsidefacilities delete topside facilities topsidefacilitiesid delete a topsidefacilities substructure post substructures create new substructure patch substructures substructureid modify substructure delete substructures substructureid delete a substructure transport post transports create new transport patch transports transportid modify transport delete transports transportid delete a transport
| 0
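The API layout in the row above repeats one CRUD pattern per resource (POST on the collection, PATCH and DELETE on a single item). A small Python sketch can generate that route table; the resource and ID names are taken from the row, but the generator itself is a hypothetical illustration, not the project's routing code.

```python
def crud_routes(resource, id_name):
    """Build the (method, path) pairs the row's pattern implies:
    POST on the collection, PATCH/DELETE on a single item."""
    collection = f"/{resource}/"
    item = f"/{resource}/{{{id_name}}}/"
    return [("POST", collection), ("PATCH", item), ("DELETE", item)]

routes = crud_routes("drainage-strategies", "DrainageStrategyID")
```

Only `projects` deviates from the pattern in the row, adding GET endpoints for listing and fetching.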
|
299,922
| 9,205,972,539
|
IssuesEvent
|
2019-03-08 12:17:17
|
qissue-bot/QGIS
|
https://api.github.com/repos/qissue-bot/QGIS
|
closed
|
Allow for smaller icon display
|
Category: GUI Component: Easy fix? Component: Pull Request or Patch supplied Component: Resolution Priority: Low Project: QGIS Application Status: Closed Tracker: Feature request
|
---
Author Name: **springmeyer -** (springmeyer -)
Original Redmine Issue: 1276, https://issues.qgis.org/issues/1276
Original Assignee: Charles Timko -
---
To go along with the improvements to the general QGIS layout in 0.11, adding the ability to specify/control the icon size would be a good user interface improvement.
It seems that most icons render at their native image size (usually 32x32 or 22x22). Perhaps a preferences option could be added to allow for rendering the icons at half of their normal size, which would allow for the maximization of screen usage for the rendered map while still allowing for the use of the icons.
---
- [capture_QGIS_lofi.png](https://issues.qgis.org/attachments/download/2123/capture_QGIS_lofi.png) (mav -)
|
1.0
|
Allow for smaller icon display - ---
Author Name: **springmeyer -** (springmeyer -)
Original Redmine Issue: 1276, https://issues.qgis.org/issues/1276
Original Assignee: Charles Timko -
---
To go along with the improvements to the general QGIS layout in 0.11, adding the ability to specify/control the icon size would be a good user interface improvement.
It seems that most icons render at their native image size (usually 32x32 or 22x22). Perhaps a preferences option could be added to allow for rendering the icons at half of their normal size, which would allow for the maximization of screen usage for the rendered map while still allowing for the use of the icons.
---
- [capture_QGIS_lofi.png](https://issues.qgis.org/attachments/download/2123/capture_QGIS_lofi.png) (mav -)
|
non_main
|
allow for smaller icon display author name springmeyer springmeyer original redmine issue original assignee charles timko to go along with the improvements to the general qgis layout in adding the ability to specify control the icon size would be a good user interface improvement it seems that most icons render at their native image size usually or perhaps a preferences option could be added to allow for rendering the icons at half of their normal size which would allow for the maximization of screen usage for the rendered map while still allowing for the use of the icons mav
| 0
|
5,874
| 31,884,339,380
|
IssuesEvent
|
2023-09-16 19:11:04
|
google/wasefire
|
https://api.github.com/repos/google/wasefire
|
closed
|
Update the npm installation
|
for:maintainability
|
The script currently used is deprecated. Follow the [new instructions](https://github.com/nodesource/distributions#installation-instructions) instead. Ideally, this could be made optional by detecting whether the distribution's `nodejs` package is already version 16 or later and, in that case, simply installing it (it's probably only an issue with Ubuntu LTS or Debian stable).
|
True
|
Update the npm installation - The script currently used is deprecated. Follow the [new instructions](https://github.com/nodesource/distributions#installation-instructions) instead. Ideally, this could be made optional by detecting whether the distribution's `nodejs` package is already version 16 or later and, in that case, simply installing it (it's probably only an issue with Ubuntu LTS or Debian stable).
|
main
|
update the npm installation the script currently used is deprecated follow the instead ideally this could be made optional by detecting if the nodejs package is or later and just install it in that case it s probably only an issue with ubuntu lts or debian stable
| 1
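The version check proposed in the npm-installation row above — only fall back to NodeSource when the packaged `nodejs` is older than 16 — mostly comes down to parsing the major version. A Python sketch follows; it assumes the `vMAJOR.MINOR.PATCH` format that `node --version` prints, and the function name is made up.

```python
def node_is_recent(version_string: str, minimum: int = 16) -> bool:
    """Parse a 'v18.17.1'-style string and compare the major version."""
    major = int(version_string.lstrip("v").split(".")[0])
    return major >= minimum

# On Debian stable / Ubuntu LTS the packaged node is often older:
assert node_is_recent("v18.17.1")
assert not node_is_recent("v12.22.12")
```

In practice the check would run against the output of `node --version` (or the distro package metadata) before deciding whether to add the NodeSource repo.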
|
1,168
| 5,079,483,624
|
IssuesEvent
|
2016-12-28 20:20:10
|
duckduckgo/zeroclickinfo-spice
|
https://api.github.com/repos/duckduckgo/zeroclickinfo-spice
|
closed
|
Zanran: IA Disabled -- Over triggering, Poor Relevancy, Paywalled Content
|
Bug Low-Hanging Fruit Maintainer Timeout Relevancy Triggering
|
This IA has been disabled due to severe over-triggering. It has several hundred vague trigger words, which is simply too many and too broad. Relevancy of the results is also not very good. It looks like Zanran now requires the user to sign up to view results on their site as well, which is a poor experience for our users.
This IA should be removed from the repo.
---
IA Page: http://duck.co/ia/view/zanran
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @taw
|
True
|
Zanran: IA Disabled -- Over triggering, Poor Relevancy, Paywalled Content - This IA has been disabled due to severe over-triggering. It has several hundred vague trigger words, which is simply too many and too broad. Relevancy of the results is also not very good. It looks like Zanran now requires the user to sign up to view results on their site as well, which is a poor experience for our users.
This IA should be removed from the repo.
---
IA Page: http://duck.co/ia/view/zanran
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @taw
|
main
|
zanran ia disabled over triggering poor relevancy paywalled content this ia has been disabled due to sever over triggering it has several hundred vague trigger words which is simply too much and too broad relevancy of the results is also not very good it looks like the zanran now requires the user to sign up to view result on their site as well which is poor experience for our users this ia should be removed from the repo ia page taw
| 1
|
696
| 4,257,760,256
|
IssuesEvent
|
2016-07-11 01:06:06
|
Particular/ServiceInsight
|
https://api.github.com/repos/Particular/ServiceInsight
|
closed
|
ServiceInsight keeps saying trial has ended even though a license key is installed correctly
|
Size: S Tag: Maintainer Prio Type: Bug
|
A customer raised an issue in a [support case](https://nservicebus.desk.com/agent/case/18537) related to a licensing misbehavior.
They installed the licenses in the registry as per our documentation:


however SI kept saying that the license was expired:

The workaround that fixed the customer issue was to create a License registry key, with a proper license, even under `HKEY_CURRENT_USER\SOFTWARE\ParticularSoftware`
--
Don't know if this is already included in https://github.com/Particular/ServiceInsight/issues/569
|
True
|
ServiceInsight keeps saying trial has ended even though a license key is installed correctly - A customer raised an issue in a [support case](https://nservicebus.desk.com/agent/case/18537) related to a licensing misbehavior.
They installed the licenses in the registry as per our documentation:


however SI kept saying that the license was expired:

The workaround that fixed the customer issue was to create a License registry key, with a proper license, even under `HKEY_CURRENT_USER\SOFTWARE\ParticularSoftware`
--
Don't know if this is already included in https://github.com/Particular/ServiceInsight/issues/569
|
main
|
serviceinsight keeps saying trial has ended even though a license key is installed correctly a customer riased an issue in a related to a licensing misbehavior they installed the licenses in the registry as per our documentation however si kept saying that the license was expired the workaround that fixed the customer issue was to create a license registry key with a propre license even under hkey current user software particularsoftware don t know if this is already included in
| 1
|
5,685
| 29,924,457,251
|
IssuesEvent
|
2023-06-22 03:23:38
|
spicetify/spicetify-themes
|
https://api.github.com/repos/spicetify/spicetify-themes
|
closed
|
[Fluent] sidebar icons not properly aligned
|
☠️ unmaintained
|

I've tried reinstalling Spotify, spicetify and fluent but the problem keeps coming back to me. Any idea how to fix it?
|
True
|
[Fluent] sidebar icons not properly aligned - 
I've tried reinstalling Spotify, spicetify and fluent but the problem keeps coming back to me. Any idea how to fix it?
|
main
|
sidebar icons not properly aligned i ve tried reinstalling spotify spicetify and fluent but the problem keeps coming back to me any idea how to fix it
| 1
|
3,723
| 15,390,671,740
|
IssuesEvent
|
2021-03-03 13:42:41
|
NixOS/nixpkgs
|
https://api.github.com/repos/NixOS/nixpkgs
|
closed
|
Chromium client ID policy change
|
9.needs: maintainer feedback
|
I got this email from Google:
> Hi,
>
> We are writing to let you know that Google will discontinue support for sign-ins to Google accounts from embedded browser frameworks, starting January 4, 2021.
>
> We are following up with you about a recent blog post outlining our effort to block less secure browsers and applications. Most of the traffic we have detected from your client ID in the last 30 days would not be affected by this change.
>
> Nevertheless, we recommend that you take the time to review your use of Google Account authorization flows in the following Google OAuth client IDs and confirm that you are in compliance with the policy before January 4, 2021:
>
> Chromium on NixOS
>
> Please refer to our blog post for more information about the policy enforcement and how to test your apps for potential impact. If necessary, you may request a Policy enforcement extension request for embedded browser frameworks, expiring June 30, 2021, by providing all required information.
>
> Sincerely,
>
> The Google Account Security Team
This refers to the `google_default_client_id` (404761575300.apps.googleusercontent.com) in pkgs/applications/networking/browsers/chromium/common.nix.
No idea what the impact of this policy change is on Chromium in NixOS.
@primeos @thefloweringash @bendlas
|
True
|
Chromium client ID policy change - I got this email from Google:
> Hi,
>
> We are writing to let you know that Google will discontinue support for sign-ins to Google accounts from embedded browser frameworks, starting January 4, 2021.
>
> We are following up with you about a recent blog post outlining our effort to block less secure browsers and applications. Most of the traffic we have detected from your client ID in the last 30 days would not be affected by this change.
>
> Nevertheless, we recommend that you take the time to review your use of Google Account authorization flows in the following Google OAuth client IDs and confirm that you are in compliance with the policy before January 4, 2021:
>
> Chromium on NixOS
>
> Please refer to our blog post for more information about the policy enforcement and how to test your apps for potential impact. If necessary, you may request a Policy enforcement extension request for embedded browser frameworks, expiring June 30, 2021, by providing all required information.
>
> Sincerely,
>
> The Google Account Security Team
This refers to the `google_default_client_id` (404761575300.apps.googleusercontent.com) in pkgs/applications/networking/browsers/chromium/common.nix.
No idea what the impact of this policy change is on Chromium in NixOS.
@primeos @thefloweringash @bendlas
|
main
|
chromium client id policy change i got this email from google hi we are writing to let you know that google will discontinue support for sign ins to google accounts from embedded browser frameworks starting january we are following up with you about a recent blog post outlining our effort to block less secure browsers and applications most of the traffic we have detected from your client id in the last days would not be affected by this change nevertheless we recommend that you take the time to review your use of google account authorization flows in the following google oauth client ids and confirm that you are in compliance with the policy before january chromium on nixos please refer to our blog post for more information about the policy enforcement and how to test your apps for potential impact if necessary you may request a policy enforcement extension request for embedded browser frameworks expiring june by providing all required information sincerely the google account security team this refers to the google default client id apps googleusercontent com in pkgs applications networking browsers chromium common nix no idea what the impact of this policy change is on chromium in nixos primeos thefloweringash bendlas
| 1
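Each record in this dump repeats three derived views of the same issue: the raw body, a combined title-plus-body field, and a lowercased, punctuation-stripped text column. A minimal sketch of a normalization that reproduces the pattern of that last column (the regex here is an assumption, not the dataset's actual pipeline):

```python
import re

def normalize(text: str) -> str:
    """Lowercase and keep only letter runs -- an assumed
    approximation of how this dump's final text column was
    derived from the combined title+body field."""
    text = text.lower()
    # Runs of non-letters (punctuation, digits, whitespace)
    # collapse into a single space.
    text = re.sub(r'[^a-z]+', ' ', text)
    return text.strip()

# Mirrors the chromium record's normalized column:
print(normalize("Chromium client ID policy change - I got this email from Google"))
# → chromium client id policy change i got this email from google
```

Note this sketch also explains why identifiers like `404761575300.apps.googleusercontent.com` survive in the normalized column as `apps googleusercontent com`: digits and dots become spaces rather than being stripped as whole tokens.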
|
2,507
| 8,655,459,768
|
IssuesEvent
|
2018-11-27 16:00:30
|
codestation/qcma
|
https://api.github.com/repos/codestation/qcma
|
closed
|
Font size bug in Backup Manager
|
bug unmaintained
|
I have Windows 10 x64 + fullhd resolution + 150% scaling. I have this bug on every QCMA versions.

|
True
|
Font size bug in Backup Manager - I have Windows 10 x64 + fullhd resolution + 150% scaling. I have this bug on every QCMA versions.

|
main
|
font size bug in backup manager i have windows fullhd resolution scaling i have this bug on every qcma versions
| 1
|
4,995
| 2,765,514,010
|
IssuesEvent
|
2015-04-29 20:59:00
|
sunlightlabs/the-phantom-mask
|
https://api.github.com/repos/sunlightlabs/the-phantom-mask
|
opened
|
As a user, I will be notified when my email does not successfully get sent to congress.
|
copy design
|

|
1.0
|
As a user, I will be notified when my email does not successfully get sent to congress. - 
|
non_main
|
as a user i will be notified when my email does not successfully get sent to congress
| 0
|
321,448
| 27,530,677,068
|
IssuesEvent
|
2023-03-06 21:49:37
|
pandas-dev/pandas
|
https://api.github.com/repos/pandas-dev/pandas
|
closed
|
BUG: groupby.std with no numeric columns and numeric_only=True raises
|
Bug Groupby good first issue Needs Tests Reduction Operations Nuisance Columns
|
```
df = pd.DataFrame({'a': list('xyz'), 'b': list('def')})
gb = df.groupby('a')
print(gb.std(numeric_only=True))
```
raises `TypeError: All columns were dropped in grouped_reduce`. Instead, it should return an empty frame with index consisting of the groups.
|
1.0
|
BUG: groupby.std with no numeric columns and numeric_only=True raises - ```
df = pd.DataFrame({'a': list('xyz'), 'b': list('def')})
gb = df.groupby('a')
print(gb.std(numeric_only=True))
```
raises `TypeError: All columns were dropped in grouped_reduce`. Instead, it should return an empty frame with index consisting of the groups.
|
non_main
|
bug groupby std with no numeric columns and numeric only true raises df pd dataframe a list xyz b list def gb df groupby a print gb std numeric only true raises typeerror all columns were dropped in grouped reduce instead it should return an empty frame with index consisting of the groups
| 0
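The pandas record above already carries a minimal reproduction; a runnable sketch of both the reported failure and the expected result (an empty frame indexed by the group keys), written to tolerate either an affected or a fixed pandas version:

```python
import pandas as pd

# Reproduction from the issue: a groupby over a frame with
# no numeric columns, reduced with numeric_only=True.
df = pd.DataFrame({'a': list('xyz'), 'b': list('def')})
gb = df.groupby('a')

try:
    result = gb.std(numeric_only=True)
    # Expected (fixed) behavior: an empty frame whose index
    # consists of the groups.
    assert result.empty
    assert list(result.index) == ['x', 'y', 'z']
    print("fixed: empty frame indexed by the groups")
except TypeError as exc:
    # Affected versions raise
    # "TypeError: All columns were dropped in grouped_reduce".
    print(f"bug reproduced: {exc}")
```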
|
586,002
| 17,552,507,731
|
IssuesEvent
|
2021-08-13 00:45:23
|
meerk40t/meerk40t
|
https://api.github.com/repos/meerk40t/meerk40t
|
closed
|
Feature Request: Double Click on Camera to update Bed Image
|
Type: Enhancement Context: UI/UX Status: Accepted Priority: Low Context: Camera
|
Just trying to save some keystrokes. Don't know how hard it would be to implement.
Because the Camera Window being open still seems to be the pre-cursor to "pauses" and "Camera Not Found" and the occasional "USB Failure" I try to keep the Camera window closed as much as possible. With that in mind, I would suggest an option for a quick way to update the Bed Photo.
**Double Click on the Camera Icon on the top toolbar to update the Bed Photo**.
This would NOT open the camera window, it would, Put a msg on the screen, in the background, capture the camera session, let it focus (if needed), update the screen msg, update the Bed Photo. and release the camera, and update the screen msg, then delete the screen msg.
|
1.0
|
Feature Request: Double Click on Camera to update Bed Image - Just trying to save some keystrokes. Don't know how hard it would be to implement.
Because the Camera Window being open still seems to be the pre-cursor to "pauses" and "Camera Not Found" and the occasional "USB Failure" I try to keep the Camera window closed as much as possible. With that in mind, I would suggest an option for a quick way to update the Bed Photo.
**Double Click on the Camera Icon on the top toolbar to update the Bed Photo**.
This would NOT open the camera window, it would, Put a msg on the screen, in the background, capture the camera session, let it focus (if needed), update the screen msg, update the Bed Photo. and release the camera, and update the screen msg, then delete the screen msg.
|
non_main
|
feature request double click on camera to update bed image just trying to save some keystrokes don t know how hard it would be to implement because the camera window being open still seems to be the pre cursor to pauses and camera not found and the occasional usb failure i try to keep the camera window closed as much as possible with that in mind i would suggest an option for a quick way to update the bed photo double click on the camera icon on the top toolbar to update the bed photo this would not open the camera window it would put a msg on the screen in the background capture the camera session let it focus if needed update the screen msg update the bed photo and release the camera and update the screen msg then delete the screen msg
| 0
|
82,690
| 3,618,242,346
|
IssuesEvent
|
2016-02-08 10:33:44
|
TrinityCore/TrinityCore
|
https://api.github.com/repos/TrinityCore/TrinityCore
|
closed
|
[3.3.5a] Spell id 57330 that has been learned show in TrainerList as GREEN all the way and can't be learned !
|
Branch-3.3.5a Priority-Cosmetic Sub-Spells
|
CORE: 2492eddcf2ae+ 2015-01-22 11:48:06 +0000 (3.3.5 branch)
DB:TDB 335.57 Update to 2015_01_22_00_world.sql
After DK learn 57623(Horn of Winter - Rank 2) , 57330(Horn of Winter - Rank1) show in TrainerList as GREEN all the way and can't be learned !
<bountysource-plugin>
---
Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/7964574-3-3-5a-spell-id-57330-that-has-been-learned-show-in-trainerlist-as-green-all-the-way-and-can-t-be-learned?utm_campaign=plugin&utm_content=tracker%2F1310&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F1310&utm_medium=issues&utm_source=github).
</bountysource-plugin>
|
1.0
|
[3.3.5a] Spell id 57330 that has been learned show in TrainerList as GREEN all the way and can't be learned ! - CORE: 2492eddcf2ae+ 2015-01-22 11:48:06 +0000 (3.3.5 branch)
DB:TDB 335.57 Update to 2015_01_22_00_world.sql
After DK learn 57623(Horn of Winter - Rank 2) , 57330(Horn of Winter - Rank1) show in TrainerList as GREEN all the way and can't be learned !
<bountysource-plugin>
---
Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/7964574-3-3-5a-spell-id-57330-that-has-been-learned-show-in-trainerlist-as-green-all-the-way-and-can-t-be-learned?utm_campaign=plugin&utm_content=tracker%2F1310&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F1310&utm_medium=issues&utm_source=github).
</bountysource-plugin>
|
non_main
|
spell id that has been learned show in trainerlist as green all the way and can t be learned core branch db tdb update to world sql after dk learn horn of winter rank horn of winter show in trainerlist as green all the way and can t be learned want to back this issue we accept bounties via
| 0
|
4,599
| 23,846,359,952
|
IssuesEvent
|
2022-09-06 14:18:27
|
pyOpenSci/software-review
|
https://api.github.com/repos/pyOpenSci/software-review
|
closed
|
Sevivi: A Rendering Tool to Generate Videos With Synchronized Sensor Data
|
4/review(s)-in-awaiting-changes ⌛ pending-maintainer-response
|
Submitting Author: Name (@justamad)
Package Name: Sevivi
One-Line Description of Package: A Rendering Tool to Generate Videos With Synchronized Sensor Data
Repository Link: https://github.com/HPI-CH/sevivi
Version submitted: 1.0.3
Editor: @xmnlab
Reviewer 1: @edgarriba
Reviewer 2: @pmeier
Archive: TBD
Version accepted: TBD
---
## Description
Sevivi is designed to render plots of sensor data next to a video that was taken synchronously, synchronizing the sensor data precisely to the video. It allows you to investigate why certain patterns occur in your sensor data based on the exact moment in the video.
## Scope
- Please indicate which [category or categories][PackageCategories] this package falls under:
- [ ] Data retrieval
- [ ] Data extraction
- [ ] Data munging
- [ ] Data deposition
- [ ] Reproducibility
- [ ] Geospatial
- [ ] Education
- [X] Data visualization*
\* Please fill out a pre-submission inquiry before submitting a data visualization package. For more info, see [notes on categories][NotesOnCategories] of our guidebook.
https://github.com/pyOpenSci/software-review/issues/47
- Explain how the and why the package falls under these categories (briefly, 1-2 sentences):
Sevivi renders plots of given data next to a given video.
- Who is the target audience and what are scientific applications of this package?
Target audience is researchers working with motion data, e.g., prediction of squat intensity using acceleration data. When these researchers have taken videos of their trials, they might want to see what exactly has produced a certain pattern, helping to differentiate between noise and signal. Sevivi makes this easier, by rendering synchronized videos of both the original video and the sensor data plots synchronously. Synchronization can be done manually, using an IMU on the camera (e.g. https://github.com/DavidGillsjo/VideoIMUCapture-Android/) or by using skeleton data from a tracking software (we tested with an azure kinect).
- Are there other Python packages that accomplish the same thing? If so, how does yours differ?
Our research indicates that no similar python packages or other programs exist.
- If you made a pre-submission enquiry, please paste the link to the corresponding issue, forum post, or other discussion, or `@tag` the editor you contacted:
https://github.com/pyOpenSci/software-review/issues/47
## Technical checks
For details about the pyOpenSci packaging requirements, see our [packaging guide][PackagingGuide]. Confirm each of the following by checking the box. This package:
- [X] does not violate the Terms of Service of any service it interacts with.
- [X] has an [OSI approved license][OsiApprovedLicense].
- [X] contains a README with instructions for installing the development version.
- [X] includes documentation with examples for all functions.
- [X] contains a vignette with examples of its essential functions and uses.
- [X] has a test suite.
- [X] has continuous integration, such as Travis CI, AppVeyor, CircleCI, and/or others.
## Publication options
- [X] Do you wish to automatically submit to the [Journal of Open Source Software][JournalOfOpenSourceSoftware]? If so:
<details>
<summary>JOSS Checks</summary>
- [X] The package has an **obvious research application** according to JOSS's definition in their [submission requirements][JossSubmissionRequirements]. Be aware that completing the pyOpenSci review process **does not** guarantee acceptance to JOSS. Be sure to read their submission requirements (linked above) if you are interested in submitting to JOSS.
- [X] The package is not a "minor utility" as defined by JOSS's [submission requirements][JossSubmissionRequirements]: "Minor ‘utility’ packages, including ‘thin’ API clients, are not acceptable." pyOpenSci welcomes these packages under "Data Retrieval", but JOSS has slightly different criteria.
- [ ] The package contains a `paper.md` matching [JOSS's requirements][JossPaperRequirements] with a high-level description in the package root or in `inst/`.
- [ ] The package is deposited in a long-term repository with the DOI:
*Note: Do not submit your package separately to JOSS*
We're preparing the paper and looking for suggestions regarding the long-term repository options.
</details>
## Are you OK with Reviewers Submitting Issues and/or pull requests to your Repo Directly?
This option will allow reviewers to open smaller issues that can then be linked to PR's rather than submitting a more dense text based review. It will also allow you to demonstrate addressing the issue via PR links.
- [x] Yes I am OK with reviewers submitting requested changes as issues to my repo. Reviewers will then link to the issues in their submitted review.
## Code of conduct
- [X] I agree to abide by [pyOpenSci's Code of Conduct][PyOpenSciCodeOfConduct] during the review process and in maintaining my package should it be accepted.
**P.S.** *Have feedback/comments about our review process? Leave a comment [here][Comments]
## Editor and Review Templates
[Editor and review templates can be found here][Templates]
[PackagingGuide]: https://www.pyopensci.org/contributing-guide/authoring/index.html#packaging-guide
[PackageCategories]: https://www.pyopensci.org/contributing-guide/open-source-software-peer-review/aims-and-scope.html?highlight=data#package-categories
[NotesOnCategories]: https://www.pyopensci.org/contributing-guide/open-source-software-peer-review/aims-and-scope.html?highlight=data#notes-on-categories
[JournalOfOpenSourceSoftware]: http://joss.theoj.org/
[JossSubmissionRequirements]: https://joss.readthedocs.io/en/latest/submitting.html#submission-requirements
[JossPaperRequirements]: https://joss.readthedocs.io/en/latest/submitting.html#what-should-my-paper-contain
[PyOpenSciCodeOfConduct]: https://www.pyopensci.org/contributing-guide/open-source-software-peer-review/code-of-conduct.html?highlight=code%20conduct
[OsiApprovedLicense]: https://opensource.org/licenses
[Templates]: https://www.pyopensci.org/contributing-guide/appendices/templates.html
[Comments]: https://github.com/pyOpenSci/governance/issues/8
|
True
|
Sevivi: A Rendering Tool to Generate Videos With Synchronized Sensor Data - Submitting Author: Name (@justamad)
Package Name: Sevivi
One-Line Description of Package: A Rendering Tool to Generate Videos With Synchronized Sensor Data
Repository Link: https://github.com/HPI-CH/sevivi
Version submitted: 1.0.3
Editor: @xmnlab
Reviewer 1: @edgarriba
Reviewer 2: @pmeier
Archive: TBD
Version accepted: TBD
---
## Description
Sevivi is designed to render plots of sensor data next to a video that was taken synchronously, synchronizing the sensor data precisely to the video. It allows you to investigate why certain patterns occur in your sensor data based on the exact moment in the video.
## Scope
- Please indicate which [category or categories][PackageCategories] this package falls under:
- [ ] Data retrieval
- [ ] Data extraction
- [ ] Data munging
- [ ] Data deposition
- [ ] Reproducibility
- [ ] Geospatial
- [ ] Education
- [X] Data visualization*
\* Please fill out a pre-submission inquiry before submitting a data visualization package. For more info, see [notes on categories][NotesOnCategories] of our guidebook.
https://github.com/pyOpenSci/software-review/issues/47
- Explain how the and why the package falls under these categories (briefly, 1-2 sentences):
Sevivi renders plots of given data next to a given video.
- Who is the target audience and what are scientific applications of this package?
Target audience is researchers working with motion data, e.g., prediction of squat intensity using acceleration data. When these researchers have taken videos of their trials, they might want to see what exactly has produced a certain pattern, helping to differentiate between noise and signal. Sevivi makes this easier, by rendering synchronized videos of both the original video and the sensor data plots synchronously. Synchronization can be done manually, using an IMU on the camera (e.g. https://github.com/DavidGillsjo/VideoIMUCapture-Android/) or by using skeleton data from a tracking software (we tested with an azure kinect).
- Are there other Python packages that accomplish the same thing? If so, how does yours differ?
Our research indicates that no similar python packages or other programs exist.
- If you made a pre-submission enquiry, please paste the link to the corresponding issue, forum post, or other discussion, or `@tag` the editor you contacted:
https://github.com/pyOpenSci/software-review/issues/47
## Technical checks
For details about the pyOpenSci packaging requirements, see our [packaging guide][PackagingGuide]. Confirm each of the following by checking the box. This package:
- [X] does not violate the Terms of Service of any service it interacts with.
- [X] has an [OSI approved license][OsiApprovedLicense].
- [X] contains a README with instructions for installing the development version.
- [X] includes documentation with examples for all functions.
- [X] contains a vignette with examples of its essential functions and uses.
- [X] has a test suite.
- [X] has continuous integration, such as Travis CI, AppVeyor, CircleCI, and/or others.
## Publication options
- [X] Do you wish to automatically submit to the [Journal of Open Source Software][JournalOfOpenSourceSoftware]? If so:
<details>
<summary>JOSS Checks</summary>
- [X] The package has an **obvious research application** according to JOSS's definition in their [submission requirements][JossSubmissionRequirements]. Be aware that completing the pyOpenSci review process **does not** guarantee acceptance to JOSS. Be sure to read their submission requirements (linked above) if you are interested in submitting to JOSS.
- [X] The package is not a "minor utility" as defined by JOSS's [submission requirements][JossSubmissionRequirements]: "Minor ‘utility’ packages, including ‘thin’ API clients, are not acceptable." pyOpenSci welcomes these packages under "Data Retrieval", but JOSS has slightly different criteria.
- [ ] The package contains a `paper.md` matching [JOSS's requirements][JossPaperRequirements] with a high-level description in the package root or in `inst/`.
- [ ] The package is deposited in a long-term repository with the DOI:
*Note: Do not submit your package separately to JOSS*
We're preparing the paper and looking for suggestions regarding the long-term repository options.
</details>
## Are you OK with Reviewers Submitting Issues and/or pull requests to your Repo Directly?
This option will allow reviewers to open smaller issues that can then be linked to PR's rather than submitting a more dense text based review. It will also allow you to demonstrate addressing the issue via PR links.
- [x] Yes I am OK with reviewers submitting requested changes as issues to my repo. Reviewers will then link to the issues in their submitted review.
## Code of conduct
- [X] I agree to abide by [pyOpenSci's Code of Conduct][PyOpenSciCodeOfConduct] during the review process and in maintaining my package should it be accepted.
**P.S.** *Have feedback/comments about our review process? Leave a comment [here][Comments]
## Editor and Review Templates
[Editor and review templates can be found here][Templates]
[PackagingGuide]: https://www.pyopensci.org/contributing-guide/authoring/index.html#packaging-guide
[PackageCategories]: https://www.pyopensci.org/contributing-guide/open-source-software-peer-review/aims-and-scope.html?highlight=data#package-categories
[NotesOnCategories]: https://www.pyopensci.org/contributing-guide/open-source-software-peer-review/aims-and-scope.html?highlight=data#notes-on-categories
[JournalOfOpenSourceSoftware]: http://joss.theoj.org/
[JossSubmissionRequirements]: https://joss.readthedocs.io/en/latest/submitting.html#submission-requirements
[JossPaperRequirements]: https://joss.readthedocs.io/en/latest/submitting.html#what-should-my-paper-contain
[PyOpenSciCodeOfConduct]: https://www.pyopensci.org/contributing-guide/open-source-software-peer-review/code-of-conduct.html?highlight=code%20conduct
[OsiApprovedLicense]: https://opensource.org/licenses
[Templates]: https://www.pyopensci.org/contributing-guide/appendices/templates.html
[Comments]: https://github.com/pyOpenSci/governance/issues/8
|
main
|
sevivi a rendering tool to generate videos with synchronized sensor data submitting author name justamad package name sevivi one line description of package a rendering tool to generate videos with synchronized sensor data repository link version submitted editor xmnlab reviewer edgarriba reviewer pmeier archive tbd version accepted tbd description sevivi is designed to render plots of sensor data next to a video that was taken synchronously synchronizing the sensor data precisely to the video it allows you to investigate why certain patterns occur in your sensor data based on the exact moment in the video scope please indicate which this package falls under data retrieval data extraction data munging data deposition reproducibility geospatial education data visualization please fill out a pre submission inquiry before submitting a data visualization package for more info see of our guidebook explain how the and why the package falls under these categories briefly sentences sevivi renders plots of given data next to a given video who is the target audience and what are scientific applications of this package target audience is researchers working with motion data e g prediction of squat intensity using acceleration data when these researchers have taken videos of their trials they might want to see what exactly has produced a certain pattern helping to differentiate between noise and signal sevivi makes this easier by rendering synchronized videos of both the original video and the sensor data plots synchronously synchronization can be done manually using an imu on the camera e g or by using skeleton data from a tracking software we tested with an azure kinect are there other python packages that accomplish the same thing if so how does yours differ our research indicates that no similar python packages or other programs exist if you made a pre submission enquiry please paste the link to the corresponding issue forum post or other discussion or tag the editor you contacted technical checks for details about the pyopensci packaging requirements see our confirm each of the following by checking the box this package does not violate the terms of service of any service it interacts with has an contains a readme with instructions for installing the development version includes documentation with examples for all functions contains a vignette with examples of its essential functions and uses has a test suite has continuous integration such as travis ci appveyor circleci and or others publication options do you wish to automatically submit to the if so joss checks the package has an obvious research application according to joss s definition in their be aware that completing the pyopensci review process does not guarantee acceptance to joss be sure to read their submission requirements linked above if you are interested in submitting to joss the package is not a minor utility as defined by joss s minor ‘utility’ packages including ‘thin’ api clients are not acceptable pyopensci welcomes these packages under data retrieval but joss has slightly different criteria the package contains a paper md matching with a high level description in the package root or in inst the package is deposited in a long term repository with the doi note do not submit your package separately to joss we re preparing the paper and looking for suggestions regarding the long term repository options are you ok with reviewers submitting issues and or pull requests to your repo directly this option will allow reviewers to open smaller issues that can then be linked to pr s rather than submitting a more dense text based review it will also allow you to demonstrate addressing the issue via pr links yes i am ok with reviewers submitting requested changes as issues to my repo reviewers will then link to the issues in their submitted review code of conduct i agree to abide by during the review process and in maintaining my package should it be accepted p s have feedback comments about our review process leave a comment editor and review templates
| 1
|
158,706
| 20,028,889,122
|
IssuesEvent
|
2022-02-02 01:26:20
|
LancelotLiu/CAP4
|
https://api.github.com/repos/LancelotLiu/CAP4
|
opened
|
CVE-2021-41183 (Medium) detected in jquery-ui-1.12.1.min.js
|
security vulnerability
|
## CVE-2021-41183 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-ui-1.12.1.min.js</b></p></summary>
<p>A curated set of user interface interactions, effects, widgets, and themes built on top of the jQuery JavaScript Library.</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jqueryui/1.12.1/jquery-ui.min.js">https://cdnjs.cloudflare.com/ajax/libs/jqueryui/1.12.1/jquery-ui.min.js</a></p>
<p>Path to vulnerable library: /cap-web/src/main/webapp/static/lib/js/jquery/ui/js/jquery-ui-1.12.1.min.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-ui-1.12.1.min.js** (Vulnerable Library)
<p>Found in base branch: <b>develop</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jQuery-UI is the official jQuery user interface library. Prior to version 1.13.0, accepting the value of various `*Text` options of the Datepicker widget from untrusted sources may execute untrusted code. The issue is fixed in jQuery UI 1.13.0. The values passed to various `*Text` options are now always treated as pure text, not HTML. A workaround is to not accept the value of the `*Text` options from untrusted sources.
<p>Publish Date: 2021-10-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-41183>CVE-2021-41183</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41183">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41183</a></p>
<p>Release Date: 2021-10-26</p>
<p>Fix Resolution: jquery-ui - 1.13.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-41183 (Medium) detected in jquery-ui-1.12.1.min.js - ## CVE-2021-41183 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-ui-1.12.1.min.js</b></p></summary>
<p>A curated set of user interface interactions, effects, widgets, and themes built on top of the jQuery JavaScript Library.</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jqueryui/1.12.1/jquery-ui.min.js">https://cdnjs.cloudflare.com/ajax/libs/jqueryui/1.12.1/jquery-ui.min.js</a></p>
<p>Path to vulnerable library: /cap-web/src/main/webapp/static/lib/js/jquery/ui/js/jquery-ui-1.12.1.min.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-ui-1.12.1.min.js** (Vulnerable Library)
<p>Found in base branch: <b>develop</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jQuery-UI is the official jQuery user interface library. Prior to version 1.13.0, accepting the value of various `*Text` options of the Datepicker widget from untrusted sources may execute untrusted code. The issue is fixed in jQuery UI 1.13.0. The values passed to various `*Text` options are now always treated as pure text, not HTML. A workaround is to not accept the value of the `*Text` options from untrusted sources.
<p>Publish Date: 2021-10-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-41183>CVE-2021-41183</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41183">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41183</a></p>
<p>Release Date: 2021-10-26</p>
<p>Fix Resolution: jquery-ui - 1.13.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_main
|
cve medium detected in jquery ui min js cve medium severity vulnerability vulnerable library jquery ui min js a curated set of user interface interactions effects widgets and themes built on top of the jquery javascript library library home page a href path to vulnerable library cap web src main webapp static lib js jquery ui js jquery ui min js dependency hierarchy x jquery ui min js vulnerable library found in base branch develop vulnerability details jquery ui is the official jquery user interface library prior to version accepting the value of various text options of the datepicker widget from untrusted sources may execute untrusted code the issue is fixed in jquery ui the values passed to various text options are now always treated as pure text not html a workaround is to not accept the value of the text options from untrusted sources publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery ui step up your open source security game with whitesource
| 0
|
4,469
| 23,292,152,400
|
IssuesEvent
|
2022-08-06 02:24:01
|
tgstation/tgstation
|
https://api.github.com/repos/tgstation/tgstation
|
closed
|
Techweb design in "initial" category are having their build_type's overridden after world finishes initializing. RD console gives you a construct at protolathe prompt for things that don't exist in it.
|
Maintainability/Hinders improvements Bug
|
Master https://github.com/tgstation/tgstation/commit/1f805017d5d3ba2082905d1096b5f4bd3192c75d
Something down the chain after all the `/datum/techweb/specialized/autounlocking`'s proc and unlock all the designs in the `initial` category is overriding all initial designs' `build_type` var to `PROTOLATHE`
Example design:
https://github.com/tgstation/tgstation/blob/0af352927a0225f9e12ea5bf202c7f86d4be6581/code/modules/research/designs/autolathe_designs.dm#L482-L488
**Repro:**
Start server
Check SSresearch in MC, go to `techweb_designs`
Pull up `rubber shot`
Check `build_type`
Buildtype is 2, should be 4.
Open RD console interface
Pull up basic research tech node
Check rubber shot
See construction button, even though it can't be constructed at a protolathe
Click construction button, get a blank protolathe screen since it doesn't exist.
Can also be confirmed by checking output on the RD console screen
https://github.com/tgstation/tgstation/blob/4a2861b1c2883d5ba428766c99960adcb948ba4e/code/modules/research/rdconsole.dm#L760-L763
`to_chat(world, "protolathe TRUE, [selected_design] [selected_design.build_type] [PROTOLATHE] [AUTOLATHE]")`

Define for `PROTOLATHE` is 2, the rubber shot design should be 4 / `AUTOLATHE`

Shouldn't be seeing the protolathe icon nor the construct button in this pic.
|
True
|
Techweb design in "initial" category are having their build_type's overridden after world finishes initializing. RD console gives you a construct at protolathe prompt for things that don't exist in it. - Master https://github.com/tgstation/tgstation/commit/1f805017d5d3ba2082905d1096b5f4bd3192c75d
Something down the chain after all the `/datum/techweb/specialized/autounlocking`'s proc and unlock all the designs in the `initial` category is overriding all initial designs' `build_type` var to `PROTOLATHE`
Example design:
https://github.com/tgstation/tgstation/blob/0af352927a0225f9e12ea5bf202c7f86d4be6581/code/modules/research/designs/autolathe_designs.dm#L482-L488
**Repro:**
Start server
Check SSresearch in MC, go to `techweb_designs`
Pull up `rubber shot`
Check `build_type`
Buildtype is 2, should be 4.
Open RD console interface
Pull up basic research tech node
Check rubber shot
See construction button, even though it can't be constructed at a protolathe
Click construction button, get a blank protolathe screen since it doesn't exist.
Can also be confirmed by checking output on the RD console screen
https://github.com/tgstation/tgstation/blob/4a2861b1c2883d5ba428766c99960adcb948ba4e/code/modules/research/rdconsole.dm#L760-L763
`to_chat(world, "protolathe TRUE, [selected_design] [selected_design.build_type] [PROTOLATHE] [AUTOLATHE]")`

Define for `PROTOLATHE` is 2, the rubber shot design should be 4 / `AUTOLATHE`

Shouldn't be seeing the protolathe icon nor the construct button in this pic.
|
main
|
techweb design in initial category are having their build type s overriden after world finishes initializing rd console gives you a construct at protolathe prompt for things that don t exist in it master something down the chain after all the datum techweb specialized autounlocking s proc and unlock all the designs in the initial category is overriding all initial designs build type var to protolathe example design repo start server check ssresearch in mc go to techweb designs pull up rubber shot check build type buildtype is should be open rd console interface pull up basic research tech node check rubber shot see construction button even though it can t be constructed at a protolathe click construction button get a blank protolathe screen since it doesn t exist can also be confirmed by checking output on the rd console screen to chat world protolathe true define for protolathe is the rubber shot design should be autolathe shouldn t be seeing the protolathe icon nor the construct button in this pic
| 1
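The build_type mismatch described in the record above reduces to a bitflag check: the RD console shows a protolathe construct button whenever the design's `build_type` has the `PROTOLATHE` bit set. A minimal sketch of that check, using the define values quoted in the issue (the helper name is hypothetical, not the tgstation codebase):

```python
# Define values quoted in the issue: PROTOLATHE is 2, AUTOLATHE is 4.
PROTOLATHE = 2
AUTOLATHE = 4

def shows_protolathe_button(build_type: int) -> bool:
    # build_type is a bitfield; the console tests the PROTOLATHE bit
    return bool(build_type & PROTOLATHE)
```

A rubber shot with its intended `build_type` of `AUTOLATHE` (4) would not show the button; once the value is overridden to `PROTOLATHE` (2), it does.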
|
4,279
| 21,526,376,992
|
IssuesEvent
|
2022-04-28 18:53:44
|
BioArchLinux/Packages
|
https://api.github.com/repos/BioArchLinux/Packages
|
opened
|
[MAINTAIN] bioconductor upgrade issue: can't open shared lib
|
maintain
|
<!--
Please report the error of one package in one issue! Use multi issues to report multi bugs.
Thanks!
-->
**Log of the bug**
- [ ] r-mzr: unable to load shared object '/build/r-mzr/src/00LOCK-mzR/00new/mzR/libs/mzR.so'
- [ ] r-rsamtools: unable to load shared object '/build/r-rsamtools/src/00LOCK-Rsamtools/00new/Rsamtools/libs/Rsamtools.so'
- [ ] r-rhdf5: unable to load shared object '/build/r-rhdf5/src/00LOCK-rhdf5/00new/rhdf5/libs/rhdf5.so'
<details>
```
Error: package or namespace load failed for ‘Rsamtools’ in dyn.load(file, DLLpath = DLLpath, ...):
unable to load shared object '/build/r-rsamtools/src/00LOCK-Rsamtools/00new/Rsamtools/libs/Rsamtools.so':
/build/r-rsamtools/src/00LOCK-Rsamtools/00new/Rsamtools/libs/Rsamtools.so: undefined symbol: hts_open_format
Error: loading failed
Execution halted
ERROR: loading failed
```
</details>
**Packages (please complete the following information):**
- Package Name: [e.g. iqtree]
**Description**
Add any other context about the problem here.
|
True
|
[MAINTAIN] bioconductor upgrade issue: can't open shared lib - <!--
Please report the error of one package in one issue! Use multi issues to report multi bugs.
Thanks!
-->
**Log of the bug**
- [ ] r-mzr: unable to load shared object '/build/r-mzr/src/00LOCK-mzR/00new/mzR/libs/mzR.so'
- [ ] r-rsamtools: unable to load shared object '/build/r-rsamtools/src/00LOCK-Rsamtools/00new/Rsamtools/libs/Rsamtools.so'
- [ ] r-rhdf5: unable to load shared object '/build/r-rhdf5/src/00LOCK-rhdf5/00new/rhdf5/libs/rhdf5.so'
<details>
```
Error: package or namespace load failed for ‘Rsamtools’ in dyn.load(file, DLLpath = DLLpath, ...):
unable to load shared object '/build/r-rsamtools/src/00LOCK-Rsamtools/00new/Rsamtools/libs/Rsamtools.so':
/build/r-rsamtools/src/00LOCK-Rsamtools/00new/Rsamtools/libs/Rsamtools.so: undefined symbol: hts_open_format
Error: loading failed
Execution halted
ERROR: loading failed
```
</details>
**Packages (please complete the following information):**
- Package Name: [e.g. iqtree]
**Description**
Add any other context about the problem here.
|
main
|
bioconductor upgrade issue can t open shared lib please report the error of one package in one issue use multi issues to report multi bugs thanks log of the bug r mzr unable to load shared object build r mzr src mzr mzr libs mzr so r rsamtools unable to load shared object build r rsamtools src rsamtools rsamtools libs rsamtools so r unable to load shared object build r src libs so error package or namespace load failed for ‘rsamtools’ in dyn load file dllpath dllpath unable to load shared object build r rsamtools src rsamtools rsamtools libs rsamtools so build r rsamtools src rsamtools rsamtools libs rsamtools so undefined symbol hts open format error loading failed execution halted error loading failed packages please complete the following information package name description add any other context about the problem here
| 1
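The three failures in the record above share one shape: `dyn.load` reports an undefined symbol in the freshly built `.so` (here `hts_open_format` from htslib). A small sketch for pulling the symbol name out of such a log line; the regex is an assumption about the exact message format quoted above:

```python
import re

def missing_symbol(log_line: str):
    # R's dyn.load error ends with "undefined symbol: <name>";
    # return that name, or None if the line has no such suffix.
    m = re.search(r"undefined symbol: (\w+)", log_line)
    return m.group(1) if m else None
```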
|
38,574
| 10,219,151,600
|
IssuesEvent
|
2019-08-15 17:49:53
|
spack/spack
|
https://api.github.com/repos/spack/spack
|
closed
|
Installation issue: ncbi-toolkit
|
build-error
|
I'm working on version bumping `ncbi-toolkit` to `22_0_0` and found that:
1. Upstream's configure script generates the build directory only using the major version number of `gcc` and setting the other version digits to zero, which is why the spack build fails for `gcc@8.3.0` below.
2. The spack build further restricts one only to using the `gcc` compiler, but upstream supports `intel`, etc, so the useless compiler restriction should also be removed.
Patch incoming that addresses both issues.
### Steps to reproduce the issue
```console
$ spack install ncbi-toolkit@22_0_0
--snip--
==> Created stage in /home/omsai/src/libkmap/spack/var/spack/stage/ncbi-toolkit-22_0_0-r4gzkg7temhtcdjqztodgizppujvqudb
==> Ran patch() for ncbi-toolkit
==> Building ncbi-toolkit [AutotoolsPackage]
==> Executing phase: 'autoreconf'
==> Executing phase: 'configure'
==> Executing phase: 'build'
==> Error: OSError: [Errno 2] No such file or directory: 'GCC830-DebugMT64/build'
/home/omsai/src/libkmap/spack/var/spack/repos/builtin/packages/ncbi-toolkit/package.py:45, in build:
42 compiler_version = self.compiler.version.joined
43
44 with working_dir(join_path(
>> 45 'GCC{0}-DebugMT64'.format(compiler_version), 'build')):
46 make('all_r')
See build log for details:
/home/omsai/src/libkmap/spack/var/spack/stage/ncbi-toolkit-22_0_0-r4gzkg7temhtcdjqztodgizppujvqudb/spack-build-out.txt
```
The actual directory upstream creates for gcc@8.3.0 is:
`./spack-src/GCC800-DebugMT64/build`
### Platform and user environment
Please report your OS here:
```commandline
$ uname -a
Linux xm1 4.19.0-5-amd64 #1 SMP Debian 4.19.37-5+deb10u1 (2019-07-19) x86_64 GNU/Linux
$ lsb_release -d
Description: Debian GNU/Linux 10 (buster)
$ gcc --version | head -1
gcc (Debian 8.3.0-6) 8.3.0
```
```YAML
# compilers.yaml
compilers:
- compiler:
environment: {}
extra_rpaths: []
flags: {}
modules: []
operating_system: debian10
paths:
cc: /usr/bin/gcc-8
cxx: /usr/bin/g++-8
f77: /usr/bin/gfortran-8
fc: /usr/bin/gfortran-8
spec: gcc@8.3.0
```
|
1.0
|
Installation issue: ncbi-toolkit - I'm working on version bumping `ncbi-toolkit` to `22_0_0` and found that:
1. Upstream's configure script generates the build directory only using the major version number of `gcc` and setting the other version digits to zero, which is why the spack build fails for `gcc@8.3.0` below.
2. The spack build further restricts one only to using the `gcc` compiler, but upstream supports `intel`, etc, so the useless compiler restriction should also be removed.
Patch incoming that addresses both issues.
### Steps to reproduce the issue
```console
$ spack install ncbi-toolkit@22_0_0
--snip--
==> Created stage in /home/omsai/src/libkmap/spack/var/spack/stage/ncbi-toolkit-22_0_0-r4gzkg7temhtcdjqztodgizppujvqudb
==> Ran patch() for ncbi-toolkit
==> Building ncbi-toolkit [AutotoolsPackage]
==> Executing phase: 'autoreconf'
==> Executing phase: 'configure'
==> Executing phase: 'build'
==> Error: OSError: [Errno 2] No such file or directory: 'GCC830-DebugMT64/build'
/home/omsai/src/libkmap/spack/var/spack/repos/builtin/packages/ncbi-toolkit/package.py:45, in build:
42 compiler_version = self.compiler.version.joined
43
44 with working_dir(join_path(
>> 45 'GCC{0}-DebugMT64'.format(compiler_version), 'build')):
46 make('all_r')
See build log for details:
/home/omsai/src/libkmap/spack/var/spack/stage/ncbi-toolkit-22_0_0-r4gzkg7temhtcdjqztodgizppujvqudb/spack-build-out.txt
```
The actual directory upstream creates for gcc@8.3.0 is:
`./spack-src/GCC800-DebugMT64/build`
### Platform and user environment
Please report your OS here:
```commandline
$ uname -a
Linux xm1 4.19.0-5-amd64 #1 SMP Debian 4.19.37-5+deb10u1 (2019-07-19) x86_64 GNU/Linux
$ lsb_release -d
Description: Debian GNU/Linux 10 (buster)
$ gcc --version | head -1
gcc (Debian 8.3.0-6) 8.3.0
```
```YAML
# compilers.yaml
compilers:
- compiler:
environment: {}
extra_rpaths: []
flags: {}
modules: []
operating_system: debian10
paths:
cc: /usr/bin/gcc-8
cxx: /usr/bin/g++-8
f77: /usr/bin/gfortran-8
fc: /usr/bin/gfortran-8
spec: gcc@8.3.0
```
|
non_main
|
installation issue ncbi toolkit i m working on version bumping ncbi toolkit to and found that upstream s configure script generates the build directory only using the major version number of gcc and setting the other version digits to zero which is why the spack build fails for gcc below the spack build further restricts one only to using the gcc compiler but upstream supports intel etc so the useless compiler restriction should also be removed patch incoming that addresses both issues steps to reproduce the issue console spack install ncbi toolkit snip created stage in home omsai src libkmap spack var spack stage ncbi toolkit ran patch for ncbi toolkit building ncbi toolkit executing phase autoreconf executing phase configure executing phase build error oserror no such file or directory build home omsai src libkmap spack var spack repos builtin packages ncbi toolkit package py in build compiler version self compiler version joined with working dir join path gcc format compiler version build make all r see build log for details home omsai src libkmap spack var spack stage ncbi toolkit spack build out txt the actual directory upstream creates for gcc is spack src build platform and user environment please report your os here commandline uname a linux smp debian gnu linux lsb release d description debian gnu linux buster gcc version head gcc debian yaml compilers yaml compilers compiler environment extra rpaths flags modules operating system paths cc usr bin gcc cxx usr bin g usr bin gfortran fc usr bin gfortran spec gcc
| 0
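The version-mangling described in the record above (upstream's configure keeps only the major compiler digit and zero-fills the rest, so gcc 8.3.0 yields `GCC800-DebugMT64` while spack looks for `GCC830-DebugMT64`) can be reproduced with a small sketch; the function name is illustrative, not spack's API:

```python
def upstream_build_dir(compiler_version: str) -> str:
    # Upstream's configure keeps only the major digit of the compiler
    # version and zero-fills the minor/patch digits, which is why the
    # spack build above could not find 'GCC830-DebugMT64/build'.
    major = compiler_version.split(".")[0]
    return "GCC{0}00-DebugMT64".format(major)
```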
|
406,971
| 27,590,341,250
|
IssuesEvent
|
2023-03-08 23:35:45
|
developmentseed/titiler
|
https://api.github.com/repos/developmentseed/titiler
|
closed
|
Docs: Add step for upgrading pip
|
documentation
|
#### Problem description
I am installing a local build for development as described here, https://developmentseed.org/titiler/#installation, and ran into an issue with the fact that there is no `setup.py` in the app, only `pyproject.toml`. Turns out, this is just a problem with pip <21.3 (I had 20 by default in my virtual environment). Upgrading allowed me to make an editable install without any trouble.
Adding
```
python -m pip install --upgrade pip
```
as a first step in those commands may be helpful for others in the future.
#### Environment Information
Ubuntu 20.04, Python 3.8.10 (using `venv` for virtual environment)
|
1.0
|
Docs: Add step for upgrading pip - #### Problem description
I am installing a local build for development as described here, https://developmentseed.org/titiler/#installation, and ran into an issue with the fact that there is no `setup.py` in the app, only `pyproject.toml`. Turns out, this is just a problem with pip <21.3 (I had 20 by default in my virtual environment). Upgrading allowed me to make an editable install without any trouble.
Adding
```
python -m pip install --upgrade pip
```
as a first step in those commands may be helpful for others in the future.
#### Environment Information
Ubuntu 20.04, Python 3.8.10 (using `venv` for virtual environment)
|
non_main
|
docs add step for upgrading pip problem description i am installing a local build for development as described here and ran into an issue with the fact that there is no setup py in the app only pyproject toml turns out this is just a problem with pip i had by default in my virtual environment upgrading allowed me to make an editable install without any trouble adding python m pip install upgrade pip as a first step in those commands may be helpful for others in the future environment information ubuntu python using venv for virtual environment
| 0
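The pip constraint behind the record above (editable installs of `pyproject.toml`-only projects need pip >= 21.3, which added PEP 660 support) can be expressed as a simple version check; this is a minimal sketch, not part of the titiler codebase:

```python
def supports_editable_pyproject(pip_version: str) -> bool:
    # pip 21.3 added PEP 660 editable installs for projects that ship
    # only a pyproject.toml (no setup.py), the situation described above.
    parts = tuple(int(p) for p in pip_version.split(".")[:2])
    return parts >= (21, 3)
```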
|
1,512
| 6,537,412,125
|
IssuesEvent
|
2017-08-31 22:21:27
|
ocaml/opam-repository
|
https://api.github.com/repos/ocaml/opam-repository
|
closed
|
LZ4 doesn't compile on Mac OS X
|
needs maintainer action
|
```
[ERROR] The compilation of lz4 failed at "ocaml setup.ml -build -cflags -ccopt,-I,-ccopt,/usr/local/include".
Processing 1/1: [lz4: ocamlfind remove]
#=== ERROR while installing lz4.1.1.1 =========================================#
# opam-version 1.2.2
# os darwin
# command ocaml setup.ml -build -cflags -ccopt,-I,-ccopt,/usr/local/include
# path /Users/stas/.opam/system/build/lz4.1.1.1
# compiler system (4.05.0)
# exit-code 1
# env-file /Users/stas/.opam/system/build/lz4.1.1.1/lz4-68396-e7966f.env
# stdout-file /Users/stas/.opam/system/build/lz4.1.1.1/lz4-68396-e7966f.out
# stderr-file /Users/stas/.opam/system/build/lz4.1.1.1/lz4-68396-e7966f.err
### stdout ###
# [...]
# ld: library not found for -llz4
# clang: error: linker command failed with exit code 1 (use -v to see invocation)
# Command exited with code 2.
# + ocamlopt.opt unix.cmxa -I /Users/stas/.opam/system/lib/ocamlbuild /Users/stas/.opam/system/lib/ocamlbuild/ocamlbuildlib.cmxa myocamlbuild.ml /Users/stas/.opam/system/lib/ocamlbuild/ocamlbuild.cmx -o myocamlbuild
# File "myocamlbuild.ml", line 518, characters 43-62:
# Warning 3: deprecated: Ocamlbuild_plugin.String.uncapitalize
# Use String.uncapitalize_ascii instead.
# File "myocamlbuild.ml", line 531, characters 51-70:
# Warning 3: deprecated: Ocamlbuild_plugin.String.uncapitalize
# Use String.uncapitalize_ascii instead.
### stderr ###
# [...]
# Use String.uncapitalize_ascii instead.
# File "setup.ml", line 5847, characters 11-28:
# Warning 3: deprecated: String.capitalize
# Use String.capitalize_ascii instead.
# File "setup.ml", line 5848, characters 11-30:
# Warning 3: deprecated: String.uncapitalize
# Use String.uncapitalize_ascii instead.
# W: Cannot find source file matching module 'LZ4_bindings' in library lz4
# W: Cannot find source file matching module 'LZ4_generated' in library lz4
# E: Failure("Command ''/Users/stas/.opam/system/bin/ocamlbuild' lib_gen/LZ4_bindgen.byte lib/liblz4_stubs.a lib/dlllz4_stubs.so lib/lz4.cma lib/lz4.cmxa lib/lz4.a lib/lz4.cmxs -tag debug -cflags -ccopt,-I,-ccopt,/usr/local/include' terminated with error code 10")
=-=- Error report -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
The following actions failed
∗ install lz4 1.1.1
```
Basically, it can't find lz4 library installed with Homebrew. I saw #4238 where you fixed include path, but library path is still missing.
I was able to install the package using:
```
LIBRARY_PATH=/usr/local/lib opam install lz4
```
So, need to pass `-L"/usr/local/lib"` to the compiler.
|
True
|
LZ4 doesn't compile on Mac OS X - ```
[ERROR] The compilation of lz4 failed at "ocaml setup.ml -build -cflags -ccopt,-I,-ccopt,/usr/local/include".
Processing 1/1: [lz4: ocamlfind remove]
#=== ERROR while installing lz4.1.1.1 =========================================#
# opam-version 1.2.2
# os darwin
# command ocaml setup.ml -build -cflags -ccopt,-I,-ccopt,/usr/local/include
# path /Users/stas/.opam/system/build/lz4.1.1.1
# compiler system (4.05.0)
# exit-code 1
# env-file /Users/stas/.opam/system/build/lz4.1.1.1/lz4-68396-e7966f.env
# stdout-file /Users/stas/.opam/system/build/lz4.1.1.1/lz4-68396-e7966f.out
# stderr-file /Users/stas/.opam/system/build/lz4.1.1.1/lz4-68396-e7966f.err
### stdout ###
# [...]
# ld: library not found for -llz4
# clang: error: linker command failed with exit code 1 (use -v to see invocation)
# Command exited with code 2.
# + ocamlopt.opt unix.cmxa -I /Users/stas/.opam/system/lib/ocamlbuild /Users/stas/.opam/system/lib/ocamlbuild/ocamlbuildlib.cmxa myocamlbuild.ml /Users/stas/.opam/system/lib/ocamlbuild/ocamlbuild.cmx -o myocamlbuild
# File "myocamlbuild.ml", line 518, characters 43-62:
# Warning 3: deprecated: Ocamlbuild_plugin.String.uncapitalize
# Use String.uncapitalize_ascii instead.
# File "myocamlbuild.ml", line 531, characters 51-70:
# Warning 3: deprecated: Ocamlbuild_plugin.String.uncapitalize
# Use String.uncapitalize_ascii instead.
### stderr ###
# [...]
# Use String.uncapitalize_ascii instead.
# File "setup.ml", line 5847, characters 11-28:
# Warning 3: deprecated: String.capitalize
# Use String.capitalize_ascii instead.
# File "setup.ml", line 5848, characters 11-30:
# Warning 3: deprecated: String.uncapitalize
# Use String.uncapitalize_ascii instead.
# W: Cannot find source file matching module 'LZ4_bindings' in library lz4
# W: Cannot find source file matching module 'LZ4_generated' in library lz4
# E: Failure("Command ''/Users/stas/.opam/system/bin/ocamlbuild' lib_gen/LZ4_bindgen.byte lib/liblz4_stubs.a lib/dlllz4_stubs.so lib/lz4.cma lib/lz4.cmxa lib/lz4.a lib/lz4.cmxs -tag debug -cflags -ccopt,-I,-ccopt,/usr/local/include' terminated with error code 10")
=-=- Error report -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
The following actions failed
∗ install lz4 1.1.1
```
Basically, it can't find lz4 library installed with Homebrew. I saw #4238 where you fixed include path, but library path is still missing.
I was able to install the package using:
```
LIBRARY_PATH=/usr/local/lib opam install lz4
```
So, need to pass `-L"/usr/local/lib"` to the compiler.
|
main
|
doesn t compile on mac os x the compilation of failed at ocaml setup ml build cflags ccopt i ccopt usr local include processing error while installing opam version os darwin command ocaml setup ml build cflags ccopt i ccopt usr local include path users stas opam system build compiler system exit code env file users stas opam system build env stdout file users stas opam system build out stderr file users stas opam system build err stdout ld library not found for clang error linker command failed with exit code use v to see invocation command exited with code ocamlopt opt unix cmxa i users stas opam system lib ocamlbuild users stas opam system lib ocamlbuild ocamlbuildlib cmxa myocamlbuild ml users stas opam system lib ocamlbuild ocamlbuild cmx o myocamlbuild file myocamlbuild ml line characters warning deprecated ocamlbuild plugin string uncapitalize use string uncapitalize ascii instead file myocamlbuild ml line characters warning deprecated ocamlbuild plugin string uncapitalize use string uncapitalize ascii instead stderr use string uncapitalize ascii instead file setup ml line characters warning deprecated string capitalize use string capitalize ascii instead file setup ml line characters warning deprecated string uncapitalize use string uncapitalize ascii instead w cannot find source file matching module bindings in library w cannot find source file matching module generated in library e failure command users stas opam system bin ocamlbuild lib gen bindgen byte lib stubs a lib stubs so lib cma lib cmxa lib a lib cmxs tag debug cflags ccopt i ccopt usr local include terminated with error code error report the following actions failed ∗ install basically it can t find library installed with homebrew i saw where you fixed include path but library path is still missing i was able to install the package using library path usr local lib opam install so need to pass l usr local lib to the compiler
| 1
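The workaround quoted in the record above (`LIBRARY_PATH=/usr/local/lib opam install lz4`) works because `LIBRARY_PATH` adds a link-time search directory, letting the linker resolve `-llz4` against the Homebrew install. A sketch of building such an environment; the helper name and default prefix are illustrative assumptions:

```python
import os

def with_homebrew_lib(env=None, prefix="/usr/local"):
    # Prepend <prefix>/lib to LIBRARY_PATH so the linker can find -llz4,
    # mirroring `LIBRARY_PATH=/usr/local/lib opam install lz4` above.
    env = dict(os.environ if env is None else env)
    lib = os.path.join(prefix, "lib")
    existing = env.get("LIBRARY_PATH", "")
    env["LIBRARY_PATH"] = lib + (os.pathsep + existing if existing else "")
    return env
```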
|
4,740
| 24,460,288,684
|
IssuesEvent
|
2022-10-07 10:29:17
|
mozilla/foundation.mozilla.org
|
https://api.github.com/repos/mozilla/foundation.mozilla.org
|
closed
|
Upgrade Django to 3.2.16
|
engineering localization 🌎 unplanned Maintain
|
A security patch for Django was released fixing a critical issue on sites that use the translation framework.
See also: https://www.djangoproject.com/weblog/2022/oct/04/security-releases/
|
True
|
Upgrade Django to 3.2.16 - A security patch for Django was released fixing a critical issue on sites that use the translation framework.
See also: https://www.djangoproject.com/weblog/2022/oct/04/security-releases/
|
main
|
upgrade django to a security patch for django was released fixing a critical issue on sites that use the translation framework see also
| 1
|
1,112
| 4,988,882,622
|
IssuesEvent
|
2016-12-08 09:57:18
|
ansible/ansible-modules-core
|
https://api.github.com/repos/ansible/ansible-modules-core
|
closed
|
EC2 module parameter `id` is undocumented.
|
affects_2.3 aws cloud docs_report waiting_on_maintainer
|
The `id` parameter was added at https://github.com/ansible/ansible/pull/2421.
For some reason the parameter's documentation was removed later at https://github.com/ansible/ansible-modules-core/commit/c6b0d469acbb1a1b0508bacaedc5456eb5e9be83#diff-9667dfcde0b7854855c94acb534b156aL31
Can the documentation be restored? I haven't seen notices about deprecation and the functionality is there.
|
True
|
EC2 module parameter `id` is undocumented. - The `id` parameter was added at https://github.com/ansible/ansible/pull/2421.
For some reason the parameter's documentation was removed later at https://github.com/ansible/ansible-modules-core/commit/c6b0d469acbb1a1b0508bacaedc5456eb5e9be83#diff-9667dfcde0b7854855c94acb534b156aL31
Can the documentation be restored? I haven't seen notices about deprecation and the functionality is there.
|
main
|
module parameter id is undocumented the id parameter was added at for some reason the parameter s documentation was removed later at can the documentation be restored i haven t seen notices about deprecation and the functionality is there
| 1
|
989
| 11,984,959,858
|
IssuesEvent
|
2020-04-07 16:40:38
|
dotnet/roslyn
|
https://api.github.com/repos/dotnet/roslyn
|
opened
|
Compiler crash with syntax error in indexer declaration
|
Area-Compilers Bug Tenet-Reliability
|
The following proposed unit test crashes the compiler:
```csharp
[Fact]
public void Test()
{
var source = @"
struct S
{
public bool Nada[int t] { get { return false; } }
}
";
var compilation = CreateCompilation(source, options: TestOptions.DebugDll);
compilation.GetDiagnostics();
}
```
The result of running it is
```none
Microsoft.CodeAnalysis.CSharp.UnitTests.PatternMatchingTests3.Test
Source: PatternMatchingTests3.cs line 3313
Duration: 839 ms
Message:
System.AggregateException : One or more errors occurred.
---- System.InvalidOperationException :
Stack Trace:
Task.ThrowIfExceptional(Boolean includeTaskCanceledExceptions)
Task.Wait(Int32 millisecondsTimeout, CancellationToken cancellationToken)
Task.Wait()
Parallel.ForWorker[TLocal](Int32 fromInclusive, Int32 toExclusive, ParallelOptions parallelOptions, Action`1 body, Action`2 bodyWithState, Func`4 bodyWithLocal, Func`1 localInit, Action`1 localFinally)
Parallel.For(Int32 fromInclusive, Int32 toExclusive, ParallelOptions parallelOptions, Action`1 body)
SourceNamespaceSymbol.ForceComplete(SourceLocation locationOpt, CancellationToken cancellationToken) line 56
SourceModuleSymbol.ForceComplete(SourceLocation locationOpt, CancellationToken cancellationToken) line 261
SourceAssemblySymbol.ForceComplete(SourceLocation locationOpt, CancellationToken cancellationToken) line 911
CSharpCompilation.GetSourceDeclarationDiagnostics(SyntaxTree syntaxTree, Nullable`1 filterSpanWithinTree, Func`4 locationFilterOpt, CancellationToken cancellationToken) line 2508
CSharpCompilation.GetDiagnostics(CompilationStage stage, Boolean includeEarlierStages, DiagnosticBag diagnostics, CancellationToken cancellationToken) line 2395
CSharpCompilation.GetDiagnostics(CompilationStage stage, Boolean includeEarlierStages, CancellationToken cancellationToken) line 2316
CSharpCompilation.GetDiagnostics(CancellationToken cancellationToken) line 2310
PatternMatchingTests3.Test() line 3322
----- Inner Stack Trace -----
ThrowingTraceListener.Fail(String message, String detailMessage) line 26
TraceListener.Fail(String message)
TraceInternal.Fail(String message)
Debug.Assert(Boolean condition)
SourceMemberFieldSymbolFromDeclarator.get_HasPointerType() line 376
BaseTypeAnalysis.NonPointerType(FieldSymbol field) line 154
SourceMemberContainerTypeSymbol.HasStructCircularity(DiagnosticBag diagnostics) line 1930
SourceMemberContainerTypeSymbol.CheckStructCircularity(DiagnosticBag diagnostics) line 1911
SourceMemberContainerTypeSymbol.get_KnownCircularStruct() line 1890
SourceMemberContainerTypeSymbol.AfterMembersChecks(DiagnosticBag diagnostics) line 1418
SourceMemberContainerTypeSymbol.ForceComplete(SourceLocation locationOpt, CancellationToken cancellationToken) line 516
Symbol.ForceCompleteMemberByLocation(SourceLocation locationOpt, Symbol member, CancellationToken cancellationToken) line 775
<>c__DisplayClass49_1.<ForceComplete>b__0(Int32 i) line 61
<>c__DisplayClass6_0`1.<WithCurrentUICulture>b__0(T param) line 171
<>c__DisplayClass17_0`1.<ForWorker>b__1()
Task.InnerInvoke()
Task.InnerInvokeWithArg(Task childTask)
<>c__DisplayClass176_0.<ExecuteSelfReplicating>b__0(Object <p0>)
```
|
True
|
Compiler crash with syntax error in indexer declaration - The following proposed unit test crashes the compiler:
```csharp
[Fact]
public void Test()
{
var source = @"
struct S
{
public bool Nada[int t] { get { return false; } }
}
";
var compilation = CreateCompilation(source, options: TestOptions.DebugDll);
compilation.GetDiagnostics();
}
```
The result of running it is
```none
Microsoft.CodeAnalysis.CSharp.UnitTests.PatternMatchingTests3.Test
Source: PatternMatchingTests3.cs line 3313
Duration: 839 ms
Message:
System.AggregateException : One or more errors occurred.
---- System.InvalidOperationException :
Stack Trace:
Task.ThrowIfExceptional(Boolean includeTaskCanceledExceptions)
Task.Wait(Int32 millisecondsTimeout, CancellationToken cancellationToken)
Task.Wait()
Parallel.ForWorker[TLocal](Int32 fromInclusive, Int32 toExclusive, ParallelOptions parallelOptions, Action`1 body, Action`2 bodyWithState, Func`4 bodyWithLocal, Func`1 localInit, Action`1 localFinally)
Parallel.For(Int32 fromInclusive, Int32 toExclusive, ParallelOptions parallelOptions, Action`1 body)
SourceNamespaceSymbol.ForceComplete(SourceLocation locationOpt, CancellationToken cancellationToken) line 56
SourceModuleSymbol.ForceComplete(SourceLocation locationOpt, CancellationToken cancellationToken) line 261
SourceAssemblySymbol.ForceComplete(SourceLocation locationOpt, CancellationToken cancellationToken) line 911
CSharpCompilation.GetSourceDeclarationDiagnostics(SyntaxTree syntaxTree, Nullable`1 filterSpanWithinTree, Func`4 locationFilterOpt, CancellationToken cancellationToken) line 2508
CSharpCompilation.GetDiagnostics(CompilationStage stage, Boolean includeEarlierStages, DiagnosticBag diagnostics, CancellationToken cancellationToken) line 2395
CSharpCompilation.GetDiagnostics(CompilationStage stage, Boolean includeEarlierStages, CancellationToken cancellationToken) line 2316
CSharpCompilation.GetDiagnostics(CancellationToken cancellationToken) line 2310
PatternMatchingTests3.Test() line 3322
----- Inner Stack Trace -----
ThrowingTraceListener.Fail(String message, String detailMessage) line 26
TraceListener.Fail(String message)
TraceInternal.Fail(String message)
Debug.Assert(Boolean condition)
SourceMemberFieldSymbolFromDeclarator.get_HasPointerType() line 376
BaseTypeAnalysis.NonPointerType(FieldSymbol field) line 154
SourceMemberContainerTypeSymbol.HasStructCircularity(DiagnosticBag diagnostics) line 1930
SourceMemberContainerTypeSymbol.CheckStructCircularity(DiagnosticBag diagnostics) line 1911
SourceMemberContainerTypeSymbol.get_KnownCircularStruct() line 1890
SourceMemberContainerTypeSymbol.AfterMembersChecks(DiagnosticBag diagnostics) line 1418
SourceMemberContainerTypeSymbol.ForceComplete(SourceLocation locationOpt, CancellationToken cancellationToken) line 516
Symbol.ForceCompleteMemberByLocation(SourceLocation locationOpt, Symbol member, CancellationToken cancellationToken) line 775
<>c__DisplayClass49_1.<ForceComplete>b__0(Int32 i) line 61
<>c__DisplayClass6_0`1.<WithCurrentUICulture>b__0(T param) line 171
<>c__DisplayClass17_0`1.<ForWorker>b__1()
Task.InnerInvoke()
Task.InnerInvokeWithArg(Task childTask)
<>c__DisplayClass176_0.<ExecuteSelfReplicating>b__0(Object <p0>)
```
|
non_main
|
compiler crash with syntax error in indexer declaration the following proposed unit test crashes the compiler csharp public void test var source struct s public bool nada get return false var compilation createcompilation source options testoptions debugdll compilation getdiagnostics the result of running it is none microsoft codeanalysis csharp unittests test source cs line duration ms message system aggregateexception one or more errors occurred system invalidoperationexception stack trace task throwifexceptional boolean includetaskcanceledexceptions task wait millisecondstimeout cancellationtoken cancellationtoken task wait parallel forworker frominclusive toexclusive paralleloptions paralleloptions action body action bodywithstate func bodywithlocal func localinit action localfinally parallel for frominclusive toexclusive paralleloptions paralleloptions action body sourcenamespacesymbol forcecomplete sourcelocation locationopt cancellationtoken cancellationtoken line sourcemodulesymbol forcecomplete sourcelocation locationopt cancellationtoken cancellationtoken line sourceassemblysymbol forcecomplete sourcelocation locationopt cancellationtoken cancellationtoken line csharpcompilation getsourcedeclarationdiagnostics syntaxtree syntaxtree nullable filterspanwithintree func locationfilteropt cancellationtoken cancellationtoken line csharpcompilation getdiagnostics compilationstage stage boolean includeearlierstages diagnosticbag diagnostics cancellationtoken cancellationtoken line csharpcompilation getdiagnostics compilationstage stage boolean includeearlierstages cancellationtoken cancellationtoken line csharpcompilation getdiagnostics cancellationtoken cancellationtoken line test line inner stack trace throwingtracelistener fail string message string detailmessage line tracelistener fail string message traceinternal fail string message debug assert boolean condition sourcememberfieldsymbolfromdeclarator get haspointertype line basetypeanalysis nonpointertype fieldsymbol field line sourcemembercontainertypesymbol hasstructcircularity diagnosticbag diagnostics line sourcemembercontainertypesymbol checkstructcircularity diagnosticbag diagnostics line sourcemembercontainertypesymbol get knowncircularstruct line sourcemembercontainertypesymbol aftermemberschecks diagnosticbag diagnostics line sourcemembercontainertypesymbol forcecomplete sourcelocation locationopt cancellationtoken cancellationtoken line symbol forcecompletememberbylocation sourcelocation locationopt symbol member cancellationtoken cancellationtoken line c b i line c b t param line c b task innerinvoke task innerinvokewitharg task childtask c b object
| 0
|
4,717
| 24,342,131,828
|
IssuesEvent
|
2022-10-01 21:02:13
|
beekama/NutritionApp
|
https://api.github.com/repos/beekama/NutritionApp
|
closed
|
Unnecessary (?) IndexOutOfBoundsException try/catch in RecommendationsElement
|
maintainability
|
RecommendationsElement:104 has a catch for an IndexOutOfBoundsException, but I fail to see how this Exception can occur, and if it occurs it's likely an indication of another bug that should be fixed rather than catching the exception here.
|
True
|
Unnecessary (?) IndexOutOfBoundsException try/catch in RecommendationsElement - RecommendationsElement:104 has a catch for an IndexOutOfBoundsException, but I fail to see how this Exception can occur, and if it occurs it's likely an indication of another bug that should be fixed rather than catching the exception here.
|
main
|
unnecessary indexoutofboundsexception try catch in recommendationselement recommendationselement has a catch for an indexoutofboundsexception but i fail to see how this exception can occur and if it occurs it s likely an indication of another bug that should be fixed rather than catching the exception here
| 1
|
665,796
| 22,330,403,122
|
IssuesEvent
|
2022-06-14 14:07:03
|
consta-design-system/uikit
|
https://api.github.com/repos/consta-design-system/uikit
|
closed
|
Sidebar: The rootClassName property value is not applied to the root container
|
bug 🔥🔥🔥 priority
|
> Full [issue formatting rules](https://consta-uikit.vercel.app/?path=/docs/common-develop-issues--page)
**Bug description**
NB: the property is not described in the storybook documentation, but it is present in the types and the source code
If a value is set for the rootClassName property of the Sidebar component, the user-defined classes are not applied to the root container
<img width="575" alt="Screenshot 2022-05-17 at 18 56 26" src="https://user-images.githubusercontent.com/25363699/168855728-6fa845da-a5ec-4332-b483-955992bf79d9.png">
<img width="1428" alt="Screenshot 2022-05-17 at 18 55 43" src="https://user-images.githubusercontent.com/25363699/168855759-e712c857-12e2-4d4c-96ac-d0bb5f8d43a0.png">
**Expected behavior** (optional)
when a value is assigned to the rootClassName property, the assigned user-defined style classes are applied to the root container of the Sidebar component
> If the bug is in the code, fill in the fields below ↧
**Consta Kit version**
3.19.0
**Software details on the machine:**
- OS: MacOS BigSur 11.6.5
- Browser and its version: Google Chrome 101.0.4951.64
- React 17.0.2
**Additional information**
Anything you consider relevant.
|
1.0
|
Sidebar: The rootClassName property value is not applied to the root container - > Full [issue formatting rules](https://consta-uikit.vercel.app/?path=/docs/common-develop-issues--page)
**Bug description**
NB: the property is not described in the storybook documentation, but it is present in the types and the source code
If a value is set for the rootClassName property of the Sidebar component, the user-defined classes are not applied to the root container
<img width="575" alt="Screenshot 2022-05-17 at 18 56 26" src="https://user-images.githubusercontent.com/25363699/168855728-6fa845da-a5ec-4332-b483-955992bf79d9.png">
<img width="1428" alt="Screenshot 2022-05-17 at 18 55 43" src="https://user-images.githubusercontent.com/25363699/168855759-e712c857-12e2-4d4c-96ac-d0bb5f8d43a0.png">
**Expected behavior** (optional)
when a value is assigned to the rootClassName property, the assigned user-defined style classes are applied to the root container of the Sidebar component
> If the bug is in the code, fill in the fields below ↧
**Consta Kit version**
3.19.0
**Software details on the machine:**
- OS: MacOS BigSur 11.6.5
- Browser and its version: Google Chrome 101.0.4951.64
- React 17.0.2
**Additional information**
Anything you consider relevant.
|
non_main
|
sidebar the rootclassname property value is not applied to the root container full issue formatting rules bug description nb the property is not described in the storybook documentation but it is present in the types and the source code if a value is set for the rootclassname property of the sidebar component the user defined classes are not applied to the root container img width alt screenshot at src img width alt screenshot at src expected behavior optional when a value is assigned to the rootclassname property the assigned user defined style classes are applied to the root container of the sidebar component if the bug is in the code fill in the fields below ↧ consta kit version software details on the machine os macos bigsur browser and its version google chrome react additional information anything you consider relevant
| 0
|
4,772
| 24,585,875,478
|
IssuesEvent
|
2022-10-13 19:39:49
|
carbon-design-system/carbon
|
https://api.github.com/repos/carbon-design-system/carbon
|
closed
|
SideNavLink element to support `<Button>`
|
type: enhancement 💡 proposal: needs more research 🕵️♀️ status: waiting for maintainer response 💬
|
Use this template if you want to request a new feature, or a change to an
existing feature.
If you are reporting a bug or problem, please use the bug template instead.
### Summary
Currently, the `SideNavLink` component accepts an `element` prop which is defaulted to an `a`. This is helpful when utilizing Gatsby, as you can set the element prop to `Link` and use the `to` attribute. However, in some instances, such as in [CreateTearsheet](https://ibm-cloud-cognitive.netlify.app/?path=/story/cloud-cognitive-canary-createtearsheet--with-view-all-toggle), the `SideNavLink` is used to bring you to different parts of the inner content of the modal. It would be great if you could set the element prop to a `Button` but currently all of the side nav link styles are only for `a` tags.
Clarify if you are asking for design, development, or both design and
development.
### Justification
A button here makes sense because we aren't redirecting or sending users to another URL or location in the browser.
### "Must have" functionality
Prop for element accepts `Button` and applies all applicable styling
|
True
|
SideNavLink element to support `<Button>` - Use this template if you want to request a new feature, or a change to an
existing feature.
If you are reporting a bug or problem, please use the bug template instead.
### Summary
Currently, the `SideNavLink` component accepts an `element` prop which is defaulted to an `a`. This is helpful when utilizing Gatsby, as you can set the element prop to `Link` and use the `to` attribute. However, in some instances, such as in [CreateTearsheet](https://ibm-cloud-cognitive.netlify.app/?path=/story/cloud-cognitive-canary-createtearsheet--with-view-all-toggle), the `SideNavLink` is used to bring you to different parts of the inner content of the modal. It would be great if you could set the element prop to a `Button` but currently all of the side nav link styles are only for `a` tags.
Clarify if you are asking for design, development, or both design and
development.
### Justification
A button here makes sense because we aren't redirecting or sending users to another URL or location in the browser.
### "Must have" functionality
Prop for element accepts `Button` and applies all applicable styling
|
main
|
sidenavlink element to support use this template if you want to request a new feature or a change to an existing feature if you are reporting a bug or problem please use the bug template instead summary currently the sidenavlink component accepts an element prop which is defaulted to an a this is helpful when utilizing gatsby as you can set the element prop to link and use the to attribute however in some instances such as in the sidenavlink is used to bring you to different parts of the inner content of the modal it would be great if you could set the element prop to a button but currently all of the side nav link styles are only for a tags clarify if you are asking for design development or both design and development justification a button here makes sense because we aren t redirecting or sending users to another url or location in the browser must have functionality prop for element accepts button and applies all applicable styling
| 1
|
36,362
| 8,099,423,678
|
IssuesEvent
|
2018-08-11 08:27:29
|
cython/cython
|
https://api.github.com/repos/cython/cython
|
opened
|
Invalid code for default values of cdef class attributes
|
Code Generation defect
|
The following pure mode example leads to invalid C code:
```
import cython
@cython.cclass
class A:
c = cython.declare(cython.int, visibility='public') # works
d = cython.declare(cython.int, 5) # gives invalid assignment code
e = cython.declare(cython.int, 3, visibility='readonly')
```
The errors are:
```
cclass.cpp: In function ‘PyObject* PyInit_cclass()’:
cclass.cpp:2894:3: error: ‘d’ was not declared in this scope
d = 5;
^
cclass.cpp:2903:3: error: ‘e’ was not declared in this scope
e = 3;
^
```
Apparently, the default values lead to invalid assignment code being generated. If we want to support this, it should be handled in `tp_new()`, but it would be ok to make this an error for now as long as we cannot (easily) make it work.
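As an illustration of the `tp_new()`-style handling suggested above, here is a hedged sketch of a possible workaround: moving the defaults into `__init__` so no class-scope assignment code is generated. It is shown in plain Python (without `@cython.cclass` or `cython.declare`) so it runs uncompiled; treating this as equivalent to the compiled case is an assumption, not confirmed Cython behavior.

```python
# Workaround sketch (assumption): initialize typed attributes in __init__
# instead of giving them defaults at class scope, so no module-level
# assignment code needs to be generated for them.
class A:
    def __init__(self):
        self.c = 0  # was: cython.declare(cython.int, visibility='public')
        self.d = 5  # was: cython.declare(cython.int, 5)
        self.e = 3  # was: cython.declare(cython.int, 3, visibility='readonly')

a = A()
print(a.d, a.e)  # → 5 3
```

With the declarations reduced to class-scope types only (no default values), the invalid `d = 5;` / `e = 3;` assignments would never be emitted in module init.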
|
1.0
|
Invalid code for default values of cdef class attributes - The following pure mode example leads to invalid C code:
```
import cython
@cython.cclass
class A:
c = cython.declare(cython.int, visibility='public') # works
d = cython.declare(cython.int, 5) # gives invalid assignment code
e = cython.declare(cython.int, 3, visibility='readonly')
```
The errors are:
```
cclass.cpp: In function ‘PyObject* PyInit_cclass()’:
cclass.cpp:2894:3: error: ‘d’ was not declared in this scope
d = 5;
^
cclass.cpp:2903:3: error: ‘e’ was not declared in this scope
e = 3;
^
```
Apparently, the default values lead to invalid assignment code being generated. If we want to support this, it should be handled in `tp_new()`, but it would be ok to make this an error for now as long as we cannot (easily) make it work.
|
non_main
|
invalid code for default values of cdef class attributes the following pure mode example leads to invalid c code import cython cython cclass class a c cython declare cython int visibility public works d cython declare cython int gives invalid assignment code e cython declare cython int visibility readonly the errors are cclass cpp in function ‘pyobject pyinit cclass ’ cclass cpp error ‘d’ was not declared in this scope d cclass cpp error ‘e’ was not declared in this scope e apparently the default values lead to invalid assignment code being generated if we want to support this it should be handled in tp new but it would be ok to make this an error for now as long as we cannot easily make it work
| 0
|
217,088
| 16,679,302,795
|
IssuesEvent
|
2021-06-07 20:39:19
|
recognai/rubrix
|
https://api.github.com/repos/recognai/rubrix
|
closed
|
Reference Section
|
documentation
|
Including a Reference section. For now it can have links to Python API docs, and we can think of more elements to add. @dvsrepo told me that you were working on the API reference, @dcfidalgo , how is it going?
|
1.0
|
Reference Section - Including a Reference section. For now it can have links to Python API docs, and we can think of more elements to add. @dvsrepo told me that you were working on the API reference, @dcfidalgo , how is it going?
|
non_main
|
reference section including a reference section for now it can have links to python api docs and we can think of more elements to add dvsrepo told me that you were working on the api reference dcfidalgo how is it going
| 0
|
116,146
| 4,697,658,351
|
IssuesEvent
|
2016-10-12 10:05:53
|
gbif/ipt
|
https://api.github.com/repos/gbif/ipt
|
closed
|
IPT connection to Google Fusion Tables
|
Priority-Medium Type-Enhancement Won't-fix
|
```
What feature would like to see being added to the IPT?
I use Google Fusion Tables to visualize and manage a dataset I have. It allows me to
geocode/georeference, map, filter and search, some of which are not dissimilar from
what the original IPT portal could do.
I'd like to publish this dataset as a Darwin Core Archive through IPT, and republish
it without remapping for updates.
Would it be possible to support a connection to Google Fusion Tables, next to text
uploads and database connections? The Google Fusion Tables API supports JDBC.
```
Original issue reported on code.google.com by `peter.desmet.cubc` on 2011-05-26 18:13:40
|
1.0
|
IPT connection to Google Fusion Tables - ```
What feature would like to see being added to the IPT?
I use Google Fusion Tables to visualize and manage a dataset I have. It allows me to
geocode/georeference, map, filter and search, some of which are not dissimilar from
what the original IPT portal could do.
I'd like to publish this dataset as a Darwin Core Archive through IPT, and republish
it without remapping for updates.
Would it be possible to support a connection to Google Fusion Tables, next to text
uploads and database connections? The Google Fusion Tables API supports JDBC.
```
Original issue reported on code.google.com by `peter.desmet.cubc` on 2011-05-26 18:13:40
|
non_main
|
ipt connection to google fusion tables what feature would like to see being added to the ipt i use google fusion tables to visualize and manage a dataset i have it allows me to geocode georeference map filter and search some of which are not dissimilar from what the original ipt portal could do i d like to publish this dataset as a darwin core archive through ipt and republish it without remapping for updates would it be possible to support a connection to google fusion tables next to text uploads and database connections the google fusion tables api supports jdbc original issue reported on code google com by peter desmet cubc on
| 0
|
1,867
| 6,577,487,584
|
IssuesEvent
|
2017-09-12 01:15:46
|
ansible/ansible-modules-core
|
https://api.github.com/repos/ansible/ansible-modules-core
|
closed
|
sysctl: set in /sys, not in /proc
|
affects_2.0 bug_report waiting_on_maintainer
|
<!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
sysctl
##### ANSIBLE VERSION
```
ansible 2.0.1.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
```
[defaults]
nocows = 1
hostfile = foo-hosts.txt
fact_caching = jsonfile
fact_caching_connection = /tmp/anscache
fact_caching_timeout = 86400
hash_behaviour = replace
```
##### OS / ENVIRONMENT
ansible: debian 8.4/64, linux 3.16.7-ckt20-1+deb8u3
managed os: debian 8.4/x64, linux 3.16.7-ckt25-2
##### SUMMARY
try to set sysctl value that is only present in /sys, not in /proc/sys fails
##### STEPS TO REPRODUCE
```
- sysctl: name="kernel.mm.ksm.run" value=1 sysctl_set=yes state=present
```
##### EXPECTED RESULTS
should set the value in /sys/... and not in /proc/...
##### ACTUAL RESULTS
```
TASK [netdata : enable KSM] ****************************************************
task path: /root/devel/ansible-pb/roles/netdata/tasks/main.yml:97
<t.domain.tld> ESTABLISH SSH CONNECTION FOR USER: root
<t.domain.tld> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r -tt t.domain.tld '/bin/sh -c '"'"'mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1460374066.43-102994390096710 `" && echo "` echo $HOME/.ansible/tmp/ansible-tmp-1460374066.43-102994390096710 `"'"'"''
<t.domain.tld> PUT /tmp/tmp73kpL8 TO /root/.ansible/tmp/ansible-tmp-1460374066.43-102994390096710/sysctl
<t.domain.tld> SSH: EXEC sftp -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r '[t.domain.tld]'
<t.domain.tld> ESTABLISH SSH CONNECTION FOR USER: root
<t.domain.tld> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r -tt t.domain.tld '/bin/sh -c '"'"'LANG=en_US.UTF-8 GIT_COMMITTER_EMAIL=ansible@debian-workstation.domain2.tld LC_MESSAGES=en_US.UTF-8 GIT_AUTOCOMMIT=true LC_ALL=en_US.UTF-8 GIT_COMMITTER_NAME='"'"'"'"'"'"'"'"'ansible on debian-workstation.domain2.tld'"'"'"'"'"'"'"'"' /usr/bin/python /root/.ansible/tmp/ansible-tmp-1460374066.43-102994390096710/sysctl; rm -rf "/root/.ansible/tmp/ansible-tmp-1460374066.43-102994390096710/" > /dev/null 2>&1'"'"''
fatal: [t.domain.tld]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_args": {"ignoreerrors": false, "name": "kernel.mm.ksm.run", "reload": true, "state": "present", "sysctl_file": "/etc/sysctl.conf", "sysctl_set": true, "value": "1"}, "module_name": "sysctl"}, "msg": "setting kernel.mm.ksm.run failed: sysctl: cannot stat /proc/sys/kernel/mm/ksm/run: No such file or directory\n"}
```
info about the sys/proc files:
```
root@t ~ # ls -la /proc/sys/kernel/mm/ksm/run
ls: cannot access /proc/sys/kernel/mm/ksm/run: No such file or directory
root@t ~ # ls -la /sys/kernel/mm/ksm/run
-rw-r--r-- 1 root root 4096 Apr 11 13:25 /sys/kernel/mm/ksm/run
```
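To make the distinction above concrete, a minimal Python sketch (hypothetical helper names, not part of the actual sysctl module) that maps a dotted sysctl key to its candidate /proc and /sys paths — for the reported key, only the latter exists:

```python
# Hypothetical helpers: map a dotted sysctl name to candidate file paths.
def procfs_path(name):
    # Classic sysctl location: kernel.mm.ksm.run -> /proc/sys/kernel/mm/ksm/run
    return "/proc/sys/" + name.replace(".", "/")

def sysfs_path(name):
    # Same key resolved under /sys instead: kernel.mm.ksm.run -> /sys/kernel/mm/ksm/run
    return "/sys/" + name.replace(".", "/")

print(procfs_path("kernel.mm.ksm.run"))  # the path the module tried (missing here)
print(sysfs_path("kernel.mm.ksm.run"))   # the path that actually exists here
```

A module fix would presumably need to fall back to the /sys path when the /proc/sys entry is absent.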
|
True
|
sysctl: set in /sys, not in /proc - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
sysctl
##### ANSIBLE VERSION
```
ansible 2.0.1.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
```
[defaults]
nocows = 1
hostfile = foo-hosts.txt
fact_caching = jsonfile
fact_caching_connection = /tmp/anscache
fact_caching_timeout = 86400
hash_behaviour = replace
```
##### OS / ENVIRONMENT
ansible: debian 8.4/64, linux 3.16.7-ckt20-1+deb8u3
managed os: debian 8.4/x64, linux 3.16.7-ckt25-2
##### SUMMARY
try to set sysctl value that is only present in /sys, not in /proc/sys fails
##### STEPS TO REPRODUCE
```
- sysctl: name="kernel.mm.ksm.run" value=1 sysctl_set=yes state=present
```
##### EXPECTED RESULTS
should set the value in /sys/... and not in /proc/...
##### ACTUAL RESULTS
```
TASK [netdata : enable KSM] ****************************************************
task path: /root/devel/ansible-pb/roles/netdata/tasks/main.yml:97
<t.domain.tld> ESTABLISH SSH CONNECTION FOR USER: root
<t.domain.tld> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r -tt t.domain.tld '/bin/sh -c '"'"'mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1460374066.43-102994390096710 `" && echo "` echo $HOME/.ansible/tmp/ansible-tmp-1460374066.43-102994390096710 `"'"'"''
<t.domain.tld> PUT /tmp/tmp73kpL8 TO /root/.ansible/tmp/ansible-tmp-1460374066.43-102994390096710/sysctl
<t.domain.tld> SSH: EXEC sftp -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r '[t.domain.tld]'
<t.domain.tld> ESTABLISH SSH CONNECTION FOR USER: root
<t.domain.tld> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r -tt t.domain.tld '/bin/sh -c '"'"'LANG=en_US.UTF-8 GIT_COMMITTER_EMAIL=ansible@debian-workstation.domain2.tld LC_MESSAGES=en_US.UTF-8 GIT_AUTOCOMMIT=true LC_ALL=en_US.UTF-8 GIT_COMMITTER_NAME='"'"'"'"'"'"'"'"'ansible on debian-workstation.domain2.tld'"'"'"'"'"'"'"'"' /usr/bin/python /root/.ansible/tmp/ansible-tmp-1460374066.43-102994390096710/sysctl; rm -rf "/root/.ansible/tmp/ansible-tmp-1460374066.43-102994390096710/" > /dev/null 2>&1'"'"''
fatal: [t.domain.tld]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_args": {"ignoreerrors": false, "name": "kernel.mm.ksm.run", "reload": true, "state": "present", "sysctl_file": "/etc/sysctl.conf", "sysctl_set": true, "value": "1"}, "module_name": "sysctl"}, "msg": "setting kernel.mm.ksm.run failed: sysctl: cannot stat /proc/sys/kernel/mm/ksm/run: No such file or directory\n"}
```
info about the sys/proc files:
```
root@t ~ # ls -la /proc/sys/kernel/mm/ksm/run
ls: cannot access /proc/sys/kernel/mm/ksm/run: No such file or directory
root@t ~ # ls -la /sys/kernel/mm/ksm/run
-rw-r--r-- 1 root root 4096 Apr 11 13:25 /sys/kernel/mm/ksm/run
```
|
main
|
sysctl set in sys not in proc issue type bug report component name sysctl ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration nocows hostfile foo hosts txt fact caching jsonfile fact caching connection tmp anscache fact caching timeout hash behaviour replace os environment ansible debian linux managed os debian linux summary try to set sysctl value that is only present in sys not in proc sys fails steps to reproduce sysctl name kernel mm ksm run value sysctl set yes state present expected results sould set the value in sys and not in proc actual results task task path root devel ansible pb roles netdata tasks main yml establish ssh connection for user root ssh exec ssh c vvv o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user root o connecttimeout o controlpath root ansible cp ansible ssh h p r tt t domain tld bin sh c mkdir p echo home ansible tmp ansible tmp echo echo home ansible tmp ansible tmp put tmp to root ansible tmp ansible tmp sysctl ssh exec sftp b c vvv o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user root o connecttimeout o controlpath root ansible cp ansible ssh h p r establish ssh connection for user root ssh exec ssh c vvv o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user root o connecttimeout o controlpath root ansible cp ansible ssh h p r tt t domain tld bin sh c lang en us utf git committer email ansible debian workstation tld lc messages en us utf git autocommit true lc all en us utf git committer name ansible on debian workstation tld usr bin python root ansible tmp ansible tmp 
sysctl rm rf root ansible tmp ansible tmp dev null fatal failed changed false failed true invocation module args ignoreerrors false name kernel mm ksm run reload true state present sysctl file etc sysctl conf sysctl set true value module name sysctl msg setting kernel mm ksm run failed sysctl cannot stat proc sys kernel mm ksm run no such file or directory n info about the sys proc files root t ls la proc sys kernel mm ksm run ls cannot access proc sys kernel mm ksm run no such file or directory root t ls la sys kernel mm ksm run rw r r root root apr sys kernel mm ksm run
| 1
|
5,415
| 27,183,162,388
|
IssuesEvent
|
2023-02-18 22:13:38
|
arcticicestudio/nord
|
https://api.github.com/repos/arcticicestudio/nord
|
closed
|
`nordtheme` organization migration
|
type-task context-workflow scope-maintainability
|
As part of the [“Northern Post — The state and roadmap of Nord“][1] announcement, this repository will be migrated to [the `nordtheme` GitHub organization][2].
This issue only tracks the actual move as well as preparations steps to do so. The detailed plan, including [tasklists that serve as “epics“][3], will follow later on for all Nord repositories, published and announced on all “new“ and current community platforms.
[1]: https://github.com/arcticicestudio/nord/issues/180
[2]: https://github.com/nordtheme
[3]: https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/about-task-lists
|
True
|
`nordtheme` organization migration - As part of the [“Northern Post — The state and roadmap of Nord“][1] announcement, this repository will be migrated to [the `nordtheme` GitHub organization][2].
This issue only tracks the actual move as well as preparations steps to do so. The detailed plan, including [tasklists that serve as “epics“][3], will follow later on for all Nord repositories, published and announced on all “new“ and current community platforms.
[1]: https://github.com/arcticicestudio/nord/issues/180
[2]: https://github.com/nordtheme
[3]: https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/about-task-lists
|
main
|
nordtheme organization migration as part of the announcement this repository will be migrated to this issue only tracks the actual move as well as preparations steps to do so the detailed plan including will follow later on for all nord repositories published and announced on all “new“ and current community platforms
| 1
|
312,620
| 26,873,404,020
|
IssuesEvent
|
2023-02-04 18:55:30
|
MPMG-DCC-UFMG/F01
|
https://api.github.com/repos/MPMG-DCC-UFMG/F01
|
closed
|
Generalization test for the tag Informações Institucionais - Leis Municipais - Mantena
|
generalization test development template - Betha (26) tag - Informações Institucionais subtag - Leis Municipais
|
DoD: Perform the generalization test of the validator for the tag Informações Institucionais - Leis Municipais for the municipality of Mantena.
|
1.0
|
Generalization test for the tag Informações Institucionais - Leis Municipais - Mantena - DoD: Perform the generalization test of the validator for the tag Informações Institucionais - Leis Municipais for the municipality of Mantena.
|
non_main
|
generalization test for the tag informações institucionais leis municipais mantena dod perform the generalization test of the validator for the tag informações institucionais leis municipais for the municipality of mantena
| 0
|
144,840
| 13,127,806,348
|
IssuesEvent
|
2020-08-06 11:04:59
|
AlexKMarshall/regMan
|
https://api.github.com/repos/AlexKMarshall/regMan
|
opened
|
Improve readme documentation
|
documentation
|
To create a clean build of this project to test, you need to set up a database, and you need an admin user and password, which currently has to be created through Auth0 by the repo owner
This should be made clear in the instructions
|
1.0
|
Improve readme documentation - To create a clean build of this project to test, you need to set up a database, and you need an admin user and password, which currently has to be created through Auth0 by the repo owner
This should be made clear in the instructions
|
non_main
|
improve readme documentation to create a clean build of this project to test you need to set up a database and you need an admin user and password which has to be created through by the repo owner currently this should be made clear in the instructions
| 0
|
90,904
| 8,287,005,504
|
IssuesEvent
|
2018-09-19 07:30:14
|
humera987/HumTestData
|
https://api.github.com/repos/humera987/HumTestData
|
opened
|
fx_test_proj : api_v1_dashboard_count-time_get_query_param_sql_injection_MySQL_page
|
fx_test_proj
|
Project : fx_test_proj
Job : UAT
Env : UAT
Region : FXLabs/US_WEST_1
Result : fail
Status Code : 200
Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Wed, 19 Sep 2018 07:30:13 GMT]}
Endpoint : http://13.56.210.25/api/v1/dashboard/count-time?page=
Request :
Response :
{
"requestId" : "None",
"requestTime" : "2018-09-19T07:30:13.865+0000",
"errors" : false,
"messages" : [ ],
"data" : 5711631,
"totalPages" : 0,
"totalElements" : 0
}
Logs :
Assertion [@StatusCode != 200] resolved-to [200 != 200] result [Failed]
--- FX Bot ---
|
1.0
|
fx_test_proj : api_v1_dashboard_count-time_get_query_param_sql_injection_MySQL_page - Project : fx_test_proj
Job : UAT
Env : UAT
Region : FXLabs/US_WEST_1
Result : fail
Status Code : 200
Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Wed, 19 Sep 2018 07:30:13 GMT]}
Endpoint : http://13.56.210.25/api/v1/dashboard/count-time?page=
Request :
Response :
{
"requestId" : "None",
"requestTime" : "2018-09-19T07:30:13.865+0000",
"errors" : false,
"messages" : [ ],
"data" : 5711631,
"totalPages" : 0,
"totalElements" : 0
}
Logs :
Assertion [@StatusCode != 200] resolved-to [200 != 200] result [Failed]
--- FX Bot ---
|
non_main
|
fx test proj api dashboard count time get query param sql injection mysql page project fx test proj job uat env uat region fxlabs us west result fail status code headers x content type options x xss protection cache control pragma expires x frame options content type transfer encoding date endpoint request response requestid none requesttime errors false messages data totalpages totalelements logs assertion resolved to result fx bot
| 0
|
19,436
| 3,203,036,236
|
IssuesEvent
|
2015-10-02 16:54:26
|
dart-lang/sdk
|
https://api.github.com/repos/dart-lang/sdk
|
closed
|
No completion for named parameters
|
Analyzer-Completion Area-Analyzer Priority-Medium Type-Defect
|
Expected to have `format` and `type` in completion
```
import 'dart:io';
class RequestBody {
final ContentType format;
final Type type;
const RequestBody({this.format, this.type});
}
@RequestBody(<caret>)
```

(originally filed to the WebStorm issue tracker: https://youtrack.jetbrains.com/issue/WEB-16369)
|
1.0
|
No completion for named parameters - Expected to have `format` and `type` in completion
```
import 'dart:io';
class RequestBody {
final ContentType format;
final Type type;
const RequestBody({this.format, this.type});
}
@RequestBody(<caret>)
```

(originally filed to the WebStorm issue tracker: https://youtrack.jetbrains.com/issue/WEB-16369)
|
non_main
|
no completion for named parameters expected to have format and type in completion import dart io class requestbody final contenttype format final type type const requestbody this format this type requestbody originally filed to the webstorm issue tracker
| 0
|
110,917
| 9,483,473,481
|
IssuesEvent
|
2019-04-22 00:36:32
|
NayRojas/LIM008-fe-burger-queen
|
https://api.github.com/repos/NayRojas/LIM008-fe-burger-queen
|
closed
|
View the summary and the total of the purchase
|
CSS3 JS Testing angular
|
- [x] Create the interface for the order-items component
- [x] Create the interface for the order-total component
- [x] Create a price-sum function
- [x] Create a template to render the elements selected from the menu component
|
1.0
|
Ver resumen y el total de la compra - - [x] Crear la interfaz del componente order-items
- [x] Crear la interfaz del componente order-total
- [x] Crear fn de suma de precios
- [x] Crear template para pintar los elementos seleccionados del componente menu
|
non_main
|
ver resumen y el total de la compra crear la interfaz del componente order items crear la interfaz del componente order total crear fn de suma de precios crear template para pintar los elementos seleccionados del componente menu
| 0
|
981
| 4,746,537,643
|
IssuesEvent
|
2016-10-21 11:33:45
|
ansible/ansible-modules-core
|
https://api.github.com/repos/ansible/ansible-modules-core
|
closed
|
YAML support not working correctly
|
affects_2.2 aws bug_report cloud waiting_on_maintainer
|
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-modules-core/cloud/amazon/cloudformation
##### ANSIBLE VERSION
```
ansible 2.2.0 (devel 29fda4be1e) last updated 2016/09/16 18:26:39 (GMT +000)
lib/ansible/modules/core: (detached HEAD 2e1e3562b9) last updated 2016/09/16 18:26:41 (GMT +000)
lib/ansible/modules/extras: (detached HEAD 9b5c64e240) last updated 2016/09/16 18:26:42 (GMT +000)
config file = /opt/ansible_dev/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
n/a
##### OS / ENVIRONMENT
Amazon Linux
##### SUMMARY
When using the CloudFormation template in YAML format, the scripts are encountering an error. It appears that line 277 of the 'cloudformation.py' module attempts to parse the template as valid YAML with the standard yaml library and then convert that into JSON. This is completely unnecessary as the CloudFormation engine will throw back an error if there is some kind of formatting issue. In fact, attempting to convert the YAML file back to JSON will almost always break the format in some way.
##### WORKAROUND:
Commenting out lines 276 and 277 allows the module to work fine using a YAML formatted template.
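The core problem — that CloudFormation's YAML short-form intrinsics have no JSON representation — can be sketched with only the standard library. The snippet below (an illustration, not the module's actual code path) shows that a template fragment containing `!Ref` is simply not valid JSON, so any blind YAML-to-JSON conversion must either raise or mangle it:

```python
import json

# CloudFormation YAML supports short-form intrinsic functions such as
# `!Ref WindowsAMI`.  These tags have no JSON equivalent, so a round-trip
# through a generic YAML parser followed by json.dumps (the approach the
# report describes in cloudformation.py) cannot preserve them.
snippet = '{"ImageId": !Ref WindowsAMI}'

try:
    json.loads(snippet)   # `!Ref` is not legal JSON
    short_form_survives = True
except ValueError:        # json.JSONDecodeError subclasses ValueError
    short_form_survives = False
```

This is why passing the raw template body straight through (the reporter's workaround) works: the CloudFormation service parses its own dialect server-side.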
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes below -->
```
--- Sample playbook ---
---
- hosts: localhost
tasks:
- name: Build EC2 windows instance
cloudformation:
stack_name: "ansibleTestInstance"
state: "present"
region: "us-east-1"
disable_rollback: true
template: "files/generic_cf_template.yml"
template_format: "yaml"
template_parameters:
InstanceType: "t2.medium"
KeyName: "your-master-key"
RDPSecurityGroup: "sg-12345678"
DestinationSubnet: "subnet-12345678"
WindowsAMI: "ami-ee7805f9"
InstanceIAMRoleName: "test_iam_role"
tags:
Name: "ansible-test-instance"
--- Sample CloudFormation.yml file: ---
AWSTemplateFormatVersion: '2010-09-09'
Description: >
This template creates Amazon EC2 Windows instance and related resources. You will
be billed for the AWS resources used if you create a stack from this template.
Outputs:
EC2InstanceId:
Description: The ID of the instance created.
Value: !Ref WindowsServer
Parameters:
InstanceType:
ConstraintDescription: Must be a valid EC2 instance type.
Default: t2.medium
Description: Amazon EC2 instance type
Type: String
KeyName:
ConstraintDescription: must be the name of an existing EC2 KeyPair.
Description: Name of an existing EC2 KeyPair
Type: AWS::EC2::KeyPair::KeyName
RDPSecurityGroup:
Description: Select the security group that will allow RDP connections to this machine.
Type: AWS::EC2::SecurityGroup::Id
DestinationSubnet:
Description: Select the subnet where to place the new instance.
Type: AWS::EC2::Subnet::Id
WindowsAMI:
Description: Enter the id of the Windows AMI you wish to use.
Default: ami-ee7805f9
Type: AWS::EC2::Image::Id
InstanceIAMRoleName:
Description: Enter the name of the IAM Role you wish to use for this instance. (NOTE this is just the role name, not an ARN)
Type: String
Resources:
WindowsServer:
Type: AWS::EC2::Instance
Properties:
ImageId:
Ref: WindowsAMI
InstanceType:
Ref: InstanceType
KeyName:
Ref: KeyName
SubnetId:
Ref: DestinationSubnet
SecurityGroupIds:
- Ref: RDPSecurityGroup
IamInstanceProfile:
Ref: InstanceIAMRoleName
```
|
True
|
YAML support not working correctly - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-modules-core/cloud/amazon/cloudformation
##### ANSIBLE VERSION
```
ansible 2.2.0 (devel 29fda4be1e) last updated 2016/09/16 18:26:39 (GMT +000)
lib/ansible/modules/core: (detached HEAD 2e1e3562b9) last updated 2016/09/16 18:26:41 (GMT +000)
lib/ansible/modules/extras: (detached HEAD 9b5c64e240) last updated 2016/09/16 18:26:42 (GMT +000)
config file = /opt/ansible_dev/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
n/a
##### OS / ENVIRONMENT
Amazon Linux
##### SUMMARY
When using the CloudFormation template in YAML format, the scripts are encountering an error. It appears that line 277 of the 'cloudformation.py' module attempts to parse the template as valid YAML with the standard yaml library and then convert that into JSON. This is completely unnecessary as the CloudFormation engine will throw back an error if there is some kind of formatting issue. In fact, attempting to convert the YAML file back to JSON will almost always break the format in some way.
##### WORKAROUND:
Commenting out lines 276 and 277 allows the module to work fine using a YAML formatted template.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes below -->
```
--- Sample playbook ---
---
- hosts: localhost
tasks:
- name: Build EC2 windows instance
cloudformation:
stack_name: "ansibleTestInstance"
state: "present"
region: "us-east-1"
disable_rollback: true
template: "files/generic_cf_template.yml"
template_format: "yaml"
template_parameters:
InstanceType: "t2.medium"
KeyName: "your-master-key"
RDPSecurityGroup: "sg-12345678"
DestinationSubnet: "subnet-12345678"
WindowsAMI: "ami-ee7805f9"
InstanceIAMRoleName: "test_iam_role"
tags:
Name: "ansible-test-instance"
--- Sample CloudFormation.yml file: ---
AWSTemplateFormatVersion: '2010-09-09'
Description: >
This template creates Amazon EC2 Windows instance and related resources. You will
be billed for the AWS resources used if you create a stack from this template.
Outputs:
EC2InstanceId:
Description: The ID of the instance created.
Value: !Ref WindowsServer
Parameters:
InstanceType:
ConstraintDescription: Must be a valid EC2 instance type.
Default: t2.medium
Description: Amazon EC2 instance type
Type: String
KeyName:
ConstraintDescription: must be the name of an existing EC2 KeyPair.
Description: Name of an existing EC2 KeyPair
Type: AWS::EC2::KeyPair::KeyName
RDPSecurityGroup:
Description: Select the security group that will allow RDP connections to this machine.
Type: AWS::EC2::SecurityGroup::Id
DestinationSubnet:
Description: Select the subnet where to place the new instance.
Type: AWS::EC2::Subnet::Id
WindowsAMI:
Description: Enter the id of the Windows AMI you wish to use.
Default: ami-ee7805f9
Type: AWS::EC2::Image::Id
InstanceIAMRoleName:
Description: Enter the name of the IAM Role you wish to use for this instance. (NOTE this is just the role name, not an ARN)
Type: String
Resources:
WindowsServer:
Type: AWS::EC2::Instance
Properties:
ImageId:
Ref: WindowsAMI
InstanceType:
Ref: InstanceType
KeyName:
Ref: KeyName
SubnetId:
Ref: DestinationSubnet
SecurityGroupIds:
- Ref: RDPSecurityGroup
IamInstanceProfile:
Ref: InstanceIAMRoleName
```
|
main
|
yaml support not working correctly issue type bug report component name ansible modules core cloud amazon cloudformation ansible version ansible devel last updated gmt lib ansible modules core detached head last updated gmt lib ansible modules extras detached head last updated gmt config file opt ansible dev ansible cfg configured module search path default w o overrides configuration n a os environment amazon linux summary when using the cloudformation template in yaml format the scripts are encountering an error it appears that line of the cloudformation py module attempts to parse the template as valid yaml with the standard yaml library and then convert that into json this is completely unnecessary as the cloudformation engine will throw back an error if there is some kind of formatting issue in fact attempting to convert the yaml file back to json will almost always break the format in some way workaround commenting out lines and allows the module to work fine using a yaml formatted template steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used sample playbook hosts localhost tasks name build windows instance cloudformation stack name ansibletestinstance state present region us east disable rollback true template files generic cf template yml template format yaml template parameters instancetype medium keyname your master key rdpsecuritygroup sg destinationsubnet subnet windowsami ami instanceiamrolename test iam role tags name ansible test instance sample cloudformation yml file awstemplateformatversion description this template creates amazon windows instance and related resources you will be billed for the aws resources used if you create a stack from this template outputs description the id of the instance created value ref windowsserver parameters instancetype constraintdescription must be a valid instance type default medium description amazon instance type type string keyname 
constraintdescription must be the name of an existing keypair description name of an existing keypair type aws keypair keyname rdpsecuritygroup description select the security group that will allow rdp connections to this machine type aws securitygroup id destinationsubnet description select the subnet where to place the new instance type aws subnet id windowsami description enter the id of the windows ami you wish to use default ami type aws image id instanceiamrolename description enter the name of the iam role you wish to use for this instance note this is just the role name not an arn type string resources windowsserver type aws instance properties imageid ref windowsami instancetype ref instancetype keyname ref keyname subnetid ref destinationsubnet securitygroupids ref rdpsecuritygroup iaminstanceprofile ref instanceiamrolename
| 1
|
51,980
| 13,710,433,544
|
IssuesEvent
|
2020-10-02 01:04:11
|
BrianMcDonaldWS/deck.gl
|
https://api.github.com/repos/BrianMcDonaldWS/deck.gl
|
opened
|
WS-2018-0628 (Medium) detected in marked-0.3.19.js
|
security vulnerability
|
## WS-2018-0628 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>marked-0.3.19.js</b></p></summary>
<p>A markdown parser built for speed</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/marked/0.3.19/marked.js">https://cdnjs.cloudflare.com/ajax/libs/marked/0.3.19/marked.js</a></p>
<p>Path to dependency file: deck.gl/website/node_modules/marked/www/demo.html</p>
<p>Path to vulnerable library: deck.gl/website/node_modules/marked/www/../lib/marked.js</p>
<p>
Dependency Hierarchy:
- :x: **marked-0.3.19.js** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
marked before 0.4.0 is vulnerable to Regular Expression Denial of Service (REDoS) through heading in marked.js.
<p>Publish Date: 2018-04-16
<p>URL: <a href=https://github.com/markedjs/marked/commit/09afabf69c6d0c919c03443f47bdfe476566105d>WS-2018-0628</a></p>
</p>
</details>
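The class of bug behind this advisory — regular-expression denial of service via catastrophic backtracking — can be illustrated with a stand-in pattern. The regex below is *not* marked's actual heading rule, just a minimal pattern with the same nested-quantifier shape:

```python
import re

# Nested quantifiers like (a+)+ make a backtracking engine try exponentially
# many ways to partition the input once a match fails at the end.
redos = re.compile(r'^(a+)+$')

# Benign input matches instantly.
ok = bool(redos.match('a' * 20))

# A near-miss input such as 'a' * 30 + '!' would force exponential
# backtracking in Python's re engine -- that failure mode is the DoS vector,
# so it is deliberately not executed here.
```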
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
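The 5.3 base score above can be re-derived from the listed metrics. The sketch below applies the CVSS v3.1 base-score equations with the specification's metric weights (AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L):

```python
import math

# CVSS v3.1 metric weights for the values listed in this advisory.
AV, AC, PR, UI = 0.85, 0.77, 0.85, 0.85   # Network / Low / None / None
C, I, A = 0.0, 0.0, 0.22                  # None / None / Low

iss = 1 - (1 - C) * (1 - I) * (1 - A)     # impact sub-score
impact = 6.42 * iss                       # scope is Unchanged
exploitability = 8.22 * AV * AC * PR * UI

# Spec's "roundup": smallest number to one decimal >= the raw value.
if impact <= 0:
    base = 0.0
else:
    base = math.ceil(min(impact + exploitability, 10) * 10) / 10
```

With these inputs `base` comes out to 5.3, matching the score in the report.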
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/markedjs/marked/releases/tag/0.4.0">https://github.com/markedjs/marked/releases/tag/0.4.0</a></p>
<p>Release Date: 2018-04-16</p>
<p>Fix Resolution: marked - 0.4.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"JavaScript","packageName":"marked","packageVersion":"0.3.19","isTransitiveDependency":false,"dependencyTree":"marked:0.3.19","isMinimumFixVersionAvailable":true,"minimumFixVersion":"marked - 0.4.0"}],"vulnerabilityIdentifier":"WS-2018-0628","vulnerabilityDetails":"marked before 0.4.0 is vulnerable to Regular Expression Denial of Service (REDoS) through heading in marked.js.","vulnerabilityUrl":"https://github.com/markedjs/marked/commit/09afabf69c6d0c919c03443f47bdfe476566105d","cvss3Severity":"medium","cvss3Score":"5.3","cvss3Metrics":{"A":"Low","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
True
|
WS-2018-0628 (Medium) detected in marked-0.3.19.js - ## WS-2018-0628 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>marked-0.3.19.js</b></p></summary>
<p>A markdown parser built for speed</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/marked/0.3.19/marked.js">https://cdnjs.cloudflare.com/ajax/libs/marked/0.3.19/marked.js</a></p>
<p>Path to dependency file: deck.gl/website/node_modules/marked/www/demo.html</p>
<p>Path to vulnerable library: deck.gl/website/node_modules/marked/www/../lib/marked.js</p>
<p>
Dependency Hierarchy:
- :x: **marked-0.3.19.js** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
marked before 0.4.0 is vulnerable to Regular Expression Denial of Service (REDoS) through heading in marked.js.
<p>Publish Date: 2018-04-16
<p>URL: <a href=https://github.com/markedjs/marked/commit/09afabf69c6d0c919c03443f47bdfe476566105d>WS-2018-0628</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/markedjs/marked/releases/tag/0.4.0">https://github.com/markedjs/marked/releases/tag/0.4.0</a></p>
<p>Release Date: 2018-04-16</p>
<p>Fix Resolution: marked - 0.4.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"JavaScript","packageName":"marked","packageVersion":"0.3.19","isTransitiveDependency":false,"dependencyTree":"marked:0.3.19","isMinimumFixVersionAvailable":true,"minimumFixVersion":"marked - 0.4.0"}],"vulnerabilityIdentifier":"WS-2018-0628","vulnerabilityDetails":"marked before 0.4.0 is vulnerable to Regular Expression Denial of Service (REDoS) through heading in marked.js.","vulnerabilityUrl":"https://github.com/markedjs/marked/commit/09afabf69c6d0c919c03443f47bdfe476566105d","cvss3Severity":"medium","cvss3Score":"5.3","cvss3Metrics":{"A":"Low","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
non_main
|
ws medium detected in marked js ws medium severity vulnerability vulnerable library marked js a markdown parser built for speed library home page a href path to dependency file deck gl website node modules marked www demo html path to vulnerable library deck gl website node modules marked www lib marked js dependency hierarchy x marked js vulnerable library vulnerability details marked before is vulnerable to regular expression denial of service redos through heading in marked js publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution marked isopenpronvulnerability false ispackagebased true isdefaultbranch true packages vulnerabilityidentifier ws vulnerabilitydetails marked before is vulnerable to regular expression denial of service redos through heading in marked js vulnerabilityurl
| 0
|
2,138
| 7,359,292,008
|
IssuesEvent
|
2018-03-10 04:29:33
|
beefproject/beef
|
https://api.github.com/repos/beefproject/beef
|
opened
|
Add CONTRIBUTING.md
|
Maintainability
|
Add a `CONTRIBUTING.md` file with contribution guidelines.
Describe:
* Project style
* Ruby best practice
* Git best practice
* Tests
Metasploit's [`CONTRIBUTING.md`](https://github.com/rapid7/metasploit-framework/blob/master/CONTRIBUTING.md) would be ~~good to plagiarize~~ a good place to start.
|
True
|
Add CONTRIBUTING.md - Add a `CONTRIBUTING.md` file with contribution guidelines.
Describe:
* Project style
* Ruby best practice
* Git best practice
* Tests
Metasploit's [`CONTRIBUTING.md`](https://github.com/rapid7/metasploit-framework/blob/master/CONTRIBUTING.md) would be ~~good to plagiarize~~ a good place to start.
|
main
|
add contributing md add a contributing md file with contribution guidelines describe project style ruby best practice git best practice tests metasploit s would be good to plagiarize a good place to start
| 1
|
95,019
| 16,064,666,647
|
IssuesEvent
|
2021-04-23 17:08:33
|
NixOS/nixpkgs
|
https://api.github.com/repos/NixOS/nixpkgs
|
opened
|
Vulnerability roundup 101: openexr-2.5.3: 7 advisories [5.5]
|
1.severity: security
|
[search](https://search.nix.gsc.io/?q=openexr&i=fosho&repos=NixOS-nixpkgs), [files](https://github.com/NixOS/nixpkgs/search?utf8=%E2%9C%93&q=openexr+in%3Apath&type=Code)
* [ ] [CVE-2021-3477](https://nvd.nist.gov/vuln/detail/CVE-2021-3477) CVSSv3=5.5 (nixos-20.09, nixos-unstable)
* [ ] [CVE-2021-3478](https://nvd.nist.gov/vuln/detail/CVE-2021-3478) CVSSv3=5.5 (nixos-20.09, nixos-unstable)
* [ ] [CVE-2021-3479](https://nvd.nist.gov/vuln/detail/CVE-2021-3479) CVSSv3=5.5 (nixos-20.09, nixos-unstable)
* [ ] [CVE-2021-3474](https://nvd.nist.gov/vuln/detail/CVE-2021-3474) CVSSv3=5.3 (nixos-20.09, nixos-unstable)
* [ ] [CVE-2021-3475](https://nvd.nist.gov/vuln/detail/CVE-2021-3475) CVSSv3=5.3 (nixos-20.09, nixos-unstable)
* [ ] [CVE-2021-3476](https://nvd.nist.gov/vuln/detail/CVE-2021-3476) CVSSv3=5.3 (nixos-20.09, nixos-unstable)
* [ ] [CVE-2021-20296](https://nvd.nist.gov/vuln/detail/CVE-2021-20296) CVSSv3=5.3 (nixos-20.09, nixos-unstable)
Scanned versions: nixos-20.09: c7e905b6a97; nixos-unstable: f5e8bdd07d1.
|
True
|
Vulnerability roundup 101: openexr-2.5.3: 7 advisories [5.5] - [search](https://search.nix.gsc.io/?q=openexr&i=fosho&repos=NixOS-nixpkgs), [files](https://github.com/NixOS/nixpkgs/search?utf8=%E2%9C%93&q=openexr+in%3Apath&type=Code)
* [ ] [CVE-2021-3477](https://nvd.nist.gov/vuln/detail/CVE-2021-3477) CVSSv3=5.5 (nixos-20.09, nixos-unstable)
* [ ] [CVE-2021-3478](https://nvd.nist.gov/vuln/detail/CVE-2021-3478) CVSSv3=5.5 (nixos-20.09, nixos-unstable)
* [ ] [CVE-2021-3479](https://nvd.nist.gov/vuln/detail/CVE-2021-3479) CVSSv3=5.5 (nixos-20.09, nixos-unstable)
* [ ] [CVE-2021-3474](https://nvd.nist.gov/vuln/detail/CVE-2021-3474) CVSSv3=5.3 (nixos-20.09, nixos-unstable)
* [ ] [CVE-2021-3475](https://nvd.nist.gov/vuln/detail/CVE-2021-3475) CVSSv3=5.3 (nixos-20.09, nixos-unstable)
* [ ] [CVE-2021-3476](https://nvd.nist.gov/vuln/detail/CVE-2021-3476) CVSSv3=5.3 (nixos-20.09, nixos-unstable)
* [ ] [CVE-2021-20296](https://nvd.nist.gov/vuln/detail/CVE-2021-20296) CVSSv3=5.3 (nixos-20.09, nixos-unstable)
Scanned versions: nixos-20.09: c7e905b6a97; nixos-unstable: f5e8bdd07d1.
|
non_main
|
vulnerability roundup openexr advisories nixos nixos unstable nixos nixos unstable nixos nixos unstable nixos nixos unstable nixos nixos unstable nixos nixos unstable nixos nixos unstable scanned versions nixos nixos unstable
| 0
|
236,609
| 19,562,253,635
|
IssuesEvent
|
2022-01-03 17:52:40
|
Julian/lean.nvim
|
https://api.github.com/repos/Julian/lean.nvim
|
closed
|
Speed up CI by getting greadlink in a hackier way
|
tests
|
macOS CI runs spend [1 minute](https://github.com/Julian/lean.nvim/runs/4692356820?check_suite_focus=true) fetching `coreutils` from Homebrew which can at times be more than half the total runtime.
This is because `leanpkg` wants `greadlink` (so that it can run `readlink -f`).
We can speed this up by perhaps just grabbing `greadlink` some hackier way, or by seeing whether [caching](https://github.com/actions/cache/) the Homebrew cache helps at all.
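For context, the only capability `leanpkg` needs from `greadlink` is canonical path resolution (`readlink -f`). A portable stand-in, shown here as a sketch rather than what leanpkg actually does, is `os.path.realpath`:

```python
import os
import tempfile

# os.path.realpath resolves symlinks to a canonical absolute path,
# which is the behaviour `readlink -f` / `greadlink -f` provides.
with tempfile.TemporaryDirectory() as d:
    target = os.path.join(d, "target.txt")
    open(target, "w").close()

    link = os.path.join(d, "link.txt")
    os.symlink(target, link)

    # Both the link and the target resolve to the same canonical path.
    same = os.path.realpath(link) == os.path.realpath(target)
```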
|
1.0
|
Speed up CI by getting greadlink in a hackier way - macOS CI runs spend [1 minute](https://github.com/Julian/lean.nvim/runs/4692356820?check_suite_focus=true) fetching `coreutils` from Homebrew which can at times be more than half the total runtime.
This is because `leanpkg` wants `greadlink` (so that it can run `readlink -f`).
We can speed this up by perhaps just grabbing `greadlink` some hackier way, or by seeing whether [caching](https://github.com/actions/cache/) the Homebrew cache helps at all.
|
non_main
|
speed up ci by getting greadlink in a hackier way macos ci runs spend fetching coreutils from homebrew which can at times be more than half the total runtime this is because leanpkg wants greadlink so that it can run readlink f we can speed this up by perhaps just grabbing greadlink some hackier way or by seeing whether the homebrew cache helps at all
| 0
|
2,942
| 10,563,746,312
|
IssuesEvent
|
2019-10-04 21:56:49
|
Unleash/unleash-client-core
|
https://api.github.com/repos/Unleash/unleash-client-core
|
opened
|
Add Code of Conduct
|
maintainers
|
Create a document that establishes expectations for behavior for your project’s participants. Adopting, and enforcing, a code of conduct can help create a positive social atmosphere for your community. Transparency should be primordial.
|
True
|
Add Code of Conduct - Create a document that establishes expectations for behavior for your project’s participants. Adopting, and enforcing, a code of conduct can help create a positive social atmosphere for your community. Transparency should be primordial.
|
main
|
add code of conduct create a document that establishes expectations for behavior for your project’s participants adopting and enforcing a code of conduct can help create a positive social atmosphere for your community transparency should be primordial
| 1
|
3,987
| 18,425,778,063
|
IssuesEvent
|
2021-10-13 21:48:53
|
intarchboard/program-edm
|
https://api.github.com/repos/intarchboard/program-edm
|
opened
|
Add RFC/document metadata to point to implementation status
|
deployability maintainability
|
Implementation status is removed before RFC publication. Could we instead link to something like in #13 ?
|
True
|
Add RFC/document metadata to point to implementation status - Implementation status is removed before RFC publication. Could we instead link to something like in #13 ?
|
main
|
add rfc document metadata to point to implementation status implementation status is removed before rfc publication could we instead link to something like in
| 1
|
3,497
| 13,647,714,417
|
IssuesEvent
|
2020-09-26 05:00:09
|
TabbycatDebate/tabbycat
|
https://api.github.com/repos/TabbycatDebate/tabbycat
|
closed
|
Allow next draw to be generated with unconfirmed (but existent) ballots
|
awaiting maintainer
|
Currently, we prevent draws from being generated before all ballots are confirmed. In some contexts, particularly where pairings within brackets are random (WUDC), it makes sense to press ahead before all ballots are confirmed, hoping or assuming that the results (not speaker scores) are correct. This needs to be done with various checks to prevent stupidity, though.
|
True
|
Allow next draw to be generated with unconfirmed (but existent) ballots - Currently, we prevent draws from being generated before all ballots are confirmed. In some contexts, particularly where pairings within brackets are random (WUDC), it makes sense to press ahead before all ballots are confirmed, hoping or assuming that the results (not speaker scores) are correct. This needs to be done with various checks to prevent stupidity, though.
|
main
|
allow next draw to be generated with unconfirmed but existent ballots currently we prevent draws from being generated before all ballots are confirmed in some contexts particularly where pairings within brackets are random wudc it makes sense to press ahead before all ballots are confirmed hoping or assuming that the results not speaker scores are correct this needs to be done with various checks to prevent stupidity though
| 1
|
4,692
| 24,211,708,849
|
IssuesEvent
|
2022-09-26 00:01:49
|
tgstation/tgstation
|
https://api.github.com/repos/tgstation/tgstation
|
closed
|
Spurious test failure: /obj/structure/closet/emcloset/anchored was unable to be GC'd
|
Maintainability/Hinders improvements
|
## Reproduction:
```
## REF SEARCH Beginning search for references to a /obj/structure/closet/emcloset/anchored.
## REF SEARCH Finished searching globals
## REF SEARCH Finished searching native globals
## REF SEARCH Found /obj/structure/closet/emcloset/anchored [0x2003844] in /obj/item/tank/internals/emergency_oxygen's [0x2003845] loc var. World -> /obj/item/tank/internals/emergency_oxygen
## REF SEARCH Found /obj/structure/closet/emcloset/anchored [0x2003844] in list World -> /obj/item/tank/internals/emergency_oxygen [0x2003845] -> locs (list).
## REF SEARCH Found /obj/structure/closet/emcloset/anchored [0x2003844] in /obj/item/clothing/mask/breath's [0x200384a] loc var. World -> /obj/item/clothing/mask/breath
## REF SEARCH Found /obj/structure/closet/emcloset/anchored [0x2003844] in list World -> /obj/item/clothing/mask/breath [0x200384a] -> locs (list).
## REF SEARCH Finished searching atoms
## REF SEARCH Finished searching datums
## REF SEARCH Finished searching clients
## REF SEARCH Completed search for references to a /obj/structure/closet/emcloset/anchored.
## TESTING: GC: -- [0x2003844] | /obj/structure/closet/emcloset/anchored was unable to be GC'd --
Error: /obj/structure/closet/emcloset/anchored hard deleted 1 times out of a total del count of 4
FAIL: /datum/unit_test/create_and_destroy 536s
REASON #1: /obj/structure/closet/emcloset/anchored hard deleted 1 times out of a total del count of 4 at code/modules/unit_tests/create_and_destroy.dm:167
```
https://github.com/tgstation/tgstation/runs/8196398955?check_suite_focus=true
|
True
|
Spurious test failure: /obj/structure/closet/emcloset/anchored was unable to be GC'd - ## Reproduction:
```
## REF SEARCH Beginning search for references to a /obj/structure/closet/emcloset/anchored.
## REF SEARCH Finished searching globals
## REF SEARCH Finished searching native globals
## REF SEARCH Found /obj/structure/closet/emcloset/anchored [0x2003844] in /obj/item/tank/internals/emergency_oxygen's [0x2003845] loc var. World -> /obj/item/tank/internals/emergency_oxygen
## REF SEARCH Found /obj/structure/closet/emcloset/anchored [0x2003844] in list World -> /obj/item/tank/internals/emergency_oxygen [0x2003845] -> locs (list).
## REF SEARCH Found /obj/structure/closet/emcloset/anchored [0x2003844] in /obj/item/clothing/mask/breath's [0x200384a] loc var. World -> /obj/item/clothing/mask/breath
## REF SEARCH Found /obj/structure/closet/emcloset/anchored [0x2003844] in list World -> /obj/item/clothing/mask/breath [0x200384a] -> locs (list).
## REF SEARCH Finished searching atoms
## REF SEARCH Finished searching datums
## REF SEARCH Finished searching clients
## REF SEARCH Completed search for references to a /obj/structure/closet/emcloset/anchored.
## TESTING: GC: -- [0x2003844] | /obj/structure/closet/emcloset/anchored was unable to be GC'd --
Error: /obj/structure/closet/emcloset/anchored hard deleted 1 times out of a total del count of 4
FAIL: /datum/unit_test/create_and_destroy 536s
REASON #1: /obj/structure/closet/emcloset/anchored hard deleted 1 times out of a total del count of 4 at code/modules/unit_tests/create_and_destroy.dm:167
```
https://github.com/tgstation/tgstation/runs/8196398955?check_suite_focus=true
|
main
|
spurious test failure obj structure closet emcloset anchored was unable to be gc d reproduction ref search beginning search for references to a obj structure closet emcloset anchored ref search finished searching globals ref search finished searching native globals ref search found obj structure closet emcloset anchored in obj item tank internals emergency oxygen s loc var world obj item tank internals emergency oxygen ref search found obj structure closet emcloset anchored in list world obj item tank internals emergency oxygen locs list ref search found obj structure closet emcloset anchored in obj item clothing mask breath s loc var world obj item clothing mask breath ref search found obj structure closet emcloset anchored in list world obj item clothing mask breath locs list ref search finished searching atoms ref search finished searching datums ref search finished searching clients ref search completed search for references to a obj structure closet emcloset anchored testing gc obj structure closet emcloset anchored was unable to be gc d error obj structure closet emcloset anchored hard deleted times out of a total del count of fail datum unit test create and destroy reason obj structure closet emcloset anchored hard deleted times out of a total del count of at code modules unit tests create and destroy dm
| 1
|
363,204
| 25,413,313,469
|
IssuesEvent
|
2022-11-22 21:06:09
|
ruthlennonatu/groot22
|
https://api.github.com/repos/ruthlennonatu/groot22
|
closed
|
As a customer I want to be able to use the product with ease so that my application process will be as simple as possible.
|
documentation enhancement
|
Description:
A merge request with the dev branch must be made and the documentation containing Information about automated Java Documentation Tools
Acceptance Criteria:
Resolve issue.
DoD:
Merge request
Have a document containing where to find information on Java Documentation
|
1.0
|
As a customer I want to be able to use the product with ease so that my application process will be as simple as possible. - Description:
A merge request with the dev branch must be made and the documentation containing Information about automated Java Documentation Tools
Acceptance Criteria:
Resolve issue.
DoD:
Merge request
Have a document containing where to find information on Java Documentation
|
non_main
|
as a customer i want to be able to use the product with ease so that my application process will be as simple as possible description a merge request with the dev branch must be made and the documentation containing information about automated java documentation tools acceptance criteria resolve issue dod merge request have a document containing where to find information on java documentation
| 0
|
3,375
| 13,063,181,136
|
IssuesEvent
|
2020-07-30 16:09:43
|
laminas/laminas-mail
|
https://api.github.com/repos/laminas/laminas-mail
|
closed
|
Drop dependency on zendframework/zend-loader
|
Awaiting Maintainer Response BC Break
|
This removes the dependency on the zendframework/zend-load package, as suggested in #185
The `HeaderLoader` class has been removed and replaced by a simple class map in the `Headers` class.
However, I have also removed the `getPluginClassLoader` and `setPluginClassLoader` methods. Since they are public, this might be a BC break. What do you think?
---
Originally posted by @acelaya at https://github.com/zendframework/zend-mail/pull/186
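The class-map approach described above can be sketched as a plain dictionary lookup (illustrative Python with hypothetical stand-in classes; laminas-mail's actual map is PHP code in its `Headers` class):

```python
# Hypothetical stand-ins for header classes; laminas-mail's real classes
# are PHP, so this only illustrates the class-map idea.
class ContentType:
    pass

class Date:
    pass

# A static class map replacing a dynamic plugin loader (HeaderLoader):
HEADER_CLASS_MAP = {
    "contenttype": ContentType,
    "date": Date,
}

def resolve_header(name, default=None):
    """Look up a header class by its normalized (lowercased, dash-free) name."""
    return HEADER_CLASS_MAP.get(name.lower().replace("-", ""), default)
```

A static map is cheap to read and needs no setter/getter pair, which is why dropping `getPluginClassLoader`/`setPluginClassLoader` follows naturally from this change.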
|
True
|
Drop dependency on zendframework/zend-loader - This removes the dependency on the zendframework/zend-load package, as suggested in #185
The `HeaderLoader` class has been removed and replaced by a simple class map in the `Headers` class.
However, I have also removed the `getPluginClassLoader` and `setPluginClassLoader` methods. Since they are public, this might be a BC break. What do you think?
---
Originally posted by @acelaya at https://github.com/zendframework/zend-mail/pull/186
|
main
|
drop dependency on zendframework zend loader this removes the dependency on the zendframework zend load package as suggested in the headerloader class has been removed and replaced by a simple class map in the headers class however i have also removed the getpluginclassloader and setpluginclassloader methods since they are public this might be a bc break what do you think originally posted by acelaya at
| 1
|
3,716
| 15,351,572,794
|
IssuesEvent
|
2021-03-01 05:24:00
|
cloverhearts/quilljs-markdown
|
https://api.github.com/repos/cloverhearts/quilljs-markdown
|
closed
|
Improve nested list styling
|
NICE IDEA Saw with Maintainer WILL MAKE IT WORK IN PROGRESS
|
Thanks so much for your work on this!
We have only just started to use this extension, and we're wondering if more sophisticated nested lists are possible? For example, see https://jsfiddle.net/c4v608ty/1/
This shows 3 main benefits:
- the top level bullet icon is larger than in quilljs-markdown, and
- the nested bullets are different based on the nesting level.
- the indentation level is a little bit different to the quilljs-markdown indentation which is better because it uses less space
I'm not sure if there are limitations imposed by quill, but if something like this were possible it would be awesome. Thanks for considering!
|
True
|
Improve nested list styling - Thanks so much for your work on this!
We have only just started to use this extension, and we're wondering if more sophisticated nested lists are possible? For example, see https://jsfiddle.net/c4v608ty/1/
This shows 3 main benefits:
- the top level bullet icon is larger than in quilljs-markdown, and
- the nested bullets are different based on the nesting level.
- the indentation level is a little bit different to the quilljs-markdown indentation which is better because it uses less space
I'm not sure if there are limitations imposed by quill, but if something like this were possible it would be awesome. Thanks for considering!
|
main
|
improve nested list styling thanks so much for your work on this we have only just started to use this extension and wondering if more sophisticated nested lists are possible for example like this shows main benefits the top level bullet icon is larger than in quilljs markdown and the nested bullets are different based on the nesting level the indentation level is a little bit different to the quilljs markdown indentation which is better because it uses less space i m not sure if there are limitations imposed by quill but if something like this were possible it would be awesome thanks for considering
| 1
|
115,630
| 14,858,020,410
|
IssuesEvent
|
2021-01-18 16:12:17
|
SummerRolls99/VoidLight-front
|
https://api.github.com/repos/SummerRolls99/VoidLight-front
|
closed
|
Achievements
|
design front
|
Well, as the name implies, you know what to do.
- [x] Design
- [x] Component front
|
1.0
|
Achievements - Well, as the name implies, you know what to do.
- [x] Design
- [x] Component front
|
non_main
|
achievements well as the name implies you know what to do design component front
| 0
|
467,781
| 13,455,081,360
|
IssuesEvent
|
2020-09-09 05:24:03
|
pantheracorp/PantheraIDS_Issues
|
https://api.github.com/repos/pantheracorp/PantheraIDS_Issues
|
closed
|
Adjust name of leopard individuals for S27_2019 (server-based)
|
database priority: LOW
|
Site 27 has individuals with NA in their name where the site number should be in the following survey: S27_2019 (server-based). This issue was encountered before for a few sites (see closed GitHub Issue #85 ) but the issue cannot be replicated. Still unsure as to what causes this, and whether the issue has already inadvertently been fixed previously.
|
1.0
|
Adjust name of leopard individuals for S27_2019 (server-based) - Site 27 has individuals with NA in their name where the site number should be in the following survey: S27_2019 (server-based). This issue was encountered before for a few sites (see closed GitHub Issue #85 ) but the issue cannot be replicated. Still unsure as to what causes this, and whether the issue has already inadvertently been fixed previously.
|
non_main
|
adjust name of leopard individuals for server based site has individuals with na in their name where the site number should be in the following survey server based this issue was encountered before for a few sites see closed github issue but the issue can not be replicated still unsure as to what causes this and if the issue has already inadvertently been fixed previously
| 0
|
839
| 15,731,278,015
|
IssuesEvent
|
2021-03-29 16:52:55
|
openstates/issues
|
https://api.github.com/repos/openstates/issues
|
closed
|
SD: old, invalid source URLs for one legislator
|
component:people-data type:bug
|
**Issue Description:**
For `ocd-person/e40b342d-1732-4d1d-aa23-6d904e871e98` (Wayne H. Steinhauer), the source URLs listed are
```
http://legis.sd.gov/Legislators/Legislators/MemberCommittees.aspx?Member=1069&Session=2016
http://legis.sd.gov/Legislators/Legislators/MemberDetail.aspx?Member=1069&Session=2016
```
`legis.sd.gov` no longer exists. All other sources correctly use `sdlegislature.gov`.
|
1.0
|
SD: old, invalid source URLs for one legislator - **Issue Description:**
For `ocd-person/e40b342d-1732-4d1d-aa23-6d904e871e98` (Wayne H. Steinhauer), the source URLs listed are
```
http://legis.sd.gov/Legislators/Legislators/MemberCommittees.aspx?Member=1069&Session=2016
http://legis.sd.gov/Legislators/Legislators/MemberDetail.aspx?Member=1069&Session=2016
```
`legis.sd.gov` no longer exists. All other sources correctly use `sdlegislature.gov`.
|
non_main
|
sd old invalid source urls for one legislator issue description for ocd person wayne h steinhauer the source urls listed are legis sd gov no longer exists all other source correctly use sdlegislature gov
| 0
|
279,504
| 24,230,826,478
|
IssuesEvent
|
2022-09-26 18:07:03
|
microsoft/vscode
|
https://api.github.com/repos/microsoft/vscode
|
opened
|
Test: git clone URI handler supports branch checkout after clone
|
testplan-item
|
Refs https://github.com/microsoft/vscode/issues/158386
- [ ] Windows
- [ ] macOS
- [ ] Linux
Authors: @joyceerhl, @lszomoru
Complexity: 2
---
## Background
The built-in git extension registers a URI handler which can handle clone operations from a URI. For example, you can copy and paste the following URI into your browser search bar, which will launch VS Code Insiders on your desktop and start the flow to clone `microsoft/vscode` onto your local machine:
```
vscode-insiders://vscode.git/clone?url=https://github.com/microsoft/vscode
```
This milestone as part of polishing Continue On support, we expanded the URI handler to accept a `ref` query parameter whose value is a specific branch. This lets you specify a specific branch to be checked out immediately after cloning:
```
vscode-insiders://vscode.git/clone?url=https://github.com/microsoft/vscode&ref=release/1.71
```
## Verification:
1. Paste `vscode-insiders://vscode.git/clone?url=https://github.com/microsoft/vscode&ref=release/1.71` into your browser search bar
2. Select a directory to clone `microsoft/vscode` into when prompted
3. Ensure that after the clone operation completes, you get a secondary progress notification while the `release/1.71` branch is checked out
4. Ensure that after the checkout operation completes and you open the cloned repo, the `release/1.71` branch is checked out
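The URI format above can be assembled with a small helper (a sketch, not part of VS Code itself; the query-encoding choices below are my assumptions):

```python
from urllib.parse import urlencode

def build_clone_uri(repo_url, ref=None, scheme="vscode-insiders"):
    """Build a git-clone URI in the shape handled by the built-in git
    extension's URI handler (illustrative helper, not VS Code's own code)."""
    params = {"url": repo_url}
    if ref is not None:
        # Optional branch to check out immediately after cloning.
        params["ref"] = ref
    # Keep '/' and ':' unescaped so the nested URL stays readable.
    return f"{scheme}://vscode.git/clone?{urlencode(params, safe='/:')}"
```

For example, `build_clone_uri("https://github.com/microsoft/vscode", ref="release/1.71")` reproduces the second URI shown above.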
|
1.0
|
Test: git clone URI handler supports branch checkout after clone - Refs https://github.com/microsoft/vscode/issues/158386
- [ ] Windows
- [ ] macOS
- [ ] Linux
Authors: @joyceerhl, @lszomoru
Complexity: 2
---
## Background
The built-in git extension registers a URI handler which can handle clone operations from a URI. For example, you can copy and paste the following URI into your browser search bar, which will launch VS Code Insiders on your desktop and start the flow to clone `microsoft/vscode` onto your local machine:
```
vscode-insiders://vscode.git/clone?url=https://github.com/microsoft/vscode
```
This milestone as part of polishing Continue On support, we expanded the URI handler to accept a `ref` query parameter whose value is a specific branch. This lets you specify a specific branch to be checked out immediately after cloning:
```
vscode-insiders://vscode.git/clone?url=https://github.com/microsoft/vscode&ref=release/1.71
```
## Verification:
1. Paste `vscode-insiders://vscode.git/clone?url=https://github.com/microsoft/vscode&ref=release/1.71` into your browser search bar
2. Select a directory to clone `microsoft/vscode` into when prompted
3. Ensure that after the clone operation completes, you get a secondary progress notification while the `release/1.71` branch is checked out
4. Ensure that after the checkout operation completes and you open the cloned repo, the `release/1.71` branch is checked out
|
non_main
|
test git clone uri handler supports branch checkout after clone refs windows macos linux authors joyceerhl lszomoru complexity background the built in git extension registers a uri handler which can handle clone operations from a uri for example you can copy and paste the following uri into your browser search bar which will launch vs code insiders on your desktop and start the flow to clone microsoft vscode onto your local machine vscode insiders vscode git clone url this milestone as part of polishing continue on support we expanded the uri handler to accept a ref query parameter whose value is a specific branch this lets you specify a specific branch to be checked out immediately after cloning vscode insiders vscode git clone url verification paste vscode insiders vscode git clone url into your browser search bar select a directory to clone microsoft vscode into when prompted ensure that after the clone operation completes you get a secondary progress notification while the release branch is checked out ensure that after the checkout operation completes and you open the cloned repo the release branch is checked out
| 0
|
2,105
| 7,126,728,924
|
IssuesEvent
|
2018-01-20 13:57:42
|
sinonjs/lolex
|
https://api.github.com/repos/sinonjs/lolex
|
closed
|
Domains/async hooks
|
awaiting maintainer feedback feature request question stale
|
Hey,
Domains/async hooks aren't really supported from what I can tell - if a timeout is registered from a given domain/async context lolex probably wants to run it in that context.
|
True
|
Domains/async hooks - Hey,
Domains/async hooks aren't really supported from what I can tell - if a timeout is registered from a given domain/async context lolex probably wants to run it in that context.
|
main
|
domains async hooks hey domains async hooks aren t really supported from what i can tell if a timeout is registered from a given domain async context lolex probably wants to run it in that context
| 1
|
741,082
| 25,778,850,901
|
IssuesEvent
|
2022-12-09 14:17:25
|
bounswe/bounswe2022group2
|
https://api.github.com/repos/bounswe/bounswe2022group2
|
closed
|
Frontend: Unit Testing Initialization for Dropdown, SelectionGrid, JoinLSButton Components
|
priority-medium status-new front-end
|
### Issue Description
In terms of functionality, the dropdown, selection grid and join button serve us as expected during the whole preparation time and during the demo. They are still functioning well in any use. In this issue, I will initialize a unit testing structure to make these components testable without any further need. So this initialization and these tests will be useful when it comes to adding new features to these components.
### Step Details
Steps that will be performed:
- [x] Unit test for Dropdown
- [x] Unit test for SelectionGrid
- [x] Unit test for JoinLS button
### Final Actions
_No response_
### Deadline of the Issue
08.12.2022 - Thursday - 19.59
### Reviewer
Koray Tekin
### Deadline for the Review
08.12.2022 - Thursday - 23.59
|
1.0
|
Frontend: Unit Testing Initialization for Dropdown, SelectionGrid, JoinLSButton Components - ### Issue Description
In terms of functionality, the dropdown, selection grid and join button serve us as expected during the whole preparation time and during the demo. They are still functioning well in any use. In this issue, I will initialize a unit testing structure to make these components testable without any further need. So this initialization and these tests will be useful when it comes to adding new features to these components.
### Step Details
Steps that will be performed:
- [x] Unit test for Dropdown
- [x] Unit test for SelectionGrid
- [x] Unit test for JoinLS button
### Final Actions
_No response_
### Deadline of the Issue
08.12.2022 - Thursday - 19.59
### Reviewer
Koray Tekin
### Deadline for the Review
08.12.2022 - Thursday - 23.59
|
non_main
|
frontend unit testing initialization for dropdown selectiongrid joinlsbutton components issue description in terms of functionality the dropdown selection grid and join button serves us as expected during the whole preparation time and during the demo they are still well functioning in any use in this issue i will initialize a unit testing structure to make this components testable without any further need so this initialization and tests will be useful when it comes to adding new feature to these components step details steps that will be performed unit test for dropdown unit test for selectiongrid unit test for joinls button final actions no response deadline of the issue thursday reviewer koray tekin deadline for the review thursday
| 0
|
439
| 3,561,411,758
|
IssuesEvent
|
2016-01-23 19:40:44
|
tgstation/-tg-station
|
https://api.github.com/repos/tgstation/-tg-station
|
closed
|
voice analyzer + flash assemblies are terrible
|
Bug In Game Exploit Maintainability - Hinders improvements
|
exhibit A: they don't parse speech well, meaning that you can make the trigger "s" and it will trigger on any word with an s in it. You can easily make them affect every value
exhibit B: there is no sanity checking for range, so if they hear someone say the trigger over the radio, they flash that person
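The substring problem in exhibit A can be demonstrated in a few lines (a Python sketch rather than DM, with hypothetical function names):

```python
import re

def naive_trigger(trigger, speech):
    # The buggy behaviour described above: a plain substring match,
    # so the trigger "s" fires on any word containing an "s".
    return trigger in speech

def word_trigger(trigger, speech):
    # A stricter check: the trigger must appear as a whole word,
    # using regex word boundaries around the escaped trigger.
    return re.search(r"\b" + re.escape(trigger) + r"\b", speech) is not None
```

With trigger `"s"`, `naive_trigger` fires on "hello stations" while `word_trigger` only fires when "s" stands alone as a word.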
|
True
|
voice analyzer + flash assemblies are terrible - exhibit A: they don't parse speech well, meaning that you can make the trigger "s" and it will trigger on any word with an s in it. You can easily make them affect every value
exhibit B: there is no sanity checking for range, so if they hear someone say the trigger over the radio, they flash that person
|
main
|
voice analyzer flash assemblies are terrible exhibit a they don t parse speech well meaning that you can make the trigger s and it will trigger on any word with an s in it you can easily make them to affect every value exhibit b there is no sanity checking for range so if they hear someone say the trigger over the radio they flash that person
| 1
|
135,002
| 12,643,862,651
|
IssuesEvent
|
2020-06-16 10:32:27
|
rte-france/l2rpn-baselines
|
https://api.github.com/repos/rte-france/l2rpn-baselines
|
closed
|
Documentation issue
|
documentation
|
When copy-pasting the example from the documentation of the SAC train function (https://l2rpn-baselines.readthedocs.io/en/stable/SAC.html#l2rpn_baselines.SAC.train), the program does not work.
The documentation should be adapted, for the SAC as:
```python
import grid2op
from grid2op.Reward import L2RPNReward
from l2rpn_baselines.utils import TrainingParam
from l2rpn_baselines.SAC import train
from l2rpn_baselines.utils import NNParam
# define the environment
env = grid2op.make("l2rpn_case14_sandbox",
reward_class=L2RPNReward)
# use the default training parameters
tp = TrainingParam()
# this will be the list of what part of the observation I want to keep
# more information on https://grid2op.readthedocs.io/en/latest/observation.html#main-observation-attributes
li_attr_obs_X = ["day_of_week", "hour_of_day", "minute_of_hour", "prod_p", "prod_v", "load_p", "load_q",
"actual_dispatch", "target_dispatch", "topo_vect", "time_before_cooldown_line",
"time_before_cooldown_sub", "rho", "timestep_overflow", "line_status"]
# neural network architecture
observation_size = NNParam.get_obs_size(env, li_attr_obs_X)
sizes_q = [800, 800, 800, 494, 494, 494] # sizes of each hidden layers
sizes_v = [800, 800] # sizes of each hidden layers
sizes_pol = [800, 800, 800, 494, 494, 494] # sizes of each hidden layers
kwargs_archi = {'observation_size': observation_size,
'sizes': sizes_q,
'activs': ["relu" for _ in range(len(sizes_q))],
"list_attr_obs": li_attr_obs_X,
"sizes_value": sizes_v,
"activs_value": ["relu" for _ in range(len(sizes_v))],
"sizes_policy": sizes_pol,
"activs_policy": ["relu" for _ in range(len(sizes_pol))]
}
# select some part of the action
# more information at https://grid2op.readthedocs.io/en/latest/converter.html#grid2op.Converter.IdToAct.init_converter
kwargs_converters = {"all_actions": None,
"set_line_status": False,
"change_bus_vect": True,
"set_topo_vect": False
}
# define the name of the model
nm_ = "AnneOnymous"
save_path="/WHERE/I/SAVED/THE/MODEL"
logs_dir="/WHERE/I/SAVED/THE/LOGS"
try:
train(env,
name=nm_,
iterations=10000,
save_path=save_path,
load_path=None,
logs_dir=logs_dir,
nb_env=1,
training_param=tp,
kwargs_converters=kwargs_converters,
kwargs_archi=kwargs_archi)
finally:
env.close()
```
|
1.0
|
Documentation issue - When copy-pasting the example from the documentation of the SAC train function (https://l2rpn-baselines.readthedocs.io/en/stable/SAC.html#l2rpn_baselines.SAC.train), the program does not work.
The documentation should be adapted, for the SAC as:
```python
import grid2op
from grid2op.Reward import L2RPNReward
from l2rpn_baselines.utils import TrainingParam
from l2rpn_baselines.SAC import train
from l2rpn_baselines.utils import NNParam
# define the environment
env = grid2op.make("l2rpn_case14_sandbox",
reward_class=L2RPNReward)
# use the default training parameters
tp = TrainingParam()
# this will be the list of what part of the observation I want to keep
# more information on https://grid2op.readthedocs.io/en/latest/observation.html#main-observation-attributes
li_attr_obs_X = ["day_of_week", "hour_of_day", "minute_of_hour", "prod_p", "prod_v", "load_p", "load_q",
"actual_dispatch", "target_dispatch", "topo_vect", "time_before_cooldown_line",
"time_before_cooldown_sub", "rho", "timestep_overflow", "line_status"]
# neural network architecture
observation_size = NNParam.get_obs_size(env, li_attr_obs_X)
sizes_q = [800, 800, 800, 494, 494, 494] # sizes of each hidden layers
sizes_v = [800, 800] # sizes of each hidden layers
sizes_pol = [800, 800, 800, 494, 494, 494] # sizes of each hidden layers
kwargs_archi = {'observation_size': observation_size,
'sizes': sizes_q,
'activs': ["relu" for _ in range(len(sizes_q))],
"list_attr_obs": li_attr_obs_X,
"sizes_value": sizes_v,
"activs_value": ["relu" for _ in range(len(sizes_v))],
"sizes_policy": sizes_pol,
"activs_policy": ["relu" for _ in range(len(sizes_pol))]
}
# select some part of the action
# more information at https://grid2op.readthedocs.io/en/latest/converter.html#grid2op.Converter.IdToAct.init_converter
kwargs_converters = {"all_actions": None,
"set_line_status": False,
"change_bus_vect": True,
"set_topo_vect": False
}
# define the name of the model
nm_ = "AnneOnymous"
save_path="/WHERE/I/SAVED/THE/MODEL"
logs_dir="/WHERE/I/SAVED/THE/LOGS"
try:
train(env,
name=nm_,
iterations=10000,
save_path=save_path,
load_path=None,
logs_dir=logs_dir,
nb_env=1,
training_param=tp,
kwargs_converters=kwargs_converters,
kwargs_archi=kwargs_archi)
finally:
env.close()
```
|
non_main
|
documentation issue when copy pasting the documentation of the sac train function the program does not work the documentation should be adapted for the sac as python import from reward import from baselines utils import trainingparam from baselines sac import train from baselines utils import nnparam define the environment env make sandbox reward class use the default training parameters tp trainingparam this will be the list of what part of the observation i want to keep more information on li attr obs x day of week hour of day minute of hour prod p prod v load p load q actual dispatch target dispatch topo vect time before cooldown line time before cooldown sub rho timestep overflow line status neural network architecture observation size nnparam get obs size env li attr obs x sizes q sizes of each hidden layers sizes v sizes of each hidden layers sizes pol sizes of each hidden layers kwargs archi observation size observation size sizes sizes q activs list attr obs li attr obs x sizes value sizes v activs value sizes policy sizes pol activs policy select some part of the action more information at kwargs converters all actions none set line status false change bus vect true set topo vect false define the name of the model nm anneonymous save path where i saved the model logs dir where i saved the logs try train env name nm iterations save path save path load path none logs dir logs dir nb env training param tp kwargs converters kwargs converters kwargs archi kwargs archi finally env close
| 0
|
1,841
| 6,577,374,380
|
IssuesEvent
|
2017-09-12 00:28:02
|
ansible/ansible-modules-core
|
https://api.github.com/repos/ansible/ansible-modules-core
|
closed
|
mysql_user Provide access to mysql 5.7 installs
|
affects_2.0 feature_idea waiting_on_maintainer
|
<!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
mysql_user module
##### ANSIBLE VERSION
```
ansible 2.0.1.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides`
```
##### CONFIGURATION
NONE/Using Tower
##### OS / ENVIRONMENT
N/A
##### SUMMARY
Cannot log in as root@localhost into mysql with null password anymore.
The problem is the same as the problem mentioned here:
https://forge.puppet.com/puppetlabs/mysql#mysql_datadir
Ansible playbooks for mysql become deprecated because of the feature that forces root user to log in for the first time with a temporary password introduced here:
Blog post from 2015 when feature was introduced:
http://mysqlserverteam.com/initialize-your-mysql-5-7-instances-with-ease/
There's a workaround to launch mysqld temporarily with the root password disabled, but it doesn't work in daemonized mode:
https://dev.mysql.com/doc/refman/5.7/en/server-options.html#option_mysqld_initialize-insecure
The WORKAROUND is to scrape /var/log/mysqld.log after mysqld starts as a server and look for the following:
[Note] A temporary password is generated for root@localhost: O,k5.marHfFu
Then parse it and use the password on the current my_sql modules.
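That scrape-and-parse step could look like this (a sketch; the log-line format matches the MySQL 5.7 note quoted above):

```python
import re

def extract_temp_password(log_text):
    """Pull the generated root password out of mysqld.log contents, per the
    workaround described above (MySQL 5.7 '[Note]' log-line format)."""
    m = re.search(
        r"A temporary password is generated for root@localhost: (\S+)",
        log_text,
    )
    return m.group(1) if m else None
```

The extracted password could then be fed to the existing mysql modules as the login password for the initial connection.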
##### STEPS TO REPRODUCE
https://github.com/einarc/autoscaling-blog/tree/feature
Please run in a RHEL7 instance with playbook config.
##### EXPECTED RESULTS
The mysql instance can be accessed and configured:
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with high verbosity (-vvvv) -->
```
failed: [172.16.5.197] => (item=ip-172-16-5-197) => {"failed": true, "invocation": {"module_args": {"append_privs": false, "check_implicit_admin": true, "config_file": "~/.my.cnf", "encrypted": false, "host": "ip-172-16-5-197", "login_host": "localhost", "login_password": null, "login_port": 3306, "login_unix_socket": null, "login_user": null, "name": "root", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "priv": null, "ssl_ca": null, "ssl_cert": null, "ssl_key": null, "state": "present", "update_password": "always", "user": "root"}, "module_name": "mysql_user"}, "item": "ip-172-16-5-197", "msg": "unable to connect to database, check login_user and login_password are correct or /root/.my.cnf has the credentials. Exception message: (1045, \"Access denied for user 'root'@'localhost' (using password: NO)\")"}
failed: [172.16.5.197] => (item=127.0.0.1) => {"failed": true, "invocation": {"module_args": {"append_privs": false, "check_implicit_admin": true, "config_file": "~/.my.cnf", "encrypted": false, "host": "127.0.0.1", "login_host": "localhost", "login_password": null, "login_port": 3306, "login_unix_socket": null, "login_user": null, "name": "root", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "priv": null, "ssl_ca": null, "ssl_cert": null, "ssl_key": null, "state": "present", "update_password": "always", "user": "root"}, "module_name": "mysql_user"}, "item": "127.0.0.1", "msg": "unable to connect to database, check login_user and login_password are correct or /root/.my.cnf has the credentials. Exception message: (1045, \"Access denied for user 'root'@'localhost' (using password: NO)\")"}
failed: [172.16.5.197] => (item=::1) => {"failed": true, "invocation": {"module_args": {"append_privs": false, "check_implicit_admin": true, "config_file": "~/.my.cnf", "encrypted": false, "host": "::1", "login_host": "localhost", "login_password": null, "login_port": 3306, "login_unix_socket": null, "login_user": null, "name": "root", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "priv": null, "ssl_ca": null, "ssl_cert": null, "ssl_key": null, "state": "present", "update_password": "always", "user": "root"}, "module_name": "mysql_user"}, "item": "::1", "msg": "unable to connect to database, check login_user and login_password are correct or /root/.my.cnf has the credentials. Exception message: (1045, \"Access denied for user 'root'@'localhost' (using password: NO)\")"}
failed: [172.16.5.197] => (item=localhost) => {"failed": true, "invocation": {"module_args": {"append_privs": false, "check_implicit_admin": true, "config_file": "~/.my.cnf", "encrypted": false, "host": "localhost", "login_host": "localhost", "login_password": null, "login_port": 3306, "login_unix_socket": null, "login_user": null, "name": "root", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "priv": null, "ssl_ca": null, "ssl_cert": null, "ssl_key": null, "state": "present", "update_password": "always", "user": "root"}, "module_name": "mysql_user"}, "item": "localhost", "msg": "unable to connect to database, check login_user and login_password are correct or /root/.my.cnf has the credentials. Exception message: (1045, \"Access denied for user 'root'@'localhost' (using password: NO)\")"}`
```
|
True
|
mysql_user Provide access to mysql 5.7 installs - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
mysql_user module
##### ANSIBLE VERSION
```
ansible 2.0.1.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides`
```
##### CONFIGURATION
NONE/Using Tower
##### OS / ENVIRONMENT
N/A
##### SUMMARY
Cannot log in as root@localhost into mysql with null password anymore.
The problem is the same as the problem mentioned here:
https://forge.puppet.com/puppetlabs/mysql#mysql_datadir
Ansible playbooks for mysql become deprecated because of the feature that forces root user to log in for the first time with a temporary password introduced here:
Blog post from 2015 when feature was introduced:
http://mysqlserverteam.com/initialize-your-mysql-5-7-instances-with-ease/
There's a workaround to launch mysqld temporarily with the root password disabled, but it doesn't work in daemonized mode:
https://dev.mysql.com/doc/refman/5.7/en/server-options.html#option_mysqld_initialize-insecure
The WORKAROUND is to scrape /var/log/mysqld.log after mysqld starts as a server and look for the following:
[Note] A temporary password is generated for root@localhost: O,k5.marHfFu
Then parse it and use the password on the current my_sql modules.
##### STEPS TO REPRODUCE
https://github.com/einarc/autoscaling-blog/tree/feature
Please run in a RHEL7 instance with playbook config.
##### EXPECTED RESULTS
The mysql instance can be accessed and configured:
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with high verbosity (-vvvv) -->
```
failed: [172.16.5.197] => (item=ip-172-16-5-197) => {"failed": true, "invocation": {"module_args": {"append_privs": false, "check_implicit_admin": true, "config_file": "~/.my.cnf", "encrypted": false, "host": "ip-172-16-5-197", "login_host": "localhost", "login_password": null, "login_port": 3306, "login_unix_socket": null, "login_user": null, "name": "root", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "priv": null, "ssl_ca": null, "ssl_cert": null, "ssl_key": null, "state": "present", "update_password": "always", "user": "root"}, "module_name": "mysql_user"}, "item": "ip-172-16-5-197", "msg": "unable to connect to database, check login_user and login_password are correct or /root/.my.cnf has the credentials. Exception message: (1045, \"Access denied for user 'root'@'localhost' (using password: NO)\")"}
failed: [172.16.5.197] => (item=127.0.0.1) => {"failed": true, "invocation": {"module_args": {"append_privs": false, "check_implicit_admin": true, "config_file": "~/.my.cnf", "encrypted": false, "host": "127.0.0.1", "login_host": "localhost", "login_password": null, "login_port": 3306, "login_unix_socket": null, "login_user": null, "name": "root", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "priv": null, "ssl_ca": null, "ssl_cert": null, "ssl_key": null, "state": "present", "update_password": "always", "user": "root"}, "module_name": "mysql_user"}, "item": "127.0.0.1", "msg": "unable to connect to database, check login_user and login_password are correct or /root/.my.cnf has the credentials. Exception message: (1045, \"Access denied for user 'root'@'localhost' (using password: NO)\")"}
failed: [172.16.5.197] => (item=::1) => {"failed": true, "invocation": {"module_args": {"append_privs": false, "check_implicit_admin": true, "config_file": "~/.my.cnf", "encrypted": false, "host": "::1", "login_host": "localhost", "login_password": null, "login_port": 3306, "login_unix_socket": null, "login_user": null, "name": "root", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "priv": null, "ssl_ca": null, "ssl_cert": null, "ssl_key": null, "state": "present", "update_password": "always", "user": "root"}, "module_name": "mysql_user"}, "item": "::1", "msg": "unable to connect to database, check login_user and login_password are correct or /root/.my.cnf has the credentials. Exception message: (1045, \"Access denied for user 'root'@'localhost' (using password: NO)\")"}
failed: [172.16.5.197] => (item=localhost) => {"failed": true, "invocation": {"module_args": {"append_privs": false, "check_implicit_admin": true, "config_file": "~/.my.cnf", "encrypted": false, "host": "localhost", "login_host": "localhost", "login_password": null, "login_port": 3306, "login_unix_socket": null, "login_user": null, "name": "root", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "priv": null, "ssl_ca": null, "ssl_cert": null, "ssl_key": null, "state": "present", "update_password": "always", "user": "root"}, "module_name": "mysql_user"}, "item": "localhost", "msg": "unable to connect to database, check login_user and login_password are correct or /root/.my.cnf has the credentials. Exception message: (1045, \"Access denied for user 'root'@'localhost' (using password: NO)\")"}`
```
|
main
|
mysql user provide access to mysql installs issue type feature idea component name mysql user module ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration none using tower os environment n a summary cannot log in as root localhost into mysql with null password anymore the problem is the same as the problem mentioned here ansible playbooks for mysql become deprecated because of the feature that forces root user to log in for the first time with a temporary password introduced here blog post from when feature was introduced there s a workaround to launch mysqld temporaryly with root password disabled but it doesn t work in deamonized mode the workaround is to scrape var log mysqld log after mysqld starts as a serverand look for a the following a temporary password is generated for root localhost o marhffu then parse it and use the password on the current my sql modules steps to reproduce please run in a instance with playbook config expected results the mysql instance can be accessed and configured actual results failed item ip failed true invocation module args append privs false check implicit admin true config file my cnf encrypted false host ip login host localhost login password null login port login unix socket null login user null name root password value specified in no log parameter priv null ssl ca null ssl cert null ssl key null state present update password always user root module name mysql user item ip msg unable to connect to database check login user and login password are correct or root my cnf has the credentials exception message access denied for user root localhost using password no failed item failed true invocation module args append privs false check implicit admin true config file my cnf encrypted false host login host localhost login password null login port login unix socket null login user null name root password value specified in no log parameter priv null ssl ca 
null ssl cert null ssl key null state present update password always user root module name mysql user item msg unable to connect to database check login user and login password are correct or root my cnf has the credentials exception message access denied for user root localhost using password no failed item failed true invocation module args append privs false check implicit admin true config file my cnf encrypted false host login host localhost login password null login port login unix socket null login user null name root password value specified in no log parameter priv null ssl ca null ssl cert null ssl key null state present update password always user root module name mysql user item msg unable to connect to database check login user and login password are correct or root my cnf has the credentials exception message access denied for user root localhost using password no failed item localhost failed true invocation module args append privs false check implicit admin true config file my cnf encrypted false host localhost login host localhost login password null login port login unix socket null login user null name root password value specified in no log parameter priv null ssl ca null ssl cert null ssl key null state present update password always user root module name mysql user item localhost msg unable to connect to database check login user and login password are correct or root my cnf has the credentials exception message access denied for user root localhost using password no
| 1
|
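The mysqld log-scraping workaround described in the issue above can be sketched as follows. The log line format is the one MySQL 5.7+ emits on first start; the function name and the sample log text are illustrative, not part of any Ansible module:

```python
import re

# MySQL 5.7+ writes a line like this to the error log on first start:
#   A temporary password is generated for root@localhost: o(Mar&hffu
# The workaround is to scrape that password and use it for the first login.
TEMP_PASSWORD_RE = re.compile(
    r"A temporary password is generated for root@localhost: (\S+)"
)

def extract_temp_password(log_text):
    """Return the temporary root password found in mysqld log text, or None."""
    match = TEMP_PASSWORD_RE.search(log_text)
    return match.group(1) if match else None

# Hypothetical excerpt of /var/log/mysqld.log:
sample_log = (
    "2016-10-03T12:00:00.000000Z 1 [Note] A temporary password is "
    "generated for root@localhost: o(Mar&hffu\n"
)
```

The extracted value would then be fed to the `mysql_user` module's `login_password` (or to `mysqladmin`) in order to set a permanent root password.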
216,533
| 16,769,054,494
|
IssuesEvent
|
2021-06-14 12:46:08
|
onias-rocha/desafio_quality
|
https://api.github.com/repos/onias-rocha/desafio_quality
|
closed
|
TU-0001: Verify that the total square meters calculated per property is correct
|
test
|
Returns the correct calculation of a property's total square meters.
|
1.0
|
TU-0001: Verify that the total square meters calculated per property is correct -
Returns the correct calculation of a property's total square meters.
|
non_main
|
tu verifique se o total de metros quadrados calculados por propriedade está correto retorna o cálculo correto do total de metros quadrados de uma propriedade
| 0
|
5,360
| 26,979,386,096
|
IssuesEvent
|
2023-02-09 11:59:26
|
backdrop-ops/contrib
|
https://api.github.com/repos/backdrop-ops/contrib
|
closed
|
Application to join: dyrer (Flat Zymphonies Theme)
|
Port in progress Maintainer application
|
I am porting flat_zymphonies_theme with @klonos https://github.com/dyrer/flat_zymphonies
I also requested permissions at https://www.drupal.org/node/2793919
|
True
|
Application to join: dyrer (Flat Zymphonies Theme) - I am porting flat_zymphonies_theme with @klonos https://github.com/dyrer/flat_zymphonies
I also requested permissions at https://www.drupal.org/node/2793919
|
main
|
application to join dyrer flat zymphonies theme i am porting flat zymphonies theme with klonos i also requested permissions
| 1
|
3,290
| 12,624,293,692
|
IssuesEvent
|
2020-06-14 05:10:33
|
short-d/short
|
https://api.github.com/repos/short-d/short
|
closed
|
[Refactor] Change ReCaptcha API body to a map
|
maintainability
|
**What is frustrating you?**
With a manually concatenated string, it's error-prone to add a new query parameter to reCaptcha.
**Your solution**
Change the manually concatenated string to a map.
|
True
|
[Refactor] Change ReCaptcha API body to a map - **What is frustrating you?**
With a manually concatenated string, it's error-prone to add a new query parameter to reCaptcha.
**Your solution**
Change the manually concatenated string to a map.
|
main
|
change recaptcha api body to a map what is frustrating you with manually concatenated string it s error prone to add new query parameter to recaptcha your solution change the manually concatenated string to a map
| 1
|
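The refactor requested above — replacing a hand-concatenated query string with a map — can be sketched like this (shown in Python for illustration; the short-d/short project itself is Go, and the function and parameter names here are hypothetical, not the project's actual API):

```python
from urllib.parse import urlencode

def build_recaptcha_body(secret, response, remote_ip=None):
    """Build the reCAPTCHA verify request body from a dict instead of
    manual string concatenation, so new parameters are easy to add."""
    params = {"secret": secret, "response": response}
    if remote_ip:
        params["remoteip"] = remote_ip
    return urlencode(params)  # values are percent-escaped automatically
```

Adding another parameter is now one dict entry rather than another error-prone string append, and escaping comes for free.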
4,649
| 24,075,842,536
|
IssuesEvent
|
2022-09-18 19:40:26
|
beyarkay/eskom-calendar
|
https://api.github.com/repos/beyarkay/eskom-calendar
|
closed
|
Possible error in parsing observed for CPT region 7
|
waiting-on-maintainer
|
Hi Boyd
Thank you for your app. Imported into Google Calendar.
Seems to be a "duplication" error for some stage 5, CPT region 7.
Today the power was off 12:00-14:00 (first pic)
Second "duplication" tomorrow as exaple


|
True
|
Possible error in parsing observed for CPT region 7 -
Hi Boyd
Thank you for your app. Imported into Google Calendar.
Seems to be a "duplication" error for some stage 5, CPT region 7.
Today the power was off 12:00-14:00 (first pic)
Second "duplication" tomorrow as exaple


|
main
|
possible error in parsing observed for cpt region hi boyd thank you for your app imported into google calander seems to be a duplication error for some stage cpt region today the power was off first pic second duplication tomorrow as exaple
| 1
|
120,401
| 25,788,106,160
|
IssuesEvent
|
2022-12-09 23:04:28
|
Azure/azure-sdk-for-java
|
https://api.github.com/repos/Azure/azure-sdk-for-java
|
opened
|
Look at Adding Spotless as Part of Build
|
EngSys Java Source Code Rules
|
Look at adding [Spotless](https://github.com/diffplug/spotless) as part of the normal build to add automated code formatting. Doing this will let us ensure the Checkstyle and Spotbugs rules are met while providing a standard code format.
|
1.0
|
Look at Adding Spotless as Part of Build - Look at adding [Spotless](https://github.com/diffplug/spotless) as part of the normal build to add automated code formatting. Doing this will let us ensure the Checkstyle and Spotbugs rules are met while providing a standard code format.
|
non_main
|
look at adding spotless as part of build look at adding as part of the normal build to add automated code formatting to build doing this will allow us to ensure checkstyle and spotbugs rules are being met while providing a standard code format
| 0
|
360
| 3,315,292,231
|
IssuesEvent
|
2015-11-06 11:10:27
|
Homebrew/homebrew
|
https://api.github.com/repos/Homebrew/homebrew
|
closed
|
Possible way to handle sandbox issues for Postgres's plugins
|
help wanted maintainer feedback sandbox upstream issue
|
As can be seen in https://github.com/Homebrew/homebrew/pull/41962 and many other PRs, all of Postgres's plugins are broken under sandbox. Moreover, this means all of them are broken during `upgrade/unlink/link/switch` etc.
Considering the number of plugins for Postgres, vendoring all of them will soon become unscalable. However, until it's fixed/supported by upstream (See https://github.com/Homebrew/homebrew/issues/10247), Postgres is inherently hostile to Homebrew-style sandboxing where several components are symlinked into a common prefix.
Since there isn't any perfect solution, we may just have to accept some hacky middle ground. AFAIK, NixOS handles this by copying all of the binaries directly to the common prefix, hence breaking its symlink sandbox as well. We may take a similar approach:
* Compile Postgres as usual.
* Move all of binaries in `prefix/bin` to `prefix/libexec/bin-backup`.
* Hard link binaries `prefix/libexec/bin-backup` to `HOMEBREW_PREFIX/bin` during `post_install`.
Clearly, it's still breaking our symlink system. But at least, it can work under sandbox.
Any objections/suggestions/comments? Or should we just vendor all of them inside one mega formula?
cc @mikemcquaid @DomT4
|
True
|
Possible way to handle sandbox issues for Postgres's plugins - As can be seen in https://github.com/Homebrew/homebrew/pull/41962 and many other PRs, all of Postgres's plugins are broken under sandbox. Moreover, this means all of them are broken during `upgrade/unlink/link/switch` etc.
Considering the number of plugins for Postgres, vendoring all of them will soon become unscalable. However, until it's fixed/supported by upstream (See https://github.com/Homebrew/homebrew/issues/10247), Postgres is inherently hostile to Homebrew-style sandboxing where several components are symlinked into a common prefix.
Since there isn't any perfect solution, we may just have to accept some hacky middle ground. AFAIK, NixOS handles this by copying all of the binaries directly to the common prefix, hence breaking its symlink sandbox as well. We may take a similar approach:
* Compile Postgres as usual.
* Move all of binaries in `prefix/bin` to `prefix/libexec/bin-backup`.
* Hard link binaries `prefix/libexec/bin-backup` to `HOMEBREW_PREFIX/bin` during `post_install`.
Clearly, it's still breaking our symlink system. But at least, it can work under sandbox.
Any objections/suggestions/comments? Or should we just vendor all of them inside one mega formula?
cc @mikemcquaid @DomT4
|
main
|
possible way to handle sandbox issues for postgres s plugins as we can seen in and many others prs all of postgres s plugins are broken under sandbox moreover this means all of them are broken during upgrade unlink link switch etc considering the amount of plugins for postgres vending all of them will soon become unscalable however until it s fixed supported by upstream see postgres is inherently hostile to homebrew style sandboxing where several components are symlinked into a common prefix since there isn t any perfect solution we may will just accept some hacking middle ground afaik nixos handles this by copying all of binaries directly to common prefix hence breaking its symlink sandbox as well we may take some similar approach compile postgres as usual move all of binaries in prefix bin to prefix libexec bin backup hard link binaries prefix libexec bin backup to homebrew prefix bin during post install clearly it s still breaking our symlink system but at least it can work under sandbox any objection suggestion commments or should we just vendor all of them inside one mega formula cc mikemcquaid
| 1
|
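The proposed `post_install` hard-linking step could be sketched roughly like this (a Python sketch under the stated assumptions; Homebrew itself is Ruby, and the directory names follow the issue's example paths):

```python
import os

def hard_link_binaries(backup_dir, target_dir):
    """Hard-link every file in backup_dir (e.g. prefix/libexec/bin-backup)
    into target_dir (e.g. HOMEBREW_PREFIX/bin), as the proposed
    post_install step would do."""
    os.makedirs(target_dir, exist_ok=True)
    linked = []
    for name in os.listdir(backup_dir):
        src = os.path.join(backup_dir, name)
        dst = os.path.join(target_dir, name)
        if os.path.isfile(src) and not os.path.exists(dst):
            os.link(src, dst)  # hard link: both paths see one real file
            linked.append(name)
    return sorted(linked)
```

As the issue notes, this keeps things working under sandbox but still bypasses the normal symlink system: the files in both locations are the same inode, so there is nothing for `brew unlink` to track.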
1,935
| 6,609,881,919
|
IssuesEvent
|
2017-09-19 15:50:54
|
Kristinita/Erics-Green-Room
|
https://api.github.com/repos/Kristinita/Erics-Green-Room
|
closed
|
[Feature request] Show that an answer was close for all correct answer variants
|
need-maintainer
|
### 1. Request
It would be nice if the line
```markdown
Похоже, что ваш ответ "<ответ>" почти правилен, но содержит опечатку.
```
("Looks like your answer "<answer>" is almost correct, but contains a typo.") were shown when a player makes a typo in any of the correct answer variants, not only the very first one.
### 2. Data
The question:
```markdown
Оман*Риал*Риал Омани*-proof-http://www.numizm.ru/html/b/bayza.html
```
How the question played out in the room:
```markdown
[9:13:08 PM] <GREEN>
Вопрос №45 из 123:
--------------------------------------------------------
Оман
--------------------------------------------------------
[9:13:16 PM] <орнитоптера_Королевы_Александры> реал омани
[9:13:16 PM] <GREEN> Нет, не 'реал омани'
[9:13:19 PM] <орнитоптера_Королевы_Александры> риал омани
[9:13:19 PM] <GREEN> орнитоптера_Королевы_Александры - даёт правильный ответ
[9:13:19 PM] <GREEN> Правильный ответ: "Риал"
[9:13:19 PM] <GREEN> Источник: http://www.numizm.ru/html/b/bayza.html
```
Despite only 1 wrong character out of the 10 in the phrase, the `Похоже, что ваш ответ ...` notice never appeared.
### 3. Rationale
1. A player answering in the room cannot be expected to remember which of the answers comes first in the question pack.
1. At the moment, when you see `Нет, не <ответ>` ("No, not <answer>"), you cannot tell whether the answer is simply wrong or you made a typo in an answer that is not listed first. The near-answer detection algorithm becomes somewhat useless.
1. If **#2** is implemented, a player who makes a typo in an answer that is not listed first will lose points outright. Losing answers because of typos seems rather unfair, I think.
Thank you.
|
True
|
[Feature request] Show that an answer was close for all correct answer variants - ### 1. Request
It would be nice if the line
```markdown
Похоже, что ваш ответ "<ответ>" почти правилен, но содержит опечатку.
```
("Looks like your answer "<answer>" is almost correct, but contains a typo.") were shown when a player makes a typo in any of the correct answer variants, not only the very first one.
### 2. Data
The question:
```markdown
Оман*Риал*Риал Омани*-proof-http://www.numizm.ru/html/b/bayza.html
```
How the question played out in the room:
```markdown
[9:13:08 PM] <GREEN>
Вопрос №45 из 123:
--------------------------------------------------------
Оман
--------------------------------------------------------
[9:13:16 PM] <орнитоптера_Королевы_Александры> реал омани
[9:13:16 PM] <GREEN> Нет, не 'реал омани'
[9:13:19 PM] <орнитоптера_Королевы_Александры> риал омани
[9:13:19 PM] <GREEN> орнитоптера_Королевы_Александры - даёт правильный ответ
[9:13:19 PM] <GREEN> Правильный ответ: "Риал"
[9:13:19 PM] <GREEN> Источник: http://www.numizm.ru/html/b/bayza.html
```
Despite only 1 wrong character out of the 10 in the phrase, the `Похоже, что ваш ответ ...` notice never appeared.
### 3. Rationale
1. A player answering in the room cannot be expected to remember which of the answers comes first in the question pack.
1. At the moment, when you see `Нет, не <ответ>` ("No, not <answer>"), you cannot tell whether the answer is simply wrong or you made a typo in an answer that is not listed first. The near-answer detection algorithm becomes somewhat useless.
1. If **#2** is implemented, a player who makes a typo in an answer that is not listed first will lose points outright. Losing answers because of typos seems rather unfair, I think.
Thank you.
|
main
|
отображение что вариант близкий для всех правильных ответов запрос неплохо было бы если строка markdown похоже что ваш ответ почти правилен но содержит опечатку показывалась бы если игрок совершил опечатку в любом из правильных вариантов ответа а не только самом первом данные вопрос markdown оман риал риал омани proof отыгрыш вопроса в комнате markdown вопрос № из оман реал омани нет не реал омани риал омани орнитоптера королевы александры даёт правильный ответ правильный ответ риал источник несмотря на лишь ошибку в символах фразы похоже что ваш ответ не прозвучало аргументация отвечающий в комнате не обязан помнить какой из ответов стоит в пакете на первом месте в настоящее время когда видишь нет не не знаешь либо ответ совсем неправильный или ты опечатался на ответе который стоит не первым алгоритм определения близких ответов становится немного бесполезным если будет реализовано игрок опечатавшийся на ответе который указан не первым будет вообще терять очки лишаться ответов из за опечаток полагаю не очень справедливо спасибо
| 1
|
611
| 4,106,165,816
|
IssuesEvent
|
2016-06-06 07:29:20
|
Particular/NServiceBus.SqlServer
|
https://api.github.com/repos/Particular/NServiceBus.SqlServer
|
closed
|
QueryPeeker fails with long running message handlers
|
Tag: Maintainer Prio Type: Bug
|
Due to the fact that a read from the queue puts a lock on the table row, `QueuePeeker` is not able to run the count query (because the count takes a lock on the whole index).
### Possible Solutions
- Add a `(nolock)` hint to the count query
- Add a `READPAST` hint to the count query
### Approach taken
The change proposed in the `PR` adds a `READPAST` hint to the `Select` query. It enables skipping rows that have been locked when reading messages from the queue. It has the advantage over `NOLOCK` that it prevents dirty reads.
NOTE: it cannot be used in the `SNAPSHOT` or `SERIALIZABLE` isolation levels, but that should be a separate fix made as part of #246
ping @Particular/sqlserver-transport-maintainers
|
True
|
QueryPeeker fails with long running message handlers - Due to the fact that a read from the queue puts a lock on the table row, `QueuePeeker` is not able to run the count query (because the count takes a lock on the whole index).
### Possible Solutions
- Add a `(nolock)` hint to the count query
- Add a `READPAST` hint to the count query
### Approach taken
The change proposed in the `PR` adds a `READPAST` hint to the `Select` query. It enables skipping rows that have been locked when reading messages from the queue. It has the advantage over `NOLOCK` that it prevents dirty reads.
NOTE: it cannot be used in the `SNAPSHOT` or `SERIALIZABLE` isolation levels, but that should be a separate fix made as part of #246
ping @Particular/sqlserver-transport-maintainers
|
main
|
querypeeker fails with long running message handlers due to the fact that read from the queue puts lock on the table row queuepeeker is not able to do the count query because the count does lock on the whole index possible solutions add nolock hit to count query add readpast hit to count query approach taken change proposed in pr adds readpast hint to select query it enables skipping row that has been locked when reading messages from the queue it has the advantage from nolock that prevents dirty reads note it cannot be used both in snapshot and serializable isolation levels but that should be a separate fix made as part of ping particular sqlserver transport maintainers
| 1
|
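A minimal illustration of the hint combination discussed in the row above; the table and column names are hypothetical, not the transport's actual schema:

```sql
-- Read one message without blocking on rows another receiver holds locked.
-- READPAST skips locked rows; unlike NOLOCK it never returns dirty reads.
SELECT TOP (1) Id, Headers, Body
FROM dbo.SomeEndpointQueue WITH (UPDLOCK, READPAST, ROWLOCK);
```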
3,055
| 11,440,210,696
|
IssuesEvent
|
2020-02-05 09:12:13
|
precice/precice
|
https://api.github.com/repos/precice/precice
|
opened
|
Introduce a Tweakable Assertion Policy
|
good first issue maintainability
|
# Problem description
Currently, assertions are disabled in non-debug builds.
Using assertions in release builds is a trade-off between:
* higher security and trust in the results at the cost of additional work
* less work and checks at the cost of security.
What a user/developer prefers depends on the use-case and we should not enforce a choice.
# Potential Solution
Provide the additional CMake Variable `PRECICE_ASSERTION_POLICY`:
Choice | Comment
--- | ---
`ON` | Always enable assertions
`DEBUG` | Enable assertions on debug builds only **default**
`OFF` | Never enable assertions
|
True
|
Introduce a Tweakable Assertion Policy - # Problem description
Currently, assertions are disabled in non-debug builds.
Using assertions in release builds is a trade-off between:
* higher security and trust in the results at the cost of additional work
* less work and checks at the cost of security.
What a user/developer prefers depends on the use-case and we should not enforce a choice.
# Potential Solution
Provide the additional CMake Variable `PRECICE_ASSERTION_POLICY`:
Choice | Comment
--- | ---
`ON` | Always enable assertions
`DEBUG` | Enable assertions on debug builds only **default**
`OFF` | Never enable assertions
|
main
|
introduce a tweakable assertion policy problem description currently assertions are disabled in non debug builds using assertions in release builds is a trade off between higher security and trust in the results at the cost of additional work less work and checks at the cost of security what a user developer prefers depends on the use case and we should not enforce a choice potential solution provide the additional cmake variable precice assertion policy choice comment on always enable assertions debug enable assertions on debug builds only default off never enable assertions
| 1
|
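One hypothetical way to wire the proposed variable in CMake (the macro name `PRECICE_ASSERTIONS_ENABLED` is illustrative, not the project's actual define):

```cmake
set(PRECICE_ASSERTION_POLICY "DEBUG" CACHE STRING
    "When to enable assertions: ON, DEBUG, or OFF")
set_property(CACHE PRECICE_ASSERTION_POLICY PROPERTY STRINGS ON DEBUG OFF)

if(PRECICE_ASSERTION_POLICY STREQUAL "ON")
  add_compile_definitions(PRECICE_ASSERTIONS_ENABLED)
elseif(PRECICE_ASSERTION_POLICY STREQUAL "DEBUG")
  # Generator expression: the define is only added for Debug configurations.
  add_compile_definitions($<$<CONFIG:Debug>:PRECICE_ASSERTIONS_ENABLED>)
endif()
```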
723,571
| 24,901,547,612
|
IssuesEvent
|
2022-10-28 21:33:12
|
magento/magento2
|
https://api.github.com/repos/magento/magento2
|
closed
|
[Issue] Fix language in cookie_status.phtml
|
Component: Theme Progress: PR in progress Severity: S3 Priority: P4 Issue: ready for confirmation
|
This issue is automatically created based on existing pull request: magento/magento2#33742: Fix language in cookie_status.phtml
---------
The cookie disabled message contains a poorly-worded sentence. This is a suggested fix to the sentence. Dropped "in the case".
<!---
Thank you for contributing to Magento.
To help us process this pull request we recommend that you add the following information:
- Summary of the pull request,
- Issue(s) related to the changes made,
- Manual testing scenarios
Fields marked with (*) are required. Please don't remove the template.
-->
<!--- Please provide a general summary of the Pull Request in the Title above -->
### Description (*)
<!---
Please provide a description of the changes proposed in the pull request.
Letting us know what has changed and why it needed changing will help us validate this pull request.
-->
### Related Pull Requests
<!-- related pull request placeholder -->
### Fixed Issues (if relevant)
<!---
If relevant, please provide a list of fixed issues in the format magento/magento2#<issue_number>.
There could be 1 or more issues linked here and it will help us find some more information about the reasoning behind this change.
-->
1. Fixes magento/magento2#<issue_number>
### Manual testing scenarios (*)
<!---
Please provide a set of unambiguous steps to test the proposed code change.
Giving us manual testing scenarios will help with the processing and validation process.
-->
1. ...
2. ...
### Questions or comments
<!---
If relevant, here you can ask questions or provide comments on your pull request for the reviewer
For example if you need assistance with writing tests or would like some feedback on one of your development ideas
-->
### Contribution checklist (*)
- [ ] Pull request has a meaningful description of its purpose
- [ ] All commits are accompanied by meaningful commit messages
- [ ] All new or changed code is covered with unit/integration tests (if applicable)
- [ ] README.md files for modified modules are updated and included in the pull request if any [README.md predefined sections](https://github.com/magento/devdocs/wiki/Magento-module-README.md) require an update
- [ ] All automated tests passed successfully (all builds are green)
|
1.0
|
[Issue] Fix language in cookie_status.phtml - This issue is automatically created based on existing pull request: magento/magento2#33742: Fix language in cookie_status.phtml
---------
The cookie disabled message contains a poorly-worded sentence. This is a suggested fix to the sentence. Dropped "in the case".
<!---
Thank you for contributing to Magento.
To help us process this pull request we recommend that you add the following information:
- Summary of the pull request,
- Issue(s) related to the changes made,
- Manual testing scenarios
Fields marked with (*) are required. Please don't remove the template.
-->
<!--- Please provide a general summary of the Pull Request in the Title above -->
### Description (*)
<!---
Please provide a description of the changes proposed in the pull request.
Letting us know what has changed and why it needed changing will help us validate this pull request.
-->
### Related Pull Requests
<!-- related pull request placeholder -->
### Fixed Issues (if relevant)
<!---
If relevant, please provide a list of fixed issues in the format magento/magento2#<issue_number>.
There could be 1 or more issues linked here and it will help us find some more information about the reasoning behind this change.
-->
1. Fixes magento/magento2#<issue_number>
### Manual testing scenarios (*)
<!---
Please provide a set of unambiguous steps to test the proposed code change.
Giving us manual testing scenarios will help with the processing and validation process.
-->
1. ...
2. ...
### Questions or comments
<!---
If relevant, here you can ask questions or provide comments on your pull request for the reviewer
For example if you need assistance with writing tests or would like some feedback on one of your development ideas
-->
### Contribution checklist (*)
- [ ] Pull request has a meaningful description of its purpose
- [ ] All commits are accompanied by meaningful commit messages
- [ ] All new or changed code is covered with unit/integration tests (if applicable)
- [ ] README.md files for modified modules are updated and included in the pull request if any [README.md predefined sections](https://github.com/magento/devdocs/wiki/Magento-module-README.md) require an update
- [ ] All automated tests passed successfully (all builds are green)
|
non_main
|
fix language in cookie status phtml this issue is automatically created based on existing pull request magento fix language in cookie status phtml the cookie disabled message contains a poorly worded sentence this is a suggested fix to the sentence dropped in the case thank you for contributing to magento to help us process this pull request we recommend that you add the following information summary of the pull request issue s related to the changes made manual testing scenarios fields marked with are required please don t remove the template description please provide a description of the changes proposed in the pull request letting us know what has changed and why it needed changing will help us validate this pull request related pull requests fixed issues if relevant if relevant please provide a list of fixed issues in the format magento there could be or more issues linked here and it will help us find some more information about the reasoning behind this change fixes magento manual testing scenarios please provide a set of unambiguous steps to test the proposed code change giving us manual testing scenarios will help with the processing and validation process questions or comments if relevant here you can ask questions or provide comments on your pull request for the reviewer for example if you need assistance with writing tests or would like some feedback on one of your development ideas contribution checklist pull request has a meaningful description of its purpose all commits are accompanied by meaningful commit messages all new or changed code is covered with unit integration tests if applicable readme md files for modified modules are updated and included in the pull request if any require an update all automated tests passed successfully all builds are green
| 0
|
2,000
| 6,716,546,198
|
IssuesEvent
|
2017-10-14 09:44:21
|
openpsych/django
|
https://api.github.com/repos/openpsych/django
|
opened
|
Cron Job for Automatic SSL Renewal
|
Area: Server/Host Maintainance
|
A daily cron job needs to be set up to check the status of the LetsEncrypt SSL certificates and renew them when necessary.
When the certificates are renewed, a notification email should be sent to @Deleetdk at the.dfx@gmail.com.
|
True
|
Cron Job for Automatic SSL Renewal - A daily cron job needs to be set up to check the status of the LetsEncrypt SSL certificates and renew them when necessary.
When the certificates are renewed, a notification email should be sent to @Deleetdk at the.dfx@gmail.com.
|
main
|
cron job for automatic ssl renewal a daily cron job needs to be setup to check the status of the letsencrypt ssl certificates and renew them when necessary when the certificates are renewed a notification email should be sent to deleetdk at the dfx gmail com
| 1
|
5,325
| 26,896,368,855
|
IssuesEvent
|
2023-02-06 12:44:18
|
centerofci/mathesar
|
https://api.github.com/repos/centerofci/mathesar
|
closed
|
common_data should not contain content the user does not have access to
|
type: bug work: backend status: ready restricted: maintainers
|
## Description
* Our templates render a `common_data` json that contains the following:
* `current_db`
* `current_schema`
* `schemas`
* `databases`
* `tables`
* `queries`
* `abstract_types`
* `live_demo_mode`
The `databases`, `schemas`, `tables`, and `queries` properties of this list have to be provided taking into account the user's permission levels. Currently, they contain everything.
|
True
|
common_data should not contain content the user does not have access to - ## Description
* Our templates render a `common_data` json that contains the following:
* `current_db`
* `current_schema`
* `schemas`
* `databases`
* `tables`
* `queries`
* `abstract_types`
* `live_demo_mode`
The `databases`, `schemas`, `tables`, and `queries` properties of this list have to be provided taking into account the user's permission levels. Currently, they contain everything.
|
main
|
common data should not contain content the user does not have access to description our templates render a common data json that contains the following current db current schema schemas databases tables queries abstract types live demo mode the databases schemas tables and queries properties of this list have to be provided taking into account the user s permission levels currently then contain everything
| 1
|
4,113
| 19,529,524,888
|
IssuesEvent
|
2021-12-30 14:15:18
|
NixOS/nixpkgs
|
https://api.github.com/repos/NixOS/nixpkgs
|
closed
|
gpgme needs a new maintainer
|
9.needs: maintainer
|
I removed myself as maintainer in #128098. Would anyone be interested to maintain it?
|
True
|
gpgme needs a new maintainer - I removed myself as maintainer in #128098. Would anyone be interested to maintain it?
|
main
|
gpgme needs a new maintainer i removed myself as maintainer in would anyone be interested to maintain it
| 1
|
939
| 4,652,274,009
|
IssuesEvent
|
2016-10-03 13:31:52
|
ansible/ansible-modules-core
|
https://api.github.com/repos/ansible/ansible-modules-core
|
closed
|
Fail to check package version
|
affects_2.1 bug_report waiting_on_maintainer
|
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
```docker_image``` module.
##### ANSIBLE VERSION
```bash
$ ansible --version
ansible 2.1.2.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
Default file used.
##### OS / ENVIRONMENT
Docker Container (hosted by Debian Jessie).
##### SUMMARY
When I want to pull a docker image, Ansible reports that the installed docker-py package is ```1.10.3``` whereas the minimum required is ```1.7.0```.
##### STEPS TO REPRODUCE
```bash
$ ansible -m docker_image -a "name=nginx pull=yes" foo
foo | FAILED! => {
"changed": false,
"failed": true,
"msg": "Error: docker-py version is 1.10.3. Minimum version required is 1.7.0."
}
```
|
True
|
Fail to check package version - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
```docker_image``` module.
##### ANSIBLE VERSION
```bash
$ ansible --version
ansible 2.1.2.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
Default file used.
##### OS / ENVIRONMENT
Docker Container (hosted by Debian Jessie).
##### SUMMARY
When I want to pull a docker image, Ansible reports that the installed docker-py package is ```1.10.3``` whereas the minimum required is ```1.7.0```.
##### STEPS TO REPRODUCE
```bash
$ ansible -m docker_image -a "name=nginx pull=yes" foo
foo | FAILED! => {
"changed": false,
"failed": true,
"msg": "Error: docker-py version is 1.10.3. Minimum version required is 1.7.0."
}
```
|
main
|
fail to check package version issue type bug report component name docker image module ansible version bash ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration default file used os environment docker container hosted by debian jessie summary when i want pull a docker image ansible reports that docker py package installed is whereas minimum required is steps to reproduce bash ansible m docker image a name nginx pull yes foo foo failed changed false failed true msg error docker py version is minimum version required is
| 1
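The failure in the row above is consistent with a lexicographic string comparison of version numbers: as plain strings, `"1.10.3"` sorts before `"1.7.0"`, so a naive check rejects the newer release. A minimal sketch of the bug shape and the fix (the helper name `version_ok` is hypothetical, not Ansible's actual code):

```python
def version_ok(installed: str, minimum: str) -> bool:
    """Compare dotted version strings numerically, not lexicographically."""
    def to_tuple(v: str) -> tuple:
        return tuple(int(part) for part in v.split("."))
    return to_tuple(installed) >= to_tuple(minimum)

# Naive string comparison gets this wrong: '1' < '7' at the third character.
assert ("1.10.3" >= "1.7.0") is False
# Numeric tuple comparison gets it right: (1, 10, 3) >= (1, 7, 0).
assert version_ok("1.10.3", "1.7.0") is True
```

In real code a library such as `packaging.version` is the safer choice, since it also handles pre-release suffixes that a plain tuple split would choke on.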
|
6,393
| 14,498,679,951
|
IssuesEvent
|
2020-12-11 15:48:09
|
ratchetphp/Ratchet
|
https://api.github.com/repos/ratchetphp/Ratchet
|
opened
|
Roadmap
|
architecture docs enhancement feature
|
## v0.5
Updates to the next release of Ratchet will be made against the [v0.5 branch](https://github.com/ratchetphp/Ratchet/tree/v0.5). This version will add some functionality, including a transition period, while keeping backwards compatibility. Key features for this version include:
- WebSocket deflate support. Off by default. A new optional parameter to be added to `WsServer` to enable compression.
- `ConnectionInterface` will implement [PSR-11's `ContainerInterface`](https://www.php-fig.org/psr/psr-11/). Properties from Components will be accessible via `$conn->get('HTTP.request')` as well as the current magic methodical way of `$conn->HTTP->request`.
- Update dependencies to work with all React 1.0 libraries. We will support a range of what's supported now (0.x versions) up to 1.0. A couple of their APIs have changed in 1.0 so this may be a BC break for some people if they're also using React in their projects, hence maintaining support for the old version as well
- Add TLS support to the App Facade (#848)
- Consider adopting [PSR-12](https://www.php-fig.org/psr/psr-12/) in the form of a pre-commit hook or GitHub action to auto-format so the code base is consistent without having to think about it
## v0.6/v1.0
This version will not include any new features but have backwards compatibility breaks from old code.
- Remove the magic accessors from `ConnectionInterface`. All properties set by Components are to be accessed via `ContainerInterface` methods. This will be a syntactic BC break but not an architectural one.
- New version of PHP requirement (discussions to be had around which version this should be)
- Transition return type declarations on all methods from Docblocks to language
- Session and WAMP components will be moved to their own repositories
- Drop support for pre 1.0 version of React dependencies
- Determine optimal target version of Symfony libraries
|
1.0
|
Roadmap - ## v0.5
Updates to the next release of Ratchet will be made against the [v0.5 branch](https://github.com/ratchetphp/Ratchet/tree/v0.5). This version will add some functionality, including a transition period, while keeping backwards compatibility. Key features for this version include:
- WebSocket deflate support. Off by default. A new optional parameter to be added to `WsServer` to enable compression.
- `ConnectionInterface` will implement [PSR-11's `ContainerInterface`](https://www.php-fig.org/psr/psr-11/). Properties from Components will be accessible via `$conn->get('HTTP.request')` as well as the current magic methodical way of `$conn->HTTP->request`.
- Update dependencies to work with all React 1.0 libraries. We will support a range of what's supported now (0.x versions) up to 1.0. A couple of their APIs have changed in 1.0 so this may be a BC break for some people if they're also using React in their projects, hence maintaining support for the old version as well
- Add TLS support to the App Facade (#848)
- Consider adopting [PSR-12](https://www.php-fig.org/psr/psr-12/) in the form of a pre-commit hook or GitHub action to auto-format so the code base is consistent without having to think about it
## v0.6/v1.0
This version will not include any new features but have backwards compatibility breaks from old code.
- Remove the magic accessors from `ConnectionInterface`. All properties set by Components are to be accessed via `ContainerInterface` methods. This will be a syntactic BC break but not an architectural one.
- New version of PHP requirement (discussions to be had around which version this should be)
- Transition return type declarations on all methods from Docblocks to language
- Session and WAMP components will be moved to their own repositories
- Drop support for pre 1.0 version of React dependencies
- Determine optimal target version of Symfony libraries
|
non_main
|
roadmap updates to the next release of ratchet will be made against the this version will add some functionality including a transition period while keeping backwards compatibility key features for this version include websocket deflate support off by default a new optional parameter to be added to wsserver to enable compression connectioninterface will implement properties from components will be accessible via conn get http request as well as the current magic methodical way of conn http request update dependencies to work with all react libraries we will support a range of what s supported now x versions up to a couple of their apis have changed in so this may be a bc break for some people if they re also using react in their projects hence maintaining support for the old version as well add tls support to the app facade consider adopting in the form of a pre commit hook or github action to auto format so the code base is consistent without having to think about it this version will not include any new features but have backwards compatibility breaks from old code remove the magic accessors from connectioninterface all properties set by components are to be access via containerinterface methods this will be a syntactic bc break but not an architectural one new version of php requirement discussions to be had around which version this should be transition return type declarations on all methods from docblocks to language session and wamp components will be moved to their own repositories drop support for pre version of react dependencies determine optimal target version of symfony libraries
| 0
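The PSR-11 change described in the roadmap row above replaces magic property access (`$conn->HTTP->request`) with an explicit `get()` lookup. A rough Python analogue of that container style (class and key names are illustrative, not Ratchet's actual API):

```python
class ConnectionContainer:
    """Sketch of a PSR-11-style container: explicit set/has/get lookups."""
    def __init__(self) -> None:
        self._entries: dict = {}

    def set(self, key: str, value: object) -> None:
        self._entries[key] = value

    def has(self, key: str) -> bool:
        return key in self._entries

    def get(self, key: str) -> object:
        # PSR-11 mandates a NotFoundException for unknown ids;
        # KeyError stands in for it here.
        if key not in self._entries:
            raise KeyError(f"No entry named {key!r}")
        return self._entries[key]

conn = ConnectionContainer()
conn.set("HTTP.request", {"path": "/chat"})
assert conn.has("HTTP.request")
assert conn.get("HTTP.request") == {"path": "/chat"}
```

The explicit `get()` makes every lookup greppable and lets the container enforce a not-found contract, which is exactly what magic `__get` accessors obscure.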
|
269,478
| 8,436,026,646
|
IssuesEvent
|
2018-10-17 14:30:35
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
www.odiamusic.com - see bug description
|
browser-firefox priority-normal
|
<!-- @browser: Firefox 63.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 6.1; rv:63.0) Gecko/20100101 Firefox/63.0 -->
<!-- @reported_with: desktop-reporter -->
**URL**: http://www.odiamusic.com/omplayer/OMPlayerFv12.html
**Browser / Version**: Firefox 63.0
**Operating System**: Windows 7
**Tested Another Browser**: Yes
**Problem type**: Something else
**Description**: It's not Downloading...
**Steps to Reproduce**:
[](https://webcompat.com/uploads/2018/10/53ede9ff-c912-410e-b77f-1fb05a10f9fe.jpg)
<details>
<summary>Browser Configuration</summary>
<ul>
<li>mixed active content blocked: false</li><li>buildID: 20181011200118</li><li>tracking content blocked: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.all: false</li><li>mixed passive content blocked: false</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>channel: beta</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
www.odiamusic.com - see bug description - <!-- @browser: Firefox 63.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 6.1; rv:63.0) Gecko/20100101 Firefox/63.0 -->
<!-- @reported_with: desktop-reporter -->
**URL**: http://www.odiamusic.com/omplayer/OMPlayerFv12.html
**Browser / Version**: Firefox 63.0
**Operating System**: Windows 7
**Tested Another Browser**: Yes
**Problem type**: Something else
**Description**: It's not Downloading...
**Steps to Reproduce**:
[](https://webcompat.com/uploads/2018/10/53ede9ff-c912-410e-b77f-1fb05a10f9fe.jpg)
<details>
<summary>Browser Configuration</summary>
<ul>
<li>mixed active content blocked: false</li><li>buildID: 20181011200118</li><li>tracking content blocked: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.all: false</li><li>mixed passive content blocked: false</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>channel: beta</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_main
|
see bug description url browser version firefox operating system windows tested another browser yes problem type something else description its not downloading steps to reproduce browser configuration mixed active content blocked false buildid tracking content blocked false gfx webrender blob images true gfx webrender all false mixed passive content blocked false gfx webrender enabled false image mem shared true channel beta from with ❤️
| 0
|
237,948
| 18,174,051,982
|
IssuesEvent
|
2021-09-27 23:35:48
|
teknologi-umum/bot
|
https://api.github.com/repos/teknologi-umum/bot
|
closed
|
prepare for hacktoberfest
|
documentation
|
- [x] create CONTRIBUTING.md
- [ ] create pull request template - PR without issue would be rejected
- [x] put hacktoberfest tag
- [ ] put a few hacktoberfest guides on readme such as:
- https://twitter.com/sudo_navendu/status/1437456596473303042
- https://www.digitalocean.com/community/tutorials/hacktoberfest-contributor-s-guide-how-to-find-and-contribute-to-open-source-projects
|
1.0
|
prepare for hacktoberfest - - [x] create CONTRIBUTING.md
- [ ] create pull request template - PR without issue would be rejected
- [x] put hacktoberfest tag
- [ ] put a few hacktoberfest guides on readme such as:
- https://twitter.com/sudo_navendu/status/1437456596473303042
- https://www.digitalocean.com/community/tutorials/hacktoberfest-contributor-s-guide-how-to-find-and-contribute-to-open-source-projects
|
non_main
|
prepare for hacktoberfest create contributing md create pull request template pr without issue would be rejected put hacktoberfest tag put a few hacktoberfest guides on readme such as
| 0
|
81,275
| 10,115,945,383
|
IssuesEvent
|
2019-07-30 23:40:19
|
microsoft/microsoft-ui-xaml
|
https://api.github.com/repos/microsoft/microsoft-ui-xaml
|
reopened
|
Proposal: Update hover visual
|
area-UIDesign feature proposal
|
## Summary
Update the hover visual to align more with the depth model. When item is hovered over, the item is lifted. When you get closer to the item, the color becomes lighter.
## Rationale
* The button today indicates hover via grid border lines. Removed the border and using color is more coherent and consistent.
* Follows the other control updates being proposed.
|
1.0
|
Proposal: Update hover visual - ## Summary
Update the hover visual to align more with the depth model. When item is hovered over, the item is lifted. When you get closer to the item, the color becomes lighter.
## Rationale
* The button today indicates hover via grid border lines. Removed the border and using color is more coherent and consistent.
* Follows the other control updates being proposed.
|
non_main
|
proposal update hover visual summary update the hover visual to align more with the depth model when item is hovered over the item is lifted when you get closer to the item the color becomes lighter rationale the button today indicates hover via grid border lines removed the border and using color is more coherent and consistent follows the other control updates being proposed
| 0
|
24,322
| 3,963,712,438
|
IssuesEvent
|
2016-05-02 21:25:10
|
netty/netty
|
https://api.github.com/repos/netty/netty
|
closed
|
Bug in AbstractMemcacheObjectEncoder with FullMemcacheMessage(s)
|
defect
|
Netty 4.1.0-SNAPSHOT
There is a bug in [AbstractMemcacheObjectEncoder](https://github.com/netty/netty/blob/4.1/codec-memcache/src/main/java/io/netty/handler/codec/memcache/AbstractMemcacheObjectEncoder.java) on line 48.
The `encode()` method returns and the `FullBinaryMemcacheRequest|Response` `content()` gets never written.
|
1.0
|
Bug in AbstractMemcacheObjectEncoder with FullMemcacheMessage(s) - Netty 4.1.0-SNAPSHOT
There is a bug in [AbstractMemcacheObjectEncoder](https://github.com/netty/netty/blob/4.1/codec-memcache/src/main/java/io/netty/handler/codec/memcache/AbstractMemcacheObjectEncoder.java) on line 48.
The `encode()` method returns and the `FullBinaryMemcacheRequest|Response` `content()` gets never written.
|
non_main
|
bug in abstractmemcacheobjectencoder with fullmemcachemessage s netty snapshot there is a bug in on line the encode method returns and the fullbinarymemcacherequest response content gets never written
| 0
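The netty row above describes an encoder whose `encode()` returns early, so a full message's `content()` is never written. A language-agnostic sketch of that bug shape in Python (class and function names are hypothetical, not netty's API):

```python
class FullMessage:
    """A message that carries both a header and its full body."""
    def __init__(self, header: bytes, content: bytes) -> None:
        self.header = header
        self.content = content

def encode_buggy(msg: FullMessage) -> bytes:
    out = msg.header
    return out  # bug: returns before appending msg.content

def encode_fixed(msg: FullMessage) -> bytes:
    # Write the header, then the full message's content.
    return msg.header + msg.content

msg = FullMessage(b"HDR", b"BODY")
assert encode_buggy(msg) == b"HDR"       # content silently dropped
assert encode_fixed(msg) == b"HDRBODY"
```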
|
12,471
| 8,682,500,773
|
IssuesEvent
|
2018-12-02 09:05:07
|
istio/istio
|
https://api.github.com/repos/istio/istio
|
closed
|
TCP proxy from istio gateway to MTLS service fails.
|
area/networking area/security stale
|
I got a TCP proxy working great, with mtls disabled.
Same service, with a DestinationRule and policy setting mtls - fails. Spent some time debugging, will restart next week - we need to sort it out for 1.0
Please try the latest 1.0 build and create a TCP proxy with mtls enabled for any service, let me know if you get it working. (one way to do it is to declare the port as tcp on fortio, or use the test app)
|
True
|
TCP proxy from istio gateway to MTLS service fails. - I got a TCP proxy working great, with mtls disabled.
Same service, with a DestinationRule and policy setting mtls - fails. Spent some time debugging, will restart next week - we need to sort it out for 1.0
Please try the latest 1.0 build and create a TCP proxy with mtls enabled for any service, let me know if you get it working. (one way to do it is to declare the port as tcp on fortio, or use the test app)
|
non_main
|
tcp proxy from istio gateway to mtls service fails i got a tcp proxy working great with mtls disabled same service with a destinationrule and policy setting mtls fails spent some time debugging will restart next week we need to sort it out for please try lastest build and create a tcp proxy with mtls enabled for any service let me know if you get it working one way to do it is to declare the port as tcp on fortio or use the test app
| 0
|
53
| 2,490,598,661
|
IssuesEvent
|
2015-01-02 17:19:42
|
10up/ElasticPress
|
https://api.github.com/repos/10up/ElasticPress
|
closed
|
Port over the autosuggest from EWP
|
enhancement high priority
|
This new plugin is missing the functionality required to make autosuggest work, let's make sure to bring it over.
|
1.0
|
Port over the autosuggest from EWP - This new plugin is missing the functionality required to make autosuggest work, let's make sure to bring it over.
|
non_main
|
port over the autosuggest from ewp this new plugin is missing the functionality required to make autosuggest work let s make sure to bring it over
| 0
|
4,761
| 24,526,196,572
|
IssuesEvent
|
2022-10-11 13:19:04
|
libp2p/js-libp2p-interfaces
|
https://api.github.com/repos/libp2p/js-libp2p-interfaces
|
closed
|
Broken pubsub type when using external pubsub library
|
kind/bug status/ready P2 need/maintainers-input
|
<!--
Thank you for reporting an issue.
This issue tracker is for bugs found within the JavaScript implementation of libp2p.
If you are asking a question about how to use libp2p, please ask on https://discuss.libp2p.io
Otherwise please fill in as much of the template below as possible.
-->
- **Version**: 0.39.2
<!--
Check package.json version
-->
- **Platform**: N/A (all platforms)
<!--
Output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows). If using in a Browser, please share the browser version as well
-->
- **Subsystem**: js-libp2p or js-libp2p-interfaces
<!--
If known, please specify affected core module name (e.g Dialer, Pubsub, Relay etc)
-->
#### Severity: Low
<!--
One of following:
Critical - System crash, application panic.
High - The main functionality of the application does not work, API breakage, repo format breakage, etc.
Medium - A non-essential functionality does not work, performance issues, etc.
Low - An optional functionality does not work.
Very Low - Translation or documentation mistake. Something that won't give anyone a bad day.
-->
#### Description:
<!--
- What you did
- What happened
- What you expected to happen
-->
As described in your example [examples/pubsub/message-filtering](https://github.com/libp2p/js-libp2p/blob/0ecc02b2a426b6dfec7b6f46d565fde41ad66954/examples/pubsub/message-filtering/README.md), when using `@chainsafe/libp2p-gossipsub`, you should be able to use `node.pubsub.topicValidators` in order to pass a function which can prevent unwanted messages from being spread. Although the `@chainsafe/libp2p-gossipsub` library does implement this feature correctly, TypeScript does not recognize this feature because your types are hardcoded instead of allowing the external library to pass their types.
This is confirmed [here](https://github.com/libp2p/js-libp2p/blob/0ecc02b2a426b6dfec7b6f46d565fde41ad66954/src/libp2p.ts#L59), where your pubsub type is assigned to the Libp2pNode class, that type definition is imported from `@libp2p/interface-pubsub` [here](https://github.com/libp2p/js-libp2p/blob/0ecc02b2a426b6dfec7b6f46d565fde41ad66954/src/libp2p.ts#L35), which is defined [here](https://github.com/libp2p/js-libp2p-interfaces/blob/399544b4e90ccc10711e86989d748b3751d52765/packages/interface-pubsub/src/index.ts#L136).
#### Steps to reproduce the error:
<!--
If possible, please provide code that demonstrates the problem, keeping it as simple and free of external dependencies as you are able
-->
- Try out [your example code](https://github.com/libp2p/js-libp2p/blob/0ecc02b2a426b6dfec7b6f46d565fde41ad66954/examples/pubsub/message-filtering/1.js) for message filtering on pubsub, using TypeScript instead of JavaScript.
|
True
|
Broken pubsub type when using external pubsub library - <!--
Thank you for reporting an issue.
This issue tracker is for bugs found within the JavaScript implementation of libp2p.
If you are asking a question about how to use libp2p, please ask on https://discuss.libp2p.io
Otherwise please fill in as much of the template below as possible.
-->
- **Version**: 0.39.2
<!--
Check package.json version
-->
- **Platform**: N/A (all platforms)
<!--
Output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows). If using in a Browser, please share the browser version as well
-->
- **Subsystem**: js-libp2p or js-libp2p-interfaces
<!--
If known, please specify affected core module name (e.g Dialer, Pubsub, Relay etc)
-->
#### Severity: Low
<!--
One of following:
Critical - System crash, application panic.
High - The main functionality of the application does not work, API breakage, repo format breakage, etc.
Medium - A non-essential functionality does not work, performance issues, etc.
Low - An optional functionality does not work.
Very Low - Translation or documentation mistake. Something that won't give anyone a bad day.
-->
#### Description:
<!--
- What you did
- What happened
- What you expected to happen
-->
As described in your example [examples/pubsub/message-filtering](https://github.com/libp2p/js-libp2p/blob/0ecc02b2a426b6dfec7b6f46d565fde41ad66954/examples/pubsub/message-filtering/README.md), when using `@chainsafe/libp2p-gossipsub`, you should be able to use `node.pubsub.topicValidators` in order to pass a function which can prevent unwanted messages from being spread. Although the `@chainsafe/libp2p-gossipsub` library does implement this feature correctly, TypeScript does not recognize this feature because your types are hardcoded instead of allowing the external library to pass their types.
This is confirmed [here](https://github.com/libp2p/js-libp2p/blob/0ecc02b2a426b6dfec7b6f46d565fde41ad66954/src/libp2p.ts#L59), where your pubsub type is assigned to the Libp2pNode class, that type definition is imported from `@libp2p/interface-pubsub` [here](https://github.com/libp2p/js-libp2p/blob/0ecc02b2a426b6dfec7b6f46d565fde41ad66954/src/libp2p.ts#L35), which is defined [here](https://github.com/libp2p/js-libp2p-interfaces/blob/399544b4e90ccc10711e86989d748b3751d52765/packages/interface-pubsub/src/index.ts#L136).
#### Steps to reproduce the error:
<!--
If possible, please provide code that demonstrates the problem, keeping it as simple and free of external dependencies as you are able
-->
- Try out [your example code](https://github.com/libp2p/js-libp2p/blob/0ecc02b2a426b6dfec7b6f46d565fde41ad66954/examples/pubsub/message-filtering/1.js) for message filtering on pubsub, using TypeScript instead of JavaScript.
|
main
|
broken pubsub type when using external pubsub library thank you for reporting an issue this issue tracker is for bugs found within the javascript implementation of if you are asking a question about how to use please ask on otherwise please fill in as much of the template below as possible version check package json version platform n a all platforms output of uname a unix or version and or bit windows if using in a browser please share the browser version as well subsystem js or js interfaces if known please specify affected core module name e g dialer pubsub relay etc severity low one of following critical system crash application panic high the main functionality of the application does not work api breakage repo format breakage etc medium a non essential functionality does not work performance issues etc low an optional functionality does not work very low translation or documentation mistake something that won t give anyone a bad day description what you did what happened what you expected to happen as described in your example when using chainsafe gossipsub you should be able to use node pubsub topicvalidators in order to pass a function which can prevent unwanted messages from being spread although the chainsafe gossipsub library does implement this feature correctly typescript does not recognize this feature because your types are hardcoded instead of allowing the external library to pass their types this is confirmed where your pubsub type is assigned to the class that type definition is imported from interface pubsub which is defined steps to reproduce the error if possible please provide code that demonstrates the problem keeping it as simple and free of external dependencies as you are able try out for message filtering on pubsub using typescript instead of javascript
| 1
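The libp2p row above describes a hardcoded pubsub type that hides members added by an external implementation. The usual fix is to parameterize the node over its pubsub type instead of pinning it to the base interface; a Python sketch of that pattern (all names are illustrative, not js-libp2p's API):

```python
from typing import Generic, TypeVar

class PubSub:
    """Base pubsub interface."""
    def publish(self, topic: str, data: bytes) -> None: ...

class GossipSub(PubSub):
    """External implementation with a feature the base interface lacks."""
    def __init__(self) -> None:
        self.topic_validators: dict = {}

P = TypeVar("P", bound=PubSub)

class Node(Generic[P]):
    # Parameterizing on P preserves the concrete pubsub type,
    # instead of hardcoding the base PubSub interface.
    def __init__(self, pubsub: P) -> None:
        self.pubsub = pubsub

node = Node(GossipSub())
# The extra member stays visible because the node's type is Node[GossipSub]:
node.pubsub.topic_validators["chat"] = lambda msg: True
assert "chat" in node.pubsub.topic_validators
```

The same idea in TypeScript would be a generic parameter on the node class with the pubsub service type flowing through, rather than the fixed `PubSub` interface the issue points at.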
|