| Unnamed: 0 (int64, 1-832k) | id (float64, 2.49B-32.1B) | type (string, 1 class) | created_at (string, length 19) | repo (string, length 7-112) | repo_url (string, length 36-141) | action (string, 3 classes) | title (string, length 3-438) | labels (string, length 4-308) | body (string, length 7-254k) | index (string, 7 classes) | text_combine (string, length 96-254k) | label (string, 2 classes) | text (string, length 96-246k) | binary_label (int64, 0-1) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
171 | 2,734,999,729 | IssuesEvent | 2015-04-18 00:43:05 | jpike/noah_ark | https://api.github.com/repos/jpike/noah_ark | closed | Replace TmxParser Library with Custom Code | maintainability | To reduce problems with keeping the TmxParser third-party library updated over time (across library versions/compiler versions), I've decided to replace it with custom code for this game's needs. | True | Replace TmxParser Library with Custom Code - To reduce problems with keeping the TmxParser third-party library updated over time (across library versions/compiler versions), I've decided to replace it with custom code for this game's needs. | main | replace tmxparser library with custom code to reduce problems with keeping the tmxparser third party library updated over time across library versions compiler versions i ve decided to replace it with custom code for this game s needs | 1 |
87,295 | 15,759,624,220 | IssuesEvent | 2021-03-31 08:07:45 | geea-develop/gwed | https://api.github.com/repos/geea-develop/gwed | opened | WS-2021-0013 (Medium) detected in laravel/framework-v5.4.36 | security vulnerability | ## WS-2021-0013 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>laravel/framework-v5.4.36</b></p></summary>
<p>The Laravel Framework.</p>
<p>Library home page: <a href="https://api.github.com/repos/laravel/framework/zipball/1062a22232071c3e8636487c86ec1ae75681bbf9">https://api.github.com/repos/laravel/framework/zipball/1062a22232071c3e8636487c86ec1ae75681bbf9</a></p>
<p>
Dependency Hierarchy:
- :x: **laravel/framework-v5.4.36** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/geea-develop/gwed/commit/4607036cc34f9286488b8eb2d2d596d33f0e75e8">4607036cc34f9286488b8eb2d2d596d33f0e75e8</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Laravel is a web application framework. Versions of Laravel before 6.20.14, 7.30.4 and 8.24.0 contain a query binding exploitation.
If a request is crafted where a field that is normally a non-array value is an array, and that input is not validated or cast to its expected type before being passed to the query builder, an unexpected number of query bindings can be added to the query. In some situations, this will simply lead to no results being returned by the query builder; however, it is possible certain queries could be affected in a way that causes the query to return unexpected results.
<p>Publish Date: 2021-02-02
<p>URL: <a href=https://github.com/laravel/framework/commit/2d9b970257bca7a176be897ec18dd5f6ffc5497f>WS-2021-0013</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-x7p5-p2c9-phvg">https://github.com/advisories/GHSA-x7p5-p2c9-phvg</a></p>
<p>Release Date: 2021-02-02</p>
<p>Fix Resolution: laravel/framework - 6.20.14, 7.30.4, 8.24.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | WS-2021-0013 (Medium) detected in laravel/framework-v5.4.36 - ## WS-2021-0013 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>laravel/framework-v5.4.36</b></p></summary>
<p>The Laravel Framework.</p>
<p>Library home page: <a href="https://api.github.com/repos/laravel/framework/zipball/1062a22232071c3e8636487c86ec1ae75681bbf9">https://api.github.com/repos/laravel/framework/zipball/1062a22232071c3e8636487c86ec1ae75681bbf9</a></p>
<p>
Dependency Hierarchy:
- :x: **laravel/framework-v5.4.36** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/geea-develop/gwed/commit/4607036cc34f9286488b8eb2d2d596d33f0e75e8">4607036cc34f9286488b8eb2d2d596d33f0e75e8</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Laravel is a web application framework. Versions of Laravel before 6.20.14, 7.30.4 and 8.24.0 contain a query binding exploitation.
If a request is crafted where a field that is normally a non-array value is an array, and that input is not validated or cast to its expected type before being passed to the query builder, an unexpected number of query bindings can be added to the query. In some situations, this will simply lead to no results being returned by the query builder; however, it is possible certain queries could be affected in a way that causes the query to return unexpected results.
<p>Publish Date: 2021-02-02
<p>URL: <a href=https://github.com/laravel/framework/commit/2d9b970257bca7a176be897ec18dd5f6ffc5497f>WS-2021-0013</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-x7p5-p2c9-phvg">https://github.com/advisories/GHSA-x7p5-p2c9-phvg</a></p>
<p>Release Date: 2021-02-02</p>
<p>Fix Resolution: laravel/framework - 6.20.14, 7.30.4, 8.24.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | ws medium detected in laravel framework ws medium severity vulnerability vulnerable library laravel framework the laravel framework library home page a href dependency hierarchy x laravel framework vulnerable library found in head commit a href found in base branch master vulnerability details laravel is a web application framework versions of laravel before and contain a query binding exploitation if a request is crafted where a field that is normally a non array value is an array and that input is not validated or cast to its expected type before being passed to the query builder an unexpected number of query bindings can be added to the query in some situations this will simply lead to no results being returned by the query builder however it is possible certain queries could be affected in a way that causes the query to return unexpected results publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required low user interaction required scope changed impact metrics confidentiality impact high integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution laravel framework step up your open source security game with whitesource | 0 |
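The query-binding failure mode described in the WS-2021-0013 row above can be sketched in a few lines. The snippet below is a hypothetical Python/sqlite3 analogy, not Laravel code: a field expected to be a scalar arrives as an array and changes the number of bindings handed to the driver. Here the driver rejects the mismatch outright; per the advisory text, the vulnerable Laravel versions instead accepted the unexpected bindings, which could shift later placeholders and change query results.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, role TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "admin"), (2, "user")])

def find_user(value):
    # Naive layer that forwards request input straight into the bindings,
    # analogous to passing unvalidated input to a query builder.
    bindings = value if isinstance(value, list) else [value]
    return conn.execute("SELECT * FROM users WHERE id = ?", bindings).fetchall()

print(find_user(1))  # expected scalar input: one placeholder, one binding

try:
    find_user([1, 2])  # crafted array input: one placeholder, two bindings
except sqlite3.ProgrammingError as exc:
    print("binding mismatch:", exc)

# The mitigation the advisory wording points at: validate or cast the input
# to its expected type before it reaches the query. int() raises TypeError
# on a list, so crafted array input never becomes a binding.
def find_user_safe(value):
    return conn.execute("SELECT * FROM users WHERE id = ?", [int(value)]).fetchall()
```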
4,198 | 20,601,582,715 | IssuesEvent | 2022-03-06 10:49:45 | truecharts/apps | https://api.github.com/repos/truecharts/apps | reopened | Add lancache | New App Request No-Maintainer | LanCache is an application designed to provide an OOTB caching solution for Steam, Uplay, Rockstar Games, etc pp.
https://lancache.net/
While it officially aims to provide a fast and uniform solution for lanparties, I mostly use it for being able to reinstall a game in a glimpse.
This is important as I have only very limited internet bandwidth.
While I know that this might be a very specific use case, this is absolutely not that relevant.
Also, I might take care of this myself once I got familiar with Helm Charts. | True | Add lancache - LanCache is an application designed to provide an OOTB caching solution for Steam, Uplay, Rockstar Games, etc pp.
https://lancache.net/
While it officially aims to provide a fast and uniform solution for lanparties, I mostly use it for being able to reinstall a game in a glimpse.
This is important as I have only very limited internet bandwidth.
While I know that this might be a very specific use case, this is absolutely not that relevant.
Also, I might take care of this myself once I got familiar with Helm Charts. | main | add lancache lancache is an application designed to provide an ootb caching solution for steam uplay rockstar games etc pp while it officially aims to provide a fast and uniform solution for lanparties i mostly use it for being able to reinstall a game in a glimpse this is important as i have only very limited internet bandwidth while i know that this might be a very specific use case this is absolutely not that relevant also i might take care of this myself once i got familiar with helm charts | 1 |
543 | 3,962,938,280 | IssuesEvent | 2016-05-02 18:38:05 | duckduckgo/zeroclickinfo-spice | https://api.github.com/repos/duckduckgo/zeroclickinfo-spice | closed | RubyGems: Showing identical repeated results | Bug External Maintainer Input Requested | With this search - https://rubygems.org/api/v1/search.json?query=blacklight - here's what's happening:
<img width="400" alt="ddg-rubygems-blacklight" src="https://cloud.githubusercontent.com/assets/94173/14879800/5bbfe3f0-0d66-11e6-8ccd-ad0adb9f52d4.png">
It looks like the source API is the cause: https://rubygems.org/api/v1/search.json?query=blacklight
------
IA Page: http://duck.co/ia/view/ruby_gems
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @koosha-- | True | RubyGems: Showing identical repeated results - With this search - https://rubygems.org/api/v1/search.json?query=blacklight - here's what's happening:
<img width="400" alt="ddg-rubygems-blacklight" src="https://cloud.githubusercontent.com/assets/94173/14879800/5bbfe3f0-0d66-11e6-8ccd-ad0adb9f52d4.png">
It looks like the source API is the cause: https://rubygems.org/api/v1/search.json?query=blacklight
------
IA Page: http://duck.co/ia/view/ruby_gems
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @koosha-- | main | rubygems showing identical repeated results with this search here s what s happening img width alt ddg rubygems blacklight src it looks like the source api is the cause ia page koosha | 1 |
3,903 | 2,541,938,722 | IssuesEvent | 2015-01-28 13:06:57 | UnifiedViews/Core | https://api.github.com/repos/UnifiedViews/Core | closed | Internal error when putting a SPARQL query in debug mode in an input DPU | priority: High severity: bug | 
It appears after clicking on Run Query with the following SPARQL
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
PREFIX sukl: <http://linked.opendata.cz/ontology/sukl/>
CONSTRUCT {
?df a sukl:DosageFormConcept ;
skos:inScheme sukl:DosageFormConceptScheme ;
skos:prefLabel ?title ;
skos:notation ?concateddfnotation ;
sukl:hasRouteOfAdministration ?roa .
}
WHERE {
?row <http://purl.org/dc/terms/title> ?title .
OPTIONAL {
?row sukl:hasDosageForm ?dfnotation .
}
OPTIONAL {
?row sukl:hasRouteOfAdministration ?roanotation .
}
BIND(IRI(CONCAT("http://linked.opendata.cz/resource/sukl/df/", IF(ISLITERAL(?dfnotation), REPLACE(?dfnotation, "[\\s\\+]", "-"), ""), IF(ISLITERAL(?dfnotation) && ISLITERAL(?roanotation) > 0, "-", ""), IF(ISLITERAL(?roanotation), REPLACE(?roanotation, "[\\s\\+]", "-"), ""))) AS ?df)
BIND(CONCAT(IF(ISLITERAL(?dfnotation), ?dfnotation, ""), IF(ISLITERAL(?dfnotation) && ISLITERAL(?roanotation) > 0, " ", ""), IF(ISLITERAL(?roanotation), ?roanotation, "")) AS ?concateddfnotation)
BIND(IF(ISLITERAL(?roanotation), IRI(CONCAT("http://linked.opendata.cz/resource/sukl/roa/", REPLACE(?roanotation, "[\\s\\+]", "-"))), IRI("http://linked.opendata.cz/resource/sukl/roa/UNKNOWN")) AS ?roa)
} | 1.0 | Internal error when putting a SPARQL query in debug mode in an input DPU - 
It appears after clicking on Run Query with the following SPARQL
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
PREFIX sukl: <http://linked.opendata.cz/ontology/sukl/>
CONSTRUCT {
?df a sukl:DosageFormConcept ;
skos:inScheme sukl:DosageFormConceptScheme ;
skos:prefLabel ?title ;
skos:notation ?concateddfnotation ;
sukl:hasRouteOfAdministration ?roa .
}
WHERE {
?row <http://purl.org/dc/terms/title> ?title .
OPTIONAL {
?row sukl:hasDosageForm ?dfnotation .
}
OPTIONAL {
?row sukl:hasRouteOfAdministration ?roanotation .
}
BIND(IRI(CONCAT("http://linked.opendata.cz/resource/sukl/df/", IF(ISLITERAL(?dfnotation), REPLACE(?dfnotation, "[\\s\\+]", "-"), ""), IF(ISLITERAL(?dfnotation) && ISLITERAL(?roanotation) > 0, "-", ""), IF(ISLITERAL(?roanotation), REPLACE(?roanotation, "[\\s\\+]", "-"), ""))) AS ?df)
BIND(CONCAT(IF(ISLITERAL(?dfnotation), ?dfnotation, ""), IF(ISLITERAL(?dfnotation) && ISLITERAL(?roanotation) > 0, " ", ""), IF(ISLITERAL(?roanotation), ?roanotation, "")) AS ?concateddfnotation)
BIND(IF(ISLITERAL(?roanotation), IRI(CONCAT("http://linked.opendata.cz/resource/sukl/roa/", REPLACE(?roanotation, "[\\s\\+]", "-"))), IRI("http://linked.opendata.cz/resource/sukl/roa/UNKNOWN")) AS ?roa)
} | non_main | internal error when putting a sparql query in debug mode in an input dpu it appears after clicking on run query with the following sparql prefix skos prefix sukl construct df a sukl dosageformconcept skos inscheme sukl dosageformconceptscheme skos preflabel title skos notation concateddfnotation sukl hasrouteofadministration roa where row title optional row sukl hasdosageform dfnotation optional row sukl hasrouteofadministration roanotation bind iri concat if isliteral dfnotation replace dfnotation if isliteral dfnotation isliteral roanotation if isliteral roanotation replace roanotation as df bind concat if isliteral dfnotation dfnotation if isliteral dfnotation isliteral roanotation if isliteral roanotation roanotation as concateddfnotation bind if isliteral roanotation iri concat replace roanotation iri as roa | 0 |
1,654 | 6,573,646,532 | IssuesEvent | 2017-09-11 09:35:06 | reactiveui/ReactiveUI | https://api.github.com/repos/reactiveui/ReactiveUI | closed | Cake.PinNuGetDepenencies needs upgrading to Cake 0.22 soon or our builds will break | BREAKING contributor-experience housekeeping maintainer-experience up-for-grabs | Received some advanced heads up from the Cake team
> FYI just a heads up, like with 0.16.x, 0.22.0 will have several breaking changes, so we will require addins to reference Cake.Core 0.22.0 or newer to be loaded, there's a "you're on your own, anything can happen" setting to skip verification but we recommend to start targeting the newer version as soon as possible post release. Example usage to skip verification
`Cake.exe --settings_skipverification=true`
>
> Cake.Core 0.22.0 pre-release is already on MyGet so you can start to test with that, for most it's probably just a recompile, for modules and similar it can i.e. require implementing new members.
https://www.myget.org/gallery/cake This will be an epic release as we've unified around one scripting engine across all platforms, started work on removing need for nuget.exe for addin/tool directives and support dependencies for those (new nuget handling will be opt-in and described in blog post how to do so). This means bootstrapping will be easier and more consistent across platforms and hopefully targeting .NET Standard 1.6 will be enough to work across all platforms (testing that now), obviously multi targeting will still be supported for platform specific scenarios, but in many cases one assembly will be enough.This also means that we will get up to speed with the latest C# features without the need for the _experimental_ flag. There's also a several bug fixes and new features so all in all this release will rock :wink: | True | Cake.PinNuGetDepenencies needs upgrading to Cake 0.22 soon or our builds will break - Received some advanced heads up from the Cake team
> FYI just a heads up, like with 0.16.x, 0.22.0 will have several breaking changes, so we will require addins to reference Cake.Core 0.22.0 or newer to be loaded, there's a "you're on your own, anything can happen" setting to skip verification but we recommend to start targeting the newer version as soon as possible post release. Example usage to skip verification
`Cake.exe --settings_skipverification=true`
>
> Cake.Core 0.22.0 pre-release is already on MyGet so you can start to test with that, for most it's probably just a recompile, for modules and similar it can i.e. require implementing new members.
https://www.myget.org/gallery/cake This will be an epic release as we've unified around one scripting engine across all platforms, started work on removing need for nuget.exe for addin/tool directives and support dependencies for those (new nuget handling will be opt-in and described in blog post how to do so). This means bootstrapping will be easier and more consistent across platforms and hopefully targeting .NET Standard 1.6 will be enough to work across all platforms (testing that now), obviously multi targeting will still be supported for platform specific scenarios, but in many cases one assembly will be enough.This also means that we will get up to speed with the latest C# features without the need for the _experimental_ flag. There's also a several bug fixes and new features so all in all this release will rock :wink: | main | cake pinnugetdepenencies needs upgrading to cake soon or our builds will break received some advanced heads up from the cake team fyi just a heads up like with x will have several breaking changes so we will require addins to reference cake core or newer to be loaded there s a your one your own anything can happen setting to skip verification but we recommend to start targeting the newer version as soon as possible post release example usage to skip verification cake exe settings skipverification true cake core pre release is already on myget so you can start to test with that for most it s probably just a recompile for modules and similar it can i e require implementing new members this will be an epic release as we ve unified around one scripting engine across all platforms started work on removing need for nuget exe for addin tool directives and support dependencies for those new nuget handling will be opt in and described in blog post how to do so this means bootstrapping will be easier and more consistent across platforms and hopefully targeting net standard will be enough to work across all platforms testing that now obviously multi targeting will still be supported for platform specific scenarios but in many cases one assembly will be enough this also means that we will get up to speed with the latest c features without the need for the experimental flag there s also a several bug fixes and new features so all in all this release will rock wink | 1 |
7,700 | 18,894,930,616 | IssuesEvent | 2021-11-15 16:50:19 | MicrosoftDocs/architecture-center | https://api.github.com/repos/MicrosoftDocs/architecture-center | closed | Terraform code example for baseline AKS architecture | assigned-to-author triaged product-question architecture-center/svc reference-architecture/subsvc Pri3 | For https://docs.microsoft.com/en-us/azure/architecture/reference-architectures/containers/aks/secure-baseline-aks#network-topology, is there a Terraform code reference for the implementation that would create all needed resources? What’s the list of subscriptions required for this implementation?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 46723a24-75ac-f6e2-c172-d3827ca0b7d6
* Version Independent ID: c975744e-e594-5667-14b3-c38824bbe24c
* Content: [Baseline architecture for an AKS cluster - Azure Architecture Center](https://docs.microsoft.com/en-us/azure/architecture/reference-architectures/containers/aks/secure-baseline-aks#network-topology)
* Content Source: [docs/reference-architectures/containers/aks/secure-baseline-aks.yml](https://github.com/microsoftdocs/architecture-center/blob/master/docs/reference-architectures/containers/aks/secure-baseline-aks.yml)
* Service: **architecture-center**
* Sub-service: **reference-architecture**
* GitHub Login: @PageWriter-MSFT
* Microsoft Alias: **pnp** | 2.0 | Terraform code example for baseline AKS architecture - For https://docs.microsoft.com/en-us/azure/architecture/reference-architectures/containers/aks/secure-baseline-aks#network-topology, is there a Terraform code reference for the implementation that would create all needed resources? What’s the list of subscriptions required for this implementation?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 46723a24-75ac-f6e2-c172-d3827ca0b7d6
* Version Independent ID: c975744e-e594-5667-14b3-c38824bbe24c
* Content: [Baseline architecture for an AKS cluster - Azure Architecture Center](https://docs.microsoft.com/en-us/azure/architecture/reference-architectures/containers/aks/secure-baseline-aks#network-topology)
* Content Source: [docs/reference-architectures/containers/aks/secure-baseline-aks.yml](https://github.com/microsoftdocs/architecture-center/blob/master/docs/reference-architectures/containers/aks/secure-baseline-aks.yml)
* Service: **architecture-center**
* Sub-service: **reference-architecture**
* GitHub Login: @PageWriter-MSFT
* Microsoft Alias: **pnp** | non_main | terraform code example for baseline aks architecture for is there a terraform code reference for the implementation that would create all needed resources what’s the list of subscriptions required for this implementation document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service architecture center sub service reference architecture github login pagewriter msft microsoft alias pnp | 0 |
1,259 | 5,332,721,642 | IssuesEvent | 2017-02-15 22:51:49 | ansible/ansible | https://api.github.com/repos/ansible/ansible | closed | proxmox_kvm : delete args | affects_2.2 bug_report cloud module waiting_on_maintainer | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
proxmox_kvm
##### ANSIBLE VERSION
```
2.2.1.0
```
plus
https://github.com/ansible/ansible/blob/devel/lib/ansible/modules/cloud/misc/proxmox_kvm.py
##### SUMMARY
Deleting ```args``` does not work.
##### STEPS TO REPRODUCE
```
tasks:
- name: Remove
proxmox_kvm:
api_user: "{{ api_user }}"
api_password: "{{ api_password }}"
api_host: "{{ api_host }}"
node: "{{ node }}"
name: "{{ vmname }}"
delete: args
```
##### EXPECTED RESULTS
```args``` gets removed from vm config.
##### ACTUAL RESULTS
Playbook exits successfully, but the args are still in the vm config. | True | proxmox_kvm : delete args - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
proxmox_kvm
##### ANSIBLE VERSION
```
2.2.1.0
```
plus
https://github.com/ansible/ansible/blob/devel/lib/ansible/modules/cloud/misc/proxmox_kvm.py
##### SUMMARY
Deleting ```args``` does not work.
##### STEPS TO REPRODUCE
```
tasks:
- name: Remove
proxmox_kvm:
api_user: "{{ api_user }}"
api_password: "{{ api_password }}"
api_host: "{{ api_host }}"
node: "{{ node }}"
name: "{{ vmname }}"
delete: args
```
##### EXPECTED RESULTS
```args``` gets removed from vm config.
##### ACTUAL RESULTS
Playbook exits successfully, but the args are still in the vm config. | main | proxmox kvm delete args issue type bug report component name proxmox kvm ansible version plus summary deleting args does not work steps to reproduce tasks name remove proxmox kvm api user api user api password api password api host api host node node name vmname delete args expected results args gets removed from vm config actual results playbook exits successfully but the args are still in the vm config | 1 |
12,008 | 9,551,365,616 | IssuesEvent | 2019-05-02 14:18:40 | terraform-providers/terraform-provider-azurerm | https://api.github.com/repos/terraform-providers/terraform-provider-azurerm | closed | Cannot create storage account that uses network rules | service/storage | Hello,
i would like to create storage account that uses network rules, but i get follow error
`data.azurerm_storage_account.StorageAccount: : invalid or unknown key: network_rules`
This my config:
```
data "azurerm_storage_account" "StorageAccount" {
name = "${var.infrastructure_storage_account_name}"
resource_group_name = "${var.resource_group_name}"
network_rules {
ip_rules = ["foo/bar"]
virtual_network_subnet_ids = ["${azurerm_subnet.abc.id}"]
bypass = "None"
}
}
```
I am using terraform azurerm 1.24.0. Is this a bug or am I doing something wrong? | True | Cannot create storage account that uses network rules - Hello,
I would like to create a storage account that uses network rules, but I get the following error
`data.azurerm_storage_account.StorageAccount: : invalid or unknown key: network_rules`
This is my config:
```
data "azurerm_storage_account" "StorageAccount" {
name = "${var.infrastructure_storage_account_name}"
resource_group_name = "${var.resource_group_name}"
network_rules {
ip_rules = ["foo/bar"]
virtual_network_subnet_ids = ["${azurerm_subnet.abc.id}"]
bypass = "None"
}
}
```
I am using terraform azurerm 1.24.0. Is this a bug or am I doing something wrong? | non_main | cannot create storage account that uses network rules hello i would like to create a storage account that uses network rules but i get the following error data azurerm storage account storageaccount invalid or unknown key network rules this is my config data azurerm storage account storageaccount name var infrastructure storage account name resource group name var resource group name network rules ip rules virtual network subnet ids bypass none i am using terraform azurerm is this a bug or am i doing something wrong | 0 |
2,234 | 7,875,805,275 | IssuesEvent | 2018-06-25 21:43:35 | react-navigation/react-navigation | https://api.github.com/repos/react-navigation/react-navigation | closed | Error: There is no route defined... after switching tabs inside drawer scenes | needs response from maintainer | ### Current Behavior
https://github.com/Mtnt/drawerSample - repo with a sample
`Error: There is no route defined for key 'Stack0'. Must be one of: 'scene2','scene3'`
### Expected Behavior
The `Tab0.Stack0.scene0` is opened.
### How to reproduce
Navigation stack (default is made by `initialRouteName` config property):
```
Drawer
Tab0(default)
Stack0(default)
scene0(default)
scene1
Stack1
scene2(default)
scene3
Tab1
Stack2(default)
scene4
```
Test path (made by pressing drawer menu and tabs):
```
Tab0.Stack0.scene0 ->
Tab0.Stack1.scene2 ->
Tab1.Stack2.scene4 ->
Tab0(.Stack0.scene0) - in parentheses because the error is happened here
```
### Your Environment
| software | version
| ---------------- | -------
| react-navigation | 2.0.4
| react-native | 0.55.4
| node | 10.0.0
| npm | 6.1.0
# ALSO
### Current Behavior
`Tab0.Stack1.scene2` is opened
### Expected Behavior
The `Tab0.Stack0.scene0` is opened (cos it is a default tab).
### How to reproduce
Navigation stack is the same.
Test path:
```
Tab0.Stack0.scene0 ->
Tab0.Stack1.scene2 ->
Tab1.Stack2.scene4 ->
navigation.navigate("Tab0") - press big red button
``` | True | Error: There is no route defined... after switching tabs inside drawer scenes - ### Current Behavior
https://github.com/Mtnt/drawerSample - repo with a sample
`Error: There is no route defined for key 'Stack0'. Must be one of: 'scene2','scene3'`
### Expected Behavior
The `Tab0.Stack0.scene0` is opened.
### How to reproduce
Navigation stack (default is made by `initialRouteName` config property):
```
Drawer
Tab0(default)
Stack0(default)
scene0(default)
scene1
Stack1
scene2(default)
scene3
Tab1
Stack2(default)
scene4
```
Test path (made by pressing drawer menu and tabs):
```
Tab0.Stack0.scene0 ->
Tab0.Stack1.scene2 ->
Tab1.Stack2.scene4 ->
Tab0(.Stack0.scene0) - in parentheses because the error is happened here
```
### Your Environment
| software | version
| ---------------- | -------
| react-navigation | 2.0.4
| react-native | 0.55.4
| node | 10.0.0
| npm | 6.1.0
# ALSO
### Current Behavior
`Tab0.Stack1.scene2` is opened
### Expected Behavior
The `Tab0.Stack0.scene0` is opened (cos it is a default tab).
### How to reproduce
Navigation stack is the same.
Test path:
```
Tab0.Stack0.scene0 ->
Tab0.Stack1.scene2 ->
Tab1.Stack2.scene4 ->
navigation.navigate("Tab0") - press big red button
``` | main | error there is no route defined after switching tabs inside drawer scenes current behavior repo with a sample error there is no route defined for key must be one of expected behavior the is opened how to reproduce navigation stack default is made by initialroutename config property drawer default default default default default test path made by pressing drawer menu and tabs in parentheses because the error is happened here your environment software version react navigation react native node npm also current behavior is opened expected behavior the is opened cos it is a default tab how to reproduce navigation stack is the same test path navigation navigate press big red button | 1 |
312,532 | 9,548,839,083 | IssuesEvent | 2019-05-02 07:14:22 | CSBiology/FSharp.Stats | https://api.github.com/repos/CSBiology/FSharp.Stats | closed | [Feature Request] add Theil-Sen estimator | priority-low | ## add Theil-Sen estimator
In ordinary least squares regression, fitting a straight regression line is heavily influenced by outliers. The Theil-Sen estimator is a non-parametric method to cope with data corrupted by outliers.
It takes the median of all slopes (and intercepts) between every pair of points in the data set as an estimator for the regression line (f(x)=mx+b).
Sen extended Theil's method by using only pairs of points having distinct x coordinates.
It should be added to: `Fitting.LinearRegression.RobustRegression`
- [x] add Theil's incomplete method
- [x] add Theil-Sen estimator
### Related information
[Theil's incomplete method](http://195.134.76.37/applets/AppletTheil/Appl_Theil2.html)
| 1.0 | [Feature Request] add Theil-Sen estimator - ## add Theil-Sen estimator
In ordinary least squares regression, fitting a straight regression line is heavily influenced by outliers. The Theil-Sen estimator is a non-parametric method to cope with data corrupted by outliers.
It takes the median of all slopes (and intercepts) between every pair of points in the data set as an estimator for the regression line (f(x)=mx+b).
Sen extended Theil's method by using only pairs of points having distinct x coordinates.
It should be added to: `Fitting.LinearRegression.RobustRegression`
- [x] add Theil's incomplete method
- [x] add Theil-Sen estimator
### Related information
[Theil's incomplete method](http://195.134.76.37/applets/AppletTheil/Appl_Theil2.html)
| non_main | add theil sen estimator add theil sen estimator in ordinary least squares regression fitting a straight regression line is heavily influenced by outliers the theil sen estimator is a non parametric method to cope with data corrupted by outliers it takes the median of all slopes and intercepts between every pair of points in the data set as an estimator for the regression line f x mx b sen extended theil s method by using only pairs of points having distinct x coordinates it should be added to fitting linearregression robustregression add theil s incomplete method add theil sen estimator related information | 0 |
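A minimal Python sketch of the estimator described in the row above (the request targets F#/FSharp.Stats, so this only illustrates the algorithm and is not the requested implementation):

```python
import itertools
import statistics

def theil_sen(points):
    """Theil-Sen fit for y = m*x + b: m is the median of all pairwise slopes."""
    slopes = [
        (y2 - y1) / (x2 - x1)
        for (x1, y1), (x2, y2) in itertools.combinations(points, 2)
        if x1 != x2  # Sen's extension: only pairs with distinct x coordinates
    ]
    m = statistics.median(slopes)
    # One common intercept choice: the median of the residuals y - m*x.
    b = statistics.median(y - m * x for x, y in points)
    return m, b

# The outlier (5, 99) barely moves the fit; ordinary least squares
# would be dragged heavily toward it.
points = [(0, 1), (1, 3), (2, 5), (3, 7), (4, 9), (5, 99)]
print(theil_sen(points))  # -> (2.0, 1.0)
```

Because the estimate is a median over all pairwise slopes, roughly 29% of the points can be corrupted before the fit breaks down, which is the robustness to outliers the issue is after.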
275,327 | 8,575,576,157 | IssuesEvent | 2018-11-12 17:39:57 | aowen87/TicketTester | https://api.github.com/repos/aowen87/TicketTester | closed | Gradient expression crashes when scalars are non-float/double | Bug Likelihood: 3 - Occasional Priority: Normal Severity: 4 - Crash / Wrong Results | The gradient expression expects float or double precision scalars.
If passed unsigned char data, the engine will crash.
This came up on the mailing list when Paul Melis loaded 3D .vti data and attempted to do a ray-traced Volume plot.
The expression should either specify it only handles floats/doubles or it should be modified to handle other types.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 1466
Status: Resolved
Project: VisIt
Tracker: Bug
Priority: Normal
Subject: Gradient expression crashes when scalars are non-float/double
Assigned to: Kathleen Biagas
Category:
Target version: 2.6.3
Author: Kathleen Biagas
Start: 05/21/2013
Due date:
% Done: 0
Estimated time: 2.0
Created: 05/21/2013 01:18 pm
Updated: 06/13/2013 06:10 pm
Likelihood: 3 - Occasional
Severity: 4 - Crash / Wrong Results
Found in version: 2.6.2
Impact:
Expected Use:
OS: All
Support Group: Any
Description:
The gradient expression expects float or double precision scalars.
If passed unsigned char data, the engine will crash.
This came up on the mailing list when Paul Melis loaded 3D .vti data and attempted to do a ray-traced Volume plot.
The expression should either specify it only handles floats/doubles or it should be modified to handle other types.
Comments:
Added data from Paul.
I modified the Rectilinear Gradient calculation code to convert non-floating point scalar data. SVN Revision 21139 (2.6RC), 21141 (trunk). M /src/avt/Expressions/General/avtGradientExpression.C
| 1.0 | Gradient expression crashes when scalars are non-float/double - The gradient expression expects float or double precision scalars.
If passed unsigned char data, the engine will crash.
This came up on the mailing list when Paul Melis loaded 3D .vti data and attempted to do a ray-traced Volume plot.
The expression should either specify it only handles floats/doubles or it should be modified to handle other types.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 1466
Status: Resolved
Project: VisIt
Tracker: Bug
Priority: Normal
Subject: Gradient expression crashes when scalars are non-float/double
Assigned to: Kathleen Biagas
Category:
Target version: 2.6.3
Author: Kathleen Biagas
Start: 05/21/2013
Due date:
% Done: 0
Estimated time: 2.0
Created: 05/21/2013 01:18 pm
Updated: 06/13/2013 06:10 pm
Likelihood: 3 - Occasional
Severity: 4 - Crash / Wrong Results
Found in version: 2.6.2
Impact:
Expected Use:
OS: All
Support Group: Any
Description:
The gradient expression expects float or double precision scalars.
If passed unsigned char data, the engine will crash.
This came up on the mailing list when Paul Melis loaded 3D .vti data and attempted to do a ray-traced Volume plot.
The expression should either specify it only handles floats/doubles or it should be modified to handle other types.
Comments:
Added data from Paul.
I modified the Rectilinear Gradient calculation code to convert non-floating point scalar data. SVN Revision 21139 (2.6RC), 21141 (trunk). M /src/avt/Expressions/General/avtGradientExpression.C
| non_main | gradient expression crashes when scalars are non float double the gradient expression expects float or double precision scalars if passed unsigned char data the engine will crash this came up on the mailing list when paul melis loaded vti data and attempted to do a ray traced volume plot the expression should either specify it only handles floats doubles or it should be modified to handle other types redmine migration this ticket was migrated from redmine as such not all information was able to be captured in the transition below is a complete record of the original redmine ticket ticket number status resolved project visit tracker bug priority normal subject gradient expression crashes when scalars are non float double assigned to kathleen biagas category target version author kathleen biagas start due date done estimated time created pm updated pm likelihood occasional severity crash wrong results found in version impact expected use os all support group any description the gradient expression expects float or double precision scalars if passed unsigned char data the engine will crash this came up on the mailing list when paul melis loaded vti data and attempted to do a ray traced volume plot the expression should either specify it only handles floats doubles or it should be modified to handle other types comments added data from paul i modified the rectlinear gradient calculation code to convert non floating point scalar data svn revision trunk m src avt expressions general avtgradientexpression c | 0 |
161,952 | 13,880,389,221 | IssuesEvent | 2020-10-17 18:33:23 | direct-phonology/direct | https://api.github.com/repos/direct-phonology/direct | opened | add auto-generated code documentation | documentation | we could look at Sphinx for this, but something that makes Markdown would be ideal. hosting on GitHub pages would also be preferred. | 1.0 | add auto-generated code documentation - we could look at Sphinx for this, but something that makes Markdown would be ideal. hosting on GitHub pages would also be preferred. | non_main | add auto generated code documentation we could look at sphinx for this but something that makes markdown would be ideal hosting on github pages would also be preferred | 0 |
211 | 2,862,379,385 | IssuesEvent | 2015-06-04 03:51:33 | daemonraco/toobasic | https://api.github.com/repos/daemonraco/toobasic | opened | PostgreSQL Database Structure Maintainer | Database Structure Maintainer enhancement next version | ## What to do
Create a database structure adapter for PostgreSQL. | True | PostgreSQL Database Structure Maintainer - ## What to do
Create a database structure adapter for PostgreSQL. | main | postgresql database structure maintainer what to do create a database structure adapter for postgresql | 1 |
1,013 | 4,793,975,471 | IssuesEvent | 2016-10-31 19:42:33 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | synchronize fails with become module | affects_2.1 bug_report waiting_on_maintainer | #### ISSUE TYPE
Bug Report
#### COMPONENT NAME
synchronize
##### ANSIBLE VERSION
2.1.0.0
#### CONFIGURATION
Default
#### OS / ENVIRONMENT
RHEL 6.7
#### SUMMARY
I am trying to use the synchronize module to copy files from the Ansible node to a remote node. I want these files to exist as UserB on the remote nodes, but I do not have access to UserB directly. Instead UserA has sudo privileges to switch to UserB. So I'm logging in as UserA.
My environment file says:
ansible_ssh_user=UserA
ansible_ssh_pass=<PassUserA>
ansible_become_method=sudo
ansible_become_user=UserB
ansible_become_pass=<PassUserA>
My task is :
```
- name: Copy and unarchive webapps node.
synchronize: src=/home/ansible/templates/app/Sprint6/webapps dest=/opt/msdp/ca/app checksum=yes
become: yes
```
But when I run the playbook, I get an ERROR:
On the remote node, only UserB can write under : /opt/msdp/ca/app
#### STEPS TO REPRODUCE
#### EXPECTED RESULTS
I should have my files copied to remote user as UserB
#### ACTUAL RESULTS
fatal: [5.232.57.247]: FAILED! => {"changed": false, "cmd": "/usr/bin/rsync --delay-updates -F --compress --checksum --archive --rsh 'ssh -S none -o StrictHostKeyChecking=no -o ControlMaster=auto -o ControlPersist=60s' --rsync-path=\"sudo rsync\" --out-format='<<CHANGED>>%i %n%L' \"/home/ansible/templates/app/Sprint6/webapps\" \"UserA@5.232.57.247:/opt/msdp/ca/app\"", "failed": true, "msg": "sudo: sorry, you must have a tty to run sudo\nrsync: connection unexpectedly closed (0 bytes received so far) [sender]\nrsync error: error in rsync protocol data stream (code 12) at io.c(600) [sender=3.0.6]\n", "rc": 12}
| True | synchronize fails with become module - #### ISSUE TYPE
Bug Report
#### COMPONENT NAME
synchronize
##### ANSIBLE VERSION
2.1.0.0
#### CONFIGURATION
Default
#### OS / ENVIRONMENT
RHEL 6.7
#### SUMMARY
I am trying to use the synchronize module to copy files from the Ansible node to a remote node. I want these files to exist as UserB on the remote nodes, but I do not have access to UserB directly. Instead UserA has sudo privileges to switch to UserB. So I'm logging in as UserA.
My environment file says:
ansible_ssh_user=UserA
ansible_ssh_pass=<PassUserA>
ansible_become_method=sudo
ansible_become_user=UserB
ansible_become_pass=<PassUserA>
My task is :
```
- name: Copy and unarchive webapps node.
synchronize: src=/home/ansible/templates/app/Sprint6/webapps dest=/opt/msdp/ca/app checksum=yes
become: yes
```
But when I run the playbook, I get an ERROR:
On the remote node, only UserB can write under : /opt/msdp/ca/app
#### STEPS TO REPRODUCE
#### EXPECTED RESULTS
I should have my files copied to remote user as UserB
#### ACTUAL RESULTS
fatal: [5.232.57.247]: FAILED! => {"changed": false, "cmd": "/usr/bin/rsync --delay-updates -F --compress --checksum --archive --rsh 'ssh -S none -o StrictHostKeyChecking=no -o ControlMaster=auto -o ControlPersist=60s' --rsync-path=\"sudo rsync\" --out-format='<<CHANGED>>%i %n%L' \"/home/ansible/templates/app/Sprint6/webapps\" \"UserA@5.232.57.247:/opt/msdp/ca/app\"", "failed": true, "msg": "sudo: sorry, you must have a tty to run sudo\nrsync: connection unexpectedly closed (0 bytes received so far) [sender]\nrsync error: error in rsync protocol data stream (code 12) at io.c(600) [sender=3.0.6]\n", "rc": 12}
| main | synchronize fails with become module issue type bug report component name synchronize ansible version configuration default os environment rhel summary i am trying to use the synchronize module to copy file from ansible node to remote node i want these files to exist as userb on the remote nodes but i do not have access to userb directly instead usera has sudo privileges to switch to userb so im logging in as usera my environment file says ansible ssh user usera ansible ssh pass ansible become method sudo ansible become user userb ansible become pass my task is name copy and unarchive webapps node synchronize src home ansible templates app webapps dest opt msdp ca app checksum yes become yes but when i run the playbook i get an error on the remote node only userb can write under opt msdp ca app steps to reproduce expected results i should have my files copied to remote user as userb actual results fatal failed changed false cmd usr bin rsync delay updates f compress checksum archive rsh ssh s none o stricthostkeychecking no o controlmaster auto o controlpersist rsync path sudo rsync out format i n l home ansible templates app webapps usera opt msdp ca app failed true msg sudo sorry you must have a tty to run sudo nrsync connection unexpectedly closed bytes received so far nrsync error error in rsync protocol data stream code at io c n rc | 1 |
26,208 | 7,802,482,011 | IssuesEvent | 2018-06-10 13:07:01 | samm-git/gimp-osx-package | https://api.github.com/repos/samm-git/gimp-osx-package | closed | Fix unattended build of the gimp dependencies | important jhbuild | Currently 2 jhbuild dependencies needs local patching to build properly:
- [x] `pygobject` - depends on gio/gdesktopappinfo.h, need to patch local code instead of installing this header, to avoid other apps to detect it.
- [x] `x265` - ~cmake files are located out of the root directory~ | 1.0 | Fix unattended build of the gimp dependencies - Currently 2 jhbuild dependencies needs local patching to build properly:
- [x] `pygobject` - depends on gio/gdesktopappinfo.h, need to patch local code instead of installing this header, to avoid other apps to detect it.
- [x] `x265` - ~cmake files are located out of the root directory~ | non_main | fix unattended build of the gimp dependencies currently jhbuild dependencies needs local patching to build properly pygobject depends on gio gdesktopappinfo h need to patch local code instead of installing this header to avoid other apps to detect it cmake files are located out of the root directory | 0 |
52,870 | 7,787,695,065 | IssuesEvent | 2018-06-06 23:52:38 | kubic-project/velum | https://api.github.com/repos/kubic-project/velum | closed | Improve the README | documentation | Improve the README with more information on what this repo is actually about. | 1.0 | Improve the README - Improve the README with more information on what this repo is actually about. | non_main | improve the readme improve the readme with more information on what this repo is actually about | 0 |
549 | 3,984,840,715 | IssuesEvent | 2016-05-07 13:21:41 | cucumber/aruba | https://api.github.com/repos/cucumber/aruba | closed | Add/Update project status in README | needs-feedback/by-maintainer type-of-issue/todo | ## Summary
New contributors need a sense of "direction" to know what to expect when working on the current Aruba codebase.
## Expected Behavior
Normally, when I want to contribute to a project I "expect":
1. Master to be pretty much stable and usable (or clearly explained why if not)
2. All tests to pass locally (or clearly explained why if not)
3. High coverage (or clearly explained why if not)
4. Stable major releases (or clearly explained why that hasn't happened)
5. ETAs for milestones, releases and major rework (just to get a sense of how long "wait" if needed)
6. A sense of how "busy" things are at any given time (to decide if I should help with major work or if too many people will get in each other's way)
7. Low friction to contributing (or good explanations why to follow conventions to the letter)
8. A general sense of the project (to know whether to report stuff that isn't working or just "wait" for rework to finish first and stable releases to be published)
9. Whether maintainers are invited to apply or not
10. A general "roadmap" to align any contributions I'd make (or just drop them without starting - to avoid wasted time on feature that may no longer make sense)
11. etc...
## Current Behavior
The biggest pain points I've had are:
1. Waiting ages for "dependent commits" to fix a pressing issue for me (RVM support)
2. Lots of "work in progress" without guidance on how to help push things forward
3. Lots of constant and new deprecations before even reaching 1.x
4. No 1.x for over a year.
5. Deprecation messages were more a bother than useful (it would be easier to just have "recommended" documentation on how to do things instead).
6. Aruba still uses its own deprecations in multiple places.
7. Not enough coverage and constantly changing internal+external API's
8. Reading History.md is not a good way to learn what changed, since I'm already confused what is deprecated and what isn't. (And when something is, I have a hard time to understand what to do instead and especially what's "available").
9. I'd actually prefer crashes rather than deprecations.
10. Ruby 1.8 not dropped and purged from the codebase (wasted time I'll never get back).
11. With all the deprecations, moving around the codebase is a bit discouraging.
12. Hard to work out how to effectively debug and "get stuff done" with Aruba - I can never remember (or find the docs) on how to enable announcing, how to set in-process Aruba, how to set/unset variables, how to capture stderr/stdout for a given process, whether to use "run" or "run_simple", how to set things up with RSpec (config, hooks), how to actually track the execution and what's going on "under the hood" (especially when most errors are "silent" or "output wasn't matched").
13. There's "so much" API for managing processes, I can't seem to figure out what the flow is (to debug an issue - or simply to just understand what I'm doing wrong).
## Possible Solution
Again, general "info" about the project would help. Status, plans, ongoing work, etc. If I "step in" only to have to wait about a year for "other parts of the codebase to get sorted out first" - and then I find that over the year there's still no 1.x ... and while being spammed with deprecations that don't really tell me enough to know what's available (or at least hint *why* something was deprecated so I could intuitively "guess" what could work instead ... all this is a bit discouraging, even if there are valid/sound/understandable reasons behind everything going on.
| True | Add/Update project status in README - ## Summary
New contributors need a sense of "direction" to know what to expect when working on the current Aruba codebase.
## Expected Behavior
Normally, when I want to contribute to a project I "expect":
1. Master to be pretty much stable and usable (or clearly explained why if not)
2. All tests to pass locally (or clearly explained why if not)
3. High coverage (or clearly explained why if not)
4. Stable major releases (or clearly explained why that hasn't happened)
5. ETAs for milestones, releases and major rework (just to get a sense of how long "wait" if needed)
6. A sense of how "busy" things are at any given time (to decide if I should help with major work or if too many people will get in each other's way)
7. Low friction to contributing (or good explanations why to follow conventions to the letter)
8. A general sense of the project (to know whether to report stuff that isn't working or just "wait" for rework to finish first and stable releases to be published)
9. Whether maintainers are invited to apply or not
10. A general "roadmap" to align any contributions I'd make (or just drop them without starting - to avoid wasted time on feature that may no longer make sense)
11. etc...
## Current Behavior
The biggest pain points I've had are:
1. Waiting ages for "dependent commits" to fix a pressing issue for me (RVM support)
2. Lots of "work in progress" without guidance on how to help push things forward
3. Lots of constant and new deprecations before even reaching 1.x
4. No 1.x for over a year.
5. Deprecation messages were more a bother than useful (it would be easier to just have "recommended" documentation on how to do things instead).
6. Aruba still uses its own deprecations in multiple places.
7. Not enough coverage and constantly changing internal+external API's
8. Reading History.md is not a good way to learn what changed, since I'm already confused what is deprecated and what isn't. (And when something is, I have a hard time to understand what to do instead and especially what's "available").
9. I'd actually prefer crashes rather than deprecations.
10. Ruby 1.8 not dropped and purged from the codebase (wasted time I'll never get back).
11. With all the deprecations, moving around the codebase is a bit discouraging.
12. Hard to work out how to effectively debug and "get stuff done" with Aruba - I can never remember (or find the docs) on how to enable announcing, how to set in-process Aruba, how to set/unset variables, how to capture stderr/stdout for a given process, whether to use "run" or "run_simple", how to set things up with RSpec (config, hooks), how to actually track the execution and what's going on "under the hood" (especially when most errors are "silent" or "output wasn't matched").
13. There's "so much" API for managing processes, I can't seem to figure out what the flow is (to debug an issue - or simply to just understand what I'm doing wrong).
## Possible Solution
Again, general "info" about the project would help. Status, plans, ongoing work, etc. If I "step in" only to have to wait about a year for "other parts of the codebase to get sorted out first" - and then I find that over the year there's still no 1.x ... and while being spammed with deprecations that don't really tell me enough to know what's available (or at least hint *why* something was deprecated so I could intuitively "guess" what could work instead ... all this is a bit discouraging, even if there are valid/sound/understandable reasons behind everything going on.
| main | add update project status in readme summary new contributors need a sense of direction to know what to expect when working on the current aruba codebase expected behavior normally when i want to contribute to a project i expect master to be pretty much stable and usable or clearly explained why if not all tests to pass locally or clearly explained why if not high coverage or clearly explained why if not stable major releases or clearly explained why that hasn t happened etas for milestones releases and major rework just to get a sense of how long wait if needed a sense of how busy things are at any given time to decide if i should help with major work or if too many people will get in each other s way low friction to contributing or good explanations why to follow conventions to the letter a general sense of the project to know whether to report stuff that isn t working or just wait for rework to finish first and stable releases to be published whether maintainers are invited to apply or not a general roadmap to align any contributions i d make or just drop them without starting to avoid wasted time on feature that may no longer make sense etc current behavior the biggest pain points i ve had are waiting ages for dependent commits to fix a pressing issue for me rvm support lots of work in progress without guidance on how to help push things forward lots of constant and new deprecations before even reaching x no x for over a year deprecations messages were more a bother than useful it would be easier to just have recommended documentation on how to do things instead aruba still uses it s own deprecations in multiple place not enough coverage and constantly changing internal external api s reading history md is not a good way to learn what changed since i m already confused what is deprecated and what isn t and when something is i have a hard time to understand what to do instead and especially what s available i d actually prefer crashes rather than deprecations ruby not dropped and purged from the codebase wasted time i ll never get back with all the deprecations moving around the codebase is a bit discouraging hard to work out how to effectively debug and get stuff done with aruba i can never remember or find the docs on how to enable announcing how to set in process aruba how to set unset variables how to capture stderr stdout for a given process whether to use run or run simple how to set things up with rspec config hooks how to actually track the execution and what s going on under the hood especially when most errors are silent or output wasn t matched there s so much api for managing processes i can t seem to figure out what the flow is to debug an issue or simply to just understand what i m doing wrong possible solution again general info about the project would help status plans ongoing work etc if i step in only to have to wait about a year for other parts of the codebase to get sorted out first and then i find that over the year there s still no x and while being spammed with deprecations that don t really tell me enough to know what s available or at least hint why something was deprecated so i could intuitively guess what could work instead all this is a bit discouraging even if there are valid sound understandable reasons behind everything going on | 1 |
412,969 | 12,058,898,698 | IssuesEvent | 2020-04-15 18:17:16 | AugurProject/augur | https://api.github.com/repos/AugurProject/augur | opened | Transfer Modal Copy upgrade | Needed for V2 launch Priority: High | The new Transfer button which brings up the Modal still says Withdraw funds in it
Need to change Withdraw to Transfer | 1.0 | non_main | 0
390,295 | 26,857,123,080 | IssuesEvent | 2023-02-03 15:25:01 | splunk/observability-workshop | https://api.github.com/repos/splunk/observability-workshop | closed | Update On-Call workshop to use OTel Collector | documentation | Revive the On-Call workshop and convert/move over to use OTel Collector over SmartAgent. | 1.0 | non_main | 0
175,596 | 21,313,860,622 | IssuesEvent | 2022-04-16 01:11:12 | Nivaskumark/kernel_v4.1.15 | https://api.github.com/repos/Nivaskumark/kernel_v4.1.15 | opened | CVE-2017-15102 (Medium) detected in linuxlinux-4.6 | security vulnerability | ## CVE-2017-15102 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.6</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in base branch: <b>master</b></p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/usb/misc/legousbtower.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The tower_probe function in drivers/usb/misc/legousbtower.c in the Linux kernel before 4.8.1 allows local users (who are physically proximate for inserting a crafted USB device) to gain privileges by leveraging a write-what-where condition that occurs after a race condition and a NULL pointer dereference.
<p>Publish Date: 2017-11-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-15102>CVE-2017-15102</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Physical
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2017-15102">https://nvd.nist.gov/vuln/detail/CVE-2017-15102</a></p>
<p>Release Date: 2017-11-15</p>
<p>Fix Resolution: 4.8.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | non_main | 0
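The 6.3 base score in the report above follows mechanically from the CVSS v3.0 formula applied to the listed metrics. A minimal sketch that reproduces it, using the metric weights from the FIRST CVSS v3.0 specification (a hand-rolled check for illustration, not part of the report):

```python
import math

# Weights for the vector in the report:
# AV:P / AC:H / PR:L (scope unchanged) / UI:N / S:U / C:H / I:H / A:H
AV, AC, PR, UI = 0.20, 0.44, 0.62, 0.85
C = I = A = 0.56

isc_base = 1 - (1 - C) * (1 - I) * (1 - A)
impact = 6.42 * isc_base                  # scope unchanged
exploitability = 8.22 * AV * AC * PR * UI

base = min(impact + exploitability, 10.0)
base = math.ceil(base * 10) / 10          # CVSS rounds up to one decimal
print(base)  # -> 6.3, matching the report
```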
4,499 | 23,416,395,945 | IssuesEvent | 2022-08-13 02:48:41 | aws/aws-sam-cli | https://api.github.com/repos/aws/aws-sam-cli | closed | sam local start-api gives CORS error when CORS are enabled | type/question blocked/more-info-needed area/local/start-api maintainer/need-followup | <!-- Make sure we don't have an existing Issue that reports the bug you are seeing (both open and closed).
If you do find an existing Issue, re-open or add a comment to that Issue instead of creating a new one. -->
### Description:
I am using AWS CDK (Typescript) and running SAM local start-api to spin up a RESTful API tied to lambda resolvers and am running into a CORS issue when trying to hit the API from a browser.
### Steps to reproduce:
<!-- Provide detailed steps to replicate the bug, including steps from third party tools (CDK, etc.) -->
#### CDK config:
```
import { Construct } from 'constructs';
import {
IResource,
LambdaIntegration,
MockIntegration,
PassthroughBehavior,
RestApi,
} from 'aws-cdk-lib/aws-apigateway';
import {
NodejsFunction,
NodejsFunctionProps,
} from 'aws-cdk-lib/aws-lambda-nodejs';
import { Runtime } from 'aws-cdk-lib/aws-lambda';
import { join } from 'path';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as s3 from 'aws-cdk-lib/aws-s3';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as rds from 'aws-cdk-lib/aws-rds';
import * as cdk from 'aws-cdk-lib';
export function addCorsOptions(apiResource: IResource) {
apiResource.addMethod(
'OPTIONS',
new MockIntegration({
integrationResponses: [
{
statusCode: '200',
responseParameters: {
'method.response.header.Access-Control-Allow-Headers':
"'Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token,X-Amz-User-Agent'",
'method.response.header.Access-Control-Allow-Origin': "'*'",
'method.response.header.Access-Control-Allow-Credentials':
"'false'",
'method.response.header.Access-Control-Allow-Methods':
"'OPTIONS,GET,PUT,POST,DELETE'",
},
},
],
passthroughBehavior: PassthroughBehavior.NEVER,
requestTemplates: {
'application/json': '{"statusCode": 200}',
},
}),
{
methodResponses: [
{
statusCode: '200',
responseParameters: {
'method.response.header.Access-Control-Allow-Headers': true,
'method.response.header.Access-Control-Allow-Methods': true,
'method.response.header.Access-Control-Allow-Credentials': true,
'method.response.header.Access-Control-Allow-Origin': true,
},
},
],
}
);
}
export class FrontendService extends Construct {
constructor(scope: Construct, id: string) {
super(scope, id);
const vpc = new ec2.Vpc(this, 'HospoFEVPC');
const cluster = new rds.ServerlessCluster(this, 'AuroraHospoFECluster', {
engine: rds.DatabaseClusterEngine.AURORA_POSTGRESQL,
parameterGroup: rds.ParameterGroup.fromParameterGroupName(
this,
'ParameterGroup',
'default.aurora-postgresql10'
),
defaultDatabaseName: 'hospoFEDB',
vpc,
scaling: {
autoPause: cdk.Duration.seconds(0),
},
});
const bucket = new s3.Bucket(this, 'FrontendStore');
const nodeJsFunctionProps: NodejsFunctionProps = {
environment: {
BUCKET: bucket.bucketName,
CLUSTER_ARN: cluster.clusterArn,
SECRET_ARN: cluster.secret?.secretArn || '',
DB_NAME: 'hospoFEDB',
AWS_NODEJS_CONNECTION_REUSE_ENABLED: '1',
},
runtime: Runtime.NODEJS_14_X,
};
const loginLambda = new NodejsFunction(this, 'loginFunction', {
entry: 'dist/lambda/login.js',
memorySize: 1024,
...nodeJsFunctionProps,
});
const loginIntegration = new LambdaIntegration(loginLambda);
const api = new RestApi(this, 'frontend-api', {
restApiName: 'Frontend Service',
description: 'This service serves the frontend.',
});
const loginResource = api.root.addResource('login');
loginResource.addMethod('POST', loginIntegration);
addCorsOptions(loginResource);
}
}
```
#### lambda resolver (login.js)
```
export async function handler(event: any, context: any) {
  // NOTE: 'body' was undefined in the original snippet; define a
  // placeholder payload so the example actually compiles.
  const body = {};
  return {
    statusCode: 200,
    headers: { 'Access-Control-Allow-Origin': '*' },
    body: JSON.stringify(body),
  };
}
```
### Observed result:
When running locally using `sam local start-api` I get a CORS error mentioning it does not pass preflight. This error only occurs locally; when the lambda is deployed to AWS I do not get any CORS errors
### Expected result:
<!-- Describe what you expected. -->
With the `Access-Control-Allow-Origin` header set in the lambda response and mock integration for the options request I would expect it to not give me any CORS errors
### Additional environment details (Ex: Windows, Mac, Amazon Linux etc)
I am on SAM version 1.46.0 on the latest Ubuntu distribution. I have also tried configuring CORS via the CDK RestApi config like so:
```
new apigateway.RestApi(this, 'api', {
defaultCorsPreflightOptions: {
allowOrigins: apigateway.Cors.ALL_ORIGINS,
allowMethods: apigateway.Cors.ALL_METHODS // this is also the default
}
})
```
But it just gives me the following error:

| True | sam local start-api gives CORS error when CORS are enabled - <!-- Make sure we don't have an existing Issue that reports the bug you are seeing (both open and closed).
If you do find an existing Issue, re-open or add a comment to that Issue instead of creating a new one. -->
### Description:
I am using AWS CDK (Typescript) and running SAM local start-api to spin up a RESTful API tied to lambda resolvers and am running into a CORS issue when trying to hit the API from a browser.
### Steps to reproduce:
<!-- Provide detailed steps to replicate the bug, including steps from third party tools (CDK, etc.) -->
#### CDK config:
```
import { Construct } from 'constructs';
import {
IResource,
LambdaIntegration,
MockIntegration,
PassthroughBehavior,
RestApi,
} from 'aws-cdk-lib/aws-apigateway';
import {
NodejsFunction,
NodejsFunctionProps,
} from 'aws-cdk-lib/aws-lambda-nodejs';
import { Runtime } from 'aws-cdk-lib/aws-lambda';
import { join } from 'path';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as s3 from 'aws-cdk-lib/aws-s3';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as rds from 'aws-cdk-lib/aws-rds';
import * as cdk from 'aws-cdk-lib';
export function addCorsOptions(apiResource: IResource) {
apiResource.addMethod(
'OPTIONS',
new MockIntegration({
integrationResponses: [
{
statusCode: '200',
responseParameters: {
'method.response.header.Access-Control-Allow-Headers':
"'Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token,X-Amz-User-Agent'",
'method.response.header.Access-Control-Allow-Origin': "'*'",
'method.response.header.Access-Control-Allow-Credentials':
"'false'",
'method.response.header.Access-Control-Allow-Methods':
"'OPTIONS,GET,PUT,POST,DELETE'",
},
},
],
passthroughBehavior: PassthroughBehavior.NEVER,
requestTemplates: {
'application/json': '{"statusCode": 200}',
},
}),
{
methodResponses: [
{
statusCode: '200',
responseParameters: {
'method.response.header.Access-Control-Allow-Headers': true,
'method.response.header.Access-Control-Allow-Methods': true,
'method.response.header.Access-Control-Allow-Credentials': true,
'method.response.header.Access-Control-Allow-Origin': true,
},
},
],
}
);
}
export class FrontendService extends Construct {
constructor(scope: Construct, id: string) {
super(scope, id);
const vpc = new ec2.Vpc(this, 'HospoFEVPC');
const cluster = new rds.ServerlessCluster(this, 'AuroraHospoFECluster', {
engine: rds.DatabaseClusterEngine.AURORA_POSTGRESQL,
parameterGroup: rds.ParameterGroup.fromParameterGroupName(
this,
'ParameterGroup',
'default.aurora-postgresql10'
),
defaultDatabaseName: 'hospoFEDB',
vpc,
scaling: {
autoPause: cdk.Duration.seconds(0),
},
});
const bucket = new s3.Bucket(this, 'FrontendStore');
const nodeJsFunctionProps: NodejsFunctionProps = {
environment: {
BUCKET: bucket.bucketName,
CLUSTER_ARN: cluster.clusterArn,
SECRET_ARN: cluster.secret?.secretArn || '',
DB_NAME: 'hospoFEDB',
AWS_NODEJS_CONNECTION_REUSE_ENABLED: '1',
},
runtime: Runtime.NODEJS_14_X,
};
const loginLambda = new NodejsFunction(this, 'loginFunction', {
entry: 'dist/lambda/login.js',
memorySize: 1024,
...nodeJsFunctionProps,
});
const loginIntegration = new LambdaIntegration(loginLambda);
const api = new RestApi(this, 'frontend-api', {
restApiName: 'Frontend Service',
description: 'This service serves the frontend.',
});
const loginResource = api.root.addResource('login');
loginResource.addMethod('POST', loginIntegration);
addCorsOptions(loginResource);
}
}
```
#### lambda resolver (login.js)
```
export async function handler(event: any, context: any) {
return {
statusCode: 200,
headers: { 'Access-Control-Allow-Origin': '*' },
body: JSON.stringify(body),
};
}
```
### Observed result:
When running locally using `sam start local-api` I get a CORS error mentioning it does not pass preflight. This error only occurs locally, when the lambda is eployed to AWS I do not get any CORS errors
### Expected result:
<!-- Describe what you expected. -->
With the `Access-Control-Allow-Origin` header set in the lambda response and mock integration for the options request I would expect it to not give me any CORS errors
### Additional environment details (Ex: Windows, Mac, Amazon Linux etc)
I am on SAM version 1.46.0 on the latest Ubuntu distribution. I have also tried configuring CORS via the CDK RestApi config like so:
```
new apigateway.RestApi(this, 'api', {
defaultCorsPreflightOptions: {
allowOrigins: apigateway.Cors.ALL_ORIGINS,
allowMethods: apigateway.Cors.ALL_METHODS // this is also the default
}
})
```
But it just gives me the following error:

| main | sam local start api gives cors error when cors are enabled make sure we don t have an existing issue that reports the bug you are seeing both open and closed if you do find an existing issue re open or add a comment to that issue instead of creating a new one description i am using aws cdk typescript and running sam local start api to spin up a restful api tied to lambda resolvers and am running into a cors issue when trying to hit the api from a browser steps to reproduce cdk config import construct from constructs import iresource lambdaintegration mockintegration passthroughbehavior restapi from aws cdk lib aws apigateway import nodejsfunction nodejsfunctionprops from aws cdk lib aws lambda nodejs import runtime from aws cdk lib aws lambda import join from path import as lambda from aws cdk lib aws lambda import as from aws cdk lib aws import as from aws cdk lib aws import as rds from aws cdk lib aws rds import as cdk from aws cdk lib export function addcorsoptions apiresource iresource apiresource addmethod options new mockintegration integrationresponses statuscode responseparameters method response header access control allow headers content type x amz date authorization x api key x amz security token x amz user agent method response header access control allow origin method response header access control allow credentials false method response header access control allow methods options get put post delete passthroughbehavior passthroughbehavior never requesttemplates application json statuscode methodresponses statuscode responseparameters method response header access control allow headers true method response header access control allow methods true method response header access control allow credentials true method response header access control allow origin true export class frontendservice extends construct constructor scope construct id string super scope id const vpc new vpc this hospofevpc const cluster new rds serverlesscluster this aurorahospofecluster engine rds databaseclusterengine aurora postgresql parametergroup rds parametergroup fromparametergroupname this parametergroup default aurora defaultdatabasename hospofedb vpc scaling autopause cdk duration seconds const bucket new bucket this frontendstore const nodejsfunctionprops nodejsfunctionprops environment bucket bucket bucketname cluster arn cluster clusterarn secret arn cluster secret secretarn db name hospofedb aws nodejs connection reuse enabled runtime runtime nodejs x const loginlambda new nodejsfunction this loginfunction entry dist lambda login js memorysize nodejsfunctionprops const loginintegration new lambdaintegration loginlambda const api new restapi this frontend api restapiname frontend service description this service serves the frontend const loginresource api root addresource login loginresource addmethod post loginintegration addcorsoptions loginresource lambda resolver login js export async function handler event any context any return statuscode headers access control allow origin body json stringify body observed result when running locally using sam start local api i get a cors error mentioning it does not pass preflight this error only occurs locally when the lambda is eployed to aws i do not get any cors errors expected result with the access control allow origin header set in the lambda response and mock integration for the options request i would expect it to not give me any cors errors additional environment details ex windows mac amazon linux etc i am on sam version on the 
latest ubuntu distribution i have also tried configuring cors via the cdk restapi config like so new apigateway restapi this api defaultcorspreflightoptions alloworigins apigateway cors all origins allowmethods apigateway cors all methods this is also the default but it just gives me the following error | 1 |
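One quick way to see what the local endpoint is actually telling the browser is to replay the preflight by hand. A minimal sketch, assuming `sam local start-api` serves the stack on its default http://127.0.0.1:3000 and that the resource is `/login` as in the CDK snippet above (the origin header value is just a placeholder):

```python
import requests

URL = "http://127.0.0.1:3000/login"  # assumed local endpoint

# Replay the browser's CORS preflight request.
resp = requests.options(
    URL,
    headers={
        "Origin": "http://localhost:8080",  # placeholder front-end origin
        "Access-Control-Request-Method": "POST",
        "Access-Control-Request-Headers": "content-type",
    },
    timeout=10,
)

print("status:", resp.status_code)
# If any of these come back empty, the browser will fail the preflight.
for name in (
    "Access-Control-Allow-Origin",
    "Access-Control-Allow-Methods",
    "Access-Control-Allow-Headers",
):
    print(name, "=", resp.headers.get(name))
```

Comparing this output between the local emulator and the deployed API Gateway stage shows exactly which headers `sam local start-api` drops.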
97,483 | 11,014,615,371 | IssuesEvent | 2019-12-04 23:12:50 | shakenbytes/riot | https://api.github.com/repos/shakenbytes/riot | closed | Add getting started documentation | documentation | Describe how to run the project locally for new collaborators. | 1.0 | Add getting started documentation - Describe how to run the project locally for new collaborators. | non_main | add getting started documentation describe how to run the project locally for new collaborators | 0 |
1,416 | 6,175,491,001 | IssuesEvent | 2017-07-01 02:53:55 | caskroom/homebrew-cask | https://api.github.com/repos/caskroom/homebrew-cask | closed | caskroom/cask/android-sdk fails to install behind a proxy | awaiting maintainer feedback | #### General troubleshooting steps
- [x] I have checked the instructions for [reporting bugs](https://github.com/caskroom/homebrew-cask#reporting-bugs) (or [making requests](https://github.com/caskroom/homebrew-cask#requests)) before opening the issue.
- [x] None of the templates was appropriate for my issue, or I'm not sure.
- [x] I ran `brew update-reset && brew update` and retried my command.
- [x] I ran `brew doctor`, fixed as many issues as possible and retried my command.
- [x] I understand that [if I ignore these instructions, my issue may be closed without review](https://github.com/caskroom/homebrew-cask/blob/master/doc/faq/closing_issues_without_review.md).
https://github.com/caskroom/homebrew-cask/blob/master/Casks/android-sdk.rb#L60
#### Description of issue
caskroom/cask/android-sdk fails to install behind a proxy. sdkmanager does not respect environment variables and requires them inline
https://developer.android.com/studio/command-line/sdkmanager.html#options
#### Output of your command with `--verbose --debug`
```
$ brew install --verbose --debug caskroom/cask/android-sdk
==> brew cask install caskroom/cask/android-sdk --debug --verbose
/usr/local/bin/brew cask install caskroom/cask/android-sdk --debug --verbose
==> Hbc::Installer#install
==> Printing caveats
==> Caveats
We will install android-sdk-tools, platform-tools, and build-tools for you.
You can control android sdk packages via the sdkmanager command.
You may want to add to your profile:
'export ANDROID_SDK_ROOT=/usr/local/share/android-sdk'
This operation may take up to 10 minutes depending on your internet connection.
Please, be patient.
==> Hbc::Installer#fetch
==> Downloading
==> Downloading https://dl.google.com/android/repository/sdk-tools-darwin-3859397.zip
Already downloaded: /Users/uspsbuilduser/Library/Caches/Homebrew/Cask/android-sdk--3859397.zip
==> Downloaded to -> /Users/uspsbuilduser/Library/Caches/Homebrew/Cask/android-sdk--3859397.zip
==> Verifying download
==> Determining which verifications to run for Cask android-sdk
==> Checking for verification class Hbc::Verify::Checksum
==> 1 verifications defined
Hbc::Verify::Checksum
==> Running verification of class Hbc::Verify::Checksum
==> Verifying checksum for Cask android-sdk
==> SHA256 checksums match
==> Installing Cask android-sdk
==> Hbc::Installer#stage
==> Extracting primary container
==> Determining which containers to use based on filetype
==> Checking container class Hbc::Container::Pkg
==> Checking container class Hbc::Container::Ttf
==> Checking container class Hbc::Container::Otf
==> Checking container class Hbc::Container::Air
==> Checking container class Hbc::Container::Cab
==> Checking container class Hbc::Container::Dmg
==> Executing: ["/usr/bin/hdiutil", "imageinfo", "/Users/uspsbuilduser/Library/Caches/Homebrew/Cask/android-sdk--3859397.zip"]
==> Checking container class Hbc::Container::SevenZip
==> Checking container class Hbc::Container::Sit
==> Checking container class Hbc::Container::Rar
==> Checking container class Hbc::Container::Zip
==> Using container class Hbc::Container::Zip for /Users/uspsbuilduser/Library/Caches/Homebrew/Cask/android-sdk--3859397.zip
==> Executing: ["/usr/bin/ditto", "-x", "-k", "--", "/Users/uspsbuilduser/Library/Caches/Homebrew/Cask/android-sdk--3859397.zip", "/var/folders/x8/cpj1_sfx60g462dqkyw_h3_00000gn/T/d20170628-2192-10093lg"]
==> Creating metadata directory /usr/local/Caskroom/android-sdk/.metadata/3859397/20170628202646.791.
==> Creating metadata subdirectory /usr/local/Caskroom/android-sdk/.metadata/3859397/20170628202646.791/Casks.
==> Installing artifacts
==> Determining which artifacts are present in Cask android-sdk
==> 3 artifact/s defined
#<Hbc::Artifact::PreflightBlock:0x007fb81590d650>
#<Hbc::Artifact::Binary:0x007fb81590d588>
#<Hbc::Artifact::PostflightBlock:0x007fb81590d510>
==> Installing artifact of class Hbc::Artifact::PreflightBlock
==> Executing: ["/usr/local/Caskroom/android-sdk/3859397/tools/bin/sdkmanager", "tools", "platform-tools", "build-tools;26.0.0"]
==> Warning: java.net.ConnectException: Connection refused
==> Warning: Failed to download any source lists!
==> Warning: Failed to find package tools
==> Purging files for version 3859397 of Cask android-sdk
Error: Command failed to execute!
==> Failed command:
/usr/local/Caskroom/android-sdk/3859397/tools/bin/sdkmanager tools platform-tools build-tools;26.0.0
==> Standard Output of failed command:
==> Standard Error of failed command:
Warning: java.net.ConnectException: Connection refused
Warning: Failed to download any source lists!
Warning: Failed to find package tools
==> Exit status of failed command:
#<Process::Status: pid 2347 exit 1>
Error: nothing to install
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli/install.rb:16:in `run'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli/abstract_command.rb:35:in `run'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:98:in `run_command'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:168:in `run'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:132:in `run'
/usr/local/Homebrew/Library/Homebrew/cmd/cask.rb:8:in `cask'
/usr/local/Homebrew/Library/Homebrew/brew.rb:101:in `<main>'
Error: Kernel.exit
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:173:in `exit'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:173:in `rescue in run'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:156:in `run'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:132:in `run'
/usr/local/Homebrew/Library/Homebrew/cmd/cask.rb:8:in `cask'
/usr/local/Homebrew/Library/Homebrew/brew.rb:101:in `<main>'
```
#### Output of `brew cask doctor`
```
$ brew cask doctor
==> Homebrew-Cask Version
Homebrew-Cask 1.2.3
caskroom/homebrew-cask (git revision 17d80; last commit 2017-06-28)
==> Homebrew-Cask Install Location
<NONE>
==> Homebrew-Cask Staging Location
/usr/local/Caskroom
==> Homebrew-Cask Cached Downloads
~/Library/Caches/Homebrew/Cask (1 files, 82.2MB)
==> Homebrew-Cask Taps:
/usr/local/Homebrew/Library/Taps/caskroom/homebrew-cask (3648 casks)
==> Contents of $LOAD_PATH
/usr/local/Homebrew/Library/Homebrew/cask/lib
/usr/local/Homebrew/Library/Homebrew
/Library/Ruby/Site/2.0.0
/Library/Ruby/Site/2.0.0/x86_64-darwin16
/Library/Ruby/Site/2.0.0/universal-darwin16
/Library/Ruby/Site
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/vendor_ruby/2.0.0
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/vendor_ruby/2.0.0/x86_64-darwin16
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/vendor_ruby/2.0.0/universal-darwin16
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/vendor_ruby
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/x86_64-darwin16
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/universal-darwin16
==> Environment Variables
LC_ALL="en_US.UTF-8"
PATH="/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Applications/Server.app/Contents/ServerRoot/usr/bin:/Applications/Server.app/Contents/ServerRoot/usr/sbin:/usr/local/Homebrew/Library/Homebrew/shims/scm"
SHELL="/bin/bash"
```
| True | main | 1
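Since the report notes that `sdkmanager` ignores proxy environment variables and the linked options page documents explicit proxy flags instead, a common workaround is to pass those flags on the command line. A minimal sketch of driving that from Python; the proxy host and port are placeholders, and the `sdkmanager` path assumes the cask's staging location from the log above:

```python
import subprocess

SDKMANAGER = "/usr/local/Caskroom/android-sdk/3859397/tools/bin/sdkmanager"
PROXY_HOST = "proxy.example.com"  # placeholder
PROXY_PORT = "8080"               # placeholder

# sdkmanager documents --proxy / --proxy_host / --proxy_port, which
# sidesteps its ignoring of http_proxy / https_proxy in the environment.
cmd = [
    SDKMANAGER,
    "--proxy=http",
    "--proxy_host=" + PROXY_HOST,
    "--proxy_port=" + PROXY_PORT,
    "tools",
    "platform-tools",
    "build-tools;26.0.0",
]

result = subprocess.run(cmd, capture_output=True, text=True)
print(result.stdout)
if result.returncode != 0:
    raise SystemExit(result.stderr)
```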
2,415 | 8,576,615,039 | IssuesEvent | 2018-11-12 20:57:29 | ansible/ansible | https://api.github.com/repos/ansible/ansible | closed | config-contexts-addition in hostvars in netbox dynamic inventory | affects_2.8 c:inventory/contrib_script feature has_pr inventory module needs_maintainer support:community support:core | ##### SUMMARY
Adding config contexts to hostvars will let us configure remote machines. For example, configuring the AS number in a router using Ansible by fetching the info from Netbox via dynamic inventory.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
config-contexts in hostvars in netbox dynamic inventory
##### ADDITIONAL INFORMATION
The feature will be used to configure remote machines by adding config contexts to hostvars. For example, configuring the AS number in a router using Ansible by fetching the info from Netbox via dynamic inventory.
```
ansible-inventory -vvv -i netbox_inventory.yml --host random
ansible-inventory 2.6.5
config file = /User/ansible.cfg
configured module search path = [u'/User/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python2.7/site-packages/ansible
executable location = /usr/local/bin/ansible-inventory
python version = 2.7.15 (default, Sep 5 2018, 14:52:04) [GCC 4.2.1 Compatible Apple LLVM 9.1.0 (clang-902.0.39.2)]
Using /User/ansible.cfg as config file
Fetching: http://localhost:8000/api/dcim/sites/?limit=0
Fetching: http://localhost:8000/api/dcim/regions/?limit=0
Fetching: http://localhost:8000/api/tenancy/tenants/?limit=0
Fetching: http://localhost:8000/api/dcim/racks/?limit=0
Fetching: http://localhost:8000/api/dcim/device-roles/?limit=0
Fetching: http://localhost:8000/api/dcim/device-types/?limit=0
Fetching: http://localhost:8000/api/dcim/manufacturers/?limit=0
Fetching: http://localhost:8000/api/dcim/devices/?limit=0&role=aci-ipn
Fetching: http://localhost:8000/api/virtualization/virtual-machines/?limit=0&role=aci-ipn
Parsed /User/netbox_inventory.yml inventory source with netbox plugin
{
"config_contexts": {
"cc": {
"net": {
"vpn_asn": 66000
}
}
},
"device_roles": [
"ACI IPN"
],
"device_types": [
"random"
],
"manufacturers": [
"Cisco"
],
"racks": [
"abc"
],
"sites": [
"abc"
],
"tenants": [
"xyz"
]
}
```
| True | main | 1
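The data this feature request wants in hostvars is already exposed by the NetBox REST API: the device serializer carries a rendered `config_context` field, which is what the inventory plugin would copy through. A minimal sketch of pulling it directly with `requests`, against the local NetBox instance from the debug output; the API token is a placeholder, and the nested `cc.net.vpn_asn` path comes from the example payload:

```python
import requests

NETBOX = "http://localhost:8000"   # instance from the debug output
TOKEN = "0123456789abcdef"         # placeholder API token
headers = {"Authorization": "Token " + TOKEN}

# List endpoint; depending on the NetBox version, the rendered config
# context may only appear on the device detail endpoint.
resp = requests.get(
    NETBOX + "/api/dcim/devices/",
    params={"name": "random"},     # device name from the example
    headers=headers,
    timeout=10,
)
resp.raise_for_status()

for device in resp.json()["results"]:
    ctx = device.get("config_context") or {}
    vpn_asn = ctx.get("cc", {}).get("net", {}).get("vpn_asn")
    print(device["name"], "vpn_asn =", vpn_asn)  # e.g. random vpn_asn = 66000
```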
3,321 | 12,881,620,171 | IssuesEvent | 2020-07-12 13:01:09 | ansible/ansible | https://api.github.com/repos/ansible/ansible | closed | macports present test not reliable | affects_2.2 bug collection collection:community.general macos module needs_collection_redirect packaging support:community waiting_on_contributor waiting_on_maintainer | - Bug Report
##### COMPONENT NAME
ansible/modules/extras/packaging/os/macports.py
##### ANSIBLE VERSION
```
ansible 2.2.0.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### OS / ENVIRONMENT
OSX El Capitan
##### SUMMARY
If the port "gsettings-desktop-schemas" is installed, then check for present state of package "mas" gives wrong result.
##### STEPS TO REPRODUCE
First install package gsettings-desktop-schemas e.g. manually.
sudo port install gsettings-desktop-schemas
Second make sure package mas is not installed.
sudo port uninstall mas
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: install port packages
  become: true
  macports: name=mas state=present
```
##### EXPECTED RESULTS
macport package mas should be installed
##### ACTUAL RESULTS
macport package is not installed
##### FIX
Line 72:
rc, out, err = module.run_command("%s installed | grep -q ^.*%s" % (pipes.quote(port_path), pipes.quote(name)), use_unsafe_shell=True)
regular expression should be changed to: '^ *%s ' | True | main | 1
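The reason the suggested fix works: the original grep pattern `^.*%s` matches the package name anywhere on the line, so an installed `gsettings-desktop-schemas` satisfies a presence check for `mas`. Anchoring the name between the leading spaces of the `port installed` listing and a trailing space removes the false match. A minimal standalone sketch of the corrected check (same shell pipeline as the module, but not the module code itself; `pipes.quote` mirrors the snippet above, `shlex.quote` is the modern equivalent):

```python
import pipes       # shlex.quote on modern Python
import subprocess

def port_present(port_path: str, name: str) -> bool:
    """True if `name` is an installed MacPorts port."""
    # '^ *%s ' anchors the name, so 'mas' no longer matches
    # 'gsettings-desktop-schemas' in the `port installed` output.
    pattern = "^ *%s " % name
    cmd = "%s installed | grep -q %s" % (
        pipes.quote(port_path),
        pipes.quote(pattern),
    )
    return subprocess.call(cmd, shell=True) == 0

if __name__ == "__main__":
    # Assumes MacPorts' default binary location.
    print(port_present("/opt/local/bin/port", "mas"))
```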
115,018 | 17,268,580,851 | IssuesEvent | 2021-07-22 16:35:48 | harrinry/gocd-build-status-notifier | https://api.github.com/repos/harrinry/gocd-build-status-notifier | opened | CVE-2018-14718 (High) detected in jackson-databind-2.5.3.jar, jackson-databind-2.9.2.jar | security vulnerability | ## CVE-2018-14718 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jackson-databind-2.5.3.jar</b>, <b>jackson-databind-2.9.2.jar</b></p></summary>
<p>
<details><summary><b>jackson-databind-2.5.3.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: gocd-build-status-notifier/gitlab-mr-status/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.5.3/c37875ff66127d93e5f672708cb2dcc14c8232ab/jackson-databind-2.5.3.jar</p>
<p>
Dependency Hierarchy:
- java-gitlab-api-4.1.0.jar (Root Library)
- :x: **jackson-databind-2.5.3.jar** (Vulnerable Library)
</details>
<details><summary><b>jackson-databind-2.9.2.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: gocd-build-status-notifier/github-pr-status/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.9.2/1d8d8cb7cf26920ba57fb61fa56da88cc123b21f/jackson-databind-2.9.2.jar</p>
<p>
Dependency Hierarchy:
- github-api-1.95.jar (Root Library)
- :x: **jackson-databind-2.9.2.jar** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/harrinry/gocd-build-status-notifier/commit/039d60b34386e06662f779ce1a97720d950bae52">039d60b34386e06662f779ce1a97720d950bae52</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.7 might allow remote attackers to execute arbitrary code by leveraging failure to block the slf4j-ext class from polymorphic deserialization.
<p>Publish Date: 2019-01-02
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-14718>CVE-2018-14718</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-14718">https://nvd.nist.gov/vuln/detail/CVE-2018-14718</a></p>
<p>Release Date: 2019-01-02</p>
<p>Fix Resolution: 2.9.7</p>
</p>
</details>
<p></p>
| True | non_main | 0 |
4,178 | 20,111,384,382 | IssuesEvent | 2022-02-07 15:22:32 | centerofci/mathesar | https://api.github.com/repos/centerofci/mathesar | closed | npm audit found vulnerabilities | type: bug work: frontend restricted: maintainers status: triage | ```
# npm audit report
ansi-html *
Severity: high
Uncontrolled Resource Consumption in ansi-html - https://github.com/advisories/GHSA-whgm-jr23-g3j9
fix available via `npm audit fix`
node_modules/ansi-html
webpack-hot-middleware 2.9.0 - 2.25.0
Depends on vulnerable versions of ansi-html
node_modules/webpack-hot-middleware
ansi-regex >2.1.1 <5.0.1
Severity: moderate
Inefficient Regular Expression Complexity in chalk/ansi-regex - https://github.com/advisories/GHSA-93q8-gq69-wqmw
fix available via `npm audit fix`
node_modules/ansi-align/node_modules/ansi-regex
node_modules/ansi-regex
node_modules/sveltedoc-parser/node_modules/ansi-regex
node_modules/wide-align/node_modules/ansi-regex
strip-ansi 4.0.0 - 5.2.0
Depends on vulnerable versions of ansi-regex
node_modules/ansi-align/node_modules/strip-ansi
node_modules/sveltedoc-parser/node_modules/string-width/node_modules/strip-ansi
node_modules/wide-align/node_modules/strip-ansi
string-width 2.1.0 - 4.1.0
Depends on vulnerable versions of strip-ansi
node_modules/ansi-align/node_modules/string-width
node_modules/sveltedoc-parser/node_modules/string-width
node_modules/wide-align/node_modules/string-width
ansi-align 3.0.0
Depends on vulnerable versions of string-width
node_modules/ansi-align
immer <9.0.6
Severity: critical
Prototype Pollution in immer - https://github.com/advisories/GHSA-33f9-j839-rf8h
fix available via `npm audit fix --force`
Will install @storybook/addon-essentials@6.1.21, which is a breaking change
node_modules/immer
react-dev-utils 6.0.6-next.9b4009d7 - 12.0.0-next.60
Depends on vulnerable versions of immer
node_modules/react-dev-utils
@storybook/builder-webpack4 *
Depends on vulnerable versions of postcss
Depends on vulnerable versions of react-dev-utils
node_modules/@storybook/builder-webpack4
@storybook/addon-docs >=6.2.0-alpha.0
Depends on vulnerable versions of @storybook/builder-webpack4
Depends on vulnerable versions of @storybook/core
node_modules/@storybook/addon-docs
@storybook/addon-essentials >=6.2.0-alpha.0
Depends on vulnerable versions of @storybook/addon-docs
node_modules/@storybook/addon-essentials
@storybook/core-server *
Depends on vulnerable versions of @storybook/builder-webpack4
Depends on vulnerable versions of @storybook/builder-webpack5
Depends on vulnerable versions of @storybook/manager-webpack4
node_modules/@storybook/core-server
@storybook/core >=6.2.0-alpha.0
Depends on vulnerable versions of @storybook/builder-webpack5
Depends on vulnerable versions of @storybook/core-server
node_modules/@storybook/core
@storybook/svelte 6.2.0-alpha.0 - 6.5.0-alpha.5
Depends on vulnerable versions of @storybook/core
node_modules/@storybook/svelte
@storybook/addon-svelte-csf >=1.0.1
Depends on vulnerable versions of @storybook/svelte
node_modules/@storybook/addon-svelte-csf
@storybook/builder-webpack5 <=6.5.0-alpha.5
Depends on vulnerable versions of react-dev-utils
node_modules/@storybook/builder-webpack5
json-schema <0.4.0
Severity: moderate
json-schema is vulnerable to Prototype Pollution - https://github.com/advisories/GHSA-896r-f27r-55mw
fix available via `npm audit fix`
node_modules/json-schema
jsprim 0.3.0 - 1.4.1 || 2.0.0 - 2.0.1
Depends on vulnerable versions of json-schema
node_modules/jsprim
nth-check <2.0.1
Severity: moderate
Inefficient Regular Expression Complexity in nth-check - https://github.com/advisories/GHSA-rp65-9cf3-cjxr
fix available via `npm audit fix`
node_modules/nth-check
postcss <8.2.13
Severity: moderate
Regular Expression Denial of Service in postcss - https://github.com/advisories/GHSA-566m-qj78-rww5
fix available via `npm audit fix --force`
Will install @storybook/addon-essentials@6.1.21, which is a breaking change
node_modules/@storybook/builder-webpack4/node_modules/postcss
node_modules/autoprefixer/node_modules/postcss
node_modules/css-loader/node_modules/postcss
node_modules/icss-utils/node_modules/postcss
node_modules/postcss-flexbugs-fixes/node_modules/postcss
node_modules/postcss-modules-extract-imports/node_modules/postcss
node_modules/postcss-modules-local-by-default/node_modules/postcss
node_modules/postcss-modules-scope/node_modules/postcss
node_modules/postcss-modules-values/node_modules/postcss
@storybook/builder-webpack4 *
Depends on vulnerable versions of postcss
Depends on vulnerable versions of react-dev-utils
node_modules/@storybook/builder-webpack4
@storybook/addon-docs >=6.2.0-alpha.0
Depends on vulnerable versions of @storybook/builder-webpack4
Depends on vulnerable versions of @storybook/core
node_modules/@storybook/addon-docs
@storybook/addon-essentials >=6.2.0-alpha.0
Depends on vulnerable versions of @storybook/addon-docs
node_modules/@storybook/addon-essentials
@storybook/core-server *
Depends on vulnerable versions of @storybook/builder-webpack4
Depends on vulnerable versions of @storybook/builder-webpack5
Depends on vulnerable versions of @storybook/manager-webpack4
node_modules/@storybook/core-server
@storybook/core >=6.2.0-alpha.0
Depends on vulnerable versions of @storybook/builder-webpack5
Depends on vulnerable versions of @storybook/core-server
node_modules/@storybook/core
@storybook/svelte 6.2.0-alpha.0 - 6.5.0-alpha.5
Depends on vulnerable versions of @storybook/core
node_modules/@storybook/svelte
@storybook/addon-svelte-csf >=1.0.1
Depends on vulnerable versions of @storybook/svelte
node_modules/@storybook/addon-svelte-csf
autoprefixer 1.0.20131222 - 9.8.8
Depends on vulnerable versions of postcss
node_modules/autoprefixer
css-loader 0.15.0 - 4.3.0
Depends on vulnerable versions of icss-utils
Depends on vulnerable versions of postcss
Depends on vulnerable versions of postcss-modules-values
node_modules/css-loader
@storybook/manager-webpack4 *
Depends on vulnerable versions of css-loader
node_modules/@storybook/manager-webpack4
icss-utils <=4.1.1
Depends on vulnerable versions of postcss
node_modules/icss-utils
postcss-modules-local-by-default <=4.0.0-rc.4
Depends on vulnerable versions of icss-utils
Depends on vulnerable versions of postcss
node_modules/postcss-modules-local-by-default
postcss-modules-values <=4.0.0-rc.5
Depends on vulnerable versions of icss-utils
Depends on vulnerable versions of postcss
node_modules/postcss-modules-values
postcss-flexbugs-fixes <=4.2.1
Depends on vulnerable versions of postcss
node_modules/postcss-flexbugs-fixes
postcss-modules-extract-imports <=2.0.0
Depends on vulnerable versions of postcss
node_modules/postcss-modules-extract-imports
postcss-modules-scope <=2.2.0
Depends on vulnerable versions of postcss
node_modules/postcss-modules-scope
prismjs <1.25.0
Severity: moderate
Regular Expression Denial of Service in prismjs - https://github.com/advisories/GHSA-hqhp-5p83-hx96
fix available via `npm audit fix`
node_modules/prismjs
refractor <=3.4.0 || 4.0.0 - 4.1.1
Depends on vulnerable versions of prismjs
node_modules/refractor
tmpl <1.0.5
Severity: moderate
Regular Expression Denial of Service in tmpl - https://github.com/advisories/GHSA-jgrx-mgxx-jf9v
fix available via `npm audit fix`
node_modules/tmpl
32 vulnerabilities (21 moderate, 2 high, 9 critical)
To address issues that do not require attention, run:
npm audit fix
To address all issues (including breaking changes), run:
npm audit fix --force
``` | True | main | 1 |
56,794 | 23,904,369,190 | IssuesEvent | 2022-09-08 22:15:06 | MicrosoftDocs/windowsserverdocs | https://api.github.com/repos/MicrosoftDocs/windowsserverdocs | closed | Can't follow directions on this page | Pri1 windows-server/prod remote-desktop-services/tech |
"From the Connection Center, tap the overflow menu (...) on the command bar at the top of the client."
Where is the Connection Center? What does it look like? I don't see the (...) menu. I'm stuck. Please update the instructions with screenshots. Thank you.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 09ab98fc-fea2-a11a-e5ca-de1430211d97
* Version Independent ID: 4d9943c3-fef7-ca86-bd03-1241a04b8135
* Content: [Get started with the Windows Desktop client](https://docs.microsoft.com/en-us/windows-server/remote/remote-desktop-services/clients/windowsdesktop#install-the-client)
* Content Source: [WindowsServerDocs/remote/remote-desktop-services/clients/windowsdesktop.md](https://github.com/MicrosoftDocs/windowsserverdocs/blob/master/WindowsServerDocs/remote/remote-desktop-services/clients/windowsdesktop.md)
* Product: **windows-server**
* Technology: **remote-desktop-services**
* GitHub Login: @Heidilohr
* Microsoft Alias: **helohr** | 1.0 | non_main | 0 |
665,521 | 22,320,850,849 | IssuesEvent | 2022-06-14 06:12:12 | opencrvs/opencrvs-core | https://api.github.com/repos/opencrvs/opencrvs-core | closed | In performance, stats data not correct when filtered for provinces | 👹Bug Priority: medium | **Bug description:**
In the Performance view, the stats are not correct when filtered by province. In Sulaka Province, there is one district, also named Sulaka. Sulaka District (in Sulaka Province) has one office and 1 registrar, but when the view is filtered to Sulaka Province, no office or registrar is shown. Since Sulaka District is part of Sulaka Province, the province should show the same stats values.
**Steps to reproduce:**
1. Log in as a National System Admin
2. Go to Performance
3. From the location filter, select any district that has an office and a registrar, e.g. Sulaka District, Sulaka Province
4. Observe the stats values
5. From the location filter, select the province of that district
6. Observe the stats values
**Actual result:**
In the Performance view, the stats are not correct when filtered by province: district values are not included in the province totals.
**Expected result:**
Province stats should add up all district stats values within that province. According to the recording, Sulaka Province should show 1 registration office and 1 registrar.
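To make the expected roll-up explicit, here is a toy sketch (the stat keys and district names are illustrative placeholders, not the actual OpenCRVS data model): a province's stats should simply be the sum of its districts' stats.

```python
# Toy illustration of the expected aggregation: province stats are the
# sum of the stats of every district inside that province.
district_stats = {
    "Sulaka District": {"offices": 1, "registrars": 1},
    # ...any other districts of Sulaka Province would be listed here
}

def province_stats(districts):
    totals = {"offices": 0, "registrars": 0}
    for stats in districts.values():
        for key, value in stats.items():
            totals[key] += value
    return totals

# With a single district, the province must show the same values.
assert province_stats(district_stats) == {"offices": 1, "registrars": 1}
```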
**Screen recording:**
https://images.zenhubusercontent.com/91778759/17498213-4c82-462f-a5c0-589877c57f25/province_stats_not_correct.mp4
**Tested on:**
https://login.farajaland-qa.opencrvs.org/ | 1.0 | non_main | 0 |
101,772 | 16,528,040,261 | IssuesEvent | 2021-05-26 23:32:56 | alpersonalwebsite/cards | https://api.github.com/repos/alpersonalwebsite/cards | opened | WS-2019-0318 (High) detected in handlebars-4.1.2.tgz | security vulnerability | ## WS-2019-0318 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.1.2.tgz</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.1.2.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.1.2.tgz</a></p>
<p>Path to dependency file: cards/package.json</p>
<p>Path to vulnerable library: cards/node_modules/handlebars</p>
<p>
Dependency Hierarchy:
- jest-expo-32.0.1.tgz (Root Library)
- jest-23.6.0.tgz
- jest-cli-23.6.0.tgz
- istanbul-api-1.3.7.tgz
- istanbul-reports-1.5.1.tgz
- :x: **handlebars-4.1.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/alpersonalwebsite/cards/commit/959053f33fb963a3ec04cdb4be3c6f705f0312c6">959053f33fb963a3ec04cdb4be3c6f705f0312c6</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In "showdownjs/showdown", versions prior to v4.4.5 are vulnerable against Regular expression Denial of Service (ReDOS) once receiving specially-crafted templates.
<p>Publish Date: 2019-10-20
<p>URL: <a href=https://github.com/wycats/handlebars.js/commit/8d5530ee2c3ea9f0aee3fde310b9f36887d00b8b>WS-2019-0318</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1300">https://www.npmjs.com/advisories/1300</a></p>
<p>Release Date: 2019-12-01</p>
<p>Fix Resolution: handlebars - 4.4.5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | non_main | 0 |
3,853 | 16,994,504,576 | IssuesEvent | 2021-07-01 03:32:33 | gama-platform/gama | https://api.github.com/repos/gama-platform/gama | opened | org.geotools.geojson.feature.FeatureJSON is deprecated | Concerns Data Persistence Topology GIS Version 1.8.2 🛠 Affects Maintainability | **Is your request related to a problem? Please describe.**
We are using org.geotools.geojson.feature.FeatureJSON which is deprecated (in the save statement at least). We should replace it.
| True | main | 1 |
58,575 | 3,089,712,730 | IssuesEvent | 2015-08-25 23:12:08 | kubernetes/kubernetes | https://api.github.com/repos/kubernetes/kubernetes | closed | I can't reach service IPs on the network. Unable to access application from web. | kind/support priority/P2 team/any | Hi, I have my Kubernetes environment up and running on an Ubuntu machine with a master and two minion nodes. I'm able to fetch the nodes and list them.
I'm using https://github.com/GoogleCloudPlatform/kubernetes/tree/master/release-0.19.0/examples/update-demo, i.e. the example that scales the nautilus image. I'm unsure which IP to use to view the images on the web page.
```bash
$ kubectl get pods
POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS CREATED MESSAGE
update-demo-nautilus-agkk9 172.16.50.2 10.x.x.x/10.x.x.x name=update-demo,version=nautilus Running 18 minutes
update-demo gcr.io/google_containers/update-demo:nautilus Running 18 minutes
update-demo-nautilus-anjzp 172.16.50.3 10.x.x.x/10.x.x.x name=update-demo,version=nautilus Running 18 minutes
update-demo gcr.io/google_containers/update-demo:nautilus Running 18 minutes
```
```bash
$ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
update-demo-nautilus update-demo gcr.io/google_containers/update-demo:nautilus name=update-demo,version=nautilus 2
``` | 1.0 | non_main | 0 |
312,406 | 23,426,568,536 | IssuesEvent | 2022-08-14 13:27:24 | johnbedeir/Devops-Tools-Documentation | https://api.github.com/repos/johnbedeir/Devops-Tools-Documentation | closed | Re-create Readme | Kubernetes | Devops-Pipeline-Python | documentation | Re-create the readme file in Kubernetes for the python application | 1.0 | non_main | 0 |
2,481 | 8,639,915,900 | IssuesEvent | 2018-11-23 22:40:44 | F5OEO/rpitx | https://api.github.com/repos/F5OEO/rpitx | closed | setTime.py isn't able to change/sync time | V1 related (not maintained) | What I've done:
sudo timedatectl set-ntp 0 #to disable automatic time update
sudo date -s 14:00:00 #to change clock and see whether it works
Then
./setTime.py
Two 20 cm jumper cables are connected to PIN 7 and PIN 12.
I also tried the --gpio4 option; the clock still did not sync with my signal.
Raspbian is Stretch from 29.11.2017.
I tried with the clock 20 cm and about 80 cm away; no success either way.
Transmitting FM to a radio works; however, it is much faster than the original audio and the quality is really bad compared to pifmrds.
On the DCF77 clock, a button is held for about 1 second so it starts to sync. As long as setTime is transmitting, the clock can't sync. When it stops about 2 min later, it is successfully synced.
My assumption is that since I'm quite near the DCF77 transmitter station (only 200 km away), both signals interfere: DCF77 is AM, keyed with pulses of 0.1 or 0.2 seconds of lowered signal, so e.g. my signal lowers while the original signal is still high, or vice versa, confusing the receiver.
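For reference, DCF77 keys each bit by dropping the carrier amplitude at the start of every second: roughly 0.1 s for a 0 bit and 0.2 s for a 1 bit. A minimal sketch of that envelope, assuming those timings (this is not rpitx's actual implementation, just an illustration of why two overlapping transmitters garble the pulses):

```python
# Illustrative DCF77-style amplitude envelope: each second begins with a
# reduced-carrier pulse whose length encodes the bit (0.1 s = 0, 0.2 s = 1).

def dcf77_envelope(bits, low=0.15, high=1.0):
    """Return one amplitude value per millisecond for the given bit string."""
    envelope = []
    for bit in bits:
        pulse_ms = 200 if bit == "1" else 100   # lowered-carrier pulse length
        envelope += [low] * pulse_ms            # reduced amplitude
        envelope += [high] * (1000 - pulse_ms)  # full carrier for the rest
    return envelope

# Two transmitters: if one is at `high` while the other is at `low`,
# the receiver sees a combined signal that no longer looks like a clean pulse.
env = dcf77_envelope("010")
assert len(env) == 3 * 1000
```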
Or do you have any ideas what I can try to get it to work? | True | main | 1 |
805,626 | 29,578,753,725 | IssuesEvent | 2023-06-07 02:43:00 | datamol-io/graphium | https://api.github.com/repos/datamol-io/graphium | closed | IPU slower than GPU | High priority Graphcore | Hey Graphcore team. I have run some benchmarks comparing the performance of the IPU vs GPU on the QM9 dataset. The results are [available here](https://docs.google.com/spreadsheets/d/1GrDNK9Wd3gkthxrutfipuLixZh-X4gLxAMvngI1dcmo/edit?usp=sharing).
For now, I compare a single IPU vs the NVidia A6000 (48 GB), about 30-40% slower than the top A100 chip.
The results for the IPU are overall slower: twice as slow for validation, and a bit less for training. On both hardware setups, I optimized the batch size to match the device requirements, but kept every other parameter the same. Everything runs on the branch gpu in Goli, with precision 32 (no mixed precision).
The GPU seems to benefit a lot from large batch sizes.
The IPU suffers from a compilation delay (~10 min) and from delays at the start of each train epoch and validation epoch to reload the right model on the machine. These were excluded from the calculations for both IPUs and GPUs. In the best case, IPUs perform at 4 seconds for the val epoch and 15 for the train epoch (when ignoring delays), compared to 2/10 for GPUs. Also note that the GPU computes metrics on the training set, so it should technically be slower.
Also, IPUs seem to suffer from larger models. When trying to fit a model of 42M parameters instead of 27M, even reducing the batch size to ~10 could not fit in memory, compared to bz=50 with the 27M-parameter model. This is compared to the 15,000 molecules that could fit on the GPU for the 27M model.
To try to mimic the batch size of 6,000 from GPUs, I tried using gradient accumulation over 375 steps with bz=20, but this slows down the computation (just a bit) and increases the memory consumption.
I used deviceIterations=4.
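For context, these two knobs are configured through poptorch's `Options` object. A minimal sketch with the values reported above, assuming the poptorch API of that time (the Goli model/dataset wiring is omitted and the variable names are placeholders):

```python
import poptorch

# Settings described above: micro-batch 20, 375 gradient-accumulation
# steps (effective batch 20 * 375 = 7500 per weight update), and 4
# device iterations per host <-> IPU interaction.
opts = poptorch.Options()
opts.deviceIterations(4)
opts.Training.gradientAccumulation(375)

# Placeholders: `dataset`, `model`, and `optimizer` stand in for the
# actual Goli objects.
# train_loader = poptorch.DataLoader(opts, dataset, batch_size=20, shuffle=True)
# training_model = poptorch.trainingModel(model, options=opts, optimizer=optimizer)
```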
In summary, there seems to still be work needed to match, and eventually surpass, the performance of GPUs. I don't have the pop-vision profile to share. I'll generate one and share it tomorrow.
What's your experience with TensorFlow? Do you have faster computation than GPUs? Do you still expect the 2-3x improvement that was pitched a few months earlier?
In my experiments, the time between the steps is low, meaning that the `deviceIterations` is not the bottleneck. See this image

However, most of the IPU time is spent waiting with `IPUSync::waitMarkCountIsLessEqualThan`, see this image

| 1.0 | non_main | 0 |
5,034 | 25,837,299,349 | IssuesEvent | 2022-12-12 20:45:15 | ipfs/kubo | https://api.github.com/repos/ipfs/kubo | closed | gateway/dir-index-html: switch dir listing sizes to Tsize | kind/enhancement topic/gateway topic/perf need/triage need/analysis need/maintainers-input | > This was inspired by https://github.com/ipfs/go-ipfs/issues/8178, https://github.com/ipfs/go-ipfs/issues/8455
go-ipfs 0.13 shipped with https://github.com/ipfs/go-ipfs/pull/8853 which introduced a band-aid where we hide the "size" column in big directories.
This allows us to skip child block resolution for directories bigger than 100 items, making the entire thing load really fast.
Sadly, directories smaller than 100 items are still slow.
They load slower than a directory with 101 items.
## Why showing size and type is expensive
The root node of every UnixFS DAG includes information about node type and the size of raw data inside of it (without metadata).
It also has links to other DAGs representing files and directories, and the total size of DAGs they represent (data + metadata)
The "size" on directory listing is the data without metadata.
To know the exact size and type of items in a directory listing, every item triggers additional block fetch, which baloons the time it takes to return response with a directory listing.
## Proposed Change
Replace "size" based on raw dfata with "DAG size" based on `Tsize` value already present in the root UnixFS node (see [logical format](https://ipld.io/specs/codecs/dag-pb/spec/#logical-format)). The interface will look the same.
> 
### What is user benefit?
Every directory will load fast, as soon as the root UnixFS node is available.
This makes an extreme difference on the first load, when the directory is not present in local cache.
Loading `bafybeiggvykl7skb2ndlmacg2k5modvudocffxjesexlod2pfvg5yhwrqm` (10k items)
will be as fast as a directory with 100.
### Won't this be causing issues?
These values are provided in the HTML dir listing only for quick eyeballing, and the difference between raw data and `Tsize` will usually be small enough to not impact this purpose:
- small files that fit into a single raw block will have a `Tsize` equal to their data size (a raw block carries no metadata)
- a big file's size won't be significantly impacted by IPLD metadata
But just to be sure, we will add an on-hover tooltip explaining the value (right now, there is none).
### Is this worth it?
To illustrate, I started a new, empty repo each time, and listed a directory with 1864 items.
go-ipfs 0.12 (which fetched every UnixFS child block to read the size) took nearly 3 minutes:
```console
$ time ipfs ls -s --size=true /ipfs/QmdmQXB2mzChmMeKY47C43LxUdg1NDJ5MWcKMKxDu7RgQm
0.59s user 0.13s system 0% cpu 2:37.03 total
```
Same directory, but listed with the change to `Tsize`:
```console
$ time ipfs dag get --output-codec=dag-json QmdmQXB2mzChmMeKY47C43LxUdg1NDJ5MWcKMKxDu7RgQm
0.44s user 0.33s system 126% cpu 0.611 total
```
From nearly 3 minutes to under 1s. I'd say worth it. | True | gateway/dir-index-html: switch dir listing sizes to Tsize - > This was inspired by https://github.com/ipfs/go-ipfs/issues/8178, https://github.com/ipfs/go-ipfs/issues/8455
go-ipfs 0.13 shipped with https://github.com/ipfs/go-ipfs/pull/8853 which introduced a band-aid where we hide the "size" column in big directories.
This allows us to skip child block resolution for directories bigger than 100 items, making the entire thing load really fast.
Sadly, directories smaller than 100 are still slow.
They load slower than a directory with 101 items.
## Why showing size and type is expensive
The root node of every UnixFS DAG includes information about node type and the size of raw data inside of it (without metadata).
It also has links to other DAGs representing files and directories, and the total size of the DAGs they represent (data + metadata).
The "size" on the directory listing is the data without metadata.
To know the exact size and type of items in a directory listing, every item triggers an additional block fetch, which balloons the time it takes to return a response with a directory listing.
## Proposed Change
Replace "size" based on raw dfata with "DAG size" based on `Tsize` value already present in the root UnixFS node (see [logical format](https://ipld.io/specs/codecs/dag-pb/spec/#logical-format)). The interface will look the same.
> 
### What is the user benefit?
Every directory will load fast, as soon as the root UnixFS node is available.
This makes an extreme difference on the first load, when the directory is not present in the local cache.
Loading `bafybeiggvykl7skb2ndlmacg2k5modvudocffxjesexlod2pfvg5yhwrqm` (10k items)
will be as fast as a directory with 100 items.
### Won't this cause issues?
These values are provided in the HTML dir listing only for quick eyeballing, and the difference between raw data and `Tsize` will usually be small enough to not impact this purpose:
- small files that fit into a single raw block will have a `Tsize` equal to their raw size
- the size of big files won't be significantly impacted by IPLD metadata
But just to be sure, we will add an on-hover tooltip explaining the value (right now, there is none).
### Is this worth it?
To illustrate, I started a new, empty repo each time, and listed a directory with 1864 items.
go-ipfs 0.12 (which fetched every UnixFS child block to read the size) took nearly 3 minutes:
```console
$ time ipfs ls -s --size=true /ipfs/QmdmQXB2mzChmMeKY47C43LxUdg1NDJ5MWcKMKxDu7RgQm
0.59s user 0.13s system 0% cpu 2:37.03 total
```
Same directory, but listed with the change to `Tsize`:
```console
$ time ipfs dag get --output-codec=dag-json QmdmQXB2mzChmMeKY47C43LxUdg1NDJ5MWcKMKxDu7RgQm
0.44s user 0.33s system 126% cpu 0.611 total
```
From nearly 3 minutes to under 1s. I'd say worth it. | main | gateway dir index html switch dir listing sizes to tsize this was inspired by go ipfs shipped with which introduced a band aid where we hide size column in big directories this allows us to skip child block resolution for directories bigger than items making the entire thing load really fast sadly directories smaller than are still slow they load slower than a directory with items why showing size and type is expensive the root node of every unixfs dag includes information about node type and the size of raw data inside of it without metadata it also has links to other dags representing files and directories and the total size of dags they represent data metadata the size on directory listing is the data without metadata to know the exact size and type of items in a directory listing every item triggers additional block fetch which balloons the time it takes to return a response with a directory listing proposed change replace size based on raw data with dag size based on tsize value already present in the root unixfs node see the interface will look the same what is user benefit every directory will load fast as soon as the root unixfs node is available this makes an extreme difference on the first load when the directory is not present in local cache loading items will be as fast as a directory with won t this be causing issues these values are provided in html dir listing only for quick eyeballing and the difference between raw data and tsize will usually be small enough to not impact this purpose small files that fit into a single raw block will have big file size won t be significantly impacted by ipld metadata but just to be sure we will add on hover tooltip explanation of the value right now there is none is this worth it to illustrate i started a new empty repo each time and listed a directory with items go ipfs which fetched every unixfs child block to read the size took nearly minutes console time ipfs ls s size true ipfs user system cpu total same directory but listed with change to tsize console time ipfs dag get output codec dag json user system cpu total from nearly minutes to under i d say worth it | 1
120,549 | 15,779,045,623 | IssuesEvent | 2021-04-01 08:21:28 | xezon/GeneralsRankedMaps | https://api.github.com/repos/xezon/GeneralsRankedMaps | opened | [Candidate 1v1 Egyptian Oasis ZH v1] Structures in map corners look bad | design | **Issue Description**
Structures in map corners look bad.
The dam does not look good because the terrain beneath it looks strange and the black faded mountains do not match.
The pyramids and sphinx do not look good, because they extend too far into the black map border.
**Expected behavior**
Map corners are beautifully crafted.
**Screenshots**



| 1.0 | [Candidate 1v1 Egyptian Oasis ZH v1] Structures in map corners look bad - **Issue Description**
Structures in map corners look bad.
The dam does not look good because the terrain beneath it looks strange and the black faded mountains do not match.
The pyramids and sphinx do not look good, because they extend too far into the black map border.
**Expected behavior**
Map corners are beautifully crafted.
**Screenshots**



| non_main | structures in map corners look bad issue description structures in map corners look bad the dam does not look good because the terrain beneath it looks strange and the black faded mountains do not match the pyramids and sphinx do not look good because they extent too far into the black map border expected behavior map corners are beautifully crafted screenshots | 0 |
48,061 | 2,990,139,880 | IssuesEvent | 2015-07-21 07:14:35 | jayway/rest-assured | https://api.github.com/repos/jayway/rest-assured | closed | Add support for get(), post(), put() etc | enhancement imported Priority-Medium | _From [johan.ha...@gmail.com](https://code.google.com/u/105676376875942041029/) on November 27, 2013 12:46:24_
Add support for HTTP methods without any parameters. For example, given that
RestAssured.baseUri = "http://myservice.com/something"; it would be useful to be able to do
Response r = get();
This would call the path specified in the baseUri.
Calling get() without having a base path or uri set should call localhost:8080.
_Original issue: http://code.google.com/p/rest-assured/issues/detail?id=279_ | 1.0 | Add support for get(), post(), put() etc - _From [johan.ha...@gmail.com](https://code.google.com/u/105676376875942041029/) on November 27, 2013 12:46:24_
Add support for HTTP methods without any parameters. For example, given that
RestAssured.baseUri = "http://myservice.com/something"; it would be useful to be able to do
Response r = get();
This would call the path specified in the baseUri.
Calling get() without having a base path or uri set should call localhost:8080.
_Original issue: http://code.google.com/p/rest-assured/issues/detail?id=279_ | non_main | add support for get post put etc from on november add support for http methods without any parameters for example given that restassured baseuri it would be useful to be able to do response r get this would call the path specified in the baseuri calling get without having a base path or uri set should call localhost original issue | 0
147,933 | 23,294,360,815 | IssuesEvent | 2022-08-06 10:18:57 | zuri-training/Col_Films_Proj_Team_113 | https://api.github.com/repos/zuri-training/Col_Films_Proj_Team_113 | opened | Logo design brief | design | I was assigned as the lead for the team's logo design brief; here is my research outcome for the logo design brief | 1.0 | Logo design brief - I was assigned as the lead for the team's logo design brief; here is my research outcome for the logo design brief | non_main | logo design brief i was assigned as the lead for the team s logo design brief here is my research outcome for the logo design brief | 0
592 | 4,086,551,591 | IssuesEvent | 2016-06-01 06:09:37 | tgstation/tgstation | https://api.github.com/repos/tgstation/tgstation | closed | orbit() is undocumented | FUCK Maintainability - Hinders improvements - Not a bug | orbit() doesn't explain its vars well, or their measurements.
Like radius, is this in pixels? tiles? inches? distance between earth and the sun? you'll never know. | True | orbit() is undocumented - orbit() doesn't explain its vars well, or their measurements.
Like radius, is this in pixels? tiles? inches? distance between earth and the sun? you'll never know. | main | orbit is undocumented orbit doesn t explain its vars well or their measurements like radius is this in pixels tiles inches distance between earth and the sun you ll never know | 1 |
55,364 | 23,458,879,302 | IssuesEvent | 2022-08-16 11:24:18 | hashicorp/terraform-provider-aws | https://api.github.com/repos/hashicorp/terraform-provider-aws | closed | EKS: allow EksNodeGroup datasource to use NodeGroupNamePrefix as alternative to NodeGroupName | enhancement service/eks needs-triage | ### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
### Description
`node_group_name_prefix` can be used in place of `node_group_name` to define an `aws_eks_node_group` resource. However, the data source currently only supports using `node_group_name` in addition to `cluster_name` to fetch an EKS Node Group.
### New or Affected Resource(s)
Data sources:
* aws_eks_node_group
### Potential Terraform Configuration
```terraform
data "eks_node_group" "by_name" {
cluster_name = "foo"
node_group_name = "foobar-nodes"
}
data "eks_node_group" "by_name_prefix" {
cluster_name = "foo"
node_group_name = "foobar-"
}
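For comparison, a rough sketch of how the prefix lookup could behave at the API level; boto3 is used here only for brevity (the provider itself is written in Go), and the function name is illustrative:
```python
import boto3

def nodegroup_by_prefix(cluster_name: str, prefix: str) -> str:
    """Return the one node group whose name starts with the given prefix."""
    eks = boto3.client("eks")
    names = eks.list_nodegroups(clusterName=cluster_name)["nodegroups"]
    matches = [name for name in names if name.startswith(prefix)]
    if len(matches) != 1:
        raise ValueError(f"expected exactly one match, got {matches!r}")
    return matches[0]

# e.g. nodegroup_by_prefix("foo", "foobar-")
```
Pagination of `list_nodegroups` is ignored here to keep the sketch short.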
| 1.0 | EKS: allow EksNodeGroup datasource to use NodeGroupNamePrefix as alternative to NodeGroupName - ### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
### Description
`node_group_name_prefix` can be used in place of `node_group_name` to define an `aws_eks_node_group` resource. However, the data source currently only supports using `node_group_name` in addition to `cluster_name` to fetch an EKS Node Group.
### New or Affected Resource(s)
Data sources:
* aws_eks_node_group
### Potential Terraform Configuration
```terraform
data "eks_node_group" "by_name" {
cluster_name = "foo"
node_group_name = "foobar-nodes"
}
data "eks_node_group" "by_name_prefix" {
cluster_name = "foo"
node_group_name = "foobar-"
}
| non_main | eks allow eksnodegroup datasource to use nodegroupnameprefix as alternative to nodegroupname community note please vote on this issue by adding a 👍 to the original issue to help the community and maintainers prioritize this request please do not leave or other comments that do not add relevant new information or questions they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment description node group name prefix can be used in place of node group name to define an aws eks node group resource however the data source currently only supports using node group name in addition to cluster name to fetch an eks node group new or affected resource s data sources aws eks node group potential terraform configuration terraform data aws eks node group by name cluster name foo node group name foobar nodes data aws eks node group by name prefix cluster name foo node group name prefix foobar | 0
8,745 | 6,648,022,288 | IssuesEvent | 2017-09-28 07:43:46 | tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow | reopened | TF_AddGradients API returns wrong result when multiple outputs specified | stat:awaiting tensorflower type:bug/performance | ### System information
Darwin Mac-Admin.local 15.6.0 Darwin Kernel Version 15.6.0: Thu Jun 23 18:25:34 PDT 2016; root:xnu-3248.60.10~1/RELEASE_X86_64 x86_64
Mac OS X 10.11.6
### Describe the problem
Hi. I've added a unit test for TF_AddGradients API (see code below) which is similar to
[this python test](https://github.com/tensorflow/tensorflow/blob/ca3bc0f1c2f917cf6e7c49d58f5ec604a9af9367/tensorflow/python/ops/gradients_test.py#L337)
In the test, I provide two outputs:
y[0] = x[0] ** 2
y[1] = x[0] ** 8
where the input x[0] = 3.
According to the [documentation](https://github.com/tensorflow/tensorflow/blob/03619fab3f4dd6f28b67418455a953b0fccdd9bf/tensorflow/c/c_api.h#L1018), the result should be calculated by the formula d(y_1 + y_2 + ...)/dx_1 and be equal to 17502, but the API prints 6.
What am I missing? Thanks.
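As a sanity check of the expected number, independent of the C API, plain Python reproduces both values; notably, the observed 6 equals d(y[0])/dx alone (2 * x at x = 3), which hints that the second output's gradient is being dropped:
```python
x = 3
dy0_dx = 2 * x          # d(x**2)/dx
dy2_dx = 8 * x ** 7     # d(x**8)/dx
print(dy0_dx + dy2_dx)  # 17502, the value the test asserts
print(dy0_dx)           # 6, the value the API actually returned
```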
### Source code / logs
```
TF_Status* s = TF_NewStatus();
TF_Graph* graph = TF_NewGraph();
const int ny = 2;
const int nx = 1;
TF_Output inputs[nx];
TF_Output outputs[ny];
TF_Output grad_outputs[nx];
TF_Operation* ph0 = Placeholder(graph, s);
TF_Operation* y0 = Square(graph, s, ph0, "Square0");
TF_Operation* y1 = Square(graph, s, y0, "Square1");
TF_Operation* y2 = Square(graph, s, y1, "Square2");
inputs[0] = TF_Output{ph0, 0};
outputs[0] = TF_Output{y0, 0};
outputs[1] = TF_Output{y2, 0};
TF_AddGradients(graph, outputs, ny, inputs, nx, nullptr, s, grad_outputs);
EXPECT_EQ(TF_OK, TF_GetCode(s)) << TF_Message(s);
std::unique_ptr<CSession> csession(new CSession(graph, s));
std::vector<TF_Output> grad_outputs_vec;
grad_outputs_vec.assign(grad_outputs, grad_outputs + nx);  // one gradient output per input
csession->SetInputs({{ph0, Int32Tensor(3)}});
csession->SetOutputs(grad_outputs_vec);
csession->Run(s);
ASSERT_EQ(TF_OK, TF_GetCode(s)) << TF_Message(s);
TF_Tensor* out0 = csession->output_tensor(0);
int* data = static_cast<int*>(TF_TensorData(out0));
ASSERT_EQ(17502, *data);
Failure
Expected: 17502
To be equal to: *data
Which is: 6
```
| True | TF_AddGradients API returns wrong result when multiple outputs specified - ### System information
Darwin Mac-Admin.local 15.6.0 Darwin Kernel Version 15.6.0: Thu Jun 23 18:25:34 PDT 2016; root:xnu-3248.60.10~1/RELEASE_X86_64 x86_64
Mac OS X 10.11.6
### Describe the problem
Hi. I've added a unit test for TF_AddGradients API (see code below) which is similar to
[this python test](https://github.com/tensorflow/tensorflow/blob/ca3bc0f1c2f917cf6e7c49d58f5ec604a9af9367/tensorflow/python/ops/gradients_test.py#L337)
In the test, I provide two outputs:
y[0] = x[0] ** 2
y[1] = x[0] ** 8
where the input x[0] = 3.
According to the [documentation](https://github.com/tensorflow/tensorflow/blob/03619fab3f4dd6f28b67418455a953b0fccdd9bf/tensorflow/c/c_api.h#L1018), the result should be calculated by the formula d(y_1 + y_2 + ...)/dx_1 and be equal to 17502, but the API prints 6.
What am I missing? Thanks.
### Source code / logs
```
TF_Status* s = TF_NewStatus();
TF_Graph* graph = TF_NewGraph();
const int ny = 2;
const int nx = 1;
TF_Output inputs[nx];
TF_Output outputs[ny];
TF_Output grad_outputs[nx];
TF_Operation* ph0 = Placeholder(graph, s);
TF_Operation* y0 = Square(graph, s, ph0, "Square0");
TF_Operation* y1 = Square(graph, s, y0, "Square1");
TF_Operation* y2 = Square(graph, s, y1, "Square2");
inputs[0] = TF_Output{ph0, 0};
outputs[0] = TF_Output{y0, 0};
outputs[1] = TF_Output{y2, 0};
TF_AddGradients(graph, outputs, ny, inputs, nx, nullptr, s, grad_outputs);
EXPECT_EQ(TF_OK, TF_GetCode(s)) << TF_Message(s);
std::unique_ptr<CSession> csession(new CSession(graph, s));
std::vector<TF_Output> grad_outputs_vec;
grad_outputs_vec.assign(grad_outputs, grad_outputs + nx);  // one gradient output per input
csession->SetInputs({{ph0, Int32Tensor(3)}});
csession->SetOutputs(grad_outputs_vec);
csession->Run(s);
ASSERT_EQ(TF_OK, TF_GetCode(s)) << TF_Message(s);
TF_Tensor* out0 = csession->output_tensor(0);
int* data = static_cast<int*>(TF_TensorData(out0));
ASSERT_EQ(17502, *data);
Failure
Expected: 17502
To be equal to: *data
Which is: 6
```
| non_main | tf addgradients api returns wrong result when multiple outputs specified system information darwin mac admin local darwin kernel version thu jun pdt root xnu release mac os x describe the problem hi i ve added a unit test for tf addgradients api see code below which is similar to in the test i provide two outputs y x y y where input x according to the result should be calculated by formula d y y dx and be equal to but the api prints what am i missing thanks source code logs tf status s tf newstatus tf graph graph tf newgraph const int ny const int nx tf output inputs tf output outputs tf output grad outputs tf operation placeholder graph s tf operation square graph s tf operation square graph s tf operation square graph s inputs tf output outputs tf output outputs tf output tf addgradients graph outputs ny inputs nx nullptr s grad outputs expect eq tf ok tf getcode s tf message s std unique ptr csession new csession graph s std vector grad outputs vec grad outputs vec assign grad outputs grad outputs csession setinputs csession setoutputs grad outputs vec csession run s assert eq tf ok tf getcode s tf message s tf tensor csession output tensor int data static cast tf tensordata assert eq data failure expected to be equal to data which is | 0 |
2,393 | 8,499,729,161 | IssuesEvent | 2018-10-29 17:54:05 | backdrop-ops/contrib | https://api.github.com/repos/backdrop-ops/contrib | closed | Maintainer change request for Back To Top | Maintainer change | As per https://github.com/backdrop-contrib/back_to_top/issues/5, I'd like to take over the maintainership of the [Back To Top](https://github.com/backdrop-contrib/back_to_top) module. | True | Maintainer change request for Back To Top - As per https://github.com/backdrop-contrib/back_to_top/issues/5, I'd like to take over the maintainership of the [Back To Top](https://github.com/backdrop-contrib/back_to_top) module. | main | maintainer change request for back to top as per i d like to take over the maintainership of the module | 1 |
179,021 | 14,688,986,311 | IssuesEvent | 2021-01-02 06:40:40 | arubiales/scikit-route | https://api.github.com/repos/arubiales/scikit-route | opened | Add Windows support | Hard documentation enhancement time high | Currently, Scikit Route is only available on Linux.
Detect a Windows OS and change `setup.py`, paths, etc. | 1.0 | Add Windows support - Currently, Scikit Route is only available on Linux.
Detect a Windows OS and change `setup.py`, paths, etc. | non_main | add windows support currently scikit route is only available on linux detect a windows os and change setup py paths etc | 0
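For the "Add Windows support" request above, the OS detection itself is a one-liner; a minimal, hedged sketch (the data directory name and layout are illustrative assumptions):
```python
import platform
from pathlib import Path

IS_WINDOWS = platform.system() == "Windows"

# pathlib keeps path handling portable, so hard-coded POSIX paths
# can become Path objects that work on both systems.
data_dir = Path.home() / ".scikit-route"  # illustrative location

if IS_WINDOWS:
    # Windows-specific tweaks (e.g. compiler flags in setup.py) go here.
    pass
```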
2,779 | 9,962,845,142 | IssuesEvent | 2019-07-07 18:00:57 | chocolatey-community/chocolatey-package-requests | https://api.github.com/repos/chocolatey-community/chocolatey-package-requests | closed | RFP - asciinema | Status: Available For Maintainer(s) | asciinema is a command-line utility to record terminal sessions and share them.
https://asciinema.org/
| True | RFP - asciinema - asciinema is a command-line utility to record terminal sessions and share them.
https://asciinema.org/
| main | rfp asciinema asciinema is a command line utility to record terminal sessions and share them | 1 |
287,233 | 8,805,891,239 | IssuesEvent | 2018-12-26 23:03:13 | osulp/Scholars-Archive | https://api.github.com/repos/osulp/Scholars-Archive | opened | Replace conference_section predicate on all works needed | Content Priority: High | ### Descriptive summary
Once #1781 is completed, find all works with the old, wrong predicate for conference_section: `https://w2id.org/scholarlydata/ontology/conference-ontology.owl#Track`
Can't rely on just Solr.
Replace with correct predicate on those works and save.
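For illustration only, an rdflib sketch of the predicate swap; this is not the app's actual Fedora migration code, and the correct replacement predicate from #1781 is left as a placeholder:
```python
from rdflib import Graph, URIRef

OLD = URIRef("https://w2id.org/scholarlydata/ontology/conference-ontology.owl#Track")
NEW = URIRef("https://example.org/correct-predicate")  # placeholder for the #1781 predicate

g = Graph()
g.parse("work.ttl", format="turtle")  # one work's metadata; file name illustrative

# Move every statement off the wrong predicate onto the correct one.
for subject, obj in list(g.subject_objects(OLD)):
    g.remove((subject, OLD, obj))
    g.add((subject, NEW, obj))

g.serialize("work.ttl", format="turtle")
```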
### Related work
#1781 | 1.0 | Replace conference_section predicate on all works needed - ### Descriptive summary
Once #1781 is completed, find all works with the old, wrong predicate for conference_section: `https://w2id.org/scholarlydata/ontology/conference-ontology.owl#Track`
Can't rely on just Solr.
Replace with correct predicate on those works and save.
### Related work
#1781 | non_main | replace conference section predicate on all works needed descriptive summary once is completed find all works with the old wrong predicate for conference section can t rely on just solr replace with correct predicate on those works and save related work | 0 |
4,432 | 23,041,289,239 | IssuesEvent | 2022-07-23 07:09:15 | danbugs/simple_smash_stats | https://api.github.com/repos/danbugs/simple_smash_stats | opened | implement player feature | core maintainer issue epic | The player feature will allow obtaining player results provided a gamer tag. | True | implement player feature - The player feature will allow obtaining player results provided a gamer tag. | main | implement player feature the player feature will allow obtaining player results provided a gamer tag | 1 |
2,835 | 8,378,302,109 | IssuesEvent | 2018-10-06 12:49:47 | ryota-murakami/blog | https://api.github.com/repos/ryota-murakami/blog | closed | JS build stack migration | current-scope🔎 re-architecture🚀 | What I want to do
- [ ] Stop using CoffeeScript and write in ES2018
- [ ] Stop managing JS modules with Ruby gems and migrate to npm
- [ ] Keep Turbolinks
It's not that I want to use webpack, but I'll consider it if the requirements can't be met with the Asset Pipeline. | 1.0 | JS build stack migration - What I want to do
- [ ] Stop using CoffeeScript and write in ES2018
- [ ] Stop managing JS modules with Ruby gems and migrate to npm
- [ ] Keep Turbolinks
It's not that I want to use webpack, but I'll consider it if the requirements can't be met with the Asset Pipeline. | non_main | js build stack migration what i want to do stop managing js modules with ruby gems and migrate to npm keep turbolinks it s not that i want to use webpack but i ll consider it if the requirements can t be met with the asset pipeline | 0
2,531 | 8,657,344,803 | IssuesEvent | 2018-11-27 21:03:00 | arcticicestudio/nord-docs | https://api.github.com/repos/arcticicestudio/nord-docs | closed | Netlify Configuration | context-workflow scope-configurability scope-maintainability type-feature | <p align="center"><img src="https://user-images.githubusercontent.com/7836623/48661237-35d1a000-ea6f-11e8-8e16-f48948969be6.png" width="60%" /></p>
> Related epics: #46
This issue documents a part of the implementation of the [hosting & continuous deployment concept][gh-46] with [Netlify's configuration file][netlify-docs-toml-ref].
See the “Hosting” and “Continuous Deployment” (sub)sections for more details about the architecture.
## Tasks
- Implement Netlify's `netlify.toml` configuration file
- [x] Define the `command` for the production `[build]` section
- [x] Define the `publish` path for the production `[build]` section
[gh-46]: https://github.com/arcticicestudio/nord-docs/issues/46
[netlify-docs-toml-ref]: https://www.netlify.com/docs/netlify-toml-reference | True | Netlify Configuration - <p align="center"><img src="https://user-images.githubusercontent.com/7836623/48661237-35d1a000-ea6f-11e8-8e16-f48948969be6.png" width="60%" /></p>
> Related epics: #46
This issue documents a part of the implementation of the [hosting & continuous deployment concept][gh-46] with [Netlify's configuration file][netlify-docs-toml-ref].
See the “Hosting” and “Continuous Deployment” (sub)sections for more details about the architecture.
## Tasks
- Implement Netlify's `netlify.toml` configuration file
- [x] Define the `command` for the production `[build]` section
- [x] Define the `publish` path for the production `[build]` section
[gh-46]: https://github.com/arcticicestudio/nord-docs/issues/46
[netlify-docs-toml-ref]: https://www.netlify.com/docs/netlify-toml-reference | main | netlify configuration related epics this issue documents a part of the implementation of the with see the “hosting” and “continuous deployment” sub sections for more details about the architecture tasks implement netlify s netlify toml configuration file define the command for the production section define the publish path for the production section | 1 |
3,712 | 15,253,806,101 | IssuesEvent | 2021-02-20 09:19:14 | OpenRefine/OpenRefine | https://api.github.com/repos/OpenRefine/OpenRefine | closed | Add checksums to releases | enhancement maintainability | We could consider publishing checksums of our released artifacts.
I would normally expect to provide checksums for the entire archives, not the individual binaries, although that is what was requested today on the mailing list. Given that our binaries are just launchers and all the code resides in class files, I am not sure what guarantees binary checksums give.
This is related to the fact that our packages are unsigned, which displays scary messages to Mac users (and Windows too, I think) when they install the software. The process (and cost) to fix this depends on the target operating system.
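For reference, producing the checksums themselves is only a few lines; artifact names below are illustrative:
```python
import hashlib
from pathlib import Path

def sha256sum(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Illustrative release artifacts; output follows the usual SHASUMS format.
for name in ["openrefine-win.zip", "openrefine-mac.dmg", "openrefine-linux.tar.gz"]:
    path = Path(name)
    print(f"{sha256sum(path)}  {path.name}")
```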
https://groups.google.com/forum/#!topic/openrefine/1g957CEbtKI | True | Add checksums to releases - We could consider publishing checksums of our released artifacts.
I would normally expect to provide checksums for the entire archives, not the individual binaries, although that is what was requested today on the mailing list. Given that our binaries are just launchers and all the code resides in class files, I am not sure what guarantees binary checksums give.
This is related to the fact that our packages are unsigned, which displays scary messages to Mac users (and Windows too, I think) when they install the software. The process (and cost) to fix this depends on the target operating system.
https://groups.google.com/forum/#!topic/openrefine/1g957CEbtKI | main | add checksums to releases we could consider publishing checksums of our released artifacts i would normally expect to provide checksums for the entire archives not the individual binaries although that is what was requested today on the mailing list given that our binaries are just launchers and all the code resides in class files i am not sure what guarantees binary checksums give this is related to the fact that our packages are unsigned which displays scary messages to mac users and windows too i think when they install the software the process and cost to fix this depends on the target operating system | 1 |
2,881 | 10,319,566,667 | IssuesEvent | 2019-08-30 17:53:29 | backdrop-ops/contrib | https://api.github.com/repos/backdrop-ops/contrib | closed | Request : Node Detail View module port | Maintainer application Port in progress | Hi there, I have a D7 module called Node Detail View that I am very much interested in porting to Backdrop CMS. I think this could be a good UI for admins/content producers who manage this CMS. I've just started to port my module.
Would love to port my other modules as well later.
Module link - https://www.drupal.org/project/node_detail_view
My Github - https://github.com/gauravjeet/
Module Backdrop branch - https://github.com/gauravjeet/node_detail_view/tree/backdrop
Let me know if there is anything I missed.
| True | Request : Node Detail View module port - Hi there, I have a D7 module called Node Detail View that I am very much interested in porting to Backdrop CMS. I think this could be a good UI for admins/content producers who manage this CMS. I've just started to port my module.
Would love to port my other modules as well later.
Module link - https://www.drupal.org/project/node_detail_view
My Github - https://github.com/gauravjeet/
Module Backdrop branch - https://github.com/gauravjeet/node_detail_view/tree/backdrop
Let me know if there is anything I missed.
| main | request node detail view module port hi there i have a module called node detail view that i am very much interested to port to backdrop cms i think this could be a good ui for admins content producer who manage this cms i ve just started to port my module would love to port my other modules as well later module link my github module backdrop branch let me know if there is anything i missed out | 1 |
409,402 | 11,961,892,047 | IssuesEvent | 2020-04-05 10:14:49 | InfiniteFlightAirportEditing/Airports | https://api.github.com/repos/InfiniteFlightAirportEditing/Airports | reopened | WAYY-Mozes Kilangin Airport-PAPUA-INDONESIA | Being Redone Low Priority | Mozes Kilangin Airport, Timika, Indonesia
will do this..... | 1.0 | WAYY-Mozes Kilangin Airport-PAPUA-INDONESIA - Mozes Kilangin Airport, Timika, Indonesia
will do this..... | non_main | wayy mozes kilangin airport papua indonesia mozes kilangin airport timika indonesia will do this | 0 |
5,783 | 30,642,325,638 | IssuesEvent | 2023-07-24 23:36:32 | microsoft/mu_plus | https://api.github.com/repos/microsoft/mu_plus | closed | [Feature]: UefiTestingPkg binary releases | state:stale state:needs-triage state:needs-maintainer-feedback type:feature-request urgency:low | ### Feature Overview
UefiTestingPkg provides various conformance tests for the UEFI environment. These are at least mostly generic and may be used on any UEFI implementation, including such not based on Project Mu. However, its source code has dependencies on Project Mu packages and thus cannot easily be built without a separate development environment.
For this reason, binary releases would greatly improve the usability of UefiTestingPkg outside of Project Mu platforms. Unfortunately, UefiTestingPkg.dsc is explicitly not designed to produce usable binaries as of now:
https://github.com/microsoft/mu_plus/blob/79dd13a1088fcc4dab256951bb6b7ad7b10f1c21/UefiTestingPkg/UefiTestingPkg.dsc#L18-L20
Many of the dummy library classes are effectively generic. These include:
- MemoryAllocationLib
- DebugPrintErrorLevelLib
- PcdLib
One more library class could be generic, but currently sometimes may not be [1]:
- HobLib
The one library class that cannot really be generic as of now is DebugLib. While this should usually not pose a problem, the default instance for the UnitTestResultReportLib library class actually is UnitTestResultReportLibDebugLib:
https://github.com/microsoft/mu_plus/blob/79dd13a1088fcc4dab256951bb6b7ad7b10f1c21/UefiTestingPkg/UefiTestingPkg.dsc#L58
This means that there is no trivial way to build universal binary releases.
[1] In case a non-Null DebugLib is required, ArmVirtPkg has an ArmVirtDxeHobLib that does not depend on DebugLib. The reason is the risk of circular dependencies, such as the ArmVirt serial code that depends on HobLib:
https://github.com/microsoft/mu_silicon_arm_tiano/blob/922860bc4cb762e229b441de8a4ed2b127070168/ArmVirtPkg/Library/FdtPL011SerialPortLib/FdtPL011SerialPortLib.inf#L25
There is nothing ARM-specific about this approach, and it might as well be a DxeHobLibNoDebug variant (which actually could share its core code with the regular DxeHobLib).
### Solution Overview
As I don't have a good overview of everyone's workloads and use cases, I can't propose a solution myself. Potential approaches I can think of are:
- Using UnitTestResultReportLibConOut for binary builds. This would trivially be universal and require only a small patch to UefiTestingPkg.dsc. However, I'm not sure parsing output sent to ConOut is common practice and well-supported. I'm not aware that something like this exists, but it may require something to re-route ConOut outputs to a variety of targets, such as a serial port (which we currently use for our CI, as it can easily be routed to stdout via QEMU).
- Rather than hooking ConOut, a test result protocol can be defined by the platform ahead of launching the test application to route the test result to any arbitrary target. This would require a new instance of UnitTestResultReportLib and a driver to expose said protocol.
- The protocol could be more "generic" in the sense that it may be used for other kinds of output. For OpenCorePkg, we have a centralized logging protocol that is also utilized by DebugLib:
https://github.com/acidanthera/OpenCorePkg/blob/2e93c954023df44d7d68c4d896c32a5d07439b3c/Include/Acidanthera/Library/OcLogAggregatorLib.h
This leaves the OpenCore application in charge of the logging targets and the OpenCore drivers consuming its logging protocol. We use this approach to decide where to route logging output based on a user configuration file, which may include multiple targets at once. Naturally, a new DebugLib instance would need to be introduced. However, in this scenario, UnitTestResultReportLibDebugLib could be re-used as-is.
### Alternatives Considered
_No response_
### Urgency
Low
### Are you going to implement the feature request?
Someone else needs to implement the feature
### Do you need maintainer feedback?
Maintainer feedback requested
### Anything else?
_No response_ | True | [Feature]: UefiTestingPkg binary releases - ### Feature Overview
UefiTestingPkg provides various conformance tests for the UEFI environment. These are at least mostly generic and may be used on any UEFI implementation, including such not based on Project Mu. However, its source code has dependencies on Project Mu packages and thus cannot easily be built without a separate development environment.
For this reason, binary releases would greatly improve the usability of UefiTestingPkg outside of Project Mu platforms. Unfortunately, UefiTestingPkg.dsc is explicitly not designed to produce usable binaries as of now:
https://github.com/microsoft/mu_plus/blob/79dd13a1088fcc4dab256951bb6b7ad7b10f1c21/UefiTestingPkg/UefiTestingPkg.dsc#L18-L20
Many of the dummy library classes are effectively generic. These include:
- MemoryAllocationLib
- DebugPrintErrorLevelLib
- PcdLib
One more library class could be generic, but currently sometimes may not be [1]:
- HobLib
The one library class that cannot really be generic as of now is DebugLib. While this should usually not pose a problem, the default instance for the UnitTestResultReportLib library class actually is UnitTestResultReportLibDebugLib:
https://github.com/microsoft/mu_plus/blob/79dd13a1088fcc4dab256951bb6b7ad7b10f1c21/UefiTestingPkg/UefiTestingPkg.dsc#L58
This means that there is no trivial way to build universal binary releases.
[1] In case a non-Null DebugLib is required, ArmVirtPkg has an ArmVirtDxeHobLib that does not depend on DebugLib. The reason is the risk of circular dependencies, such as the ArmVirt serial code that depends on HobLib:
https://github.com/microsoft/mu_silicon_arm_tiano/blob/922860bc4cb762e229b441de8a4ed2b127070168/ArmVirtPkg/Library/FdtPL011SerialPortLib/FdtPL011SerialPortLib.inf#L25
There is nothing ARM-specific about this approach, and it might as well be a DxeHobLibNoDebug variant (which actually could share its core code with the regular DxeHobLib).
### Solution Overview
As I don't have a good overview of everyone's workloads and use cases, I can't propose a solution myself. Potential approaches I can think of are:
- Using UnitTestResultReportLibConOut for binary builds. This would trivially be universal and require only a small patch to UefiTestingPkg.dsc. However, I'm not sure parsing output sent to ConOut is common practice and well-supported. I'm not aware that something like this exists, but it may require something to re-route ConOut outputs to a variety of targets, such as a serial port (which we currently use for our CI, as it can easily be routed to stdout via QEMU).
- Rather than hooking ConOut, a test result protocol can be defined by the platform ahead of launching the test application to route the test result to any arbitrary target. This would require a new instance of UnitTestResultReportLib and a driver to expose said protocol.
- The protocol could be more "generic" in the sense that it may be used for other kinds of output. For OpenCorePkg, we have a centralized logging protocol that is also utilized by DebugLib:
https://github.com/acidanthera/OpenCorePkg/blob/2e93c954023df44d7d68c4d896c32a5d07439b3c/Include/Acidanthera/Library/OcLogAggregatorLib.h
This leaves the OpenCore application in charge of the logging targets and the OpenCore drivers consuming its logging protocol. We use this approach to decide where to route logging output based on a user configuration file, which may include multiple targets at once. Naturally, a new DebugLib instance would need to be introduced. However, in this scenario, UnitTestResultReportLibDebugLib could be re-used as-is.
### Alternatives Considered
_No response_
### Urgency
Low
### Are you going to implement the feature request?
Someone else needs to implement the feature
### Do you need maintainer feedback?
Maintainer feedback requested
### Anything else?
_No response_ | main | uefitestingpkg binary releases feature overview uefitestingpkg provides various conformance tests for the uefi environment these are at least mostly generic and may be used on any uefi implementation including such not based on project mu however its source code has dependencies on project mu packages and thus cannot easily be built without a separate development environment for this reason binary releases would greatly improve the usability of uefitestingpkg outside of project mu platforms unfortunately uefitestingpkg dsc is explicitly not designed to produce usable binaries as of now many of the dummy library classes are effectively generic these include memoryallocationlib debugprinterrorlevellib pcdlib one more library class could be but currently sometimes may not be hoblib the one library class that cannot really be generic as of now is debuglib while this should usually not pose a problem the default instance for the unittestresultreportlib library class actually is unittestresultreportlibdebuglib this means that there is no trivial way to build universal binary releases in case a non null debuglib is required armvirtpkg has an armvirtdxehoblib that does not depend on debuglib the reason is the risk for circular dependencies such as the armvirt serial code that depends on hoblib there is nothing arm specific about this approach and it might as well be dxehoblibnodebug variant which actually could share its core code with the regular dxehoblib solution overview as i don t have a good overview of everyone s workloads and use cases i can t propose a solution myself potential approaches i can think of are using unittestresultreportlibconout for binary builds this would trivially be universal and require only a small patch to uefitestingpkg dsc however i m not sure parsing outputs to conout is common practice and well supported i m not aware something like this exists but it may require something to re route conout outputs to a variety of targets such as a serial port which we currently use for our ci as it can easily be routed to stdout via qemu rather than hooking conout a test result protocol can be defined by the platform ahead of launching the test application to route the test result to any arbitrary target this would require a new instance of unittestresultreportlib and a driver to expose said protocol the protocol could be more generic in the sense that it may be used for other kinds of output for opencorepkg we have a centralized logging protocol that is also utilized by debuglib this leaves the opencore application in charge of the logging targets and the opencore drivers consuming its logging protocol we use this approach to decide where to route logging output based on a user configuration file which may include multiple targets at once naturally a new debuglib instance would need to be introduced however in this scenario unittestresultreportlibdebuglib could be re used as is alternatives considered no response urgency low are you going to implement the feature request someone else needs to implement the feature do you need maintainer feedback maintainer feedback requested anything else no response | 1 |
422,166 | 12,267,376,537 | IssuesEvent | 2020-05-07 10:34:02 | ooni/probe | https://api.github.com/repos/ooni/probe | closed | Discuss the react-native integration plan | discuss ooni/probe-mobile priority/medium research prototype | We agreed we would be discussing how to move forward with the react-native integration.
Relevant to this discussion is also the fact that airbnb stopped using it: https://medium.com/airbnb-engineering/sunsetting-react-native-1868ba28e30a.
We should research prior work and come up with a set of questions we should answer in order to fully evaluate whether it's a good idea to move forward with the plan. | 1.0 | Discuss the react-native integration plan - We agreed we would be discussing how to move forward with the react-native integration.
Relevant to this discussion is also the fact that airbnb stopped using it: https://medium.com/airbnb-engineering/sunsetting-react-native-1868ba28e30a.
We should research prior work and come up with a set of questions we should answer in order to fully evaluate whether it's a good idea to move forward with the plan. | non_main | discuss the react native integration plan we agreed we would be discussing how to move forward with the react native integration relevant to this discussion is also the fact that airbnb stopped using it we should research prior work and come up with a set of questions we should answer in order to fully evaluate whether it s a good idea to move forward with the plan | 0
3,640 | 14,714,322,391 | IssuesEvent | 2021-01-05 11:43:25 | shiruka/shiruka | https://api.github.com/repos/shiruka/shiruka | closed | Fix "similar-code" issue in src/main/java/io/github/shiruka/shiruka/misc/NibbleArray.java | maintainability | Similar blocks of code found in 2 locations. Consider refactoring.
https://codeclimate.com/github/shiruka/shiruka/src/main/java/io/github/shiruka/shiruka/misc/NibbleArray.java#issue_5fca26959cef50000100010c | True | Fix "similar-code" issue in src/main/java/io/github/shiruka/shiruka/misc/NibbleArray.java - Similar blocks of code found in 2 locations. Consider refactoring.
https://codeclimate.com/github/shiruka/shiruka/src/main/java/io/github/shiruka/shiruka/misc/NibbleArray.java#issue_5fca26959cef50000100010c | main | fix similar code issue in src main java io github shiruka shiruka misc nibblearray java similar blocks of code found in locations consider refactoring | 1 |
5,715 | 30,209,494,922 | IssuesEvent | 2023-07-05 11:49:58 | jupyter-naas/awesome-notebooks | https://api.github.com/repos/jupyter-naas/awesome-notebooks | closed | Datetime - Convert timestamp to a datetime object | templates maintainer | This notebook will show how to convert a timestamp to a datetime object in Python. It is useful for organizations that need to convert timestamps to datetime objects for further analysis.
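The conversion the notebook describes is standard library only; a minimal example:
```python
from datetime import datetime, timezone

ts = 1_688_551_798  # example Unix timestamp in seconds

local_dt = datetime.fromtimestamp(ts)                 # local time
utc_dt = datetime.fromtimestamp(ts, tz=timezone.utc)  # timezone-aware UTC

print(local_dt)
print(utc_dt)
```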
| True | Datetime - Convert timestamp to a datetime object - This notebook will show how to convert a timestamp to a datetime object in Python. It is useful for organizations that need to convert timestamps to datetime objects for further analysis.
| main | datetime convert timestamp to a datetime object this notebook will show how to convert a timestamp to a datetime object in python it is useful for organizations that need to convert timestamps to datetime objects for further analysis | 1
816,678 | 30,606,704,452 | IssuesEvent | 2023-07-23 04:50:21 | cpeditor/cpeditor | https://api.github.com/repos/cpeditor/cpeditor | closed | Collapse Function | enhancement medium_priority | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Is this feature provided in the master branch?
- [X] I have checked the changelog
### The problem you are facing
Why can't we collapse a function? If there are multiple if/else statements, I want to collapse them and reopen them when needed.
### Describe the feature you'd like
Collapse a function, if statement, while loop, or for loop.
### Describe alternatives you've considered
_No response_
### Are you willing to contribute to this?
- [ ] I'm ready to contribute to this (and will open a PR soon)
- [ ] I'd like to have a try (with the help of the maintainers)
- [X] No, thanks
### Anything else?
_No response_ | 1.0 | Collapse Function - ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Is this feature provided in the master branch?
- [X] I have checked the changelog
### The problem you are facing
Why can't we collapse a function? If there are multiple if/else statements, I want to collapse them and reopen them when needed.
### Describe the feature you'd like
Collapse a function, if statement, while loop, or for loop.
### Describe alternatives you've considered
_No response_
### Are you willing to contribute to this?
- [ ] I'm ready to contribute to this (and will open a PR soon)
- [ ] I'd like to have a try (with the help of the maintainers)
- [X] No, thanks
### Anything else?
_No response_ | non_main | collapse function is there an existing issue for this i have searched the existing issues is this feature provided in the master branch i have checked the changelog the problem you are facing why can t we collapse a function if there are multiple if else statements i want to collapse them and reopen them when needed describe the feature you d like collapse a function if statement while loop for loop describe alternatives you ve considered no response are you willing to contribute to this i m ready to contribute to this and will open a pr soon i d like to have a try with the help of the maintainers no thanks anything else no response | 0 |
773,110 | 27,146,538,023 | IssuesEvent | 2023-02-16 20:25:36 | thoth-station/micropipenv | https://api.github.com/repos/thoth-station/micropipenv | closed | Key error raised on requirements subcommand when VCS dependency is specified | priority/critical-urgent kind/bug | **Describe the bug**
Given the following Pipfile:
```toml
[[source]]
url = "https://pypi.org/simple"
verify_ssl = true
name = "pypi"
[packages]
selinon = {git = "https://github.com/selinon/selinon"}
[dev-packages]
```
Running the following command:
```
micropipenv --verbose req --only-direct
```
I get the following error:
```python
Traceback (most recent call last):
File "/Users/fridolin.pokorny/git/fridex/selinon/demo-worker/./micropipenv.py", line 1464, in <module>
return_code = main()
File "/Users/fridolin.pokorny/git/fridex/selinon/demo-worker/./micropipenv.py", line 1451, in main
handler(**arguments)
File "/Users/fridolin.pokorny/git/fridex/selinon/demo-worker/./micropipenv.py", line 1301, in requirements
sections = sections or get_requirements_sections(
File "/Users/fridolin.pokorny/git/fridex/selinon/demo-worker/./micropipenv.py", line 1097, in get_requirements_sections
result["default"] = {
File "/Users/fridolin.pokorny/git/fridex/selinon/demo-worker/./micropipenv.py", line 1098, in <dictcomp>
dependency_name: _parse_pipfile_dependency_info(pipfile_entry)
File "/Users/fridolin.pokorny/git/fridex/selinon/demo-worker/./micropipenv.py", line 1064, in _parse_pipfile_dependency_info
result = {"version": pipfile_entry["version"]}
KeyError: 'version'
```
**Expected behavior**
Requirements should be generated respecting the VCS dependency stated in the provided Pipfile.
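One possible shape of a guard, sketched here as an assumption rather than the maintainers' actual patch, is to branch on VCS-style entries before touching `version`:
```python
def _parse_pipfile_dependency_info(pipfile_entry):
    """Sketch: tolerate VCS entries such as {"git": "..."} that lack "version"."""
    if isinstance(pipfile_entry, str):
        return {"version": pipfile_entry}
    if "git" in pipfile_entry:
        info = {"git": pipfile_entry["git"]}
        if "ref" in pipfile_entry:
            info["ref"] = pipfile_entry["ref"]
        return info
    return {"version": pipfile_entry.get("version", "*")}
```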
| 1.0 | Key error raised on requirements subcommand when VCS dependency is specified - **Describe the bug**
Given the following Pipfile:
```toml
[[source]]
url = "https://pypi.org/simple"
verify_ssl = true
name = "pypi"
[packages]
selinon = {git = "https://github.com/selinon/selinon"}
[dev-packages]
```
Running the following command:
```
micropipenv --verbose req --only-direct
```
I get the following error:
```python
Traceback (most recent call last):
File "/Users/fridolin.pokorny/git/fridex/selinon/demo-worker/./micropipenv.py", line 1464, in <module>
return_code = main()
File "/Users/fridolin.pokorny/git/fridex/selinon/demo-worker/./micropipenv.py", line 1451, in main
handler(**arguments)
File "/Users/fridolin.pokorny/git/fridex/selinon/demo-worker/./micropipenv.py", line 1301, in requirements
sections = sections or get_requirements_sections(
File "/Users/fridolin.pokorny/git/fridex/selinon/demo-worker/./micropipenv.py", line 1097, in get_requirements_sections
result["default"] = {
File "/Users/fridolin.pokorny/git/fridex/selinon/demo-worker/./micropipenv.py", line 1098, in <dictcomp>
dependency_name: _parse_pipfile_dependency_info(pipfile_entry)
File "/Users/fridolin.pokorny/git/fridex/selinon/demo-worker/./micropipenv.py", line 1064, in _parse_pipfile_dependency_info
result = {"version": pipfile_entry["version"]}
KeyError: 'version'
```
**Expected behavior**
Requirements should be generated respecting the VCS dependency stated in the provided Pipfile.
| non_main | key error raised on requirements subcommand when vcs dependency is specified describe the bug given the following pipfile toml url verify ssl true name pypi selinon git running the following command micropipenv verbose req only direct i get the following error python traceback most recent call last file users fridolin pokorny git fridex selinon demo worker micropipenv py line in return code main file users fridolin pokorny git fridex selinon demo worker micropipenv py line in main handler arguments file users fridolin pokorny git fridex selinon demo worker micropipenv py line in requirements sections sections or get requirements sections file users fridolin pokorny git fridex selinon demo worker micropipenv py line in get requirements sections result file users fridolin pokorny git fridex selinon demo worker micropipenv py line in dependency name parse pipfile dependency info pipfile entry file users fridolin pokorny git fridex selinon demo worker micropipenv py line in parse pipfile dependency info result version pipfile entry keyerror version expected behavior requirements should be generated respecting vcs dependency stated in the provided pipfile | 0 |
209,404 | 16,019,828,887 | IssuesEvent | 2021-04-20 21:04:07 | cookiecutter/cookiecutter | https://api.github.com/repos/cookiecutter/cookiecutter | closed | CI Fails by coverage lowered after Travis stopped working | CI/CD tests | * Cookiecutter version: latest
### Description:
Tests on Windows trigger `onerror` in the `rmtree` function from `utils.py`, causing the [force_delete](https://github.com/cookiecutter/cookiecutter/blob/master/cookiecutter/utils.py#L30) function to be called and covered.
Since Travis stopped working, we are relying only on the GitHub Actions report, which does not combine coverage from Windows; this lowers coverage and makes CI fail.
### Fix:
We should combine coverage from all OSes on GitHub Actions as well.
| 1.0 | CI Fails by coverage lowered after Travis stopped working - * Cookiecutter version: latest
### Description:
Tests on Windows trigger `onerror` in the `rmtree` function from `utils.py`, causing the [force_delete](https://github.com/cookiecutter/cookiecutter/blob/master/cookiecutter/utils.py#L30) function to be called and covered.
Since Travis stopped working, we are relying only on the GitHub Actions report, which does not combine coverage from Windows; this lowers coverage and makes CI fail.
### Fix:
We should combine coverage from all OSes on GitHub Actions as well.
| non_main | ci fails by coverage lowered after travis stopped working cookiecutter version latest description tests on windows trigger onerror in the rmtree function from utils py causing function to be called and covered since travis stopped working we are relying only on the github actions report which does not combine coverage from windows this lowers coverage and makes ci fail fix we should combine coverage from all oses on github actions as well | 0
244 | 2,981,332,408 | IssuesEvent | 2015-07-17 00:03:23 | daemonraco/toobasic | https://api.github.com/repos/daemonraco/toobasic | closed | PostgreSQL Database Structure Maintainer | Database Structure Maintainer enhancement next version | ## What to do
Create a database structure adapter for PostgreSQL.
## Tests
* Installation
* Representations
| True | PostgreSQL Database Structure Maintainer - ## What to do
Create a database structure adapter for PostgreSQL.
## Tests
* Installation
* Representations
| main | postgresql database structure maintainer what to do create a database structure adapter for postgresql tests installation representations | 1
3,186 | 12,226,869,743 | IssuesEvent | 2020-05-03 12:55:43 | gfleetwood/asteres | https://api.github.com/repos/gfleetwood/asteres | opened | drwaterman/pydatadctesting (158047633) | HTML maintain | https://github.com/drwaterman/pydatadctesting
Companion repo to the PyData DC 2018 presentation "Do Your Homework! Writing tests for Data Science and Stochastic Code" | True | drwaterman/pydatadctesting (158047633) - https://github.com/drwaterman/pydatadctesting
Companion repo to the PyData DC 2018 presentation "Do Your Homework! Writing tests for Data Science and Stochastic Code" | main | drwaterman pydatadctesting companion repo to the pydata dc presentation do your homework writing tests for data science and stochastic code | 1 |
3,204 | 12,236,610,463 | IssuesEvent | 2020-05-04 16:37:40 | RockefellerArchiveCenter/aurora | https://api.github.com/repos/RockefellerArchiveCenter/aurora | closed | Update Aurora to use Django 2.x | maintainability python3 | ## Is your feature request related to a problem? Please describe.
Aurora uses Django 1.x. The current major version is 2.x
## Describe the solution you'd like
Update Aurora to use the current 2.x version of Django. | True | Update Aurora to use Django 2.x - ## Is your feature request related to a problem? Please describe.
Aurora uses Django 1.x. The current major version is 2.x
## Describe the solution you'd like
Update Aurora to use the current 2.x version of Django. | main | update aurora to use django x is your feature request related to a problem please describe aurora uses django x the current major version is x describe the solution you d like update aurora to use the current x version of django | 1 |
5,691 | 29,952,337,576 | IssuesEvent | 2023-06-23 03:04:31 | spicetify/spicetify-themes | https://api.github.com/repos/spicetify/spicetify-themes | closed | [Onepunch Dark] Weird shadow things in the title of the song. | ☠️ unmaintained | **Describe the bug**
When I play a song, the title shows weird shadow things on the side.
**To Reproduce**
Steps to reproduce the behavior:
1. Play any song
2. See error
**Screenshots**
See the picture shown here.
[https://imgur.com/a/Ko0zmxq](url)
**Specifics:**
- OS: openSUSE Tumbleweed
- Spotify version: Latest
- Spicetify version: Latest
- Theme name: Onepunch (Dark variant)
| True | [Onepunch Dark] Weird shadow things in the title of the song. - **Describe the bug**
When I play a song, the title shows weird shadow things on the side.
**To Reproduce**
Steps to reproduce the behavior:
1. Play any song
2. See error
**Screenshots**
See the picture shown here.
[https://imgur.com/a/Ko0zmxq](url)
**Specifics:**
- OS: openSUSE Tumbleweed
- Spotify version: Latest
- Spicetify version: Latest
- Theme name: Onepunch (Dark variant)
| main | weird shadow things in the title of the song describe the bug when i play a song the title shows weird shadow things on the side to reproduce steps to reproduce the behavior play any song see error screenshots see the picture shown here url specifics os opensuse tumbleweed spotify version latest spicetify version latest theme name onepunch dark variant | 1
805 | 4,425,294,306 | IssuesEvent | 2016-08-16 15:04:15 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | Cannot attach pool without unregistering with redhat_subscribe | bug_report P3 waiting_on_maintainer | ##### Issue Type:
Bug Report
##### Component Name:
redhat_subscription module
##### Ansible Version:
1.7
##### Environment:
RHEL 6
##### Summary:
Cannot attach a subscription pool if a system is already registered
##### Steps To Reproduce:
Register a RHEL system.
Run `ansible rh-host -m redhat_subscription -a 'username=user password=password pool="Red Hat Enterprise Linux Server"'` to attach a pool
##### Expected Results:
I expected a pool to be attached to the system similar to if I were to run `subscription-manager attach --pool=XXXXXXXXX` from the command line
##### Actual Results:
When trying to attach a pool to a system that is already registered I get the following output
```
rh-host | success >> {
"changed": false,
"msg": "System already registered."
}
```
To make this work I have to unsubscribe the system first before re-subscribing and attaching a subscription in the same step. I should be able to attach a subscription at any time, whether the system is already registered or is in the middle of registering. | True | Cannot attach pool without unregistering with redhat_subscribe - ##### Issue Type:
Bug Report
##### Component Name:
redhat_subscription module
##### Ansible Version:
1.7
##### Environment:
RHEL 6
##### Summary:
Cannot attach a subscription pool if a system is already registered
##### Steps To Reproduce:
Register a RHEL system.
Run `ansible rh-host -m redhat_subscription -a 'username=user password=password pool="Red Hat Enterprise Linux Server"'` to attach a pool
##### Expected Results:
I expected a pool to be attached to the system similar to if I were to run `subscription-manager attach --pool=XXXXXXXXX` from the command line
##### Actual Results:
When trying to attach a pool to a system that is already registered I get the following output
```
rh-host | success >> {
"changed": false,
"msg": "System already registered."
}
```
To make this work I have to unsubscribe the system first before re-subscribing and attaching a subscription in the same step. I should be able to attach a subscription at any time if the system is subscribed or being registered. | main | cannot attach pool without unregistering with redhat subscribe issue type bug report component name redhat subscription module ansible version environment rhel summary cannot attach a subscription pool if a system is already registered steps to reproduce register a rhel system run ansible rh host m redhat subscription a username user password password pool red hat enterprise linux server to attach a pool expected results i expected a pool to be attached to the system similar to if i were to run subscription manager attach pool xxxxxxxxx from the command line actual results when trying to attach a pool to a system that is already registered i get the following output rh host success changed false msg system already registered to make this work i have to unsubscribe the system first before re subscribing and attaching a subscription in the same step i should be able to attach a subscription at any time if the system is subscribed or being registered | 1 |
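The workaround spelled out in this report (unregister first, then register and attach the pool in one pass) maps to a short subscription-manager sequence. A minimal Python subprocess sketch of it, with placeholder credentials and pool ID, assuming subscription-manager is on PATH:

```python
import subprocess

def reattach_pool(username, password, pool_id):
    """Workaround: unregister, then register and attach the pool in one pass.

    Ideally `subscription-manager attach --pool=...` alone would work on an
    already-registered host, which is what the issue asks the module to do.
    """
    subprocess.call(["subscription-manager", "unregister"])  # may already be unregistered
    subprocess.check_call([
        "subscription-manager", "register",
        "--username", username, "--password", password,
    ])
    subprocess.check_call(["subscription-manager", "attach", "--pool", pool_id])

reattach_pool("user", "password", "XXXXXXXXX")  # placeholders, as in the issue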
45,916 | 2,942,034,861 | IssuesEvent | 2015-07-02 12:00:36 | shakna-israel/doccu | https://api.github.com/repos/shakna-israel/doccu | closed | Header | colorific Jinja2 needs investigation Priority: Low templates | The header of all documents should include:
* Logo (globally defined?)
* Writer/s
* Document name
* Date of Authority
* Date of Expiration
* Date of Signing
* Document Status
Aesthetically, an underline beneath the header is desirable. Colorific could be used to fetch the secondary color from the logo for this purpose. | 1.0 | Header - The header of all documents should include:
* Logo (globally defined?)
* Writer/s
* Document name
* Date of Authority
* Date of Expiration
* Date of Signing
* Document Status
Aesthetically, an underline beneath the header is desirable. Colorific could be used to fetch the secondary color from the logo for this purpose. | non_main | header the header of all documents should include logo globally defined writer s document name date of authority date of expiration date of signing document status aesthetically an underline beneath the header is desirable colorific could be used to fetch the secondary color from the logo for this purpose | 0 |
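For the underline colour, the issue floats Colorific. As a rough illustration of the palette idea, here is a stand-in sketch using Pillow instead (deliberately not the Colorific API): it downscales the logo and picks the second most frequent colour.

```python
from PIL import Image  # pip install Pillow

def secondary_color(logo_path):
    """Return the second most common RGB colour in the logo.

    Crude stand-in for a real palette extractor: downscale to keep the
    colour count small, then rank colours by pixel frequency.
    """
    img = Image.open(logo_path).convert("RGB").resize((64, 64))
    counts = img.getcolors(64 * 64)  # [(count, (r, g, b)), ...]
    counts.sort(reverse=True)
    return counts[1][1] if len(counts) > 1 else counts[0][1]

# e.g. underline = "border-bottom: 2px solid rgb%s;" % (secondary_color("logo.png"),)
```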
250,408 | 27,086,166,596 | IssuesEvent | 2023-02-14 17:08:31 | solana-labs/solana | https://api.github.com/repos/solana-labs/solana | closed | Gossip is vulnerable to UDP reflection | security | #### Problem
Gossip responses are sent to the source IP, rather than the IP address in gossip. This means that the mechanism is vulnerable to source IP spoofing, with responses sent to the spoofed IP.
The real-world impact of this ranges from "nuisance" to "very bad" depending on the amplification factor.
Spoofing is still trivial in today's internet - many providers still don't do proper filtering.
#### Proposed Solution
In a connectionless protocol, the source IP can never be trusted. The canonical solution to this is a three-way handshake that authenticates the source IP.
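One concrete shape for that handshake is a token echo: challenge the claimed source address with a small random token, and only send full-size gossip responses once the token comes back. A minimal Python sketch (illustrative framing only, not Solana's actual wire protocol):

```python
import os
import socket

# Toy UDP responder: large replies only go to addresses that completed a
# token-echo handshake, so a spoofed source IP never receives (or triggers)
# an amplified response.
PENDING = {}      # addr -> token we challenged it with
VERIFIED = set()  # addrs that echoed their token back

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 9000))

while True:
    data, addr = sock.recvfrom(1500)
    if addr in VERIFIED:
        sock.sendto(b"GOSSIP:" + b"x" * 1200, addr)  # full-size response
    elif data.startswith(b"ECHO:") and PENDING.get(addr) == data[5:]:
        VERIFIED.add(addr)  # addr proved it can receive traffic at that IP
        del PENDING[addr]
    else:
        token = os.urandom(16)
        PENDING[addr] = token
        sock.sendto(b"TOKEN:" + token, addr)  # challenge, no larger than a request
```

Because the challenge reply is no larger than the request, a spoofed source gains no amplification.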
| True | Gossip is vulnerable to UDP reflection - #### Problem
Gossip responses are sent to the source IP, rather than the IP address in gossip. This means that the mechanism is vulnerable to source IP spoofing, with responses sent to the spoofed IP.
The real-world impact of this ranges from "nuisance" to "very bad" depending on the amplification factor.
Spoofing is still trivial in today's internet - many providers still don't do proper filtering.
#### Proposed Solution
In a connectionless protocol, the source IP can never be trusted. The canonical solution to this is a three-way handshake that authenticates the source IP.
| non_main | gossip is vulnerable to udp reflection problem gossip responses are sent to the source ip rather than the ip address in gossip this means that the mechanism is vulnerable to source ip spoofing with responses sent to the spoofed ip the real world impact of this ranges from nuisance to very bad depending on the amplification factor spoofing is still trivial in today s internet many providers still don t do proper filtering proposed solution in a connectionless protocol the source ip can never be trusted the canonical solution to this is a three way handshake that authenticates the source ip | 0 |
61,173 | 14,618,875,830 | IssuesEvent | 2020-12-22 16:54:40 | kenferrara/clearcode-toolkit | https://api.github.com/repos/kenferrara/clearcode-toolkit | opened | CVE-2020-1738 (Low) detected in ansible-2.9.9.tar.gz | security vulnerability | ## CVE-2020-1738 - Low Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ansible-2.9.9.tar.gz</b></p></summary>
<p>Radically simple IT automation</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/00/5d/e10b83e0e6056dbd5b4809b451a191395175a57e3175ce04e35d9c5fc2a0/ansible-2.9.9.tar.gz">https://files.pythonhosted.org/packages/00/5d/e10b83e0e6056dbd5b4809b451a191395175a57e3175ce04e35d9c5fc2a0/ansible-2.9.9.tar.gz</a></p>
<p>Path to dependency file: clearcode-toolkit/devops-requirements.txt</p>
<p>Path to vulnerable library: clearcode-toolkit/devops-requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **ansible-2.9.9.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kenferrara/clearcode-toolkit/commit/9127e806188e9564b4890b1dcb24bcc5dc4d0332">9127e806188e9564b4890b1dcb24bcc5dc4d0332</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A flaw was found in Ansible Engine when the module package or service is used and the parameter 'use' is not specified. If a previous task is executed with a malicious user, the module sent can be selected by the attacker using the ansible facts file. All versions in 2.7.x, 2.8.x and 2.9.x branches are believed to be vulnerable.
<p>Publish Date: 2020-03-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-1738>CVE-2020-1738</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>3.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2020-1738">https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2020-1738</a></p>
<p>Release Date: 2020-03-16</p>
<p>Fix Resolution: ansible-engine 2.9.7</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Python","packageName":"ansible","packageVersion":"2.9.9","isTransitiveDependency":false,"dependencyTree":"ansible:2.9.9","isMinimumFixVersionAvailable":true,"minimumFixVersion":"ansible-engine 2.9.7"}],"vulnerabilityIdentifier":"CVE-2020-1738","vulnerabilityDetails":"A flaw was found in Ansible Engine when the module package or service is used and the parameter \u0027use\u0027 is not specified. If a previous task is executed with a malicious user, the module sent can be selected by the attacker using the ansible facts file. All versions in 2.7.x, 2.8.x and 2.9.x branches are believed to be vulnerable.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-1738","cvss3Severity":"low","cvss3Score":"3.9","cvss3Metrics":{"A":"Low","AC":"High","PR":"Low","S":"Changed","C":"None","UI":"Required","AV":"Local","I":"Low"},"extraData":{}}</REMEDIATE> --> | True | CVE-2020-1738 (Low) detected in ansible-2.9.9.tar.gz - ## CVE-2020-1738 - Low Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ansible-2.9.9.tar.gz</b></p></summary>
<p>Radically simple IT automation</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/00/5d/e10b83e0e6056dbd5b4809b451a191395175a57e3175ce04e35d9c5fc2a0/ansible-2.9.9.tar.gz">https://files.pythonhosted.org/packages/00/5d/e10b83e0e6056dbd5b4809b451a191395175a57e3175ce04e35d9c5fc2a0/ansible-2.9.9.tar.gz</a></p>
<p>Path to dependency file: clearcode-toolkit/devops-requirements.txt</p>
<p>Path to vulnerable library: clearcode-toolkit/devops-requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **ansible-2.9.9.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kenferrara/clearcode-toolkit/commit/9127e806188e9564b4890b1dcb24bcc5dc4d0332">9127e806188e9564b4890b1dcb24bcc5dc4d0332</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A flaw was found in Ansible Engine when the module package or service is used and the parameter 'use' is not specified. If a previous task is executed with a malicious user, the module sent can be selected by the attacker using the ansible facts file. All versions in 2.7.x, 2.8.x and 2.9.x branches are believed to be vulnerable.
<p>Publish Date: 2020-03-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-1738>CVE-2020-1738</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>3.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2020-1738">https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2020-1738</a></p>
<p>Release Date: 2020-03-16</p>
<p>Fix Resolution: ansible-engine 2.9.7</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Python","packageName":"ansible","packageVersion":"2.9.9","isTransitiveDependency":false,"dependencyTree":"ansible:2.9.9","isMinimumFixVersionAvailable":true,"minimumFixVersion":"ansible-engine 2.9.7"}],"vulnerabilityIdentifier":"CVE-2020-1738","vulnerabilityDetails":"A flaw was found in Ansible Engine when the module package or service is used and the parameter \u0027use\u0027 is not specified. If a previous task is executed with a malicious user, the module sent can be selected by the attacker using the ansible facts file. All versions in 2.7.x, 2.8.x and 2.9.x branches are believed to be vulnerable.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-1738","cvss3Severity":"low","cvss3Score":"3.9","cvss3Metrics":{"A":"Low","AC":"High","PR":"Low","S":"Changed","C":"None","UI":"Required","AV":"Local","I":"Low"},"extraData":{}}</REMEDIATE> --> | non_main | cve low detected in ansible tar gz cve low severity vulnerability vulnerable library ansible tar gz radically simple it automation library home page a href path to dependency file clearcode toolkit devops requirements txt path to vulnerable library clearcode toolkit devops requirements txt dependency hierarchy x ansible tar gz vulnerable library found in head commit a href found in base branch master vulnerability details a flaw was found in ansible engine when the module package or service is used and the parameter use is not specified if a previous task is executed with a malicious user the module sent can be selected by the attacker using the ansible facts file all versions in x x and x branches are believed to be vulnerable publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity high privileges required low user interaction required scope changed impact metrics confidentiality impact none integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution ansible engine rescue worker helmet automatic remediation is available for this issue isopenpronvulnerability true ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails a flaw was found in ansible engine when the module package or service is used and the parameter is not specified if a previous task is executed with a malicious user the module sent can be selected by the attacker using the ansible facts file all versions in x x and x branches are believed to be vulnerable vulnerabilityurl | 0 |
2,323 | 3,618,782,122 | IssuesEvent | 2016-02-08 13:25:05 | camptocamp/ngeo | https://api.github.com/repos/camptocamp/ngeo | closed | The layerorder.html doesn't build properly | Backlog Infrastructure | On master branch, when running `make check-examples`, we currently have the following error:
```
$ make check-examples
mkdir -p .build/examples-hosted/lib/
cp dist/gmf.js .build/examples-hosted/lib/gmf.js
mkdir -p .build/examples-hosted/lib/
cp dist/gmf.js.map .build/examples-hosted/lib/gmf.js.map
mkdir -p .build/examples-hosted/fonts
cp node_modules/font-awesome/fonts/* .build/examples-hosted/fonts
mkdir -p .build/examples-hosted/partials
cp examples/partials/* .build/examples-hosted/partials
mkdir -p .build/
./node_modules/.bin/phantomjs --local-to-remote-url-access=true buildtools/check-example.js .build/examples-hosted/layerorder.html
console: TypeError: 'undefined' is not a function (evaluating 'ba.requestAnimationFrame(this.Ha)')
at file:///opt/ngeo/adube/.build/examples-hosted/lib/ngeo.js:369
at file:///opt/ngeo/adube/.build/examples-hosted/lib/ngeo.js:369
at c (file:///opt/ngeo/adube/.build/examples-hosted/lib/ngeo.js:12)
at file:///opt/ngeo/adube/.build/examples-hosted/lib/ngeo.js:14
at Nb (file:///opt/ngeo/adube/.build/examples-hosted/lib/ngeo.js:17)
at file:///opt/ngeo/adube/.build/examples-hosted/lib/ngeo.js:17
at file:///opt/ngeo/adube/.build/examples-hosted/lib/ngeo.js:17
at U (file:///opt/ngeo/adube/.build/examples-hosted/lib/ngeo.js:360)
at file:///opt/ngeo/adube/.build/examples-hosted/layerorder.js:72
at e (file:///opt/ngeo/adube/.build/examples-hosted/lib/angular.min.js:39)
at file:///opt/ngeo/adube/.build/examples-hosted/lib/angular.min.js:80
at K (file:///opt/ngeo/adube/.build/examples-hosted/lib/angular.min.js:61)
at g (file:///opt/ngeo/adube/.build/examples-hosted/lib/angular.min.js:54)
at g (file:///opt/ngeo/adube/.build/examples-hosted/lib/angular.min.js:54)
at file:///opt/ngeo/adube/.build/examples-hosted/lib/angular.min.js:53
at file:///opt/ngeo/adube/.build/examples-hosted/lib/angular.min.js:20
at file:///opt/ngeo/adube/.build/examples-hosted/lib/angular.min.js:133
at file:///opt/ngeo/adube/.build/examples-hosted/lib/angular.min.js:133
at file:///opt/ngeo/adube/.build/examples-hosted/lib/angular.min.js:20
at e (file:///opt/ngeo/adube/.build/examples-hosted/lib/angular.min.js:39)
at file:///opt/ngeo/adube/.build/examples-hosted/lib/angular.min.js:20
at zc (file:///opt/ngeo/adube/.build/examples-hosted/lib/angular.min.js:20)
at Zd (file:///opt/ngeo/adube/.build/examples-hosted/lib/angular.min.js:19)
at file:///opt/ngeo/adube/.build/examples-hosted/lib/angular.min.js:293
at a (file:///opt/ngeo/adube/.build/examples-hosted/lib/angular.min.js:174)
at file:///opt/ngeo/adube/.build/examples-hosted/lib/angular.min.js:35
make: *** [.build/layerorder.check.timestamp] Error 1
``` | 1.0 | The layerorder.html doesn't build properly - On master branch, when running `make check-examples`, we currently have the following error:
```
$ make check-examples
mkdir -p .build/examples-hosted/lib/
cp dist/gmf.js .build/examples-hosted/lib/gmf.js
mkdir -p .build/examples-hosted/lib/
cp dist/gmf.js.map .build/examples-hosted/lib/gmf.js.map
mkdir -p .build/examples-hosted/fonts
cp node_modules/font-awesome/fonts/* .build/examples-hosted/fonts
mkdir -p .build/examples-hosted/partials
cp examples/partials/* .build/examples-hosted/partials
mkdir -p .build/
./node_modules/.bin/phantomjs --local-to-remote-url-access=true buildtools/check-example.js .build/examples-hosted/layerorder.html
console: TypeError: 'undefined' is not a function (evaluating 'ba.requestAnimationFrame(this.Ha)')
at file:///opt/ngeo/adube/.build/examples-hosted/lib/ngeo.js:369
at file:///opt/ngeo/adube/.build/examples-hosted/lib/ngeo.js:369
at c (file:///opt/ngeo/adube/.build/examples-hosted/lib/ngeo.js:12)
at file:///opt/ngeo/adube/.build/examples-hosted/lib/ngeo.js:14
at Nb (file:///opt/ngeo/adube/.build/examples-hosted/lib/ngeo.js:17)
at file:///opt/ngeo/adube/.build/examples-hosted/lib/ngeo.js:17
at file:///opt/ngeo/adube/.build/examples-hosted/lib/ngeo.js:17
at U (file:///opt/ngeo/adube/.build/examples-hosted/lib/ngeo.js:360)
at file:///opt/ngeo/adube/.build/examples-hosted/layerorder.js:72
at e (file:///opt/ngeo/adube/.build/examples-hosted/lib/angular.min.js:39)
at file:///opt/ngeo/adube/.build/examples-hosted/lib/angular.min.js:80
at K (file:///opt/ngeo/adube/.build/examples-hosted/lib/angular.min.js:61)
at g (file:///opt/ngeo/adube/.build/examples-hosted/lib/angular.min.js:54)
at g (file:///opt/ngeo/adube/.build/examples-hosted/lib/angular.min.js:54)
at file:///opt/ngeo/adube/.build/examples-hosted/lib/angular.min.js:53
at file:///opt/ngeo/adube/.build/examples-hosted/lib/angular.min.js:20
at file:///opt/ngeo/adube/.build/examples-hosted/lib/angular.min.js:133
at file:///opt/ngeo/adube/.build/examples-hosted/lib/angular.min.js:133
at file:///opt/ngeo/adube/.build/examples-hosted/lib/angular.min.js:20
at e (file:///opt/ngeo/adube/.build/examples-hosted/lib/angular.min.js:39)
at file:///opt/ngeo/adube/.build/examples-hosted/lib/angular.min.js:20
at zc (file:///opt/ngeo/adube/.build/examples-hosted/lib/angular.min.js:20)
at Zd (file:///opt/ngeo/adube/.build/examples-hosted/lib/angular.min.js:19)
at file:///opt/ngeo/adube/.build/examples-hosted/lib/angular.min.js:293
at a (file:///opt/ngeo/adube/.build/examples-hosted/lib/angular.min.js:174)
at file:///opt/ngeo/adube/.build/examples-hosted/lib/angular.min.js:35
make: *** [.build/layerorder.check.timestamp] Error 1
``` | non_main | the layerorder html doesn t build properly on master branch when running make check examples we currently have the following error make check examples mkdir p build examples hosted lib cp dist gmf js build examples hosted lib gmf js mkdir p build examples hosted lib cp dist gmf js map build examples hosted lib gmf js map mkdir p build examples hosted fonts cp node modules font awesome fonts build examples hosted fonts mkdir p build examples hosted partials cp examples partials build examples hosted partials mkdir p build node modules bin phantomjs local to remote url access true buildtools check example js build examples hosted layerorder html console typeerror undefined is not a function evaluating ba requestanimationframe this ha at file opt ngeo adube build examples hosted lib ngeo js at file opt ngeo adube build examples hosted lib ngeo js at c file opt ngeo adube build examples hosted lib ngeo js at file opt ngeo adube build examples hosted lib ngeo js at nb file opt ngeo adube build examples hosted lib ngeo js at file opt ngeo adube build examples hosted lib ngeo js at file opt ngeo adube build examples hosted lib ngeo js at u file opt ngeo adube build examples hosted lib ngeo js at file opt ngeo adube build examples hosted layerorder js at e file opt ngeo adube build examples hosted lib angular min js at file opt ngeo adube build examples hosted lib angular min js at k file opt ngeo adube build examples hosted lib angular min js at g file opt ngeo adube build examples hosted lib angular min js at g file opt ngeo adube build examples hosted lib angular min js at file opt ngeo adube build examples hosted lib angular min js at file opt ngeo adube build examples hosted lib angular min js at file opt ngeo adube build examples hosted lib angular min js at file opt ngeo adube build examples hosted lib angular min js at file opt ngeo adube build examples hosted lib angular min js at e file opt ngeo adube build examples hosted lib angular min js at file opt ngeo adube build examples hosted lib angular min js at zc file opt ngeo adube build examples hosted lib angular min js at zd file opt ngeo adube build examples hosted lib angular min js at file opt ngeo adube build examples hosted lib angular min js at a file opt ngeo adube build examples hosted lib angular min js at file opt ngeo adube build examples hosted lib angular min js make error | 0 |
176,217 | 28,045,023,192 | IssuesEvent | 2023-03-28 21:54:12 | MozillaFoundation/foundation.mozilla.org | https://api.github.com/repos/MozillaFoundation/foundation.mozilla.org | closed | Roadmap and Prioritize Site Improvement Recommendations | design | following up issue #6303
The recommendations from the IA refresh has been labeled and categorized on a spreadsheet https://docs.google.com/spreadsheets/d/14XlcxPYT5qJFPnMUsPmDk512lMzTuLx0OJeSwGbmAQM/edit?usp=sharing
These will need be prioritized so 'complexity for implementation' and 'impact' need to be scoped. The action items can then be roadmapped to plan when/where it fits in with the rest of the foundation site work. | 1.0 | Roadmap and Prioritize Site Improvement Recommendations - following up issue #6303
The recommendations from the IA refresh has been labeled and categorized on a spreadsheet https://docs.google.com/spreadsheets/d/14XlcxPYT5qJFPnMUsPmDk512lMzTuLx0OJeSwGbmAQM/edit?usp=sharing
These will need be prioritized so 'complexity for implementation' and 'impact' need to be scoped. The action items can then be roadmapped to plan when/where it fits in with the rest of the foundation site work. | non_main | roadmap and prioritize site improvement recommendations following up issue the recommendations from the ia refresh has been labeled and categorized on a spreadsheet these will need be prioritized so complexity for implementation and impact need to be scoped the action items can then be roadmapped to plan when where it fits in with the rest of the foundation site work | 0 |
5,673 | 29,503,658,242 | IssuesEvent | 2023-06-03 03:25:32 | ipfs/js-ipfs | https://api.github.com/repos/ipfs/js-ipfs | closed | A viable alternative to go-ipfs? | kind/support status/ready kind/discussion need/maintainer-input kind/maybe-in-helia | I would very much like for the js-ipfs implementation to be the most widely used IPFS implementation and I'm interested to know what the main blockers are for that.
_We realise you have a choice of IPFS provider and your choice is important to us._
Or in other words, why do you type `ipfs daemon` and not `jsipfs daemon`?
I've had some interesting insight already:
> @olizilla:
> it's running on a different api port, so i have to config things to use it
> i've got a repo in ~/.ipfs that has some things in i want to share
> and the command is jsipfs rather than ipfs
> these are all things to make it easier to dev both, but they all make me feel like the go one is the one i should be running day-to-day
> also there is like 10 people on the js dht and 800+ on the go one
> @lidel
> this may be a niche reason, but there is an official docker image with go version as well
> it may be important for devs, because historically, i used different docker images to quickly find regressions in IPFS APIs used by companion having one for jsipfs would be nice
| True | A viable alternative to go-ipfs? - I would very much like for the js-ipfs implementation to be the most widely used IPFS implementation and I'm interested to know what the main blockers are for that.
_We realise you have a choice of IPFS provider and your choice is important to us._
Or in other words, why do you type `ipfs daemon` and not `jsipfs daemon`?
I've had some interesting insight already:
> @olizilla:
> it's running on a different api port, so i have to config things to use it
> i've got a repo in ~/.ipfs that has some things in i want to share
> and the command is jsipfs rather than ipfs
> these are all things to make it easier to dev both, but they all make me feel like the go one is the one i should be running day-to-day
> also there is like 10 people on the js dht and 800+ on the go one
> @lidel
> this may be a niche reason, but there is an official docker image with go version as well
> it may be important for devs, because historically, i used different docker images to quickly find regressions in IPFS APIs used by companion having one for jsipfs would be nice
| main | a viable alternative to go ipfs i would very much like for the js ipfs implementation to be the most widely used ipfs implementation and i m interested to know what the main blockers are for that we realise you have a choice of ipfs provider and your choice in important to us or in other words why do you type ipfs daemon and not jsipfs daemon i ve had some interesting insight already olizilla it s running on a different api port so i have to config things to use it i ve got a repo in ipfs that has some things in i want to share and the command is jsipfs rather than ipfs these are all things to make it easier to dev both but they all make me feel like the go one is the one i should be running day to day also there is like people on the js dht and on the go one lidel this may be a niche reason but there is an official docker image with go version as well it may be important for devs because historically i used different docker images to quickly find regressions in ipfs apis used by companion having one for jsipfs would be nice | 1 |
5,073 | 25,960,183,372 | IssuesEvent | 2022-12-18 19:59:40 | cran-task-views/Hydrology | https://api.github.com/repos/cran-task-views/Hydrology | closed | Package 'getMet' has been archived on CRAN for more than 60 days | maintainer-contacted | Package [getMet](https://CRAN.R-project.org/package=getMet) is currently listed in CRAN Task View [Hydrology](https://CRAN.R-project.org/view=Hydrology) but the package has actually been archived for more than 60 days on CRAN. Often this indicates that the package is currently not sufficiently actively maintained and should be excluded from the task view.
Alternatively, you might also consider reaching out to the authors of the package and encourage (or even help) them to bring the package back to CRAN.
In any case, the situation should be resolved in the next four weeks. If the package does not seem to be brought back to CRAN, please exclude it from the task view. | True | Package 'getMet' has been archived on CRAN for more than 60 days - Package [getMet](https://CRAN.R-project.org/package=getMet) is currently listed in CRAN Task View [Hydrology](https://CRAN.R-project.org/view=Hydrology) but the package has actually been archived for more than 60 days on CRAN. Often this indicates that the package is currently not sufficiently actively maintained and should be excluded from the task view.
Alternatively, you might also consider reaching out to the authors of the package and encourage (or even help) them to bring the package back to CRAN.
In any case, the situation should be resolved in the next four weeks. If the package does not seem to be brought back to CRAN, please exclude it from the task view. | main | package getmet has been archived on cran for more than days package is currently listed in cran task view but the package has actually been archived for more than days on cran often this indicates that the package is currently not sufficiently actively maintained and should be excluded from the task view alternatively you might also consider reaching out to the authors of the package and encourage or even help them to bring the package back to cran in any case the situation should be resolved in the next four weeks if the package does not seem to be brought back to cran please exclude it from the task view | 1 |
153,552 | 12,150,996,197 | IssuesEvent | 2020-04-24 19:03:13 | awslabs/s2n | https://api.github.com/repos/awslabs/s2n | closed | Integration Test V2 infrastructure | type/integration_test | ## **Problem:**
We should keep running existing integration tests as we build out the new framework. This means we need a new Codebuild job to run the new tests while the tests are in development.
## **Completion Requirements**
- [x] A method to create a new Codebuild stack to run these tests (#1749)
- [x] Updates to codebuild / makefiles / dependencies to get the requirements to run tox/pytest
- [x] A working test to show we can use s2n in Codebuild (this doesn't have to be a real, long-term test, just something to prove the above requirements are actually doing what needs to be done; see the sketch below).
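For that last item, a placeholder pytest can be tiny; this sketch just proves the tox/pytest toolchain runs in CodeBuild and that the build produced a loadable libs2n (the library path is an assumption about the build layout):

```python
# test_smoke.py - throwaway proof that pytest runs in CodeBuild and that
# the s2n build produced a loadable shared library.
import ctypes
import os

import pytest

LIB = os.environ.get("S2N_LIB", "lib/libs2n.so")  # assumed build output path

def test_libs2n_loads():
    if not os.path.exists(LIB):
        pytest.skip("libs2n.so not built at %s" % LIB)
    assert ctypes.CDLL(LIB) is not None
```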
[//]: # (NOTE: If you believe this might be a security issue, please email aws-security@amazon.com instead of creating a GitHub issue. For more details, see the AWS Vulnerability Reporting Guide: https://aws.amazon.com/security/vulnerability-reporting/ )
| 1.0 | Integration Test V2 infrastructure - ## **Problem:**
We should keep running existing integration tests as we build out the new framework. This means we need a new Codebuild job to run the new tests while the tests are in development.
## **Completion Requirements**
- [x] A method to create a new Codebuild stack to run these tests (#1749)
- [x] Updates to codebuild / makefiles / dependencies to get the requirements to run tox/pytest
- [x] A working test to show we can use s2n in Codebuild (this doesn't have to be a real, long-term test, just something to prove the above requirements are actually doing what needs to be done).
[//]: # (NOTE: If you believe this might be a security issue, please email aws-security@amazon.com instead of creating a GitHub issue. For more details, see the AWS Vulnerability Reporting Guide: https://aws.amazon.com/security/vulnerability-reporting/ )
| non_main | integration test infrastructure problem we should keep running existing integration tests as we build out the new framework this means we need a new codebuild job to run the new tests while the tests are in development completion requirements a method to create a new codebuild stack to run these tests updates to codebuild makefiles dependencies to get the requirements to run tox pytest a working test to show we can use in codebuild this doens t have to be a real long term test just something to prove the above requirements are actually doing what needs to be done note if you believe this might be a security issue please email aws security amazon com instead of creating a github issue for more details see the aws vulnerability reporting guide | 0 |
4,372 | 3,358,971,152 | IssuesEvent | 2015-11-19 12:18:10 | godotengine/godot | https://api.github.com/repos/godotengine/godot | closed | _iconv undefined symbols on OSX at least | bug platform:osx topic:buildsystem | Undefined symbols for architecture x86_64:
"_iconv", referenced from:
pe_bliss::pe_utils::to_ucs2(std::__1::basic_string<wchar_t, std::__1::char_traits<wchar_t>, std::__1::allocator<wchar_t> > const&) in libtool.osx.opt.tools.64.a(utils.osx.opt.tools.64.o)
pe_bliss::pe_utils::from_ucs2(std::__1::basic_string<unsigned short, std::__1::char_traits<unsigned short>, std::__1::allocator<unsigned short> > const&) in libtool.osx.opt.tools.64.a(utils.osx.opt.tools.64.o)
"_iconv_close", referenced from:
pe_bliss::pe_utils::to_ucs2(std::__1::basic_string<wchar_t, std::__1::char_traits<wchar_t>, std::__1::allocator<wchar_t> > const&) in libtool.osx.opt.tools.64.a(utils.osx.opt.tools.64.o)
pe_bliss::pe_utils::from_ucs2(std::__1::basic_string<unsigned short, std::__1::char_traits<unsigned short>, std::__1::allocator<unsigned short> > const&) in libtool.osx.opt.tools.64.a(utils.osx.opt.tools.64.o)
"_iconv_open", referenced from:
pe_bliss::pe_utils::to_ucs2(std::__1::basic_string<wchar_t, std::__1::char_traits<wchar_t>, std::__1::allocator<wchar_t> > const&) in libtool.osx.opt.tools.64.a(utils.osx.opt.tools.64.o)
pe_bliss::pe_utils::from_ucs2(std::__1::basic_string<unsigned short, std::__1::char_traits<unsigned short>, std::__1::allocator<unsigned short> > const&) in libtool.osx.opt.tools.64.a(utils.osx.opt.tools.64.o)
ld: symbol(s) not found for architecture x86_64
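These three undefined symbols are exactly what macOS reports when libiconv is missing from the link line. A sketch of the likely fix in Godot's SCons build, where the exact hook point in the build scripts is an assumption:

```python
# Sketch for the SCons build (exact location in Godot's build scripts is
# assumed): make the macOS link step pull in libiconv, which is what
# defines _iconv, _iconv_open and _iconv_close for pe_bliss's utils.cpp.
def configure_macos_link(env):
    env.Append(LINKFLAGS=["-liconv"])
```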
This is on the current latest revision: da4f618 | 1.0 | _iconv undefined symbols on OSX at least - Undefined symbols for architecture x86_64:
"_iconv", referenced from:
pe_bliss::pe_utils::to_ucs2(std::__1::basic_string<wchar_t, std::__1::char_traits<wchar_t>, std::__1::allocator<wchar_t> > const&) in libtool.osx.opt.tools.64.a(utils.osx.opt.tools.64.o)
pe_bliss::pe_utils::from_ucs2(std::__1::basic_string<unsigned short, std::__1::char_traits<unsigned short>, std::__1::allocator<unsigned short> > const&) in libtool.osx.opt.tools.64.a(utils.osx.opt.tools.64.o)
"_iconv_close", referenced from:
pe_bliss::pe_utils::to_ucs2(std::__1::basic_string<wchar_t, std::__1::char_traits<wchar_t>, std::__1::allocator<wchar_t> > const&) in libtool.osx.opt.tools.64.a(utils.osx.opt.tools.64.o)
pe_bliss::pe_utils::from_ucs2(std::__1::basic_string<unsigned short, std::__1::char_traits<unsigned short>, std::__1::allocator<unsigned short> > const&) in libtool.osx.opt.tools.64.a(utils.osx.opt.tools.64.o)
"_iconv_open", referenced from:
pe_bliss::pe_utils::to_ucs2(std::__1::basic_string<wchar_t, std::__1::char_traits<wchar_t>, std::__1::allocator<wchar_t> > const&) in libtool.osx.opt.tools.64.a(utils.osx.opt.tools.64.o)
pe_bliss::pe_utils::from_ucs2(std::__1::basic_string<unsigned short, std::__1::char_traits<unsigned short>, std::__1::allocator<unsigned short> > const&) in libtool.osx.opt.tools.64.a(utils.osx.opt.tools.64.o)
ld: symbol(s) not found for architecture x86_64
This is on the current latest revision: da4f618 | non_main | iconv undefined symbols on osx at least undefined symbols for architecture iconv referenced from pe bliss pe utils to std basic string std allocator const in libtool osx opt tools a utils osx opt tools o pe bliss pe utils from std basic string std allocator const in libtool osx opt tools a utils osx opt tools o iconv close referenced from pe bliss pe utils to std basic string std allocator const in libtool osx opt tools a utils osx opt tools o pe bliss pe utils from std basic string std allocator const in libtool osx opt tools a utils osx opt tools o iconv open referenced from pe bliss pe utils to std basic string std allocator const in libtool osx opt tools a utils osx opt tools o pe bliss pe utils from std basic string std allocator const in libtool osx opt tools a utils osx opt tools o ld symbol s not found for architecture this is on the current latest revision | 0 |
4,903 | 25,187,081,690 | IssuesEvent | 2022-11-11 19:11:18 | amyjko/faculty | https://api.github.com/repos/amyjko/faculty | opened | Check for broken links at compile time | maintainability | We might be able to do this via tests, at least for internal site links. | True | Check for broken links at compile time - We might be able to do this via tests, at least for internal site links. | main | check for broken links at compile time we might be able to do this via tests at least for internal site links | 1 |
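One way to realise that compile-time check is a test that walks the built site and asserts every internal href resolves to a file. A minimal pytest sketch, where the output directory, the .html extension, and the href pattern are all assumptions about the site build:

```python
# test_links.py - fail the build when an internal link points nowhere.
import pathlib
import re

SITE = pathlib.Path("build")             # assumed output directory
HREF = re.compile(r'href="(/[^"#?]*)"')  # internal absolute links only

def test_internal_links_resolve():
    broken = []
    for page in SITE.rglob("*.html"):
        for target in HREF.findall(page.read_text(encoding="utf-8")):
            path = SITE / target.lstrip("/")
            if not (path.exists() or path.with_suffix(".html").exists()
                    or (path / "index.html").exists()):
                broken.append((str(page), target))
    assert not broken, "broken internal links: %r" % broken
```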
89,490 | 15,829,633,058 | IssuesEvent | 2021-04-06 11:27:22 | VivekBuzruk/UI | https://api.github.com/repos/VivekBuzruk/UI | opened | CVE-2020-7774 (High) detected in y18n-4.0.0.tgz | security vulnerability | ## CVE-2020-7774 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>y18n-4.0.0.tgz</b></p></summary>
<p>the bare-bones internationalization library used by yargs</p>
<p>Library home page: <a href="https://registry.npmjs.org/y18n/-/y18n-4.0.0.tgz">https://registry.npmjs.org/y18n/-/y18n-4.0.0.tgz</a></p>
<p>Path to dependency file: UI/package.json</p>
<p>Path to vulnerable library: UI/node_modules/y18n/package.json</p>
<p>
Dependency Hierarchy:
- compiler-cli-8.0.3.tgz (Root Library)
- yargs-13.1.0.tgz
- :x: **y18n-4.0.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://api.github.com/repos/VivekBuzruk/UI/commits/edeb2a2fd15349abe4886893f9325323672726f3">edeb2a2fd15349abe4886893f9325323672726f3</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package y18n before 3.2.2, 4.0.1 and 5.0.5. PoC by po6ix: const y18n = require('y18n')(); y18n.setLocale('__proto__'); y18n.updateLocale({polluted: true}); console.log(polluted); // true
<p>Publish Date: 2020-11-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7774>CVE-2020-7774</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1654">https://www.npmjs.com/advisories/1654</a></p>
<p>Release Date: 2020-11-17</p>
<p>Fix Resolution: 3.2.2, 4.0.1, 5.0.5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-7774 (High) detected in y18n-4.0.0.tgz - ## CVE-2020-7774 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>y18n-4.0.0.tgz</b></p></summary>
<p>the bare-bones internationalization library used by yargs</p>
<p>Library home page: <a href="https://registry.npmjs.org/y18n/-/y18n-4.0.0.tgz">https://registry.npmjs.org/y18n/-/y18n-4.0.0.tgz</a></p>
<p>Path to dependency file: UI/package.json</p>
<p>Path to vulnerable library: UI/node_modules/y18n/package.json</p>
<p>
Dependency Hierarchy:
- compiler-cli-8.0.3.tgz (Root Library)
- yargs-13.1.0.tgz
- :x: **y18n-4.0.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://api.github.com/repos/VivekBuzruk/UI/commits/edeb2a2fd15349abe4886893f9325323672726f3">edeb2a2fd15349abe4886893f9325323672726f3</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package y18n before 3.2.2, 4.0.1 and 5.0.5. PoC by po6ix: const y18n = require('y18n')(); y18n.setLocale('__proto__'); y18n.updateLocale({polluted: true}); console.log(polluted); // true
<p>Publish Date: 2020-11-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7774>CVE-2020-7774</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1654">https://www.npmjs.com/advisories/1654</a></p>
<p>Release Date: 2020-11-17</p>
<p>Fix Resolution: 3.2.2, 4.0.1, 5.0.5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve high detected in tgz cve high severity vulnerability vulnerable library tgz the bare bones internationalization library used by yargs library home page a href path to dependency file ui package json path to vulnerable library ui node modules package json dependency hierarchy compiler cli tgz root library yargs tgz x tgz vulnerable library found in head commit a href found in base branch master vulnerability details this affects the package before and poc by const require setlocale proto updatelocale polluted true console log polluted true publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
565 | 4,030,256,392 | IssuesEvent | 2016-05-18 13:44:58 | duckduckgo/zeroclickinfo-spice | https://api.github.com/repos/duckduckgo/zeroclickinfo-spice | closed | Indeed Jobs: Show local time in the location | Maintainer Submitted | As requested by a user.
------
IA Page: http://duck.co/ia/view/indeed_jobs
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @tagawa | True | Indeed Jobs: Show local time in the location - As requested by a user.
------
IA Page: http://duck.co/ia/view/indeed_jobs
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @tagawa | main | indeed jobs show local time in the location as requested by a user ia page tagawa | 1 |
1,189 | 5,103,995,126 | IssuesEvent | 2017-01-04 23:15:46 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | Ansible docker_container not exposing ports (2.2.0.0) | affects_2.2 bug_report cloud docker waiting_on_maintainer | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
docker_container
##### ANSIBLE VERSION
```
ansible 2.2.0.0
config file = /Users/rilindo/.ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
Using the following config:
```
stardust:ansible_docker_demo rilindo$ export ANSIBLE_CONFIG=$(pwd)/ansible.cfg
stardust:ansible_docker_demo rilindo$ export ANSIBLE_HOSTS=$(pwd)/hosts
stardust:ansible_docker_demo rilindo$ cat ansible.cfg
[defaults]
roles_path=~/src/ansible_docker_demo/roles
stardust:ansible_docker_demo rilindo$ cat hosts
192.168.64.8
```
##### OS / ENVIRONMENT
- Workstation: macOS Sierra
- Server: Ubuntu 14.04 with port 2375 open for docker
- Python version: 2.7.10
List of python modules:
```
altgraph (0.10.2)
ansible (2.2.0.0)
argparse (1.2.1)
awscli (1.7.36)
bdist-mpkg (0.5.0)
bonjour-py (0.3)
boto (2.10.0)
botocore (1.0.1)
cached-property (1.3.0)
colorama (0.3.3)
docker-compose (1.9.0rc3)
docker-py (1.10.6)
docker-pycreds (0.2.1)
dockerpty (0.4.1)
docopt (0.6.2)
docutils (0.12)
ecdsa (0.13)
elasticsearch (1.6.0)
enum34 (1.1.6)
```
##### SUMMARY
The docker_container module does not appear to configure and expose the requested ports.
##### STEPS TO REPRODUCE
Here is the code I am using:
```
---
- hosts: all
remote_user: ubuntu
vars_files:
- secret.yml
become: yes
become_method: sudo
tasks:
- name: create custom docker container
docker_container:
name: mycustomcontainer
image: rilindo/myapacheweb:v1
state: present
network_mode: bridge
exposed_ports: "80"
published_ports: "80:80"
```
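For reference, the docker_container module wraps docker-py (version 1.10.6 in the pip list above), and publishing the same port straight through docker-py would look roughly like the sketch below; the container name is hypothetical. One thing worth noting: `docker ps` only fills the PORTS column for running containers, and `state: present` creates a container without starting it, which may be all that the empty column reflects.

```python
import docker  # docker-py 1.10.x, the version from the pip list above

cli = docker.Client(base_url="tcp://192.168.64.8:2375")
container = cli.create_container(
    image="rilindo/myapacheweb:v1",
    name="mycustomcontainer-py",  # hypothetical name to avoid a clash
    ports=[80],                   # exposed port
    host_config=cli.create_host_config(port_bindings={80: 80}),  # -p 80:80
)
cli.start(container)  # a created-but-not-started container shows no PORTS
print(cli.port(container, 80))  # e.g. [{'HostIp': '0.0.0.0', 'HostPort': '80'}]
```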
##### EXPECTED RESULTS
I expect to see this in docker ps -a
```
root@ubuntu:~# docker run -d -p 80:80 rilindo/myapacheweb:v1 --name mycontainer
30d6c5ceb274bf4e0b3767eac0c3df3ed9b58c480f0d25378c1e66bf2b106767
root@ubuntu:~# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
30d6c5ceb274 rilindo/myapacheweb:v1 "/bin/sh -c 'apachect" 3 seconds ago Up 2 seconds 0.0.0.0:80->80/tcp angry_stonebraker
root@ubuntu:~#
```
##### ACTUAL RESULTS
This is what it looks like from my workstation
```
stardust:ansible_docker_demo rilindo$ ansible-playbook create_custom_docker_container.yml
PLAY [all] *********************************************************************
TASK [setup] *******************************************************************
ok: [192.168.64.8]
TASK [create custom docker container] ******************************************
changed: [192.168.64.8]
PLAY RECAP *********************************************************************
192.168.64.8 : ok=3 changed=1 unreachable=0 failed=0
```
So far, so good.
This is what I see on the docker host:
```
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ff35fdf076de rilindo/myapacheweb:v1 "/bin/sh -c 'apachect" 57 seconds ago Created mycustomcontainer
```
Note that the ports do not appear to be present under the **PORTS** column | True | Ansible docker_container not exposing ports (2.2.0.0) - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
docker_container
##### ANSIBLE VERSION
```
ansible 2.2.0.0
config file = /Users/rilindo/.ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
Using the following config:
```
stardust:ansible_docker_demo rilindo$ export ANSIBLE_CONFIG=$(pwd)/ansible.cfg
stardust:ansible_docker_demo rilindo$ export ANSIBLE_HOSTS=$(pwd)/hosts
stardust:ansible_docker_demo rilindo$ cat ansible.cfg
[defaults]
roles_path=~/src/ansible_docker_demo/roles
stardust:ansible_docker_demo rilindo$ cat hosts
192.168.64.8
```
##### OS / ENVIRONMENT
- Workstation: macOS Sierra
- Server: Ubuntu 14.04 with port 2375 open for docker
- Python version: 2.7.10
List of python modules:
```
altgraph (0.10.2)
ansible (2.2.0.0)
argparse (1.2.1)
awscli (1.7.36)
bdist-mpkg (0.5.0)
bonjour-py (0.3)
boto (2.10.0)
botocore (1.0.1)
cached-property (1.3.0)
colorama (0.3.3)
docker-compose (1.9.0rc3)
docker-py (1.10.6)
docker-pycreds (0.2.1)
dockerpty (0.4.1)
docopt (0.6.2)
docutils (0.12)
ecdsa (0.13)
elasticsearch (1.6.0)
enum34 (1.1.6)
```
##### SUMMARY
The docker_container module does not appear to configure and expose the requested ports.
##### STEPS TO REPRODUCE
Here is the code I am using:
```
---
- hosts: all
remote_user: ubuntu
vars_files:
- secret.yml
become: yes
become_method: sudo
tasks:
- name: create custom docker container
docker_container:
name: mycustomcontainer
image: rilindo/myapacheweb:v1
state: present
network_mode: bridge
exposed_ports: "80"
published_ports: "80:80"
```
##### EXPECTED RESULTS
I expect to see this in docker ps -a
```
root@ubuntu:~# docker run -d -p 80:80 rilindo/myapacheweb:v1 --name mycontainer
30d6c5ceb274bf4e0b3767eac0c3df3ed9b58c480f0d25378c1e66bf2b106767
root@ubuntu:~# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
30d6c5ceb274 rilindo/myapacheweb:v1 "/bin/sh -c 'apachect" 3 seconds ago Up 2 seconds 0.0.0.0:80->80/tcp angry_stonebraker
root@ubuntu:~#
```
##### ACTUAL RESULTS
This is what it looks like from my workstation
```
stardust:ansible_docker_demo rilindo$ ansible-playbook create_custom_docker_container.yml
PLAY [all] *********************************************************************
TASK [setup] *******************************************************************
ok: [192.168.64.8]
TASK [create custom docker container] ******************************************
changed: [192.168.64.8]
PLAY RECAP *********************************************************************
192.168.64.8 : ok=3 changed=1 unreachable=0 failed=0
```
So far, so good.
This is what I see on the docker host:
```
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ff35fdf076de rilindo/myapacheweb:v1 "/bin/sh -c 'apachect" 57 seconds ago Created mycustomcontainer
```
Note that the ports do not appear to be present under the **PORTS** column | main | ansible docker container not exposing ports issue type bug report component name docker container ansible version ansible config file users rilindo ansible cfg configured module search path default w o overrides configuration using the following config stardust ansible docker demo rilindo export ansible config pwd ansible cfg stardust ansible docker demo rilindo export ansible hosts pwd hosts stardust ansible docker demo rilindo cat ansible cfg roles path src ansible docker demo roles stardust ansible docker demo rilindo cat hosts os environment workstation os x macsiera server ubuntu with port open for docker python version list of python modules altgraph ansible argparse awscli bdist mpkg bonjour py boto botocore cached property colorama docker compose docker py docker pycreds dockerpty docopt docutils ecdsa elasticsearch summary the docker container does not appear to be configuring and exposing the ports steps to reproduce here is the code i am using hosts all remote user ubuntu vars files secret yml become yes become method sudo tasks name create custom docker container docker container name mycustomcontainer image rilindo myapacheweb state present network mode bridge exposed ports published ports expected results i expect to see this in docker ps a root ubuntu docker run d p rilindo myapacheweb name mycontainer root ubuntu docker ps a container id image command created status ports names rilindo myapacheweb bin sh c apachect seconds ago up seconds tcp angry stonebraker root ubuntu actual results this is what it looks like from my workstation stardust ansible docker demo rilindo ansible playbook create custom docker container yml play task ok task changed play recap ok changed unreachable failed so far so good this is what i see on the docker host container id image command created status ports names rilindo myapacheweb bin sh c apachect seconds ago created mycustomcontainer note that the ports do not appear to be present under the ports column | 1 |
1,821 | 6,577,329,571 | IssuesEvent | 2017-09-12 00:09:03 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | rds_param_group unable to set long_query_time for MySQL | affects_2.0 aws bug_report cloud waiting_on_maintainer | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
rds_param_group
##### ANSIBLE VERSION
```
ansible 2.0.2.0
config file = ~/ansible.cfg
configured module search path = Default w/o overrides
boto (2.40.0)
```
##### CONFIGURATION
[defaults]
ansible_managed = Ansible Managed
display_skipped_hosts = False
forks = 50
gathering = explicit
host_key_checking = False
nocows = 1
retry_files_enabled = False
lookup_plugins = ./lookup_plugins
##### OS / ENVIRONMENT
From: Mac OSX El Capitan
To: N/A (AWS)
##### SUMMARY
I am unable to set the long_query_time rds parameter group setting using the rds_param_group module. I have been able to successfully set many other variables without issue. The only thing I see different with this parameter is that it is a float data type. I have tried the following values without success: "5", "5.000000", "5.0". The value reported for this variable from MySQL is "5.000000". From the error below, I believe this may be an issue with boto. I found someone reporting the same issue in the Google Group back in January 2015 and it said it was an issue with boto: https://groups.google.com/forum/#!msg/ansible-project/iN7cFi2aw98/w1oc2SEFKUcJ. No solution was posted.
##### STEPS TO REPRODUCE
Create a RDS parameter group for MySQL and try to set the long_query_time parameter.
```
rds_parameter_groups:
nw_mysql56_default:
description: "default rds parameter group for - mysql5.6"
engine: mysql5.6
immediate: yes
name: nw-mysql56-default
params:
connect_timeout: 100
general_log: 0
innodb_flush_log_at_trx_commit: 2
log_output: FILE
slow_query_log: 1
long_query_time: 5.000000
wait_timeout: 180000
state: present
- name: create rds parameter groups
rds_param_group:
description: "{{ item.value.description }}"
engine: "{{ item.value.engine }}"
immediate: "{{ item.value.immediate }}"
name: "{{ item.value.name }}"
params: "{{ item.value.params|to_json }}"
state: "{{ item.value.state }}"
with_dict: "{{ rds_parameter_groups }}"
```
##### EXPECTED RESULTS
The long_query_time parameter should be set to the value specified.
##### ACTUAL RESULTS
The module reports the error below:
```
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ValueError: value must be in 0-31536000
failed: [localhost] (item={'value': {u'engine': u'mysql5.6', u'description': u'default rds parameter group for - mysql5.6', u'immediate': True, u'state': u'present', u'params': {u'general_log': 0, u'slow_query_log': 1, u'connect_timeout': 100, u'wait_timeout': 180000, u'log_output': u'FILE', u'long_query_time': u'5.000000', u'innodb_flush_log_at_trx_commit': 2}, u'name': u'nw-mysql56-default'}, 'key': u'nw_mysql56_default'}) => {"failed": true, "item": {"key": "nw_mysql56_default", "value": {"description": "default rds parameter group for - mysql5.6", "engine": "mysql5.6", "immediate": true, "name": "nw-mysql56-default", "params": {"connect_timeout": 100, "general_log": 0, "innodb_flush_log_at_trx_commit": 2, "log_output": "FILE", "long_query_time": "5.000000", "slow_query_log": 1, "wait_timeout": 180000}, "state": "present"}}, "module_stderr": "Traceback (most recent call last):\n File \"/Users/jeremy/.ansible/tmp/ansible-tmp-1466681599.29-50297407190710/rds_param_group\", line 2506, in <module>\n main()\n File \"/Users/jeremy/.ansible/tmp/ansible-tmp-1466681599.29-50297407190710/rds_param_group\", line 277, in main\n changed_params, group_params = modify_group(next_group, group_params, immediate)\n File \"/Users/jeremy/.ansible/tmp/ansible-tmp-1466681599.29-50297407190710/rds_param_group\", line 198, in modify_group\n set_parameter(param, new_value, immediate)\n File \"/Users/jeremy/.ansible/tmp/ansible-tmp-1466681599.29-50297407190710/rds_param_group\", line 167, in set_parameter\n param.value = converted_value\n File \"/usr/local/lib/python2.7/site-packages/boto/rds/parametergroup.py\", line 169, in set_value\n self._set_string_value(value)\n File \"/usr/local/lib/python2.7/site-packages/boto/rds/parametergroup.py\", line 141, in _set_string_value\n raise ValueError('value must be in %s' % self.allowed_values)\nValueError: value must be in 0-31536000\n", "module_stdout": "", "msg": "MODULE FAILURE", "parsed": false}
```
| True | rds_param_group unable to set long_query_time for MySQL - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
rds_param_group
##### ANSIBLE VERSION
```
ansible 2.0.2.0
config file = ~/ansible.cfg
configured module search path = Default w/o overrides
boto (2.40.0)
```
##### CONFIGURATION
[defaults]
ansible_managed = Ansible Managed
display_skipped_hosts = False
forks = 50
gathering = explicit
host_key_checking = False
nocows = 1
retry_files_enabled = False
lookup_plugins = ./lookup_plugins
##### OS / ENVIRONMENT
From: Mac OSX El Capitan
To: N/A (AWS)
##### SUMMARY
I am unable to set the long_query_time RDS parameter group setting using the rds_param_group module. I have been able to successfully set many other variables without issue. The only thing I see different about this parameter is that it is a float data type. I have tried the following values without success: "5", "5.000000", and "5.0". The value reported for this variable by MySQL is "5.000000". From the error below, I believe this may be an issue with boto. I found someone reporting the same issue in the Google Group back in January 2015, where it was said to be an issue with boto: https://groups.google.com/forum/#!msg/ansible-project/iN7cFi2aw98/w1oc2SEFKUcJ. No solution was posted.
##### STEPS TO REPRODUCE
Create an RDS parameter group for MySQL and try to set the long_query_time parameter.
```
rds_parameter_groups:
nw_mysql56_default:
description: "default rds parameter group for - mysql5.6"
engine: mysql5.6
immediate: yes
name: nw-mysql56-default
params:
connect_timeout: 100
general_log: 0
innodb_flush_log_at_trx_commit: 2
log_output: FILE
slow_query_log: 1
long_query_time: 5.000000
wait_timeout: 180000
state: present
- name: create rds parameter groups
rds_param_group:
description: "{{ item.value.description }}"
engine: "{{ item.value.engine }}"
immediate: "{{ item.value.immediate }}"
name: "{{ item.value.name }}"
params: "{{ item.value.params|to_json }}"
state: "{{ item.value.state }}"
with_dict: "{{ rds_parameter_groups }}"
```
##### EXPECTED RESULTS
The long_query_time parameter should be set to the value specified.
##### ACTUAL RESULTS
The module reports the error below:
```
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ValueError: value must be in 0-31536000
failed: [localhost] (item={'value': {u'engine': u'mysql5.6', u'description': u'default rds parameter group for - mysql5.6', u'immediate': True, u'state': u'present', u'params': {u'general_log': 0, u'slow_query_log': 1, u'connect_timeout': 100, u'wait_timeout': 180000, u'log_output': u'FILE', u'long_query_time': u'5.000000', u'innodb_flush_log_at_trx_commit': 2}, u'name': u'nw-mysql56-default'}, 'key': u'nw_mysql56_default'}) => {"failed": true, "item": {"key": "nw_mysql56_default", "value": {"description": "default rds parameter group for - mysql5.6", "engine": "mysql5.6", "immediate": true, "name": "nw-mysql56-default", "params": {"connect_timeout": 100, "general_log": 0, "innodb_flush_log_at_trx_commit": 2, "log_output": "FILE", "long_query_time": "5.000000", "slow_query_log": 1, "wait_timeout": 180000}, "state": "present"}}, "module_stderr": "Traceback (most recent call last):\n File \"/Users/jeremy/.ansible/tmp/ansible-tmp-1466681599.29-50297407190710/rds_param_group\", line 2506, in <module>\n main()\n File \"/Users/jeremy/.ansible/tmp/ansible-tmp-1466681599.29-50297407190710/rds_param_group\", line 277, in main\n changed_params, group_params = modify_group(next_group, group_params, immediate)\n File \"/Users/jeremy/.ansible/tmp/ansible-tmp-1466681599.29-50297407190710/rds_param_group\", line 198, in modify_group\n set_parameter(param, new_value, immediate)\n File \"/Users/jeremy/.ansible/tmp/ansible-tmp-1466681599.29-50297407190710/rds_param_group\", line 167, in set_parameter\n param.value = converted_value\n File \"/usr/local/lib/python2.7/site-packages/boto/rds/parametergroup.py\", line 169, in set_value\n self._set_string_value(value)\n File \"/usr/local/lib/python2.7/site-packages/boto/rds/parametergroup.py\", line 141, in _set_string_value\n raise ValueError('value must be in %s' % self.allowed_values)\nValueError: value must be in 0-31536000\n", "module_stdout": "", "msg": "MODULE FAILURE", "parsed": false}
```
| main | rds param group unable to set long query time for mysql issue type bug report component name rds param group ansible version ansible config file ansible cfg configured module search path default w o overrides boto configuration ansible managed ansible managed display skipped hosts false forks gathering explicit host key checking false nocows retry files enabled false lookup plugins lookup plugins os environment from mac osx el capitan to n a aws summary i am unable to set the long query time rds parameter group setting using the rds param grroup module i have been able to successfully set many other variables without issue the only thing i see different with this parameter is that it is a float data type i have tried using the following values without success the value reported for this variable from mysql is from the error below i believe this may be an issue with boto i found someone reporting the same issue in the google group back in january and it said it was an issue with boto no solution was posted steps to reproduce create a rds parameter group for mysql and try to set the long query time parameter rds parameter groups nw default description default rds parameter group for engine immediate yes name nw default params connect timeout general log innodb flush log at trx commit log output file slow query log long query time wait timeout state present name create rds parameter groups rds param group description item value description engine item value engine immediate item value immediate name item value name params item value params to json state item value state with dict rds parameter groups expected results the long query time parameter should be set to the value specified actual results the module reports the error below an exception occurred during task execution to see the full traceback use vvv the error was valueerror value must be in failed item value u engine u u description u default rds parameter group for u immediate true u state u present u params u general log u slow query log u connect timeout u wait timeout u log output u file u long query time u u innodb flush log at trx commit u name u nw default key u nw default failed true item key nw default value description default rds parameter group for engine immediate true name nw default params connect timeout general log innodb flush log at trx commit log output file long query time slow query log wait timeout state present module stderr traceback most recent call last n file users jeremy ansible tmp ansible tmp rds param group line in n main n file users jeremy ansible tmp ansible tmp rds param group line in main n changed params group params modify group next group group params immediate n file users jeremy ansible tmp ansible tmp rds param group line in modify group n set parameter param new value immediate n file users jeremy ansible tmp ansible tmp rds param group line in set parameter n param value converted value n file usr local lib site packages boto rds parametergroup py line in set value n self set string value value n file usr local lib site packages boto rds parametergroup py line in set string value n raise valueerror value must be in s self allowed values nvalueerror value must be in n module stdout msg module failure parsed false | 1 |
4,185 | 20,239,382,259 | IssuesEvent | 2022-02-14 07:38:38 | MDAnalysis/mdanalysis | https://api.github.com/repos/MDAnalysis/mdanalysis | opened | Update the minimum cython version in setup.py | maintainability | See discussion at https://github.com/MDAnalysis/mdanalysis/pull/3527#issuecomment-1038518309
The minimum version of cython set in `setup.py` is older than the minimum supported by numpy.
This may be related to #3526 from @tylerjereddy and is a follow up to #3527. | True | Update the minimum cython version in setup.py - See discussion at https://github.com/MDAnalysis/mdanalysis/pull/3527#issuecomment-1038518309
The minimum version of cython set in `setup.py` is older than the minimum supported by numpy.
This may be related to #3526 from @tylerjereddy and is a follow up to #3527. | main | update the minimum cython version in setup py see discussion at the minimum version of cython set in setup py is older than the minimum supported by numpy this may be related to from tylerjereddy and is a follow up to | 1 |
125,025 | 12,245,544,460 | IssuesEvent | 2020-05-05 13:09:37 | niftools/blender_nif_plugin | https://api.github.com/repos/niftools/blender_nif_plugin | closed | Scene : Make the versioning system functional | Documentation Improvement Usability User Interface | Implement an actual versioning system which is user friendly.
Currently we store the version information in the scene for compatibility reasons.
I was originally going to remove it entirely, as it doesn't add value and it's the main generator of issues.
The existing solution was based on the fact that we only really care about versioning at import/export time.
Reasoning
- Reduce export ui
- Decouple the code in some areas that rely on version information
- Information Filtering - https://github.com/niftools/blender_nif_plugin/issues/59
Issues with solution :
- Knowing whether or not the global should be updated
- Conversion between decimal and hex (see the sketch after this list)
- Different systems to represent the same information variant - game, version, tuples
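On the decimal/hex conversion, the mapping itself is small. This is only a sketch under the assumption that versions pack four 0-255 components into one 32-bit integer (the way NIF tools usually display e.g. 20.2.0.7 as 0x14020007); the plugin may store it differently:
```python
# Hedged sketch: convert a user-friendly x.y.z.w version to the single
# integer form and back. Assumes four 0-255 components per version.
def version_to_int(parts):
    assert len(parts) == 4 and all(0 <= p <= 255 for p in parts)
    value = 0
    for part in parts:
        value = (value << 8) | part
    return value

def int_to_version(value):
    return tuple((value >> shift) & 0xFF for shift in (24, 16, 8, 0))

assert version_to_int((20, 2, 0, 7)) == 0x14020007
assert int_to_version(0x14020007) == (20, 2, 0, 7)
```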
This is a usability improvement over the following basic implementation: #227
Improvements to solutions
- [x] Let the user decide to override the target on import
- [x] Decouple the versioning code
- [ ] Implement a user friendly ui that exposes the more common x.y.z notation
- [ ] store a user friendly version
- [ ] Game prefabs
| 1.0 | Scene : Make the versioning system functional - Implement an actual versioning system which is user friendly.
Currently we store the version information in the scene for compatibility reasons.
I was originally going to remove it entirely, as it doesn't add value and it's the main generator of issues.
The existing solution was based on the fact that we only really care about versioning at import/export time.
Reasoning
- Reduce export ui
- Decouple the code in some areas that rely on version information
- Information Filtering - https://github.com/niftools/blender_nif_plugin/issues/59
Issues with solution :
- Knowing whether or not the global should be updated
- Conversion between decimal and hex
- Different systems to represent the same information variant - game, version, tuples
This is a usability improvement over the following basic implementation: #227
Improvements to solutions
- [x] Let the user decide to override the target on import
- [x] Decouple the versioning code
- [ ] Implement a user friendly ui that exposes the more common x.y.z notation
- [ ] store a user friendly version
- [ ] Game prefabs
| non_main | scene make the versioning system functional implement an actual versioning system which is user friendly currently we store the version information in the scene for comparability reasons i was originally going to totally remove it although as it doesn t add value and its the main generator of issues existing solution was based on the fact that we only really care about versioning at import export time reasoning reduce export ui decouple the code in some areas that relies on version information information filtering issues with solution knowing whether or not the global should be updated conversion between decimal and hex different systems to represent the same information variant game version tuples this is an usability improvement over the following basic implementation improvements to solutions let the user decide to override the target on import decouple the versioning code implement a user friendly ui that exposes the more common x y z notation store a user friendly version game prefabs | 0 |
1,424 | 6,193,952,896 | IssuesEvent | 2017-07-05 08:42:27 | ocaml/opam-repository | https://api.github.com/repos/ocaml/opam-repository | closed | missing package: oth | incorrect constraints needs maintainer action | The following packages test-depend on `oth`, which doesn't exist in `opam-repository`.
* `lua_pattern.1.7`
* `snabela.1.0`
| True | missing package: oth - The following packages test-depend on `oth`, which doesn't exist in `opam-repository`.
* `lua_pattern.1.7`
* `snabela.1.0`
| main | missing package oth the following packages test depend on oth which doesn t exist in opam repository lua pattern snabela | 1 |
1,224 | 5,217,706,303 | IssuesEvent | 2017-01-26 14:42:28 | duckduckgo/zeroclickinfo-goodies | https://api.github.com/repos/duckduckgo/zeroclickinfo-goodies | closed | Conversions: feet + inches to metric | Improvement Maintainer Approved Status: PR Received | I'm European, and basically the only time I see imperial height measurements is in the format x' y'', which (to my knowledge) DDG cannot convert. To do this conversion I currently have to run 2 queries and then sum the results: one for feet and one for inches.
My request is that this gets patched to handle feet and inches in a single query, meaning it can handle the ft and in suffixes, as well as ' for feet and both a double quote " and two single quotes '' for inches.
I also imagine that people in countries that use imperial would like to have metric converted to 2 answers (at least when querying about "x unit to ft"); one that simply converts x to y, and one that converts x to y' z''.
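For illustration, the parsing and conversion are only a few lines. This is sketched in Python rather than the instant answer's Perl, and the pattern below is mine, not the Goodie's actual trigger:
```python
# Hedged sketch: accept 5' 11'', 5'11", 5 ft 11 in, etc., and convert to metres.
import re

FEET_INCHES = re.compile(
    r"^\s*(\d+)\s*(?:'|ft)\s*(\d+(?:\.\d+)?)\s*(?:\"|''|in(?:ches)?)?\s*$")

def to_metres(query):
    match = FEET_INCHES.match(query)
    if not match:
        raise ValueError('not a feet-and-inches expression')
    feet, inches = int(match.group(1)), float(match.group(2))
    return (feet * 12 + inches) * 0.0254  # 1 inch = 0.0254 m exactly

print(round(to_metres("5' 11''"), 4))     # 1.8034
print(round(to_metres('5 ft 11 in'), 4))  # 1.8034
```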
---
IA Page: http://duck.co/ia/view/conversions
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @mintsoft
| True | Conversions: feet + inches to metric - I'm European and basically the only times I see imperial height measurements they most often come in the format of x' y'' which (to my knowledge) DDG cannot convert. To do this I have to do 2 queries and then sum. One for feet and one for inches.
My request is that this gets patched to handle feet and inches in a single query, meaning it can handle the ft and in suffixes, as well as ' for feet and both a double quote " and two single quotes '' for inches.
I also imagine that people in countries that use imperial would like to have metric converted to 2 answers (at least when querying about "x unit to ft"); one that simply converts x to y, and one that converts x to y' z''.
---
IA Page: http://duck.co/ia/view/conversions
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @mintsoft
| main | conversions feet inches to metric i m european and basically the only times i see imperial height measurements they most often come in the format of x y which to my knowledge ddg cannot convert to do this i have to do queries and then sum one for feet and one for inches my request is that this get patched to be able to handle feet and inches in a single query meaning being able to handle ft in postfixes as well as as feet and both double quotes and single quotes as inches i also imagine that people in countries that use imperial would like to have metric converted to answers at least when querying about x unit to ft one that simply converts x to y and one that converts x to y z ia page mintsoft | 1 |
2,953 | 10,602,529,347 | IssuesEvent | 2019-10-10 14:23:03 | lrusso96/simple-biblio | https://api.github.com/repos/lrusso96/simple-biblio | closed | Fix "method_lines" issue LibraryGenesis class | maintainability | Method `search` has 30 lines of code (exceeds 25 allowed). Consider refactoring.
https://codeclimate.com/github/lrusso96/simple-biblio/src/main/java/lrusso96/simplebiblio/core/providers/libgen/LibraryGenesis.java#issue_5d9f378f725a710001000026 | True | Fix "method_lines" issue LibraryGenesis class - Method `search` has 30 lines of code (exceeds 25 allowed). Consider refactoring.
https://codeclimate.com/github/lrusso96/simple-biblio/src/main/java/lrusso96/simplebiblio/core/providers/libgen/LibraryGenesis.java#issue_5d9f378f725a710001000026 | main | fix method lines issue librarygenesis class method search has lines of code exceeds allowed consider refactoring | 1 |
110,911 | 9,481,976,332 | IssuesEvent | 2019-04-21 10:51:36 | wp-cli/embed-command | https://api.github.com/repos/wp-cli/embed-command | closed | Test broken for `--force-regex` flag | bug scope:testing | The tests are currently failing for the `List oEmbed providers` scenario when it tries to test the effects of the `--force-regex` flag.
It took me a bit of research to understand what's happening. Here's my current understanding:
- oEmbed provider formats can be provided as proper regexes, or in a simplified format that uses an asterisk as a wildcard.
- The `--force-regex` flag turns the simplified format into a proper regex (see the sketch after this list).
- The test assumed that Core comes with some simplified formats by default. Therefore, to test whether the `--force-regex` had an effect, it simplifies the logic by just testing whether the result with the flag is different from the result without the flag.
- Now, Core currently only ships with proper regex formats by default, no simplified ones.
- This makes the test fail, as there's nothing to convert, and hence the result with or without the flag is exactly the same.
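For reference, the wildcard-to-regex conversion itself is tiny. A hedged sketch in Python (WordPress core is PHP, and its escaping details may differ):
```python
# Hedged sketch: turn a simplified oEmbed format (asterisk wildcard) into a
# proper anchored regex. Details are assumptions, not WP core's code.
import re

def simplified_to_regex(fmt):
    # Escape every literal character, then restore "*" as a ".*" wildcard.
    return '^' + re.escape(fmt).replace(r'\*', '.*') + '$'

assert re.match(simplified_to_regex('https://example.com/*/embed'),
                'https://example.com/some-post/embed')
```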
This means that the test needs to be changed to verify this in a different way. I think the easiest is to filter the list of providers before retrieving them, injecting a simplified format in there and comparing before & after application of the `--force-regex` flag. If we inject one ourselves, we can even test against the exact regex to be generated. | 1.0 | Test broken for `--force-regex` flag - The tests are currently failing for the `List oEmbed providers` scenario when it tries to test the effects of the `--force-regex` flag.
It took me a bit of research to understand what's happening. Here's my current understanding:
- oEmbed provider formats can be provided as proper regexes, or in a simplified format that uses an asterisk as a wildcard.
- The `--force-regex` flag turns the simplified format into a proper regex.
- The test assumed that Core comes with some simplified formats by default. Therefore, to test whether the `--force-regex` had an effect, it simplifies the logic by just testing whether the result with the flag is different from the result without the flag.
- Now, Core currently only ships with proper regex formats by default, no simplified ones.
- This makes the test fail, as there's nothing to convert, and hence the result with or without the flag is exactly the same.
This means that the test needs to be changed to verify this in a different way. I think the easiest is to filter the list of providers before retrieving them, injecting a simplified format in there and comparing before & after application of the `--force-regex` flag. If we inject one ourselves, we can even test against the exact regex to be generated. | non_main | test broken for force regex flag the tests are currently failing for the list oembed providers scenario when it tries to test the effects of the force regex flag it took me a bit of research to understand what s happening here s my current understanding oembed provider formats can be provided as proper regexes or in a simplified format that uses an asterisk as a wildcard the force regex flag turns the simplified format in a proper regex the test assumed that core comes with some simplified formats by default therefore to test whether the force regex had an effect it simplifies the logic by just testing whether the result with the flag is different from the result without the flag now core currently only ships with proper regex formats by default no simplified ones this makes the test fail as there s nothing to convert and hence the result with or without the flag is exactly the same this means that the test needs to be changed to verify this in a different way i think the easiest is to filter the list of providers before retrieving them injecting a simplified format in there and comparing before after application of the force regex flag if we inject one ourselves we can even test against the exact regex to be generated | 0 |
53,588 | 13,183,348,479 | IssuesEvent | 2020-08-12 17:21:11 | tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow | closed | ERROR HINDERING MY ABILITY TO USE TENSOR FLOW | stalled stat:awaiting response type:build/install |
[TF Error.docx](https://github.com/tensorflow/tensorflow/files/4993986/TF.Error.docx)
Thank you for submitting a TensorFlow documentation issue. Per our GitHub
policy, we only address code/doc bugs, performance issues, feature requests, and
build/installation issues on GitHub.
The TensorFlow docs are open source! To get involved, read the documentation
contributor guide: https://www.tensorflow.org/community/contribute/docs
## URL(s) with the issue:
Please provide a link to the documentation entry, for example:
https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/MyMethod
## Description of issue (what needs changing):
### Clear description
For example, why should someone use this method? How is it useful?
### Correct links
Is the link to the source code correct?
### Parameters defined
Are all parameters defined and formatted correctly?
### Returns defined
Are return values defined?
### Raises listed and defined
Are the errors defined? For example,
https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/feature_column/categorical_column_with_vocabulary_file#raises
### Usage example
Is there a usage example?
See the API guide: https://www.tensorflow.org/community/contribute/docs_ref
on how to write testable usage examples.
### Request visuals, if applicable
Are there currently visuals? If not, will it clarify the content?
### Submit a pull request?
Are you planning to also submit a pull request to fix the issue? See the docs
contributor guide: https://www.tensorflow.org/community/contribute/docs,
docs API guide: https://www.tensorflow.org/community/contribute/docs_ref and the
docs style guide: https://www.tensorflow.org/community/contribute/docs_style
| 1.0 | ERROR HINDERING MY ABILITY TO USE TENSOR FLOW -
[TF Error.docx](https://github.com/tensorflow/tensorflow/files/4993986/TF.Error.docx)
Thank you for submitting a TensorFlow documentation issue. Per our GitHub
policy, we only address code/doc bugs, performance issues, feature requests, and
build/installation issues on GitHub.
The TensorFlow docs are open source! To get involved, read the documentation
contributor guide: https://www.tensorflow.org/community/contribute/docs
## URL(s) with the issue:
Please provide a link to the documentation entry, for example:
https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/MyMethod
## Description of issue (what needs changing):
### Clear description
For example, why should someone use this method? How is it useful?
### Correct links
Is the link to the source code correct?
### Parameters defined
Are all parameters defined and formatted correctly?
### Returns defined
Are return values defined?
### Raises listed and defined
Are the errors defined? For example,
https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/feature_column/categorical_column_with_vocabulary_file#raises
### Usage example
Is there a usage example?
See the API guide: https://www.tensorflow.org/community/contribute/docs_ref
on how to write testable usage examples.
### Request visuals, if applicable
Are there currently visuals? If not, will it clarify the content?
### Submit a pull request?
Are you planning to also submit a pull request to fix the issue? See the docs
contributor guide: https://www.tensorflow.org/community/contribute/docs,
docs API guide: https://www.tensorflow.org/community/contribute/docs_ref and the
docs style guide: https://www.tensorflow.org/community/contribute/docs_style
| non_main | error hindering my ability to use tensor flow thank you for submitting a tensorflow documentation issue per our github policy we only address code doc bugs performance issues feature requests and build installation issues on github the tensorflow docs are open source to get involved read the documentation contributor guide url s with the issue please provide a link to the documentation entry for example description of issue what needs changing clear description for example why should someone use this method how is it useful correct links is the link to the source code correct parameters defined are all parameters defined and formatted correctly returns defined are return values defined raises listed and defined are the errors defined for example usage example is there a usage example see the api guide on how to write testable usage examples request visuals if applicable are there currently visuals if not will it clarify the content submit a pull request are you planning to also submit a pull request to fix the issue see the docs contributor guide docs api guide and the docs style guide | 0 |
2,121 | 7,243,320,948 | IssuesEvent | 2018-02-14 11:16:07 | RalfKoban/MiKo-Analyzers | https://api.github.com/repos/RalfKoban/MiKo-Analyzers | opened | DependencyProperty.Register() calls should use correct data | Area: analyzer Area: maintainability feature | To prevent typos, all `DependencyProperty.Register()` calls should use
- `nameof(xyz)` as 1st parameter
- correct return type of the `xyz` property as 2nd parameter
- correct owner type (the containing type itself) as 3rd parameter
- correct value for 4th parameter (default value for `xyz` property)
```csharp
public static readonly DependencyProperty IsSelectedProperty = DependencyProperty.Register(nameof(IsSelected), typeof(bool), typeof(MyType), new PropertyMetadata(default(bool)));
``` | True | DependencyProperty.Register() calls should use correct data - To prevent typos, all `DependencyProperty.Register()` calls should use
- `nameof(xyz)` as 1st parameter
- correct return type of the `xyz` property as 2nd parameter
- correct owner type (the containing type itself) as 3rd parameter
- correct value for 4th parameter (default value for `xyz` property)
```csharp
public static readonly DependencyProperty IsSelectedProperty = DependencyProperty.Register(nameof(IsSelected), typeof(bool), typeof(MyType), new PropertyMetadata(default(bool)));
``` | main | dependencyproperty register calls should use correct data to prevent typos all dependencyproperty register calls should use nameof xyz as parameter correct return value of xyz property as parameter correct type same type as parameter correct value for parameter default value for xyz property public static readonly dependencyproperty isselectedproperty dependencyproperty register nameof xyz typeof bool typeof mytype new propertymetadata default bool | 1 |
4,004 | 18,681,171,066 | IssuesEvent | 2021-11-01 05:59:24 | prismatic-obloquy/chattor | https://api.github.com/repos/prismatic-obloquy/chattor | opened | Tasks for 0.0.1 | maintainer-only | No real organization yet, so I'm dumping all the (remaining, as of this writing) tasks into one big pile:
- [ ] Finish writing test infrastructure
- [ ] Write cleaner aggregation for unit tests
- [ ] Figure out how to do integration tests
- [ ] Put together fuzz testing, or decide not to
- [ ] Build CI pipeline somewhere to build and do all the tests
- [ ] Set up repo
- [ ] Go through issue and PR templates and make sure they're good
- [ ] Decide on mechanism for asking questions (Discussions?)
- [ ] Integrate CI, see previous
- [ ] Protect release/staging to require PRs + passing CI + code review
- [ ] Code owners?
- [ ] Set up documentation
- [ ] Figure out where stuff like COC, API docs, protocol docs will live
- [ ] Auto-build API docs somehow
- [ ] Deduplicate README and website
- [ ] Smoke tests
- [ ] Build on Windows
- [ ] Build on Mac
- [ ] Build for ARM
- [ ] Build against Clang + MUSL (set that as the default?) | True | Tasks for 0.0.1 - No real organization yet, so I'm dumping all the (remaining, as of this writing) tasks into one big pile:
- [ ] Finish writing test infrastructure
- [ ] Write cleaner aggregation for unit tests
- [ ] Figure out how to do integration tests
- [ ] Put together fuzz testing, or decide not to
- [ ] Build CI pipeline somewhere to build and do all the tests
- [ ] Set up repo
- [ ] Go through issue and PR templates and make sure they're good
- [ ] Decide on mechanism for asking questions (Discussions?)
- [ ] Integrate CI, see previous
- [ ] Protect release/staging to require PRs + passing CI + code review
- [ ] Code owners?
- [ ] Set up documentation
- [ ] Figure out where stuff like COC, API docs, protocol docs will live
- [ ] Auto-build API docs somehow
- [ ] Deduplicate README and website
- [ ] Smoke tests
- [ ] Build on Windows
- [ ] Build on Mac
- [ ] Build for ARM
- [ ] Build against Clang + MUSL (set that as the default?) | main | tasks for no real organization yet so i m dumping all the remaining as of this writing tasks into one big pile finish writing test infrastructure write cleaner aggregation for unit tests figure out how to do integration tests put together fuzz testing or decide not to build ci pipeline somewhere to build and do all the tests set up repo go through issue and pr templates and make sure they re good decide on mechanism for asking questions discussions integrate ci see previous protect release staging to require prs passing ci code review code owners set up documentation figure out where stuff like coc api docs protocol docs will live auto build api docs somehow deduplicate readme and website smoke tests build on windows build on mac build for arm build against clang musl set that as the default | 1 |
333,779 | 29,807,499,084 | IssuesEvent | 2023-06-16 12:47:54 | Khalon-Bridge/gitunioin-test-specs | https://api.github.com/repos/Khalon-Bridge/gitunioin-test-specs | opened | Contact Property Managers | bounty: 2 currency: ETH feature repo:gitunioin-test tasks:3 gitunion gitunion-app owner:Khalon-Bridge private org tech: solidity/react/nodeJs | ### Description
As a user, once I've found the property that I am interested in, I want to be able to contact the property manager directly from the platform so that I can get more information or arrange a viewing without having to visit multiple websites or make phone calls.
### Acceptance criteria
- [ ] The user can click on a button to initiate contact with the property manager from the property's page
- [ ] The contact form should be easily accessible and intuitive for users to fill in
- [ ] The property manager should respond to the user's inquiry within a reasonable timeframe | 1.0 | Contact Property Managers - ### Description
As a user, once I've found the property that I am interested in, I want to be able to contact the property manager directly from the platform so that I can get more information or arrange a viewing without having to visit multiple websites or make phone calls.
### Acceptance criteria
- [ ] The user can click on a button to initiate contact with the property manager from the property's page
- [ ] The contact form should be easily accessible and intuitive for users to fill in
- [ ] The property manager should respond to the user's inquiry within a reasonable timeframe | non_main | contact property managers description as a user once i ve found the property that i am interested in i want to be able to contact the property manager directly from the platform so that i can get more information or arrange a viewing without having to visit multiple websites or make phone calls acceptance criteria the user can click on a button to initiate contact with the property manager from the property s page the contact form should be easily accessible and intuitive for users to fill in the property manager should respond to the user s inquiry within a reasonable timeframe | 0 |
14,001 | 8,440,252,913 | IssuesEvent | 2018-10-18 06:31:35 | cdie/CQELight | https://api.github.com/repos/cdie/CQELight | closed | Performance issue in ReflectionTools | performance | On method `GetAllTypes`, the `s_AllTypes` is set with a distinct call on every call, even if no new types are added. This should not happen if there's nothing new | True | Performance issue in ReflectionTools - On method `GetAllTypes`, the `s_AllTypes` is set with a distinct call on every call, even if no new types are added. This should not happen if there's nothing new | non_main | performance issue in reflectiontools on method getalltypes the s alltypes is set with a distinct call on every call even if no new types are added this should not happen if there s nothing new | 0 |
111,163 | 17,019,007,726 | IssuesEvent | 2021-07-02 15:53:20 | raft-tech/TANF-app | https://api.github.com/repos/raft-tech/TANF-app | closed | AU-02: Auditable Events | security | **Note**
This might be inherited from cloud.gov. If so, this wouldn't need to be documented in GitHub.
AC:
- [ ] Control implementation statement has been reviewed by Raft for technical accuracy
- [ ] Control implementation statement has passed QASP review
DoD:
- [ ] Control implementation statement has been documented in GitHub
**Control Description:**
The organization:
a. Determines that the information system is capable of auditing the following events: [i. The following events must be identified within server audit logs:
• Server startup and shutdown;
• Loading and unloading of services;
• Installation and removal of software;
• System alerts and error messages;
• User logon and logoff;
• System administration activities;
• Accesses to sensitive information, files, and systems
• Account creation, modification, or deletion;
• Modifications of privileges and access controls; and,
• Additional security-related events, as required by the System Owner (SO) or to support the nature of the supported business and applications.
ii. The following events must be identified within application and database audit logs:
• Modifications to the application;
• Application alerts and error messages;
• User logon and logoff;
• System administration activities;
• Accesses to information and files
• Account creation, modification, or deletion; and,
• Modifications of privileges and access controls.
iii. The following events must be identified within network device (e.g., router, firewall, switch, wireless access point) audit logs:
• Device startup and shutdown;
• Administrator logon and logoff;
• Configuration changes;
• Account creation, modification, or deletion;
• Modifications of privileges and access controls; and,
• System alerts and error messages.];
b. Coordinates the security audit function with other organizational entities requiring audit-related information to enhance mutual support and to help guide the selection of auditable events;
c. Provides a rationale for why the auditable events are deemed to be adequate to support after-the-fact investigations of security incidents; and
d. Determines that the following events are to be audited within the information system: [Unsuccessful log-on attempts that result in a locked account/node; Configuration changes; Application alerts and error messages; System administration activities; Modification of privileges and access; and Account creation, modification, or deletion].
For CSP Only
AU-2 (a) [Successful and unsuccessful account logon events, account management events, object access, policy change, privilege functions, process tracking, and system events For Web applications: all administrator activity, authentication checks, authorization checks, data deletions, data access, data changes, and permission changes]
AU-2 (d) [organization-defined subset of the auditable events defined in AU-2 a to be audited continually for each identified event]
AU-2 Additional FedRAMP Requirements and Guidance:
Requirement: Coordination between service provider and consumer shall be documented and accepted by the JAB/AO. | True | AU-02: Auditable Events - **Note**
This might be inherited from cloud.gov. If so, this wouldn't need to be documented in GitHub.
AC:
- [ ] Control implementation statement has been reviewed by Raft for technical accuracy
- [ ] Control implementation statement has passed QASP review
DoD:
- [ ] Control implementation statement has been documented in GitHub
**Control Description:**
The organization:
a. Determines that the information system is capable of auditing the following events: [i. The following events must be identified within server audit logs:
• Server startup and shutdown;
• Loading and unloading of services;
• Installation and removal of software;
• System alerts and error messages;
• User logon and logoff;
• System administration activities;
• Accesses to sensitive information, files, and systems
• Account creation, modification, or deletion;
• Modifications of privileges and access controls; and,
• Additional security-related events, as required by the System Owner (SO) or to support the nature of the supported business and applications.
ii. The following events must be identified within application and database audit logs:
• Modifications to the application;
• Application alerts and error messages;
• User logon and logoff;
• System administration activities;
• Accesses to information and files
• Account creation, modification, or deletion; and,
• Modifications of privileges and access controls.
iii. The following events must be identified within network device (e.g., router, firewall, switch, wireless access point) audit logs:
• Device startup and shutdown;
• Administrator logon and logoff;
• Configuration changes;
• Account creation, modification, or deletion;
• Modifications of privileges and access controls; and,
• System alerts and error messages.];
b. Coordinates the security audit function with other organizational entities requiring audit-related information to enhance mutual support and to help guide the selection of auditable events;
c. Provides a rationale for why the auditable events are deemed to be adequate to support after-the-fact investigations of security incidents; and
d. Determines that the following events are to be audited within the information system: [Unsuccessful log-on attempts that result in a locked account/node; Configuration changes; Application alerts and error messages; System administration activities; Modification of privileges and access; and Account creation, modification, or deletion].
For CSP Only
AU-2 (a) [Successful and unsuccessful account logon events, account management events, object access, policy change, privilege functions, process tracking, and system events For Web applications: all administrator activity, authentication checks, authorization checks, data deletions, data access, data changes, and permission changes]
AU-2 (d) [organization-defined subset of the auditable events defined in AU-2 a to be audited continually for each identified event]
AU-2 Additional FedRAMP Requirements and Guidance:
Requirement: Coordination between service provider and consumer shall be documented and accepted by the JAB/AO. | non_main | au auditable events note this might be inherited from cloud gov if so this wouldn t need to be documented in github ac control implementation statement has been reviewed by raft for technical accuracy control implementation statement has passed qasp review dod control implementation statement has been documented in github control description the organization a determines that the information system is capable of auditing the following events i the following events must be identified within server audit logs • server startup and shutdown • loading and unloading of services • installation and removal of software • system alerts and error messages • user logon and logoff • system administration activities • accesses to sensitive information files and systems • account creation modification or deletion • modifications of privileges and access controls and • additional security related events as required by the system owner so or to support the nature of the supported business and applications ii the following events must be identified within application and database audit logs • modifications to the application • application alerts and error messages • user logon and logoff • system administration activities • accesses to information and files • account creation modification or deletion and • modifications of privileges and access controls iii the following events must be identified within network device e g router firewall switch wireless access point audit logs • device startup and shutdown • administrator logon and logoff • configuration changes • account creation modification or deletion • modifications of privileges and access controls and • system alerts and error messages b coordinates the security audit function with other organizational entities requiring audit related information to enhance mutual support and to help guide the selection of auditable events c provides a rationale for why the auditable events are deemed to be adequate to support after the fact investigations of security incidents and d determines that the following events are to be audited within the information system for csp only au a au d au additional fedramp requirements and guidance requirement coordination between service provider and consumer shall be documented and accepted by the jab ao | 0 |
1,085 | 4,932,175,740 | IssuesEvent | 2016-11-28 12:48:08 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | Cannot pull facts over SSH with snmp_facts on ASA | affects_2.3 bug_report networking waiting_on_maintainer | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
snmp_facts
##### ANSIBLE VERSION
```
ansible --version
2.3.0 (commit 20161110.abc9133)
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
inventory = ./hosts
gathering = explicit
roles_path = /home/actionmystique/Program-Files/Ubuntu/Ansible/git-Ansible/Roles/roles
private_role_vars = yes
log_path = /var/log/ansible.log
fact_caching = redis
fact_caching_timeout = 86400
retry_files_enabled = False
##### OS / ENVIRONMENT
- **Local host**: Ubuntu 16.10 4.8
- **Target nodes**: ASAv 9.5
##### SUMMARY
Running snmp_facts on this particular type of target triggers a fatal error (Permission denied), whereas I can manually log in to the target node with SSH and can run snmp_facts without issue on IOSv targets.
We will continue to experience issues with this module as long as the API is not consistent with other networking modules (no provider, no ssh_keyfile, no password,...). Cf. the [provider proposal](https://github.com/ansible/ansible-modules-extras/issues/3176).
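As a triage note: the Permission denied error below comes from the SSH transport Ansible uses to push the module to the target, not from SNMP itself. Since snmp_facts only needs SNMP reachability, the SNMP side can be verified straight from the control node with a minimal pysnmp check (pysnmp is, as far as I know, what the module uses under the hood; the community string is a placeholder):
```python
# Hedged control-node check: query sysDescr.0 over SNMPv2c with pysnmp.
# The community string is a placeholder, not the real (redacted) value.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

error_indication, error_status, error_index, var_binds = next(getCmd(
    SnmpEngine(),
    CommunityData('REPLACE_ME', mpModel=1),           # mpModel=1 -> SNMPv2c
    UdpTransportTarget(('172.21.100.251', 161)),
    ContextData(),
    ObjectType(ObjectIdentity('1.3.6.1.2.1.1.1.0')),  # sysDescr.0
))
print(error_indication or var_binds)
```
If that query succeeds, running the task with `delegate_to: localhost` should sidestep the SSH login to the ASA entirely.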
##### STEPS TO REPRODUCE
```
- name: Fetching SNMPv{{ snmp.new_version }} facts from the remote node
snmp_facts:
community: "{{ snmp.ssh.community }}"
host: "{{ snmp.ssh.host }}"
version: "{{ snmp.ssh.version }}"
register: facts
```
##### EXPECTED RESULTS
Successful snmp_facts
##### ACTUAL RESULTS
```
fatal: [ASAv2]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Permission denied (publickey,password).\r\n", "unreachable": true}
```
Despite the fact that SNMP is configured on the target with the same community:
```
sh run | inc snmp
...
snmp-server host management 172.21.100.1 community ***** version 2c
snmp-server location California
snmp-server contact xxxxxxxxxxxxxxxx
snmp-server community *****
snmp-server enable traps syslog
...
```
Also:
```
ssh admin@172.21.100.251
Type help or '?' for a list of available commands.
ASAv1> en
Password: **********
ASAv1# sh ver
Cisco Adaptive Security Appliance Software Version 9.5(2)204
Device Manager Version 7.5(2)
...
``` | True | Cannot pull facts over SSH with snmp_facts on ASA - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
snmp_facts
##### ANSIBLE VERSION
```
ansible --version
2.3.0 (commit 20161110.abc9133)
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
inventory = ./hosts
gathering = explicit
roles_path = /home/actionmystique/Program-Files/Ubuntu/Ansible/git-Ansible/Roles/roles
private_role_vars = yes
log_path = /var/log/ansible.log
fact_caching = redis
fact_caching_timeout = 86400
retry_files_enabled = False
##### OS / ENVIRONMENT
- **Local host**: Ubuntu 16.10 4.8
- **Target nodes**: ASAv 9.5
##### SUMMARY
Running snmp_facts on this particular type of target triggers a fatal error (Permission denied), whereas I can manually log in to the target node with SSH and can run snmp_facts without issue on IOSv targets.
We will continue to experience issues with this module as long as the API is not consistent with other networking modules (no provider, no ssh_keyfile, no password,...). Cf. the [provider proposal](https://github.com/ansible/ansible-modules-extras/issues/3176).
##### STEPS TO REPRODUCE
```
- name: Fetching SNMPv{{ snmp.new_version }} facts from the remote node
snmp_facts:
community: "{{ snmp.ssh.community }}"
host: "{{ snmp.ssh.host }}"
version: "{{ snmp.ssh.version }}"
register: facts
```
##### EXPECTED RESULTS
Successful snmp_facts
##### ACTUAL RESULTS
```
fatal: [ASAv2]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Permission denied (publickey,password).\r\n", "unreachable": true}
```
Despite the fact that SNMP is configured on the target with the same community:
```
sh run | inc snmp
...
snmp-server host management 172.21.100.1 community ***** version 2c
snmp-server location California
snmp-server contact xxxxxxxxxxxxxxxx
snmp-server community *****
snmp-server enable traps syslog
...
```
Also:
```
ssh admin@172.21.100.251
Type help or '?' for a list of available commands.
ASAv1> en
Password: **********
ASAv1# sh ver
Cisco Adaptive Security Appliance Software Version 9.5(2)204
Device Manager Version 7.5(2)
...
``` | main | cannot pull facts over ssh with snmp facts on asa issue type bug report component name snmp facts ansible version ansible version commit config file etc ansible ansible cfg configured module search path default w o overrides configuration inventory hosts gathering explicit roles path home actionmystique program files ubuntu ansible git ansible roles roles private role vars yes log path var log ansible log fact caching redis fact caching timeout retry files enabled false os environment local host ubuntu target nodes asav summary running snmp facts on this particular type of target triggers a fatal error permission denied whereas i can manually login into the target node with ssh or run snmp facts without issue on iosv targets we will continue to experience issues with this module as long as the api is not consistent with other networking modules no provider no ssh keyfile no password cf the steps to reproduce name fetching snmpv snmp new version facts from the remote node snmp facts community snmp ssh community host snmp ssh host version snmp ssh version register facts expected results successful snmp facts actual results fatal unreachable changed false msg failed to connect to the host via ssh permission denied publickey password r n unreachable true despite the fact that snmp is configured on the target with the same community sh run inc snmp snmp server host management community version snmp server location california snmp server contact xxxxxxxxxxxxxxxx snmp server community snmp server enable traps syslog also ssh admin type help or for a list of available commands en password sh ver cisco adaptive security appliance software version device manager version | 1 |
381,132 | 11,274,003,249 | IssuesEvent | 2020-01-14 17:38:12 | kubernetes/website | https://api.github.com/repos/kubernetes/website | closed | https://k8s.io/pl/ looks incomplete | language/pl priority/important-longterm | **This is a Bug Report**
<!-- Thanks for filing an issue! Before submitting, please fill in the following information. -->
<!-- See https://kubernetes.io/docs/contribute/start/ for guidance on writing an actionable issue description. -->
<!--Required Information-->
**Problem:**
~https://k8s.io/pl/ doesn't look right~
~I'd thought that the [preview](https://deploy-preview-18659--kubernetes-io-master-staging.netlify.com/pl/) for #18659 looked OK but in fact it doesn't and neither does the live site.~
https://k8s.io/pl/ is not yet complete.
**Proposed Solution:**
Complete localization work for https://k8s.io/pl/
**Page to Update:**
https://kubernetes.io/pl/
/language pl
/priority important-longterm | 1.0 | https://k8s.io/pl/ looks incomplete - **This is a Bug Report**
<!-- Thanks for filing an issue! Before submitting, please fill in the following information. -->
<!-- See https://kubernetes.io/docs/contribute/start/ for guidance on writing an actionable issue description. -->
<!--Required Information-->
**Problem:**
~https://k8s.io/pl/ doesn't look right~
~I'd thought that the [preview](https://deploy-preview-18659--kubernetes-io-master-staging.netlify.com/pl/) for #18659 looked OK but in fact it doesn't and neither does the live site.~
https://k8s.io/pl/ is not yet complete.
**Proposed Solution:**
Complete localization work for https://k8s.io/pl/
**Page to Update:**
https://kubernetes.io/pl/
/language pl
/priority important-longterm | non_main | looks incomplete this is a bug report problem doesn t look right i d thought that the for looked ok but in fact it doesn t and neither does the live site is not yet complete proposed solution complete localization work for page to update language pl priority important longterm | 0 |
2,145 | 7,404,775,487 | IssuesEvent | 2018-03-20 06:51:12 | RalfKoban/MiKo-Analyzers | https://api.github.com/repos/RalfKoban/MiKo-Analyzers | closed | Variables should not have 'Mock' or 'Stub' in their names | Area: analyzer Area: maintainability feature in progress | Variables should not have a `Mock` or `Stub` in their names as that is obvious.
In addition it means that the developer simply puts focus on the wrong thing (she should focus on the behavior and not on the mocking itself). | True | Variables should not have 'Mock' or 'Stub' in their names - Variables should not have a `Mock` or `Stub` in their names as that is obvious.
In addition it means that the developer simply puts focus on the wrong thing (she should focus on the behavior and not on the mocking itself). | main | variables should not have mock or stub in their names variables should not have a mock or stub in their names as that is obvious in addition it means that the developer simply puts focus on the wrong thing she should focus on the behavior and not on the mocking itself | 1 |
5,165 | 26,282,171,379 | IssuesEvent | 2023-01-07 12:30:31 | kkkkan/CsvToQrConverter | https://api.github.com/repos/kkkkan/CsvToQrConverter | closed | [新規ページ] 動作確認済み環境などを記した、「このサイトについて」的なページを新設したい | enhancement maintainer's-memo | - 動作確認済み環境
- お問い合わせリンク
- メアド
- Twitter
- Githubリンク
- リリース履歴 | True | [新規ページ] 動作確認済み環境などを記した、「このサイトについて」的なページを新設したい - - 動作確認済み環境
- お問い合わせリンク
- メアド
- Twitter
- Githubリンク
- リリース履歴 | main | 動作確認済み環境などを記した、「このサイトについて」的なページを新設したい 動作確認済み環境 お問い合わせリンク メアド twitter githubリンク リリース履歴 | 1 |
1,959 | 6,688,139,783 | IssuesEvent | 2017-10-08 11:03:12 | OpenLightingProject/ola | https://api.github.com/repos/OpenLightingProject/ola | closed | Experiment with switching to the fully virtualised Travis infrastructure | Maintainability | Hi!
I noticed as part of looking at travis-ci/travis-ci#8315 that the current Travis job is using their EC2 container infrastructure (`sudo: false`) and takes ~12 minutes to run. Whilst jobs start much faster than the fully virtualised GCE infra (1s vs 30s), the container environment has half the RAM and often suffers from CPU contention. See the comparison table here:
https://docs.travis-ci.com/user/reference/overview/
For one of my projects I found switching to the fully virtualised GCE infra (`sudo: required`) halved the job runtime, so still resulted in a significant net time saving even after including the 30s bootup time penalty.
Anyway, thought I'd mention it just in case it helped you too :-) | True | Experiment with switching to the fully virtualised Travis infrastructure - Hi!
I noticed as part of looking at travis-ci/travis-ci#8315 that the current Travis job is using their EC2 container infrastructure (`sudo: false`) and takes ~12 minutes to run. Whilst jobs start much faster than the fully virtualised GCE infra (1s vs 30s), the container environment has half the RAM and often suffers from CPU contention. See the comparison table here:
https://docs.travis-ci.com/user/reference/overview/
For one of my projects I found switching to the fully virtualised GCE infra (`sudo: required`) halved the job runtime, so still resulted in a significant net time saving even after including the 30s bootup time penalty.
Anyway, thought I'd mention it just in case it helped you too :-) | main | experiment with switching to the fully virtualised travis infrastructure hi i noticed as part of looking at travis ci travis ci that the current travis job is using their container infrastructure sudo false and takes minutes to run whilst jobs start much faster than the fully virtualised gce infra vs the container environment has half the ram and often suffers from cpu contention see the comparison table here for one of my projects i found switching to the fully visualised gce infra sudo required halved the job runtime so still resulted in a significant net time saving even after including the bootup time penalty anyway thought i d mention it just in case it helped you too | 1 |
2,915 | 10,411,346,251 | IssuesEvent | 2019-09-13 13:40:37 | Homebrew/homebrew-cask | https://api.github.com/repos/Homebrew/homebrew-cask | reopened | Applescript unavailability prevents whole actions from running | awaiting maintainer feedback bug help wanted | Actions that require AppleScript (like quitting apps) don't always work, so I have to re-run them sometimes. In addition, it is impossible to uninstall apps that use `:login_item` over SSH because the System Events app cannot be started over SSH.
Is there a flag that makes it possible to bypass AppleScript actions?
reference: **Add support for uninstalling login items** #15740
| True | Applescript unavailability prevents whole actions from running - Actions that require AppleScript (like quitting apps) don't always work, so I have to re-run them sometimes. In addition, it is impossible to uninstall apps that use `:login_item` over SSH because the System Events app cannot be started over SSH.
Is there a flag that makes it possible to bypass AppleScript actions?
reference: **Add support for uninstalling login items** #15740
| main | applescript unavailability prevents whole actions from running actions that require applescript like quitting apps don t always work and so i have to re run them sometimes in addition it is impossible to uninstall apps that use login item over ssh because the system events app cannot be started over ssh is there a flag that makes it possible to bypass applescript actions reference add support for uninstalling login items | 1 |
2,411 | 8,562,884,634 | IssuesEvent | 2018-11-09 12:11:53 | ansible/ansible | https://api.github.com/repos/ansible/ansible | closed | AWS ec2_instance Module Doesn't Parse ebs_optimized Flag Correctly | affects_2.7 aws bug cloud module needs_maintainer support:community | <!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
Creating EBS optimised EC2 instances using the `ec2_instance` module is defective and doesn't work as described in the doco.
The [ec2_instance module documentation states](https://github.com/ansible/ansible/blob/d5a4a401ea44d267c5525208d83644b9b0a63897/lib/ansible/modules/cloud/amazon/ec2_instance.py#L164):
```python
ebs_optimized:
description:
- Whether instance is should use optimized EBS volumes, see U(https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSOptimized.html).
```
Sadly, this isn't working. Setting `ebs_optimized: yes` doesn't enable EBS optimisation for the EC2 instance. Looking at [the code, the module checks for the flag as a sub-option of `network` instead of a top-level flag](https://github.com/ansible/ansible/blob/d5a4a401ea44d267c5525208d83644b9b0a63897/lib/ansible/modules/cloud/amazon/ec2_instance.py#L1114):
```python
if (params.get('network') or {}).get('ebs_optimized') is not None:
spec['EbsOptimized'] = params['network'].get('ebs_optimized')
```
However, moving the flag under the `network` option like so correctly enables EBS optimisation for the EC2 instance:
```yaml
- name: Create EC2 Instance with EBS Optimisation
ec2_instance:
network:
ebs_optimized: yes
```
We should either update the documentation to reflect the module's actual behaviour, or change the module to work as documented.
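If the fix goes into the code, a minimal sketch of what it might look like (hypothetical — not a proposed patch, and the helper name is made up) is to honour the flag in both locations, with the documented top-level key taking precedence:

```python
def resolve_ebs_optimized(params):
    """Return the ebs_optimized flag from either supported location.

    The documented top-level key takes precedence; otherwise fall back
    to the network sub-option the module currently reads.
    """
    if params.get('ebs_optimized') is not None:
        return params['ebs_optimized']
    return (params.get('network') or {}).get('ebs_optimized')


# Both invocation styles would then resolve the flag:
assert resolve_ebs_optimized({'ebs_optimized': True}) is True
assert resolve_ebs_optimized({'network': {'ebs_optimized': True}}) is True
```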
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ec2_instance
##### ANSIBLE VERSION
```text
ansible 2.7.1
config file = /Users/dennis.conrad/REDACTED/ansible.cfg
configured module search path = [u'/Users/dennis.conrad/REDACTED/ansible/library/modules', u'/Users/dennis.conrad/REDACTED/.venv/lib/python2.7/site-packages/REDACTED/ansible/library/modules']
ansible python module location = /Users/dennis.conrad/.pyenv/versions/2.7.14/lib/python2.7/site-packages/ansible
executable location = /Users/dennis.conrad/.pyenv/versions/2.7.14/bin/ansible
python version = 2.7.14 (default, Jun 18 2018, 15:53:27) [GCC 4.2.1 Compatible Apple LLVM 9.0.0 (clang-900.0.39.2)]
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
Create an EC2 instance with `ebs_optimized: yes`:
```yaml
- name: Create EC2 Instance with EBS Optimisation
ec2_instance:
ebs_optimized: yes
```
##### EXPECTED RESULTS
An EC2 instance with EBS optimisation __enabled__.
##### ACTUAL RESULTS
An EC2 instance with EBS optimisation __disabled__.
A workaround is described in the [SUMMARY](#SUMMARY).
| True |
| main | aws instance module doesn t parse ebs optimized flag correctly summary creating ebs optimised instances using the instance module is defective and doesn t work as described in the doco the python ebs optimized description whether instance is should use optimized ebs volumes see u sadly this isn t working setting ebs optimzed yes doesn t enable ebs optimisation for the instance looking at python if params get network or get ebs optimized is not none spec params get ebs optimized however moving the flag under the network option like so correctly enables ebs optimisation for the instance yaml name create instance with ebs optimisation instance network ebs optimized yes we should either change the doco to reflect the inner workings of the module or change the module to function as documented issue type bug report component name instance ansible version paste below ansible config file users dennis conrad redacted ansible cfg configured module search path ansible python module location users dennis conrad pyenv versions lib site packages ansible executable location users dennis conrad pyenv versions bin ansible python version default jun configuration n a os environment n a steps to reproduce create an instance withe ebs optimized yes yaml name create instance with ebs optimisation instance ebs optimized yes expected results an instance with ebs optimisation enabled actual results an instance with ebs optimisation disabled a workaround is described in the summary | 1 |
2,844 | 10,218,042,272 | IssuesEvent | 2019-08-15 15:01:57 | zaproxy/zaproxy | https://api.github.com/repos/zaproxy/zaproxy | closed | Unused parameter : "model" in ReportLastScan methods | IdealFirstBug Maintainability good first issue | Most methods of **ReportLastScan.java** have a parameter:
`org.parosproxy.paros.model.Model model`, but it is actually unused. Those methods just pass this parameter on to `public void generate(StringBuilder report, Model model) throws Exception`, where it is not used at all.
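The cleanup is mechanical: delete the parameter from each signature and every call site. A tiny analogue of the refactor, sketched in Python for brevity (the real code is Java, and the names here are illustrative only):

```python
# Before: callers must obtain and pass a model that generate() never reads.
def generate(report: list, model: object) -> None:
    report.append("report body")  # 'model' is never touched


def report_last_scan(model: object) -> None:
    report: list = []
    generate(report, model)  # dead parameter threaded through


# After: the unused parameter is removed end to end.
def generate_fixed(report: list) -> None:
    report.append("report body")


def report_last_scan_fixed() -> None:
    report: list = []
    generate_fixed(report)
```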
| True |
| main | unused parameter model in reportlastscan methods most methods of reportlastscan java has a parameter org parosproxy paros model modelmodel model but it is actually unused those methods just pass this parameter to public void generate stringbuilder report model model throws exception where it is not used at all | 1 |
399,190 | 11,744,324,061 | IssuesEvent | 2020-03-12 07:25:36 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | mobile.twitter.com - video or audio doesn't play | browser-fenix engine-gecko priority-critical | <!-- @browser: Firefox Mobile 75.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:75.0) Gecko/75.0 Firefox/75.0 -->
<!-- @reported_with: -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/49982 -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://mobile.twitter.com/holdmyale/status/1236914159733932032?s=09
**Browser / Version**: Firefox Mobile 75.0
**Operating System**: Android
**Tested Another Browser**: Yes
**Problem type**: Video or audio doesn't play
**Description**: Videos are taking too long to load and are choppy when playing
**Steps to Reproduce**:
1. Play all the videos in order.
2. Each video seems to take longer to load than the last, and playback is choppy even on Wi-Fi.
3. Chrome seems to load the same videos instantly and play them smoothly.
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 |
| non_main | mobile twitter com video or audio doesn t play url browser version firefox mobile operating system android tested another browser yes problem type video or audio doesn t play description videos are taking too long to load and are choppy when playing steps to reproduce play all videos in order it seems each video takes more time to load and playback is choppy even on wifi chrome seems to be loading videos instantaneously and playing them smoothly browser configuration none from with ❤️ | 0