Unnamed: 0 int64 1 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 7 112 | repo_url stringlengths 36 141 | action stringclasses 3 values | title stringlengths 3 438 | labels stringlengths 4 308 | body stringlengths 7 254k | index stringclasses 7 values | text_combine stringlengths 96 254k | label stringclasses 2 values | text stringlengths 96 246k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
523,891 | 15,191,708,449 | IssuesEvent | 2021-02-15 20:27:20 | openforcefield/openff-toolkit | https://api.github.com/repos/openforcefield/openff-toolkit | opened | Use ELF10 for system creation when OpenEyeToolkitWrapper is available | effort:low priority:high | **Describe the bug**
Original message by @SimonBoothroyd:
> Am I right in thinking that since the 0.7.0 release of the toolkit we don't actually use ELF10 to compute partial charges, even when OE is installed and licensed?
> (i.e. in #471 the call from a molecule's `compute_partial_charges_am1bcc` method to a toolkit's `compute_partial_charges_am1bcc` function was replaced by a call to `assign_partial_charges` with `am1bcc` explicitly stated)
The above is a regression introduced in the 0.7.0 release. We should fix it by having `ToolkitAM1BCCHandler.create_force` do a `try/except` where it attempts to use `partial_charge_method='am1bccelf10'` first, and then, if there's a `ChargeMethodUnavailableError`, drops down to `partial_charge_method='am1bcc'`. This will cause ELF10 to be used (if available), and vanilla AM1-BCC otherwise.
**Additional context**
cc #447 | 1.0 | non_main | 0 |
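The fallback proposed in that issue can be sketched in Python. This is an illustrative sketch only, not the toolkit's actual code: the error class and `assign_partial_charges` stub below are stand-ins for the real OpenFF APIs (here the stub pretends no OpenEye license is present, so only plain AM1-BCC succeeds).

```python
class ChargeMethodUnavailableError(Exception):
    """Stand-in for the toolkit's real error type."""


def assign_partial_charges(molecule, partial_charge_method):
    # Stand-in: pretend only plain AM1-BCC is available (no OpenEye license).
    if partial_charge_method != "am1bcc":
        raise ChargeMethodUnavailableError(partial_charge_method)
    return f"{molecule} charged with {partial_charge_method}"


def create_force_charges(molecule):
    # Try ELF10 first; fall back to vanilla AM1-BCC if the method is unavailable.
    try:
        return assign_partial_charges(molecule, partial_charge_method="am1bccelf10")
    except ChargeMethodUnavailableError:
        return assign_partial_charges(molecule, partial_charge_method="am1bcc")


print(create_force_charges("ethanol"))  # falls back to "ethanol charged with am1bcc"
```

With the real toolkit installed and licensed, the first branch would succeed and ELF10 would be used.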
2,126 | 7,267,113,103 | IssuesEvent | 2018-02-20 02:34:30 | dgets/DANT2a | https://api.github.com/repos/dgets/DANT2a | closed | Modularize form friendliness/usability code | enhancement maintainability | Just started `FriendlyForms.Usability.*`, which is an attempt to modularize the ridiculously large amounts of redundant code shared between the different _Add*_ forms. Trying to learn how to pass the different `Form` components (primarily `TextBox` controls right now) back and forth properly, in order to be able to make the necessary changes on the active _Add*_ Form.
This should be useful for many modularization tasks in the future. Definitely delving deeper into C# workings than I've been before. | True | Modularize form friendliness/usability code - Just started `FriendlyForms.Usability.*`, which is an attempt to modularize the ridiculously large amounts of redundant code shared between the different _Add*_ forms. Trying to learn how to pass the different `Form` components (primarily `TextBox` controls right now) back and forth properly, in order to be able to make the necessary changes on the active _Add*_ Form.
This should be useful for many modularization tasks in the future. Definitely delving deeper into C# workings than I've been before. | main | modularize form friendliness usability code just started friendlyforms usability which is an attempt to modularize the ridiculously large amounts of redundant code shared between the different add forms trying to learn how to pass the different form components primarily textbox controls right now back and forth properly in order to be able to make the necessary changes on the active add form this should be useful for many modularization tasks in the future definitely delving deeper into c workings than i ve been before | 1 |
1,585 | 6,572,359,990 | IssuesEvent | 2017-09-11 01:42:20 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | zabbix_host.py does not update linked templates or groups correctly | affects_1.9 bug_report waiting_on_maintainer | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
zabbix_host.py
##### ANSIBLE VERSION
```
ansible 1.9.2
configured module search path = None
```
##### CONFIGURATION
##### OS / ENVIRONMENT
CentOS Linux release 7.1.1503 (Core)
##### SUMMARY
zabbix_host.py seems to remove previous group membership / template linkage
##### STEPS TO REPRODUCE
Steps to reproduce:
With the following inventory:
```
[groupa]
node-1

[groupb]
node-1
node-2
node-3
```
And the following playbooks:
```
---
- name: Add nodes to Zabbix
hosts: groupa
tags:
- grpa
tasks:
- local_action:
module: zabbix_host
server_url: "http://192.168.0.10/zabbix/api_jsonrpc.php"
login_user: admin
login_password: zabbix
host_name: "{{ inventory_hostname }}"
host_groups:
- Group A
link_templates:
- Custom Template 1
- Custom Template 2
- Template OS Linux
status: enabled
state: present
inventory_mode: automatic
interfaces:
- type: 1
main: 1
useip: 0
ip: ""
dns: "{{ inventory_hostname }}"
port: 10050
- name: Update group and template association
hosts: groupb
tags:
- grpb
tasks:
- local_action:
module: zabbix_host
server_url: "http://192.168.0.10/zabbix/api_jsonrpc.php"
login_user: admin
login_password: zabbix
host_name: "{{ inventory_hostname }}"
host_groups:
- Group B
link_templates:
- Custom Template 3
- Custom Template 4
- Template OS Linux
status: enabled
state: present
inventory_mode: automatic
interfaces:
- type: 1
main: 1
useip: 0
ip: ""
dns: "{{ inventory_hostname }}"
port: 10050
```
##### EXPECTED RESULTS
node-1 is a member of Zabbix "Group A" and "Group B" with custom templates 1-4, and "Template OS Linux" linked to it.
##### ACTUAL RESULTS
node-1 is a member of Zabbix "Group B" only and only has custom templates 3-4 and "Template OS Linux" linked to it.
There's no useful output from -vvvv as the API calls are not logged in -vvvv AFAICT.
| True | main | 1 |
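The expected end state for node-1 is the union of the two plays' associations, not the second play's values alone; a minimal sketch of that expectation (group and template names taken from the playbooks above):

```python
# Associations applied to node-1 by each play (names from the playbooks above).
play_a = {
    "groups": {"Group A"},
    "templates": {"Custom Template 1", "Custom Template 2", "Template OS Linux"},
}
play_b = {
    "groups": {"Group B"},
    "templates": {"Custom Template 3", "Custom Template 4", "Template OS Linux"},
}

# Expected: the second run merges with the host's existing associations.
expected = {key: play_a[key] | play_b[key] for key in play_a}

# Actual (the bug): the second run replaces the first run's associations.
actual = play_b

print(sorted(expected["groups"]))  # ['Group A', 'Group B']
print(sorted(actual["groups"]))    # ['Group B']
```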
4,966 | 25,519,490,404 | IssuesEvent | 2022-11-28 19:10:00 | Enselic/cargo-public-api | https://api.github.com/repos/Enselic/cargo-public-api | closed | Auto-bless output and create PR if nightly CI build fails | maintainability | Latest nightly changes the output slightly, namely like this:
```diff
diff --git a/cargo-public-api/tests/expected-output/public_api_list.txt b/cargo-public-api/tests/expected-output/public_api_list.txt
index 9e4b835..8cd24c2 100644
--- a/cargo-public-api/tests/expected-output/public_api_list.txt
+++ b/cargo-public-api/tests/expected-output/public_api_list.txt
@@ -119,5 +119,5 @@ impl core::cmp::PartialOrd<public_api::PublicItem> for public_api::PublicItem
pub fn public_api::PublicItem::partial_cmp(&self, other: &Self) -> core::option::Option<core::cmp::Ordering>
impl core::marker::StructuralEq for public_api::PublicItem
impl core::marker::StructuralPartialEq for public_api::PublicItem
-pub const public_api::MINIMUM_RUSTDOC_JSON_VERSION: &'static str
+pub const public_api::MINIMUM_RUSTDOC_JSON_VERSION: &str
pub type public_api::Result<T> = core::result::Result<T, public_api::Error>
diff --git a/public-api/public-api.txt b/public-api/public-api.txt
index b930a51..4a40b7e 100644
--- a/public-api/public-api.txt
+++ b/public-api/public-api.txt
@@ -292,5 +292,5 @@ pub fn public_api::PublicItem::try_from(value: U) -> core::result::Result<T, <T
impl<T, U> core::convert::TryInto<U> for public_api::PublicItem where U: core::convert::TryFrom<T>
pub type public_api::PublicItem::Error = <U as core::convert::TryFrom<T>>::Error
pub fn public_api::PublicItem::try_into(self) -> core::result::Result<U, <U as core::convert::TryFrom<T>>::Error>
-pub const public_api::MINIMUM_RUSTDOC_JSON_VERSION: &'static str
+pub const public_api::MINIMUM_RUSTDOC_JSON_VERSION: &str
pub type public_api::Result<T> = core::result::Result<T, public_api::Error>
diff --git a/public-api/tests/expected-output/comprehensive_api.txt b/public-api/tests/expected-output/comprehensive_api.txt
index 075a689..66ad144 100644
--- a/public-api/tests/expected-output/comprehensive_api.txt
+++ b/public-api/tests/expected-output/comprehensive_api.txt
@@ -13,7 +13,7 @@ pub struct field comprehensive_api::attributes::C::b: bool
#[export_name = "something_arbitrary"] pub fn comprehensive_api::attributes::export_name()
pub fn comprehensive_api::attributes::must_use() -> usize
pub mod comprehensive_api::constants
-pub const comprehensive_api::constants::CONST: &'static str
+pub const comprehensive_api::constants::CONST: &str
pub mod comprehensive_api::enums
pub enum comprehensive_api::enums::DiverseVariants
pub enum variant comprehensive_api::enums::DiverseVariants::Recursive
```
Rather than manually blessing this, I want to set up CI so that a PR with this diff is automatically created. Then, to bless slight changes in nightly like this, a maintainer can just merge the auto-created PR after making sure the changes look OK.
| True | main | 1 |
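One way to structure such a job, sketched here in Python rather than workflow YAML. Everything in this sketch is an assumption about the eventual setup, not an existing repo convention: the `BLESS` environment variable, the branch name, and the use of the `gh` CLI are all placeholders.

```python
def nightly_bless_plan(tests_pass):
    """Return the shell commands the nightly CI job should run."""
    if tests_pass:
        return []  # expected output matches nightly; nothing to bless
    return [
        # Assumption: re-running tests with a bless flag rewrites the
        # expected-output files in place.
        ["env", "BLESS=1", "cargo", "test"],
        ["git", "checkout", "-b", "bless-nightly-output"],
        ["git", "commit", "-am", "Bless nightly output changes"],
        ["git", "push", "-u", "origin", "bless-nightly-output"],
        # Open the PR for a maintainer to review and merge.
        ["gh", "pr", "create", "--fill"],
    ]


for command in nightly_bless_plan(tests_pass=False):
    print(" ".join(command))
```

Keeping the job as a pure function that returns the command list (instead of shelling out directly) makes the logic easy to unit-test before wiring it into CI.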
4,732 | 24,438,087,469 | IssuesEvent | 2022-10-06 12:55:20 | centerofci/mathesar | https://api.github.com/repos/centerofci/mathesar | closed | Columns not refreshed when undoing a change to base table | type: bug work: frontend status: ready restricted: maintainers | ## Reproduce
1. New Exploration
1. Select base table A
1. Observe columns loaded for base table A
1. Change base table to B
1. Observe columns loaded for base table B
1. Undo
1. Expect to see columns change to display columns for table A
1. Observe columns for table B
CC @pavish | True | main | 1 |
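The expected invariant is that undo restores the base table and its columns as one snapshot; a minimal sketch of that invariant (class and field names are illustrative, not Mathesar's actual store):

```python
class ExplorationState:
    def __init__(self):
        self.state = {"base_table": None, "columns": []}
        self._history = []

    def set_base_table(self, name, columns):
        # Snapshot the whole state before changing it.
        self._history.append(dict(self.state))
        self.state = {"base_table": name, "columns": list(columns)}

    def undo(self):
        if self._history:
            # Restore the whole snapshot: the table AND its columns together.
            self.state = self._history.pop()


s = ExplorationState()
s.set_base_table("A", ["a1", "a2"])
s.set_base_table("B", ["b1"])
s.undo()
print(s.state)  # {'base_table': 'A', 'columns': ['a1', 'a2']}
```

The reported bug corresponds to restoring only the `base_table` field while leaving `columns` pointing at table B's columns.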
2,748 | 9,793,325,717 | IssuesEvent | 2019-06-10 19:40:07 | ansible/ansible | https://api.github.com/repos/ansible/ansible | closed | Amazon cloud _facts modules need to be renamed to _info | affects_2.9 aws bug cloud module needs_maintainer support:community support:core | ##### SUMMARY
Modules should only be called `_facts` if they return `ansible_facts`, and `ansible_facts` should only be used for system-specific information. Information on cloud providers is not system specific. (See #54280 and [this checklist](https://docs.ansible.com/ansible/latest/dev_guide/developing_modules_checklist.html#contributing-to-ansible-objective-requirements).) Since this has only been clarified shortly before Ansible 2.8 went into feature freeze, it was decided to do bulk renames for Ansible 2.9.
Below you can find a list of modules from [modules/cloud/amazon](https://github.com/ansible/ansible/tree/devel/lib/ansible/modules/cloud/amazon/) which are called `_facts` but seem to not return `ansible_facts` at all. These should be renamed (similarly to #56822).
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
~~aws_acm_facts~~
~~aws_az_facts~~
~~aws_caller_facts~~
~~aws_kms_facts~~
~~aws_region_facts~~
~~aws_sgw_facts~~
~~aws_waf_facts~~
~~cloudwatchlogs_log_group_facts~~
~~ec2_ami_facts~~
~~ec2_asg_facts~~
~~ec2_customer_gateway_facts~~
~~ec2_eip_facts~~
~~ec2_elb_facts~~
~~ec2_eni_facts~~
~~ec2_group_facts~~
~~ec2_instance_facts~~
~~ec2_lc_facts~~
~~ec2_placement_group_facts~~
~~ec2_snapshot_facts~~
~~ec2_vol_facts~~
ec2_vpc_dhcp_option_facts
ec2_vpc_endpoint_facts
ec2_vpc_igw_facts
ec2_vpc_nacl_facts
ec2_vpc_nat_gateway_facts
ec2_vpc_net_facts
ec2_vpc_peering_facts
ec2_vpc_route_table_facts
ec2_vpc_subnet_facts
ec2_vpc_vgw_facts
ec2_vpc_vpn_facts
~~ecs_taskdefinition_facts~~
~~elasticache_facts~~
~~elb_application_lb_facts~~
~~elb_classic_lb_facts~~
~~elb_target_facts~~
~~elb_target_group_facts~~
~~iam_mfa_device_facts~~
~~iam_role_facts~~
~~iam_server_certificate_facts~~
~~rds_instance_facts~~
~~rds_snapshot_facts~~
~~redshift_facts~~
~~route53_facts~~
##### ANSIBLE VERSION
2.9.0
| True | main | 1 |
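The rename itself is mechanical; a small helper of the kind a migration script might use (illustrative only, not part of Ansible):

```python
def info_name(module_name):
    """Map an old `_facts` module name to its `_info` replacement."""
    suffix = "_facts"
    if not module_name.endswith(suffix):
        raise ValueError(f"{module_name!r} is not a _facts module")
    return module_name[: -len(suffix)] + "_info"


print(info_name("ec2_ami_facts"))     # ec2_ami_info
print(info_name("aws_caller_facts"))  # aws_caller_info
```

In practice each rename would also keep the old `_facts` name as a deprecated alias for backwards compatibility, as was done in similar PRs.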
156,532 | 24,624,567,437 | IssuesEvent | 2022-10-16 10:52:22 | dotnet/efcore | https://api.github.com/repos/dotnet/efcore | closed | Distinct not working for custom select | closed-by-design | Distinct is not added to the SQL.
For this query:
```
var queryable = this.dbSet.Where(
x =>
x.Translation.StartsWith(model.Word) &&
(x.Localization == model.From || x.Localization == model.Dest)).Distinct()
.Select(x => new DictionaryWordCompleterApiModel
{
Word = x.Translation,
Localization = x.Localization
}).Take(10).Distinct();
```
I get DISTINCT in the SQL:
```
SELECT TOP(10) [t].[Translation], [t].[Localization]
FROM (
SELECT DISTINCT [x0].*
FROM [GameDictionaryWords] AS [x0]
WHERE ([x0].[Translation] LIKE N'guide' + N'%' AND (CHARINDEX(N'guide', [x0].[Translation]) = 1)) AND [x0].[Localization] IN (N'en', N'es')
) AS [t]
```
But for this query
```
var queryable = this.dbSet.Where(
x =>
x.Translation.StartsWith(model.Word) &&
(x.Localization == model.From || x.Localization == model.Dest))
.Select(x => new DictionaryWordCompleterApiModel
{
Word = x.Translation,
Localization = x.Localization
}).Take(10).Distinct();
```
there is no DISTINCT in the SQL:
```
SELECT TOP(10) [x].[Translation], [x].[Localization]
FROM [GameDictionaryWords] AS [x]
WHERE ([x].[Translation] LIKE N'guid' + N'%' AND (CHARINDEX(N'guid', [x].[Translation]) = 1)) AND [x].[Localization] IN (N'en', N'es')
```
Why not add Distinct?
### Further technical details
EF Core version: 1.1.0
Database Provider: Microsoft.EntityFrameworkCore.SqlServer
Operating system: Windows 10
IDE: Visual Studio 2015
| 1.0 | non_main | 0 |
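Operator order matters here: `Distinct().Take(10)` and `Take(10).Distinct()` describe different queries, which is presumably why the issue was closed by design. A Python sketch of the two semantics (illustrative only, not EF Core internals):

```python
from itertools import islice


def distinct(items):
    """Yield items in order, dropping duplicates (like SQL DISTINCT)."""
    seen = set()
    for item in items:
        if item not in seen:
            seen.add(item)
            yield item


rows = ["a", "a", "b", "b", "c", "d"]

# Distinct().Take(3): deduplicate first, then take 3 distinct rows.
print(list(islice(distinct(rows), 3)))  # ['a', 'b', 'c']

# Take(3).Distinct(): take the first 3 rows, then deduplicate them.
print(list(distinct(islice(rows, 3))))  # ['a', 'b']
```

The first form maps naturally onto `SELECT TOP(3) ... FROM (SELECT DISTINCT ...)`, which matches the first SQL shown above; the second asks for a distinct over an already-truncated row set.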
341,997 | 24,724,858,975 | IssuesEvent | 2022-10-20 13:25:02 | SandraScherer/EntertainmentInfothek | https://api.github.com/repos/SandraScherer/EntertainmentInfothek | closed | Add genre information to series | documentation enhancement database program | - [x] Add table Series_Genre to database
- [x] Add/adapt Genre classes in EntertainmentDB.dll
- [x] Add tests to EntertainmentDB.Tests
- [x] Add/adapt ContentCreator classes in WikiPageCreator
- [x] Add tests to WikiPageCreator.Tests
- [x] Update documentation
- [x] EntertainmentInfothek_Database.vpp
- [x] EntertainmentInfothek_EntertainmentDB.dll.vpp
- [x] EntertainmentInfothek_WikiPageCreator.vpp
- [x] Doxygen | 1.0 | non_main | 0 |
4,030 | 18,836,144,132 | IssuesEvent | 2021-11-11 01:17:09 | aws/aws-sam-cli | https://api.github.com/repos/aws/aws-sam-cli | closed | RequestContextV2 is missing some fields | type/bug stage/waiting-for-release maintainer/need-followup | Missing values appear to be the following (this is a real event, echoed and compared with a Wireshark capture of the locally submitted SAM event):
```json
{
  "stage": "$default",
  "time": "26/Sep/2021:02:59:48 +0000",
  "timeEpoch": 1632625188400,
  "domainName": "123ypbcjy1.execute-api.us-east-1.amazonaws.com",
  "domainPrefix": "123ypbcjy1"
}
```
`stage` always seems to arrive as null; the other four fields are missing entirely when running locally.
This appears to break the Rust Lambda Runtime, which is very strict about these fields, when trying to test locally using SAM:
https://github.com/awslabs/aws-lambda-rust-runtime/blob/master/lambda-http/src/request.rs#L121-L145 | True | RequestContextV2 is missing some fields - Missing values appear to be (this is a real event echo'd and compared with Wireshark captured local SAM submitted event:
`
{
"stage": "$default",
"time": "26/Sep/2021:02:59:48 +0000",
"timeEpoch": 1632625188400,
"domainName": "123ypbcjy1.execute-api.us-east-1.amazonaws.com",
"domainPrefix": "123ypbcjy1"
}
`
`stage` always seems to arrive as null, the other 4 are missing when running locally.
This appears to break the Rust Lambda Runtime which is very strict, when trying to test locally using SAM:
https://github.com/awslabs/aws-lambda-rust-runtime/blob/master/lambda-http/src/request.rs#L121-L145 | main | is missing some fields missing values appear to be this is a real event echo d and compared with wireshark captured local sam submitted event stage default time sep timeepoch domainname execute api us east amazonaws com domainprefix stage always seems to arrive as null the other are missing when running locally this appears to break the rust lambda runtime which is very strict when trying to test locally using sam | 1 |
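Until the SAM CLI emits these fields itself, one workaround when testing strict decoders locally is to patch them into the generated event before invoking the handler. The sketch below is a hypothetical stdlib-Python helper (`fill_request_context_v2` is not part of SAM); the field names and sample values are taken from the real event above, and deriving `time` from `timeEpoch` is an assumption about API Gateway's CLF-style formatting.

```python
from datetime import datetime, timezone

def fill_request_context_v2(event):
    """Patch the requestContext fields that a local run omits (hypothetical helper).

    Fills `stage`, `domainName`, and `domainPrefix` with defaults, and derives the
    CLF-style `time` string from `timeEpoch` (milliseconds) when it is absent.
    """
    ctx = event.setdefault("requestContext", {})
    ctx.setdefault("stage", "$default")
    api_id = ctx.get("apiId", "local")
    region = ctx.get("region", "us-east-1")  # assumption: fall back to us-east-1
    ctx.setdefault("domainName", f"{api_id}.execute-api.{region}.amazonaws.com")
    ctx.setdefault("domainPrefix", api_id)
    if "timeEpoch" in ctx and "time" not in ctx:
        dt = datetime.fromtimestamp(ctx["timeEpoch"] / 1000, tz=timezone.utc)
        ctx["time"] = dt.strftime("%d/%b/%Y:%H:%M:%S +0000")
    return event

# The epoch from the captured event above reproduces its time string.
event = {"requestContext": {"apiId": "123ypbcjy1", "timeEpoch": 1632625188400}}
fill_request_context_v2(event)
print(event["requestContext"]["time"])        # 26/Sep/2021:02:59:48 +0000
print(event["requestContext"]["domainName"])  # 123ypbcjy1.execute-api.us-east-1.amazonaws.com
```

This only papers over the gap for local testing; the real fix belongs in the SAM CLI's event generator.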
2,687 | 9,377,829,124 | IssuesEvent | 2019-04-04 11:20:54 | chocolatey/chocolatey-package-requests | https://api.github.com/repos/chocolatey/chocolatey-package-requests | closed | RFM - pycharm-community | Status: Available For Maintainer(s) | [Pycharm (Community)](https://www.jetbrains.com/pycharm/download/#section=windows) is a lightweight IDE for Python & Scientific development.
https://chocolatey.org/packages/PyCharm-community
I am searching for a new maintainer. SHA-256 checksums and direct links to each version are provided on the download page; time requested for maintaining this package is minimal. | True | RFM - pycharm-community - [Pycharm (Community)](https://www.jetbrains.com/pycharm/download/#section=windows) is a lightweight IDE for Python & Scientific development.
https://chocolatey.org/packages/PyCharm-community
I am searching for a new maintainer. SHA-256 checksums and direct links to each version are provided on the download page; time requested for maintaining this package is minimal. | main | rfm pycharm community is a lightweight ide for python scientific development i am searching for a new maintainer sha checksums and direct links to each version are provided on the download page time requested for maintaining this package is minimal | 1
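Since the download page publishes SHA-256 checksums, the routine maintenance work for this package amounts to hashing each new installer and comparing against the published value. A minimal stdlib-Python sketch of that check (the payload below is a made-up placeholder, not real PyCharm installer bytes):

```python
import hashlib

def sha256_matches(data: bytes, expected_hex: str) -> bool:
    """Return True when the SHA-256 digest of `data` equals the published checksum."""
    return hashlib.sha256(data).hexdigest() == expected_hex.lower()

# Placeholder payload standing in for a downloaded installer.
payload = b"pycharm-installer-bytes"
published = hashlib.sha256(payload).hexdigest()
print(sha256_matches(payload, published))     # True
print(sha256_matches(b"tampered", published)) # False
```

For large installers the same comparison would be done by hashing the file in chunks rather than reading it whole into memory.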
3,806 | 16,488,246,422 | IssuesEvent | 2021-05-24 21:33:10 | ajour/ajour | https://api.github.com/repos/ajour/ajour | closed | [minor Bug][Classic ERA] addon with incorrect Source | B - bug C - waiting on maintainer S - addons | **Describe the bug**
"Movement Speed" addon has an "unknown" source, despite having been installed via Ajour (via Curse).
**Expected behavior**
Source should be "CurseForge"
**Software involved**
- OS: Windows 10
- Ajour version: 1.2.0
- Classic ERA | True | [minor Bug][Classic ERA] addon with incorrect Source - **Describe the bug**
"Movement Speed" addon has an "unknown" source, despite having been installed via Ajour (via Curse).
**Expected behavior**
Source should be "CurseForge"
**Software involved**
- OS: Windows 10
- Ajour version: 1.2.0
- Classic ERA | main | addon with incorrect source describe the bug movement speed addon has an unknown source despite having been installed via ajour via curse expected behavior source should be curseforge software involved os windows ajour version classic era | 1
17,670 | 23,494,212,463 | IssuesEvent | 2022-08-17 22:17:58 | benthosdev/benthos | https://api.github.com/repos/benthosdev/benthos | opened | Add support for parquet logical types to parquet_encode processor | enhancement processors | In some cases, users will need to specify the logical type in the `schema` field. Details here: https://github.com/apache/parquet-format/blob/master/LogicalTypes.md
For example, when using `type: BYTE_ARRAY` to encode a string value, they might want to set the logical type to `STRING` so decoders will be able to interpret it correctly. For example, given this config:
```yaml
input:
generate:
mapping: root.test = "deadbeef"
count: 1
interval: 0s
pipeline:
processors:
- parquet_encode:
schema:
- name: test
type: BYTE_ARRAY
output:
file:
path: output.parquet
codec: all-bytes
```
will produce a parquet binary which, when decoded with parquet-tools will contain a base64-encoded value:
```shell
> docker run --rm -v$(pwd):/tmp/parquet nathanhowell/parquet-tools cat /tmp/parquet/output.parquet
test = ZGVhZGJlZWY=
```
however, if we change [this](https://github.com/benthosdev/benthos/blob/ba4b1d13570756ac273774d1a2c4772fef18680a/internal/impl/parquet/processor_encode.go#L121) line of code to `n = parquet.String()`, then parquet-tools will output `test = deadbeef`. | 1.0 | Add support for parquet logical types to parquet_encode processor - In some cases, users will need to specify the logical type in the `schema` field. Details here: https://github.com/apache/parquet-format/blob/master/LogicalTypes.md
For example, when using `type: BYTE_ARRAY` to encode a string value, they might want to set the logical type to `STRING` so decoders will be able to interpret it correctly. For example, given this config:
```yaml
input:
generate:
mapping: root.test = "deadbeef"
count: 1
interval: 0s
pipeline:
processors:
- parquet_encode:
schema:
- name: test
type: BYTE_ARRAY
output:
file:
path: output.parquet
codec: all-bytes
```
will produce a parquet binary which, when decoded with parquet-tools will contain a base64-encoded value:
```shell
> docker run --rm -v$(pwd):/tmp/parquet nathanhowell/parquet-tools cat /tmp/parquet/output.parquet
test = ZGVhZGJlZWY=
```
however, if we change [this](https://github.com/benthosdev/benthos/blob/ba4b1d13570756ac273774d1a2c4772fef18680a/internal/impl/parquet/processor_encode.go#L121) line of code to `n = parquet.String()`, then parquet-tools will output `test = deadbeef`. | non_main | add support for parquet logical types to parquet encode processor in some cases users will need to specify the logical type in the schema field details here for example when using type byte array to encode a string value they might want to set the logical type to string so decoders will be able to interpret it correctly for example given this config yaml input generate mapping root test deadbeef count interval pipeline processors parquet encode schema name test type byte array output file path output parquet codec all bytes will produce a parquet binary which when decoded with parquet tools will contain a encoded value shell docker run rm v pwd tmp parquet nathanhowell parquet tools cat tmp parquet output parquet test zgvhzgjlzwy however if we change line of code to n parquet string then parquet tools will output test deadbeef | 0 |
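The parquet-tools output above illustrates the distinction: without the `STRING` logical type the unannotated `BYTE_ARRAY` column round-trips as raw bytes that the tool prints base64-encoded, while the annotated column prints as plain text. The two renderings are the same bytes, as a quick stdlib-Python check confirms:

```python
import base64

# parquet-tools printed the unannotated BYTE_ARRAY column as base64.
unannotated_output = "ZGVhZGJlZWY="
decoded = base64.b64decode(unannotated_output)
print(decoded)  # b'deadbeef' — the same value a STRING-annotated column prints directly

# Re-encoding the plain string reproduces the base64 form from the output above.
print(base64.b64encode(b"deadbeef").decode())  # ZGVhZGJlZWY=
```

So adding the logical type changes how decoders interpret the column, not the stored bytes themselves.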
3,235 | 12,368,706,439 | IssuesEvent | 2020-05-18 14:13:30 | Kashdeya/Tiny-Progressions | https://api.github.com/repos/Kashdeya/Tiny-Progressions | closed | updated to 3.3.33 hotfix and mod dissapears from game even though in mod folder | Fixed Version not Maintainted | updated to the 6/6/19 update and now the entire mod vanished and when i do @tiny in the jei nothing shows up and also all cobblestone generators i had placed are all gone and the mod is acting like it is not installed but as soon as i revert back to previous version everything is back again | True | updated to 3.3.33 hotfix and mod dissapears from game even though in mod folder - updated to the 6/6/19 update and now the entire mod vanished and when i do @tiny in the jei nothing shows up and also all cobblestone generators i had placed are all gone and the mod is acting like it is not installed but as soon as i revert back to previous version everything is back again | main | updated to hotfix and mod dissapears from game even though in mod folder updated to the update and now the entire mod vanished and when i do tiny in the jei nothing shows up and also all cobblestone generators i had placed are all gone and the mod is acting like it is not installed but as soon as i revert back to previous version everything is back again | 1 |
178,432 | 21,509,397,642 | IssuesEvent | 2022-04-28 01:36:44 | gabriel-milan/is_ufrj_down | https://api.github.com/repos/gabriel-milan/is_ufrj_down | opened | WS-2021-0153 (High) detected in ejs-2.7.4.tgz | security vulnerability | ## WS-2021-0153 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ejs-2.7.4.tgz</b></p></summary>
<p>Embedded JavaScript templates</p>
<p>Library home page: <a href="https://registry.npmjs.org/ejs/-/ejs-2.7.4.tgz">https://registry.npmjs.org/ejs/-/ejs-2.7.4.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/ejs/package.json</p>
<p>
Dependency Hierarchy:
- cli-service-4.5.11.tgz (Root Library)
- webpack-bundle-analyzer-3.9.0.tgz
- :x: **ejs-2.7.4.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/gabriel-milan/is_ufrj_down/commit/927038f15864574c2161551b2f84d4189be22b6b">927038f15864574c2161551b2f84d4189be22b6b</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An Arbitrary Code Injection vulnerability was found in ejs before 3.1.6, caused by a filename which isn't sanitized for display.
<p>Publish Date: 2021-01-22
<p>URL: <a href=https://github.com/mde/ejs/commit/abaee2be937236b1b8da9a1f55096c17dda905fd>WS-2021-0153</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/mde/ejs/issues/571">https://github.com/mde/ejs/issues/571</a></p>
<p>Release Date: 2021-01-22</p>
<p>Fix Resolution: ejs - 3.1.6</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | WS-2021-0153 (High) detected in ejs-2.7.4.tgz - ## WS-2021-0153 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ejs-2.7.4.tgz</b></p></summary>
<p>Embedded JavaScript templates</p>
<p>Library home page: <a href="https://registry.npmjs.org/ejs/-/ejs-2.7.4.tgz">https://registry.npmjs.org/ejs/-/ejs-2.7.4.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/ejs/package.json</p>
<p>
Dependency Hierarchy:
- cli-service-4.5.11.tgz (Root Library)
- webpack-bundle-analyzer-3.9.0.tgz
- :x: **ejs-2.7.4.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/gabriel-milan/is_ufrj_down/commit/927038f15864574c2161551b2f84d4189be22b6b">927038f15864574c2161551b2f84d4189be22b6b</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An Arbitrary Code Injection vulnerability was found in ejs before 3.1.6, caused by a filename which isn't sanitized for display.
<p>Publish Date: 2021-01-22
<p>URL: <a href=https://github.com/mde/ejs/commit/abaee2be937236b1b8da9a1f55096c17dda905fd>WS-2021-0153</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/mde/ejs/issues/571">https://github.com/mde/ejs/issues/571</a></p>
<p>Release Date: 2021-01-22</p>
<p>Fix Resolution: ejs - 3.1.6</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | ws high detected in ejs tgz ws high severity vulnerability vulnerable library ejs tgz embedded javascript templates library home page a href path to dependency file package json path to vulnerable library node modules ejs package json dependency hierarchy cli service tgz root library webpack bundle analyzer tgz x ejs tgz vulnerable library found in head commit a href found in base branch master vulnerability details arbitrary code injection vulnerability was found in ejs before caused by filename which isn t sanitized for display publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution ejs step up your open source security game with whitesource | 0 |
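The root cause in this advisory is a display-path injection: an attacker-controlled filename reaches rendered output without sanitization. The actual fix is in the linked ejs commit; purely as a generic illustration of that defense class, here is a stdlib-Python sketch of escaping an untrusted filename before interpolating it into an HTML error message (none of this is ejs code):

```python
import html

def render_error(filename: str) -> str:
    """Build an error message for display, escaping the untrusted filename first.

    Illustrates the general mitigation only — the real patch is in the ejs
    commit referenced above; this helper is hypothetical.
    """
    return f"could not include {html.escape(filename)}"

hostile = "<script>alert(1)</script>.ejs"
print(render_error(hostile))
# could not include &lt;script&gt;alert(1)&lt;/script&gt;.ejs
```

The benign case is unchanged (`render_error("report.ejs")` yields `could not include report.ejs`); only markup-significant characters are neutralized.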
1,809 | 6,576,169,387 | IssuesEvent | 2017-09-11 18:47:00 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | git module always fails on update if submodules are checked out at different commit | affects_2.2 bug_report waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
git
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.2.0
```
(current `devel` branch)
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
Host: CentOS 7, Target: CentOS 7
##### SUMMARY
<!--- Explain the problem briefly -->
If a submodule's checked out commit is different than the SHA-1 committed to the super-repository, an update attempt of the super-repository using the git module always fails with `Local modifications exist`, even if `force=yes` was given.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
###### Preconditions
- Repo `repo.git`, which contains submodules, has already been cloned
- At least one submodule has been checked out at a different commit than committed to the repo `repo.git`; i.e. the instance of the checked out repo is "dirty" and `git status` shows something like:
```
# On branch master
# Your branch is behind 'origin/master' by 2 commits, and can be fast-forwarded.
# (use "git pull" to update your local branch)
#
# Changes not staged for commit:
# (use "git add <file>..." to update what will be committed)
# (use "git checkout -- <file>..." to discard changes in working directory)
#
# modified: <submodule> (new commits)
```
###### Example playbook to reproduce
<!--- Paste example playbooks or commands between quotes below -->
```
- hosts: all
gather_facts: False
tasks:
- name: checkout repo
git:
repo: user@host:/var/lib/git/repo.git
dest: /tmp/repo.git
accept_hostkey: yes
ssh_opts: "-o StrictHostKeyChecking=no"
force: yes
update: yes
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
The super-repository should be updated and submodules shall be checked out to the stored SHA-1 reference within the super-repository.
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
fatal: [cali]: FAILED! => {
"changed": false,
"failed": true,
"invocation": {
"module_name": "git"
},
"module_stderr": "OpenSSH_6.6.1, OpenSSL 1.0.1e-fips 11 Feb 2013\r\ndebug1: Reading configuration data /home/mf/.ssh/config\r\ndebug1: /home/mf/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 56: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 25086\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\nShared connection to cali closed.\r\n",
"module_stdout": "Traceback (most recent call last):\r\n File \"/tmp/ansible_rYIDTj/ansible_module_git.py\", line 1022, in <module>\r\n main()\r\n File \"/tmp/ansible_rYIDTj/ansible_module_git.py\", line 973, in main\r\n result.update(changed=True, after=remote_head, msg='Local modifications exist')\r\nUnboundLocalError: local variable 'remote_head' referenced before assignment\r\n",
"msg": "MODULE FAILURE"
}
```
| True | git module always fails on update if submodules are checked out at different commit - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
git
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.2.0
```
(current `devel` branch)
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
Host: CentOS 7, Target: CentOS 7
##### SUMMARY
<!--- Explain the problem briefly -->
If a submodule's checked out commit is different than the SHA-1 committed to the super-repository, an update attempt of the super-repository using the git module always fails with `Local modifications exist`, even if `force=yes` was given.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
###### Preconditions
- Repo `repo.git`, which contains submodules, has already been cloned
- At least one submodule has been checked out at a different commit than committed to the repo `repo.git`; i.e. the instance of the checked out repo is "dirty" and `git status` shows something like:
```
# On branch master
# Your branch is behind 'origin/master' by 2 commits, and can be fast-forwarded.
# (use "git pull" to update your local branch)
#
# Changes not staged for commit:
# (use "git add <file>..." to update what will be committed)
# (use "git checkout -- <file>..." to discard changes in working directory)
#
# modified: <submodule> (new commits)
```
###### Example playbook to reproduce
<!--- Paste example playbooks or commands between quotes below -->
```
- hosts: all
gather_facts: False
tasks:
- name: checkout repo
git:
repo: user@host:/var/lib/git/repo.git
dest: /tmp/repo.git
accept_hostkey: yes
ssh_opts: "-o StrictHostKeyChecking=no"
force: yes
update: yes
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
The super-repository should be updated and submodules shall be checked out to the stored SHA-1 reference within the super-repository.
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
fatal: [cali]: FAILED! => {
"changed": false,
"failed": true,
"invocation": {
"module_name": "git"
},
"module_stderr": "OpenSSH_6.6.1, OpenSSL 1.0.1e-fips 11 Feb 2013\r\ndebug1: Reading configuration data /home/mf/.ssh/config\r\ndebug1: /home/mf/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 56: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 25086\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\nShared connection to cali closed.\r\n",
"module_stdout": "Traceback (most recent call last):\r\n File \"/tmp/ansible_rYIDTj/ansible_module_git.py\", line 1022, in <module>\r\n main()\r\n File \"/tmp/ansible_rYIDTj/ansible_module_git.py\", line 973, in main\r\n result.update(changed=True, after=remote_head, msg='Local modifications exist')\r\nUnboundLocalError: local variable 'remote_head' referenced before assignment\r\n",
"msg": "MODULE FAILURE"
}
```
| main | git module always fails on update if submodules are checked out at different commit issue type bug report component name git ansible version ansible current devel branch configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific host centos target centos summary if a submodule s checked out commit is different than the sha committed to the super repository an update attempt of the super repository using the git module always fails with local modifications exist even if force yes was given steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used preconditions repo repo git which contains submodules has already been cloned at least one submodule has been checked out at a different commit than committed to the repo repo git i e the instance of the checked out repo is dirty and git status shows something like on branch master your branch is behind origin master by commits and can be fast forwarded use git pull to update your local branch changes not staged for commit use git add to update what will be committed use git checkout to discard changes in working directory modified new commits example playbook to reproduce hosts all gather facts false tasks name checkout repo git repo user host var lib git repo git dest tmp repo git accept hostkey yes ssh opts o stricthostkeychecking no force yes update yes expected results the super repository should be updated and submodules shall be checked out to the stored sha reference within the super repository actual results fatal failed changed false failed true invocation module name git module stderr openssh openssl fips feb r reading configuration data home mf ssh config r home mf ssh config line applying options for r reading configuration data etc ssh ssh config r etc ssh ssh config line applying options for r auto mux trying existing master r fd setting o nonblock r mux client hello exchange master version r mux client forwards request forwardings local remote r mux client request session entering r mux client request alive entering r mux client request alive done pid r mux client request session session request sent r mux client request session master session id r mux client read packet read header failed broken pipe r received exit status from master r nshared connection to cali closed r n module stdout traceback most recent call last r n file tmp ansible ryidtj ansible module git py line in r n main r n file tmp ansible ryidtj ansible module git py line in main r n result update changed true after remote head msg local modifications exist r nunboundlocalerror local variable remote head referenced before assignment r n msg module failure | 1
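The traceback at the end of the report is the classic Python failure mode where a name is assigned in only some branches and then read in another: `remote_head` is never bound on the code path that reports local modifications. A minimal stdlib-Python reproduction of that pattern, with the obvious guard (this sketches the bug class only, not the actual module code):

```python
def report_buggy(local_mods: bool) -> dict:
    """Mimics the failing pattern: `remote_head` is only set when there are no local mods."""
    if not local_mods:
        remote_head = "abc123"  # normally fetched from the remote
    # Bug: on the local-modifications path `remote_head` was never assigned.
    return {"changed": True, "after": remote_head, "msg": "Local modifications exist"}

def report_fixed(local_mods: bool) -> dict:
    """Guarded variant: bind a fallback before any branch can read the name."""
    remote_head = None
    if not local_mods:
        remote_head = "abc123"
    return {"changed": True, "after": remote_head, "msg": "Local modifications exist"}

try:
    report_buggy(local_mods=True)
except UnboundLocalError as exc:
    print(type(exc).__name__)  # UnboundLocalError, as in the traceback

print(report_fixed(local_mods=True)["after"])  # None
```

Initializing the name (or restructuring so every path assigns it) turns the crash back into the intended `Local modifications exist` result.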
680 | 4,226,774,567 | IssuesEvent | 2016-07-02 17:51:13 | duckduckgo/zeroclickinfo-spice | https://api.github.com/repos/duckduckgo/zeroclickinfo-spice | reopened | Places: show working time range | Maintainer Input Requested | Could we also show opening/closing time on queries like "McDonalds nearby".
------
IA Page: http://duck.co/ia/view/maps_places
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @nilnilnil | True | Places: show working time range - Could we also show opening/closing time on queries like "McDonalds nearby".
------
IA Page: http://duck.co/ia/view/maps_places
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @nilnilnil | main | places show working time range could we also show opening closing time on queries like mcdonalds nearby ia page nilnilnil | 1 |
2,143 | 7,381,751,910 | IssuesEvent | 2018-03-15 00:33:58 | ansible/ansible | https://api.github.com/repos/ansible/ansible | closed | Azure NSG Documentation missing some fields | affects_2.3 azure cloud deprecated docs needs_maintainer support:core | From @bearrito on 2016-10-13T14:45:43Z
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
http://docs.ansible.com/ansible/azure_rm_securitygroup_module.html
##### ANSIBLE VERSION
2.3
##### SUMMARY
The documentation does not indicate some fields related to rules:
- source_address_prefix - Does this support tags as in the PORTAL?
- direction - Assumed set is Inbound/Outbound
- access - Assumed set is Deny/Allow
Copied from original issue: ansible/ansible-modules-core#5252
| True | Azure NSG Documentation missing some fields - From @bearrito on 2016-10-13T14:45:43Z
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
http://docs.ansible.com/ansible/azure_rm_securitygroup_module.html
##### ANSIBLE VERSION
2.3
##### SUMMARY
The documentation does not indicate some fields related to rules:
- source_address_prefix - Does this support tags as in the PORTAL?
- direction - Assumed set is Inbound/Outbound
- access - Assumed set is Deny/Allow
Copied from original issue: ansible/ansible-modules-core#5252
| main | azure nsg documentation missing some fields from bearrito on issue type documentation report component name ansible version summary the documentation does not indicate some fields related to rules source address prefix does this support tags as in the portal direction assumed set is inbound outbound access assumed set is deny allow copied from original issue ansible ansible modules core | 1
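The assumed value sets in the report can be captured as a small validation helper, which is handy when generating rules for the module from data. This is a hypothetical stdlib-Python sketch built only on the report's assumptions (Inbound/Outbound, Allow/Deny); the authoritative sets are whatever the `azure_rm_securitygroup` module actually documents.

```python
# Hypothetical validator for the rule fields the report says are undocumented.
# The allowed sets below are the report's assumptions, not confirmed module docs.
ALLOWED_DIRECTIONS = {"Inbound", "Outbound"}
ALLOWED_ACCESS = {"Allow", "Deny"}

def validate_rule(rule: dict) -> list:
    """Return a list of problems found in a security-group rule dict."""
    problems = []
    if rule.get("direction") not in ALLOWED_DIRECTIONS:
        problems.append(f"direction must be one of {sorted(ALLOWED_DIRECTIONS)}")
    if rule.get("access") not in ALLOWED_ACCESS:
        problems.append(f"access must be one of {sorted(ALLOWED_ACCESS)}")
    return problems

ok_rule = {"direction": "Inbound", "access": "Allow", "source_address_prefix": "10.0.0.0/24"}
bad_rule = {"direction": "inbound", "access": "Permit"}
print(validate_rule(ok_rule))   # []
print(validate_rule(bad_rule))  # two problems: bad direction and bad access
```

Note the values appear to be case-sensitive in the portal, which is why the lowercase `"inbound"` above is flagged.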
5,007 | 25,733,011,085 | IssuesEvent | 2022-12-07 21:54:07 | backdrop-ops/contrib | https://api.github.com/repos/backdrop-ops/contrib | closed | Maintainer change request: Block reference (Current Maintainer agreed) | Maintainer change request | **Thank you for supporting the Backdrop community!**
Please note the procedure to add a new maintainer to a project:
1. Please join the Backdrop Contrib group (if you have not already) by
submitting [an application](https://github.com/backdrop-ops/contrib/issues/new?assignees=klonos&labels=Maintainer+application&template=application-to-join-the-contrib-group.md&title=Application+to+join+the+Contrib+Group%3A).
_DONE_
2. File an issue in the current project's issue queue offering to help maintain
that project.
_DONE_
3. Create a PR for that project that adds your name to the README.md file in
the list of maintainers. <!-- The project maintainer, or a backdrop-contrib
administrator, will merge this PR to accept your offer of help. -->
_DONE_
_However, the current maintainer does not have the permissions to merge._
4. If the project does not have a listed maintainer, or if a current maintainer
does not respond within 2 weeks, create *this issue* to take over the project.
**Please include a link to the issue you filed for the project.**
<!-- example: https://github.com/backdrop-contrib/feeds_jsonpath_parser/issues/7 -->
https://github.com/backdrop-contrib/blockreference/issues/14
**Please include a link to the PR that adds your name to the README.md file.**
<!-- example: https://github.com/backdrop-contrib/feeds_jsonpath_parser/pull/8 -->
https://github.com/backdrop-contrib/blockreference/pull/15
<!-- After confirming the project has been abandoned for a period of 2 weeks or
more, a Backdrop Contrib administrator will add your name to the list of
maintainers in that project's README.md file, and grant you admin access to the
project. -->
| True | Maintainer change request: Block reference (Current Maintainer agreed) - **Thank you for supporting the Backdrop community!**
Please note the procedure to add a new maintainer to a project:
1. Please join the Backdrop Contrib group (if you have not already) by
submitting [an application](https://github.com/backdrop-ops/contrib/issues/new?assignees=klonos&labels=Maintainer+application&template=application-to-join-the-contrib-group.md&title=Application+to+join+the+Contrib+Group%3A).
_DONE_
2. File an issue in the current project's issue queue offering to help maintain
that project.
_DONE_
3. Create a PR for that project that adds your name to the README.md file in
the list of maintainers. <!-- The project maintainer, or a backdrop-contrib
administrator, will merge this PR to accept your offer of help. -->
_DONE_
_However, the current maintainer does not have the permissions to merge._
4. If the project does not have a listed maintainer, or if a current maintainer
does not respond within 2 weeks, create *this issue* to take over the project.
**Please include a link to the issue you filed for the project.**
<!-- example: https://github.com/backdrop-contrib/feeds_jsonpath_parser/issues/7 -->
https://github.com/backdrop-contrib/blockreference/issues/14
**Please include a link to the PR that adds your name to the README.md file.**
<!-- example: https://github.com/backdrop-contrib/feeds_jsonpath_parser/pull/8 -->
https://github.com/backdrop-contrib/blockreference/pull/15
<!-- After confirming the project has been abandoned for a period of 2 weeks or
more, a Backdrop Contrib administrator will add your name to the list of
maintainers in that project's README.md file, and grant you admin access to the
project. -->
| main | maintainer change request block reference current maintainer agreed thank you for supporting the backdrop community please note the procedure to add a new maintainer to a project please join the backdrop contrib group if you have not already by submitting done file an issue in the current project s issue queue offering to help maintain that project done create a pr for that project that adds your name to the readme md file in the list of maintainers the project maintainer or a backdrop contrib administrator will merge this pr to accept your offer of help done however the current maintainer does not have the permissions to merge if the project does not have a listed maintainer or if a current maintainer does not respond within weeks create this issue to take over the project please include a link to the issue you filed for the project please include a link to the pr that adds your name to the readme md file after confirming the project has been abandoned for a period of weeks or more a backdrop contrib administrator will add your name to the list of maintainers in that project s readme md file and grant you admin access to the project | 1 |
580,175 | 17,211,730,953 | IssuesEvent | 2021-07-19 06:02:57 | eatmyvenom/hyarcade | https://api.github.com/repos/eatmyvenom/hyarcade | closed | Add more maintenance control | Low priority disc:overall enhancement t:discord | For doing advanced tasks with the bot, there is not much ability for trusted users/developers. There should be far more commands to do tasks like set pfp or edit more parts of the database. | 1.0 | Add more maintenance control - For doing advanced tasks with the bot, there is not much ability for trusted users/developers. There should be far more commands to do tasks like set pfp or edit more parts of the database. | non_main | add more maintenance control for doing advanced tasks with the bot there is not much ability for trusted users developers there should be far more commands to do tasks like set pfp or edit more parts of the database | 0
2,781 | 9,973,343,120 | IssuesEvent | 2019-07-09 08:07:39 | dgets/lasttime | https://api.github.com/repos/dgets/lasttime | opened | Admin routines for full database consolidation should be possible | enhancement maintainability | This would just be an admin user level routine possible for consolidation of all entries (see also #131) in a given substance, category, or the entire database. This will have to wait for awhile until the different user level groups are implemented in different areas within this project, probably. It may be a good issue to start implementing this functionality with, however. | True | Admin routines for full database consolidation should be possible - This would just be an admin user level routine possible for consolidation of all entries (see also #131) in a given substance, category, or the entire database. This will have to wait for awhile until the different user level groups are implemented in different areas within this project, probably. It may be a good issue to start implementing this functionality with, however. | main | admin routines for full database consolidation should be possible this would just be an admin user level routine possible for consolidation of all entries see also in a given substance category or the entire database this will have to wait for awhile until the different user level groups are implemented in different areas within this project probably it may be a good issue to start implementing this functionality with however | 1 |
33,578 | 4,499,713,401 | IssuesEvent | 2016-09-01 00:10:07 | elegantthemes/Divi-Beta | https://api.github.com/repos/elegantthemes/Divi-Beta | closed | Page Settings Icon Snap | DESIGN SIGNOFF FEATURE READY FOR REVIEW | Similar to how the settings modal can be customized to snap to any side of the page, it would be cool to have the same option for the page settings icons that currently slide in from the bottom.
Once they slide in, the main settings icon could be used as an anchor to drag the buttons to any side of the page. It could snap and animate using React motion similar to how the "Facebook Home" avatars used to snap.
| 1.0 | Page Settings Icon Snap - Similar to how the settings modal can be customized to snap to any side of the page, it would be cool to have the same option for the page settings icons that currently slide in from the bottom.
Once they slide in, the main settings icon could be used as an anchor to drag the buttons to any side of the page. It could snap and animate using React motion similar to how the "Facebook Home" avatars used to snap.
| non_main | page settings icon snap similar to how the settings modal can be customized to snap to any side of the page it would be cool to have the same option for the page settings icons that currently slide in from the bottom once they slide it the main settings icon could be used as an anchor to drag the buttons to any side of the page it could snap and animate using react motion similar to how the facebook home avatars used to snap | 0 |
3,020 | 11,185,112,137 | IssuesEvent | 2019-12-31 22:27:17 | laminas/laminas-stdlib | https://api.github.com/repos/laminas/laminas-stdlib | opened | ArrayUtils::inArray should use strict in_array? | Awaiting Maintainer Response BC Break | Related: https://github.com/zendframework/zend-form/issues/18
In [`Zend\Form\View\Helper\FormSelect`](https://github.com/zendframework/zend-form/blob/767ed409dc19b127d36ed4cdc01de0318207b50b/src/View/Helper/FormSelect.php#L206) `ArrayUtils::inArray` is called with the strict parameter `false`. This causes an issue with string matching ('1.1' and '1.10' treated equivalent):
``` php
<?php
$needle = '1.10';
$haystack = ['1.1'];
assert(in_array($needle, $haystack) === false);
// PHP Warning: assert(): Assertion failed in <file> on line 5
```
(3v4l: https://3v4l.org/HKM8Q)
Simply changing FormSelect to use strict=true breaks [an existing test which uses integer keys in the value options array](https://github.com/zendframework/zend-form/blob/767ed409dc19b127d36ed4cdc01de0318207b50b/test/View/Helper/FormSelectTest.php#L367-L385).
Since `ArrayUtils::inArray` uses string casting to work around `in_array`'s wonky non-strict behaviour, shouldn't the call to `in_array` always have strict=true?
``` diff
diff --git a/src/ArrayUtils.php b/src/ArrayUtils.php
index 17e3ae3..69b9721 100644
--- a/src/ArrayUtils.php
+++ b/src/ArrayUtils.php
@@ -199,7 +199,7 @@ abstract class ArrayUtils
}
}
}
- return in_array($needle, $haystack, $strict);
+ return in_array($needle, $haystack, true);
}
/**
```
I've tested this change here (all tests pass) and against `zend-form` (where it fixes the reported issue).
What's the protocol for testing changes to packages which may have knock-on effects on other packages? Pull every repo that has a dependency on stdlib, apply the patch, and run the tests?
---
Originally posted by @adamlundrigan at https://github.com/zendframework/zend-stdlib/issues/41 | True | ArrayUtils::inArray should use strict in_array? - Related: https://github.com/zendframework/zend-form/issues/18
In [`Zend\Form\View\Helper\FormSelect`](https://github.com/zendframework/zend-form/blob/767ed409dc19b127d36ed4cdc01de0318207b50b/src/View/Helper/FormSelect.php#L206) `ArrayUtils::inArray` is called with the strict parameter `false`. This causes an issue with string matching ('1.1' and '1.10' treated equivalent):
``` php
<?php
$needle = '1.10';
$haystack = ['1.1'];
assert(in_array($needle, $haystack) === false);
// PHP Warning: assert(): Assertion failed in <file> on line 5
```
(3v4l: https://3v4l.org/HKM8Q)
Simply changing FormSelect to use strict=true breaks [an existing test which uses integer keys in the value options array](https://github.com/zendframework/zend-form/blob/767ed409dc19b127d36ed4cdc01de0318207b50b/test/View/Helper/FormSelectTest.php#L367-L385).
Since `ArrayUtils::inArray` uses string casting to work around `in_array`'s wonky non-strict behaviour, shouldn't the call to `in_array` always have strict=true?
``` diff
diff --git a/src/ArrayUtils.php b/src/ArrayUtils.php
index 17e3ae3..69b9721 100644
--- a/src/ArrayUtils.php
+++ b/src/ArrayUtils.php
@@ -199,7 +199,7 @@ abstract class ArrayUtils
}
}
}
- return in_array($needle, $haystack, $strict);
+ return in_array($needle, $haystack, true);
}
/**
```
I've tested this change here (all tests pass) and against `zend-form` (where it fixes the reported issue).
What's the protocol for testing changes to packages which may have knock-on effects on other packages? Pull every repo that has a dependency on stdlib, apply the patch, and run the tests?
---
Originally posted by @adamlundrigan at https://github.com/zendframework/zend-stdlib/issues/41 | main | arrayutils inarray should use strict in array related in arrayutils inarray is called with the strict parameter false this causes an issue with string matching and treated equivalent php php needle haystack assert in array needle haystack false php warning assert assertion failed in on line simply changing formselect to use strict true breaks since arrayutils inarray uses string casting to work around in array s wonky non strict behaviour shouldn t the call to in array always have strict true diff diff git a src arrayutils php b src arrayutils php index a src arrayutils php b src arrayutils php abstract class arrayutils return in array needle haystack strict return in array needle haystack true i ve tested this change here all tests pass and against zend form where it fixes the reported issue what s the protocol for testing changes to packages which may have knock on effects on other packages pull every repo that has a dependency on stdlib apply the patch and run the tests originally posted by adamlundrigan at | 1 |
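The '1.1' vs '1.10' collision discussed in the row above comes from PHP's loose `==` comparing two numeric strings numerically. The following Python sketch emulates that coercion purely for illustration (the `php_loose_*` helpers are invented names, not part of any library):

```python
def php_loose_eq(a: str, b: str) -> bool:
    """Crude emulation of PHP's loose '==' for two strings (illustration only)."""
    try:
        # PHP compares two numeric strings numerically, so '1.10' == '1.1'
        return float(a) == float(b)
    except ValueError:
        return a == b

def php_loose_in_array(needle: str, haystack: list) -> bool:
    # roughly what in_array($needle, $haystack) does with $strict = false
    return any(php_loose_eq(needle, item) for item in haystack)

# non-strict matching treats '1.10' and '1.1' as equal, reproducing the bug...
assert php_loose_in_array('1.10', ['1.1']) is True
# ...while strict membership (plain string identity) keeps them distinct
assert ('1.10' in ['1.1']) is False
```

This is the behaviour the report points at: since `ArrayUtils::inArray` already string-casts to work around loose comparison, forcing `strict=true` in the final `in_array` call removes the numeric-string collision without changing the workaround.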
2,213 | 7,809,303,959 | IssuesEvent | 2018-06-11 23:46:39 | tgstation/tgstation | https://api.github.com/repos/tgstation/tgstation | closed | Disposal "CLONG, Clong!" size being based on proximity means you still get logspammed by them if there is lots of clonging, especially with multiple people | Bug Maintainability/Hinders improvements | CLONG, clong!
CLONG, clong!
CLONG, clong!
CLONG, clong!
CLONG, clong!x2
CLONG, clong!x6
CLONG, clong!x2
CLONG, clong!x3
CLONG, clong!x3
CLONG, clong!x11
is the result of orbiting a person going through disposals clonging along with other people
it stacks up properly when they're at an outlet, since they all go on the same tile | True | Disposal "CLONG, Clong!" size being based on proximity means you still get logspammed by them if there is lots of clonging, especially with multiple people - CLONG, clong!
CLONG, clong!
CLONG, clong!
CLONG, clong!
CLONG, clong!x2
CLONG, clong!x6
CLONG, clong!x2
CLONG, clong!x3
CLONG, clong!x3
CLONG, clong!x11
is the result of orbiting a person going through disposals clonging along with other people
it stacks up properly when they're at an outlet, since they all go on the same tile | main | disposal clong clong size being based on proximity means you still get logspammed by them if there is lots of clonging especially with multiple people clong clong clong clong clong clong clong clong clong clong clong clong clong clong clong clong clong clong clong clong is the result of orbiting a person going through disposals clonging along with other people it stacks up properly when they re at an outlet since they all go on the same tile | 1 |
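The `x2`/`x11` suffixes in the row above are the chat log collapsing consecutive duplicate messages; spam appears when messages from different sources interleave and break the runs. A minimal sketch of that kind of stacking (the function name is made up; this is not the game's actual code):

```python
from itertools import groupby

def stack_messages(messages):
    """Collapse runs of identical chat messages into 'message xN' form."""
    stacked = []
    for text, run in groupby(messages):
        count = sum(1 for _ in run)
        stacked.append(text if count == 1 else f"{text}x{count}")
    return stacked

# clongs arriving in one run collapse into a single line...
assert stack_messages(["CLONG, clong!"] * 3) == ["CLONG, clong!x3"]
# ...but an interleaved message splits the run, producing the reported spam
assert stack_messages(["CLONG, clong!", "thud", "CLONG, clong!"]) == [
    "CLONG, clong!", "thud", "CLONG, clong!"
]
```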
1,359 | 9,977,263,394 | IssuesEvent | 2019-07-09 16:49:30 | elastic/apm-agent-nodejs | https://api.github.com/repos/elastic/apm-agent-nodejs | opened | Ensure npm is only updated on Node.js 8.1.* or below in Jenkins | automation ci | For details see this discussion:
https://github.com/elastic/apm-agent-nodejs/pull/1202#discussion_r301544610
In bash, I'd do something like this, but I'm not sure about Dockerfiles:
```bash
FULL_NODE_VERSION=`node --version`
[[ $FULL_NODE_VERSION =~ [0-9]+ ]]
MAJOR_NODE_VERSION="${BASH_REMATCH[0]}"
if [[ $MAJOR_NODE_VERSION -le 7 ]] || [[ $FULL_NODE_VERSION == v8.1.* ]]
then
npm install npm@latest
fi
``` | 1.0 | Ensure npm is only updated on Node.js 8.1.* or below in Jenkins - For details see this discussion:
https://github.com/elastic/apm-agent-nodejs/pull/1202#discussion_r301544610
In bash, I'd do something like this, but I'm not sure about Dockerfiles:
```bash
FULL_NODE_VERSION=`node --version`
[[ $FULL_NODE_VERSION =~ [0-9]+ ]]
MAJOR_NODE_VERSION="${BASH_REMATCH[0]}"
if [[ $MAJOR_NODE_VERSION -le 7 ]] || [[ $FULL_NODE_VERSION == v8.1.* ]]
then
npm install npm@latest
fi
``` | non_main | ensure npm is only updated on node js or below in jenkins for details see this discussion in bash i d do something like this but i m not sure about dockerfiles bash full node version node version major node version bash rematch if then npm install npm latest fi | 0 |
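For comparison only, the version gate in the bash snippet above — update npm on Node.js 7.x and below, or on exactly 8.1.x — can be written like this in Python (`should_update_npm` is an invented name; the project itself uses shell in its Dockerfiles):

```python
import re

def should_update_npm(node_version: str) -> bool:
    """True for Node.js <= 7.x or exactly 8.1.x, mirroring the bash condition."""
    match = re.match(r"v?(\d+)\.(\d+)\.", node_version)
    if match is None:
        raise ValueError(f"unrecognised version string: {node_version!r}")
    major, minor = int(match.group(1)), int(match.group(2))
    return major <= 7 or (major == 8 and minor == 1)

assert should_update_npm("v8.1.4") is True    # exactly 8.1.x qualifies
assert should_update_npm("v6.14.3") is True   # anything 7.x and below qualifies
assert should_update_npm("v8.2.0") is False   # newer 8.x is left alone per the rule
assert should_update_npm("v10.16.0") is False
```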
600 | 4,099,697,948 | IssuesEvent | 2016-06-03 13:42:02 | duckduckgo/zeroclickinfo-goodies | https://api.github.com/repos/duckduckgo/zeroclickinfo-goodies | closed | Sort: Shouldn't trigger on single numbers | Maintainer Input Requested Triggering | We shouldn't trigger if there is only one input number, eg., https://duckduckgo.com/?q=sort+1&ia=answer
------
IA Page: http://duck.co/ia/view/sort
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @koosha-- | True | Sort: Shouldn't trigger on single numbers - We shouldn't trigger if there is only one input number, eg., https://duckduckgo.com/?q=sort+1&ia=answer
------
IA Page: http://duck.co/ia/view/sort
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @koosha-- | main | sort shouldn t trigger on single numbers we shouldn t trigger if there is only one input number eg ia page koosha | 1 |
352,177 | 10,533,048,417 | IssuesEvent | 2019-10-01 12:14:25 | AY1920S1-CS2113T-W13-2/main | https://api.github.com/repos/AY1920S1-CS2113T-W13-2/main | opened | As a user, I want to check reminders to read topics... | priority.High | so that I am reminded of what topics to read before a previously set deadline | 1.0 | As a user, I want to check reminders to read topics... - so that I am reminded of what topics to read before a previously set deadline | non_main | as a user i want to check reminders to read topics so that i am reminded of what topics to read before a previously set deadline | 0 |
2,749 | 9,793,710,156 | IssuesEvent | 2019-06-10 20:39:22 | Fuzzik/ttt-weapon-placer | https://api.github.com/repos/Fuzzik/ttt-weapon-placer | closed | Reformat all repo code | maintainability | - Change spaces to tabs
- Remove spaces before and after parentheses
- Add spaces before and after equal signs | True | Reformat all repo code - - Change spaces to tabs
- Remove spaces before and after parentheses
- Add spaces before and after equal signs | main | reformat all repo code change spaces to tabs remove spaces before and after parentheses add spaces before and after equal signs | 1 |
158,152 | 20,007,811,462 | IssuesEvent | 2022-02-01 00:22:58 | RG4421/openedr | https://api.github.com/repos/RG4421/openedr | opened | WS-2018-0652 (High) detected in curlcurl-7_63_0 | security vulnerability | ## WS-2018-0652 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>curlcurl-7_63_0</b></p></summary>
<p>
<p>A command line tool and library for transferring data with URL syntax, supporting HTTP, HTTPS, FTP, FTPS, GOPHER, TFTP, SCP, SFTP, SMB, TELNET, DICT, LDAP, LDAPS, FILE, IMAP, SMTP, POP3, RTSP and RTMP. libcurl offers a myriad of powerful features</p>
<p>Library home page: <a href=https://github.com/curl/curl.git>https://github.com/curl/curl.git</a></p>
<p>Found in base branch: <b>main</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/openedr/edrav2/eprj/curl/lib/url.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/openedr/edrav2/eprj/curl/lib/url.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
curl in versions curl-7_40_0 to curl-7_63_0 is vulnerable to UNKNOWN READ related to lib/url.c
<p>Publish Date: 2018-12-23
<p>URL: <a href=https://github.com/curl/curl/commit/f3ce38739fa49008e36959aa8189c01ab1bad5b5>WS-2018-0652</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://osv.dev/vulnerability/OSV-2018-40">https://osv.dev/vulnerability/OSV-2018-40</a></p>
<p>Release Date: 2018-12-23</p>
<p>Fix Resolution: curl-7_64_0</p>
</p>
</details>
<p></p>
| True | WS-2018-0652 (High) detected in curlcurl-7_63_0 - ## WS-2018-0652 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>curlcurl-7_63_0</b></p></summary>
<p>
<p>A command line tool and library for transferring data with URL syntax, supporting HTTP, HTTPS, FTP, FTPS, GOPHER, TFTP, SCP, SFTP, SMB, TELNET, DICT, LDAP, LDAPS, FILE, IMAP, SMTP, POP3, RTSP and RTMP. libcurl offers a myriad of powerful features</p>
<p>Library home page: <a href=https://github.com/curl/curl.git>https://github.com/curl/curl.git</a></p>
<p>Found in base branch: <b>main</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/openedr/edrav2/eprj/curl/lib/url.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/openedr/edrav2/eprj/curl/lib/url.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
curl in versions curl-7_40_0 to curl-7_63_0 is vulnerable to UNKNOWN READ related to lib/url.c
<p>Publish Date: 2018-12-23
<p>URL: <a href=https://github.com/curl/curl/commit/f3ce38739fa49008e36959aa8189c01ab1bad5b5>WS-2018-0652</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://osv.dev/vulnerability/OSV-2018-40">https://osv.dev/vulnerability/OSV-2018-40</a></p>
<p>Release Date: 2018-12-23</p>
<p>Fix Resolution: curl-7_64_0</p>
</p>
</details>
<p></p>
| non_main | ws high detected in curlcurl ws high severity vulnerability vulnerable library curlcurl a command line tool and library for transferring data with url syntax supporting http https ftp ftps gopher tftp scp sftp smb telnet dict ldap ldaps file imap smtp rtsp and rtmp libcurl offers a myriad of powerful features library home page a href found in base branch main vulnerable source files openedr eprj curl lib url c openedr eprj curl lib url c vulnerability details curl in versions curl to curl is vulnerable to unknown read related to lib url c publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution curl | 0 |
1,333 | 5,718,505,743 | IssuesEvent | 2017-04-19 19:42:26 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | Incorrect handling of quotes in lineinfile module | affects_1.9 bug_report waiting_on_maintainer | ### Issue Type
Bug Report
### Component Name
lineinfile module
### Ansible Version
1.9.3
### Summary
I am using Ansible 1.9.3.
I have following content in my `test.yml` file:
``` yaml
---
- hosts: all
connection: local
tasks:
- lineinfile:
dest: ./example.txt
create: yes
line: "something '\"content\"' something else"
```
And I run it as follows:
`ansible-playbook -i localhost, test.yml`
### Expected result
`example.txt` contains `something '"content"' something else`.
### Actual result
`example.txt` contains `something content something else`. Without any quotes.
### Investigation
It seems the problem lies in the `lineinfile` module adding quotes and then executing `line = module.safe_eval(line)`. Relevant [code](https://github.com/ansible/ansible-modules-core/blob/cf88f2786822ab5f4a1cd711761a40df49bd93f0/files/lineinfile.py).
After adding quotes, the line looks like `'something '"content"' something else'` and, when passed to `module.safe_eval()`, Python's implicit string concatenation is applied and it loses all quotes.
---
**Haven't checked if problem exists in development version, it seems module was significantly rewritten.**
| True | Incorrect handling of quotes in lineinfile module - ### Issue Type
Bug Report
### Component Name
lineinfile module
### Ansible Version
1.9.3
### Summary
I am using Ansible 1.9.3.
I have following content in my `test.yml` file:
``` yaml
---
- hosts: all
connection: local
tasks:
- lineinfile:
dest: ./example.txt
create: yes
line: "something '\"content\"' something else"
```
And I run it as follows:
`ansible-playbook -i localhost, test.yml`
### Expected result
`example.txt` contains `something '"content"' something else`.
### Actual result
`example.txt` contains `something content something else`. Without any quotes.
### Investigation
It seems the problem lies in the `lineinfile` module adding quotes and then executing `line = module.safe_eval(line)`. Relevant [code](https://github.com/ansible/ansible-modules-core/blob/cf88f2786822ab5f4a1cd711761a40df49bd93f0/files/lineinfile.py).
After adding quotes, the line looks like `'something '"content"' something else'` and, when passed to `module.safe_eval()`, Python's implicit string concatenation is applied and it loses all quotes.
---
**Haven't checked if problem exists in development version, it seems module was significantly rewritten.**
| main | incorrect handling of quotes in lineinfile module issue type bug report component name lineinfile module ansible version summary i am using ansible i have following content in my test yml file yaml hosts all connection local tasks lineinfile dest example txt create yes line something content something else and i run it as follows ansible playbook i localhost test yml expected result example txt contains something content something else actual result example txt contains something content something else without any quotes investigation it seems the problem lies in the lineinfile module adding quotes and then executing line module safe eval line relevant after adding quotes line looks like something content something else and when passed to module safe eval the pythons s implicit string concatenation is applied and it looses all quotes haven t checked if problem exists in development version it seems module was significantly rewritten | 1 |
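The quote-eating in the row above is easy to reproduce in plain Python: wrapping the user's line in single quotes and evaluating it lets implicit string-literal concatenation merge the pieces. Here `ast.literal_eval` stands in for the module's `safe_eval` — an assumption made for the demo; the real helper does more than evaluate literals:

```python
from ast import literal_eval

user_line = "something '\"content\"' something else"
# the old lineinfile code effectively wrapped the value in quotes before evaluating
wrapped = "'%s'" % user_line
# wrapped now parses as three adjacent literals: 'something ' "content" ' something else'
# implicit concatenation joins them, silently dropping the user's quotes
assert literal_eval(wrapped) == "something content something else"
# the raw, unevaluated value keeps both kinds of quotes intact
assert "'" in user_line and '"' in user_line
```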
5,048 | 25,866,658,740 | IssuesEvent | 2022-12-13 21:33:08 | backdrop-ops/contrib | https://api.github.com/repos/backdrop-ops/contrib | closed | Maintainer change request: Markdown Filter | Maintainer change request | I would like to take over as maintainer for the Markdown Filter module.
[Here is my request in the module issue queue to become a maintainer.](https://github.com/backdrop-contrib/markdown/issues/9) It has aged the required 2-week period (during which it has developed a complex, full-bodied bouquet with hints of chocolate, currants, and green pepper.)
[Here is the PR that updates the README file.](https://github.com/backdrop-contrib/markdown/pull/11)
Please give me admin privileges for the module (and then I will merge the PR, plus the several other outstanding PRs).
Thanks! | True | Maintainer change request: Markdown Filter - I would like to take over as maintainer for the Markdown Filter module.
[Here is my request in the module issue queue to become a maintainer.](https://github.com/backdrop-contrib/markdown/issues/9) It has aged the required 2-week period (during which it has developed a complex, full-bodied bouquet with hints of chocolate, currants, and green pepper.)
[Here is the PR that updates the README file.](https://github.com/backdrop-contrib/markdown/pull/11)
Please give me admin privileges for the module (and then I will merge the PR, plus the several other outstanding PRs).
Thanks! | main | maintainer change request markdown filter i would like to take over as maintainer for the markdown filter module it has aged the required week period during which it has developed a complex full bodied bouquet with hints of chocolate currants and green pepper please give me admin privileges for the module and then i will merge the pr plus the several other outstanding prs thanks | 1 |
9,580 | 7,754,420,403 | IssuesEvent | 2018-05-31 06:36:20 | cvidler/rtmarchive | https://api.github.com/repos/cvidler/rtmarchive | opened | rtmarchive / archiveamd / queryrumc password usage behavior | enhancement security | rewrite queryrumc archiveamd and rtmarchive to not require the saving of plain text passwords in amdlist.cfg.
i.e. decrypt the password as required in archiveamd.sh. | True | rtmarchive / archiveamd / queryrumc password usage behavior - rewrite queryrumc archiveamd and rtmarchive to not require the saving of plain text passwords in amdlist.cfg.
i.e. decrypt the password as required in archiveamd.sh. | non_main | rtmarchive archiveamd queryrumc password usage behavior rewrite queryrumc archiveamd and rtmarchive to not require the saving of plain text passwords in amdlist cfg i e decrypt the password as required in archiveamd sh | 0 |
2,729 | 9,661,663,984 | IssuesEvent | 2019-05-20 18:39:12 | spack/spack | https://api.github.com/repos/spack/spack | closed | "Too Many" New Packages / Updated Versions | feature maintainers | @tgamblin
We are drowning in new-package and version-update PRs. I suggest we recruit more volunteers to process these routine PRs and get them merged quickly. I'm happy to volunteer for that (but I believe we need more than just one new recruit for this job; it shouldn't be a big burden on anyone).
| True | "Too Many" New Packages / Updated Versions - @tgamblin
We are drowning in new-package and version-update PRs. I suggest we recruit more volunteers to process these routine PRs and get them merged quickly. I'm happy to volunteer for that (but I believe we need more than just one new recruit for this job; it shouldn't be a big burden on anyone).
| main | too many new packages updated versions tgamblin we are drowning in new package and version update prs i suggest we recruit more volunteers to process these routine prs and get them merged quickly i m happy to volunteer for that but i believe we need more than just one new recruit for this job it shouldn t be a big burden on anyone | 1 |
258,444 | 8,175,389,519 | IssuesEvent | 2018-08-28 01:48:19 | ITDP/the-online-brt-planning-guide | https://api.github.com/repos/ITDP/the-online-brt-planning-guide | closed | Switch TeX math rendering engines for HTML | discussion low priority manu | From the beginning, we had the option of picking either MathJax or KaTeX. We ended up picking MathJax because it had been around longer, and because we couldn't evaluate performance requirements at that time.
However, we could now use a performance boost, and KaTeX might be faster than MathJax by an order of magnitude. So, I think that at this point we should evaluate the switch and make some experiments... | 1.0 | Switch TeX math rendering engines for HTML - From the beginning, we had the option of picking either MathJax or KaTeX. We ended up picking MathJax because it had been around longer, and because we couldn't evaluate performance requirements at that time.
However, we could now use a performance boost, and KaTeX might be faster than MathJax by an order of magnitude. So, I think that at this point we should evaluate the switch and make some experiments... | non_main | switch tex math rendering engines for html from the beginning we had the option of picking either mathjax or katex we ended up picking mathjax because it had been around for more time and because we couldn t evaluate performance requirements at that time however we could now use a performance boost and katex might be faster than mathjax by a magnitude order so i think that at this point we should evaluate the switch and make some experiments | 0
245,102 | 20,745,790,504 | IssuesEvent | 2022-03-14 22:48:35 | sourcegraph/sec-pr-audit-trail | https://api.github.com/repos/sourcegraph/sec-pr-audit-trail | closed | sourcegraph/sourcegraph#32000: "feat: add pings for IDE extensions usage" | exception/test-plan sourcegraph/sourcegraph | https://github.com/sourcegraph/sourcegraph/pull/32000 "feat: add pings for IDE extensions usage" **has no test plan**.
Learn more about test plans in our [testing guidelines](https://docs.sourcegraph.com/dev/background-information/testing_principles#test-plans).
@abeatrix please comment in this issue with an explanation for this exception and close this issue. | 1.0 | sourcegraph/sourcegraph#32000: "feat: add pings for IDE extensions usage" - https://github.com/sourcegraph/sourcegraph/pull/32000 "feat: add pings for IDE extensions usage" **has no test plan**.
Learn more about test plans in our [testing guidelines](https://docs.sourcegraph.com/dev/background-information/testing_principles#test-plans).
@abeatrix please comment in this issue with an explanation for this exception and close this issue. | non_main | sourcegraph sourcegraph feat add pings for ide extensions usage feat add pings for ide extensions usage has no test plan learn more about test plans in our abeatrix please comment in this issue with an explanation for this exception and close this issue | 0 |
5,670 | 29,496,197,473 | IssuesEvent | 2023-06-02 17:12:07 | revoltchat/translations | https://api.github.com/repos/revoltchat/translations | closed | badge: Latitn for Rymko | status: no maintainer | ### What language(s) did you translate?
Latin
### What is your Revolt ID?
184742
### Link to Weblate profile
https://weblate.insrt.uk/user/Rymko/
### Validations
- [ ] The language has a maintainer (see the full list [here](https://github.com/revoltchat/translations#languages)).
- [X] You agree that your translations are to a reasonable standard. | True | badge: Latitn for Rymko - ### What language(s) did you translate?
Latin
### What is your Revolt ID?
184742
### Link to Weblate profile
https://weblate.insrt.uk/user/Rymko/
### Validations
- [ ] The language has a maintainer (see the full list [here](https://github.com/revoltchat/translations#languages)).
- [X] You agree that your translations are to a reasonable standard. | main | badge latitn for rymko what language s did you translate latin what is your revolt id link to weblate profile validations the language has a maintainer see the full list you agree that your translations are to a reasonable standard | 1 |
2,553 | 8,690,190,503 | IssuesEvent | 2018-12-03 20:51:08 | hassio-addons/addon-grafana | https://api.github.com/repos/hassio-addons/addon-grafana | closed | Wish to upgrade grafana version to 5.4 | Type: Maintaince | Could Grafana be upgraded to the 5.4 version? At the moment this version is still in `beta`, but brings a nice feature with it. From this version it is possible to use the graphical query editor for those who use a MySQL database. [More info](https://github.com/grafana/grafana/issues/13762) | True | Wish to upgrade grafana version to 5.4 - Could Grafana be upgraded to the 5.4 version? At the moment this version is still in `beta`, but brings a nice feature with it. From this version it is possible to use the graphical query editor for those who use a MySQL database. [More info](https://github.com/grafana/grafana/issues/13762) | main | wish to upgrade grafana version to could grafana be upgraded to the version at the moment this version is still in beta but brings a nice feature with it from this version it is possible to use the graphical query editor for those who use a mysql database | 1 |
2,447 | 8,639,860,719 | IssuesEvent | 2018-11-23 22:09:34 | F5OEO/rpitx | https://api.github.com/repos/F5OEO/rpitx | closed | File system gets corrupted | V1 related (not maintained) | Hello,
i have a Raspberry Pi 3B with stretch, kernal version Linux 4.9.80-v7+, firmware Mar 13 2018 (6e08617e7767b09ef97b3d6cee8b75eba6d7ee0b).
After successfully sending a doorbell command (3 seconds long) for a couple of times, the root file system /dev/mmcblk0p2 was automatically remounted read-only and write protected by the system. Fsck on the read-only FS showed errors. After a reboot with file check and recovery, the system was ok and running.
/var/log/syslog had several suspicious entries with the first entry being:
> mmc0: timeout waiting for hardware interrupt.
I tried to reproduce the error by sending the doorbell sequence again. This time I checked syslog immediately, which showed new error lines:
> Mar 19 10:02:29 raspi2 kernel: [29063.150745] mmc0: timeout waiting for hardware interrupt.
> Mar 19 10:02:29 raspi2 kernel: [29063.150762] [033a6af0] DMA< b945ea40 0
> Mar 19 10:02:29 raspi2 kernel: [29063.150769] [033a6af1] DMA 99 10801
> Mar 19 10:02:29 raspi2 kernel: [29063.150776] [033a6af3] FDA< b945e968 0
> Mar 19 10:02:29 raspi2 kernel: [29063.150783] [033a6af4] TCM< b945ea40 0
> Mar 19 10:02:29 raspi2 kernel: [29063.150789] [033a6af5] CMD< c 0
> Mar 19 10:02:29 raspi2 kernel: [29063.150796] [033a6af7] TCM> b945ea40 0
[...]
> Mar 19 10:02:29 raspi2 kernel: [29063.152471] [0480b8e1] TIM< 0 0
> Mar 19 10:02:29 raspi2 kernel: [29063.152484] mmc0: sbc op 23 arg 0x30 flags 0x15 - resp 00000000 00000000 00000000 00000000, err 0
> Mar 19 10:02:29 raspi2 kernel: [29063.152494] mmc0: cmd op 25 arg 0x4748b8 flags 0xb5 - resp 00000900 00000000 00000000 00000000, err 0
> Mar 19 10:02:29 raspi2 kernel: [29063.152500] mmc0: data blocks 30 blksz 200 - err 0
> Mar 19 10:02:29 raspi2 kernel: [29063.152510] mmc0: stop op 12 arg 0x0 flags 0x49d - resp 00000000 00000000 00000000 00000000, err 0
> Mar 19 10:02:29 raspi2 kernel: [29063.152514] mmc0: =========== REGISTER DUMP ===========
> Mar 19 10:02:29 raspi2 kernel: [29063.152519] mmc0: SDCMD 0x00000099
> Mar 19 10:02:29 raspi2 kernel: [29063.152523] mmc0: SDARG 0x004748b8
> Mar 19 10:02:29 raspi2 kernel: [29063.152528] mmc0: SDTOUT 0x017d7840
> Mar 19 10:02:29 raspi2 kernel: [29063.152532] mmc0: SDCDIV 0x00000003
> Mar 19 10:02:29 raspi2 kernel: [29063.152537] mmc0: SDRSP0 0x00000900
> Mar 19 10:02:29 raspi2 kernel: [29063.152541] mmc0: SDRSP1 0x00001918
> Mar 19 10:02:29 raspi2 kernel: [29063.152546] mmc0: SDRSP2 0xffffffff
> Mar 19 10:02:29 raspi2 kernel: [29063.152551] mmc0: SDRSP3 0x0002400f
> Mar 19 10:02:29 raspi2 kernel: [29063.152555] mmc0: SDHSTS 0x00000001
> Mar 19 10:02:29 raspi2 kernel: [29063.152560] mmc0: SDVDD 0x00000001
> Mar 19 10:02:29 raspi2 kernel: [29063.152564] mmc0: SDEDM 0x0001080a
> Mar 19 10:02:29 raspi2 kernel: [29063.152569] mmc0: SDHCFG 0x0000040e
> Mar 19 10:02:29 raspi2 kernel: [29063.152573] mmc0: SDHBCT 0x00000200
> Mar 19 10:02:29 raspi2 kernel: [29063.152578] mmc0: SDHBLC 0x00000030
> Mar 19 10:02:29 raspi2 kernel: [29063.152582] mmc0: ===========================================
> Mar 19 10:02:29 raspi2 kernel: [29063.152887] mmcblk0: error -110 transferring data, sector 4671672, nr 48, cmd response 0x900, card status 0xc00
After reboot and fsck I tried again and succeded to reproduce the mmc0 lines in syslog, kern.log and messages.
Has anyone also observed this behavior? - I was using the current version of rpitx from master.
-Andreas
| True | File system gets corrupted - Hello,
i have a Raspberry Pi 3B with stretch, kernal version Linux 4.9.80-v7+, firmware Mar 13 2018 (6e08617e7767b09ef97b3d6cee8b75eba6d7ee0b).
After successfully sending a doorbell command (3 seconds long) for a couple of times, the root file system /dev/mmcblk0p2 was automatically remounted read-only and write protected by the system. Fsck on the read-only FS showed errors. After a reboot with file check and recovery, the system was ok and running.
/var/log/syslog had several suspicious entries with the first entry being:
> mmc0: timeout waiting for hardware interrupt.
I tried to reproduce the error by sending the doorbell sequence again. This time I checked syslog immediately, which showed new error lines:
> Mar 19 10:02:29 raspi2 kernel: [29063.150745] mmc0: timeout waiting for hardware interrupt.
> Mar 19 10:02:29 raspi2 kernel: [29063.150762] [033a6af0] DMA< b945ea40 0
> Mar 19 10:02:29 raspi2 kernel: [29063.150769] [033a6af1] DMA 99 10801
> Mar 19 10:02:29 raspi2 kernel: [29063.150776] [033a6af3] FDA< b945e968 0
> Mar 19 10:02:29 raspi2 kernel: [29063.150783] [033a6af4] TCM< b945ea40 0
> Mar 19 10:02:29 raspi2 kernel: [29063.150789] [033a6af5] CMD< c 0
> Mar 19 10:02:29 raspi2 kernel: [29063.150796] [033a6af7] TCM> b945ea40 0
[...]
> Mar 19 10:02:29 raspi2 kernel: [29063.152471] [0480b8e1] TIM< 0 0
> Mar 19 10:02:29 raspi2 kernel: [29063.152484] mmc0: sbc op 23 arg 0x30 flags 0x15 - resp 00000000 00000000 00000000 00000000, err 0
> Mar 19 10:02:29 raspi2 kernel: [29063.152494] mmc0: cmd op 25 arg 0x4748b8 flags 0xb5 - resp 00000900 00000000 00000000 00000000, err 0
> Mar 19 10:02:29 raspi2 kernel: [29063.152500] mmc0: data blocks 30 blksz 200 - err 0
> Mar 19 10:02:29 raspi2 kernel: [29063.152510] mmc0: stop op 12 arg 0x0 flags 0x49d - resp 00000000 00000000 00000000 00000000, err 0
> Mar 19 10:02:29 raspi2 kernel: [29063.152514] mmc0: =========== REGISTER DUMP ===========
> Mar 19 10:02:29 raspi2 kernel: [29063.152519] mmc0: SDCMD 0x00000099
> Mar 19 10:02:29 raspi2 kernel: [29063.152523] mmc0: SDARG 0x004748b8
> Mar 19 10:02:29 raspi2 kernel: [29063.152528] mmc0: SDTOUT 0x017d7840
> Mar 19 10:02:29 raspi2 kernel: [29063.152532] mmc0: SDCDIV 0x00000003
> Mar 19 10:02:29 raspi2 kernel: [29063.152537] mmc0: SDRSP0 0x00000900
> Mar 19 10:02:29 raspi2 kernel: [29063.152541] mmc0: SDRSP1 0x00001918
> Mar 19 10:02:29 raspi2 kernel: [29063.152546] mmc0: SDRSP2 0xffffffff
> Mar 19 10:02:29 raspi2 kernel: [29063.152551] mmc0: SDRSP3 0x0002400f
> Mar 19 10:02:29 raspi2 kernel: [29063.152555] mmc0: SDHSTS 0x00000001
> Mar 19 10:02:29 raspi2 kernel: [29063.152560] mmc0: SDVDD 0x00000001
> Mar 19 10:02:29 raspi2 kernel: [29063.152564] mmc0: SDEDM 0x0001080a
> Mar 19 10:02:29 raspi2 kernel: [29063.152569] mmc0: SDHCFG 0x0000040e
> Mar 19 10:02:29 raspi2 kernel: [29063.152573] mmc0: SDHBCT 0x00000200
> Mar 19 10:02:29 raspi2 kernel: [29063.152578] mmc0: SDHBLC 0x00000030
> Mar 19 10:02:29 raspi2 kernel: [29063.152582] mmc0: ===========================================
> Mar 19 10:02:29 raspi2 kernel: [29063.152887] mmcblk0: error -110 transferring data, sector 4671672, nr 48, cmd response 0x900, card status 0xc00
After reboot and fsck I tried again and succeded to reproduce the mmc0 lines in syslog, kern.log and messages.
Has anyone also observed this behavior? - I was using the current version of rpitx from master.
-Andreas
| main | file system gets corrupted hello i have a raspberry pi with stretch kernal version linux firmware mar after successfully sending a doorbell command seconds long for a couple of times the root file system dev was automatically remounted read only and write protected by the system fsck on the read only fs showed errors after a reboot with file check and recovery the system was ok and running var log syslog had several suspicious entries with the first entry being timeout waiting for hardware interrupt i tried to reproduce the error by sending the doorbell sequence again this time i checked syslog immediately which showed new error lines mar kernel timeout waiting for hardware interrupt mar kernel dma mar kernel dma mar kernel fda mar kernel tcm mar kernel cmd c mar kernel tcm mar kernel tim mar kernel sbc op arg flags resp err mar kernel cmd op arg flags resp err mar kernel data blocks blksz err mar kernel stop op arg flags resp err mar kernel register dump mar kernel sdcmd mar kernel sdarg mar kernel sdtout mar kernel sdcdiv mar kernel mar kernel mar kernel mar kernel mar kernel sdhsts mar kernel sdvdd mar kernel sdedm mar kernel sdhcfg mar kernel sdhbct mar kernel sdhblc mar kernel mar kernel error transferring data sector nr cmd response card status after reboot and fsck i tried again and succeded to reproduce the lines in syslog kern log and messages has anyone also observed this behavior i was using the current version of rpitx from master andreas | 1 |
2,077 | 7,037,195,185 | IssuesEvent | 2017-12-28 13:28:05 | aroberge/reeborg | https://api.github.com/repos/aroberge/reeborg | closed | build tool needs to be revised | easier to maintain | There are at least two python files (reeborg_en.py and reeborg_fr.py) that are replicated in the offline distribution ... and they are likely not identical.
The build tool should be revised so as to create identical copies of the relevant files whenever they
are changed.
The integration test should use the offline version by default which would help ensure that no file update has been missed. | True | build tool needs to be revised - There are at least two python files (reeborg_en.py and reeborg_fr.py) that are replicated in the offline distribution ... and they are likely not identical.
The build tool should be revised so as to create identical copies of the relevant files whenever they
are changed.
The integration test should use the offline version by default which would help ensure that no file update has been missed. | main | build tool needs to be revised there are at least two python files reeborg en py and reeborg fr py that are replicated in the offline distribution and they are likely not identical the build tool should be revised so as to create identical copies of the relevant files whenever they are changed the integration test should use the offline version by default which would help ensure that no file update has been missed | 1 |
21,487 | 29,577,960,503 | IssuesEvent | 2023-06-07 01:38:14 | bazelbuild/bazel | https://api.github.com/repos/bazelbuild/bazel | closed | Bazel can't find the following tools: cl.exe, link.exe, lib.exe, ml64.exe for x64 target architecture | P2 type: support / not a bug (process) area-Windows team-OSS stale | Hi,
I am trying to build the demo application with visual 2017 VS Code installed but facing this issue. Have seen earlier similar issue and tried workaround but did not work.
C:\Users\akayal\mediapipe_repo\mediapipe>bazel build -c opt --define MEDIAPIPE_DISABLE_GPU=1 --action_env PYTHON_BIN_PATH="C:/Users/akayal/AppData/Local/Continuum/anaconda3/python.exe" mediapipe/examples/desktop/hello_world
Extracting Bazel installation...
Starting local Bazel server and connecting to it...
INFO: Analyzed target //mediapipe/examples/desktop/hello_world:hello_world (55 packages loaded, 1315 targets configured).
INFO: Found 1 target...
ERROR: C:/users/akayal/_bazel_akayal/awz7mfov/external/com_google_protobuf/BUILD:120:11: C++ compilation of rule '@com_google_protobuf//:protobuf_lite' failed (Exit 1)
The target you are compiling requires Visual C++ build tools.
Bazel couldn't find a valid Visual C++ build tools installation on your machine.
Visual C++ build tools seems to be installed at C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\VC
But Bazel can't find the following tools:
cl.exe, link.exe, lib.exe, ml64.exe
for x64 target architecture
Please check your installation following https://docs.bazel.build/versions/master/windows.html#using
Target //mediapipe/examples/desktop/hello_world:hello_world failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 18.112s, Critical Path: 4.12s
INFO: 133 processes: 133 internal.
FAILED: Build did NOT complete successfully
C:\Users\akayal\mediapipe_repo\mediapipe>bazel --version
bazel 3.6.0
C:\Users\akayal\mediapipe_repo\mediapipe> | 1.0 | Bazel can't find the following tools: cl.exe, link.exe, lib.exe, ml64.exe for x64 target architecture - Hi,
I am trying to build the demo application with visual 2017 VS Code installed but facing this issue. Have seen earlier similar issue and tried workaround but did not work.
C:\Users\akayal\mediapipe_repo\mediapipe>bazel build -c opt --define MEDIAPIPE_DISABLE_GPU=1 --action_env PYTHON_BIN_PATH="C:/Users/akayal/AppData/Local/Continuum/anaconda3/python.exe" mediapipe/examples/desktop/hello_world
Extracting Bazel installation...
Starting local Bazel server and connecting to it...
INFO: Analyzed target //mediapipe/examples/desktop/hello_world:hello_world (55 packages loaded, 1315 targets configured).
INFO: Found 1 target...
ERROR: C:/users/akayal/_bazel_akayal/awz7mfov/external/com_google_protobuf/BUILD:120:11: C++ compilation of rule '@com_google_protobuf//:protobuf_lite' failed (Exit 1)
The target you are compiling requires Visual C++ build tools.
Bazel couldn't find a valid Visual C++ build tools installation on your machine.
Visual C++ build tools seems to be installed at C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\VC
But Bazel can't find the following tools:
cl.exe, link.exe, lib.exe, ml64.exe
for x64 target architecture
Please check your installation following https://docs.bazel.build/versions/master/windows.html#using
Target //mediapipe/examples/desktop/hello_world:hello_world failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 18.112s, Critical Path: 4.12s
INFO: 133 processes: 133 internal.
FAILED: Build did NOT complete successfully
C:\Users\akayal\mediapipe_repo\mediapipe>bazel --version
bazel 3.6.0
C:\Users\akayal\mediapipe_repo\mediapipe> | non_main | bazel can t find the following tools cl exe link exe lib exe exe for target architecture hi i am trying to build the demo application with visual vs code installed but facing this issue have seen earlier similar issue and tried workaround but did not work c users akayal mediapipe repo mediapipe bazel build c opt define mediapipe disable gpu action env python bin path c users akayal appdata local continuum python exe mediapipe examples desktop hello world extracting bazel installation starting local bazel server and connecting to it info analyzed target mediapipe examples desktop hello world hello world packages loaded targets configured info found target error c users akayal bazel akayal external com google protobuf build c compilation of rule com google protobuf protobuf lite failed exit the target you are compiling requires visual c build tools bazel couldn t find a valid visual c build tools installation on your machine visual c build tools seems to be installed at c program files microsoft visual studio buildtools vc but bazel can t find the following tools cl exe link exe lib exe exe for target architecture please check your installation following target mediapipe examples desktop hello world hello world failed to build use verbose failures to see the command lines of failed build steps info elapsed time critical path info processes internal failed build did not complete successfully c users akayal mediapipe repo mediapipe bazel version bazel c users akayal mediapipe repo mediapipe | 0 |
273,894 | 23,792,943,787 | IssuesEvent | 2022-09-02 16:13:42 | cosmos/interchain-security | https://api.github.com/repos/cosmos/interchain-security | closed | Test Validator Downtime in Integration Tests | testing | We need to integration test the voting power of validators when a downtime has occurred. This should be tested for both a downtime observed on a consumer chain, and a downtime observed on a provider chain. | 1.0 | Test Validator Downtime in Integration Tests - We need to integration test the voting power of validators when a downtime has occurred. This should be tested for both a downtime observed on a consumer chain, and a downtime observed on a provider chain. | non_main | test validator downtime in integration tests we need to integration test the voting power of validators when a downtime has occurred this should be tested for both a downtime observed on a consumer chain and a downtime observed on a provider chain | 0 |
650,779 | 21,416,971,645 | IssuesEvent | 2022-04-22 11:54:14 | sahar-avsh/SWE-599 | https://api.github.com/repos/sahar-avsh/SWE-599 | closed | Q&A - showing question result | Show stopper Hard High priority Q&A UI | _After showing all the question results_, user shall be able to **click** on one and see the **whole question** and **related answers** | 1.0 | Q&A - showing question result - _After showing all the question results_, user shall be able to **click** on one and see the **whole question** and **related answers** | non_main | q a showing question result after showing all the question results user shall be able to click on one and see the whole question and related answers | 0 |
192,800 | 6,876,781,243 | IssuesEvent | 2017-11-20 03:25:31 | GluuFederation/oxTrust | https://api.github.com/repos/GluuFederation/oxTrust | closed | Unable to delete created attributes | bug High Priority | I'm unable to delete newly registered attributes in oxTrust.
 | 1.0 | Unable to delete created attributes - I'm unable to delete newly registered attributes in oxTrust.
 | non_main | unable to delete created attributes i m unable to delete newly registered attributes in oxtrust | 0 |
94,693 | 11,902,879,225 | IssuesEvent | 2020-03-30 14:34:25 | Opentrons/opentrons | https://api.github.com/repos/Opentrons/opentrons | closed | PD | Gen 2 Modules: Edit Module Modal | :spider: SPDDRS protocol designer | - [ ] Model dropdown should start empty. It is a required field
- [ ] When a GEN2 module is being placed or edited, any of the perimeter slots (slots 1, 3, 4, 6, 7, 9, 10) may be selected as a position. Default location in the dropdown is the recommended location.
- For GEN1, it's still restricted to the default location slot
- [ ] You can click Save with no model selected, but it won't save or close the modal. It will make the form non-pristine, so the "This field is required" error will show on the empty Model field. | 1.0 | PD | Gen 2 Modules: Edit Module Modal - - [ ] Model dropdown should start empty. It is a required field
- [ ] When a GEN2 module is being placed or edited, any of the perimeter slots (slots 1, 3, 4, 6, 7, 9, 10) may be selected as a position. Default location in the dropdown is the recommended location.
- For GEN1, it's still restricted to the default location slot
- [ ] You can click Save with no model selected, but it won't save or close the modal. It will make the form non-pristine, so the "This field is required" error will show on the empty Model field. | non_main | pd gen modules edit module modal model dropdown should start empty it is a required field when a module is being placed or edited any of the perimeter slots slots may be selected as a position default location in the dropdown is the recommended location for it s still restricted to the default location slot you can click save with no model selected but it won t save or close the modal it will make the form non pristine so the this field is required error will show on the empty model field | 0 |
183,379 | 14,940,150,734 | IssuesEvent | 2021-01-25 17:52:57 | TokenEngineeringCommunity/BalancerAMM_Model | https://api.github.com/repos/TokenEngineeringCommunity/BalancerAMM_Model | opened | Using Historical Data instruction | documentation | add instructions to Gitbook documentation according to doc attached
[Historical_data_instruction.txt](https://github.com/TokenEngineeringCommunity/BalancerAMM_Model/files/5868520/Historical_data_instruction.txt)
| 1.0 | Using Historical Data instruction - add instructions to Gitbook documentation according to doc attached
[Historical_data_instruction.txt](https://github.com/TokenEngineeringCommunity/BalancerAMM_Model/files/5868520/Historical_data_instruction.txt)
| non_main | using historical data instruction add instructions to gitbook documentation according to doc attached | 0 |
5,637 | 28,364,186,128 | IssuesEvent | 2023-04-12 12:54:17 | google/wasefire | https://api.github.com/repos/google/wasefire | closed | Add scripts/nordic-test.sh to test what the CI can't | good first issue needs:implementation for:maintainability runner:nordic | In particular making sure `cargo xtask applet <lang> <name> runner nordic` uses probe-run correctly and produces the expected outcome. Some outcome may be checked automatically (like hello applet) but others need manual instructions (like store applet). | True | Add scripts/nordic-test.sh to test what the CI can't - In particular making sure `cargo xtask applet <lang> <name> runner nordic` uses probe-run correctly and produces the expected outcome. Some outcome may be checked automatically (like hello applet) but others need manual instructions (like store applet). | main | add scripts nordic test sh to test what the ci can t in particular making sure cargo xtask applet runner nordic uses probe run correctly and produces the expected outcome some outcome may be checked automatically like hello applet but others need manual instructions like store applet | 1 |
18,750 | 3,086,820,721 | IssuesEvent | 2015-08-25 07:27:49 | CocoaPods/CocoaPods | https://api.github.com/repos/CocoaPods/CocoaPods | closed | Error after deleting a header in a development Pod | s2:confirmed t2:defect | ### Command
```
/Users/eleos/.rbenv/versions/2.2.2/bin/pod install
```
### Report
* What did you do?
If you run `pod install`, delete a header file from a development pod, and run `pod install` again, CocoaPods crashes.
* What did you expect to happen?
`pod install` should run normally, deleting the header from the `Pods/` directory as needed.
* What happened instead?
CocoaPods crashes trying to `chmod` the header's broken link.
### Stack
```
CocoaPods : 0.38.2
Ruby : ruby 2.2.2p95 (2015-04-13 revision 50295) [x86_64-darwin14]
RubyGems : 2.4.5
Host : Mac OS X 10.10.3 (14D136)
Xcode : 6.3.2 (6D2105)
Git : git version 2.2.1
Ruby lib dir : /Users/eleos/.rbenv/versions/2.2.2/lib
Repositories : master - https://github.com/CocoaPods/Specs.git @ e651ffcab678d6ad932a6b566681a8b13eff3ab9
```
### Plugins
```
cocoapods-plugins : 0.4.2
cocoapods-stats : 0.5.3
cocoapods-trunk : 0.6.1
cocoapods-try : 0.4.5
```
### Podfile
```ruby
source 'https://github.com/CocoaPods/Specs.git'
use_frameworks!
target 'chmod_Example', :exclusive => true do
pod "chmod", :path => "../"
end
target 'chmod_Tests', :exclusive => true do
pod "chmod", :path => "../"
end
```
### Error
```
Errno::ENOENT - No such file or directory @ rb_file_s_stat - /Users/eleos/cocoapods-duplicate/chmod/Example/Pods/Headers/Private/chmod/Example.h
/Users/eleos/.rbenv/versions/2.2.2/lib/ruby/2.2.0/fileutils.rb:904:in `stat'
/Users/eleos/.rbenv/versions/2.2.2/lib/ruby/2.2.0/fileutils.rb:904:in `symbolic_modes_to_i'
/Users/eleos/.rbenv/versions/2.2.2/lib/ruby/2.2.0/fileutils.rb:951:in `fu_mode'
/Users/eleos/.rbenv/versions/2.2.2/lib/ruby/2.2.0/fileutils.rb:1025:in `block (2 levels) in chmod_R'
/Users/eleos/.rbenv/versions/2.2.2/lib/ruby/2.2.0/fileutils.rb:1477:in `preorder_traverse'
/Users/eleos/.rbenv/versions/2.2.2/lib/ruby/2.2.0/fileutils.rb:1023:in `block in chmod_R'
/Users/eleos/.rbenv/versions/2.2.2/lib/ruby/2.2.0/fileutils.rb:1022:in `each'
/Users/eleos/.rbenv/versions/2.2.2/lib/ruby/2.2.0/fileutils.rb:1022:in `chmod_R'
/Users/eleos/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/cocoapods-0.38.2/lib/cocoapods/installer.rb:117:in `block in prepare'
/Users/eleos/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/cocoapods-0.38.2/lib/cocoapods/user_interface.rb:140:in `message'
/Users/eleos/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/cocoapods-0.38.2/lib/cocoapods/installer.rb:116:in `prepare'
/Users/eleos/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/cocoapods-0.38.2/lib/cocoapods/installer.rb:103:in `install!'
/Users/eleos/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/cocoapods-0.38.2/lib/cocoapods/command/project.rb:71:in `run_install_with_update'
/Users/eleos/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/cocoapods-0.38.2/lib/cocoapods/command/project.rb:101:in `run'
/Users/eleos/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/claide-0.9.1/lib/claide/command.rb:312:in `run'
/Users/eleos/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/cocoapods-0.38.2/lib/cocoapods/command.rb:48:in `run'
/Users/eleos/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/cocoapods-0.38.2/bin/pod:44:in `<top (required)>'
/Users/eleos/.rbenv/versions/2.2.2/bin/pod:23:in `load'
/Users/eleos/.rbenv/versions/2.2.2/bin/pod:23:in `<main>'
```
You can reproduce this by doing:
pod lib create chmod
cd chmod
touch Pod/Classes/Example.h
cd Example
pod install
rm ../Pod/Classes/Example.h
pod install | 1.0 | Error after deleting a header in a development Pod - ### Command
```
/Users/eleos/.rbenv/versions/2.2.2/bin/pod install
```
### Report
* What did you do?
If you run `pod install`, delete a header file from a development pod, and run `pod install` again, CocoaPods crashes.
* What did you expect to happen?
`pod install` should run normally, deleting the header from the `Pods/` directory as needed.
* What happened instead?
CocoaPods crashes trying to `chmod` the header's broken link.
### Stack
```
CocoaPods : 0.38.2
Ruby : ruby 2.2.2p95 (2015-04-13 revision 50295) [x86_64-darwin14]
RubyGems : 2.4.5
Host : Mac OS X 10.10.3 (14D136)
Xcode : 6.3.2 (6D2105)
Git : git version 2.2.1
Ruby lib dir : /Users/eleos/.rbenv/versions/2.2.2/lib
Repositories : master - https://github.com/CocoaPods/Specs.git @ e651ffcab678d6ad932a6b566681a8b13eff3ab9
```
### Plugins
```
cocoapods-plugins : 0.4.2
cocoapods-stats : 0.5.3
cocoapods-trunk : 0.6.1
cocoapods-try : 0.4.5
```
### Podfile
```ruby
source 'https://github.com/CocoaPods/Specs.git'
use_frameworks!
target 'chmod_Example', :exclusive => true do
pod "chmod", :path => "../"
end
target 'chmod_Tests', :exclusive => true do
pod "chmod", :path => "../"
end
```
### Error
```
Errno::ENOENT - No such file or directory @ rb_file_s_stat - /Users/eleos/cocoapods-duplicate/chmod/Example/Pods/Headers/Private/chmod/Example.h
/Users/eleos/.rbenv/versions/2.2.2/lib/ruby/2.2.0/fileutils.rb:904:in `stat'
/Users/eleos/.rbenv/versions/2.2.2/lib/ruby/2.2.0/fileutils.rb:904:in `symbolic_modes_to_i'
/Users/eleos/.rbenv/versions/2.2.2/lib/ruby/2.2.0/fileutils.rb:951:in `fu_mode'
/Users/eleos/.rbenv/versions/2.2.2/lib/ruby/2.2.0/fileutils.rb:1025:in `block (2 levels) in chmod_R'
/Users/eleos/.rbenv/versions/2.2.2/lib/ruby/2.2.0/fileutils.rb:1477:in `preorder_traverse'
/Users/eleos/.rbenv/versions/2.2.2/lib/ruby/2.2.0/fileutils.rb:1023:in `block in chmod_R'
/Users/eleos/.rbenv/versions/2.2.2/lib/ruby/2.2.0/fileutils.rb:1022:in `each'
/Users/eleos/.rbenv/versions/2.2.2/lib/ruby/2.2.0/fileutils.rb:1022:in `chmod_R'
/Users/eleos/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/cocoapods-0.38.2/lib/cocoapods/installer.rb:117:in `block in prepare'
/Users/eleos/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/cocoapods-0.38.2/lib/cocoapods/user_interface.rb:140:in `message'
/Users/eleos/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/cocoapods-0.38.2/lib/cocoapods/installer.rb:116:in `prepare'
/Users/eleos/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/cocoapods-0.38.2/lib/cocoapods/installer.rb:103:in `install!'
/Users/eleos/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/cocoapods-0.38.2/lib/cocoapods/command/project.rb:71:in `run_install_with_update'
/Users/eleos/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/cocoapods-0.38.2/lib/cocoapods/command/project.rb:101:in `run'
/Users/eleos/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/claide-0.9.1/lib/claide/command.rb:312:in `run'
/Users/eleos/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/cocoapods-0.38.2/lib/cocoapods/command.rb:48:in `run'
/Users/eleos/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/cocoapods-0.38.2/bin/pod:44:in `<top (required)>'
/Users/eleos/.rbenv/versions/2.2.2/bin/pod:23:in `load'
/Users/eleos/.rbenv/versions/2.2.2/bin/pod:23:in `<main>'
```
You can reproduce this by doing:
pod lib create chmod
cd chmod
touch Pod/Classes/Example.h
cd Example
pod install
rm ../Pod/Classes/Example.h
pod install | non_main | error after deleting a header in a development pod command users eleos rbenv versions bin pod install report what did you do if you run pod install delete a header file from a development pod and run pod install again cocoapods crashes what did you expect to happen pod install should run normally deleting the header from the pods directory as needed what happened instead cocoapods crashes trying to chmod the header s broken link stack cocoapods ruby ruby revision rubygems host mac os x xcode git git version ruby lib dir users eleos rbenv versions lib repositories master plugins cocoapods plugins cocoapods stats cocoapods trunk cocoapods try podfile ruby source use frameworks target chmod example exclusive true do pod chmod path end target chmod tests exclusive true do pod chmod path end error errno enoent no such file or directory rb file s stat users eleos cocoapods duplicate chmod example pods headers private chmod example h users eleos rbenv versions lib ruby fileutils rb in stat users eleos rbenv versions lib ruby fileutils rb in symbolic modes to i users eleos rbenv versions lib ruby fileutils rb in fu mode users eleos rbenv versions lib ruby fileutils rb in block levels in chmod r users eleos rbenv versions lib ruby fileutils rb in preorder traverse users eleos rbenv versions lib ruby fileutils rb in block in chmod r users eleos rbenv versions lib ruby fileutils rb in each users eleos rbenv versions lib ruby fileutils rb in chmod r users eleos rbenv versions lib ruby gems gems cocoapods lib cocoapods installer rb in block in prepare users eleos rbenv versions lib ruby gems gems cocoapods lib cocoapods user interface rb in message users eleos rbenv versions lib ruby gems gems cocoapods lib cocoapods installer rb in prepare users eleos rbenv versions lib ruby gems gems cocoapods lib cocoapods installer rb in install users eleos rbenv versions lib ruby gems gems cocoapods lib cocoapods command project rb in run install with update users 
eleos rbenv versions lib ruby gems gems cocoapods lib cocoapods command project rb in run users eleos rbenv versions lib ruby gems gems claide lib claide command rb in run users eleos rbenv versions lib ruby gems gems cocoapods lib cocoapods command rb in run users eleos rbenv versions lib ruby gems gems cocoapods bin pod in users eleos rbenv versions bin pod in load users eleos rbenv versions bin pod in you can reproduce this by doing pod lib create chmod cd chmod touch pod classes example h cd example pod install rm pod classes example h pod install | 0 |
544 | 3,963,322,632 | IssuesEvent | 2016-05-02 20:01:38 | duckduckgo/zeroclickinfo-goodies | https://api.github.com/repos/duckduckgo/zeroclickinfo-goodies | closed | Git Cheat Sheet: | Maintainer Input Requested | Under "Redo Commits" it shows as:
> Undoes all commits after \[commit\], preserving changes locally
Note the backslashes before the square brackets. Can't understand why they are there though...
------
IA Page: http://duck.co/ia/view/git_cheat_sheet
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @zekiel | True | Git Cheat Sheet: - Under "Redo Commits" it shows as:
> Undoes all commits after \[commit\], preserving changes locally
Note the backslashes before the square brackets. Can't understand why they are there though...
------
IA Page: http://duck.co/ia/view/git_cheat_sheet
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @zekiel | main | git cheat sheet under redo commits it shows as undoes all commits after preserving changes locally note the backslashes before the square brackets can t understand why they are there though ia page zekiel | 1 |
1,125 | 4,995,839,253 | IssuesEvent | 2016-12-09 11:36:43 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | nxos_config, rollbacked on abnormal checkpoint created by Ansible | affects_2.2 bug_report networking waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
- nxos_config
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.2.0.0
config file = /home/users/kpersonnic/projects/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
centos7
NXOS Software
BIOS: version 3.1.0
kickstart: version 6.2(10)
system: version 6.2(10)
BIOS compile time: 02/27/2013
kickstart image file is: bootflash:///n7700-s2-kickstart-npe.6.2.10.bin
kickstart compile time: 10/9/2014 12:00:00 [10/17/2014 03:45:04]
system image file is: bootflash:///n7700-s2-dk9-npe.6.2.10.bin
system compile time: 10/9/2014 12:00:00 [10/17/2014 05:39:17]
Hardware
cisco Nexus7700 C7706 (6 Slot) Chassis ("Supervisor Module-2")
Intel(R) Xeon(R) CPU with 32745060 kB of memory.
-->
##### SUMMARY
<!--- Explain the problem briefly -->
Hi,
While writing a new role for nexus switches dedicated to radius configuration I executed a "non working" version of my role. As expected Ansible rollbacked the change when it encountered an error.
But the checkpoint created by Ansible contained unexpected command lines which unconfigured two port-profiles used by several network ports of the switch.
How Ansible can create a checkpoint with "random" content ?
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
I don't have the command history anymore in my term and I don't have spare N7K to reproduce the bug.
But I think it might happen with any failed command which results in a rollback from the checkpoint created at the beginning of the run.
<!--- Paste example playbooks or commands between quotes below -->
```
ansible-playbook playbooks/config_common.yml -k
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
Expected content of the checkpoint :
```
[NEXUS7K]# show rollback log exec
Operation : Rollback to Checkpoint
Checkpoint name : ansible_1479404111
Rollback done By : kpersonnic
Rollback mode : atomic
Verbose : disabled
Start Time : Thu, 17:35:12 17 Nov 2016
End Time : Thu, 17:35:32 17 Nov 2016
Rollback Status : Success
...
Executing Patch:
----------------
`conf t`
`no aaa group server radius RAD_10.10.50.9`
```
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
Actual content of the checkpoint executed by ansible
```
[NEXUS7K]# show rollback log exec
Operation : Rollback to Checkpoint
Checkpoint name : ansible_1479404111
Rollback done By : kpersonnic
Rollback mode : atomic
Verbose : disabled
Start Time : Thu, 17:35:12 17 Nov 2016
End Time : Thu, 17:35:32 17 Nov 2016
Rollback Status : Success
...
Executing Patch:
----------------
`conf t`
`port-profile type ethernet FRONT_ESX_ETH1-ETH3`
`no switchport mode`
`exit`
`port-profile type ethernet FRONT_ESX_ETH0-ETH2`
`no switchport mode`
`exit`
`no aaa group server radius RAD_10.10.50.9`
```
| True | nxos_config, rollbacked on abnormal checkpoint created by Ansible - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
- nxos_config
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.2.0.0
config file = /home/users/kpersonnic/projects/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
centos7
NXOS Software
BIOS: version 3.1.0
kickstart: version 6.2(10)
system: version 6.2(10)
BIOS compile time: 02/27/2013
kickstart image file is: bootflash:///n7700-s2-kickstart-npe.6.2.10.bin
kickstart compile time: 10/9/2014 12:00:00 [10/17/2014 03:45:04]
system image file is: bootflash:///n7700-s2-dk9-npe.6.2.10.bin
system compile time: 10/9/2014 12:00:00 [10/17/2014 05:39:17]
Hardware
cisco Nexus7700 C7706 (6 Slot) Chassis ("Supervisor Module-2")
Intel(R) Xeon(R) CPU with 32745060 kB of memory.
-->
##### SUMMARY
<!--- Explain the problem briefly -->
Hi,
While writing a new role for nexus switches dedicated to radius configuration I executed a "non working" version of my role. As expected Ansible rollbacked the change when it encountered an error.
But the checkpoint created by Ansible contained unexpected command lines which unconfigured two port-profiles used by several network ports of the switch.
How Ansible can create a checkpoint with "random" content ?
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
I don't have the command history anymore in my term and I don't have spare N7K to reproduce the bug.
But I think it might happen with any failed command which results in a rollback from the checkpoint created at the beginning of the run.
<!--- Paste example playbooks or commands between quotes below -->
```
ansible-playbook playbooks/config_common.yml -k
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
Expected content of the checkpoint :
```
[NEXUS7K]# show rollback log exec
Operation : Rollback to Checkpoint
Checkpoint name : ansible_1479404111
Rollback done By : kpersonnic
Rollback mode : atomic
Verbose : disabled
Start Time : Thu, 17:35:12 17 Nov 2016
End Time : Thu, 17:35:32 17 Nov 2016
Rollback Status : Success
...
Executing Patch:
----------------
`conf t`
`no aaa group server radius RAD_10.10.50.9`
```
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
Actual content of the checkpoint executed by ansible
```
[NEXUS7K]# show rollback log exec
Operation : Rollback to Checkpoint
Checkpoint name : ansible_1479404111
Rollback done By : kpersonnic
Rollback mode : atomic
Verbose : disabled
Start Time : Thu, 17:35:12 17 Nov 2016
End Time : Thu, 17:35:32 17 Nov 2016
Rollback Status : Success
...
Executing Patch:
----------------
`conf t`
`port-profile type ethernet FRONT_ESX_ETH1-ETH3`
`no switchport mode`
`exit`
`port-profile type ethernet FRONT_ESX_ETH0-ETH2`
`no switchport mode`
`exit`
`no aaa group server radius RAD_10.10.50.9`
```
| main | nxos config rollbacked on abnormal checkpoint created by ansible issue type bug report component name nxos config ansible version ansible config file home users kpersonnic projects ansible ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment nxos software bios version kickstart version system version bios compile time kickstart image file is bootflash kickstart npe bin kickstart compile time system image file is bootflash npe bin system compile time hardware cisco slot chassis supervisor module intel r xeon r cpu with kb of memory summary hi while writing a new role for nexus switches dedicated to radius configuration i executed a non working version of my role as expected ansible rollbacked the change when it encountered an error but the ckeckpoint created by ansible contained unexpected commands lines which unconfigured two ports profils used by several network port of the switch how ansible can create a checkpoint with random content steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used i don t have the command history anymore in my term and i don t have spare to reproduce the bug but i think it might append with any failed commands which result in a rollback from the checkpoint created at the beginning of the run ansible playbook playbooks config common yml k expected results expected content of the checkpoint show rollback log exec operation rollback to checkpoint checkpoint name ansible rollback done by kpersonnic rollback mode atomic verbose disabled start time thu nov end time thu nov rollback status success executing patch conf t no aaa group server radius rad actual results actual content of the checkpoint executed by ansible show rollback log exec operation rollback to checkpoint checkpoint name ansible rollback done by kpersonnic 
rollback mode atomic verbose disabled start time thu nov end time thu nov rollback status success executing patch conf t port profile type ethernet front esx no switchport mode exit port profile type ethernet front esx no switchport mode exit no aaa group server radius rad | 1 |
17,222 | 9,660,381,091 | IssuesEvent | 2019-05-20 15:22:02 | codesandbox/codesandbox-client | https://api.github.com/repos/codesandbox/codesandbox-client | closed | RavenJS kills performance. | performance | # 🐛 bug report
## Description of the problem
## How has this issue affected you? What are you trying to accomplish?
I'm working on a small game with PIXI and until quite recently, everything was fine. Then I started to get really poor performance. With Chrome, I ran a performance profiling. Here are the results:

Link to the picture: [Link](https://image.noelshack.com/fichiers/2019/13/5/1553897425-capture.png)
This was running in the "Open in New Window" tab, so it's not coming from the IDE. Every bump in the app performance is caused by what seems to be a call to a ravenjs library on a CDN. I downloaded my project and can confirm it isn't part of my package.json files and isn't installed nor included in my project when I build it. Why is it here? It **very** regularly make calls, and completely kills the app performance. What is going on here? I don't understand. Also, why are there 1200 listeners when my app only uses like 13? Why is the app consuming 200Mo while my local build consumes around 8Mo? There's something really strange going on, the online build doesn't act like the downloaded + built zip at all.
### Link to sandbox: [link]() (optional)
### Your Environment
| Software | Name/Version|
| ---------------- | ---------- |
| Сodesandbox | latest
| Browser | Chrome
| Operating System | Windows 10 | True | RavenJS kills performance. - # 🐛 bug report
## Description of the problem
## How has this issue affected you? What are you trying to accomplish?
I'm working on a small game with PIXI and until quite recently, everything was fine. Then I started to get really poor performance. With Chrome, I ran a performance profiling. Here are the results:

Link to the picture: [Link](https://image.noelshack.com/fichiers/2019/13/5/1553897425-capture.png)
This was running in the "Open in New Window" tab, so it's not coming from the IDE. Every bump in the app performance is caused by what seems to be a call to a ravenjs library on a CDN. I downloaded my project and can confirm it isn't part of my package.json files and isn't installed nor included in my project when I build it. Why is it here? It **very** regularly make calls, and completely kills the app performance. What is going on here? I don't understand. Also, why are there 1200 listeners when my app only uses like 13? Why is the app consuming 200Mo while my local build consumes around 8Mo? There's something really strange going on, the online build doesn't act like the downloaded + built zip at all.
### Link to sandbox: [link]() (optional)
### Your Environment
| Software | Name/Version|
| ---------------- | ---------- |
| Сodesandbox | latest
| Browser | Chrome
| Operating System | Windows 10 | non_main | ravenjs kills performance 🐛 bug report description of the problem how has this issue affected you what are you trying to accomplish i m working on a small game with pixi and until quite recently everything was fine then i started to get really poor performance with chrome i ran a performance profiling here are the results link to the picture this was running in the open in new window tab so it s not coming from the ide every bump in the app performance is caused by what seems to be a call to a ravenjs library on a cdn i downloaded my project and can confirm it isn t part of my package json files and isn t installed nor included in my project when i build it why is it here it very regularly make calls and completely kills the app performance what is going on here i don t understand also why are there listeners when my app only uses like why is the app consuming while my local build consumes around there s something really strange going on the online build doesn t act like the downloaded built zip at all link to sandbox optional your environment software name version сodesandbox latest browser chrome operating system windows | 0 |
799,422 | 28,306,574,241 | IssuesEvent | 2023-04-10 11:38:42 | bounswe/bounswe2023group8 | https://api.github.com/repos/bounswe/bounswe2023group8 | closed | Milestone Report - Executive Summary | status: to-do priority: high effort: low | TODOS:
- Introduction (Summary of our project and group.)
- Status ( Brief story about what is done so far.)
| 1.0 | Milestone Report - Executive Summary - TODOS:
- Introduction (Summary of our project and group.)
- Status ( Brief story about what is done so far.)
| non_main | milestone report executive summary todos introduction summary of our project and group status brief story about what is done so far | 0 |
1,608 | 6,572,399,310 | IssuesEvent | 2017-09-11 02:01:15 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | ssh to netscaler device not working | affects_2.1 bug_report networking waiting_on_maintainer | I am trying to ssh to netscaler device with following ad-hoc command:-
"[root@localhost ansible]# ansible 10.102.56.141 -a "sh ver" -u nsroot --ask-pass "
but i get the following error:-
"10.102.56.141 | UNREACHABLE! => {
"changed": false,
"msg": "Authentication or permission failure. In some cases, you may have been able to authenticate and did not have permissions on the remote directory. Consider changing the remote temp path in ansible.cfg to a path rooted in \"/tmp\". Failed command was: ( umask 77 && mkdir -p \"`echo $HOME/.ansible/tmp/ansible-tmp-1476372521.28-33913150124873`\" && echo ansible-tmp-1476372521.28-33913150124873=\"`echo $HOME/.ansible/tmp/ansible-tmp-1476372521.28-33913150124873`\" ), exited with result 1: Done\nERROR: No such command\n",
"unreachable": true
}
"
With -vvvv:-
"Loaded callback minimal of type stdout, v2.0
<10.102.56.141> ESTABLISH SSH CONNECTION FOR USER: nsroot
<10.102.56.141> SSH: ansible.cfg set ssh_args: (-o)(ControlMaster=auto)(-o)(ControlPersist=60s)
<10.102.56.141> SSH: ANSIBLE_REMOTE_USER/remote_user/ansible_user/user/-u set: (-o)(User=nsroot)
<10.102.56.141> SSH: ANSIBLE_TIMEOUT/timeout set: (-o)(ConnectTimeout=10)
<10.102.56.141> SSH: PlayContext set ssh_common_args: ()
<10.102.56.141> SSH: PlayContext set ssh_extra_args: ()
<10.102.56.141> SSH: found only ControlPersist; added ControlPath: (-o)(ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r)
<10.102.56.141> SSH: EXEC sshpass -d11 ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o User=nsroot -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r 10.102.56.141 '/bin/sh -c '"'"'( umask 77 && mkdir -p "`echo $HOME/.ansible/tmp/ansible-tmp-1476372618.74-97489407251661`" && echo ansible-tmp-1476372618.74-97489407251661="`echo $HOME/.ansible/tmp/ansible-tmp-1476372618.74-97489407251661`" ) && sleep 0'"'"''
10.102.56.141 | UNREACHABLE! => {
"changed": false,
"msg": "Authentication or permission failure. In some cases, you may have been able to authenticate and did not have permissions on the remote directory. Consider changing the remote temp path in ansible.cfg to a path rooted in \"/tmp\". Failed command was: ( umask 77 && mkdir -p \"`echo $HOME/.ansible/tmp/ansible-tmp-1476372618.74-97489407251661`\" && echo ansible-tmp-1476372618.74-97489407251661=\"`echo $HOME/.ansible/tmp/ansible-tmp-1476372618.74-97489407251661`\" ), exited with result 1: Done\nERROR: No such command\n",
"unreachable": true
}
"
I also tried doing that with netscaler module but got the below error:-
"root@ubuntu-3:~# ansible host -m netscaler -a "nsc_host=10.102.56.141 user=nsroot password=nsroot action=enable"
ERROR! Specified hosts and/or --limit does not match any hosts
"
Ansible version being used is :- 2.1.2.1
ansible host is fedora-20.
<!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
2.1.2.1
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
Ansible host:- fedora
Remote host:- netscaler
##### SUMMARY
<!--- Explain the problem briefly -->
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes below -->
```
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with high verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
```
| True | ssh to netscaler device not working - I am trying to ssh to netscaler device with following ad-hoc command:-
"[root@localhost ansible]# ansible 10.102.56.141 -a "sh ver" -u nsroot --ask-pass "
but i get the following error:-
"10.102.56.141 | UNREACHABLE! => {
"changed": false,
"msg": "Authentication or permission failure. In some cases, you may have been able to authenticate and did not have permissions on the remote directory. Consider changing the remote temp path in ansible.cfg to a path rooted in \"/tmp\". Failed command was: ( umask 77 && mkdir -p \"`echo $HOME/.ansible/tmp/ansible-tmp-1476372521.28-33913150124873`\" && echo ansible-tmp-1476372521.28-33913150124873=\"`echo $HOME/.ansible/tmp/ansible-tmp-1476372521.28-33913150124873`\" ), exited with result 1: Done\nERROR: No such command\n",
"unreachable": true
}
"
With -vvvv:-
"Loaded callback minimal of type stdout, v2.0
<10.102.56.141> ESTABLISH SSH CONNECTION FOR USER: nsroot
<10.102.56.141> SSH: ansible.cfg set ssh_args: (-o)(ControlMaster=auto)(-o)(ControlPersist=60s)
<10.102.56.141> SSH: ANSIBLE_REMOTE_USER/remote_user/ansible_user/user/-u set: (-o)(User=nsroot)
<10.102.56.141> SSH: ANSIBLE_TIMEOUT/timeout set: (-o)(ConnectTimeout=10)
<10.102.56.141> SSH: PlayContext set ssh_common_args: ()
<10.102.56.141> SSH: PlayContext set ssh_extra_args: ()
<10.102.56.141> SSH: found only ControlPersist; added ControlPath: (-o)(ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r)
<10.102.56.141> SSH: EXEC sshpass -d11 ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o User=nsroot -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r 10.102.56.141 '/bin/sh -c '"'"'( umask 77 && mkdir -p "`echo $HOME/.ansible/tmp/ansible-tmp-1476372618.74-97489407251661`" && echo ansible-tmp-1476372618.74-97489407251661="`echo $HOME/.ansible/tmp/ansible-tmp-1476372618.74-97489407251661`" ) && sleep 0'"'"''
10.102.56.141 | UNREACHABLE! => {
"changed": false,
"msg": "Authentication or permission failure. In some cases, you may have been able to authenticate and did not have permissions on the remote directory. Consider changing the remote temp path in ansible.cfg to a path rooted in \"/tmp\". Failed command was: ( umask 77 && mkdir -p \"`echo $HOME/.ansible/tmp/ansible-tmp-1476372618.74-97489407251661`\" && echo ansible-tmp-1476372618.74-97489407251661=\"`echo $HOME/.ansible/tmp/ansible-tmp-1476372618.74-97489407251661`\" ), exited with result 1: Done\nERROR: No such command\n",
"unreachable": true
}
"
I also tried doing that with netscaler module but got the below error:-
"root@ubuntu-3:~# ansible host -m netscaler -a "nsc_host=10.102.56.141 user=nsroot password=nsroot action=enable"
ERROR! Specified hosts and/or --limit does not match any hosts
"
Ansible version being used is :- 2.1.2.1
ansible host is fedora-20.
<!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
2.1.2.1
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
Ansible host:- fedora
Remote host:- netscaler
##### SUMMARY
<!--- Explain the problem briefly -->
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes below -->
```
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with high verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
```
| main | ssh to netscaler device not working i am trying to ssh to netscaler device with following ad hoc command ansible a sh ver u nsroot ask pass but i get the following error unreachable changed false msg authentication or permission failure in some cases you may have been able to authenticate and did not have permissions on the remote directory consider changing the remote temp path in ansible cfg to a path rooted in tmp failed command was umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp exited with result done nerror no such command n unreachable true with vvvv loaded callback minimal of type stdout establish ssh connection for user nsroot ssh ansible cfg set ssh args o controlmaster auto o controlpersist ssh ansible remote user remote user ansible user user u set o user nsroot ssh ansible timeout timeout set o connecttimeout ssh playcontext set ssh common args ssh playcontext set ssh extra args ssh found only controlpersist added controlpath o controlpath root ansible cp ansible ssh h p r ssh exec sshpass ssh c vvv o controlmaster auto o controlpersist o user nsroot o connecttimeout o controlpath root ansible cp ansible ssh h p r bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep unreachable changed false msg authentication or permission failure in some cases you may have been able to authenticate and did not have permissions on the remote directory consider changing the remote temp path in ansible cfg to a path rooted in tmp failed command was umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp exited with result done nerror no such command n unreachable true i also tried doing that with netscaler module but got the below error root ubuntu ansible host m netscaler a nsc host user nsroot password nsroot action enable error specified hosts and or limit does not match any hosts ansible version being 
used is ansible host is fedora issue type bug report component name ansible version configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific ansible host fedora remote host netscaler summary steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used expected results actual results | 1 |
72,875 | 31,769,573,477 | IssuesEvent | 2023-09-12 10:53:30 | gauravrs18/issue_onboarding | https://api.github.com/repos/gauravrs18/issue_onboarding | closed | dev-angular-style-account-services-new-connection-component-connect-component
-consumer-details-component
-application-component
-payment-component | CX-account-services | dev-angular-style-account-services-new-connection-component-connect-component
-consumer-details-component
-application-component
-payment-component | 1.0 | dev-angular-style-account-services-new-connection-component-connect-component
-consumer-details-component
-application-component
-payment-component - dev-angular-style-account-services-new-connection-component-connect-component
-consumer-details-component
-application-component
-payment-component | non_main | dev angular style account services new connection component connect component consumer details component application component payment component dev angular style account services new connection component connect component consumer details component application component payment component | 0 |
626,983 | 19,849,087,363 | IssuesEvent | 2022-01-21 10:14:00 | UniVE-SSV/lisa | https://api.github.com/repos/UniVE-SSV/lisa | opened | [FEATURE REQUEST] More customization about call resolution | enhancement priority-p1 | **Description**
`ResolutionStrategy` can only decide how to match parameter lists with expressions. We should add capabilities for:
- performing the assignment of values to parameters (eg for python)
- name matching (only cfg name, unit + cfg names, ...)
- types traversal (how to search implementations from the hierarchy of retrieved types)
- ...?
| 1.0 | [FEATURE REQUEST] More customization about call resolution - **Description**
`ResolutionStrategy` can only decide how to match parameter lists with expressions. We should add capabilities for:
- performing the assignment of values to parameters (eg for python)
- name matching (only cfg name, unit + cfg names, ...)
- types traversal (how to search implementations from the hierarchy of retrieved types)
- ...?
| non_main | more customization about call resolution description resolutionstrategy can only decide how to match parameter lists with expressions we should add capabilities for performing the assignment of values to parameters eg for python name matching only cfg name unit cfg names types traversal how to search implementations from the hierarrchy of retrieved types | 0 |
280,455 | 30,826,274,011 | IssuesEvent | 2023-08-01 20:20:27 | GHCbflam1/Easy-Buggy | https://api.github.com/repos/GHCbflam1/Easy-Buggy | opened | CVE-2018-20677 (Medium) detected in bootstrap-3.3.7.min.js | Mend: dependency security vulnerability | ## CVE-2018-20677 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bootstrap-3.3.7.min.js</b></p></summary>
<p>The most popular front-end framework for developing responsive, mobile first projects on the web.</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.7/js/bootstrap.min.js">https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.7/js/bootstrap.min.js</a></p>
<p>Path to dependency file: /src/main/webapp/dfi/style_bootstrap.html</p>
<p>Path to vulnerable library: /src/main/webapp/dfi/style_bootstrap.html</p>
<p>
Dependency Hierarchy:
- :x: **bootstrap-3.3.7.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/GHCbflam1/Easy-Buggy/commit/b478faf89f68b8e991cf688bde9520087add7126">b478faf89f68b8e991cf688bde9520087add7126</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
In Bootstrap before 3.4.0, XSS is possible in the affix configuration target property.
<p>Publish Date: 2019-01-09
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2018-20677>CVE-2018-20677</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20677">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20677</a></p>
<p>Release Date: 2019-01-09</p>
<p>Fix Resolution: Bootstrap - v3.4.0;NorDroN.AngularTemplate - 0.1.6;Dynamic.NET.Express.ProjectTemplates - 0.8.0;dotnetng.template - 1.0.0.4;ZNxtApp.Core.Module.Theme - 1.0.9-Beta;JMeter - 5.0.0</p>
</p>
</details>
<p></p>
| True | CVE-2018-20677 (Medium) detected in bootstrap-3.3.7.min.js - ## CVE-2018-20677 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bootstrap-3.3.7.min.js</b></p></summary>
<p>The most popular front-end framework for developing responsive, mobile first projects on the web.</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.7/js/bootstrap.min.js">https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.7/js/bootstrap.min.js</a></p>
<p>Path to dependency file: /src/main/webapp/dfi/style_bootstrap.html</p>
<p>Path to vulnerable library: /src/main/webapp/dfi/style_bootstrap.html</p>
<p>
Dependency Hierarchy:
- :x: **bootstrap-3.3.7.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/GHCbflam1/Easy-Buggy/commit/b478faf89f68b8e991cf688bde9520087add7126">b478faf89f68b8e991cf688bde9520087add7126</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
In Bootstrap before 3.4.0, XSS is possible in the affix configuration target property.
<p>Publish Date: 2019-01-09
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2018-20677>CVE-2018-20677</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
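The metric values listed above determine the 6.1 rating mechanically. As a cross-check, here is the CVSS v3.0 base-score arithmetic for this vector (AV:N/AC:L/PR:N/UI:R/S:C/C:L/I:L/A:N); the coefficients are the published weights from the CVSS v3.0 specification:

```python
import math

# Metric weights: AV:Network, AC:Low, PR:None, UI:Required, C:Low, I:Low, A:None
av, ac, pr, ui = 0.85, 0.77, 0.85, 0.62
c, i, a = 0.22, 0.22, 0.0

exploitability = 8.22 * av * ac * pr * ui
isc_base = 1 - (1 - c) * (1 - i) * (1 - a)
# Scope is Changed, so the adjusted impact equation applies
impact = 7.52 * (isc_base - 0.029) - 3.25 * (isc_base - 0.02) ** 15

def roundup(x):
    """CVSS-style 'round up to one decimal place'."""
    return math.ceil(x * 10) / 10

base_score = 0.0 if impact <= 0 else roundup(min(1.08 * (impact + exploitability), 10))
print(base_score)  # → 6.1, matching the score reported above
```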
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20677">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20677</a></p>
<p>Release Date: 2019-01-09</p>
<p>Fix Resolution: Bootstrap - v3.4.0;NorDroN.AngularTemplate - 0.1.6;Dynamic.NET.Express.ProjectTemplates - 0.8.0;dotnetng.template - 1.0.0.4;ZNxtApp.Core.Module.Theme - 1.0.9-Beta;JMeter - 5.0.0</p>
</p>
</details>
<p></p>
| non_main | cve medium detected in bootstrap min js cve medium severity vulnerability vulnerable library bootstrap min js the most popular front end framework for developing responsive mobile first projects on the web library home page a href path to dependency file src main webapp dfi style bootstrap html path to vulnerable library src main webapp dfi style bootstrap html dependency hierarchy x bootstrap min js vulnerable library found in head commit a href found in base branch master vulnerability details in bootstrap before xss is possible in the affix configuration target property publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution bootstrap nordron angulartemplate dynamic net express projecttemplates dotnetng template znxtapp core module theme beta jmeter | 0 |
733,008 | 25,284,535,627 | IssuesEvent | 2022-11-16 18:10:08 | libp2p/rust-libp2p | https://api.github.com/repos/libp2p/rust-libp2p | opened | protocols/kad/: Replace manual procedural state machines with async/await | priority:nicetohave difficulty:easy help wanted getting-started | Reading and writing on inbound/outbound streams in `libp2p-kad` is implemented via hand written procedural / sequential state machines. This logic can be simplified by using Rust's async-await.
Manual state machines in `libp2p-kad`:
https://github.com/libp2p/rust-libp2p/blob/43fdfe27ea7b38fd38d84dae7ea51e9b713039cc/protocols/kad/src/handler.rs#L140-L163
Example of using async-await in `libp2p-dcutr`:
https://github.com/libp2p/rust-libp2p/blob/43fdfe27ea7b38fd38d84dae7ea51e9b713039cc/protocols/relay/src/v2/protocol/inbound_stop.rs#L124-L146 | 1.0 | protocols/kad/: Replace manual procedural state machines with async/await - Reading and writing on inbound/outbound streams in `libp2p-kad` is implemented via hand written procedural / sequential state machines. This logic can be simplified by using Rust's async-await.
Manual state machines in `libp2p-kad`:
https://github.com/libp2p/rust-libp2p/blob/43fdfe27ea7b38fd38d84dae7ea51e9b713039cc/protocols/kad/src/handler.rs#L140-L163
Example of using async-await in `libp2p-dcutr`:
https://github.com/libp2p/rust-libp2p/blob/43fdfe27ea7b38fd38d84dae7ea51e9b713039cc/protocols/relay/src/v2/protocol/inbound_stop.rs#L124-L146 | non_main | protocols kad replace manual procedural state machines with async await reading and writing on inbound outbound streams in kad is implemented via hand written procedural sequential state machines this logic can be simplified by using rust s async await manual state machines in kad example of using async await in dcutr | 0 |
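The simplification being requested is that control flow itself encodes the protocol state. As a loose analogy only — in Python rather than Rust, and not the libp2p code — compare a hand-rolled read state machine with the equivalent coroutine:

```python
import asyncio

# Hand-rolled state machine: each call to step() advances one explicit state.
class ReadLength:
    def __init__(self, chunks):
        self.chunks = iter(chunks)
        self.state = "reading_header"
        self.header = b""

    def step(self):
        if self.state == "reading_header":
            self.header += next(self.chunks)
            if len(self.header) >= 2:
                self.state = "done"
        return self.state

# The same logic as a coroutine: "which state am I in" becomes
# "where execution is suspended", so the explicit state enum disappears.
async def read_length(chunks):
    header = b""
    for chunk in chunks:  # stand-in for awaiting bytes off a real stream
        header += chunk
        if len(header) >= 2:
            return header[:2]

sm = ReadLength([b"\x00", b"\x10"])
while sm.step() != "done":
    pass
print(asyncio.run(read_length([b"\x00", b"\x10"])))  # → b'\x00\x10'
```

That collapse of explicit states into suspension points is what replacing the manual machines in `libp2p-kad` with async/await would buy.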
19,196 | 11,163,711,269 | IssuesEvent | 2019-12-27 00:32:56 | Azure/azure-cli | https://api.github.com/repos/Azure/azure-cli | closed | Error while following the "Host a web application with Azure App service" tutorial | Service Attention Web Apps |
### **This is autogenerated. Please review and update as needed.**
## Describe the bug
**Command Name**
`az webapp up`
**Errors:**
```
'NoneType' object has no attribute 'upper'
Traceback (most recent call last):
python3.6/site-packages/knack/cli.py, ln 206, in invoke
cmd_result = self.invocation.execute(args)
cli/core/commands/__init__.py, ln 603, in execute
raise ex
cli/core/commands/__init__.py, ln 661, in _run_jobs_serially
results.append(self._run_job(expanded_arg, cmd_copy))
cli/core/commands/__init__.py, ln 652, in _run_job
cmd_copy.exception_handler(ex)
...
cli/command_modules/appservice/custom.py, ln 2993, in webapp_up
create_app_service_plan(cmd, rg_name, plan, _is_linux, False, sku, 1 if _is_linux else None, location)
cli/command_modules/appservice/custom.py, ln 1390, in create_app_service_plan
sku = _normalize_sku(sku)
cli/command_modules/appservice/utils.py, ln 20, in _normalize_sku
sku = sku.upper()
AttributeError: 'NoneType' object has no attribute 'upper'
```
## To Reproduce:
Steps to reproduce the behavior. Note that argument values have been redacted, as they may contain sensitive information.
- Following the tutorial at https://docs.microsoft.com/en-gb/learn/modules/host-a-web-app-with-azure-app-service/6-exercise-deploy-your-code-to-app-service?pivots=csharp the error occurs after entering these commands:
 - APPNAME=$(az webapp list --query [0].name --output tsv)
 - APPRG=$(az webapp list --query [0].resourceGroup --output tsv)
 - APPPLAN=$(az appservice plan list --query [0].name --output tsv)
 - APPSKU=$(az appservice plan list --query [0].sku.name --output tsv)
 - APPLOCATION=$(az appservice plan list --query [0].location --output tsv)
- az webapp up --name $APPNAME --resource-group $APPRG --plan $APPPLAN --sku $APPSKU --location "$APPLOCATION"
## Expected Behavior
The Test App deploys as outlined in the tutorial under the step "Exercise - Deploy your code to App Service"
## Actual Behavior
The error shown above occurs after entering the final command
## Environment Summary
```
Linux-4.15.0-1063-azure-x86_64-with-debian-stretch-sid
Python 3.6.5
Shell: bash
azure-cli 2.0.76
```
## Additional Context
Sandbox is activated and an App Service is deployed
<!--Please don't remove this:-->
<!--auto-generated-->
| 1.0 | Error while following the "Host a web application with Azure App service" tutorial -
### **This is autogenerated. Please review and update as needed.**
## Describe the bug
**Command Name**
`az webapp up`
**Errors:**
```
'NoneType' object has no attribute 'upper'
Traceback (most recent call last):
python3.6/site-packages/knack/cli.py, ln 206, in invoke
cmd_result = self.invocation.execute(args)
cli/core/commands/__init__.py, ln 603, in execute
raise ex
cli/core/commands/__init__.py, ln 661, in _run_jobs_serially
results.append(self._run_job(expanded_arg, cmd_copy))
cli/core/commands/__init__.py, ln 652, in _run_job
cmd_copy.exception_handler(ex)
...
cli/command_modules/appservice/custom.py, ln 2993, in webapp_up
create_app_service_plan(cmd, rg_name, plan, _is_linux, False, sku, 1 if _is_linux else None, location)
cli/command_modules/appservice/custom.py, ln 1390, in create_app_service_plan
sku = _normalize_sku(sku)
cli/command_modules/appservice/utils.py, ln 20, in _normalize_sku
sku = sku.upper()
AttributeError: 'NoneType' object has no attribute 'upper'
```
## To Reproduce:
Steps to reproduce the behavior. Note that argument values have been redacted, as they may contain sensitive information.
- Following the tutorial at https://docs.microsoft.com/en-gb/learn/modules/host-a-web-app-with-azure-app-service/6-exercise-deploy-your-code-to-app-service?pivots=csharp the error occurs after entering these commands:
 - APPNAME=$(az webapp list --query [0].name --output tsv)
 - APPRG=$(az webapp list --query [0].resourceGroup --output tsv)
 - APPPLAN=$(az appservice plan list --query [0].name --output tsv)
 - APPSKU=$(az appservice plan list --query [0].sku.name --output tsv)
 - APPLOCATION=$(az appservice plan list --query [0].location --output tsv)
- az webapp up --name $APPNAME --resource-group $APPRG --plan $APPPLAN --sku $APPSKU --location "$APPLOCATION"
## Expected Behavior
The Test App deploys as outlined in the tutorial under the step "Exercise - Deploy your code to App Service"
## Actual Behavior
The error shown above occurs after entering the final command
## Environment Summary
```
Linux-4.15.0-1063-azure-x86_64-with-debian-stretch-sid
Python 3.6.5
Shell: bash
azure-cli 2.0.76
```
## Additional Context
Sandbox is activated and an App Service is deployed
<!--Please don't remove this:-->
<!--auto-generated-->
| non_main | error while following the host a web application with azure app service tutorial this is autogenerated please review and update as needed describe the bug command name az webapp up errors nonetype object has no attribute upper traceback most recent call last site packages knack cli py ln in invoke cmd result self invocation execute args cli core commands init py ln in execute raise ex cli core commands init py ln in run jobs serially results append self run job expanded arg cmd copy cli core commands init py ln in run job cmd copy exception handler ex cli command modules appservice custom py ln in webapp up create app service plan cmd rg name plan is linux false sku if is linux else none location cli command modules appservice custom py ln in create app service plan sku normalize sku sku cli command modules appservice utils py ln in normalize sku sku sku upper attributeerror nonetype object has no attribute upper to reproduce steps to reproduce the behavior note that argument values have been redacted as they may contain sensitive information following the tutorial at the error occurs after entering these commands appname az webapp list query name output tsv apprg az webapp list query resourcegroup output tsv appplan az appservice plan list query name output tsv appsku az appservice plan list query sku name output tsv applocation az appservice plan list query location output tsv az webapp up name appname resource group apprg plan appplan sku appsku location applocation expected behavior the test app deploys as outlined in the tutorial under the step exercise deploy your code to app service actual behavior the error shown above occurs after entering the final command environment summary linux azure with debian stretch sid python shell bash azure cli additional context sandbox is activated and an app service is deployed | 0 |
10,482 | 8,954,848,381 | IssuesEvent | 2019-01-26 01:07:10 | Microsoft/vscode-cpptools | https://api.github.com/repos/Microsoft/vscode-cpptools | opened | Completion annoying overwrites text to the right of the cursor | Feature Request Language Service quick fix | Have some code like:
```
int var1 = -1;
int var2 = std::a|var;
```
Typing : triggers completion, and a completion character is entered at | and the result is std::abs or std::abs() with the "var" part on the right deleted -- annoyingly.
Bob was saying this was "by design" -- does anyone else not like this? I think Visual Studio has some smarter processing that sometimes doesn't replace the contents on the right, but sometimes it does.
| 1.0 | Completion annoying overwrites text to the right of the cursor - Have some code like:
```
int var1 = -1;
int var2 = std::a|var;
```
Typing : triggers completion, and a completion character is entered at | and the result is std::abs or std::abs() with the "var" part on the right deleted -- annoyingly.
Bob was saying this was "by design" -- does anyone else not like this? I think Visual Studio has some smarter processing that sometimes doesn't replace the contents on the right, but sometimes it does.
| non_main | completion annoying overwrites text to the right of the cursor have some code like int int std a var typing triggers completion and a completion character is entered at and the result is std abs or std abs with the var part on the right deleted annoyingly bob was saying this was by design does anyone else not like this i think visual studio has some smarter processing that sometimes doesn t replace the contents on the right but sometimes it does | 0 |
47,252 | 2,974,603,280 | IssuesEvent | 2015-07-15 02:16:59 | Reimashi/jotai | https://api.github.com/repos/Reimashi/jotai | closed | Add support for the Intelligent Fan Controller | auto-migrated Priority-Medium Type-Enhancement | ```
Details can be found here
http://members.iinet.net.au/~geoffg/fancontroller.html or in the attached file.
```
Original issue reported on code.google.com by `moel.mich` on 23 Jul 2010 at 7:29
Attachments:
* [Communications Protocol.pdf](https://storage.googleapis.com/google-code-attachments/open-hardware-monitor/issue-96/comment-0/Communications Protocol.pdf)
| 1.0 | Add support for the Intelligent Fan Controller - ```
Details can be found here
http://members.iinet.net.au/~geoffg/fancontroller.html or in the attached file.
```
Original issue reported on code.google.com by `moel.mich` on 23 Jul 2010 at 7:29
Attachments:
* [Communications Protocol.pdf](https://storage.googleapis.com/google-code-attachments/open-hardware-monitor/issue-96/comment-0/Communications Protocol.pdf)
| non_main | add support for the intelligent fan controller details can be found here or in the attached file original issue reported on code google com by moel mich on jul at attachments protocol pdf | 0 |
57,748 | 6,556,394,734 | IssuesEvent | 2017-09-06 14:01:13 | wallabyjs/public | https://api.github.com/repos/wallabyjs/public | closed | Support Qunit 2 | feature request testing framework | ### Issue description or question
I'm trying to use Qunit2 Framework and it appears that Wallaby is only bringing in v1.8. Is there a way to configure wallaby to bring in a newer qunit version? I've tried setting the testFramework: 'qunit@2.4.0' with no luck. Any suggestions would be appreciated.
### Wallaby.js configuration file
```javascript
module.exports = function(wallaby) {
return {
files: [
...
],
tests: [
...
],
debug: true,
slowTestThreshold: 5000, // 5000ms
testFramework: 'qunit'
}
}
```
### Code editor or IDE name and version
Visual Studio v1.15
### OS name and version
Windows 10
| 1.0 | Support Qunit 2 - ### Issue description or question
I'm trying to use Qunit2 Framework and it appears that Wallaby is only bringing in v1.8. Is there a way to configure wallaby to bring in a newer qunit version? I've tried setting the testFramework: 'qunit@2.4.0' with no luck. Any suggestions would be appreciated.
### Wallaby.js configuration file
```javascript
module.exports = function(wallaby) {
return {
files: [
...
],
tests: [
...
],
debug: true,
slowTestThreshold: 5000, // 5000ms
testFramework: 'qunit'
}
}
```
### Code editor or IDE name and version
Visual Studio v1.15
### OS name and version
Windows 10
| non_main | support qunit issue description or question i m trying to use framework and it appears that wallaby is only bringing in is there a way to configure wallaby to bring in a newer qunit version i ve tried setting the testframework qunit with no luck any suggestions would be appreciated wallaby js configuration file javascript module exports function wallaby return files tests debug true slowtestthreshold testframework qunit code editor or ide name and version visual studio os name and version windows | 0 |
3,725 | 15,434,978,810 | IssuesEvent | 2021-03-07 06:29:37 | diofant/diofant | https://api.github.com/repos/diofant/diofant | closed | Use prod() and isqrt() from stdlib (since 3.8) | core maintainability | See https://docs.python.org/3.8/library/math.html#math.prod
and https://docs.python.org/3.8/library/math.html#math.isqrt.
We have diofant.core.mul.prod and diofant.core.power.isqrt. | True | Use prod() and isqrt() from stdlib (since 3.8) - See https://docs.python.org/3.8/library/math.html#math.prod
and https://docs.python.org/3.8/library/math.html#math.isqrt.
We have diofant.core.mul.prod and diofant.core.power.isqrt. | main | use prod and isqrt from stdlib since see and we have diofant core mul prod and diofant core power isqrt | 1 |
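For reference, the stdlib replacements behave as follows (assuming the in-tree `diofant` helpers match these semantics):

```python
import math

# math.prod: product of an iterable; the empty product is 1
assert math.prod([3, 4, 5]) == 60
assert math.prod([]) == 1

# math.isqrt: exact integer square root, floor(sqrt(n)) for n >= 0,
# with no floating-point error even for huge integers
assert math.isqrt(17) == 4
assert math.isqrt(10 ** 40) == 10 ** 20

print(math.prod(range(1, 6)), math.isqrt(2 ** 64))  # → 120 4294967296
```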
1,268 | 5,375,429,177 | IssuesEvent | 2017-02-23 04:45:08 | wojno/movie_manager | https://api.github.com/repos/wojno/movie_manager | opened | As an authenticated user, I want to remove a movie from my collection | Maintain Collection | As an `authenticated user`, I want to `remove` a movie from my `collection` so that I can keep my `inventory` current when I misplace a `movie` | True | As an authenticated user, I want to remove a movie from my collection - As an `authenticated user`, I want to `remove` a movie from my `collection` so that I can keep my `inventory` current when I misplace a `movie` | main | as an authenticated user i want to remove a movie from my collection as an authenticated user i want to remove a movie from my collection so that i can keep my inventory current when i misplace a movie | 1 |
5,737 | 30,328,763,702 | IssuesEvent | 2023-07-11 03:46:32 | garret1317/yt-dlp-rajiko | https://api.github.com/repos/garret1317/yt-dlp-rajiko | closed | package/load v8 key properly | maintainance | dont think [the way im doing it currently](https://github.com/garret1317/yt-dlp-rajiko/blob/9ae5229195b7033522a8b5e7c086e88b0048fe54/yt_dlp_plugins/extractor/radiko.py#L21C1-L22C45) is the right way
don't know how to actually do it right though | True | package/load v8 key properly - dont think [the way im doing it currently](https://github.com/garret1317/yt-dlp-rajiko/blob/9ae5229195b7033522a8b5e7c086e88b0048fe54/yt_dlp_plugins/extractor/radiko.py#L21C1-L22C45) is the right way
don't know how to actually do it right though | main | package load key properly dont think is the right way don t know how to actually do it right though | 1 |
430,245 | 12,450,146,109 | IssuesEvent | 2020-05-27 08:19:39 | hazelcast/hazelcast-cpp-client | https://api.github.com/repos/hazelcast/hazelcast-cpp-client | closed | Missing tryLock with leaseTime option for IMap | Estimation: S Priority: Low Source: Community Type: Enhancement | Currently, client supports `tryLock` with `timeToWait` or no time restriction at all for IMap.
Per [implementation](https://github.com/hazelcast/hazelcast/blob/8f5f00237c900632ec0783b5ad211e047b7dff64/hazelcast/src/main/java/com/hazelcast/map/IMap.java#L1918), setting `leaseTime` is also possible and seems like it's missing in the client impl. It would be useful to include that in cpp client as well. | 1.0 | Missing tryLock with leaseTime option for IMap - Currently, client supports `tryLock` with `timeToWait` or no time restriction at all for IMap.
Per [implementation](https://github.com/hazelcast/hazelcast/blob/8f5f00237c900632ec0783b5ad211e047b7dff64/hazelcast/src/main/java/com/hazelcast/map/IMap.java#L1918), setting `leaseTime` is also possible and seems like it's missing in the client impl. It would be useful to include that in cpp client as well. | non_main | missing trylock with leasetime option for imap currently client supports trylock with timetowait or no time restriction at all for imap per setting leasetime is also possible and seems like it s missing in the client impl it would be useful to include that in cpp client as well | 0 |
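As a toy illustration of the missing semantics only (plain Python, nothing like the real Hazelcast client API): a lease means the lock releases itself once the lease expires, even if the holder never calls unlock.

```python
import threading, time

class LeaseLock:
    """Toy lock whose try_lock can take a lease: auto-release after lease_s seconds.

    A real implementation must also guard against the timer releasing a lock
    that was legitimately re-acquired in the meantime; this sketch ignores that.
    """
    def __init__(self):
        self._lock = threading.Lock()

    def try_lock(self, lease_s=None):
        acquired = self._lock.acquire(blocking=False)
        if acquired and lease_s is not None:
            threading.Timer(lease_s, self._release_if_held).start()
        return acquired

    def _release_if_held(self):
        if self._lock.locked():
            self._lock.release()

lock = LeaseLock()
assert lock.try_lock(lease_s=0.1)   # first holder gets it, with a 100 ms lease
assert not lock.try_lock()          # still held
time.sleep(0.25)
assert lock.try_lock()              # lease expired, lock was auto-released
```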
466,251 | 13,399,006,210 | IssuesEvent | 2020-09-03 13:55:46 | fossasia/open-event-frontend | https://api.github.com/repos/fossasia/open-event-frontend | opened | System Invoice Information of Platform needs Additional Format Option and Logo | Priority: High enhancement | Please add options for the invoice format to the admin section as follows
* Invoice Format [Format Options here] (please research how to do it)
* Logo (enable image upload here)
Compare https://eventyay.com/admin/settings/billing

| 1.0 | System Invoice Information of Platform needs Additional Format Option and Logo - Please add options for the invoice format to the admin section as follows
* Invoice Format [Format Options here] (please research how to do it)
* Logo (enable image upload here)
Compare https://eventyay.com/admin/settings/billing

| non_main | system invoice information of platform needs additional format option and logo please add options for the invoice format to the admin section as follows invoice format please resarch how to do it logo enable image upload here compare | 0 |
447,067 | 12,882,815,617 | IssuesEvent | 2020-07-12 18:47:09 | dking1286/css-gardener | https://api.github.com/repos/dking1286/css-gardener | opened | Generate opinionated project boilerplate | enhancement medium priority | Generate an opinionated "quick-start" project template. Should use the generation script for shadow-cljs boilerplate, css-gardener.edn, and reagent components | 1.0 | Generate opinionated project boilerplate - Generate an opinionated "quick-start" project template. Should use the generation script for shadow-cljs boilerplate, css-gardener.edn, and reagent components | non_main | generate opinionated project boilerplate generate an opinionated quick start project template should use the generation script for shadow cljs boilerplate css gardener edn and reagent components | 0 |
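The generation step described there — emit a fixed set of boilerplate files (shadow-cljs config, css-gardener.edn, a component entry point) into a new project directory — can be sketched as below; the file names and contents are placeholders, not the real css-gardener templates:

```python
from pathlib import Path
import tempfile

# Hypothetical boilerplate: relative file name -> contents
TEMPLATES = {
    "shadow-cljs.edn": "{:builds {:app {:target :browser}}}\n",
    "css-gardener.edn": "{:rules {}}\n",
    "src/app/core.cljs": "(ns app.core)\n",
}

def scaffold(target_dir):
    """Write each template file, creating parent directories as needed."""
    root = Path(target_dir)
    for rel, body in TEMPLATES.items():
        path = root / rel
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(body)
    return sorted(p.relative_to(root).as_posix() for p in root.rglob("*") if p.is_file())

with tempfile.TemporaryDirectory() as tmp:
    print(scaffold(tmp))
    # → ['css-gardener.edn', 'shadow-cljs.edn', 'src/app/core.cljs']
```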
52,726 | 10,918,605,477 | IssuesEvent | 2019-11-21 17:12:49 | remkop/picocli | https://api.github.com/repos/remkop/picocli | closed | zsh completion script should not assume `complete` command exists | auto-completion codegen | After recently upgrading my macOs to Mojave, I found out that my Picocli autocomplete script stopped working with the following error:
```
.my-completion.sh:2899: command not found: complete`
```
To solve the problem I needed to add the following prior to running the script:
```
autoload bashcompinit
bashcompinit
```
On further inspection, I noticed https://picocli.info/autocomplete.html#_install_completion_script illustrates in passing that [this](https://stackoverflow.com/a/27853970/527333) is required. I'm wondering if it makes sense to add that to the generated script if `complete` isn't defined. That would allow the instructions for `bash` and `zsh` to be the same. | 1.0 | zsh completion script should not assume `complete` command exists - After recently upgrading my macOs to Mojave, I found out that my Picocli autocomplete script stopped working with the following error:
```
.my-completion.sh:2899: command not found: complete`
```
To solve the problem I needed to add the following prior to running the script:
```
autoload bashcompinit
bashcompinit
```
On further inspection, I noticed https://picocli.info/autocomplete.html#_install_completion_script illustrates in passing that [this](https://stackoverflow.com/a/27853970/527333) is required. I'm wondering if it makes sense to add that to the generated script if `complete` isn't defined. That would allow the instructions for `bash` and `zsh` to be the same. | non_main | zsh completion script should not assume complete command exists after recently upgrading my macos to mojave i found out that my picocli autocomplete script stopped working with the following error my completion sh command not found complete to solve the problem i needed to add the following prior to running the script autoload bashcompinit bashcompinit on further inspection i noticed illustrates in passing that is required i m wondering if it makes sense to add that to the generated script if complete isn t defined that would allow the instructions for bash and zsh to be the same | 0 |
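Until the generated script carries such a guard itself, it can be patched after generation. A sketch follows — the preamble text is an assumption about what a zsh-safe guard could look like, not picocli's actual output:

```python
ZSH_PREAMBLE = (
    'if [ -n "${ZSH_VERSION-}" ]; then\n'
    "  autoload -U +X bashcompinit && bashcompinit\n"
    "fi\n"
)

def add_zsh_guard(script: str) -> str:
    """Insert the bashcompinit preamble after the shebang, unless already present."""
    if "bashcompinit" in script:
        return script
    lines = script.splitlines(keepends=True)
    return lines[0] + ZSH_PREAMBLE + "".join(lines[1:])

generated = "#!/usr/bin/env bash\ncomplete -F _picocli_mycli mycli\n"
patched = add_zsh_guard(generated)
print("bashcompinit" in patched, add_zsh_guard(patched) == patched)  # → True True
```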
3,455 | 13,221,994,031 | IssuesEvent | 2020-08-17 14:52:00 | ansible/ansible | https://api.github.com/repos/ansible/ansible | closed | ZFS module idempotence problems with received dataset properties | affects_2.2 bot_closed bug cloud collection collection:community.general module needs_collection_redirect solaris storage support:community waiting_on_maintainer zfs | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
module system/zfs
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.2.0.0
```
##### CONFIGURATION
```
hash_behavior = merge
force_handlers = True
pipelining = True
control_path = %(directory)s/%%p-%%h-%%r
```
##### OS / ENVIRONMENT
Managing from macOS 10.12
Managing Gentoo Linux hosts
##### SUMMARY
*(this issue was ported over from https://github.com/ansible/ansible-modules-extras/issues/3543 following the repository merge of ansible/ansible-modules-extras into ansible/ansible)*
Currently, the ZFS module will only consider `local` settings while comparing the current state of a ZFS filesystem and its intended target state. This can lead to unwanted `set` operations to occur even when the target filesystem is in the state described in the play, hurting idempotence.
A good example of this is with the "mountpoint" property. If ZFS dataset `B` exists as a child of dataset `A`, but dataset `A` was previously restored from a backup with `zfs receive`, therefore having most of its properties not flagged as `local`, but as `received` instead (including `mountpoint`), a run of the ZFS module on `A` will attempt to first unmount `B` so that it can perform a `mountpoint` property change on `A` even though `A` already has its `mountpoint` property set to the correct value.
In some cases the above will fail (for example if a file is open in dataset `B` so that `B` can't be unmounted), but in other cases this will still yield an unwanted side effect where an unmount will be forced to occur even when it shouldn't.
I'm not sure why only `local` properties are being considered at present when comparing current dataset state and target dataset state. Unless I'm missing something, I believe a valid fix could be to simply delete the following line: https://github.com/ansible/ansible-modules-extras/blob/9760ec2538f8b44cb7f27924617a8e024a694724/system/zfs.py#L198
##### STEPS TO REPRODUCE
1. `zfs receive` any dataset containing children in place of one normally managed by an ansible play
2. run something on the server making use of a file in the one of the children datasets and keeping it open
3. run the play containing the zfs module call that tries to ensure zfs dataset properties are set correctly
##### EXPECTED RESULTS
no `zfs set` command should be issued for dataset properties which are already set to the correct value
##### ACTUAL RESULTS
`zfs set` commands are issued on the managed host to set properties to a value they are already set at, in some cases causing unwanted side effects on the managed dataset and its children.
<!--- Paste verbatim command output between quotes below -->
```
Using module file /opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/ansible/modules/extras/system/zfs.py
<xxx.yyy.zzz> ESTABLISH SSH CONNECTION FOR USER: root
<xxx.yyy.zzz> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/Users/guillaume/.ansible/cp/%p-%h-%r xxx.yyy.zzz '/bin/sh -c '"'"'/usr/bin/python2 && sleep 0'"'"''
fatal: [xxx.yyy.zzz]: FAILED! => {
"changed": false,
"failed": true,
"invocation": {
"module_args": {
"atime": "off",
"canmount": "on",
"casesensitivity": "sensitive",
"checksum": "fletcher4",
"compression": "lz4",
"copies": "1",
"createparent": null,
"devices": "off",
"exec": "off",
"logbias": "latency",
"mountpoint": "/var/lib/mysql",
"nbmand": "off",
"normalization": "formD",
"primarycache": "metadata",
"quota": "none",
"readonly": "off",
"recordsize": "16K",
"refquota": "none",
"refreservation": "none",
"reservation": "none",
"secondarycache": "none",
"setuid": "off",
"sharenfs": "off",
"sharesmb": "off",
"snapdir": "hidden",
"utf8only": "on",
"xattr": "off"
},
"module_name": "zfs"
},
"msg": "umount: /var/lib/mysql/relay_log: target is busy\n (In some cases useful info about processes that\n use the device is found by lsof(8) or fuser(1).)\ncannot unmount '/var/lib/mysql/relay_log': umount failed\n"
}
```
| True | ZFS module idempotence problems with received dataset properties - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
module system/zfs
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.2.0.0
```
##### CONFIGURATION
```
hash_behavior = merge
force_handlers = True
pipelining = True
control_path = %(directory)s/%%p-%%h-%%r
```
##### OS / ENVIRONMENT
Managing from macOS 10.12
Managing Gentoo Linux hosts
##### SUMMARY
*(this issue was ported over from https://github.com/ansible/ansible-modules-extras/issues/3543 following the repository merge of ansible/ansible-modules-extras into ansible/ansible)*
Currently, the ZFS module will only consider `local` settings while comparing the current state of a ZFS filesystem and its intended target state. This can lead to unwanted `set` operations to occur even when the target filesystem is in the state described in the play, hurting idempotence.
A good example of this is with the "mountpoint" property. If ZFS dataset `B` exists as a child of dataset `A`, but dataset `A` was previously restored from a backup with `zfs receive`, therefore having most of its properties not flagged as `local`, but as `received` instead (including `mountpoint`), a run of the ZFS module on `A` will attempt to first unmount `B` so that it can perform a `mountpoint` property change on `A` even though `A` already has its `mountpoint` property set to the correct value.
In some cases the above will fail (for example if a file is open in dataset `B` so that `B` can't be unmounted), but in other cases this will still yield an unwanted side effect where an unmount will be forced to occur even when it shouldn't.
I'm not sure why only `local` properties are being considered at present when comparing current dataset state and target dataset state. Unless I'm missing something, I believe a valid fix could be to simply delete the following line: https://github.com/ansible/ansible-modules-extras/blob/9760ec2538f8b44cb7f27924617a8e024a694724/system/zfs.py#L198
##### STEPS TO REPRODUCE
1. `zfs receive` any dataset containing children in place of one normally managed by an ansible play
2. run something on the server making use of a file in the one of the children datasets and keeping it open
3. run the play containing the zfs module call that tries to ensure zfs dataset properties are set correctly
##### EXPECTED RESULTS
no `zfs set` command should be issued for dataset properties which are already set to the correct value
##### ACTUAL RESULTS
`zfs set` commands are issued on the managed host to set properties to a value they are already set at, in some cases causing unwanted side effects on the managed dataset and its children.
<!--- Paste verbatim command output between quotes below -->
```
Using module file /opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/ansible/modules/extras/system/zfs.py
<xxx.yyy.zzz> ESTABLISH SSH CONNECTION FOR USER: root
<xxx.yyy.zzz> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/Users/guillaume/.ansible/cp/%p-%h-%r xxx.yyy.zzz '/bin/sh -c '"'"'/usr/bin/python2 && sleep 0'"'"''
fatal: [xxx.yyy.zzz]: FAILED! => {
"changed": false,
"failed": true,
"invocation": {
"module_args": {
"atime": "off",
"canmount": "on",
"casesensitivity": "sensitive",
"checksum": "fletcher4",
"compression": "lz4",
"copies": "1",
"createparent": null,
"devices": "off",
"exec": "off",
"logbias": "latency",
"mountpoint": "/var/lib/mysql",
"nbmand": "off",
"normalization": "formD",
"primarycache": "metadata",
"quota": "none",
"readonly": "off",
"recordsize": "16K",
"refquota": "none",
"refreservation": "none",
"reservation": "none",
"secondarycache": "none",
"setuid": "off",
"sharenfs": "off",
"sharesmb": "off",
"snapdir": "hidden",
"utf8only": "on",
"xattr": "off"
},
"module_name": "zfs"
},
"msg": "umount: /var/lib/mysql/relay_log: target is busy\n (In some cases useful info about processes that\n use the device is found by lsof(8) or fuser(1).)\ncannot unmount '/var/lib/mysql/relay_log': umount failed\n"
}
```
| main | zfs module idempotence problems with received dataset properties issue type bug report component name module system zfs ansible version ansible configuration hash behavior merge force handlers true pipelining true control path directory s p h r os environment managing from macos managing gentoo linux hosts summary this issue was ported over from following the repository merge of ansible ansible modules extras into ansible ansible currently the zfs module will only consider local settings while comparing the current state of a zfs filesystem and its intended target state this can lead to unwanted set operations to occur even when the target filesystem is in the state described in the play hurting idempotence a good example of this is with the mountpoint property if zfs dataset b exists as a child of dataset a but dataset a was previously restored from a backup with zfs receive therefore having most of its properties not flagged as local but as received instead including mountpoint a run of the zfs module on a will attempt to first unmount b so that it can perform a mountpoint property change on a even though a already has its mountpoint property set to the correct value in some cases the above will fail for example if a file is open in dataset b so that b can t be unmounted but in other cases this will still yield an unwanted side effect where an unmount will be forced to occur even when it shouldn t i m not sure why only local properties are being considered at present when comparing current dataset state and target dataset state unless i m missing something i believe a valid fix could be to simply delete the following line steps to reproduce zfs receive any dataset containing children in place of one normally managed by an ansible play run something on the server making use of a file in the one of the children datasets and keeping it open run the play containing the zfs module call that tries to ensure zfs dataset properties are set correctly expected 
results no zfs set command should be issued for dataset properties which are already set to the correct value actual results zfs set commands are issued on the managed host to set properties to a value they are already set at in some cases causing unwanted side effects on the managed dataset and its children using module file opt local library frameworks python framework versions lib site packages ansible modules extras system zfs py establish ssh connection for user root ssh exec ssh c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user root o connecttimeout o controlpath users guillaume ansible cp p h r xxx yyy zzz bin sh c usr bin sleep fatal failed changed false failed true invocation module args atime off canmount on casesensitivity sensitive checksum compression copies createparent null devices off exec off logbias latency mountpoint var lib mysql nbmand off normalization formd primarycache metadata quota none readonly off recordsize refquota none refreservation none reservation none secondarycache none setuid off sharenfs off sharesmb off snapdir hidden on xattr off module name zfs msg umount var lib mysql relay log target is busy n in some cases useful info about processes that n use the device is found by lsof or fuser ncannot unmount var lib mysql relay log umount failed n | 1 |
596,605 | 18,108,006,209 | IssuesEvent | 2021-09-22 21:38:57 | minio/mc | https://api.github.com/repos/minio/mc | closed | copy from local to minio - "<ERROR> Unable to validate source" | priority: low community | ## Expected behavior
Copy folder from local to minio server
`mcli cp --recursive ./images/ s3repo/public/images/`
## Actual behavior
```mcli: <ERROR> Unable to validate source `./images/`.```
## Steps to reproduce the behavior
`wget https://dl.minio.io/client/mc/release/windows-amd64/mc.exe -O /usr/local/bin/mcli`
see above command.
## mc --version
```
$ mcli --version
mcli version RELEASE.2021-02-14T04-28-06Z
```
## System information
windows 10 + MSYS 2 (or Git-Bash)
| 1.0 | copy from local to minio - "<ERROR> Unable to validate source" - ## Expected behavior
Copy folder from local to minio server
`mcli cp --recursive ./images/ s3repo/public/images/`
## Actual behavior
```mcli: <ERROR> Unable to validate source `./images/`.```
## Steps to reproduce the behavior
`wget https://dl.minio.io/client/mc/release/windows-amd64/mc.exe -O /usr/local/bin/mcli`
see above command.
## mc --version
```
$ mcli --version
mcli version RELEASE.2021-02-14T04-28-06Z
```
## System information
windows 10 + MSYS 2 (or Git-Bash)
| non_main | copy from local to minio unable to validate source expected behavior copy folder from local to minio server mcli cp recursive images public images actual behavior mcli unable to validate source images steps to reproduce the behavior wget o usr local bin mcli see above command mc version mcli version mcli version release system information windows msys or git bash | 0 |
3,989 | 18,444,251,021 | IssuesEvent | 2021-10-14 22:26:31 | carbon-design-system/carbon | https://api.github.com/repos/carbon-design-system/carbon | closed | [Repo]: usage of `math.div` breaks some build tools that do not have the ability to upgrade sass. | type: bug 🐛 status: needs triage 🕵️♀️ status: waiting for maintainer response 💬 | ### Package
carbon-components, carbon-components-react
### Browser
Chrome
### Package version
v10.41.0
### Description
The `math.div` calls added in 10.41.0 do not check for the existence of the function, and cause some build tools to break. This breaks backwards compatibility for some libraries, because the Sass version cannot be changed by the user.
For example, in `ai-apps/angular` and `carbon-addons-iot-react` the Angular library is trying to maintain parity with React by using the React styles; however, Angular v11 only supports dart-sass v1.32.x and `math.div` was added in 1.33. These `math.div` updates prevent the `ai-apps/angular` package from re-using styles from Carbon while still maintaining backwards compatibility with Angular v11.
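The version constraint described above can be sketched as a simple gate (illustrative only — real tooling should feature-detect with Sass's `meta.function-exists` rather than parse version strings): the issue notes `math.div` was added in dart-sass 1.33, so any toolchain pinned to 1.32.x fails on it.

```python
def supports_math_div(dart_sass_version: str) -> bool:
    """math.div landed in dart-sass 1.33.0; earlier releases raise
    "Undefined function" when they meet it (illustrative check only)."""
    major, minor, *_ = (int(p) for p in dart_sass_version.split("."))
    return (major, minor) >= (1, 33)

# Angular v11 pins dart-sass to 1.32.x, so carbon 10.41.0's math.div breaks:
print(supports_math_div("1.32.5"))  # False -> build fails
print(supports_math_div("1.33.0"))  # True
```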
### CodeSandbox example
n/a
### Steps to reproduce
You can see an example of this in the `carbon-addons-iot-react` build logs from https://app.netlify.com/sites/ai-apps-pal-angular/deploys/615df5061dc8800008ef3170.
```
3:19:49 PM: ERR! => Failed to build the preview
3:19:49 PM: ERR! ./src/side-panel/side-panel.scss
3:19:49 PM: ERR! Module build failed (from /opt/build/repo/node_modules/sass-loader/dist/cjs.js):
3:19:49 PM: ERR! SassError: Undefined function.
3:19:49 PM: ERR! ╷
3:19:49 PM: ERR! 41 │ @return math.div($px, $carbon--base-font-size) * 1rem;
3:19:49 PM: ERR! │ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
3:19:49 PM: ERR! ╵
3:19:49 PM: ERR! src/vendor/@ai-apps/styles/scss/vendor/carbon-components/scss/globals/scss/vendor/@carbon/elements/scss/layout/_convert.import.scss 41:11 carbon--rem()
3:19:49 PM: ERR! src/vendor/@ai-apps/styles/scss/vendor/carbon-components/scss/globals/scss/vendor/@carbon/elements/scss/layout/_breakpoint.scss 16:23 @import
3:19:49 PM: ERR! src/vendor/@ai-apps/styles/scss/vendor/carbon-components/scss/globals/scss/vendor/@carbon/elements/scss/type/_styles.import.scss 23:9 @import
3:19:49 PM: ERR! src/vendor/@ai-apps/styles/scss/vendor/carbon-components/scss/globals/scss/vendor/@carbon/elements/scss/type/_reset.scss 10:9 @import
3:19:49 PM: ERR! src/vendor/@ai-apps/styles/scss/vendor/carbon-components/scss/globals/scss/_css--reset.scss 9:9 @import
3:19:49 PM: ERR! src/vendor/@ai-apps/styles/scss/vendor/carbon-components/scss/globals/scss/_helper-mixins.scss 23:9 @import
3:19:49 PM: ERR! src/vendor/@ai-apps/styles/scss/vendor/carbon-components/scss/globals/scss/_mixins.scss 8:9 @import
3:19:49 PM: ERR! src/vendor/@ai-apps/styles/scss/globals/_vars.scss 2:9 @import
3:19:49 PM: ERR! src/side-panel/side-panel.scss 1:9 root stylesheet
```
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/carbon-design-system/carbon/blob/f555616971a03fd454c0f4daea184adf41fff05b/.github/CODE_OF_CONDUCT.md)
- [X] I checked the [current issues](https://github.com/carbon-design-system/carbon/issues) for duplicate problems | True | [Repo]: usage of `math.div` breaks some build tools that do not have the ability to upgrade sass. - ### Package
carbon-components, carbon-components-react
### Browser
Chrome
### Package version
v10.41.0
### Description
The `math.div` calls added in 10.41.0 do not check for the existence of the function, and cause some build tools to break. This breaks backwards compatibility for some libraries, because the Sass version cannot be changed by the user.
For example, in `ai-apps/angular` and `carbon-addons-iot-react` the Angular library is trying to maintain parity with React by using the React styles; however, Angular v11 only supports dart-sass v1.32.x and `math.div` was added in 1.33. These `math.div` updates prevent the `ai-apps/angular` package from re-using styles from Carbon while still maintaining backwards compatibility with Angular v11.
### CodeSandbox example
n/a
### Steps to reproduce
You can see an example of this in the `carbon-addons-iot-react` build logs from https://app.netlify.com/sites/ai-apps-pal-angular/deploys/615df5061dc8800008ef3170.
```
3:19:49 PM: ERR! => Failed to build the preview
3:19:49 PM: ERR! ./src/side-panel/side-panel.scss
3:19:49 PM: ERR! Module build failed (from /opt/build/repo/node_modules/sass-loader/dist/cjs.js):
3:19:49 PM: ERR! SassError: Undefined function.
3:19:49 PM: ERR! ╷
3:19:49 PM: ERR! 41 │ @return math.div($px, $carbon--base-font-size) * 1rem;
3:19:49 PM: ERR! │ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
3:19:49 PM: ERR! ╵
3:19:49 PM: ERR! src/vendor/@ai-apps/styles/scss/vendor/carbon-components/scss/globals/scss/vendor/@carbon/elements/scss/layout/_convert.import.scss 41:11 carbon--rem()
3:19:49 PM: ERR! src/vendor/@ai-apps/styles/scss/vendor/carbon-components/scss/globals/scss/vendor/@carbon/elements/scss/layout/_breakpoint.scss 16:23 @import
3:19:49 PM: ERR! src/vendor/@ai-apps/styles/scss/vendor/carbon-components/scss/globals/scss/vendor/@carbon/elements/scss/type/_styles.import.scss 23:9 @import
3:19:49 PM: ERR! src/vendor/@ai-apps/styles/scss/vendor/carbon-components/scss/globals/scss/vendor/@carbon/elements/scss/type/_reset.scss 10:9 @import
3:19:49 PM: ERR! src/vendor/@ai-apps/styles/scss/vendor/carbon-components/scss/globals/scss/_css--reset.scss 9:9 @import
3:19:49 PM: ERR! src/vendor/@ai-apps/styles/scss/vendor/carbon-components/scss/globals/scss/_helper-mixins.scss 23:9 @import
3:19:49 PM: ERR! src/vendor/@ai-apps/styles/scss/vendor/carbon-components/scss/globals/scss/_mixins.scss 8:9 @import
3:19:49 PM: ERR! src/vendor/@ai-apps/styles/scss/globals/_vars.scss 2:9 @import
3:19:49 PM: ERR! src/side-panel/side-panel.scss 1:9 root stylesheet
```
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/carbon-design-system/carbon/blob/f555616971a03fd454c0f4daea184adf41fff05b/.github/CODE_OF_CONDUCT.md)
- [X] I checked the [current issues](https://github.com/carbon-design-system/carbon/issues) for duplicate problems | main | usage of math div breaks some build tools that do not have the ability to upgrade sass package carbon components carbon components react browser chrome package version description the addition of math div added in do not check for the existence of the function and causes some build tools to break this breaks backwards compatibility of some libraries because the sass version cannot be changed by the user for example in ai apps angular and carbon addons iot react the angular library is trying to maintain parity with react by using the react styles however angular only support dart sass x and math div was added in these math div updates prevent the ai apps angular package from re using styles from carbon and still maintain backwards compatibility with angular codesandbox example n a steps to reproduce you can see an example of this in the carbon addons iot react build logs from pm err failed to build the preview pm err src side panel side panel scss pm err module build failed from opt build repo node modules sass loader dist cjs js pm err sasserror undefined function pm err ╷ pm err │ return math div px carbon base font size pm err │ pm err ╵ pm err src vendor ai apps styles scss vendor carbon components scss globals scss vendor carbon elements scss layout convert import scss carbon rem pm err src vendor ai apps styles scss vendor carbon components scss globals scss vendor carbon elements scss layout breakpoint scss import pm err src vendor ai apps styles scss vendor carbon components scss globals scss vendor carbon elements scss type styles import scss import pm err src vendor ai apps styles scss vendor carbon components scss globals scss vendor carbon elements scss type reset scss import pm err src vendor ai apps styles scss vendor carbon components scss globals scss css reset scss import pm err src vendor ai apps styles scss vendor carbon 
components scss globals scss helper mixins scss import pm err src vendor ai apps styles scss vendor carbon components scss globals scss mixins scss import pm err src vendor ai apps styles scss globals vars scss import pm err src side panel side panel scss root stylesheet code of conduct i agree to follow this project s i checked the for duplicate problems | 1 |
358 | 3,285,745,825 | IssuesEvent | 2015-10-28 21:56:11 | keepalo/angular-storm | https://api.github.com/repos/keepalo/angular-storm | closed | Create more defined classes for collection, and entities and model | enhancement maintainability | affects stormEntity and stormCollection. and possibly stormModel
Provide interface on both for mixins and actions (handled by stormExtension). | True | Create more defined classes for collection, and entities and model - affects stormEntity and stormCollection. and possibly stormModel
Provide interface on both for mixins and actions (handled by stormExtension). | main | create more defined classes for collection and entities and model affects stormentity and stormcollection and possibly stormmodel provide interface on both for mixins and actions handled by stormextension | 1 |
1,044 | 4,857,570,711 | IssuesEvent | 2016-11-12 17:38:57 | duckduckgo/zeroclickinfo-longtail | https://api.github.com/repos/duckduckgo/zeroclickinfo-longtail | closed | Stack Overflow: Display error | Improvement Maintainer Input Requested | See the screenshot below for the [query](https://duckduckgo.com/?q=assert+python&t=hu&ia=qa):

------
IA Page: http://duck.co/ia/view/stack_overflow
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @zachthompson | True | Stack Overflow: Display error - See the screenshot below for the [query](https://duckduckgo.com/?q=assert+python&t=hu&ia=qa):

------
IA Page: http://duck.co/ia/view/stack_overflow
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @zachthompson | main | stack overflow display error see the screenshot below for the ia page zachthompson | 1 |
139,964 | 12,882,894,698 | IssuesEvent | 2020-07-12 19:10:40 | scikit-learn/scikit-learn | https://api.github.com/repos/scikit-learn/scikit-learn | closed | fit_intercept=False in Ridge, Lasso and ElasticNet may be described better | Documentation | #### Describe the issue linked to the documentation
In Ridge model the description of `fit_intercept` parameter is the following:
> Whether to fit the intercept for this model. If set to false, no intercept will be used in calculations (i.e. `X` and `y` are expected to be centered).
1. I tried to fit several dense Ridge models with `fit_intercept=False` and it seems that the result doesn't depend on whether `y` is centered or not, as long as `X` is centered. It seems that Ridge centers `y` forcibly if `X` is centered.
Below is one of my experiments:
input:
```
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
X, y = make_regression(n_samples=3, n_features=3, random_state=1, bias=30)
Xc = X - np.mean(X, axis=0) #centered X
yc = y - y.mean() #centered y
ridge_Xc_yc = Ridge(fit_intercept=False)
ridge_Xc_yc.fit(Xc,yc)
print(ridge_Xc_yc.coef_)
```
output (for centered `y`):
```
[ 1.34882842 -3.99509115 5.09281771]
```
input:
```
ridge_Xc_y = Ridge(fit_intercept=False)
ridge_Xc_y.fit(Xc,y)
print(ridge_Xc_y.coef_)
```
output (for uncentered `y`):
```
[ 1.34882842 -3.99509115 5.09281771]
```
As you see, both outputs are the same.
2) Look at the other linear regression models – Lasso and ElasticNet.
In the case of Lasso the description of `fit_intercept` is the following:
> Whether to calculate the intercept for this model. If set to False, no intercept will be used in calculations (i.e. data is expected to be centered).
And in the case of ElasticNet:
> Whether the intercept should be estimated or not. If `False`, the data is assumed to be already centered.
In both cases (Lasso and Elastic-Net) you should clearly specify what is meant by “data”, as you did for the Ridge model.
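The Ridge observation in point 1 above has a simple linear-algebra explanation: when the columns of `X` are centered they each sum to zero, so `X.T @ y` equals `X.T @ (y - y.mean())`, and the solution of `(X.T X + alpha*I) w = X.T y` cannot depend on whether `y` is centered. A small NumPy check of that identity (independent of scikit-learn):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(5, 3))
y = rng.normal(size=5) + 30.0          # deliberately uncentered target

Xc = X - X.mean(axis=0)                # centered design matrix
yc = y - y.mean()
alpha = 1.0

A = Xc.T @ Xc + alpha * np.eye(3)
coef_y = np.linalg.solve(A, Xc.T @ y)    # uncentered target
coef_yc = np.linalg.solve(A, Xc.T @ yc)  # centered target

# Xc.T @ (y - y.mean()) == Xc.T @ y because each column of Xc sums to 0
print(np.allclose(coef_y, coef_yc))  # True
```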
#### Suggest a potential alternative/fix
1) If Ridge with `fit_intercept=False` really doesn't depend on whether `y` is centered or not, as long as `X` is centered, the description of `fit_intercept` should look like this:
> Whether to fit the intercept for this model. If set to false, no intercept will be used in calculations (i.e. `X` is expected to be centered).
2) In the Lasso and Elastic-Net models you should clearly specify what is meant by “data” in the description of `fit_intercept` parameter, as you did for the Ridge model. It seems that for all three models (Lasso, Ridge, Elastic-net) the description of this parameter should be the same. | 1.0 | fit_intercept=False in Ridge, Lasso and ElasticNet may be described better - #### Describe the issue linked to the documentation
In Ridge model the description of `fit_intercept` parameter is the following:
> Whether to fit the intercept for this model. If set to false, no intercept will be used in calculations (i.e. `X` and `y` are expected to be centered).
1. I tried to fit several dense Ridge models with `fit_intercept=False` and it seems that the result doesn't depend on whether `y` is centered or not, as long as `X` is centered. It seems that Ridge centers `y` forcibly if `X` is centered.
Below is one of my experiments:
input:
```
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
X, y = make_regression(n_samples=3, n_features=3, random_state=1, bias=30)
Xc = X - np.mean(X, axis=0) #centered X
yc = y - y.mean() #centered y
ridge_Xc_yc = Ridge(fit_intercept=False)
ridge_Xc_yc.fit(Xc,yc)
print(ridge_Xc_yc.coef_)
```
output (for centered `y`):
```
[ 1.34882842 -3.99509115 5.09281771]
```
input:
```
ridge_Xc_y = Ridge(fit_intercept=False)
ridge_Xc_y.fit(Xc,y)
print(ridge_Xc_y.coef_)
```
output (for uncentered `y`):
```
[ 1.34882842 -3.99509115 5.09281771]
```
As you see, both outputs are the same.
2) Look at the other linear regression models – Lasso and ElasticNet.
In the case of Lasso the description of `fit_intercept` is the following:
> Whether to calculate the intercept for this model. If set to False, no intercept will be used in calculations (i.e. data is expected to be centered).
And in the case of ElasticNet:
> Whether the intercept should be estimated or not. If `False`, the data is assumed to be already centered.
In both cases (Lasso and Elastic-Net) you should clearly specify what is meant by “data”, as you did for the Ridge model.
#### Suggest a potential alternative/fix
1) If Ridge with `fit_intercept=False` really doesn't depend on whether `y` is centered or not, as long as `X` is centered, the description of `fit_intercept` should look like this:
> Whether to fit the intercept for this model. If set to false, no intercept will be used in calculations (i.e. `X` is expected to be centered).
2) In the Lasso and Elastic-Net models you should clearly specify what is meant by “data” in the description of `fit_intercept` parameter, as you did for the Ridge model. It seems that for all three models (Lasso, Ridge, Elastic-net) the description of this parameter should be the same. | non_main | fit intercept false in ridge lasso and elasticnet may be described better describe the issue linked to the documentation in ridge model the description of fit intercept parameter is the following whether to fit the intercept for this model if set to false no intercept will be used in calculations i e x and y are expected to be centered i tried to fit several dense ridge models with fit intercept false and it seems that the result doesn t depend on whether y is centered or not if x is centered it seems that ridge centers y forcibly if x is centered below is one of my experiments input import numpy as np from sklearn datasets import make regression from sklearn linear model import ridge x y make regression n samples n features random state bias xc x np mean x axis centered x yc y y mean centered y ridge xc yc ridge fit intercept false ridge xc yc fit xc yc print ridge xc yc coef output for centered y input ridge xc y ridge fit intercept false ridge xc y fit xc y print ridge xc y coef output for uncentered y as you see both outputs are the same look at the other linear regression models – lasso and elasticnet in the case of lasso the description of fit intercept is the following whether to calculate the intercept for this model if set to false no intercept will be used in calculations i e data is expected to be centered and in the case of elasticnet whether the intercept should be estimated or not if false the data is assumed to be already centered in both cases lasso and elastic net you should clearly specify what is meant by “data” as you did for the ridge model suggest a potential alternative fix if ridge with fit intercept false really doesn t depend on whether y is 
centered or not if x is centered the description of fit intercept should look like this whether to fit the intercept for this model if set to false no intercept will be used in calculations i e x is expected to be centered in the lasso and elastic net models you should clearly specify what is meant by “data” in the description of fit intercept parameter as you did for the ridge model it seems that for all three models lasso ridge elastic net the description of this parameter should be the same | 0 |
9 | 2,514,988,247 | IssuesEvent | 2015-01-15 15:47:30 | simplesamlphp/simplesamlphp | https://api.github.com/repos/simplesamlphp/simplesamlphp | opened | Remove www/example-simple/ | enhancement maintainability | The `hostnames.php` file should be moved to the `www/admin/` folder. The rest of files in this folder must go away. | True | Remove www/example-simple/ - The `hostnames.php` file should be moved to the `www/admin/` folder. The rest of files in this folder must go away. | main | remove www example simple the hostnames php file should be moved to the www admin folder the rest of files in this folder must go away | 1 |
5,589 | 28,013,227,485 | IssuesEvent | 2023-03-27 20:17:44 | beyarkay/eskom-calendar | https://api.github.com/repos/beyarkay/eskom-calendar | opened | Missing area schedule | waiting-on-maintainer missing-area-schedule | Area - QUEENSWOOD Ext 1 (8
Municipality - Tshwane
Province - Gauteng
Integration - Solar Assistant
**What area(s) couldn't you find on [eskomcalendar.co.za](https://eskomcalendar.co.za/ec)?**
Please also give the province/municipality; our beautiful country has a surprising number of places that are named the same as each other. If you know what your area is named on EskomSePush, including that also helps a lot.
**Where did you hear about [eskomcalendar.co.za](https://eskomcalendar.co.za/ec)?**
This really helps us figure out what's working!
**Any other information**
If you've got any other info you think might be helpful, feel free to leave it here
| True | Missing area schedule - Area - QUEENSWOOD Ext 1 (8
Municipality - Tshwane
Province - Gauteng
Integration - Solar Assistant
**What area(s) couldn't you find on [eskomcalendar.co.za](https://eskomcalendar.co.za/ec)?**
Please also give the province/municipality; our beautiful country has a surprising number of places that are named the same as each other. If you know what your area is named on EskomSePush, including that also helps a lot.
**Where did you hear about [eskomcalendar.co.za](https://eskomcalendar.co.za/ec)?**
This really helps us figure out what's working!
**Any other information**
If you've got any other info you think might be helpful, feel free to leave it here
| main | missing area schedule area queenswood ext municipality tshwane province gauteng integration solar assistant what area s couldn t you find on please also give the province municipality our beautiful country has a surprising number of places that are named the same as each other if you know what your area is named on eskomsepush including that also helps a lot where did you hear about this really helps us figure out what s working any other information if you ve got any other info you think might be helpful feel free to leave it here | 1 |
42,987 | 5,560,747,301 | IssuesEvent | 2017-03-24 20:22:28 | dotnet/roslyn | https://api.github.com/repos/dotnet/roslyn | closed | pathof operator | Area-Language Design | The existing `nameof` operator is great, but it only returns the name of the leaf node of the path provided.
For example:
``` c#
nameof(ObservableAccount.Customer.BasicInfo.Name);
// evaluates to "Name"
```
I would like an operator, for instance`pathof`, that lets me do something like this:
``` c#
pathof(ObservableAccount.Customer.BasicInfo.Name);
// evaluates to "ObservableAccount.Customer.BasicInfo.Name"
```
Today I have to write code like this:
``` c#
$"{nameof(ObservableAccount.Customer)}.{nameof(ObservableCustomer.BasicInfo)}.{nameof(BasicInfo.Name)}";
```
or implement a helper that lets me do this:
``` c#
PropertyPathHelper.Combine(
nameof(ObservableAccount.Customer),
nameof(ObservableCustomer.BasicInfo),
nameof(BasicInfo.Name));
```
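For illustration, the semantics of the proposed operator reduce to joining the member names along the access path. A hypothetical helper (sketched in Python purely because the string-building is language-agnostic; a real `pathof` would of course resolve the names at compile time, the way `nameof` does):

```python
def pathof(*names: str) -> str:
    """Join member names into a dotted property path -- the string a
    compile-time `pathof` operator would emit (illustrative only)."""
    return ".".join(names)

print(pathof("ObservableAccount", "Customer", "BasicInfo", "Name"))
# -> ObservableAccount.Customer.BasicInfo.Name
```

This is essentially what the `PropertyPathHelper.Combine` workaround above does at runtime, minus the compile-time name checking that makes `nameof`/`pathof` attractive.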
| 1.0 | pathof operator - The existing `nameof` operator is great, but it only returns the name of the leaf node of the path provided.
For example:
``` c#
nameof(ObservableAccount.Customer.BasicInfo.Name);
// evaluates to "Name"
```
I would like an operator, for instance`pathof`, that lets me do something like this:
``` c#
pathof(ObservableAccount.Customer.BasicInfo.Name);
// evaluates to "ObservableAccount.Customer.BasicInfo.Name"
```
Today I have to write code like this:
``` c#
$"{nameof(ObservableAccount.Customer)}.{nameof(ObservableCustomer.BasicInfo)}.{nameof(BasicInfo.Name)}";
```
or implement a helper that lets me do this:
``` c#
PropertyPathHelper.Combine(
nameof(ObservableAccount.Customer),
nameof(ObservableCustomer.BasicInfo),
nameof(BasicInfo.Name));
```
| non_main | pathof operator the existing nameof operator is great but it only returns the name of the leaf node of the path provided for example c nameof observableaccount customer basicinfo name evaluates to name i would like an operator for instance pathof that lets me do something like this c pathof observableaccount customer basicinfo name evaluates to observableaccount customer basicinfo name today i have to write code like this c nameof observableaccount customer nameof observablecustomer basicinfo nameof basicinfo name or implement a helper that lets me do this c propertypathhelper combine nameof observableaccount customer nameof observablecustomer basicinfo nameof basicinfo name | 0 |
846 | 4,501,265,418 | IssuesEvent | 2016-09-01 08:48:31 | openwrt/packages | https://api.github.com/repos/openwrt/packages | closed | please reconfigure "shadowsocks-libev" | waiting for maintainer | @aa65535 please reconfigure `shadowsocks-libev`, we need `ss-local` and `ss-server`.
| True | please reconfigure "shadowsocks-libev" - @aa65535 please reconfigure `shadowsocks-libev`, we need `ss-local` and `ss-server`.
| main | please reconfigure shadowsocks libev please reconfigure shadowsocks libev we need ss local and ss server | 1 |
152,677 | 24,002,850,437 | IssuesEvent | 2022-09-14 12:51:04 | Refemi/refemi_front | https://api.github.com/repos/Refemi/refemi_front | closed | [ DESIGN / LOADER ] Dashboard header display when loading | design priority: burning hot V1 | When the dashboard page loads, there are three elements that appear relatively quickly at the top left of the view. I'm curious to know where this may be coming from and if it's possible to avoid this?
 | 1.0 | [ DESIGN / LOADER ] Dashboard header display when loading - When the dashboard page loads, there are three elements that appear relatively quickly at the top left of the view. I'm curious to know where this may be coming from and if it's possible to avoid this?
 | non_main | dashboard header display when loading when the dashboard page loads there are three elements that appear relatively quickly at the top left of the view i m curious to know where this may be coming from and if it s possible to avoid this | 0 |
11 | 2,515,070,006 | IssuesEvent | 2015-01-15 16:16:33 | simplesamlphp/simplesamlphp | https://api.github.com/repos/simplesamlphp/simplesamlphp | opened | Cleanup the SimpleSAML_Utilities class | enhancement maintainability started | The following must be done:
* Remove the `validateCA()` method.
* Remove the `generateRandomBytesMTrand()` method.
* Remove the `validateXML()` and `validateXMLDocument()` methods. Use a standalone composer module instead.
* Refactor the rest of it to group methods by their functionality in dedicated classes under `lib/SimpleSAML/Utils/`. | True | Cleanup the SimpleSAML_Utilities class - The following must be done:
* Remove the `validateCA()` method.
* Remove the `generateRandomBytesMTrand()` method.
* Remove the `validateXML()` and `validateXMLDocument()` methods. Use a standalone composer module instead.
* Refactor the rest of it to group methods by their functionality in dedicated classes under `lib/SimpleSAML/Utils/`. | main | cleanup the simplesaml utilities class the following must be done remove the validateca method remove the generaterandombytesmtrand method remove the validatexml and validatexmldocument methods use a standalone composer module instead refactor the rest of it to group methods by their functionality in dedicated classes under lib simplesaml utils | 1 |
34,014 | 7,321,001,214 | IssuesEvent | 2018-03-02 09:49:53 | PowerDNS/pdns | https://api.github.com/repos/PowerDNS/pdns | closed | Recursor: recursor_cache MemRecursorCache::get entryA / entryAAAA mixup? | backport to stable? defect rec | Looking at Master - shouldn't this be "entryAAAA" here to work as intended..?
https://github.com/PowerDNS/pdns/blob/9b7f95ab22421a0f5c00a47c3abce621a7d2766c/pdns/recursor_cache.cc#L192
(I know this is a rather short one) ;) | 1.0 | Recursor: recursor_cache MemRecursorCache::get entryA / entryAAAA mixup? - Looking at Master - shouldn't this be "entryAAAA" here to work as intended..?
https://github.com/PowerDNS/pdns/blob/9b7f95ab22421a0f5c00a47c3abce621a7d2766c/pdns/recursor_cache.cc#L192
(I know this is a rather short one) ;) | non_main | recursor recursor cache memrecursorcache get entrya entryaaaa mixup looking at master shouldn t this be entryaaaa here to work as intended i know this is a rather short one | 0 |
1,354 | 5,831,056,761 | IssuesEvent | 2017-05-08 18:22:56 | caskroom/homebrew-cask | https://api.github.com/repos/caskroom/homebrew-cask | closed | Removing `real-vnc` removes `/etc` symlink, which breaks the OS | awaiting maintainer feedback | #### Description of issue
After installing `real-vnc`, running `brew cask uninstall real-vnc` to uninstall it also removes the `/etc` symlink (!!!!!!), which breaks the system.
#### Output of your command with `--verbose --debug`
**If you're going to try this, make sure you have a root shell (e.g. `sudo su`) already open before you do it. Otherwise it might break your system and require you to go into Recovery from USB or even do a re-install. You can fix this by running `ln -s /private/etc /etc` as `root`.**
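To see why this is so destructive: on macOS, `/etc` is itself a symlink to `/private/etc`, so a blanket `rm` over "pkg symlinks and special files" can delete the link itself while the real data survives. A minimal, sandboxed Python sketch of that failure mode (using a temp directory, not the real `/etc`):

```python
import os
import tempfile

# Recreate the macOS layout in a sandbox: private/etc is the real
# directory, etc is a symlink pointing at it.
root = tempfile.mkdtemp()
target = os.path.join(root, "private", "etc")
os.makedirs(target)
link = os.path.join(root, "etc")
os.symlink(target, link)

# A file visible through the link, standing in for /etc/sudoers.
open(os.path.join(link, "sudoers"), "w").close()

# What the uninstaller's rm effectively does when handed the symlink path:
os.remove(link)  # removes the *link*, not the target directory

assert not os.path.lexists(link)                         # "/etc" is gone -> sudo breaks
assert os.path.exists(os.path.join(target, "sudoers"))   # real data still intact

# The documented recovery step, equivalent to `ln -s /private/etc /etc`:
os.symlink(target, link)
assert os.path.exists(os.path.join(link, "sudoers"))
```

This matches the symptoms in the log below: once the link is gone, `sudo` can no longer stat `/etc/sudoers`, even though `/private/etc/sudoers` still exists.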
```
==> Uninstalling Cask real-vnc
==> Uninstalling Cask real-vnc
==> Un-installing artifacts
==> Determining which artifacts are present in Cask real-vnc
==> 4 artifact/s defined
#<Hbc::Artifact::Uninstall:0x007f887f21ff98>
#<Hbc::Artifact::PreflightBlock:0x007f887f21c488>
#<Hbc::Artifact::Pkg:0x007f887f21f1b0>
#<Hbc::Artifact::Zap:0x007f887f21fed0>
==> Un-installing artifact of class Hbc::Artifact::Uninstall
==> Running uninstall process for real-vnc; your password may be necessary
==> Removing launchctl service com.realvnc.vncserver
==> Executing: ["/bin/launchctl", "list", "com.realvnc.vncserver"]
==> Executing: ["/usr/bin/sudo", "-E", "--", "/bin/launchctl", "list", "com.realvnc.vncserver"]
==> Removing launchctl service com.realvnc.vncserver.peruser
==> Executing: ["/bin/launchctl", "list", "com.realvnc.vncserver.peruser"]
==> Executing: ["/usr/bin/sudo", "-E", "--", "/bin/launchctl", "list", "com.realvnc.vncserver.peruser"]
==> Uninstalling packages:
==> Executing: ["/usr/sbin/pkgutil", "--pkgs=com.realvnc.vncserver.pkg"]
com.realvnc.vncserver.pkg
==> Executing: ["/usr/sbin/pkgutil", "--files", "com.realvnc.vncserver.pkg"]
==> Executing: ["/usr/sbin/pkgutil", "--pkg-info-plist", "com.realvnc.vncserver.pkg"]
==> Deleting pkg symlinks and special files
==> Executing: ["/usr/bin/sudo", "-E", "--", "/usr/bin/xargs", "-0", "--", "/bin/rm", "--"]
==> rm: /etc/vnc/get_primary_ip4: No such file or directory
==> Deleting pkg directories
==> Executing: ["/usr/bin/stat", "-f", "%Of", "--", "/Library/vnc/VNC Chat.app/Contents/Resources/de.lproj"]
==> Executing: ["/usr/bin/sudo", "-E", "--", "/bin/chmod", "--", "777", "/Library/vnc/VNC Chat.app/Contents/Resources/de.lproj"]
==> sudo: unable to stat /etc/sudoers: No such file or directory
==> sudo: no valid sudoers sources found, quitting
==> sudo: unable to initialize policy plugin
==> Executing: ["/usr/bin/sudo", "-E", "--", "/bin/chmod", "--", "755", "/Library/vnc/VNC Chat.app/Contents/Resources/de.lproj"]
==> sudo: unable to stat /etc/sudoers: No such file or directory
==> sudo: no valid sudoers sources found, quitting
==> sudo: unable to initialize policy plugin
Error: Command failed to execute!
==> Failed command:
/usr/bin/sudo -E -- /bin/chmod -- 755 #<Pathname:/Library/vnc/VNC Chat.app/Contents/Resources/de.lproj>
==> Standard Output of failed command:
==> Standard Error of failed command:
sudo: unable to stat /etc/sudoers: No such file or directory
sudo: no valid sudoers sources found, quitting
sudo: unable to initialize policy plugin
==> Exit status of failed command:
#<Process::Status: pid 28466 exit 1>
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/system_command.rb:65:in `assert_success'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/system_command.rb:36:in `run!'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/system_command.rb:14:in `run'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/system_command.rb:18:in `run!'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/pkg.rb:104:in `ensure in with_full_permissions'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/pkg.rb:105:in `with_full_permissions'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/pkg.rb:32:in `block in uninstall'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/pkg.rb:29:in `each'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/pkg.rb:29:in `uninstall'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/artifact/uninstall_base.rb:191:in `block (2 levels) in uninstall_pkgutil'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/artifact/uninstall_base.rb:189:in `each'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/artifact/uninstall_base.rb:189:in `block in uninstall_pkgutil'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/artifact/uninstall_base.rb:188:in `each'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/artifact/uninstall_base.rb:188:in `uninstall_pkgutil'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/artifact/uninstall_base.rb:34:in `block (2 levels) in dispatch_uninstall_directives'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/artifact/uninstall_base.rb:32:in `each'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/artifact/uninstall_base.rb:32:in `block in dispatch_uninstall_directives'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/artifact/uninstall_base.rb:31:in `each'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/artifact/uninstall_base.rb:31:in `dispatch_uninstall_directives'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/artifact/uninstall.rb:7:in `uninstall_phase'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/installer.rb:349:in `block in uninstall_artifacts'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/installer.rb:346:in `each'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/installer.rb:346:in `uninstall_artifacts'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/installer.rb:331:in `uninstall'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli/uninstall.rb:20:in `block in run'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli/uninstall.rb:9:in `each'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli/uninstall.rb:9:in `run'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:115:in `run_command'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:158:in `process'
/usr/local/Homebrew/Library/Homebrew/cmd/cask.rb:8:in `cask'
/usr/local/Homebrew/Library/Homebrew/brew.rb:90:in `<main>'
Error: Kernel.exit
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:163:in `exit'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:163:in `rescue in process'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:149:in `process'
/usr/local/Homebrew/Library/Homebrew/cmd/cask.rb:8:in `cask'
/usr/local/Homebrew/Library/Homebrew/brew.rb:90:in `<main>'
```
#### Output of `brew cask doctor`
```
==> Homebrew-Cask Version
Homebrew-Cask 1.1.13-129-gc6c930174e
caskroom/homebrew-cask (git revision a45af; last commit 2017-04-25)
==> Homebrew-Cask Install Location
<NONE>
==> Homebrew-Cask Staging Location
/usr/local/Caskroom
==> Homebrew-Cask Cached Downloads
~/Library/Caches/Homebrew/Cask (1 files, 9.8MB)
==> Homebrew-Cask Taps:
/usr/local/Homebrew/Library/Taps/caskroom/homebrew-cask (3644 casks)
/usr/local/Homebrew/Library/Taps/caskroom/homebrew-fonts (1105 casks)
/usr/local/Homebrew/Library/Taps/caskroom/homebrew-versions (161 casks)
/usr/local/Homebrew/Library/Taps/homebrew/homebrew-boneyard (0 casks)
/usr/local/Homebrew/Library/Taps/homebrew/homebrew-core (0 casks)
/usr/local/Homebrew/Library/Taps/homebrew/homebrew-services (0 casks)
/usr/local/Homebrew/Library/Taps/knqyf263/homebrew-pet (0 casks)
/usr/local/Homebrew/Library/Taps/mpv-player/homebrew-mpv (0 casks)
/usr/local/Homebrew/Library/Taps/neovim/homebrew-neovim (0 casks)
==> Contents of $LOAD_PATH
/usr/local/Homebrew/Library/Homebrew/cask/lib
/usr/local/Homebrew/Library/Homebrew
/Library/Ruby/Site/2.0.0
/Library/Ruby/Site/2.0.0/x86_64-darwin16
/Library/Ruby/Site/2.0.0/universal-darwin16
/Library/Ruby/Site
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/vendor_ruby/2.0.0
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/vendor_ruby/2.0.0/x86_64-darwin16
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/vendor_ruby/2.0.0/universal-darwin16
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/vendor_ruby
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/x86_64-darwin16
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/universal-darwin16
==> Environment Variables
LANG="en_US.UTF-8"
LC_ALL="en_US.UTF-8"
LC_CTYPE="UTF-8"
PATH="~/bin:~/.pyenv/shims:/usr/local/bin:/usr/local/sbin:/usr/bin:/bin:/usr/sbin:/sbin:/opt/X11/bin:/Applications/Wireshark.app/Contents/MacOS:~/Library/Android/sdk/platform-tools:/usr/local/opt/go/libexec/bin:~/.go/bin:/usr/local/Homebrew/Library/Taps/homebrew/homebrew-services/cmd:/usr/local/Homebrew/Library/Homebrew/shims/scm"
SHELL="/usr/local/bin/zsh"
```
3,509 | 13,722,633,321 | IssuesEvent | 2020-10-03 04:48:32 | tgstation/tgstation | https://api.github.com/repos/tgstation/tgstation | closed | Spurious Travis failures in asset cache initialization | BYOND Issue Continuous Integration Maintainability/Hinders improvements Runtime | Happens at random
```
Initialized Weather subsystem within 0 seconds!
Initialized Atmospherics subsystem within 6.3 seconds!
[08:16:30] Runtime in atoms_movable.dm,420: No valid destination passed into forceMove
proc name: forceMove (/atom/movable/proc/forceMove)
src: the hourglass countdown (/obj/effect/countdown/hourglass)
src.loc: the hourglass (/obj/item/hourglass)
call stack:
the hourglass countdown (/obj/effect/countdown/hourglass): forceMove(null)
the hourglass countdown (/obj/effect/countdown/hourglass): attach(the hourglass (/obj/item/hourglass))
the hourglass countdown (/obj/effect/countdown/hourglass): Initialize(0)
Atoms (/datum/controller/subsystem/atoms): InitAtom(the hourglass countdown (/obj/effect/countdown/hourglass), /list (/list))
the hourglass countdown (/obj/effect/countdown/hourglass): New(0)
the hourglass (/obj/item/hourglass): Initialize(0)
Atoms (/datum/controller/subsystem/atoms): InitAtom(the hourglass (/obj/item/hourglass), /list (/list))
the hourglass (/obj/item/hourglass): New(0)
vending (/datum/asset/spritesheet/vending): register()
vending (/datum/asset/spritesheet/vending): New()
get asset datum(/datum/asset/spritesheet/vendi... (/datum/asset/spritesheet/vending))
Assets (/datum/controller/subsystem/assets): Initialize(297859)
Master (/datum/controller/master): Initialize(10, 0, 1)
Initialized Assets subsystem within 4.8 seconds!
Initialized Icon Smoothing subsystem within 4.1 seconds!
```
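The runtime above comes from `attach()` handing the parent's `loc` straight to `forceMove` while the parent is still mid-initialization. An illustrative Python sketch of the null-guard that would avoid the spurious failure (hypothetical — the actual code is DreamMaker, not Python):

```python
# Hypothetical sketch (not actual DreamMaker code): attach() passes the
# parent's loc straight to force_move, so a null loc during init becomes
# "No valid destination passed into forceMove". A guard avoids that.
class Countdown:
    def __init__(self):
        self.loc = None

    def force_move(self, dest):
        if dest is None:
            raise ValueError("No valid destination passed into forceMove")
        self.loc = dest

    def attach(self, parent):
        dest = getattr(parent, "loc", None)
        if dest is None:
            return  # parent not placed yet: skip the move instead of runtiming
        self.force_move(dest)
```

Whether skipping, deferring, or ordering initialization differently is the right fix is a design call; the sketch only shows where the null reaches `forceMove`.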
5,785 | 30,649,098,431 | IssuesEvent | 2023-07-25 07:39:34 | jupyter-naas/awesome-notebooks | https://api.github.com/repos/jupyter-naas/awesome-notebooks | closed | Python - Check if string is number | templates maintainer | This notebook will check if a string is a number and show how that can be useful for organizations. It will help identify whether a string is a number or not.
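The check that notebook describes is commonly done by attempting a `float()` conversion, since string methods like `str.isdigit()` miss negatives, decimals, and scientific notation. A minimal sketch:

```python
# Check whether a string represents a number. str.isdigit() only covers
# unsigned integers, so falling back to float() is more robust.
def is_number(s: str) -> bool:
    try:
        float(s)
        return True
    except ValueError:
        return False
```

One caveat: `float()` also accepts strings like `"nan"` and `"inf"`, so callers that want strictly finite numbers need an extra check.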
1,530 | 6,572,221,478 | IssuesEvent | 2017-09-11 00:13:57 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | ec2_vpc_route_table should infer "lookup: id" when "route_table_id" is provided or "lookup" should be mandatory | affects_2.1 aws bug_report cloud waiting_on_maintainer | ##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
ec2_vpc_route_table module
##### ANSIBLE VERSION
```
ansible 2.1.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
No modifications to default configuration
##### OS / ENVIRONMENT
Host OS is Arch Linux, I'm building infrastructure in AWS using boto version 2.39.0, and aws-cli version 1.10.17.
##### SUMMARY
When modifying a route table and specifying both tags and a route table ID, the module defaults to identifying the route table by tags, as specified in the documentation. However, it is a reasonable use case to want to update tags, subnet associations, and routes in a single operation. If you do not pass `lookup: id` to the module, it ignores the presence of the `route_table_id` parameter and attempts to match the route table based solely on the new tags; this fails, and the result is that a new route table is created from the parameters you specified. Either `lookup` should be required, or `route_table_id` should imply `lookup: id`.
##### STEPS TO REPRODUCE
Using the playbook pasted below, the module will create a new route table with those properties instead of updating the route table specified in `route_table_id`. Adding `lookup: id` gives the expected behaviour.
```
- name: Configure Public Internet routing
ec2_vpc_route_table:
vpc_id: "{{ VpcId }}"
region: "{{ Region }}"
route_table_id: "{{ RouteTableId }}"
tags:
Name: PublicManagementRoute
subnets:
- "{{ subnet-id }}"
routes:
- dest: 0.0.0.0/0
gateway_id: "{{ gateway_id }}"
register: pub_int_route
```
##### EXPECTED RESULTS
I expected the route table specified in `route_table_id` to be updated with the gateway, subnet association, and default route provided
##### ACTUAL RESULTS
A new route table was created
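The inference the report asks for ("`route_table_id` should imply `lookup: id`") is straightforward to express. A hypothetical Python sketch of that parameter resolution (illustrative only — not the actual module code):

```python
# Hypothetical sketch of the lookup inference the report requests:
# an explicit lookup wins, an explicit route table ID implies lookup by
# ID, and tags are only used as a last resort.
def resolve_lookup(lookup=None, route_table_id=None, tags=None):
    if lookup is not None:
        return lookup
    if route_table_id is not None:
        return "id"  # infer: an explicit ID beats tag matching
    if tags:
        return "tag"
    raise ValueError("need lookup, route_table_id, or tags")
```

Under this rule, the playbook above would update the existing table even without `lookup: id`, while still letting users force tag matching explicitly.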
| True | ec2_vpc_route_table should infer "lookup: id" when "route_table_id" is provided or "lookup" should be mandatory - ##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
ec2_vpc_route_table module
##### ANSIBLE VERSION
```
ansible 2.1.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
No modifications to default configuration
##### OS / ENVIRONMENT
Host OS is Arch Linux, I'm building infrastructure in AWS using boto version 2.39.0, and aws-cli version 1.10.17.
##### SUMMARY
When modifying a route table and specifying both tags and a route table ID, the module defaults to identifying the route table by tags, as specified in the documentation. However, it is a reasonable use-case to want to update tags, subnet associations, and routes in a single operation. If you do not provide "lookup: id" to the module, it ignores the presence of the route_table_id parameter and attempts to match the route table based solely on the new tags; this match fails, and the result is that a new route table is created from the parameters you have specified. Either "lookup" should be required, or route_table_id should imply "lookup: id".
##### STEPS TO REPRODUCE
Using the playbook pasted below, the module will create a new route table with those properties instead of updating the route table specified in route_table_id. Adding "lookup: id" gives expected behaviour.
```
- name: Configure Public Internet routing
ec2_vpc_route_table:
vpc_id: "{{ VpcId }}"
region: "{{ Region }}"
route_table_id: "{{ RouteTableId }}"
tags:
Name: PublicManagementRoute
subnets:
- "{{ subnet-id }}"
routes:
- dest: 0.0.0.0/0
gateway_id: "{{ gateway_id }}"
register: pub_int_route
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
I expected the route table specified in route_table_id to be updated with the gateway, subnet association, and default route provided
##### ACTUAL RESULTS
A new route table was created
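As the report notes, adding `lookup: id` gives the expected behaviour. A sketch of the adjusted task (same placeholder variables as the playbook above; the `lookup` line is the only functional change, and the subnet variable is written with an underscore since hyphens are not valid in Jinja variable names):

```yaml
- name: Configure Public Internet routing
  ec2_vpc_route_table:
    vpc_id: "{{ VpcId }}"
    region: "{{ Region }}"
    route_table_id: "{{ RouteTableId }}"
    lookup: id            # match the route table by ID instead of by tags
    tags:
      Name: PublicManagementRoute
    subnets:
      - "{{ subnet_id }}"
    routes:
      - dest: 0.0.0.0/0
        gateway_id: "{{ gateway_id }}"
  register: pub_int_route
```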
3,501 | 13,655,597,605 | IssuesEvent | 2020-09-27 23:14:48 | amyjko/faculty | https://api.github.com/repos/amyjko/faculty | closed | CER FAQ: Integrate into site | maintainability

Move off the Gist. The formatting is inconsistent, the translation from GitHub markdown isn't standardized, and I can't easily file issues on the file. The barrier to this is how to represent it; it's currently GitHub flavored Markdown, and translating it to JSX would lower modifiability. One option would be to lean on GitHub's parsing service to render.
567 | 4,044,282,700 | IssuesEvent | 2016-05-21 07:19:06 | duckduckgo/zeroclickinfo-spice | https://api.github.com/repos/duckduckgo/zeroclickinfo-spice | closed | WordMap: Needs to ensure remainder is only 1 word, or remove "similar to" trigger | Low-Hanging Fruit Maintainer Input Requested Relevancy Triggering

This instant answer triggers on any query that starts with "similar to" and often returns irrelevant results for queries that aren't searching for words/terms.
We need to reduce the IA to only working on single word queries or tighten the triggering to queries that more clearly indicate they're looking for similar _words_.
e.g. https://duckduckgo.com/?q=similar+to+invisible+fence+collar&ia=answer
------
IA Page: http://duck.co/ia/view/word_map
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @twinword
70,648 | 30,704,311,259 | IssuesEvent | 2023-07-27 04:08:49 | hashicorp/terraform-provider-azurerm | https://api.github.com/repos/hashicorp/terraform-provider-azurerm | closed | Importing azurerm_cdn_frontdoor_security_policy will force recreation due to casing changes in cdn_frontdoor_firewall_policy_id and cdn_frontdoor_domain_id | service/cdn v/3.x

### Is there an existing issue for this?
- [X] I have searched the existing issues
### Community Note
<!--- Please keep this note for the community --->
* Please vote on this issue by adding a :thumbsup: [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment and review the [contribution guide](https://github.com/hashicorp/terraform-provider-azurerm/blob/main/contributing/README.md) to help.
<!--- Thank you for keeping this note for the community --->
### Terraform Version
1.5.3
### AzureRM Provider Version
3.66.0
### Affected Resource(s)/Data Source(s)
azurerm_cdn_frontdoor_security_policy
### Terraform Configuration Files
```hcl
import {
to = azurerm_cdn_frontdoor_firewall_policy.this
id = ""
}
import {
to = azurerm_cdn_frontdoor_security_policy.this
id = ""
}
import {
to = azurerm_cdn_frontdoor_profile.this
id = ""
}
import {
to = azurerm_cdn_frontdoor_endpoint.this
id = ""
}
import {
to = azurerm_cdn_frontdoor_custom_domain.this
id = ""
}
resource "azurerm_cdn_frontdoor_security_policy" "this" {
cdn_frontdoor_profile_id = azurerm_cdn_frontdoor_profile.this.id
name = "${azurerm_cdn_frontdoor_firewall_policy.this.name}-securityPolicy"
security_policies {
firewall {
cdn_frontdoor_firewall_policy_id = azurerm_cdn_frontdoor_firewall_policy.this.id
association {
patterns_to_match = ["/*"]
domain {
cdn_frontdoor_domain_id = azurerm_cdn_frontdoor_custom_domain.this.id
}
domain {
cdn_frontdoor_domain_id = azurerm_cdn_frontdoor_endpoint.this.id
}
}
}
}
}
```
### Debug Output/Panic Output
```shell
# azurerm_cdn_frontdoor_security_policy.this must be replaced
# (imported from "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxx/resourceGroups/xxxx/providers/Microsoft.Cdn/profiles/xxxx/securityPolicies/xxxx")
# Warning: this will destroy the imported resource
-/+ resource "azurerm_cdn_frontdoor_security_policy" "this" {
cdn_frontdoor_profile_id = "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxx/resourceGroups/xxxx/providers/Microsoft.Cdn/profiles/xxxx"
~ id = "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxx/resourceGroups/xxxx/providers/Microsoft.Cdn/profiles/xxxx/securityPolicies/xxxx" -> (known after apply)
name = "xxxx"
~ security_policies {
~ firewall {
~ cdn_frontdoor_firewall_policy_id = "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxx/resourceGroups/xxxx/providers/microsoft.network/frontdoorWebApplicationFirewallPolicies/xxxx" -> "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxx/resourceGroups/xxxx/providers/Microsoft.Network/frontDoorWebApplicationFirewallPolicies/xxxx" # forces replacement
~ association {
patterns_to_match = [
"/*",
]
~ domain {
~ active = true -> (known after apply)
~ cdn_frontdoor_domain_id = "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxx/resourcegroups/xxxx/providers/Microsoft.Cdn/profiles/xxxx/customdomains/xxxx" -> "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxx/resourceGroups/xxxx/providers/Microsoft.Cdn/profiles/xxxx/customDomains/xxxx" # forces replacement
}
~ domain {
~ active = true -> (known after apply)
~ cdn_frontdoor_domain_id = "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxx/resourcegroups/xxxx/providers/Microsoft.Cdn/profiles/xxxx/afdendpoints/xxxx" -> "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxx/resourceGroups/xxxx/providers/Microsoft.Cdn/profiles/xxxx/afdEndpoints/xxxx" # forces replacement
}
}
}
}
}
```
### Expected Behaviour
The azurerm_cdn_frontdoor_security_policy resource should not need to be recreated after importing.
### Actual Behaviour

When importing an azurerm_cdn_frontdoor_security_policy that was created outside of Terraform, the terraform plan will show that it needs to be recreated due to changes in the casing of cdn_frontdoor_firewall_policy_id and cdn_frontdoor_domain_id.
Since there is no update function in azurerm for azurerm_cdn_frontdoor_security_policy these changes will force a recreation. According to https://github.com/hashicorp/terraform-provider-azurerm/issues/19761#issuecomment-1381010226 the API doesn't support an Update operation. Is there a reason why a [patch update](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/services/cdn/mgmt/2021-06-01/cdn#SecurityPoliciesClient.Patch) can't be done (I am not familiar with go/Azure API).
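Until the provider normalises ID casing, one possible stop-gap — an assumption on my part, not something suggested in the report — is Terraform's `ignore_changes` lifecycle argument, at the cost of also ignoring genuine drift in that block:

```hcl
resource "azurerm_cdn_frontdoor_security_policy" "this" {
  # ... arguments as in the configuration above ...

  lifecycle {
    # Suppresses the casing-only diff on the imported IDs. Genuine
    # changes to security_policies will also be ignored, so use with care.
    ignore_changes = [security_policies]
  }
}
```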
### Steps to Reproduce
1. Create a CDN FrontDoor Profile, Custom Domain, Endpoint that is associated with one firewall policy in the Azure Portal or by using the Front Door (classic) to Standard/Premium tier migration tool.
2. terraform plan
### Important Factoids
_No response_
### References
https://github.com/hashicorp/terraform-provider-azurerm/issues/19761#issuecomment-1381010226
163,984 | 20,364,308,324 | IssuesEvent | 2022-02-21 02:31:53 | michaeldotson/outerspace-vue | https://api.github.com/repos/michaeldotson/outerspace-vue | opened | CVE-2021-3664 (Medium) detected in url-parse-1.4.7.tgz | security vulnerability

## CVE-2021-3664 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>url-parse-1.4.7.tgz</b></summary>
<p>Small footprint URL parser that works seamlessly across Node.js and browser environments</p>
<p>Library home page: <a href="https://registry.npmjs.org/url-parse/-/url-parse-1.4.7.tgz">https://registry.npmjs.org/url-parse/-/url-parse-1.4.7.tgz</a></p>
<p>Path to dependency file: /outerspace-vue/package.json</p>
<p>Path to vulnerable library: /node_modules/url-parse/package.json</p>
<p>
Dependency Hierarchy:
- cli-service-3.7.0.tgz (Root Library)
- webpack-dev-server-3.3.1.tgz
- sockjs-client-1.3.0.tgz
- :x: **url-parse-1.4.7.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
url-parse is vulnerable to URL Redirection to Untrusted Site
<p>Publish Date: 2021-07-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3664>CVE-2021-3664</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-3664">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-3664</a></p>
<p>Release Date: 2021-07-26</p>
<p>Fix Resolution (url-parse): 1.5.2</p>
<p>Direct dependency fix Resolution (@vue/cli-service): 3.8.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>url-parse-1.4.7.tgz</b></p></summary>
<p>Small footprint URL parser that works seamlessly across Node.js and browser environments</p>
<p>Library home page: <a href="https://registry.npmjs.org/url-parse/-/url-parse-1.4.7.tgz">https://registry.npmjs.org/url-parse/-/url-parse-1.4.7.tgz</a></p>
<p>Path to dependency file: /outerspace-vue/package.json</p>
<p>Path to vulnerable library: /node_modules/url-parse/package.json</p>
<p>
Dependency Hierarchy:
- cli-service-3.7.0.tgz (Root Library)
- webpack-dev-server-3.3.1.tgz
- sockjs-client-1.3.0.tgz
- :x: **url-parse-1.4.7.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
url-parse is vulnerable to URL Redirection to Untrusted Site
<p>Publish Date: 2021-07-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3664>CVE-2021-3664</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-3664">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-3664</a></p>
<p>Release Date: 2021-07-26</p>
<p>Fix Resolution (url-parse): 1.5.2</p>
<p>Direct dependency fix Resolution (@vue/cli-service): 3.8.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve medium detected in url parse tgz cve medium severity vulnerability vulnerable library url parse tgz small footprint url parser that works seamlessly across node js and browser environments library home page a href path to dependency file outerspace vue package json path to vulnerable library node modules url parse package json dependency hierarchy cli service tgz root library webpack dev server tgz sockjs client tgz x url parse tgz vulnerable library vulnerability details url parse is vulnerable to url redirection to untrusted site publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution url parse direct dependency fix resolution vue cli service step up your open source security game with whitesource | 0 |
1,627 | 6,572,656,189 | IssuesEvent | 2017-09-11 04:07:57 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | osx_defaults - 'Type mismatch' error if default previously set with different type | affects_2.2 bug_report waiting_on_maintainer

##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
osx_defaults
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.2.0.0
config file =
configured module search path = Default w/o overrides
```
##### CONFIGURATION
```
[defaults]
inventory = hosts/hosts
vault_password_file = ~/.ansible/vault/password.txt
retry_files_enabled = False
retry_files_save_path = ~/.ansible/retry
remote_user = root
nocows = 0
[ssh_connection]
pipelining = True
```
##### OS / ENVIRONMENT
macOS 10.12.1 (16B2555) managing the same
##### SUMMARY
When using the Ansible osx_defaults module to set a default that is already set but with a different type, an error is generated. Running `defaults write ...` from the command line works fine in these scenarios to reset to a new type.
##### STEPS TO REPRODUCE
Variables file prefs.yaml:
```
mac_prefs_misc:
- {
domain: com.apple.Terminal,
key: FocusFollowsMouse,
type: bool,
value: true
}
```
Playbook:
```
- include_vars: prefs.yaml
- name: set various osx defaults prefs
osx_defaults:
domain: "{{ item.domain }}"
key: "{{ item.key }}"
type: "{{ item.type }}"
value: "{{ item.value }}"
state: present
with_items: "{{ mac_prefs_misc }}"
```
##### EXPECTED RESULTS
Default is set
##### ACTUAL RESULTS
Default is not set to new value because the default was previously set with a different type.
```
failed: [mini1] (item={u'domain': u'com.apple.Terminal', u'type': u'bool', u'key': u'FocusFollowsMouse', u'value': True}) => {"failed": true, "item": {"domain": "com.apple.Terminal", "key": "FocusFollowsMouse", "type": "bool", "value": true}, "msg": "Type mismatch. Type in defaults: str"}
```
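A possible workaround, not taken from the report: since the module also supports `state: absent`, the mistyped key can be deleted first and then written again with the intended type (domain and key from the reproduction above):

```yaml
- name: remove the default that was stored with the wrong type
  osx_defaults:
    domain: com.apple.Terminal
    key: FocusFollowsMouse
    state: absent

- name: write the default again with the intended boolean type
  osx_defaults:
    domain: com.apple.Terminal
    key: FocusFollowsMouse
    type: bool
    value: true
    state: present
```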
3,893 | 17,315,784,169 | IssuesEvent | 2021-07-27 05:47:52 | imdhemy/laravel-in-app-purchases | https://api.github.com/repos/imdhemy/laravel-in-app-purchases | closed | status DID RENEW when I should receive INITIAL BUY | bug maintainer_replied | **Describe the bug**
The bug is related to appstore. When I test the subscription, I receive the status DID RENEW when I should receive INITIAL BUY
**To Reproduce**
Steps to reproduce the behavior:
Purchase the subscription using sandbox user.
Check the server to server notification log
**Expected behavior**
I am expecting that AppstoreInitialBuyListener.php should receive the data when Initial buy happens.
**Full trace**
If applicable, add full error trace to the bug.
**Additional context**
Add any other context about the problem here.
| True | status DID RENEW when I should receive INITIAL BUY - **Describe the bug**
The bug is related to appstore. When I test the subscription, I receive the status DID RENEW when I should receive INITIAL BUY
**To Reproduce**
Steps to reproduce the behavior:
Purchase the subscription using sandbox user.
Check the server to server notification log
**Expected behavior**
I am expecting that AppstoreInitialBuyListener.php should receive the data when Initial buy happens.
**Full trace**
If applicable, add full error trace to the bug.
**Additional context**
Add any other context about the problem here.
| main | status did renew when i should receive initial buy describe the bug the bug is related to appstore when i test the subscription i receive the status did renew when i should receive initial buy to reproduce steps to reproduce the behavior purchase the subscription using sandbox user check the server to server notification log expected behavior i am expecting that appstoreinitialbuylistener php should receive the data when initial buy happens full trace if applicable add full error trace to the bug additional context add any other context about the problem here | 1 |
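The complaint above reduces to how a server-to-server notification payload is routed to a listener by its `notification_type` field. A hedged, language-neutral sketch of such a dispatcher (the listener names mirror the issue text; this is not the package's actual class map):

```python
# Hypothetical routing table; names follow the issue, not the real package API.
LISTENERS = {
    "INITIAL_BUY": "AppstoreInitialBuyListener",
    "DID_RENEW": "AppstoreDidRenewListener",
}


def route_notification(payload):
    """Pick a listener from the notification_type of an App Store S2S payload."""
    kind = payload.get("notification_type")
    listener = LISTENERS.get(kind)
    if listener is None:
        raise ValueError(f"unhandled notification_type: {kind!r}")
    return listener
```

Under this model, a sandbox payload carrying `DID_RENEW` can never reach the initial-buy listener, which matches the reporter's observation that the wrong event fires on the first sandbox purchase.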
5,037 | 25,840,558,563 | IssuesEvent | 2022-12-12 23:48:22 | ElasticPerch/websocket | https://api.github.com/repos/ElasticPerch/websocket | opened | [bug] Websocket subprotocol is not chosen on client preferance | bug waiting on new maintainer | From websocket created by [KSDaemon](https://github.com/KSDaemon): gorilla/websocket#822
**Describe the bug**
From the Websocket RFC6455:
For client side:
> |Sec-WebSocket-Protocol| header field, with a list of values indicating which protocols the client would like to speak, ordered by preference.
And for server side:
> Either a single value representing the subprotocol the server is ready to use or null. The value chosen MUST be derived from the client's handshake, specifically by selecting one of the values from the |Sec-WebSocket-Protocol| field that the server is willing to use for this connection (if any).
So if the client provides a few options for the subprotocol, the server should choose the first one it supports.
Right now, if the client provides a few options, the lib chooses the first one the server supports (and not the first one from the client's list).
e.g. if the client sends Sec-WebSocket-Protocol: wamp.2.cbor, wamp.2.json and the server supports wamp.2.json, wamp.2.cbor, then wamp.2.json will be chosen, but not wamp.2.cbor as it should be.
> A clear and concise description of what the bug is.
Lib version: all :)
**Code Snippets**
The problem is in `server.go`: selectSubprotocol func:
```go
clientProtocols := Subprotocols(r)
for _, serverProtocol := range u.Subprotocols {
	for _, clientProtocol := range clientProtocols {
		if clientProtocol == serverProtocol {
			return clientProtocol
		}
	}
}
```
should be changed to:
```go
clientProtocols := Subprotocols(r)
for _, clientProtocol := range clientProtocols {
	for _, serverProtocol := range u.Subprotocols {
		if clientProtocol == serverProtocol {
			return clientProtocol
		}
	}
}
```
| True | [bug] Websocket subprotocol is not chosen on client preferance - From websocket created by [KSDaemon](https://github.com/KSDaemon): gorilla/websocket#822
**Describe the bug**
From the Websocket RFC6455:
For client side:
> |Sec-WebSocket-Protocol| header field, with a list of values indicating which protocols the client would like to speak, ordered by preference.
And for server side:
> Either a single value representing the subprotocol the server is ready to use or null. The value chosen MUST be derived from the client's handshake, specifically by selecting one of the values from the |Sec-WebSocket-Protocol| field that the server is willing to use for this connection (if any).
So if the client provides a few options for the subprotocol, the server should choose the first one it supports.
Right now, if the client provides a few options, the lib chooses the first one the server supports (and not the first one from the client's list).
e.g. if the client sends Sec-WebSocket-Protocol: wamp.2.cbor, wamp.2.json and the server supports wamp.2.json, wamp.2.cbor, then wamp.2.json will be chosen, but not wamp.2.cbor as it should be.
> A clear and concise description of what the bug is.
Lib version: all :)
**Code Snippets**
The problem is in `server.go`: selectSubprotocol func:
```go
clientProtocols := Subprotocols(r)
for _, serverProtocol := range u.Subprotocols {
	for _, clientProtocol := range clientProtocols {
		if clientProtocol == serverProtocol {
			return clientProtocol
		}
	}
}
```
should be changed to:
```go
clientProtocols := Subprotocols(r)
for _, clientProtocol := range clientProtocols {
	for _, serverProtocol := range u.Subprotocols {
		if clientProtocol == serverProtocol {
			return clientProtocol
		}
	}
}
```
| main | websocket subprotocol is not chosen on client preferance from websocket created by gorilla websocket describe the bug from the websocket for client side sec websocket protocol header field with a list of values indicating which protocols the client would like to speak ordered by preference and for server side either a single value representing the subprotocol the server is ready to use or null the value chosen must be derived from the client s handshake specifically by selecting one of the values from the sec websocket protocol field that the server is willing to use for this connection if any so if the client provides a few options for subprotocol the server should choose the first one it supports right now if client provides a few options lib choose the first one it supports and not the first one from the client e g so if the client sends sec websocket protocol wamp cbor wamp json and server supports wamp json wamp cbor then wamp json will be chosen but not wamp cbor as it should be a clear and concise description of what the bug is lib version all code snippets the problem is in server go selectsubprotocol func go clientprotocols subprotocols r for serverprotocol range u subprotocols for clientprotocol range clientprotocols if clientprotocol serverprotocol return clientprotocol should be changed to go clientprotocols subprotocols r for clientprotocol range clientprotocols for serverprotocol range u subprotocols if clientprotocol serverprotocol return clientprotocol | 1 |
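The loop-order swap proposed in the row above can be checked in isolation. Below is a language-neutral sketch of the corrected selection (not the gorilla/websocket source), assuming the `Sec-WebSocket-Protocol` header has already been split into a list in client preference order:

```python
def select_subprotocol(client_protocols, server_protocols):
    """Pick a subprotocol honoring the client's preference order (RFC 6455)."""
    for client_protocol in client_protocols:  # outer loop: client order wins
        if client_protocol in server_protocols:
            return client_protocol
    return None  # no shared subprotocol


# The issue's own example: client prefers cbor, server lists json first.
chosen = select_subprotocol(
    ["wamp.2.cbor", "wamp.2.json"],
    ["wamp.2.json", "wamp.2.cbor"],
)
```

With the client list on the outer loop, `wamp.2.cbor` is chosen, as the RFC requires; with the original (server-outer) ordering, `wamp.2.json` would have been returned instead.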
38,012 | 18,882,081,406 | IssuesEvent | 2021-11-15 00:01:51 | umbraco/Umbraco-CMS | https://api.github.com/repos/umbraco/Umbraco-CMS | closed | DatabaseIntegrityCheck runs automatically in the background and locks the content tree | state/needs-investigation category/performance status/stale | A brief description of the issue goes here.
DatabaseIntegrityCheck runs in the background on startup and locks the content tree.
## Umbraco version
I am seeing this issue on Umbraco version: 8.9.1
Reproduction
------------
If you're filing a bug, please describe how to reproduce it. Include as much
relevant information as possible, such as:
### Bug summary
DatabaseIntegrityCheck runs in the background on startup and locks the content tree. The backoffice content and media trees won't load
### Specifics
### Steps to reproduce
Start umbraco with a large number of content items.
Log into backoffice.
### Expected result
Content tree is not locked by background processes
### Actual result
Content tree is locked. Unable to edit content
| True | DatabaseIntegrityCheck runs automatically in the background and locks the content tree - A brief description of the issue goes here.
DatabaseIntegrityCheck runs in the background on startup and locks the content tree.
## Umbraco version
I am seeing this issue on Umbraco version: 8.9.1
Reproduction
------------
If you're filing a bug, please describe how to reproduce it. Include as much
relevant information as possible, such as:
### Bug summary
DatabaseIntegrityCheck runs in the background on startup and locks the content tree. The backoffice content and media trees won't load
### Specifics
### Steps to reproduce
Start umbraco with a large number of content items.
Log into backoffice.
### Expected result
Content tree is not locked by background processes
### Actual result
Content tree is locked. Unable to edit content
| non_main | databaseintegritycheck runs automatically in the background and locks the content tree a brief description of the issue goes here databaseintegritycheck runs in the background on startup and locks the content tree umbraco version i am seeing this issue on umbraco version reproduction if you re filing a bug please describe how to reproduce it include as much relevant information as possible such as bug summary databaseintegritycheck runs in the background on startup and locks the content tree the backoffice content and media trees won t load specifics steps to reproduce start umbraco with a large number of content items log into backoffice expected result content tree is not locked by background processes actual result content tree is locked unable to edit content | 0 |
3,260 | 12,413,826,841 | IssuesEvent | 2020-05-22 13:28:36 | ansible/ansible | https://api.github.com/repos/ansible/ansible | closed | Bug related to IOS download on the cisco WLC | affects_2.10 aireos bug cisco collection collection:community.network has_pr module needs_collection_redirect needs_info needs_maintainer needs_template networking support:community | Please raise issues via the [new interface](https://github.com/ansible/ansible/issues/new/choose)
---
- name: run show sysinfo on remote devices
  aireos_command:
    commands:
      - command: "transfer download datatype ap-image"
      - command: "transfer download ap-images mode tftp"
      - command: "transfer download ap-images serverIp x.x.x.x"
      - command: "transfer download ap-images imagePath AIR-AP1830-K9-ME-8-5-151-0/"
      - command: "transfer download start"
        prompt: "(y/N)"
        answer: 'y'
Now the problem is that the device asks twice for input after the command "transfer download start": once asking about auto-reboot after completion, and a second time asking "are you sure you want to start". If I run the command above, the first question gets answered with yes, but for the second one the prompt option is not working. The module takes a prompt and answer just once, however this is not how the Cisco WLC controller works; it asks twice before starting. Kindly fix this issue.
Run time output with code shown above
"Do you want to Auto-Reboot after Complete? (y/N) y",
"Auto-Reboot check ENABLED!!",
"System will reboot within 5secs, ONLY upon Download COMPLETION to all Aps.",
"Configuration will be Saved before Auto-Reboot",
"",
"DOWNLOAD may take some time......",
"Are you sure you want to start? (y/N)",
=========================================
Output of debug with below configuration is also showing below
---
- name: run show sysinfo on remote devices
  aireos_command:
    commands:
      - command: "transfer download datatype ap-image"
      - command: "transfer download ap-images mode tftp"
      - command: "transfer download ap-images serverIp x.x.x.x"
      - command: "transfer download ap-images imagePath AIR-AP1830-K9-ME-8-5-151-0/"
      - command: "transfer download start"
        prompt: "(y/N)"
        answer: 'y'
        prompt: "(y/N)"
        answer: 'y'
------ Debug output
"Do you want to Auto-Reboot after Complete? (y/N) y",
"Auto-Reboot check ENABLED!!",
"System will reboot within 5secs, ONLY upon Download COMPLETION to all Aps.",
"Configuration will be Saved before Auto-Reboot",
"",
"DOWNLOAD may take some time......",
"Are you sure you want to start? (y/N)",
"",
"",
"Transfer Canceled"
| True | Bug related to IOS download on the cisco WLC - Please raise issues via the [new interface](https://github.com/ansible/ansible/issues/new/choose)
---
- name: run show sysinfo on remote devices
  aireos_command:
    commands:
      - command: "transfer download datatype ap-image"
      - command: "transfer download ap-images mode tftp"
      - command: "transfer download ap-images serverIp x.x.x.x"
      - command: "transfer download ap-images imagePath AIR-AP1830-K9-ME-8-5-151-0/"
      - command: "transfer download start"
        prompt: "(y/N)"
        answer: 'y'
Now the problem is that the device asks twice for input after the command "transfer download start": once asking about auto-reboot after completion, and a second time asking "are you sure you want to start". If I run the command above, the first question gets answered with yes, but for the second one the prompt option is not working. The module takes a prompt and answer just once, however this is not how the Cisco WLC controller works; it asks twice before starting. Kindly fix this issue.
Run time output with code shown above
"Do you want to Auto-Reboot after Complete? (y/N) y",
"Auto-Reboot check ENABLED!!",
"System will reboot within 5secs, ONLY upon Download COMPLETION to all Aps.",
"Configuration will be Saved before Auto-Reboot",
"",
"DOWNLOAD may take some time......",
"Are you sure you want to start? (y/N)",
=========================================
Output of debug with below configuration is also showing below
---
- name: run show sysinfo on remote devices
  aireos_command:
    commands:
      - command: "transfer download datatype ap-image"
      - command: "transfer download ap-images mode tftp"
      - command: "transfer download ap-images serverIp x.x.x.x"
      - command: "transfer download ap-images imagePath AIR-AP1830-K9-ME-8-5-151-0/"
      - command: "transfer download start"
        prompt: "(y/N)"
        answer: 'y'
        prompt: "(y/N)"
        answer: 'y'
------ Debug output
"Do you want to Auto-Reboot after Complete? (y/N) y",
"Auto-Reboot check ENABLED!!",
"System will reboot within 5secs, ONLY upon Download COMPLETION to all Aps.",
"Configuration will be Saved before Auto-Reboot",
"",
"DOWNLOAD may take some time......",
"Are you sure you want to start? (y/N)",
"",
"",
"Transfer Canceled"
| main | bug related to ios download on the cisco wlc please raise issues via the name run show sysinfo on remote devices aireos command commands command transfer download datatype ap image command transfer download ap images mode tftp command transfer download ap images serverip x x x x command transfer download ap images imagepath air me command transfer download start prompt y n answer y now the problem is device asks twice for input after command transfer download start one time for asking auto reboot after complete and second one is are you sure you want to start now if i run above command for the first question i have answered the yes however for second the prompt option is not working it just takes the prompt and answer just once however this is not how the cisco wlc controller works it asks twice before starting kindly fix this issue run time output with code shown above do you want to auto reboot after complete y n y auto reboot check enabled system will reboot within only upon download completion to all aps configuration will be saved before auto reboot download may take some time are you sure you want to start y n output of debug with below configuration is also showing below name run show sysinfo on remote devices aireos command commands command transfer download datatype ap image command transfer download ap images mode tftp command transfer download ap images serverip x x x x command transfer download ap images imagepath air me command transfer download start prompt y n answer y prompt y n answer y debug output do you want to auto reboot after complete y n y auto reboot check enabled system will reboot within only upon download completion to all aps configuration will be saved before auto reboot download may take some time are you sure you want to start y n transfer canceled | 1 |
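The underlying mismatch in the row above is that a single `prompt`/`answer` pair is only consumed once, while the WLC emits the same `(y/N)` prompt twice. An expect-style loop that replies to every occurrence of a prompt, consuming answers in order, would be needed instead. A minimal sketch (hypothetical helper, not the `aireos_command` internals):

```python
def answer_prompts(device_output_lines, prompt, answers):
    """Reply to each occurrence of `prompt`, consuming answers in order (sketch)."""
    sent = []
    remaining = list(answers)
    for line in device_output_lines:
        if prompt in line and remaining:
            sent.append(remaining.pop(0))
    return sent


output = [
    "Do you want to Auto-Reboot after Complete? (y/N)",
    "DOWNLOAD may take some time......",
    "Are you sure you want to start? (y/N)",
]
replies = answer_prompts(output, "(y/N)", ["y", "y"])  # both prompts get an answer
```

With only one answer supplied, the second `(y/N)` prompt would go unanswered and the transfer would be canceled, which matches the reporter's debug output.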
4,760 | 24,525,946,015 | IssuesEvent | 2022-10-11 13:09:13 | centerofci/mathesar | https://api.github.com/repos/centerofci/mathesar | opened | Show the record summary in the linked record input for an FK filter condition when the page is loaded from scratch | type: enhancement work: backend status: ready restricted: maintainers | ## Steps to reproduce
1. Use the Library Management schema.
1. Create a new publisher. This way you have a publisher that you know does not have any publications.
1. Go to the Publications table page.
1. Add a filter condition. Specify Publisher "is equal to", and then use the record selector to select your newly-added publisher with no publications.
1. After selecting a publisher, observe that linked record input within the filter condition displays the record summary for that publisher. That's because the record selector passed that record summary up to the tabularData store on the table page. This is good.
1. Refresh the whole page.
1. Open the filters dropdown.
1. Expect to see that same record summary within the filter condition.
1. Instead observe that the linked record input only displays the id of the linked record -- not is record summary. This is bad.
## A similar scenario that already works
If you perform the steps above, but instead filter on a publisher that _has_ publications, then things are a bit better. The filter condition _does_ display the record summary for the publisher because the records response data contained that record summary data.
## Implementation
- @pavish and I chatted about this yesterday on a quick call. We would like for the records API to return the record summary data needed to render this record summary.
- The back end should look at the filter conditions, see that we're filtering on an FK column, see the value used within that filter condition, and then provide the necessary data for that record summary within the `preview_data` field that's already present on the API response.
- No schema changes are needed.
- No front end changes are needed because the front end is already making use of that `preview_data` when displaying record summaries within filter conditions.
| True | Show the record summary in the linked record input for an FK filter condition when the page is loaded from scratch - ## Steps to reproduce
1. Use the Library Management schema.
1. Create a new publisher. This way you have a publisher that you know does not have any publications.
1. Go to the Publications table page.
1. Add a filter condition. Specify Publisher "is equal to", and then use the record selector to select your newly-added publisher with no publications.
1. After selecting a publisher, observe that linked record input within the filter condition displays the record summary for that publisher. That's because the record selector passed that record summary up to the tabularData store on the table page. This is good.
1. Refresh the whole page.
1. Open the filters dropdown.
1. Expect to see that same record summary within the filter condition.
1. Instead observe that the linked record input only displays the id of the linked record -- not is record summary. This is bad.
## A similar scenario that already works
If you perform the steps above, but instead filter on a publisher that _has_ publications, then things are a bit better. The filter condition _does_ display the record summary for the publisher because the records response data contained that record summary data.
## Implementation
- @pavish and I chatted about this yesterday on a quick call. We would like for the records API to return the record summary data needed to render this record summary.
- The back end should look at the filter conditions, see that we're filtering on an FK column, see the value used within that filter condition, and then provide the necessary data for that record summary within the `preview_data` field that's already present on the API response.
- No schema changes are needed.
- No front end changes are needed because the front end is already making use of that `preview_data` when displaying record summaries within filter conditions.
| main | show the record summary in the linked record input for an fk filter condition when the page is loaded from scratch steps to reproduce use the library management schema create a new publisher this way you have a publisher that you know does not have any publications go to the publications table page add a filter condition specify publisher is equal to and then use the record selector to select your newly added publisher with no publications after selecting a publisher observe that linked record input within the filter condition displays the record summary for that publisher that s because the record selector passed that record summary up to the tabulardata store on the table page this is good refresh the whole page open the filters dropdown expect to see that same record summary within the filter condition instead observe that the linked record input only displays the id of the linked record not is record summary this is bad a similar scenario that already works if you perform the steps above but instead filter on a publisher that has publications then things are a bit better the filter condition does display the record summary for the publisher because the records response data contained that record summary data implementation pavish and i chatted about this yesterday on a quick call we would like for the records api to return the record summary data needed to render this record summary the back end should look at the filter conditions see that we re filtering on an fk column see the value used within that filter condition and then provide the necessary data for that record summary within the preview data field that s already present on the api response no schema changes are needed no front end changes are needed because the front end is already making use of that preview data when displaying record summaries within filter conditions | 1 |
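The back-end change described in the row above amounts to: scan the request's filter conditions, and for each condition on an FK column, look up a record summary for the filtered value and attach it to the existing `preview_data` field. A hedged sketch of that collection step (the data shapes and the `fetch_summary` lookup are hypothetical, not Mathesar's actual API):

```python
def collect_fk_preview_data(filters, fk_columns, fetch_summary):
    """Gather record summaries for FK values used in filter conditions (sketch)."""
    preview = {}
    for cond in filters:
        col = cond["column"]
        if col in fk_columns:
            value = cond["value"]
            # In the real back end this would be a DB query per linked record.
            preview.setdefault(col, {})[value] = fetch_summary(col, value)
    return preview


# Hypothetical summary lookup standing in for a database query.
summaries = {("publisher", 42): "New Publisher Inc."}
result = collect_fk_preview_data(
    [{"column": "publisher", "op": "eq", "value": 42}],
    fk_columns={"publisher"},
    fetch_summary=lambda col, val: summaries[(col, val)],
)
```

This covers the broken case: even when the filtered publisher has no matching publications (so no record rows carry its summary), the filter value itself still yields a summary entry.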
27,996 | 4,076,377,230 | IssuesEvent | 2016-05-29 21:23:11 | gr3ysky/cooperate | https://api.github.com/repos/gr3ysky/cooperate | closed | generate diagrams | design on-going task | Description: Create uml diagrams for design document.
Category: Design
Estimated Work: 2 weeks
Start Date: 14.03.2016 | 1.0 | generate diagrams - Description: Create uml diagrams for design document.
Category: Design
Estimated Work: 2 weeks
Start Date: 14.03.2016 | non_main | generate diagrams description create uml diagrams fro design document category design estimated work weeks start date | 0 |
4,646 | 24,070,368,926 | IssuesEvent | 2022-09-18 04:20:19 | bazelbuild/intellij | https://api.github.com/repos/bazelbuild/intellij | closed | Bazel syntax highlighting as a separate plugin | type: feature request product: IntelliJ topic: bazel awaiting-maintainer | ### Description of the feature request:
Having the ability to add Bazel syntax highlighting to .bzl/.bazel files without having to install the entire IntelliJ plugin.
### What underlying problem are you trying to solve with this feature?
Installing the entire intellij-bazel plugin pins you to an older version of IntelliJ, which is not desirable. There doesn't exist an IntelliJ plugin solely for syntax highlighting, so it would be amazing if you released just this portion independently.
### What operating system, Intellij IDE and programming languages are you using? Please provide specific versions.
_No response_
### Have you found anything relevant by searching the web?
_No response_
### Any other information, logs, or outputs that you want to share?
_No response_ | True | Bazel syntax highlighting as a separate plugin - ### Description of the feature request:
Having the ability to add Bazel syntax highlighting to .bzl/.bazel files without having to install the entire IntelliJ plugin.
### What underlying problem are you trying to solve with this feature?
Installing the entire intellij-bazel plugin pins you to an older version of IntelliJ, which is not desirable. There doesn't exist an IntelliJ plugin solely for syntax highlighting, so it would be amazing if you released just this portion independently.
### What operating system, Intellij IDE and programming languages are you using? Please provide specific versions.
_No response_
### Have you found anything relevant by searching the web?
_No response_
### Any other information, logs, or outputs that you want to share?
_No response_ | main | bazel syntax highlighting as a separate plugin description of the feature request having the ability to add bazel syntax highlighting to bzl bazel files without having to install the entire intellij plugin what underlying problem are you trying to solve with this feature installing the entire intellij bazel plugin pins you to an older version of intellij which is not desirable there doesn t exist an intellij plugin solely for syntax highlighting so it would be amazing if you released just this portion independently what operating system intellij ide and programming languages are you using please provide specific versions no response have you found anything relevant by searching the web no response any other information logs or outputs that you want to share no response | 1 |
3,470 | 13,312,448,523 | IssuesEvent | 2020-08-26 09:44:58 | digitalpardoe/set_icon | https://api.github.com/repos/digitalpardoe/set_icon | closed | Ability To Set File & Folder Icons | Improvement No Longer Maintained | Add the ability to change & update the icons of files and folders in addition to volumes.
| True | Ability To Set File & Folder Icons - Add the ability to change & update the icons of files and folders in addition to volumes.
| main | ability to set file folder icons add the ability to change update the icons of files and folders in addition to volumes | 1 |
13 | 2,490,015,162 | IssuesEvent | 2015-01-02 04:18:35 | bwapi/bwapi | https://api.github.com/repos/bwapi/bwapi | opened | Clean up Offsets.h | Priority-Low | Offsets.h is a mess. It needs to be cleaned up.
- **Readability:** It's not very clean and looking at it hurts my brain a little.
- **Type safety:** Right now it's difficult to identify the intended purpose of some things, is it a pointer to an integer or pointer to array of integers? Make use of std::array.
- **Structure extraction:** It is littered with structure definitions which should be extracted out of this file.
- **Direct access:** Use references to access data directly instead of dereferencing a pointer. This makes code look cleaner.
| 1.0 | Clean up Offsets.h - Offsets.h is a mess. It needs to be cleaned up.
- **Readability:** It's not very clean and looking at it hurts my brain a little.
- **Type safety:** Right now it's difficult to identify the intended purpose of some things, is it a pointer to an integer or pointer to array of integers? Make use of std::array.
- **Structure extraction:** It is littered with structure definitions which should be extracted out of this file.
- **Direct access:** Use references to access data directly instead of dereferencing a pointer. This makes code look cleaner.
| non_main | clean up offsets h offsets h is a mess it needs to be cleaned up readability it s not very clean and looking at it hurts my brain a little type safety right now it s difficult to identify the intended purpose of some things is it a pointer to an integer or pointer to array of integers make use of std array structure extraction it is littered with structure definitions which should be extracted out of this file direct access use references to access data directly instead of dereferencing a pointer this makes code look cleaner | 0 |
118,181 | 9,977,686,502 | IssuesEvent | 2019-07-09 17:55:25 | rancher/rio | https://api.github.com/repos/rancher/rio | closed | Create namespace on run command if it doesn't exist. | enhancement to-test | Version - v0.1.1-rc1
Steps:
1. rio run -n tnp/test1 nginx
Results: namespaces “tnp” not found. We should create the namespace if it doesn't exist. | 1.0 | Create namespace on run command if it doesn't exist. - Version - v0.1.1-rc1
Steps:
1. rio run -n tnp/test1 nginx
Results: namespaces “tnp” not found. We should create the namespace if it doesn't exist. | non_main | create namespace on run command if it doesn t exist version steps rio run n tnp nginx results namespaces “tnp” not found we should create the namespace if it doesn t exist | 0 |
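The requested behavior above is a plain get-or-create: instead of failing with "not found", `rio run -n tnp/test1 nginx` should create the namespace on demand. A minimal sketch of that logic (an in-memory set stands in for the real Kubernetes API call):

```python
def ensure_namespace(existing, name):
    """Create the namespace on demand instead of failing with 'not found' (sketch)."""
    if name not in existing:
        existing.add(name)  # in the real CLI this would be a Kubernetes API call
        return "created"
    return "exists"


namespaces = {"default"}
status = ensure_namespace(namespaces, "tnp")  # the run command would do this first
```

A second call with the same name is a no-op, so repeated `rio run` invocations against the same namespace stay idempotent.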
2,127 | 7,267,138,147 | IssuesEvent | 2018-02-20 02:45:27 | dgets/DANT2a | https://api.github.com/repos/dgets/DANT2a | closed | updateDisplay() not utilizing updateEntry() properly | enhancement maintainability wontfix | As can be seen in the following code snippet, `updateDisplay()` is not properly working with `updateEntry()` in order to take advantage of the capabilities. There may still be issues with `updateEntry()` not having all of the proper capabilities, as well. A quick walk-through should be able to discern this.
```
case EntryType.Entries.Alarm:
    clbAlarms.Items.Clear();
    foreach (EntryType.Alarm al in activeAlarms) {
        if (al.Running) {
            //clbAlarms.Items.Add(al.ActiveAt + " - " + al.Name, true);
            updateEntry(EntryType.Entries.Alarm, cntr);
        } else {
            clbAlarms.Items.Add(al.ActiveAt + " - " + al.Name, false);
        }
        cntr++;
    }
    cntr = 0;
    break;
``` | True | updateDisplay() not utilizing updateEntry() properly - As can be seen in the following code snippet, `updateDisplay()` is not properly working with `updateEntry()` in order to take advantage of the capabilities. There may still be issues with `updateEntry()` not having all of the proper capabilities, as well. A quick walk-through should be able to discern this.
```
case EntryType.Entries.Alarm:
    clbAlarms.Items.Clear();
    foreach (EntryType.Alarm al in activeAlarms) {
        if (al.Running) {
            //clbAlarms.Items.Add(al.ActiveAt + " - " + al.Name, true);
            updateEntry(EntryType.Entries.Alarm, cntr);
        } else {
            clbAlarms.Items.Add(al.ActiveAt + " - " + al.Name, false);
        }
        cntr++;
    }
    cntr = 0;
    break;
``` | main | updatedisplay not utilizing updateentry properly as can be seen in the following code snippet updatedisplay is not properly working with updateentry in order to take advantage of the capabilities there may still be issues with updateentry not having all of the proper capabilities as well a quick walk through should be able to discern this case entrytype entries alarm clbalarms items clear foreach entrytype alarm al in activealarms if al running clbalarms items add al activeat al name true updateentry entrytype entries alarm cntr else clbalarms items add al activeat al name false cntr cntr break | 1 |
2,314 | 8,290,091,316 | IssuesEvent | 2018-09-19 16:19:30 | chocolatey/chocolatey-package-requests | https://api.github.com/repos/chocolatey/chocolatey-package-requests | closed | RFM - Anaconda 2 | Status: Available For Maintainer(s) | Download page: https://www.anaconda.com/download/
Latest Download Link:
- 64 bits: https://repo.anaconda.com/archive/Anaconda2-5.2.0-Windows-x86_64.exe
- 32 bits: https://repo.anaconda.com/archive/Anaconda2-5.2.0-Windows-x86.exe
For params, see here:
https://github.com/chantisnake/chocolateyPackage/blob/master/anaconda-choco/anaconda2/Versions/4.1.1/tools/chocolateyinstall.ps1 | True | RFM - Anaconda 2 - Download page: https://www.anaconda.com/download/
Latest Download Link:
- 64 bits: https://repo.anaconda.com/archive/Anaconda2-5.2.0-Windows-x86_64.exe
- 32 bits: https://repo.anaconda.com/archive/Anaconda2-5.2.0-Windows-x86.exe
For params, see here:
https://github.com/chantisnake/chocolateyPackage/blob/master/anaconda-choco/anaconda2/Versions/4.1.1/tools/chocolateyinstall.ps1 | main | rfm anaconda download page latest download link bits bits for params see here | 1 |