Unnamed: 0 int64 1 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 7 112 | repo_url stringlengths 36 141 | action stringclasses 3 values | title stringlengths 3 438 | labels stringlengths 4 308 | body stringlengths 7 254k | index stringclasses 7 values | text_combine stringlengths 96 254k | label stringclasses 2 values | text stringlengths 96 246k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
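The header row above lists this dump's columns and dtypes. As a minimal sketch under stated assumptions (the column list is copied from the header; the `main` → 1 / `non_main` → 0 mapping is inferred from the `label` and `binary_label` fields in the rows below), the binary label can be derived in plain Python:

```python
# Columns exactly as listed in the dataset header row.
COLUMNS = [
    "Unnamed: 0", "id", "type", "created_at", "repo", "repo_url", "action",
    "title", "labels", "body", "index", "text_combine", "label", "text",
    "binary_label",
]

# Inferred from the preview rows: label "main" pairs with binary_label 1,
# and label "non_main" pairs with binary_label 0.
LABEL_TO_BINARY = {"main": 1, "non_main": 0}

def to_binary_label(label: str) -> int:
    """Return the binary_label value that corresponds to a string label."""
    return LABEL_TO_BINARY[label]

assert len(COLUMNS) == 15
assert to_binary_label("non_main") == 0 and to_binary_label("main") == 1
```

The same check can be run over every record of the dump to verify that the two label columns stay in sync.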
770,464 | 27,040,772,629 | IssuesEvent | 2023-02-13 05:03:00 | GSM-MSG/GCMS-BackEnd | https://api.github.com/repos/GSM-MSG/GCMS-BackEnd | closed | Check whether a club with the same name and the same type already exists when creating a club | 1️⃣ Priority: High ⚡️ Simple | ### Describe
When creating a club, we should check whether a club with the same name and the same type already exists, but this check is missing;;
### Additional
_No response_ | 1.0 | Check whether a club with the same name and the same type already exists when creating a club - ### Describe
When creating a club, we should check whether a club with the same name and the same type already exists, but this check is missing;;
### Additional
_No response_ | non_main | check whether a club with the same name and the same type already exists when creating a club describe when creating a club we should check whether a club with the same name and the same type already exists but this check is missing additional no response | 0 |
368,641 | 25,801,279,445 | IssuesEvent | 2022-12-11 02:03:27 | external-secrets/external-secrets | https://api.github.com/repos/external-secrets/external-secrets | closed | Documentation does not state that CreationPolicy=Owner setting ownerReference field | good first issue area/documentation Stale | **Describe the solution you'd like**
Update the creationPolicy=Merge (or create a new policy) that also sets the `ownerReference` on the managed secret.
**What is the added value?**
This would be useful for bootstrapping the `Secret` used by a `SecretStore`, and having external-secrets automatically synchronise changes to the secret after the initial secret creation.
This is possible with the Merge strategy already; however, my GitOps tooling (argocd) is attempting to self-heal (converge on declared state) and clean up the secret, as without the `ownerReference` metadata it does not understand the `Secret`'s relationship to the `ExternalSecret` resource.
**Give us examples of the outcome**
I think this could be achieved by extending the check in the `mutationFunc` in `pkg/controllers/externalsecret/externalsecret_controller.go` to also apply owner fields for the merge creationPolicy.
**Observations (Constraints, Context, etc):**
The initial secret is being created directly with `kubectl create secret generic ...`. | 1.0 | Documentation does not state that CreationPolicy=Owner setting ownerReference field - **Describe the solution you'd like**
Update the creationPolicy=Merge (or create a new policy) that also sets the `ownerReference` on the managed secret.
**What is the added value?**
This would be useful for bootstrapping the `Secret` used by a `SecretStore`, and having external-secrets automatically synchronise changes to the secret after the initial secret creation.
This is possible with the Merge strategy already; however, my GitOps tooling (argocd) is attempting to self-heal (converge on declared state) and clean up the secret, as without the `ownerReference` metadata it does not understand the `Secret`'s relationship to the `ExternalSecret` resource.
**Give us examples of the outcome**
I think this could be achieved by extending the check in the `mutationFunc` in `pkg/controllers/externalsecret/externalsecret_controller.go` to also apply owner fields for the merge creationPolicy.
**Observations (Constraints, Context, etc):**
The initial secret is being created directly with `kubectl create secret generic ...`. | non_main | documentation does not state that creationpolicy owner setting ownerreference field describe the solution you d like update the creationpolicy merge or create a new policy that also sets the ownerreference on the managed secret what is the added value this would be useful for bootstrapping the secret used by a secretstore and having external secrets automatically synchronise changes to the secret after the initial secret creation this is possible with the merge strategy already however my gitops tooling argocd is attempting to self heal converge on declared state and clean up the secret as without the ownerreference metadata it does not understand the secret s relationship to the externalsecret resource give us examples of the outcome i think this could be achieved by extending the check in the mutationfunc in pkg controllers externalsecret externalsecret controller go to also apply owner fields for the merge creationpolicy observations constraints context etc the initial secret is being created directly with kubectl create secret generic | 0 |
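The request in the record above amounts to external-secrets writing a Kubernetes `ownerReferences` entry onto the managed `Secret`. As a hedged illustration (field names follow the Kubernetes OwnerReference spec; the `name` and `uid` values below are placeholders, not taken from the issue), the entry would look like:

```python
# Shape of a Kubernetes ownerReference entry (field names per the
# Kubernetes OwnerReference spec); name and uid are placeholder values.
owner_reference = {
    "apiVersion": "external-secrets.io/v1beta1",
    "kind": "ExternalSecret",
    "name": "example-external-secret",              # placeholder
    "uid": "00000000-0000-0000-0000-000000000000",  # placeholder
    "controller": True,
    "blockOwnerDeletion": True,
}

assert owner_reference["kind"] == "ExternalSecret"
assert owner_reference["controller"] is True
```

With `controller: true` set, GitOps tooling such as Argo CD can attribute the `Secret` to the `ExternalSecret` instead of pruning it as an unmanaged resource.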
1,247 | 5,308,979,794 | IssuesEvent | 2017-02-12 04:04:32 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | vmware_guest scsi controller type ignored | affects_2.2 bug_report cloud vmware waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
vmware_guest
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes below -->
```
ansible 2.2.0.0
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
Default
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say "N/A" for anything that is not platform-specific.
-->
N/A
##### SUMMARY
<!--- Explain the problem briefly -->
When cloning a new VM from a template, the value given for the scsi parameter is ignored and the controller in the VM is always paravirtual, even if the controller in the template is LSI_parallel
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
Create vm from template using vmware_guest and set scsi parameter to other than paravirtual.
<!--- Paste example playbooks or commands between quotes below -->
```
- name: Create the VM
vmware_guest:
validate_certs: False
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_user }}"
password: "{{ vcenter_pass }}"
name: "{{ inventory_hostname }}"
state: poweredon
disk:
- size_gb: "{{ vm_hdd }}"
type: thick
datastore: "{{ esx_datastore }}"
nic:
- type: e1000e
network: "{{ vm_network }}"
network_type: standard
hardware:
memory_mb: "{{ vm_mem }}"
num_cpus: "{{ vm_cpu }}"
osid: centos64guest
scsi: lsi
datacenter: "{{ vcenter_dc }}"
esxi_hostname: "{{ esx_host }}"
template: "{{ vm_template }}"
wait_for_ip_address: yes
register: deploy
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
SCSI Controller is LSI Parallel
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with high verbosity (-vvvv) -->
SCSI Controller is paravirtual
<!--- Paste verbatim command output between quotes below -->
```
``` | True | vmware_guest scsi controller type ignored - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
vmware_guest
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes below -->
```
ansible 2.2.0.0
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
Default
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say "N/A" for anything that is not platform-specific.
-->
N/A
##### SUMMARY
<!--- Explain the problem briefly -->
When cloning a new VM from a template, the value given for the scsi parameter is ignored and the controller in the VM is always paravirtual, even if the controller in the template is LSI_parallel
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
Create vm from template using vmware_guest and set scsi parameter to other than paravirtual.
<!--- Paste example playbooks or commands between quotes below -->
```
- name: Create the VM
vmware_guest:
validate_certs: False
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_user }}"
password: "{{ vcenter_pass }}"
name: "{{ inventory_hostname }}"
state: poweredon
disk:
- size_gb: "{{ vm_hdd }}"
type: thick
datastore: "{{ esx_datastore }}"
nic:
- type: e1000e
network: "{{ vm_network }}"
network_type: standard
hardware:
memory_mb: "{{ vm_mem }}"
num_cpus: "{{ vm_cpu }}"
osid: centos64guest
scsi: lsi
datacenter: "{{ vcenter_dc }}"
esxi_hostname: "{{ esx_host }}"
template: "{{ vm_template }}"
wait_for_ip_address: yes
register: deploy
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
SCSI Controller is LSI Parallel
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with high verbosity (-vvvv) -->
SCSI Controller is paravirtual
<!--- Paste verbatim command output between quotes below -->
```
``` | main | vmware guest scsi controller type ignored issue type bug report component name vmware guest ansible version ansible configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables default os environment mention the os you are running ansible from and the os you are managing or say n a for anything that is not platform specific n a summary when cloning a new vm from template the given value for parameter scsi is ignored and controller in vm is always paravirtual even if controller in template is lsi parallel steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used create vm from template using vmware guest and set scsi parameter to other than paravirtual name create the vm vmware guest validate certs false hostname vcenter hostname username vcenter user password vcenter pass name inventory hostname state poweredon disk size gb vm hdd type thick datastore esx datastore nic type network vm network network type standard hardware memory mb vm mem num cpus vm cpu osid scsi lsi datacenter vcenter dc esxi hostname esx host template vm template wait for ip address yes register deploy expected results scsi controller is lsi parallel actual results scsi controller is paravirtual | 1 |
4,459 | 23,219,113,613 | IssuesEvent | 2022-08-02 16:26:37 | centerofci/mathesar | https://api.github.com/repos/centerofci/mathesar | opened | Troubleshoot slow vertical scrolling of table rows | type: bug work: frontend status: ready restricted: maintainers | As reported [on Matrix](https://matrix.to/#/!UnujZDUxGuMrYdvgTU:matrix.mathesar.org/$vG7lSFfzzDySrzpjHQRc3j8pYCnHhgx5P0xWA_J2oRo?via=matrix.mathesar.org&via=matrix.org), some users experience significant lags when scrolling the table vertically. ([example video](https://www.loom.com/share/f9284adfe82440a3b360223028219de4))
We should troubleshoot this to better understand the cause and reproduction scenarios.
| True | Troubleshoot slow vertical scrolling of table rows - As reported [on Matrix](https://matrix.to/#/!UnujZDUxGuMrYdvgTU:matrix.mathesar.org/$vG7lSFfzzDySrzpjHQRc3j8pYCnHhgx5P0xWA_J2oRo?via=matrix.mathesar.org&via=matrix.org), some users experience significant lags when scrolling the table vertically. ([example video](https://www.loom.com/share/f9284adfe82440a3b360223028219de4))
We should troubleshoot this to better understand the cause and reproduction scenarios.
| main | troubleshoot slow vertical scrolling of table rows as reported some users experience significant lags when scrolling the table vertically we should troubleshoot this to better understand the cause and reproduction scenarios | 1 |
5,328 | 26,903,222,365 | IssuesEvent | 2023-02-06 17:03:12 | centerofci/mathesar | https://api.github.com/repos/centerofci/mathesar | closed | Remove "Short Name" field on user form | type: enhancement work: frontend status: ready restricted: maintainers | I'd like to remove the "Short Name" field on the user form. In a [Matrix discussion](https://matrix.to/#/!xInuTkBwjXZXYatIlm:matrix.mathesar.org/$2e6ULgkergV1BZwlHxpNBNUKcG4W-bnetG9bApfg23M?via=matrix.mathesar.org&via=matrix.org) @kgodey said this was fine. It's an optional field so we can keep it in the API but just remove the form UI until we have a use-case for short names.
| True | Remove "Short Name" field on user form - I'd like to remove the "Short Name" field on the user form. In a [Matrix discussion](https://matrix.to/#/!xInuTkBwjXZXYatIlm:matrix.mathesar.org/$2e6ULgkergV1BZwlHxpNBNUKcG4W-bnetG9bApfg23M?via=matrix.mathesar.org&via=matrix.org) @kgodey said this was fine. It's an optional field so we can keep it in the API but just remove the form UI until we have a use-case for short names.
| main | remove short name field on user form i d like to remove the short name field on the user form in a kgodey said this was fine it s an optional field so we can keep it in the api but just remove the form ui until we have a use case for short names | 1 |
148,116 | 13,226,769,197 | IssuesEvent | 2020-08-18 00:59:37 | material-components/material-components-web-components | https://api.github.com/repos/material-components/material-components-web-components | closed | Update folder structure for Sass theme files | Focus Area: Components Severity: Medium Type: Feature Why: Enable new use cases Why: Improve documentation Why: improve ergonomics | ## Description
Folder structure for packages should be updated to accommodate Sass theming files. e.g. [linear-progress](https://github.com/material-components/material-components-web-components/tree/master/packages/linear-progress/src)
## Acceptance criteria
Users should be able to use theme mixins from an `_index.scss` partial at the root of the package.
```scss
@use '@material/linear-progress';
html {
@include linear-progress.theme((bar: secondary));
}
```
| 1.0 | Update folder structure for Sass theme files - ## Description
Folder structure for packages should be updated to accommodate Sass theming files. e.g. [linear-progress](https://github.com/material-components/material-components-web-components/tree/master/packages/linear-progress/src)
## Acceptance criteria
Users should be able to use theme mixins from an `_index.scss` partial at the root of the package.
```scss
@use '@material/linear-progress';
html {
@include linear-progress.theme((bar: secondary));
}
```
| non_main | update folder structure for sass theme files description folder structure for packages should be updated to accommodate sass theming files e g acceptance criteria users should be able to use theme mixins from an index scss partial at the root of the package scss use material linear progress html include linear progress theme bar secondary | 0 |
791 | 4,389,802,065 | IssuesEvent | 2016-08-08 23:41:08 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | unarchive on OSX does not extract the archive and fails with an error | bug_report P2 waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
Bug
##### COMPONENT NAME
unarchive module
##### ANSIBLE VERSION
Ansible on OSX 10.10, installed through Homebrew.
```
$ ansible --version
ansible 2.1.0.0
config file =
configured module search path = Default w/o overrides
```
##### CONFIGURATION
No settings changed (to my knowledge ;-)).
##### OS / ENVIRONMENT
OSX 10.10, Homebrew
##### SUMMARY
When executing the play pasted below, Ansible fails when executing the unarchive command.
EDIT:
This is a regression because it happens with my existing plays which worked perfectly not long ago. I am pretty sure that this problem did not exist in the Ansible 2.0.x release on OSX using Homebrew.
##### STEPS TO REPRODUCE
1. Copy the play below into a file
2. Download the archive file (http://share.astina.io/openssl-certs.tar.gz) and put it beside the file. Alternatively, you can just create your own archive, I guess the contents are irrelevant (maybe it is relevant it being a `tar.gz` archive)
3. Run `ansible-playbook -vvvv main.yml`
<!--- Paste example playbooks or commands between quotes below -->
```
---
- name: Test unarchive
hosts: 127.0.0.1
connection: local
tasks:
- name: Create destination folder
file: path=test-certificates state=directory
- name: Install public certificates for OpenSSL
unarchive: src=openssl-certs.tar.gz dest=test-certificates creates=test-certificates/VeriSign_Universal_Root_Certification_Authority.pem
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
The play runs through and in the play's directory there is a directory called `test-certificates` with the contents of the archive.
##### ACTUAL RESULTS
Ansible fails with the following error:
```
fatal: [127.0.0.1]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_args": {"backup":
null, "content": null, "copy": true, "creates":
"test-certificates/VeriSign_Universal_Root_Certification_Authority.pem", "delimiter": null, "dest":
"test-certificates", "directory_mode": null, "exclude": [], "extra_opts": [], "follow": false, "force": null,
"group": null, "keep_newer": false, "list_files": false, "mode": null, "original_basename":
"openssl-certs.tar.gz", "owner": null, "regexp": null, "remote_src": null, "selevel": null, "serole": null,
"setype": null, "seuser": null, "src":
"/Users/raffaele/.ansible/tmp/ansible-tmp-1465980002.6-168001143856730/source"}}, "msg":
"Unexpected error when accessing exploded file: [Errno 2] No such file or directory:
'test-certificates/ACCVRAIZ1.pem'", "stat": {"exists": false}}
```
| True | unarchive on OSX does not extract the archive and fails with an error - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
Bug
##### COMPONENT NAME
unarchive module
##### ANSIBLE VERSION
Ansible on OSX 10.10, installed through Homebrew.
```
$ ansible --version
ansible 2.1.0.0
config file =
configured module search path = Default w/o overrides
```
##### CONFIGURATION
No settings changed (to my knowledge ;-)).
##### OS / ENVIRONMENT
OSX 10.10, Homebrew
##### SUMMARY
When executing the play pasted below, Ansible fails when executing the unarchive command.
EDIT:
This is a regression because it happens with my existing plays which worked perfectly not long ago. I am pretty sure that this problem did not exist in the Ansible 2.0.x release on OSX using Homebrew.
##### STEPS TO REPRODUCE
1. Copy the play below into a file
2. Download the archive file (http://share.astina.io/openssl-certs.tar.gz) and put it beside the file. Alternatively, you can just create your own archive, I guess the contents are irrelevant (maybe it is relevant it being a `tar.gz` archive)
3. Run `ansible-playbook -vvvv main.yml`
<!--- Paste example playbooks or commands between quotes below -->
```
---
- name: Test unarchive
hosts: 127.0.0.1
connection: local
tasks:
- name: Create destination folder
file: path=test-certificates state=directory
- name: Install public certificates for OpenSSL
unarchive: src=openssl-certs.tar.gz dest=test-certificates creates=test-certificates/VeriSign_Universal_Root_Certification_Authority.pem
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
The play runs through and in the play's directory there is a directory called `test-certificates` with the contents of the archive.
##### ACTUAL RESULTS
Ansible fails with the following error:
```
fatal: [127.0.0.1]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_args": {"backup":
null, "content": null, "copy": true, "creates":
"test-certificates/VeriSign_Universal_Root_Certification_Authority.pem", "delimiter": null, "dest":
"test-certificates", "directory_mode": null, "exclude": [], "extra_opts": [], "follow": false, "force": null,
"group": null, "keep_newer": false, "list_files": false, "mode": null, "original_basename":
"openssl-certs.tar.gz", "owner": null, "regexp": null, "remote_src": null, "selevel": null, "serole": null,
"setype": null, "seuser": null, "src":
"/Users/raffaele/.ansible/tmp/ansible-tmp-1465980002.6-168001143856730/source"}}, "msg":
"Unexpected error when accessing exploded file: [Errno 2] No such file or directory:
'test-certificates/ACCVRAIZ1.pem'", "stat": {"exists": false}}
```
| main | unarchive on osx does not extract the archive and fails with an error issue type bug component name unarchive module ansible version ansible on osx installed through homebrew ansible version ansible config file configured module search path default w o overrides configuration not settings changed to my knowledge os environment osx homebrew summary when executing the play pasted below ansible fails when excuting the unarchive command edit this is a regression because it happens with my existing plays which worked perfectly not long ago i am pretty sure that this problem did not exist in the ansible x release on osx using homebrew steps to reproduce copy the play below into a file download the archive file and put it besides the file alternatively you can just create your own archive i guess the contents are irrelevant maybe it is relevant it being a tar gz archive run ansible playbook vvvv main yml name test unarchive hosts connection local tasks name create destination folder file path test certificates state directory name install public certificates for openssl unarchive src openssl certs tar gz dest test certificates creates test certificates verisign universal root certification authority pem expected results the play runs through and in the play s directory there is a directory called test certificates with the contents of the archive actual results ansible fails with the following error fatal failed changed false failed true invocation module args backup null content null copy true creates test certificates verisign universal root certification authority pem delimiter null dest test certificates directory mode null exclude extra opts follow false force null group null keep newer false list files false mode null original basename openssl certs tar gz owner null regexp null remote src null selevel null serole null setype null seuser null src users raffaele ansible tmp ansible tmp source msg unexpected error when accessing exploded file no such file or 
directory test certificates pem stat exists false | 1 |
750,814 | 26,219,054,912 | IssuesEvent | 2023-01-04 13:28:13 | AxonFramework/AxonFramework | https://api.github.com/repos/AxonFramework/AxonFramework | closed | Configurable Locking Scheme in SagaStore | Priority 4: Would Type: Feature Status: Under Discussion | It would be beneficial if the `SagaStore` could adjust its locking scheme with regard to retrieving sagas from the database.
Ideally the `Builder` pattern would provide a toggle to change the query performed to retrieve a Saga instance. Doing so, a Saga instance could be prevented from being acted upon concurrently by two distinct threads, for example when a Saga's `SagaManager` is backed by a `SubscribingEventProcessor`.
Kick-off for this idea comes from Steven Grimm in [this](https://groups.google.com/forum/#!msg/axonframework/HAKofptqz0Q/B0KxDc9SCQAJ) user group issue. | 1.0 | Configurable Locking Scheme in SagaStore - It would be beneficial if the `SagaStore` could adjust its locking scheme with regard to retrieving sagas from the database.
Ideally the `Builder` pattern would provide a toggle to change the query performed to retrieve a Saga instance. Doing so, a Saga instance could be prevented from being acted upon concurrently by two distinct threads, for example when a Saga's `SagaManager` is backed by a `SubscribingEventProcessor`.
Kick-off for this idea comes from Steven Grimm in [this](https://groups.google.com/forum/#!msg/axonframework/HAKofptqz0Q/B0KxDc9SCQAJ) user group issue. | non_main | configurable locking scheme in sagastore it would be beneficial if the sagastore could adjust its locking scheme with regard to retrieving sagas from the database ideally the builder pattern would provide a toggle to change the query performed to retrieve a saga instance doing so a saga instance could be prevented from being acted upon concurrently by two distinct threads for example when a saga s sagamanager is backed by a subscribingeventprocessor kick off for this idea comes from steven grimm in user group issue | 0 |
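One way to read the `Builder` toggle proposed in the record above is as a switch between an ordinary `SELECT` and a pessimistic `SELECT ... FOR UPDATE` when loading a saga. The sketch below is illustrative Python, not Axon Framework's actual API; the table and column names are hypothetical:

```python
# Illustrative sketch only: a builder-style toggle that changes the locking
# scheme of the query used to load a saga (not Axon Framework's real API).
class SagaStoreBuilder:
    def __init__(self) -> None:
        self._pessimistic_locking = False

    def pessimistic_locking(self, enabled: bool) -> "SagaStoreBuilder":
        """Toggle SELECT ... FOR UPDATE on the saga-loading query."""
        self._pessimistic_locking = enabled
        return self

    def build_load_query(self) -> str:
        query = "SELECT * FROM saga_entry WHERE saga_id = ?"
        if self._pessimistic_locking:
            query += " FOR UPDATE"
        return query

assert SagaStoreBuilder().build_load_query().endswith("?")
assert SagaStoreBuilder().pessimistic_locking(True).build_load_query().endswith("FOR UPDATE")
```

Under the pessimistic setting, two threads loading the same saga row serialize on the database lock, which is the non-concurrent-access guarantee the issue asks for when the `SagaManager` is backed by a `SubscribingEventProcessor`.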
2,821 | 10,119,928,966 | IssuesEvent | 2019-07-31 12:41:27 | precice/precice | https://api.github.com/repos/precice/precice | opened | Investigate pybind11 to replace Python actions and Python bindings | maintainability | The project pybind11 promises:
> Seamless operability between C++11 and Python
https://github.com/pybind/pybind11
This could be a single solution to generate both
1. the interoperability between preCICE and the user-defined python actions as well as
2. the generation of the python bindings.
I think we should investigate this and evaluate if we can use it to reduce the pythonic pain of maintenance. | True | Investigate pybind11 to replace Python actions and Python bindings - The project pybind11 promises:
> Seamless operability between C++11 and Python
https://github.com/pybind/pybind11
This could be a single solution to generate both
1. the interoperability between preCICE and the user-defined python actions as well as
2. the generation of the python bindings.
I think we should investigate this and evaluate if we can use it to reduce the pythonic pain of maintenance. | main | investigate to replace python actions and python bindings the project promises seamless operability between c and python this could be a single solution to generate both the interoperability between precice and the user defined python actions as well as the generation of the python bindings i think we should investigate this and evaluate if we can use it to reduce the pythonic pain of maintenance | 1 |
4,590 | 23,821,280,282 | IssuesEvent | 2022-09-05 11:29:05 | centerofci/mathesar | https://api.github.com/repos/centerofci/mathesar | opened | New Explorations should not auto-save. Editing an existing exploration should auto-save. | type: bug work: backend work: frontend status: ready restricted: maintainers | New Explorations are currently persistent; any change made immediately saves the exploration. This behaviour is not preferred, since we'd like the user to be able to run and discard queries.
[Mail thread containing related discussion](https://groups.google.com/a/mathesar.org/g/mathesar-developers/c/RQJSiDQu1Tg/m/uLHj30yFAgAJ).
New behaviour proposed:
* New Exploration: Auto-save is not preferred
- User opens the Data Explorer
- User joins tables, does any number of operations
- This should not get saved automatically
- It should get saved when user manually clicks Save button
* Existing Exploration: Auto-save is preferred
- A user edits an existing exploration in the Data Explorer
- User makes changes to it
- The changes are auto-saved
- We have undo-redo to improve the user's editing experience
@kgodey @mathemancer @ghislaine @seancolsen does this sound good? | True | New Explorations should not auto-save. Editing an existing exploration should auto-save. - New Explorations are currently persistent; any change made immediately saves the exploration. This behaviour is not preferred, since we'd like the user to be able to run and discard queries.
[Mail thread containing related discussion](https://groups.google.com/a/mathesar.org/g/mathesar-developers/c/RQJSiDQu1Tg/m/uLHj30yFAgAJ).
New behaviour proposed:
* New Exploration: Auto-save is not preferred
- User opens the Data Explorer
- User joins tables, does any number of operations
- This should not get saved automatically
- It should get saved when user manually clicks Save button
* Existing Exploration: Auto-save is preferred
- A user edits an existing exploration in the Data Explorer
- User makes changes to it
- The changes are auto-saved
- We have undo-redo to improve the user's editing experience
@kgodey @mathemancer @ghislaine @seancolsen does this sound good? | main | new explorations should not auto save editing an existing exploration should auto save new explorations are currently persistent any change made immediately saves the exploration this behaviour is not preferred since we d like the user to be able to run and discard queries new behaviour proposed new exploration auto save is not preferred user opens the data explorer user joins tables does any number of operations this should not get saved automatically it should get saved when user manually clicks save button existing exploration auto save is preferred users edits an existing exploration in the data explorer user makes changes to it the changes are auto saved we have undo redo to improve the user s editing experience kgodey mathemancer ghislaine seancolsen does this sound good | 1 |
126,990 | 17,148,533,527 | IssuesEvent | 2021-07-13 17:19:49 | eoscostarica/eosio-dashboard | https://api.github.com/repos/eoscostarica/eosio-dashboard | reopened | Add descriptive paragraphs | Design / UX content | Users can use a bit of text to help guide them on the dashboard features.

## Accounts Page
Enter a valid EOSIO account name to see information on the account and also interact with any smart contract deployed on that account.
## Node Performance
## Rewards Distribution | 1.0 | Add descriptive paragraphs - Users can use a bit of text to help guide them on the dashboard features.

## Accounts Page
Enter a valid EOSIO account name to see information on the account and also interact with any smart contract deployed on that account.
## Node Performance
## Rewards Distribution | non_main | add descriptive paragraphs users can use a bit of text to help guide them on the dashboard features accounts page enter a valid eosio account name to see information on the account and also interact with any smart contract deployed on that account node performance rewards distribution | 0 |
5,508 | 27,497,976,519 | IssuesEvent | 2023-03-05 11:17:39 | nordtheme/nord | https://api.github.com/repos/nordtheme/nord | opened | Migrate Nord repositories from `arcticicestudio` to `nordtheme` | context-docs context-port type-epic scope-maintainability scope-ux | # Migrate Nord repositories from `arcticicestudio` to `nordtheme`
The latest ["Northern Post – The state and roadmap of Nord"][1] announcement described the plans for the migration to [Nord's new home on GitHub][2] and this _epic_ (GitHub [_tasklist_][3]) is used to track the overall process.
For a better visualization, with a time-based structure and general overview, see [the "Roadmap" view of Nord's "Planning & Roadmaps" project board][4] as well as the ["Active" view][5] for the current iteration.
The main goal is to migrate the repositories themselves to this new `nordtheme` organization, including…
- …adjustments of any hyperlinks to previous `arcticicestudio` references as well as to other, already migrated, repositories.
- …adjustments of any copyright information, removing [_Arctic Ice Studio_ as Nord brand due to its retirement][6].
- …adaptation to updated documentation and style conventions, which reduces the need to solve this through individual issues again afterwards.
- …adaptation to [Nord's new "Planning & Roadmaps" project board][7] by adding existing, already triaged or active, issues and preparing "fresh" issues for the follow-up triaging later on.
```[tasklist]
### Tasks
```
[1]: https://github.com/orgs/nordtheme/discussions/183
[2]: https://github.com/orgs/nordtheme/discussions/183#user-content-nords-new-home-on-github
[3]: https://docs.github.com/en/issues/tracking-your-work-with-issues/about-tasklists
[4]: https://github.com/orgs/nordtheme/projects/1/views/11
[5]: https://github.com/orgs/nordtheme/projects/1/views/7
[6]: https://github.com/orgs/nordtheme/discussions/183#user-content-retire-arctic-ice-studio-as-nord-brand
[7]: https://github.com/orgs/nordtheme/projects/1 | True | Migrate Nord repositories from `arcticicestudio` to `nordtheme` - # Migrate Nord repositories from `arcticicestudio` to `nordtheme`
The latest [βNorthern Post β The state and roadmap of Nordβ][1] announcement described the plans for the migration to [Nordβs new home home on GitHub][2] and this _epic_ (GitHub [_tasklist_][3]) is used to track to overall process.
For a better visualization, with a time-based structure and general overview see [the βRoadmap β§β view of Nordβs βPlanning & Roadmapsβ project board][4] as well as the [βActive β»β view][5] for the current iteration.
The main goal is to migrate the repositories itself to this new `nordtheme` organization, includingβ¦
- β¦adjustments of any hyperlinks to previous `arcticicestudio` references as well as other, already migrated, repositories.
- β¦adjustments of any copyright information, removing [_Arctic Ice Studio_ as Nord brand due to its retirement][6].
- β¦adaption to updated documentation and style conventions which reduces the need to solve this through individual issues again afterwards.
- β¦adaption to [Nordβs new βPlanning & Roadmapsβ project board][7] by adding existing, already triaged or active, issues and preparing βfreshβ issues for the follow-up triaging later on.
```[tasklist]
### Tasks
```
[1]: https://github.com/orgs/nordtheme/discussions/183
[2]: https://github.com/orgs/nordtheme/discussions/183#user-content-nords-new-home-on-github
[3]: https://docs.github.com/en/issues/tracking-your-work-with-issues/about-tasklists
[4]: https://github.com/orgs/nordtheme/projects/1/views/11
[5]: https://github.com/orgs/nordtheme/projects/1/views/7
[6]: https://github.com/orgs/nordtheme/discussions/183#user-content-retire-arctic-ice-studio-as-nord-brand
[7]: https://github.com/orgs/nordtheme/projects/1 | main | migrate nord repositories from arcticicestudio to nordtheme migrate nord repositories from arcticicestudio to nordtheme the latest announcement described the plans for the migration to and this epic github is used to track to overall process for a better visualization with a time based structure and general overview see as well as the for the current iteration the main goal is to migrate the repositories itself to this new nordtheme organization includingβ¦ β¦adjustments of any hyperlinks to previous arcticicestudio references as well as other already migrated repositories β¦adjustments of any copyright information removing β¦adaption to updated documentation and style conventions which reduces the need to solve this through individual issues again afterwards β¦adaption to by adding existing already triaged or active issues and preparing βfreshβ issues for the follow up triaging later on tasks | 1 |
5,852 | 31,278,944,117 | IssuesEvent | 2023-08-22 08:21:22 | camunda/zeebe | https://api.github.com/repos/camunda/zeebe | closed | Do not shutdown `LogManager` during on application shutdown; let Spring's shutdown hook do that. | area/maintainability kind/task | **Description**
> **Note**
> See linked epic issue for motivation
Acceptance criteria:
- We don't shutdown the `LogManager` in the `onApplicationEvent(ContextClosedEvent event)` of the `StandaloneBroker` and `StandaloneGateway`
- Instead, we enable the `logging.register-shutdown-hook` property in our default `application.properties`
One thing to keep in mind, when we'll launch the test applications, will we registering tons of hooks over time? It might be that we need to also disable it (via an overridden property) then. | True | Do not shutdown `LogManager` during on application shutdown; let Spring's shutdown hook do that. - **Description**
> **Note**
> See linked epic issue for motivation
Acceptance criteria:
- We don't shutdown the `LogManager` in the `onApplicationEvent(ContextClosedEvent event)` of the `StandaloneBroker` and `StandaloneGateway`
- Instead, we enable the `logging.register-shutdown-hook` property in our default `application.properties`
One thing to keep in mind, when we'll launch the test applications, will we registering tons of hooks over time? It might be that we need to also disable it (via an overridden property) then. | main | do not shutdown logmanager during on application shutdown let spring s shutdown hook do that description note see linked epic issue for motivation acceptance criteria we don t shutdown the logmanager in the onapplicationevent contextclosedevent event of the standalonebroker and standalonegateway instead we enable the logging register shutdown hook property in our default application properties one thing to keep in mind when we ll launch the test applications will we registering tons of hooks over time it might be that we need to also disable it via an overridden property then | 1 |
147,988 | 19,526,253,634 | IssuesEvent | 2021-12-30 08:24:25 | panasalap/linux-4.1.15 | https://api.github.com/repos/panasalap/linux-4.1.15 | opened | CVE-2020-27820 (Medium) detected in linux-stable-rtv4.1.33 | security vulnerability | ## CVE-2020-27820 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/panasalap/linux-4.1.15/commit/9c15ec31637ff4ee4a4c14fb9b3264a31f75aa69">9c15ec31637ff4ee4a4c14fb9b3264a31f75aa69</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/gpu/drm/nouveau/nouveau_drm.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A vulnerability was found in Linux kernel, where a use-after-frees in nouveau's postclose() handler could happen if removing device (that is not common to remove video card physically without power-off, but same happens if "unbind" the driver).
<p>Publish Date: 2021-11-03
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-27820>CVE-2020-27820</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.7</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-27820 (Medium) detected in linux-stable-rtv4.1.33 - ## CVE-2020-27820 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/panasalap/linux-4.1.15/commit/9c15ec31637ff4ee4a4c14fb9b3264a31f75aa69">9c15ec31637ff4ee4a4c14fb9b3264a31f75aa69</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/gpu/drm/nouveau/nouveau_drm.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A vulnerability was found in Linux kernel, where a use-after-frees in nouveau's postclose() handler could happen if removing device (that is not common to remove video card physically without power-off, but same happens if "unbind" the driver).
<p>Publish Date: 2021-11-03
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-27820>CVE-2020-27820</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.7</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve medium detected in linux stable cve medium severity vulnerability vulnerable library linux stable julia cartwright s fork of linux stable rt git library home page a href found in head commit a href found in base branch master vulnerable source files drivers gpu drm nouveau nouveau drm c vulnerability details a vulnerability was found in linux kernel where a use after frees in nouveau s postclose handler could happen if removing device that is not common to remove video card physically without power off but same happens if unbind the driver publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity high privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href step up your open source security game with whitesource | 0 |
1,533 | 6,572,225,408 | IssuesEvent | 2017-09-11 00:17:05 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | ec2_vpc_route_table error not clear when Name tag already in use | affects_2.1 aws bug_report cloud waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
ec2_vpc_route_table
##### ANSIBLE VERSION
```
<!--- Paste verbatim output from βansible --versionβ between quotes -->
ansible --version
ansible 2.1.0 (devel 22467a0de8) last updated 2016/04/13 09:48:50 (GMT -400)
lib/ansible/modules/core: (detached HEAD 99cd31140d) last updated 2016/04/13 09:49:08 (GMT -400)
lib/ansible/modules/extras: (detached HEAD ab2f4c4002) last updated 2016/04/13 09:49:08 (GMT -400)
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say βN/Aβ for anything that is not platform-specific.
-->
Ansible Host: Fedora 23
Ansible Target: N/A AWS API
##### SUMMARY
<!--- Explain the problem briefly -->
Attempting to create a VPC route table for NAT purposes, I accidentally re-used the `Name` resource tag from an previously created route table. I wasn't trying to update the existing route table with the matching `Name`, it was a copy/paste error. The resulting error was not at all useful trying to troubleshoot the error.
The error appears to be thrown in this block:
```
for route_spec in route_specs:
i = index_of_matching_route(route_spec, routes_to_match)
if i is None:
route_specs_to_create.append(route_spec)
else:
del routes_to_match[i]
routes_to_delete = [r for r in routes_to_match
if r.gateway_id != 'local'
and r.gateway_id not in propagating_vgw_ids]
```
specifically iterating `routes_to_match`. I'm not familiar enough with the code to figure out what's going wrong with that list. It appears there's some matching going on against the resource Name that can empty the `routes_to_match` list.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes -->
```
$ ansible-playbook -i hosts site.yml -vvv
```
Failing role task:
```
- name: Create VPC route table - Public
ec2_vpc_route_table:
state: present
region: us-west-2
vpc_id: "{{ vpc.vpc_id }}"
resource_tags:
Environment: Testing
Name: Public Routes
subnets:
- "{{public_subnet.subnet.id}}"
routes:
- dest: 0.0.0.0/0
gateway_id: "{{ vpc.igw_id }}"
- name: Create VPC route table - Internal NAT
ec2_vpc_route_table:
state: present
region: us-west-2
vpc_id: "{{ vpc.vpc_id }}"
resource_tags:
Environment: Testing
Name: Public Routes
subnets:
- "{{ priv1_subnet.subnet.id }}"
- "{{ priv2_subnet.subnet.id }}"
routes:
- dest: 0.0.0.0/0
instance_id: "{{ item }}"
with_items:
- "{{ nat_host.instance_ids }}"
```
Note the reuse of
```
resource_tags:
Environment: Testing
Name: Public Routes
```
in each task
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
Not entirely sure, but if the intent is to allow for updates of a route table based on the `Name` resource tag, then something that has the syntax for that update. Otherwise an error that specifies that the match list was somehow empty.
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with high verbosity (-vvvv) -->
```
TASK [vpc : Create VPC route table - Internal NAT] *****************************
task path: /home/matt.micene/Projects/github/aws_ansible/roles/vpc/tasks/main.yml:82
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: matt.micene
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1460558374.83-191553459311880 `" && echo "` echo $HOME/.ansible/tmp/ansible-tmp-1460558374.83-191553459311880 `" )'
<127.0.0.1> PUT /tmp/tmpLMeNP3 TO /home/matt.micene/.ansible/tmp/ansible-tmp-1460558374.83-191553459311880/ec2_vpc_route_table
<127.0.0.1> EXEC /bin/sh -c 'LANG=C LC_ALL=C LC_MESSAGES=C /usr/bin/python /home/matt.micene/.ansible/tmp/ansible-tmp-1460558374.83-191553459311880/ec2_vpc_route_table; rm -rf "/home/matt.micene/.ansible/tmp/ansible-tmp-1460558374.83-191553459311880/" > /dev/null 2>&1'
An exception occurred during task execution. The full traceback is:
Traceback (most recent call last):
File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/tmp/ansible_dQA9Zm/ansible/module_exec/ec2_vpc_route_table/__main__.py", line 611, in <module>
File "/tmp/ansible_dQA9Zm/ansible/module_exec/ec2_vpc_route_table/__main__.py", line 599, in main
File "/tmp/ansible_dQA9Zm/ansible/module_exec/ec2_vpc_route_table/__main__.py", line 522, in ensure_route_table_present
File "/tmp/ansible_dQA9Zm/ansible/module_exec/ec2_vpc_route_table/__main__.py", line 322, in ensure_routes
TypeError: argument of type 'NoneType' is not iterable
failed: [localhost] (item=i-8358025b) => {"failed": true, "invocation": {"module_name": "ec2_vpc_route_table"}, "item": "i-8358025b", "module_stderr": "Traceback (most recent call last):\n File \"/usr/lib64/python2.7/runpy.py\", line 162, in _run_module_as_main\n \"__main__\", fname, loader, pkg_name)\n File \"/usr/lib64/python2.7/runpy.py\", line 72, in _run_code\n exec code in run_globals\n File \"/tmp/ansible_dQA9Zm/ansible/module_exec/ec2_vpc_route_table/__main__.py\", line 611, in <module>\n File \"/tmp/ansible_dQA9Zm/ansible/module_exec/ec2_vpc_route_table/__main__.py\", line 599, in main\n File \"/tmp/ansible_dQA9Zm/ansible/module_exec/ec2_vpc_route_table/__main__.py\", line 522, in ensure_route_table_present\n File \"/tmp/ansible_dQA9Zm/ansible/module_exec/ec2_vpc_route_table/__main__.py\", line 322, in ensure_routes\nTypeError: argument of type 'NoneType' is not iterable\n", "module_stdout": "", "msg": "MODULE FAILURE", "parsed": false}
```
| True | ec2_vpc_route_table error not clear when Name tag already in use - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
ec2_vpc_route_table
##### ANSIBLE VERSION
```
<!--- Paste verbatim output from βansible --versionβ between quotes -->
ansible --version
ansible 2.1.0 (devel 22467a0de8) last updated 2016/04/13 09:48:50 (GMT -400)
lib/ansible/modules/core: (detached HEAD 99cd31140d) last updated 2016/04/13 09:49:08 (GMT -400)
lib/ansible/modules/extras: (detached HEAD ab2f4c4002) last updated 2016/04/13 09:49:08 (GMT -400)
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say βN/Aβ for anything that is not platform-specific.
-->
Ansible Host: Fedora 23
Ansible Target: N/A AWS API
##### SUMMARY
<!--- Explain the problem briefly -->
Attempting to create a VPC route table for NAT purposes, I accidentally re-used the `Name` resource tag from an previously created route table. I wasn't trying to update the existing route table with the matching `Name`, it was a copy/paste error. The resulting error was not at all useful trying to troubleshoot the error.
The error appears to be thrown in this block:
```
for route_spec in route_specs:
i = index_of_matching_route(route_spec, routes_to_match)
if i is None:
route_specs_to_create.append(route_spec)
else:
del routes_to_match[i]
routes_to_delete = [r for r in routes_to_match
if r.gateway_id != 'local'
and r.gateway_id not in propagating_vgw_ids]
```
specifically iterating `routes_to_match`. I'm not familiar enough with the code to figure out what's going wrong with that list. It appears there's some matching going on against the resource Name that can empty the `routes_to_match` list.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes -->
```
$ ansible-playbook -i hosts site.yml -vvv
```
Failing role task:
```
- name: Create VPC route table - Public
ec2_vpc_route_table:
state: present
region: us-west-2
vpc_id: "{{ vpc.vpc_id }}"
resource_tags:
Environment: Testing
Name: Public Routes
subnets:
- "{{public_subnet.subnet.id}}"
routes:
- dest: 0.0.0.0/0
gateway_id: "{{ vpc.igw_id }}"
- name: Create VPC route table - Internal NAT
ec2_vpc_route_table:
state: present
region: us-west-2
vpc_id: "{{ vpc.vpc_id }}"
resource_tags:
Environment: Testing
Name: Public Routes
subnets:
- "{{ priv1_subnet.subnet.id }}"
- "{{ priv2_subnet.subnet.id }}"
routes:
- dest: 0.0.0.0/0
instance_id: "{{ item }}"
with_items:
- "{{ nat_host.instance_ids }}"
```
Note the reuse of
```
resource_tags:
Environment: Testing
Name: Public Routes
```
in each task
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
Not entirely sure, but if the intent is to allow for updates of a route table based on the `Name` resource tag, then something that has the syntax for that update. Otherwise an error that specifies that the match list was somehow empty.
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with high verbosity (-vvvv) -->
```
TASK [vpc : Create VPC route table - Internal NAT] *****************************
task path: /home/matt.micene/Projects/github/aws_ansible/roles/vpc/tasks/main.yml:82
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: matt.micene
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1460558374.83-191553459311880 `" && echo "` echo $HOME/.ansible/tmp/ansible-tmp-1460558374.83-191553459311880 `" )'
<127.0.0.1> PUT /tmp/tmpLMeNP3 TO /home/matt.micene/.ansible/tmp/ansible-tmp-1460558374.83-191553459311880/ec2_vpc_route_table
<127.0.0.1> EXEC /bin/sh -c 'LANG=C LC_ALL=C LC_MESSAGES=C /usr/bin/python /home/matt.micene/.ansible/tmp/ansible-tmp-1460558374.83-191553459311880/ec2_vpc_route_table; rm -rf "/home/matt.micene/.ansible/tmp/ansible-tmp-1460558374.83-191553459311880/" > /dev/null 2>&1'
An exception occurred during task execution. The full traceback is:
Traceback (most recent call last):
File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/tmp/ansible_dQA9Zm/ansible/module_exec/ec2_vpc_route_table/__main__.py", line 611, in <module>
File "/tmp/ansible_dQA9Zm/ansible/module_exec/ec2_vpc_route_table/__main__.py", line 599, in main
File "/tmp/ansible_dQA9Zm/ansible/module_exec/ec2_vpc_route_table/__main__.py", line 522, in ensure_route_table_present
File "/tmp/ansible_dQA9Zm/ansible/module_exec/ec2_vpc_route_table/__main__.py", line 322, in ensure_routes
TypeError: argument of type 'NoneType' is not iterable
failed: [localhost] (item=i-8358025b) => {"failed": true, "invocation": {"module_name": "ec2_vpc_route_table"}, "item": "i-8358025b", "module_stderr": "Traceback (most recent call last):\n File \"/usr/lib64/python2.7/runpy.py\", line 162, in _run_module_as_main\n \"__main__\", fname, loader, pkg_name)\n File \"/usr/lib64/python2.7/runpy.py\", line 72, in _run_code\n exec code in run_globals\n File \"/tmp/ansible_dQA9Zm/ansible/module_exec/ec2_vpc_route_table/__main__.py\", line 611, in <module>\n File \"/tmp/ansible_dQA9Zm/ansible/module_exec/ec2_vpc_route_table/__main__.py\", line 599, in main\n File \"/tmp/ansible_dQA9Zm/ansible/module_exec/ec2_vpc_route_table/__main__.py\", line 522, in ensure_route_table_present\n File \"/tmp/ansible_dQA9Zm/ansible/module_exec/ec2_vpc_route_table/__main__.py\", line 322, in ensure_routes\nTypeError: argument of type 'NoneType' is not iterable\n", "module_stdout": "", "msg": "MODULE FAILURE", "parsed": false}
```
| main | vpc route table error not clear when name tag already in use issue type bug report component name vpc route table ansible version ansible version ansible devel last updated gmt lib ansible modules core detached head last updated gmt lib ansible modules extras detached head last updated gmt config file etc ansible ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say βn aβ for anything that is not platform specific ansible host fedora ansible target n a aws api summary attempting to create a vpc route table for nat purposes i accidentally re used the name resource tag from an previously created route table i wasn t trying to update the existing route table with the matching name it was a copy paste error the resulting error was not at all useful trying to troubleshoot the error the error appears to be thrown in this block for route spec in route specs i index of matching route route spec routes to match if i is none route specs to create append route spec else del routes to match routes to delete r for r in routes to match if r gateway id local and r gateway id not in propagating vgw ids specifically iterating routes to match i m not familiar enough with the code to figure out what s going wrong with that list it appears there s some matching going on against the resource name that can empty the routes to match list steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used ansible playbook i hosts site yml vvv failing role task name create vpc route table public vpc route table state present region us west vpc id vpc vpc id resource tags environment testing name public routes subnets public subnet subnet id routes dest gateway id vpc igw id name create vpc route table 
internal nat vpc route table state present region us west vpc id vpc vpc id resource tags environment testing name public routes subnets subnet subnet id subnet subnet id routes dest instance id item with items nat host instance ids note the reuse of resource tags environment testing name public routes in each task expected results not entirely sure but if the intent is to allow for updates of a route table based on the name resource tag then something that has the syntax for that update otherwise an error that specifies that the match list was somehow empty actual results task task path home matt micene projects github aws ansible roles vpc tasks main yml establish local connection for user matt micene exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo echo home ansible tmp ansible tmp put tmp to home matt micene ansible tmp ansible tmp vpc route table exec bin sh c lang c lc all c lc messages c usr bin python home matt micene ansible tmp ansible tmp vpc route table rm rf home matt micene ansible tmp ansible tmp dev null an exception occurred during task execution the full traceback is traceback most recent call last file usr runpy py line in run module as main main fname loader pkg name file usr runpy py line in run code exec code in run globals file tmp ansible ansible module exec vpc route table main py line in file tmp ansible ansible module exec vpc route table main py line in main file tmp ansible ansible module exec vpc route table main py line in ensure route table present file tmp ansible ansible module exec vpc route table main py line in ensure routes typeerror argument of type nonetype is not iterable failed item i failed true invocation module name vpc route table item i module stderr traceback most recent call last n file usr runpy py line in run module as main n main fname loader pkg name n file usr runpy py line in run code n exec code in run globals n file tmp ansible ansible module exec vpc route table main py line in n file tmp 
ansible ansible module exec vpc route table main py line in main n file tmp ansible ansible module exec vpc route table main py line in ensure route table present n file tmp ansible ansible module exec vpc route table main py line in ensure routes ntypeerror argument of type nonetype is not iterable n module stdout msg module failure parsed false | 1 |
5,249 | 26,567,532,633 | IssuesEvent | 2023-01-20 21:57:34 | mozilla/foundation.mozilla.org | https://api.github.com/repos/mozilla/foundation.mozilla.org | closed | Fix CI deprecation warnings | engineering devops maintain | We currently get a bunch of deprecation warnings for every GitHub action run. I think we need to fix this to make sure CI is not going to break.
 | True | Fix CI deprecation warnings - We currently get a bunch of deprecation warnings for every GitHub action run. I think we need to fix this to make sure CI is not going to break.
 | main | fix ci deprecation warnings we currently get a bunch of deprecation warnings for every github action run i think we need to fix this to make sure ci is not going to break | 1 |
40,852 | 5,318,879,037 | IssuesEvent | 2017-02-14 04:01:11 | NorthBridge/nexus-community | https://api.github.com/repos/NorthBridge/nexus-community | closed | enrollment packet did not come through with Cats network | ready to test | New enrollment emails did not come through | 1.0 | enrollment packet did not come through with Cats network - New enrollment emails did not come through | non_main | enrollment packet did not come through with cats network new enrollment emails did not come through | 0 |
64,511 | 18,722,101,109 | IssuesEvent | 2021-11-03 12:58:05 | department-of-veterans-affairs/va.gov-team | https://api.github.com/repos/department-of-veterans-affairs/va.gov-team | closed | [Components and pattern standards] Design components or patterns don't align with Design System guidelines. (04.07.1) | content design 508/Accessibility ia education 508-defect-3 collab-cycle-feedback afs-education Staging CCIssue04.07 CC-Dashboard | ### General Information
#### VFS team name
Education Application (BAH)
#### VFS product name
Comparison Tool Redesign
#### Point of Contact/Reviewers
Trevor Pierce (Accessibility)
Allison Christman (Design)
---
### Platform Issue
Design components or patterns don't align with Design System guidelines.
### Issue Details
Search bar on Comparison Tool view doesn't match other search bars like /find-forms
### Link, screenshot or steps to recreate
https://docs.google.com/spreadsheets/d/1KnkaMDBeOZUR9n1sG_w3JB1cpAWvg-rWOLuJPjbUwsI/edit#gid=0
### VA.gov Experience Standard
[Cateogy Number 04, Issue Number 07](https://depo-platform-documentation.scrollhelp.site/collaboration-cycle/VA.gov-experience-standards.1683980311.html)
### Other References
WCAG SC 3.2.4 AA
---
### Platform Recommendation
Update visual style to match existing search bars.
Style the search button to be attached and same color/size as the rest of the search on VA.gov. Put the icon in front of the text. See excel file under links for example
### VFS Team Tasks to Complete
- [ ] Comment on the ticket if there are questions or concerns
- [ ] VFS team closes the ticket when the issue has been resolved | 1.0 | [Components and pattern standards] Design components or patterns don't align with Design System guidelines. (04.07.1) - ### General Information
#### VFS team name
Education Application (BAH)
#### VFS product name
Comparison Tool Redesign
#### Point of Contact/Reviewers
Trevor Pierce (Accessibility)
Allison Christman (Design)
---
### Platform Issue
Design components or patterns don't align with Design System guidelines.
### Issue Details
Search bar on Comparison Tool view doesn't match other search bars like /find-forms
### Link, screenshot or steps to recreate
https://docs.google.com/spreadsheets/d/1KnkaMDBeOZUR9n1sG_w3JB1cpAWvg-rWOLuJPjbUwsI/edit#gid=0
### VA.gov Experience Standard
[Cateogy Number 04, Issue Number 07](https://depo-platform-documentation.scrollhelp.site/collaboration-cycle/VA.gov-experience-standards.1683980311.html)
### Other References
WCAG SC 3.2.4 AA
---
### Platform Recommendation
Update visual style to match existing search bars.
Style the search button to be attached and same color/size as the rest of the search on VA.gov. Put the icon in front of the text. See excel file under links for example
### VFS Team Tasks to Complete
- [ ] Comment on the ticket if there are questions or concerns
- [ ] VFS team closes the ticket when the issue has been resolved | non_main | design components or patterns don t align with design system guidelines general information vfs team name education application bah vfs product name comparison tool redesign point of contact reviewers trevor pierce accessibility allison christman design platform issue design components or patterns don t align with design system guidelines issue details search bar on comparison tool view doesn t match other search bars like find forms link screenshot or steps to recreate va gov experience standard other references wcag sc aa platform recommendation update visual style to match existing search bars style the search button to be attached and same color size as the rest of the search on va gov put the icon in front of the text see excel file under links for example vfs team tasks to complete comment on the ticket if there are questions or concerns vfs team closes the ticket when the issue has been resolved | 0 |
240,392 | 20,026,270,279 | IssuesEvent | 2022-02-01 21:42:33 | phetsims/sun | https://api.github.com/repos/phetsims/sun | closed | API change for `slider.enabledProperty` | dev:phet-io type:automated-testing status:blocks-publication | Several sims have been failing CT due to PhET-iO API changes to Slider `enabledProperty`. For example:
```
molecule-polarity : phet-io-api-compatibility : unbuilt
https://bayes.colorado.edu/continuous-testing/ct-snapshots/1643631899761/molecule-polarity/molecule-polarity_en.html?continuousTest=%7B%22test%22%3A%5B%22molecule-polarity%22%2C%22phet-io-api-compatibility%22%2C%22unbuilt%22%5D%2C%22snapshotName%22%3A%22snapshot-1643631899761%22%2C%22timestamp%22%3A1643637936709%7D&ea&brand=phet-io&phetioStandalone&phetioCompareAPI&randomSeed=332211
Query: ea&brand=phet-io&phetioStandalone&phetioCompareAPI&randomSeed=332211
Uncaught Error: Assertion failed: Designed API changes detected, please roll them back or revise the reference API:
PhET-iO Element missing: moleculePolarity.threeAtomsScreen.view.electronegativityPanels.atomAElectronegativityPanel.slider.enabledProperty
PhET-iO Element missing: moleculePolarity.threeAtomsScreen.view.electronegativityPanels.atomBElectronegativityPanel.slider.enabledProperty
PhET-iO Element missing: moleculePolarity.threeAtomsScreen.view.electronegativityPanels.atomCElectronegativityPanel.slider.enabledProperty
PhET-iO Element missing: moleculePolarity.twoAtomsScreen.view.electronegativityPanels.atomAElectronegativityPanel.slider.enabledProperty
PhET-iO Element missing: moleculePolarity.twoAtomsScreen.view.electronegativityPanels.atomBElectronegativityPanel.slider.enabledProperty
Error: Assertion failed: Designed API changes detected, please roll them back or revise the reference API:
PhET-iO Element missing: moleculePolarity.threeAtomsScreen.view.electronegativityPanels.atomAElectronegativityPanel.slider.enabledProperty
PhET-iO Element missing: moleculePolarity.threeAtomsScreen.view.electronegativityPanels.atomBElectronegativityPanel.slider.enabledProperty
PhET-iO Element missing: moleculePolarity.threeAtomsScreen.view.electronegativityPanels.atomCElectronegativityPanel.slider.enabledProperty
PhET-iO Element missing: moleculePolarity.twoAtomsScreen.view.electronegativityPanels.atomAElectronegativityPanel.slider.enabledProperty
PhET-iO Element missing: moleculePolarity.twoAtomsScreen.view.electronegativityPanels.atomBElectronegativityPanel.slider.enabledProperty
at window.assertions.assertFunction (https://bayes.colorado.edu/continuous-testing/ct-snapshots/1643631899761/assert/js/assert.js:25:13)
at XMLHttpRequest.<anonymous> (https://bayes.colorado.edu/continuous-testing/ct-snapshots/1643631899761/chipper/dist/js/phet-io/js/phetioEngine.js:345:23)
id: Bayes Chrome
Snapshot from 1/31/2022, 5:24:59 AM
```
There have been no sim-specific changes to the sims that I'm responsible for.
I see changes to AccessibleSlider and AccessibleValueHandler by @mjkauzmann that seem to coincide with the appearance of this problem. For example https://github.com/phetsims/sun/commit/1ad9e76414658b3797009662e651d3d41b48d34a and https://github.com/phetsims/sun/commit/877f09991f4ae29029c2a7c8215d814318b6a91d.
If this was an intentional change, please summarize "why", and update PhET-iO APIs. If it was unintentional, please correct the regression.
| 1.0 | API change for `slider.enabledProperty` - Several sims have been failing CT due to PhET-iO API changes to Slider `enabledProperty`. For example:
```
molecule-polarity : phet-io-api-compatibility : unbuilt
https://bayes.colorado.edu/continuous-testing/ct-snapshots/1643631899761/molecule-polarity/molecule-polarity_en.html?continuousTest=%7B%22test%22%3A%5B%22molecule-polarity%22%2C%22phet-io-api-compatibility%22%2C%22unbuilt%22%5D%2C%22snapshotName%22%3A%22snapshot-1643631899761%22%2C%22timestamp%22%3A1643637936709%7D&ea&brand=phet-io&phetioStandalone&phetioCompareAPI&randomSeed=332211
Query: ea&brand=phet-io&phetioStandalone&phetioCompareAPI&randomSeed=332211
Uncaught Error: Assertion failed: Designed API changes detected, please roll them back or revise the reference API:
PhET-iO Element missing: moleculePolarity.threeAtomsScreen.view.electronegativityPanels.atomAElectronegativityPanel.slider.enabledProperty
PhET-iO Element missing: moleculePolarity.threeAtomsScreen.view.electronegativityPanels.atomBElectronegativityPanel.slider.enabledProperty
PhET-iO Element missing: moleculePolarity.threeAtomsScreen.view.electronegativityPanels.atomCElectronegativityPanel.slider.enabledProperty
PhET-iO Element missing: moleculePolarity.twoAtomsScreen.view.electronegativityPanels.atomAElectronegativityPanel.slider.enabledProperty
PhET-iO Element missing: moleculePolarity.twoAtomsScreen.view.electronegativityPanels.atomBElectronegativityPanel.slider.enabledProperty
Error: Assertion failed: Designed API changes detected, please roll them back or revise the reference API:
PhET-iO Element missing: moleculePolarity.threeAtomsScreen.view.electronegativityPanels.atomAElectronegativityPanel.slider.enabledProperty
PhET-iO Element missing: moleculePolarity.threeAtomsScreen.view.electronegativityPanels.atomBElectronegativityPanel.slider.enabledProperty
PhET-iO Element missing: moleculePolarity.threeAtomsScreen.view.electronegativityPanels.atomCElectronegativityPanel.slider.enabledProperty
PhET-iO Element missing: moleculePolarity.twoAtomsScreen.view.electronegativityPanels.atomAElectronegativityPanel.slider.enabledProperty
PhET-iO Element missing: moleculePolarity.twoAtomsScreen.view.electronegativityPanels.atomBElectronegativityPanel.slider.enabledProperty
at window.assertions.assertFunction (https://bayes.colorado.edu/continuous-testing/ct-snapshots/1643631899761/assert/js/assert.js:25:13)
at XMLHttpRequest.<anonymous> (https://bayes.colorado.edu/continuous-testing/ct-snapshots/1643631899761/chipper/dist/js/phet-io/js/phetioEngine.js:345:23)
id: Bayes Chrome
Snapshot from 1/31/2022, 5:24:59 AM
```
There have been no sim-specific changes to the sims that I'm responsible for.
I see changes to AccessibleSlider and AccessibleValueHandler by @mjkauzmann that seem to coincide with the appearance of this problem. For example https://github.com/phetsims/sun/commit/1ad9e76414658b3797009662e651d3d41b48d34a and https://github.com/phetsims/sun/commit/877f09991f4ae29029c2a7c8215d814318b6a91d.
If this was an intentional change, please summarize "why", and update PhET-iO APIs. If it was unintentional, please correct the regression.
| non_main | api change for slider enabledproperty several sims have been failing ct due to phet io api changes to slider enabledproperty for example molecule polarity phet io api compatibility unbuilt query ea brand phet io phetiostandalone phetiocompareapi randomseed uncaught error assertion failed designed api changes detected please roll them back or revise the reference api phet io element missing moleculepolarity threeatomsscreen view electronegativitypanels atomaelectronegativitypanel slider enabledproperty phet io element missing moleculepolarity threeatomsscreen view electronegativitypanels atombelectronegativitypanel slider enabledproperty phet io element missing moleculepolarity threeatomsscreen view electronegativitypanels atomcelectronegativitypanel slider enabledproperty phet io element missing moleculepolarity twoatomsscreen view electronegativitypanels atomaelectronegativitypanel slider enabledproperty phet io element missing moleculepolarity twoatomsscreen view electronegativitypanels atombelectronegativitypanel slider enabledproperty error assertion failed designed api changes detected please roll them back or revise the reference api phet io element missing moleculepolarity threeatomsscreen view electronegativitypanels atomaelectronegativitypanel slider enabledproperty phet io element missing moleculepolarity threeatomsscreen view electronegativitypanels atombelectronegativitypanel slider enabledproperty phet io element missing moleculepolarity threeatomsscreen view electronegativitypanels atomcelectronegativitypanel slider enabledproperty phet io element missing moleculepolarity twoatomsscreen view electronegativitypanels atomaelectronegativitypanel slider enabledproperty phet io element missing moleculepolarity twoatomsscreen view electronegativitypanels atombelectronegativitypanel slider enabledproperty at window assertions assertfunction at xmlhttprequest id bayes chrome snapshot from am there have been no sim specific changes to the sims 
that i m responsible for i see changes to accessibleslider and accessiblevaluehandler by mjkauzmann that seem to coincide with the appearance of this problem for example and if this was an intentional change please summarize why and update phet io apis if it was unintentional please correct the regression | 0 |
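The kind of check that produces the "PhET-iO Element missing" messages in the log above can be sketched as a recursive walk over two API trees. The code below is an illustrative sketch only, not the real `phetioCompareAPI` implementation; the tree shapes and names are invented.

```python
def missing_elements(reference, actual, path=""):
    """Report element paths present in the reference API tree but absent
    from the actual API tree (illustrative sketch, not phetioCompareAPI)."""
    missing = []
    for name, children in reference.items():
        child_path = f"{path}.{name}" if path else name
        if name not in actual:
            missing.append(child_path)
        else:
            missing.extend(missing_elements(children, actual[name], child_path))
    return missing


# Invented example mirroring the failure mode in the log: a designed
# element was removed from the actual API but is still in the reference.
reference_api = {"slider": {"enabledProperty": {}, "visibleProperty": {}}}
actual_api = {"slider": {"visibleProperty": {}}}
```

With these trees, the comparison reports `slider.enabledProperty` as missing, which is exactly the shape of the assertion failures shown above.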
119,923 | 25,707,249,578 | IssuesEvent | 2022-12-07 02:10:20 | Rich2/openstrat | https://api.github.com/repos/Rich2/openstrat | opened | Removal of Grid Managers | code elimination | Consider removing grid managers. I'm not sure the encapsulation they enable justifies the increase in obfuscation. | 1.0 | Removal of Grid Managers - Consider removing grid managers. I'm not sure the encapsulation they enable justifies the increase in obfuscation. | non_main | removal of grid managers consider removing grid managers i m not sure the encapsulation they enable justifies the increase in obfuscation | 0 |
3,696 | 15,093,646,468 | IssuesEvent | 2021-02-07 01:45:30 | Datatamer/tamr-client | https://api.github.com/repos/Datatamer/tamr-client | closed | Change name of default branch from `master` to `main` | βοΈ Maintainers | We should follow through on a commitment to change the default branch name of the repo from the problematic name `master` to the Github-recommended name `main`. | True | Change name of default branch from `master` to `main` - We should follow through on a commitment to change the default branch name of the repo from the problematic name `master` to the Github-recommended name `main`. | main | change name of default branch from master to main we should follow through on a commitment to change the default branch name of the repo from the problematic name master to the github recommended name main | 1 |
81,708 | 3,593,757,822 | IssuesEvent | 2016-02-01 20:57:51 | morelinq/MoreLINQ | https://api.github.com/repos/morelinq/MoreLINQ | closed | Create "RandomElement" method | enhancement Priority-Low | We should have an extension method with returns a random element from the sequence.
See http://stackoverflow.com/questions/648196/random-row-from-linq/648240#648240 for sample code.
---
Originally reported on Google Code with ID 20
Reported by @jskeet on 2009-03-15 18:08:59
| 1.0 | Create "RandomElement" method - We should have an extension method with returns a random element from the sequence.
See http://stackoverflow.com/questions/648196/random-row-from-linq/648240#648240 for sample code.
---
Originally reported on Google Code with ID 20
Reported by @jskeet on 2009-03-15 18:08:59
| non_main | create randomelement method we should have an extension method with returns a random element from the sequence see for sample code originally reported on google code with id reported by jskeet on | 0 |
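The requested helper (MoreLINQ itself is C#) boils down to single-pass reservoir sampling with a reservoir of size one, which works even when the sequence cannot report its length up front. A language-agnostic sketch, with invented names:

```python
import random


def random_element(iterable, rng=random):
    """Return one uniformly random element from any iterable in a single
    pass, using reservoir sampling with a reservoir of size one."""
    chosen = None
    count = 0
    for count, item in enumerate(iterable, start=1):
        # Keep the new item with probability 1/count; after the loop every
        # element has had an equal 1/n chance of being the survivor.
        if rng.randrange(count) == 0:
            chosen = item
    if count == 0:
        raise ValueError("random_element() got an empty iterable")
    return chosen
```

Because it is single-pass, the same approach works for one-shot iterators and streams, not just materialized lists.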
5,358 | 26,979,297,434 | IssuesEvent | 2023-02-09 11:55:23 | backdrop-ops/contrib | https://api.github.com/repos/backdrop-ops/contrib | closed | Application to join: Harzah2 (bbcode module) | Port in progress Maintainer application | I would like to apply to join the Backdrop contrib team.
Here is a module I ported: https://github.com/Harzah2/bbcode
[LICENSE.txt](https://github.com/backdrop-ops/contrib/files/3349653/LICENSE.txt)
[README.txt](https://github.com/backdrop-ops/contrib/files/3349655/README.txt)
| True | Application to join: Harzah2 (bbcode module) - I would like to apply to join the Backdrop contrib team.
Here is a module I ported: https://github.com/Harzah2/bbcode
[LICENSE.txt](https://github.com/backdrop-ops/contrib/files/3349653/LICENSE.txt)
[README.txt](https://github.com/backdrop-ops/contrib/files/3349655/README.txt)
| main | application to join bbcode module i would like to apply to join the backdrop contrib team here is a module i ported | 1 |
3,297 | 12,689,281,438 | IssuesEvent | 2020-06-21 04:58:18 | diofant/diofant | https://api.github.com/repos/diofant/diofant | opened | Make .diff() call convention for high derivatives - consistent across the codebase | core help wanted maintainability polys | Right now, `Poly.diff()` and `Expr.diff()` - differ (e.g. see sympy/sympy#19590). They shouldn't. Perhaps, Poly's syntax
`Poly(x**2).diff((x, 2))` for higher derivatives must be chosen, as it seems simple.
| True | Make .diff() call convention for high derivatives - consistent across the codebase - Right now, `Poly.diff()` and `Expr.diff()` - differ (e.g. see sympy/sympy#19590). They shouldn't. Perhaps, Poly's syntax
`Poly(x**2).diff((x, 2))` for higher derivatives must be chosen, as it seems simple.
| main | make diff call convention for high derivatives consistent across the codebase right now poly diff and expr diff differ e g see sympy sympy they shouldn t perhaps poly s syntax poly x diff x for higher derivatives must be chosen as it seems simple | 1 |
245,169 | 18,773,883,620 | IssuesEvent | 2021-11-07 10:24:08 | ElysianFieldsArchive/ElysianMuse | https://api.github.com/repos/ElysianFieldsArchive/ElysianMuse | opened | [user] add uml for artist settings | documentation Module:User | status: active/inactive (auto inactive when added by other user as artist contributor to their story, even if artist tag wasn't added before)
available art type (Story banner, Chapter banner, Mood board, Avatar, animated banners, Other)
expected turnaround time (1-3 days, 4-7 days, 1-2 weeks, 2-3 weeks)
examples (image links)
AOB (freetext)
see also #27 | 1.0 | [user] add uml for artist settings - status: active/inactive (auto inactive when added by other user as artist contributor to their story, even if artist tag wasn't added before)
available art type (Story banner, Chapter banner, Mood board, Avatar, animated banners, Other)
expected turnaround time (1-3 days, 4-7 days, 1-2 weeks, 2-3 weeks)
examples (image links)
AOB (freetext)
see also #27 | non_main | add uml for artist settings status active inactive auto inactive when added by other user as artist contributor to their story even if artist tag wasn t added before available art type story banner chapter banner mood board avatar animated banners other expected turnaround time days days weeks weeks examples image links aob freetext see also | 0 |
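The fields enumerated above map naturally onto a small data model. The sketch below is purely illustrative — the class names, field names, and auto-activation hook are invented, not taken from the Elysian Muse codebase:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional


class ArtType(Enum):
    STORY_BANNER = "Story banner"
    CHAPTER_BANNER = "Chapter banner"
    MOOD_BOARD = "Mood board"
    AVATAR = "Avatar"
    ANIMATED_BANNERS = "Animated banners"
    OTHER = "Other"


class Turnaround(Enum):
    DAYS_1_3 = "1-3 days"
    DAYS_4_7 = "4-7 days"
    WEEKS_1_2 = "1-2 weeks"
    WEEKS_2_3 = "2-3 weeks"


@dataclass
class ArtistSettings:
    active: bool = False                                   # active/inactive
    art_types: List[ArtType] = field(default_factory=list)
    turnaround: Optional[Turnaround] = None
    examples: List[str] = field(default_factory=list)      # image links
    aob: str = ""                                          # free text

    def added_as_contributor(self) -> None:
        # Auto-activate when another user adds this artist to their story,
        # even if the artist tag was not set beforehand.
        self.active = True
```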
2,239 | 7,888,543,604 | IssuesEvent | 2018-06-27 22:34:15 | react-navigation/react-navigation | https://api.github.com/repos/react-navigation/react-navigation | closed | how to set initialRouteName based on props received in navigation in v2.0 | needs more info needs repro needs response from maintainer | This is not a bug but need to know how to implement this after upgrading to React Navigation V2.0
Here is the long thread on the same issue for React Navigation 1.0: #458
Requirements:
How to set initialRouteName for TabNavigator/ StackNavigator based on the props. I need to set initial Tab based on some conditions. Previously I was able to handle the props inside a React Component and Compose the TabNavigator inside Component based on the props "defaultTab" from parent.
As in RN@2.0, only one navigator should be rendered inside component and if I use previous method I got this error.
> console.error: "You should only render one navigator explicitly in your app, and other navigators should by rendered by including them in that navigator. Full details at: https://v2.reactnavigation.org/docs/common-mistakes.html#explicitly-rendering-more-than-one-navigator"
If I follow "Common-mistakes" and use the alternative method provided by exposing routes I am unable to use the props received to set the initialRouteName.
previously I used this approach [458](https://github.com/react-navigation/react-navigation/issues/458#issuecomment-369129600)
I am at very beginner level and this project does not use Redux. Here is the exact code.
```
export default class CategoryChooser extends Component {
componentWillMount () {
    let {defaultTab} = this.props.navigation.state.params;
if (defaultTab == undefined) {
defaultTab = 'ExpenseCategoryTab';
}
CategoryChooseStack = TabNavigator({
IncomeCategoryTab: {
screen: IncomeCategoryTab,
navigationOptions: {
title: 'Income',
}
},
ExpenseCategoryTab: {
screen: ExpenseCategoryTab,
navigationOptions: {
title: 'Expense',
}
},
TransferTab: {
screen: WalletChooserTab,
navigationOptions: {
title: 'Transfer',
}
},
}, {
initialRouteName: defaultTab,
backBehavior: 'none',
navigationOptions: {
//....
},
});
}
render() {
return <CategoryChooseStack screenProps={{...this.props.navigation.state.params, pn: this.props.navigation}}/>
}
}
``` | True | how to set initialRouteName based on props received in navigation in v2.0 - This is not a bug but need to know how to implement this after upgrading to React Navigation V2.0
Here is the long thread on the same issue for React Navigation 1.0: #458
Requirements:
How to set initialRouteName for TabNavigator/ StackNavigator based on the props. I need to set initial Tab based on some conditions. Previously I was able to handle the props inside a React Component and Compose the TabNavigator inside Component based on the props "defaultTab" from parent.
As in RN@2.0, only one navigator should be rendered inside component and if I use previous method I got this error.
> console.error: "You should only render one navigator explicitly in your app, and other navigators should by rendered by including them in that navigator. Full details at: https://v2.reactnavigation.org/docs/common-mistakes.html#explicitly-rendering-more-than-one-navigator"
If I follow "Common-mistakes" and use the alternative method provided by exposing routes I am unable to use the props received to set the initialRouteName.
previously I used this approach [458](https://github.com/react-navigation/react-navigation/issues/458#issuecomment-369129600)
I am at very beginner level and this project does not use Redux. Here is the exact code.
```
export default class CategoryChooser extends Component {
componentWillMount () {
    let {defaultTab} = this.props.navigation.state.params;
if (defaultTab == undefined) {
defaultTab = 'ExpenseCategoryTab';
}
CategoryChooseStack = TabNavigator({
IncomeCategoryTab: {
screen: IncomeCategoryTab,
navigationOptions: {
title: 'Income',
}
},
ExpenseCategoryTab: {
screen: ExpenseCategoryTab,
navigationOptions: {
title: 'Expense',
}
},
TransferTab: {
screen: WalletChooserTab,
navigationOptions: {
title: 'Transfer',
}
},
}, {
initialRouteName: defaultTab,
backBehavior: 'none',
navigationOptions: {
//....
},
});
}
render() {
return <CategoryChooseStack screenProps={{...this.props.navigation.state.params, pn: this.props.navigation}}/>
}
}
``` | main | how to set initialroutename based on props received in navigation in this is not a bug but need to know how to implement this after upgrading to react navigation here is the long thread on same issue for react navigation requirements how to set initialroutename for tabnavigator stacknavigator based on the props i need to set initial tab based on some conditions previously i was able to handle the props inside a react component and compose the tabnavigator inside component based on the props defaulttab from parent as in rn only one navigator should be rendered inside component and if i use previous method i got this error console error you should only render one navigator explicitly in your app and other navigators should by rendered by including them in that navigator full details at if i follow common mistakes and use the alternative method provided by exposing routes i am unable to use the props received to set the initialroutename previously i used this approach i am at very beginner level and this project does not use redux here is the exact code export default class categorychooser extends component componentwillmount const defaulttab this props navigation state params if defaulttab undefined defaulttab expensecategorytab categorychoosestack tabnavigator incomecategorytab screen incomecategorytab navigationoptions title income expensecategorytab screen expensecategorytab navigationoptions title expense transfertab screen walletchoosertab navigationoptions title transfer initialroutename defaulttab backbehavior none navigationoptions render return | 1 |
1,275 | 5,397,626,662 | IssuesEvent | 2017-02-27 15:07:11 | ansible/ansible | https://api.github.com/repos/ansible/ansible | closed | redhat_subscription should not allow attaching subscriptions by name | affects_2.2 bug_report module waiting_on_maintainer | <!---
Verify first that your issue/request is not already reported on GitHub.
Also test if the latest release, and master branch are affected too.
-->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the module/plugin/task/feature -->
redhat_subscription
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes below -->
```
ansible 2.2.0.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say "N/A" for anything that is not platform-specific.
-->
RHEL 7.3
##### SUMMARY
<!--- Explain the problem briefly -->
redhat_subscription manager is overloading the value type of the original subscription manager. This causes confusion and over-consumption of entitlements.
The official Red Hat Subscription Manager allows attaching subscriptions by Pool ID, NOT by Pool Name.
```
[root@rhel7 ~]# subscription-manager attach --help
Usage: subscription-manager attach [OPTIONS]
Attach a specified subscription to the registered system
Options:
-h, --help show this help message and exit
--proxy=PROXY_URL proxy URL in the form of proxy_hostname:proxy_port
--proxyuser=PROXY_USER
user for HTTP proxy with basic authentication
--proxypassword=PROXY_PASSWORD
password for HTTP proxy with basic authentication
--pool=POOL the ID of the pool to attach (can be specified more
than once)
--quantity=QUANTITY number of subscriptions to attach
--auto Automatically attach compatible subscriptions to this
system. This is the default action.
--servicelevel=SERVICE_LEVEL
service level to apply to this system
--file=FILE A file from which to read pool IDs. If a hyphen is
provided, pool IDs will be read from stdin.
[root@rhel7 ~]#
```
Because this module is allowing a user to specify a name and trying to do its own logic and parsing of available Pool IDs, it is incorrectly attempting to extend the capabilities of the official subscription manager. It also does it poorly, allowing multiple subscriptions to be consumed when they were not desired.
Large customers will have many duplicate subscriptions of the same name due to growth and purchases made at different times. This could have a very large impact to large customers.
The feature to enable using the Pool ID is a good start, but we also must remove the capability of adding by Name. This Ansible module should not attempt to do more than the original tool allows.
https://github.com/ansible/ansible-modules-core/pull/4603
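The over-consumption described above is easy to see in a sketch: matching pools by a regex against the pool name attaches every duplicate pool, while matching by pool ID selects exactly one. The pool data and helper names below are invented for illustration; this is not the module's actual code.

```python
import re

# Invented pool data: large accounts often hold several pools with the
# same product name, purchased at different times.
available_pools = [
    {"id": "8a85f981", "name": "Employee SKU"},
    {"id": "8a85f982", "name": "Employee SKU"},
    {"id": "8a85f983", "name": "Employee SKU"},
    {"id": "8a85f984", "name": "Other SKU"},
]


def pools_by_name(pattern, pools):
    """What regex name matching effectively does: every pool whose name
    matches the pattern gets attached."""
    return [p["id"] for p in pools if re.search(pattern, p["name"])]


def pool_by_id(pool_id, pools):
    """What `subscription-manager attach --pool=ID` does: at most one match."""
    return [p["id"] for p in pools if p["id"] == pool_id]
```

With the `'^Employee SKU$'` pattern from the playbook above, the name-based lookup returns three pool IDs — mirroring the three `subscribed_pool_ids` in the actual results — whereas an ID-based lookup returns exactly one.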
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem, using a minimal test-case.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- redhat_subscription:
state: present
username: "{{ rhn_username }}"
password: "{{ rhn_password }}"
pool: '^Employee SKU$'
# pool: "{{ POOL_ID }}"
when: ansible_distribution == 'RedHat'
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
Only a single Pool ID attached, presumably the first one matched. But certainly not ALL matches.
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
I have altered the values below to protect our actual Pool IDs (XXX, YYY, ZZZ). I can provide actual examples privately.
<!--- Paste verbatim command output between quotes below -->
```
<util7vm> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r -tt util7vm '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-pyeuamdvwtzvyzuxvbczhjotjfyacqxn; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1481978149.12-197063747409681/redhat_subscription.py; rm -rf "/root/.ansible/tmp/ansible-tmp-1481978149.12-197063747409681/" > /dev/null 2>&1'"'"'"'"'"'"'"'"' && sleep 0'"'"''
changed: [util7vm] => {
"changed": true,
"invocation": {
"module_args": {
"activationkey": null,
"autosubscribe": false,
"consumer_id": null,
"consumer_name": null,
"consumer_type": null,
"environment": null,
"force_register": false,
"org_id": null,
"password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"pool": "^Employee SKU$",
"rhsm_baseurl": "https://cdn.redhat.com",
"server_hostname": "subscription.rhn.redhat.com",
"server_insecure": "0",
"state": "present",
"username": "my-rhn-support-username"
},
"module_name": "redhat_subscription"
},
"subscribed_pool_ids": [
"XXXXXXXXXXXXXXXXXXXXXXXXXXX",
"YYYYYYYYYYYYYYYYYYYYYYYY",
"ZZZZZZZZZZZZZZZZZZZZZZZZZZZ"
],
"unsubscribed_serials": []
}
PLAY RECAP *********************************************************************
util7vm : ok=2 changed=1 unreachable=0 failed=0
``` | True | redhat_subscription should not allow attaching subscriptions by name - <!---
Verify first that your issue/request is not already reported on GitHub.
Also test if the latest release, and master branch are affected too.
-->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the module/plugin/task/feature -->
redhat_subscription
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes below -->
```
ansible 2.2.0.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say "N/A" for anything that is not platform-specific.
-->
RHEL 7.3
##### SUMMARY
<!--- Explain the problem briefly -->
redhat_subscription manager is overloading the value type of the original subscription manager. This causes confusion and over-consumption of entitlements.
The official Red Hat Subscription Manager allows attaching subscriptions by Pool ID, NOT by Pool Name.
```
[root@rhel7 ~]# subscription-manager attach --help
Usage: subscription-manager attach [OPTIONS]
Attach a specified subscription to the registered system
Options:
-h, --help show this help message and exit
--proxy=PROXY_URL proxy URL in the form of proxy_hostname:proxy_port
--proxyuser=PROXY_USER
user for HTTP proxy with basic authentication
--proxypassword=PROXY_PASSWORD
password for HTTP proxy with basic authentication
--pool=POOL the ID of the pool to attach (can be specified more
than once)
--quantity=QUANTITY number of subscriptions to attach
--auto Automatically attach compatible subscriptions to this
system. This is the default action.
--servicelevel=SERVICE_LEVEL
service level to apply to this system
--file=FILE A file from which to read pool IDs. If a hyphen is
provided, pool IDs will be read from stdin.
[root@rhel7 ~]#
```
Because this module is allowing a user to specify a name and trying to do its own logic and parsing of available Pool IDs, it is incorrectly attempting to extend the capabilities of the official subscription manager. It also does it poorly, allowing multiple subscriptions to be consumed when they were not desired.
Large customers will have many duplicate subscriptions of the same name due to growth and purchases made at different times. This could have a very large impact to large customers.
The feature to enable using the Pool ID is a good start, but we also must remove the capability of adding by Name. This Ansible module should not attempt to do more than the original tool allows.
https://github.com/ansible/ansible-modules-core/pull/4603
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem, using a minimal test-case.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- redhat_subscription:
state: present
username: "{{ rhn_username }}"
password: "{{ rhn_password }}"
pool: '^Employee SKU$'
# pool: "{{ POOL_ID }}"
when: ansible_distribution == 'RedHat'
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
Only a single Pool ID attached, presumably the first one matched. But certainly not ALL matches.
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
I have altered the values below to protect our actual Pool IDs (XXX, YYY, ZZZ). I can provide actual examples privately.
<!--- Paste verbatim command output between quotes below -->
```
<util7vm> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r -tt util7vm '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-pyeuamdvwtzvyzuxvbczhjotjfyacqxn; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1481978149.12-197063747409681/redhat_subscription.py; rm -rf "/root/.ansible/tmp/ansible-tmp-1481978149.12-197063747409681/" > /dev/null 2>&1'"'"'"'"'"'"'"'"' && sleep 0'"'"''
changed: [util7vm] => {
"changed": true,
"invocation": {
"module_args": {
"activationkey": null,
"autosubscribe": false,
"consumer_id": null,
"consumer_name": null,
"consumer_type": null,
"environment": null,
"force_register": false,
"org_id": null,
"password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"pool": "^Employee SKU$",
"rhsm_baseurl": "https://cdn.redhat.com",
"server_hostname": "subscription.rhn.redhat.com",
"server_insecure": "0",
"state": "present",
"username": "my-rhn-support-username"
},
"module_name": "redhat_subscription"
},
"subscribed_pool_ids": [
"XXXXXXXXXXXXXXXXXXXXXXXXXXX",
"YYYYYYYYYYYYYYYYYYYYYYYY",
"ZZZZZZZZZZZZZZZZZZZZZZZZZZZ"
],
"unsubscribed_serials": []
}
PLAY RECAP *********************************************************************
util7vm : ok=2 changed=1 unreachable=0 failed=0
``` | main | redhat subscription should not allow attaching subscriptions by name verify first that your issue request is not already reported on github also test if the latest release and master branch are affected too issue type bug report component name redhat subscription ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say βn aβ for anything that is not platform specific rhel summary redhat subscription manager is overloading the value type of the original subscription manager this causes confusion and over consumption of entitlements the official red hat subscription manager allows attaching subscriptions by pool id not by pool name subscription manager attach help usage subscription manager attach attach a specified subscription to the registered system options h help show this help message and exit proxy proxy url proxy url in the form of proxy hostname proxy port proxyuser proxy user user for http proxy with basic authentication proxypassword proxy password password for http proxy with basic authentication pool pool the id of the pool to attach can be specified more than once quantity quantity number of subscriptions to attach auto automatically attach compatible subscriptions to this system this is the default action servicelevel service level service level to apply to this system file file a file from which to read pool ids if a hyphen is provided pool ids will be read from stdin because this module is allowing a user to specify a name and trying to do its own logic and parsing of available pool ids it is incorrectly attempting to extend the capabilities of the official subscription manager it also does it poorly allowing multiple subscriptions to be consumed when they were not 
desired large customers will have many duplicate subscriptions of the same name due to growth and purchases made at different times this could have a very large impact to large customers the feature to enable using the pool id is a good start but we also must remove the capability of adding by name this ansible module should not attempt to do more than the original tool allows steps to reproduce for bugs show exactly how to reproduce the problem using a minimal test case for new features show how the feature would be used yaml redhat subscription state present username rhn username password rhn password pool employee sku pool pool id when ansible distribution redhat expected results only a single pool id attached presumably the first one matched but certainly not all matches actual results i have altered the values below to protect our actual pool ids xxx yyy zzz i can provide actual examples privately ssh exec ssh c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o connecttimeout o controlpath root ansible cp ansible ssh h p r tt bin sh c sudo h s n u root bin sh c echo become success pyeuamdvwtzvyzuxvbczhjotjfyacqxn usr bin python root ansible tmp ansible tmp redhat subscription py rm rf root ansible tmp ansible tmp dev null sleep changed changed true invocation module args activationkey null autosubscribe false consumer id null consumer name null consumer type null environment null force register false org id null password value specified in no log parameter pool employee sku rhsm baseurl server hostname subscription rhn redhat com server insecure state present username my rhn support username module name redhat subscription subscribed pool ids xxxxxxxxxxxxxxxxxxxxxxxxxxx yyyyyyyyyyyyyyyyyyyyyyyy zzzzzzzzzzzzzzzzzzzzzzzzzzz unsubscribed serials play recap ok changed unreachable failed | 1 |
21,928 | 30,446,559,002 | IssuesEvent | 2023-07-15 18:48:28 | h4sh5/pypi-auto-scanner | https://api.github.com/repos/h4sh5/pypi-auto-scanner | opened | pyutils 0.0.1b8 has 2 GuardDog issues | guarddog typosquatting silent-process-execution | https://pypi.org/project/pyutils
https://inspector.pypi.io/project/pyutils
```{
"dependency": "pyutils",
"version": "0.0.1b8",
"result": {
"issues": 2,
"errors": {},
"results": {
"typosquatting": "This package closely ressembles the following package names, and might be a typosquatting attempt: pytils, python-utils",
"silent-process-execution": [
{
"location": "pyutils/exec_utils.py/pyutils/exec_utils.py:204",
"code": " subproc = subprocess.Popen(\n args,\n stdin=subprocess.DEVNULL,\n stdout=subprocess.DEVNULL,\n stderr=subprocess.DEVNULL,\n )",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
}
]
},
"path": "/tmp/tmp_et5qdoe/pyutils"
}
}``` | 1.0 | pyutils 0.0.1b8 has 2 GuardDog issues - https://pypi.org/project/pyutils
https://inspector.pypi.io/project/pyutils
```{
"dependency": "pyutils",
"version": "0.0.1b8",
"result": {
"issues": 2,
"errors": {},
"results": {
"typosquatting": "This package closely ressembles the following package names, and might be a typosquatting attempt: pytils, python-utils",
"silent-process-execution": [
{
"location": "pyutils/exec_utils.py/pyutils/exec_utils.py:204",
"code": " subproc = subprocess.Popen(\n args,\n stdin=subprocess.DEVNULL,\n stdout=subprocess.DEVNULL,\n stderr=subprocess.DEVNULL,\n )",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
}
]
},
"path": "/tmp/tmp_et5qdoe/pyutils"
}
}``` | non_main | pyutils has guarddog issues dependency pyutils version result issues errors results typosquatting this package closely ressembles the following package names and might be a typosquatting attempt pytils python utils silent process execution location pyutils exec utils py pyutils exec utils py code subproc subprocess popen n args n stdin subprocess devnull n stdout subprocess devnull n stderr subprocess devnull n message this package is silently executing an external binary redirecting stdout stderr and stdin to dev null path tmp tmp pyutils | 0 |
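The typosquatting finding above rests on name similarity between the scanned package and well-known package names. A minimal stdlib sketch of the underlying idea (GuardDog's actual rule set is more elaborate):

```python
def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance between two strings.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

# "pyutils" is a single edit away from "pytils", which is why a scanner
# flags it as a possible typosquatting attempt.
print(levenshtein("pyutils", "pytils"))  # 1
```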
159,651 | 25,028,337,174 | IssuesEvent | 2022-11-04 10:05:56 | ProjektAdLer/2D_3D_AdLer | https://api.github.com/repos/ProjektAdLer/2D_3D_AdLer | closed | Responsive Design - Allgemeine Anpassungen | design | **User Story**
As a _user role_ I want _functionality_ so that _value_ for the user
**Acceptance criteria**
- [x] Add breakpoints to the fonts
- [x] Add breakpoints to the icons
- [x] Give texts enough white space
- [x] As little text as possible, as much as necessary
**Definition of Ready**
- [x] User story is small enough for the sprint
- [x] User story is clearly understandable for every developer involved
- [x] User story effort is estimated
- [x] User story has acceptance criteria
- [x] User story adds value for the product or the development
- [x] User story origin is known (stakeholder)
- [x] User story is assigned to a release
**Definition of Done**
- [ ] All acceptance criteria are fulfilled
- [ ] The implementation is on a prototype branch on GitHub
- [ ] Unit test coverage must be greater than 92%
- [ ] All tests must pass
- [ ] Prototype documentation has been created (wiki (state the version!) (link the changelog up to then!))
- [ ] Documentation of the approach has been created (ZenHub comment)
- [ ] There are no known bugs
- [ ] The realization of the user story has been successfully accepted by the product owner
| 1.0 | Responsive Design - Allgemeine Anpassungen - **User Story**
As a _user role_ I want _functionality_ so that _value_ for the user
**Acceptance criteria**
- [x] Add breakpoints to the fonts
- [x] Add breakpoints to the icons
- [x] Give texts enough white space
- [x] As little text as possible, as much as necessary
**Definition of Ready**
- [x] User story is small enough for the sprint
- [x] User story is clearly understandable for every developer involved
- [x] User story effort is estimated
- [x] User story has acceptance criteria
- [x] User story adds value for the product or the development
- [x] User story origin is known (stakeholder)
- [x] User story is assigned to a release
**Definition of Done**
- [ ] All acceptance criteria are fulfilled
- [ ] The implementation is on a prototype branch on GitHub
- [ ] Unit test coverage must be greater than 92%
- [ ] All tests must pass
- [ ] Prototype documentation has been created (wiki (state the version!) (link the changelog up to then!))
- [ ] Documentation of the approach has been created (ZenHub comment)
- [ ] There are no known bugs
- [ ] The realization of the user story has been successfully accepted by the product owner
| non_main | responsive design allgemeine anpassungen user story als benutzerrolle möchte ich funktionalität sodass wert für den nutzer akzeptanzkriterien schriften mit breakpoints versehen icons mit breaktpoints versehen texten genug weißraum geben so wenig text wie möglich so viel wie nötig definition of ready user story ist klein genug für sprint user story ist für jeden beteiligten entwickler klar verständlich user story aufwand ist geschätzt user story hat akzeptanzkriterien user story hat einen mehrwert für das produkt oder die entwicklung user story ursprung ist bekannt stakeholder user story ist release zugewiesen definition of done alle akzeptanzkriterien sind erfüllt die implementierung liegt in github auf einem prototype branch unittestabdeckung muss größer sein alle tests müssen bestanden sein prototype dokumentation wurde angelegt wiki version mit angeben changelog bis dahin verlinken dokumentation des vorgehens wurde angelegt zenhub kommentar es gibt keine bekannten bugs die realisierung der user story wurde erfolgreich durch den product owner abgenommen | 0
3,834 | 16,685,777,994 | IssuesEvent | 2021-06-08 07:53:57 | hbz/lobid-resources | https://api.github.com/repos/hbz/lobid-resources | closed | Load Webhook configs dynamically | ALMA maintainance | Atm the `webhook` configs in https://github.com/hbz/lobid-resources/blob/master/web/conf/resources.conf are only loaded when the play app is started. Thus:
1. if a config is changed the app has to be restarted
2. one cannot be 100% certain that the content of the file is indeed active at any point
It's a hassle to always restart the app.
Solution is to dynamically reload the webhook config when a webhook is invoked. | True | Load Webhook configs dynamically - Atm the `webhook` configs in https://github.com/hbz/lobid-resources/blob/master/web/conf/resources.conf are only loaded when the play app is started. Thus:
1. if a config is changed the app has to be restarted
2. one cannot be 100% certain that the content of the file is indeed active at any point
It's a hassle to always restart the app.
Solution is to dynamically reload the webhook config when a webhook is invoked. | main | load webhook configs dynamically atm the webhook configs in are only loaded when the play app is started thus if a config is changed the app has to be restarted one cannot certain that the content of the file is indeed active at any point it s a hassle to always restart the app solution is to dynamically reload the webhook config when a webhook is invoked | 1 |
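The fix suggested above, reloading the config lazily whenever a webhook is invoked, can be sketched as follows (file name and JSON format are hypothetical; the real app reads webhook settings from resources.conf in a Play application):

```python
import json
import os
import tempfile
import time

class WebhookConfig:
    """Reload the config file lazily on every webhook invocation, so a
    changed file takes effect without restarting the app."""

    def __init__(self, path):
        self.path = path
        self._mtime = None
        self._data = {}

    def get(self):
        mtime = os.stat(self.path).st_mtime
        if mtime != self._mtime:      # changed on disk, or first call
            with open(self.path) as f:
                self._data = json.load(f)
            self._mtime = mtime
        return self._data

# Demo: edit the file between two "webhook invocations".
path = os.path.join(tempfile.mkdtemp(), "webhooks.json")
with open(path, "w") as f:
    json.dump({"token": "old"}, f)
config = WebhookConfig(path)
first = config.get()
with open(path, "w") as f:
    json.dump({"token": "new"}, f)
os.utime(path, (time.time() + 5, time.time() + 5))  # force a distinct mtime
second = config.get()
print(first, second)  # {'token': 'old'} {'token': 'new'}
```

Checking the file's mtime on each call keeps the common path cheap (one stat) while still guaranteeing that every invocation sees the current file content.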
349,022 | 31,767,522,473 | IssuesEvent | 2023-09-12 09:40:17 | tanisecarvalho/i-was-there | https://api.github.com/repos/tanisecarvalho/i-was-there | closed | TASK: Validators | task TESTING | - [ ] Use HTML Validator
- [ ] Use CSS Validator
- [ ] Use Python Validator
- [ ] Use Lighthouse on Chrome Dev Tools | 1.0 | TASK: Validators - - [ ] Use HTML Validator
- [ ] Use CSS Validator
- [ ] Use Python Validator
- [ ] Use Lighthouse on Chrome Dev Tools | non_main | task validators use html validator use css validator use python validator use lighthouse on chrome dev tools | 0 |
3,381 | 13,099,131,675 | IssuesEvent | 2020-08-03 20:56:02 | backdrop-ops/contrib | https://api.github.com/repos/backdrop-ops/contrib | closed | Application to join: earlyburg | Maintainer application | Hello and welcome to the contrib application process! We're happy to have you :)
Before we begin, please note these 3 requirements for new contrib projects:
- [x] Include a README.md file containing license and maintainer information.
- [x] Include a LICENSE.txt file.
- [x] If porting a Drupal 7 project, Maintain the Git history from Drupal.
When your project meets those three criteria, please continue with the following:
**The name of your module, theme, or layout**
Search API Pages
**(Optional) Post a link here to an issue in the drupal.org queue notifying Drupal 7 maintainers that you are working on a Backdrop port of their project**
did not actually do that - my apologies
**OR (option #2) If you have contributed code to Backdrop core/contrib projects please provide links to pull requests/commits**
https://github.com/backdrop-contrib/entity_plus/pull/31
**OR (option #3) If you do not intend to contribute code, but would like to update documentation, manage issue queues, etc, please tag an existing contrib group member so they can post their recommendation**
N/A
**If you have chosen option #3 above, do you agree to undergo the same review process for coders (above) should you ever decide to contribute code?**
N/A
**If you have chosen option #2 or #1 above, do you agree to the [Backdrop Contributed Project Agreement](https://github.com/backdrop-ops/contrib#backdrop-contributed-project-agreement)**
YES
**Post a link to your new Backdrop project under your own GitHub account**
https://github.com/earlyburg/search_api_pages
also I have this port of my Drupal 7 module found here :
https://www.drupal.org/project/random_frontpage
https://github.com/earlyburg/RandomFrontpage
Once we have a chance to review your project, we may provide feedback that's meant to be helpful. If everything checks out, you will be invited to the @backdrop-contrib group, and will be able to transfer the project π
| True | Application to join: earlyburg - Hello and welcome to the contrib application process! We're happy to have you :)
Before we begin, please note these 3 requirements for new contrib projects:
- [x] Include a README.md file containing license and maintainer information.
- [x] Include a LICENSE.txt file.
- [x] If porting a Drupal 7 project, Maintain the Git history from Drupal.
When your project meets those three criteria, please continue with the following:
**The name of your module, theme, or layout**
Search API Pages
**(Optional) Post a link here to an issue in the drupal.org queue notifying Drupal 7 maintainers that you are working on a Backdrop port of their project**
did not actually do that - my apologies
**OR (option #2) If you have contributed code to Backdrop core/contrib projects please provide links to pull requests/commits**
https://github.com/backdrop-contrib/entity_plus/pull/31
**OR (option #3) If you do not intend to contribute code, but would like to update documentation, manage issue queues, etc, please tag an existing contrib group member so they can post their recommendation**
N/A
**If you have chosen option #3 above, do you agree to undergo the same review process for coders (above) should you ever decide to contribute code?**
N/A
**If you have chosen option #2 or #1 above, do you agree to the [Backdrop Contributed Project Agreement](https://github.com/backdrop-ops/contrib#backdrop-contributed-project-agreement)**
YES
**Post a link to your new Backdrop project under your own GitHub account**
https://github.com/earlyburg/search_api_pages
also I have this port of my Drupal 7 module found here :
https://www.drupal.org/project/random_frontpage
https://github.com/earlyburg/RandomFrontpage
Once we have a chance to review your project, we may provide feedback that's meant to be helpful. If everything checks out, you will be invited to the @backdrop-contrib group, and will be able to transfer the project π
| main | application to join earlyburg hello and welcome to the contrib application process we re happy to have you before we begin please note these requirements for new contrib projects include a readme md file containing license and maintainer information include a license txt file if porting a drupal project maintain the git history from drupal when your project meets those three criteria please continue with the following the name of your module theme or layout search api pages optional post a link here to an issue in the drupal org queue notifying drupal maintainers that you are working on a backdrop port of their project did not actually do that my apologies or option if you have contributed code to backdrop core contrib projects please provide links to pull requests commits or option if you do not intend to contribute code but would like to update documentation manage issue queues etc please tag an existing contrib group member so they can post their recommendation n a if you have chosen option above do you agree to undergo the same review process for coders above should you ever decide to contribute code n a if you have chosen option or above do you agree to the yes post a link to your new backdrop project under your own github account also i have this port of my drupal module found here once we have a chance to review your project we may provide feedback that s meant to be helpful if everything checks out you will be invited to the backdrop contrib group and will be able to transfer the project π | 1 |
141,056 | 11,388,342,223 | IssuesEvent | 2020-01-29 16:31:09 | RoboticsClubatUCF/Bowser | https://api.github.com/repos/RoboticsClubatUCF/Bowser | closed | We need to test the motor with a similar load to the actual robot | Hardware Interface Testing | We need to test the motor with a similar load to the actual robot (i.e. ~220 lbs) and calculate the effect on the RPM. We should probably set up some sort of equation to relate the load on the motor to the RPM. | 1.0 | We need to test the motor with a similar load to the actual robot - We need to test the motor with a similar load to the actual robot (i.e. ~220 lbs) and calculate the effect on the RPM. We should probably set up some sort of equation to relate the load on the motor to the RPM. | non_main | we need to test the motor with a similar load to the actual robot we need to test the motor with a similar load to the actual robot i e lbs and calculate the effect on the rpm we should probably set up some sort of equation to relate the load on the motor to the rpm | 0 |
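If bench measurements were collected, a first-order load-to-RPM relation could be fitted with ordinary least squares. All numbers below are invented for illustration, and a real motor need not behave linearly:

```python
def fit_line(points):
    """Ordinary least-squares fit rpm = a * load + b from (load, rpm) pairs."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Hypothetical bench measurements: (load in lbs, measured RPM).
measurements = [(0, 100.0), (50, 95.0), (100, 90.0), (150, 85.0)]
a, b = fit_line(measurements)
print(a, b)          # slope and intercept, here roughly -0.1 and 100
print(a * 220 + b)   # extrapolated RPM at the ~220 lb robot weight, roughly 78
```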
308,230 | 9,436,601,600 | IssuesEvent | 2019-04-13 08:07:13 | wojtek2kdev/CommentaryJS | https://api.github.com/repos/wojtek2kdev/CommentaryJS | opened | Feature: URL scalar resolver | Priority: Medium Status: Accepted Status: In Progress Type: Enhancement | # Feature task
### Title: URL scalar resolver
##### Feature request issue id:
none
### Short description:
Create a resolver for the custom GraphQL scalar type URL. This scalar type defines that any returned or input data declared with this type in the schema must match a valid URL address.
| 1.0 | Feature: URL scalar resolver - # Feature task
### Title: URL scalar resolver
##### Feature request issue id:
none
### Short description:
Create a resolver for the custom GraphQL scalar type URL. This scalar type defines that any returned or input data declared with this type in the schema must match a valid URL address.
| non_main | feature url scalar resolver feature task title url scalar resolver feature request issue id none short description create resolver for custom graphql scalar type url this scalar type defines that returned data or input data which has been defined that way in the schema must returns or accepts data which matches to url address | 0 |
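The project above is JavaScript, but the validation rule such a scalar needs is language-agnostic. A Python sketch of the parse/serialize check (assumed rule: require a scheme and a network location):

```python
from urllib.parse import urlparse

def parse_url(value):
    """Sketch of a URL scalar's check: accept a value only if it parses
    as an absolute URL with a scheme and a network location."""
    if not isinstance(value, str):
        raise ValueError(f"URL must be a string, got {type(value).__name__}")
    parts = urlparse(value)
    if not parts.scheme or not parts.netloc:
        raise ValueError(f"not a valid URL: {value!r}")
    return value

print(parse_url("https://example.com/path"))  # valid values pass through unchanged
```

In a real GraphQL scalar, the same check would back all three hooks (serialize, parseValue, parseLiteral), so the contract holds for both output and input data.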
3,850 | 16,983,926,000 | IssuesEvent | 2021-06-30 12:23:04 | laminas/laminas-cache-storage-adapter-redis | https://api.github.com/repos/laminas/laminas-cache-storage-adapter-redis | opened | `RedisArray` implementation | Awaiting Maintainer Response Enhancement | ### Feature Request
<!-- Fill in the relevant information below to help triage your issue. -->
| Q | A
|------------ | ------
| New Feature | yes
#### Summary
<!--
Provide a summary of the feature you would like to see implemented.
Ideally, create an RFC on our forums (https://discourse.laminas.dev/c/contributors)
to get feedback and flesh out the design, and link to it here.
-->
Back in 2018, I've requested permission to migrate `RedisArray` implementation from 3rd-party library: https://github.com/smoke/zf2-cache-storage-redis-array/pull/1#issuecomment-416457191
As there are no integration tests available, e.g., I'd say I write it from scratch while referencing some parts of the 3rd-party library.
The question is - how do we proceed with copyright? /cc @weierophinney
When I reference some parts of the 3rd-party library but write most of it by myself, is there a strong need to label that? | True | `RedisArray` implementation - ### Feature Request
<!-- Fill in the relevant information below to help triage your issue. -->
| Q | A
|------------ | ------
| New Feature | yes
#### Summary
<!--
Provide a summary of the feature you would like to see implemented.
Ideally, create an RFC on our forums (https://discourse.laminas.dev/c/contributors)
to get feedback and flesh out the design, and link to it here.
-->
Back in 2018, I've requested permission to migrate `RedisArray` implementation from 3rd-party library: https://github.com/smoke/zf2-cache-storage-redis-array/pull/1#issuecomment-416457191
As there are no integration tests available, e.g., I'd say I write it from scratch while referencing some parts of the 3rd-party library.
The question is - how do we proceed with copyright? /cc @weierophinney
When I reference some party of the 3rd-party library but write most of it by myself, is there a strong need to label that? | main | redisarray implementation feature request q a new feature yes summary provide a summary of the feature you would like to see implemented ideally create an rfc on our forums to get feedback and flesh out the design and link to it here back in i ve requested permission to migrate redisarray implementation from party library as there are no integration tests available e g i d say i write it from scratch while referencing some parts of the party library the question is how do we proceed with copyright cc weierophinney when i reference some party of the party library but write most of it by myself is there a strong need to label that | 1 |
20,311 | 15,241,503,736 | IssuesEvent | 2021-02-19 08:32:38 | MatterMiners/cobald | https://api.github.com/repos/MatterMiners/cobald | opened | Add programmatic configuration schemas | enhancement help wanted usability | The configuration is currently treated as a blob of raw YAML/JSON equivalent data. This makes validation and error reporting difficult, as keys/values are checked on usage.
``cobald`` should provide means to define expected schemas for configuration. This would allow to efficiently detect and report errors. Schemas would also allow to generated documentation; this is not required for the base functionality.
Schemas perhaps should (allow to) be automatically generated from the objects taking the configuration.
Related issues:
- https://github.com/MatterMiners/tardis/issues/152 motivates this issue, but functionality is not plugin specific.
- https://github.com/MatterMiners/cobald/issues/54 on pretty configuration error reporting
External packages:
- The [``pydantic``](https://pydantic-docs.helpmanual.io) package for validating runtime types based on annotations. | True | Add programmatic configuration schemas - The configuration is currently treated as a blob of raw YAML/JSON equivalent data. This makes validation and error reporting difficult, as keys/values are checked on usage.
``cobald`` should provide means to define expected schemas for configuration. This would allow to efficiently detect and report errors. Schemas would also allow to generated documentation; this is not required for the base functionality.
Schemas perhaps should (allow to) be automatically generated from the objects taking the configuration.
Related issues:
- https://github.com/MatterMiners/tardis/issues/152 motivates this issue, but functionality is not plugin specific.
- https://github.com/MatterMiners/cobald/issues/54 on pretty configuration error reporting
External packages:
- The [``pydantic``](https://pydantic-docs.helpmanual.io) package for validating runtime types based on annotations. | non_main | add programmatic configuration schemas the configuration is currently treated as a blob of raw yaml json equivalent data this makes validation and error reporting difficult as keys values are checked on usage cobald should provide means to define expected schemas for configuration this would allow to efficiently detect and report errors schemas would also allow to generated documentation this is not required for the base functionality schemas perhaps should allow to be automatically generated from the objects taking the configuration related issues motivates this issue but functionality is not plugin specific on pretty configuration error reporting external packages the package for validating runtime types based on annotations | 0 |
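The annotation-driven validation that pydantic provides can be sketched with the stdlib alone. The `PoolSchema` section and its keys below are hypothetical, not cobald's actual configuration:

```python
from dataclasses import dataclass, fields

@dataclass
class PoolSchema:
    """Declarative schema for one configuration section; the expected keys
    and types are read from the annotations, pydantic-style."""
    backend: str
    workers: int

def validate(schema_cls, raw: dict):
    errors = []
    for f in fields(schema_cls):
        if f.name not in raw:
            errors.append(f"missing key: {f.name}")
        elif not isinstance(raw[f.name], f.type):
            errors.append(f"{f.name}: expected {f.type.__name__}, "
                          f"got {type(raw[f.name]).__name__}")
    for key in raw.keys() - {f.name for f in fields(schema_cls)}:
        errors.append(f"unknown key: {key}")
    if errors:
        # Collect every problem before failing, so one config pass
        # reports all errors at once instead of one per run.
        raise ValueError("; ".join(errors))
    return schema_cls(**raw)

print(validate(PoolSchema, {"backend": "htcondor", "workers": 4}))
```

Validating the whole blob up front, instead of on first use, is exactly what allows efficient detection and readable error reports, and the schema classes double as a place to generate documentation from.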
95,113 | 19,670,473,403 | IssuesEvent | 2022-01-11 06:27:54 | vmware-tanzu/cluster-api-provider-bringyourownhost | https://api.github.com/repos/vmware-tanzu/cluster-api-provider-bringyourownhost | closed | Integration test for ByoHost webhook | area/code-quality area/webhooks | An attempt was made earlier to write integration test for byohost webhook (which is more useful in checking byohost deletion under different circumstances). Here is the code snippet -
https://github.com/vmware-tanzu/cluster-api-provider-bringyourownhost/blob/970c5de3c5100f8fb93b6ac85516e6ef735261ec/apis/infrastructure/v1beta1/byohost_webhook_test.go#L19-L21
We are currently ignoring this test by marking it as `XContext`. Debug the issue and enable the test and do any modifications as necessary. | 1.0 | Integration test for ByoHost webhook - An attempt was made earlier to write integration test for byohost webhook (which is more useful in checking byohost deletion under different circumstances). Here is the code snippet -
https://github.com/vmware-tanzu/cluster-api-provider-bringyourownhost/blob/970c5de3c5100f8fb93b6ac85516e6ef735261ec/apis/infrastructure/v1beta1/byohost_webhook_test.go#L19-L21
We are currently ignoring this test by marking it as `XContext`. Debug the issue and enable the test and do any modifications as necessary. | non_main | integration test for byohost webhook an attempt was made earlier to write integration test for byohost webhook which is more useful in checking byohost deletion under different circumstances here is the code snippet we are currently ignoring this test by marking it as xcontext debug the issue and enable the test and do any modifications as necessary | 0 |
14,979 | 8,722,619,785 | IssuesEvent | 2018-12-09 14:20:25 | Beep6581/RawTherapee | https://api.github.com/repos/Beep6581/RawTherapee | opened | Closing RT is slow on Windows when cache folder contains a lot of files | patch provided performance | On my win7 I have the RT cache folder on a SSD.
On closing, RT scans the folder and needs about 1 ms per 10 files, which adds up to 2 seconds if the max number of cache entries (set to 20000 on my system) is reached.
The following patch does the following (on Windows):
1) It counts the files in the cache folder without getting the file names and the timestamps.
This is about 10 times faster than getting the file names and timestamps, which are only needed when the number of files is greater than the max. number of cache entries. If the number of files is less than or equal to the max. number of cache entries, it returns.
2) If the number of files is greater than the max. number of cache entries, it continues with the same processing as before the patch.
That means in case 1) closing RT is more than 10 times faster, while in case 2) it's less than 10% slower
Measured with 10082 files in cache folder:
before patch: 960 ms
after patch: 75 ms
```diff
diff --git a/rtgui/cachemanager.cc b/rtgui/cachemanager.cc
index 5f73e9e0f..f33a43c96 100644
--- a/rtgui/cachemanager.cc
+++ b/rtgui/cachemanager.cc
@@ -32,7 +32,8 @@
#include "options.h"
#include "procparamchangers.h"
#include "thumbnail.h"
-
+#define BENCHMARK
+#include "../rtengine/StopWatch.h"
namespace
{
@@ -339,6 +340,28 @@ Glib::ustring CacheManager::getCacheFileName (const Glib::ustring& subDir,
void CacheManager::applyCacheSizeLimitation () const
{
+ BENCHFUN
+#ifdef WIN32
+ // first count files without fetching file name and timestamp.
+ std::size_t numFiles = 0;
+ try {
+
+ const auto dirName = Glib::build_filename (baseDir, "data");
+ const auto dir = Gio::File::create_for_path (dirName);
+
+ auto enumerator = dir->enumerate_children ("");
+
+ while (numFiles <= options.maxCacheEntries && enumerator->next_file ()) {
+ ++numFiles;
+ }
+
+ } catch (Glib::Exception&) {}
+
+ if (numFiles <= options.maxCacheEntries) {
+ return;
+ }
+#endif
+
using FNameMTime = std::pair<Glib::ustring, Glib::TimeVal>;
std::vector<FNameMTime> files;
```
| True | Closing RT is slow on Windows when cache folder contains a lot of files - On my win7 I have the RT cache folder on a SSD.
On closing, RT scans the folder and needs about 1 ms per 10 files, which adds up to 2 seconds if the max number of cache entries (set to 20000 on my system) is reached.
The following patch does the following (on Windows):
1) It counts the files in the cache folder without getting the file names and the timestamps.
This is about 10 times faster than getting the file names and timestamps, which are only needed when the number of files is greater than the max. number of cache entries. If the number of files is less than or equal to the max. number of cache entries, it returns.
2) If the number of files is greater than the max. number of cache entries, it continues with the same processing as before the patch.
That means in case 1) closing RT is more than 10 times faster, while in case 2) it's less than 10% slower
Measured with 10082 files in cache folder:
before patch: 960 ms
after patch: 75 ms
```diff
diff --git a/rtgui/cachemanager.cc b/rtgui/cachemanager.cc
index 5f73e9e0f..f33a43c96 100644
--- a/rtgui/cachemanager.cc
+++ b/rtgui/cachemanager.cc
@@ -32,7 +32,8 @@
#include "options.h"
#include "procparamchangers.h"
#include "thumbnail.h"
-
+#define BENCHMARK
+#include "../rtengine/StopWatch.h"
namespace
{
@@ -339,6 +340,28 @@ Glib::ustring CacheManager::getCacheFileName (const Glib::ustring& subDir,
void CacheManager::applyCacheSizeLimitation () const
{
+ BENCHFUN
+#ifdef WIN32
+ // first count files without fetching file name and timestamp.
+ std::size_t numFiles = 0;
+ try {
+
+ const auto dirName = Glib::build_filename (baseDir, "data");
+ const auto dir = Gio::File::create_for_path (dirName);
+
+ auto enumerator = dir->enumerate_children ("");
+
+ while (numFiles <= options.maxCacheEntries && enumerator->next_file ()) {
+ ++numFiles;
+ }
+
+ } catch (Glib::Exception&) {}
+
+ if (numFiles <= options.maxCacheEntries) {
+ return;
+ }
+#endif
+
using FNameMTime = std::pair<Glib::ustring, Glib::TimeVal>;
std::vector<FNameMTime> files;
```
| non_main | closing rt is slow on windows when cache folder contains a lot of files on my i have the rt cache folder on a ssd at closing rt scans the folder and needs about ms per files which sums up to seconds if the max number of cache entries set to at my system is reached the following patch does the following on windows it counts the files in cachefolder without getting the file names and the timestamp this is about times faster than getting the file names and timestamps which are only needed when the number of files is greater than the max number of cache entries if the number of files is less or equal than the max number of cache entries it returns if the number of files is greater than the max number of cache settings it continues with the same processing as before patch that means in case closing rt is more than times faster while in case it s less than slower measured with files in cache folder before patch ms after patch ms diff diff git a rtgui cachemanager cc b rtgui cachemanager cc index a rtgui cachemanager cc b rtgui cachemanager cc include options h include procparamchangers h include thumbnail h define benchmark include rtengine stopwatch h namespace glib ustring cachemanager getcachefilename const glib ustring subdir void cachemanager applycachesizelimitation const benchfun ifdef first count files without fetching file name and timestamp std size t numfiles try const auto dirname glib build filename basedir data const auto dir gio file create for path dirname auto enumerator dir enumerate children while numfiles next file numfiles catch glib exception if numfiles options maxcacheentries return endif using fnamemtime std pair std vector files | 0 |
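In Python terms, the count-first strategy of the patch above looks like this (a sketch, not the actual C++/Glib code):

```python
import os
import tempfile

def apply_cache_size_limit(cache_dir, max_entries):
    """Count-first strategy: count entries without fetching names or
    timestamps; only when the limit is exceeded do the expensive
    listing with mtimes and removal of the oldest files."""
    count = 0
    with os.scandir(cache_dir) as it:
        for _ in it:
            count += 1
            if count > max_entries:
                break
    if count <= max_entries:
        return 0                                   # fast path: nothing to do

    # Slow path: now pay for names and timestamps.
    entries = sorted(os.scandir(cache_dir), key=lambda e: e.stat().st_mtime)
    stale = entries[: len(entries) - max_entries]  # oldest first
    for entry in stale:
        os.remove(entry.path)
    return len(stale)

# Demo on a throwaway directory with five files of increasing mtime.
cache = tempfile.mkdtemp()
for i in range(5):
    name = os.path.join(cache, f"entry{i}")
    open(name, "w").close()
    os.utime(name, (1000 + i, 1000 + i))
removed_when_under_limit = apply_cache_size_limit(cache, 10)
removed_when_over_limit = apply_cache_size_limit(cache, 3)
print(removed_when_under_limit, removed_when_over_limit)  # 0 2
print(sorted(os.listdir(cache)))  # ['entry2', 'entry3', 'entry4']
```

The fast path stops counting as soon as the limit is exceeded, so the common case (cache under the limit) never pays for names or stat calls at all.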
153,649 | 19,708,526,931 | IssuesEvent | 2022-01-13 01:38:15 | rvvergara/tv-series-app | https://api.github.com/repos/rvvergara/tv-series-app | opened | CVE-2021-37712 (High) detected in tar-4.4.1.tgz | security vulnerability | ## CVE-2021-37712 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tar-4.4.1.tgz</b></p></summary>
<p>tar for node</p>
<p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-4.4.1.tgz">https://registry.npmjs.org/tar/-/tar-4.4.1.tgz</a></p>
<p>
Dependency Hierarchy:
- react-scripts-1.1.4.tgz (Root Library)
- fsevents-1.2.4.tgz
- node-pre-gyp-0.10.0.tgz
- :x: **tar-4.4.1.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The npm package "tar" (aka node-tar) before versions 4.4.18, 5.0.10, and 6.1.9 has an arbitrary file creation/overwrite and arbitrary code execution vulnerability. node-tar aims to guarantee that any file whose location would be modified by a symbolic link is not extracted. This is, in part, achieved by ensuring that extracted directories are not symlinks. Additionally, in order to prevent unnecessary stat calls to determine whether a given path is a directory, paths are cached when directories are created. This logic was insufficient when extracting tar files that contained both a directory and a symlink with names containing unicode values that normalized to the same value. Additionally, on Windows systems, long path portions would resolve to the same file system entities as their 8.3 "short path" counterparts. A specially crafted tar archive could thus include a directory with one form of the path, followed by a symbolic link with a different string that resolves to the same file system entity, followed by a file using the first form. By first creating a directory, and then replacing that directory with a symlink that had a different apparent name that resolved to the same entry in the filesystem, it was thus possible to bypass node-tar symlink checks on directories, essentially allowing an untrusted tar file to symlink into an arbitrary location and subsequently extracting arbitrary files into that location, thus allowing arbitrary file creation and overwrite. These issues were addressed in releases 4.4.18, 5.0.10 and 6.1.9. The v3 branch of node-tar has been deprecated and did not receive patches for these issues. If you are still using a v3 release we recommend you update to a more recent version of node-tar. If this is not possible, a workaround is available in the referenced GHSA-qq89-hq3f-393p.
<p>Publish Date: 2021-08-31
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-37712>CVE-2021-37712</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/npm/node-tar/security/advisories/GHSA-qq89-hq3f-393p">https://github.com/npm/node-tar/security/advisories/GHSA-qq89-hq3f-393p</a></p>
<p>Release Date: 2021-08-31</p>
<p>Fix Resolution: tar - 4.4.18, 5.0.10, 6.1.9</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-37712 (High) detected in tar-4.4.1.tgz - ## CVE-2021-37712 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tar-4.4.1.tgz</b></p></summary>
<p>tar for node</p>
<p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-4.4.1.tgz">https://registry.npmjs.org/tar/-/tar-4.4.1.tgz</a></p>
<p>
Dependency Hierarchy:
- react-scripts-1.1.4.tgz (Root Library)
- fsevents-1.2.4.tgz
- node-pre-gyp-0.10.0.tgz
- :x: **tar-4.4.1.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The npm package "tar" (aka node-tar) before versions 4.4.18, 5.0.10, and 6.1.9 has an arbitrary file creation/overwrite and arbitrary code execution vulnerability. node-tar aims to guarantee that any file whose location would be modified by a symbolic link is not extracted. This is, in part, achieved by ensuring that extracted directories are not symlinks. Additionally, in order to prevent unnecessary stat calls to determine whether a given path is a directory, paths are cached when directories are created. This logic was insufficient when extracting tar files that contained both a directory and a symlink with names containing unicode values that normalized to the same value. Additionally, on Windows systems, long path portions would resolve to the same file system entities as their 8.3 "short path" counterparts. A specially crafted tar archive could thus include a directory with one form of the path, followed by a symbolic link with a different string that resolves to the same file system entity, followed by a file using the first form. By first creating a directory, and then replacing that directory with a symlink that had a different apparent name that resolved to the same entry in the filesystem, it was thus possible to bypass node-tar symlink checks on directories, essentially allowing an untrusted tar file to symlink into an arbitrary location and subsequently extracting arbitrary files into that location, thus allowing arbitrary file creation and overwrite. These issues were addressed in releases 4.4.18, 5.0.10 and 6.1.9. The v3 branch of node-tar has been deprecated and did not receive patches for these issues. If you are still using a v3 release we recommend you update to a more recent version of node-tar. If this is not possible, a workaround is available in the referenced GHSA-qq89-hq3f-393p.
<p>Publish Date: 2021-08-31
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-37712>CVE-2021-37712</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/npm/node-tar/security/advisories/GHSA-qq89-hq3f-393p">https://github.com/npm/node-tar/security/advisories/GHSA-qq89-hq3f-393p</a></p>
<p>Release Date: 2021-08-31</p>
<p>Fix Resolution: tar - 4.4.18, 5.0.10, 6.1.9</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve high detected in tar tgz cve high severity vulnerability vulnerable library tar tgz tar for node library home page a href dependency hierarchy react scripts tgz root library fsevents tgz node pre gyp tgz x tar tgz vulnerable library vulnerability details the npm package tar aka node tar before versions and has an arbitrary file creation overwrite and arbitrary code execution vulnerability node tar aims to guarantee that any file whose location would be modified by a symbolic link is not extracted this is in part achieved by ensuring that extracted directories are not symlinks additionally in order to prevent unnecessary stat calls to determine whether a given path is a directory paths are cached when directories are created this logic was insufficient when extracting tar files that contained both a directory and a symlink with names containing unicode values that normalized to the same value additionally on windows systems long path portions would resolve to the same file system entities as their short path counterparts a specially crafted tar archive could thus include a directory with one form of the path followed by a symbolic link with a different string that resolves to the same file system entity followed by a file using the first form by first creating a directory and then replacing that directory with a symlink that had a different apparent name that resolved to the same entry in the filesystem it was thus possible to bypass node tar symlink checks on directories essentially allowing an untrusted tar file to symlink into an arbitrary location and subsequently extracting arbitrary files into that location thus allowing arbitrary file creation and overwrite these issues were addressed in releases and the branch of node tar has been deprecated and did not receive patches for these issues if you are still using a release 
we recommend you update to a more recent version of node tar if this is not possible a workaround is available in the referenced ghsa publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution tar step up your open source security game with whitesource | 0 |
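The advisory in the record above turns on whether an extracted path, once symlinks are resolved, still lies inside the extraction root; node-tar's directory cache could be tricked into skipping that check. A minimal sketch of the containment test itself (a hypothetical helper, not node-tar's code), demonstrated with a deliberately malicious symlinked "directory":

```python
import os
import tempfile

def is_within(root: str, target: str) -> bool:
    """True if `target`, fully resolved (symlinks included), stays under `root`.

    This is the containment property the advisory is about: compare
    *resolved* paths, so a symlinked "directory" cannot redirect an
    archive entry outside the extraction root.
    """
    root = os.path.realpath(root)
    target = os.path.realpath(target)
    return os.path.commonpath([root, target]) == root

with tempfile.TemporaryDirectory() as root:
    os.mkdir(os.path.join(root, "docs"))           # ordinary directory entry
    os.symlink("/", os.path.join(root, "escape"))  # malicious symlinked "directory"
    print(is_within(root, os.path.join(root, "docs", "a.txt")))  # True
    print(is_within(root, os.path.join(root, "escape", "etc")))  # False: escapes the root
```

An extractor applying this check to every member's destination would refuse the escaping entry instead of writing through the symlink.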
73,251 | 8,850,366,999 | IssuesEvent | 2019-01-08 13:04:24 | decred/dcrdesign | https://api.github.com/repos/decred/dcrdesign | closed | dcrdocs design improvements | DCR Platforms visual design | @MariaPleshkova
https://github.com/decred/dcrdocs/issues
https://docs.decred.org
This is based on a material-design mkdocs theme https://squidfunk.github.io/mkdocs-material/getting-started/
Scope of this task is fine tuning the material-design theme on low level (css and content changes) to further align with Decreds visual identity.
**Tasks:**
- Map out instances for fixes and styling refinements
- Create bite-sized issues to github for implementation
- Limited to a single or few related elements
- Describe the problem and the suggested solution
- If needed provide mockup with specs for solving
**Theme-related:**
- Primarily focus on typography colours and smaller details, adding up to the identity and providing a good visual experience, appropriate contrasts, alignments, etc
- Create templates for both screenshots and instructional schematics
**Content:**
- Add/create missing icons
- Create new screenshots and schematics based on templates
**Example implementation issues:**
https://github.com/decred/decrediton/issues/1423
https://github.com/decred/decrediton/issues/1424
https://github.com/decred/decrediton/issues/1422
https://github.com/decred/decrediton/issues/1413 | 1.0 | dcrdocs design improvements - @MariaPleshkova
https://github.com/decred/dcrdocs/issues
https://docs.decred.org
This is based on a material-design mkdocs theme https://squidfunk.github.io/mkdocs-material/getting-started/
Scope of this task is fine tuning the material-design theme on low level (css and content changes) to further align with Decreds visual identity.
**Tasks:**
- Map out instances for fixes and styling refinements
- Create bite-sized issues to github for implementation
- Limited to a single or few related elements
- Describe the problem and the suggested solution
- If needed provide mockup with specs for solving
**Theme-related:**
- Primarily focus on typography colours and smaller details, adding up to the identity and providing a good visual experience, appropriate contrasts, alignments, etc
- Create templates for both screenshots and instructional schematics
**Content:**
- Add/create missing icons
- Create new screenshots and schematics based on templates
**Example implementation issues:**
https://github.com/decred/decrediton/issues/1423
https://github.com/decred/decrediton/issues/1424
https://github.com/decred/decrediton/issues/1422
https://github.com/decred/decrediton/issues/1413 | non_main | dcrdocs design improvements mariapleshkova this is based on a material design mkdocs theme scope of this task is fine tuning the material design theme on low level css and content changes to further align with decreds visual identity tasks map out instances for fixes and styling refinements create bite sized issues to github for implementation limited to a single or few related elements describe the problem and the suggested solution if needed provide mockup with specs for solving theme related primarily focus on typography colours and smaller details adding up to the identity and providing a good visual experience appropriate contrasts alignments etc create templates for both screenshots and instructional schematics content add create missing icons create new screenshots and schematics based on templates example implementation issues | 0 |
3,365 | 13,039,115,074 | IssuesEvent | 2020-07-28 16:12:57 | laminas/automatic-releases | https://api.github.com/repos/laminas/automatic-releases | opened | Set up `ORGANIZATION_ADMIN_TOKEN` to allow for automatic branch switching | Awaiting Maintainer Response Feature Request Help Wanted | Self-release partially succeeded, but we had a failure later in actions run:
```
/usr/bin/docker run --name c20181c155a4ed455de0bcfd869a25c9bb5f_021211 --label 87c201 --workdir /github/workspace --rm -e GITHUB_TOKEN -e SIGNING_SECRET_KEY -e GIT_AUTHOR_NAME -e GIT_AUTHOR_EMAIL -e INPUT_COMMAND-NAME -e HOME -e GITHUB_JOB -e GITHUB_REF -e GITHUB_SHA -e GITHUB_REPOSITORY -e GITHUB_REPOSITORY_OWNER -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_ACTOR -e GITHUB_WORKFLOW -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GITHUB_EVENT_NAME -e GITHUB_SERVER_URL -e GITHUB_API_URL -e GITHUB_GRAPHQL_URL -e GITHUB_WORKSPACE -e GITHUB_ACTION -e GITHUB_EVENT_PATH -e RUNNER_OS -e RUNNER_TOOL_CACHE -e RUNNER_TEMP -e RUNNER_WORKSPACE -e ACTIONS_RUNTIME_URL -e ACTIONS_RUNTIME_TOKEN -e ACTIONS_CACHE_URL -e GITHUB_ACTIONS=true -e CI=true -v "/var/run/docker.sock":"/var/run/docker.sock" -v "/home/runner/work/_temp/_github_home":"/github/home" -v "/home/runner/work/_temp/_github_workflow":"/github/workflow" -v "/home/runner/work/automatic-releases/automatic-releases":"/github/workspace" 87c201:81c155a4ed455de0bcfd869a25c9bb5f "laminas:automatic-releases:switch-default-branch-to-next-minor"
```
That's because `ORGANIZATION_ADMIN_TOKEN` is not set.
It's obviously a bit risky to add such a variable to the environment, but it should be fine if it's marked as protected (only direct pushes to repository branches can access it). | True | Set up `ORGANIZATION_ADMIN_TOKEN` to allow for automatic branch switching - Self-release partially succeeded, but we had a failure later in actions run:
```
/usr/bin/docker run --name c20181c155a4ed455de0bcfd869a25c9bb5f_021211 --label 87c201 --workdir /github/workspace --rm -e GITHUB_TOKEN -e SIGNING_SECRET_KEY -e GIT_AUTHOR_NAME -e GIT_AUTHOR_EMAIL -e INPUT_COMMAND-NAME -e HOME -e GITHUB_JOB -e GITHUB_REF -e GITHUB_SHA -e GITHUB_REPOSITORY -e GITHUB_REPOSITORY_OWNER -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_ACTOR -e GITHUB_WORKFLOW -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GITHUB_EVENT_NAME -e GITHUB_SERVER_URL -e GITHUB_API_URL -e GITHUB_GRAPHQL_URL -e GITHUB_WORKSPACE -e GITHUB_ACTION -e GITHUB_EVENT_PATH -e RUNNER_OS -e RUNNER_TOOL_CACHE -e RUNNER_TEMP -e RUNNER_WORKSPACE -e ACTIONS_RUNTIME_URL -e ACTIONS_RUNTIME_TOKEN -e ACTIONS_CACHE_URL -e GITHUB_ACTIONS=true -e CI=true -v "/var/run/docker.sock":"/var/run/docker.sock" -v "/home/runner/work/_temp/_github_home":"/github/home" -v "/home/runner/work/_temp/_github_workflow":"/github/workflow" -v "/home/runner/work/automatic-releases/automatic-releases":"/github/workspace" 87c201:81c155a4ed455de0bcfd869a25c9bb5f "laminas:automatic-releases:switch-default-branch-to-next-minor"
```
That's because `ORGANIZATION_ADMIN_TOKEN` is not set.
It's obviously a bit risky to add such a variable to the environment, but it should be fine if it's marked as protected (only direct pushes to repository branches can access it). | main | set up organization admin token to allow for automatic branch switching self release partially succeeded but we had a failure later in actions run usr bin docker run name label workdir github workspace rm e github token e signing secret key e git author name e git author email e input command name e home e github job e github ref e github sha e github repository e github repository owner e github run id e github run number e github actor e github workflow e github head ref e github base ref e github event name e github server url e github api url e github graphql url e github workspace e github action e github event path e runner os e runner tool cache e runner temp e runner workspace e actions runtime url e actions runtime token e actions cache url e github actions true e ci true v var run docker sock var run docker sock v home runner work temp github home github home v home runner work temp github workflow github workflow v home runner work automatic releases automatic releases github workspace laminas automatic releases switch default branch to next minor that s because organization admin token is not set it s obviously a bit risky to add such a variable to the environment but it should be fine if it s marked as protected only direct pushes to repository branches can access it | 1 |
137,960 | 5,320,892,225 | IssuesEvent | 2017-02-14 11:52:47 | siteorigin/so-widgets-bundle | https://api.github.com/repos/siteorigin/so-widgets-bundle | closed | Remove slider widget sentinel when only using one slide | bug priority-1 | If using only one slide the slider widget is still adding a sentinel so in the source you have two list items, two slides. If possible, remove the sentinel if only one slide is added.
Reference site: http://preview.tapkey.org/en/
| 1.0 | Remove slider widget sentinel when only using one slide - If using only one slide the slider widget is still adding a sentinel so in the source you have two list items, two slides. If possible, remove the sentinel if only one slide is added.
Reference site: http://preview.tapkey.org/en/
| non_main | remove slider widget sentinel when only using one slide if using only one slide the slider widget is still adding a sentinel so in the source you have two list items two slides if possible remove the sentinel if only one slide is added reference site | 0 |
2,908 | 10,331,194,248 | IssuesEvent | 2019-09-02 16:58:18 | vostpt/mobile-app | https://api.github.com/repos/vostpt/mobile-app | closed | Native Splash Screen | Needs Maintainers Help | **Description**
Create Native Android and iOS screen for the app
**Requirements**
- See https://medium.com/@diegoveloper/flutter-splash-screen-9f4e05542548 for more details
**UI**
<img width="364" alt="imagem" src="https://user-images.githubusercontent.com/10728633/63039860-1fe8f280-bebc-11e9-911f-6a48d9bf219d.png">
| True | Native Splash Screen - **Description**
Create Native Android and iOS screen for the app
**Requirements**
- See https://medium.com/@diegoveloper/flutter-splash-screen-9f4e05542548 for more details
**UI**
<img width="364" alt="imagem" src="https://user-images.githubusercontent.com/10728633/63039860-1fe8f280-bebc-11e9-911f-6a48d9bf219d.png">
| main | native splash screen description create native android and ios screen for the app requirements see for more details ui img width alt imagem src | 1 |
3,251 | 3,098,713,435 | IssuesEvent | 2015-08-28 12:59:36 | projectatomic/atomic-reactor | https://api.github.com/repos/projectatomic/atomic-reactor | opened | squashing: provide id of base image to the tool | bug post-build plugin priority 1 | so it will squash just the built image, not multiple layers from base image | 1.0 | squashing: provide id of base image to the tool - so it will squash just the built image, not multiple layers from base image | non_main | squashing provide id of base image to the tool so it will squash just the built image not multiple layers from base image | 0 |
3,703 | 15,112,294,480 | IssuesEvent | 2021-02-08 21:38:30 | backdrop-ops/contrib | https://api.github.com/repos/backdrop-ops/contrib | opened | Application to join: [larsdesigns] | Maintainer application | Hello and welcome to the contrib application process! We're happy to have you :)
## Please note these 3 requirements for new contrib projects:
- [ ] Include a README.md file containing license and maintainer information.
You can use this example: https://raw.githubusercontent.com/backdrop-ops/contrib/master/examples/README.md
- [ ] Include a LICENSE.txt file.
You can use this example: https://raw.githubusercontent.com/backdrop-ops/contrib/master/examples/LICENSE.txt.
- [ ] If porting a Drupal 7 project, Maintain the Git history from Drupal.
## Please provide the following information:
**The name of your module, theme, or layout**
Node Noindex
**(Optional) Post a link here to an issue in the drupal.org queue notifying the Drupal 7 maintainers that you are working on a Backdrop port of their project**
https://www.drupal.org/project/node_noindex/issues/3197373
**Post a link to your new Backdrop project under your own GitHub account (option #1)**
https://github.com/larsdesigns/backdrop-contrib-node_noindex
**If you have chosen option #2 or #1 above, do you agree to the [Backdrop Contributed Project Agreement](https://github.com/backdrop-ops/contrib#backdrop-contributed-project-agreement)**
YES
**If you have chosen option #3 above, do you agree to undergo this same maintainer application process again, should you decide to contribute code in the future?**
YES
<!-- Once we have a chance to review your project, we will check for the 3 requirements at the top of this issue. If those requirements are met, you will be invited to the @backdrop-contrib group. At that point you will be able to transfer the project. -->
<!-- Please note that we may also include additional feedback in the code review, but anything else is only intended to be helpful, and is NOT a requirement for joining the contrib group. -->
| True | Application to join: [larsdesigns] - Hello and welcome to the contrib application process! We're happy to have you :)
## Please note these 3 requirements for new contrib projects:
- [ ] Include a README.md file containing license and maintainer information.
You can use this example: https://raw.githubusercontent.com/backdrop-ops/contrib/master/examples/README.md
- [ ] Include a LICENSE.txt file.
You can use this example: https://raw.githubusercontent.com/backdrop-ops/contrib/master/examples/LICENSE.txt.
- [ ] If porting a Drupal 7 project, Maintain the Git history from Drupal.
## Please provide the following information:
**The name of your module, theme, or layout**
Node Noindex
**(Optional) Post a link here to an issue in the drupal.org queue notifying the Drupal 7 maintainers that you are working on a Backdrop port of their project**
https://www.drupal.org/project/node_noindex/issues/3197373
**Post a link to your new Backdrop project under your own GitHub account (option #1)**
https://github.com/larsdesigns/backdrop-contrib-node_noindex
**If you have chosen option #2 or #1 above, do you agree to the [Backdrop Contributed Project Agreement](https://github.com/backdrop-ops/contrib#backdrop-contributed-project-agreement)**
YES
**If you have chosen option #3 above, do you agree to undergo this same maintainer application process again, should you decide to contribute code in the future?**
YES
<!-- Once we have a chance to review your project, we will check for the 3 requirements at the top of this issue. If those requirements are met, you will be invited to the @backdrop-contrib group. At that point you will be able to transfer the project. -->
<!-- Please note that we may also include additional feedback in the code review, but anything else is only intended to be helpful, and is NOT a requirement for joining the contrib group. -->
| main | application to join hello and welcome to the contrib application process we re happy to have you please note these requirements for new contrib projects include a readme md file containing license and maintainer information you can use this example include a license txt file you can use this example if porting a drupal project maintain the git history from drupal please provide the following information the name of your module theme or layout node noindex optional post a link here to an issue in the drupal org queue notifying the drupal maintainers that you are working on a backdrop port of their project post a link to your new backdrop project under your own github account option if you have chosen option or above do you agree to the yes if you have chosen option above do you agree to undergo this same maintainer application process again should you decide to contribute code in the future yes | 1 |
157,234 | 12,366,992,121 | IssuesEvent | 2020-05-18 11:29:27 | elastic/kibana | https://api.github.com/repos/elastic/kibana | closed | Failing test: Chrome UI Functional Tests.test/functional/apps/visualize/_visualize_listing·js - visualize app visualize listing page search is case insensitive | failed-test | A test failed on a tracked branch
```
Error: expected 0 to equal 1
at Assertion.assert (packages/kbn-expect/expect.js:100:11)
at Assertion.be.Assertion.equal (packages/kbn-expect/expect.js:221:8)
at Context.equal (test/functional/apps/visualize/_visualize_listing.js:97:30)
at process._tickCallback (internal/process/next_tick.js:68:7)
```
First failure: [Jenkins Build](https://kibana-ci.elastic.co/job/elastic+kibana+7.4/197/)
<!-- kibanaCiData = {"failed-test":{"test.class":"Chrome UI Functional Tests.test/functional/apps/visualize/_visualize_listing·js","test.name":"visualize app visualize listing page search is case insensitive","test.failCount":1}} --> | 1.0 | Failing test: Chrome UI Functional Tests.test/functional/apps/visualize/_visualize_listing·js - visualize app visualize listing page search is case insensitive - A test failed on a tracked branch
```
Error: expected 0 to equal 1
at Assertion.assert (packages/kbn-expect/expect.js:100:11)
at Assertion.be.Assertion.equal (packages/kbn-expect/expect.js:221:8)
at Context.equal (test/functional/apps/visualize/_visualize_listing.js:97:30)
at process._tickCallback (internal/process/next_tick.js:68:7)
```
First failure: [Jenkins Build](https://kibana-ci.elastic.co/job/elastic+kibana+7.4/197/)
<!-- kibanaCiData = {"failed-test":{"test.class":"Chrome UI Functional Tests.test/functional/apps/visualize/_visualize_listingΒ·js","test.name":"visualize app visualize listing page search is case insensitive","test.failCount":1}} --> | non_main | failing test chrome ui functional tests test functional apps visualize visualize listingΒ·js visualize app visualize listing page search is case insensitive a test failed on a tracked branch error expected to equal at assertion assert packages kbn expect expect js at assertion be assertion equal packages kbn expect expect js at context equal test functional apps visualize visualize listing js at process tickcallback internal process next tick js first failure | 0 |
4,748 | 24,508,289,413 | IssuesEvent | 2022-10-10 18:35:48 | web3phl/directory | https://api.github.com/repos/web3phl/directory | closed | readme banner | chore maintainers only | ### Not Existing Feature Request?
- [X] Yes, I'm sure, this is a new requested feature!
### Not an Idea or Suggestion?
- [X] Yes, I'm sure, this is not idea or suggestion!
### Request Details
Nice to have a readme banner too in our readme. Just an inspiration like my other projects below.
https://gathertown.js.org and https://buymeacoffee.js.org###

### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/web3phl/directory/blob/main/CODE_OF_CONDUCT.md). | True | readme banner - ### Not Existing Feature Request?
- [X] Yes, I'm sure, this is a new requested feature!
### Not an Idea or Suggestion?
- [X] Yes, I'm sure, this is not idea or suggestion!
### Request Details
Nice to have a readme banner too in our readme. Just an inspiration like my other projects below.
https://gathertown.js.org and https://buymeacoffee.js.org###

### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/web3phl/directory/blob/main/CODE_OF_CONDUCT.md). | main | readme banner not existing feature request yes i m sure this is a new requested feature not an idea or suggestion yes i m sure this is not idea or suggestion request details nice to have a readme banner too in our readme just an inspiration like my other projects below and code of conduct i agree to follow this project s | 1
2,229 | 7,869,438,911 | IssuesEvent | 2018-06-24 14:00:58 | arcticicestudio/nord-hyper | https://api.github.com/repos/arcticicestudio/nord-hyper | opened | Use Arctic Ice Studio's base ESLint config | context-workflow scope-maintainability type-improvement | The currently used ESLint rules should be replaced with the base config's [eslint-config-arcticicestudio-base][].
[eslint-config-arcticicestudio-base]: https://www.npmjs.com/package/eslint-config-arcticicestudio-base | True | Use Arctic Ice Studio's base ESLint config - The currently used ESLint rules should be replaced with the base config's [eslint-config-arcticicestudio-base][].
[eslint-config-arcticicestudio-base]: https://www.npmjs.com/package/eslint-config-arcticicestudio-base | main | use arctic ice studio s base eslint config the currently used eslint rules should be replaced with the base config s | 1 |
5,687 | 29,924,867,780 | IssuesEvent | 2023-06-22 04:03:56 | OpenLightingProject/ola | https://api.github.com/repos/OpenLightingProject/ola | opened | New protoc versioning breaks configure | bug OpSys-OSX Maintainability | ```
checking for protoc... /usr/local/bin/protoc
+ test -z /usr/local/bin/protoc
+ test -n 2.3.0
+ printf '%s\n' 'configure:25925: checking protoc version'
+ printf %s 'checking protoc version... '
++ /usr/local/bin/protoc --version
++ grep libprotoc
++ sed 's/.*\([0-9][0-9]*\.[0-9][0-9]*\.[0-9][0-9]*\).*/\1/g'
+ protoc_version='libprotoc 23.3'
+ required=2.3.0
++ echo 2.3.0
++ sed 's/[^0-9].*//'
+ required_major=2
++ echo 2.3.0
++ sed 's/[0-9][0-9]*\.\([0-9][0-9]*\)\.[0-9][0-9]*/\1/'
+ required_minor=3
++ echo 2.3.0
++ sed 's/^.*[^0-9]//'
+ required_patch=0
++ echo libprotoc 23.3
++ sed 's/[^0-9].*//'
+ actual_major=
++ echo libprotoc 23.3
++ sed 's/[0-9][0-9]*\.\([0-9][0-9]*\)\.[0-9][0-9]*/\1/'
+ actual_minor='libprotoc 23.3'
++ echo libprotoc 23.3
++ sed 's/^.*[^0-9]//'
+ actual_patch=3
++ expr '>' 2 '|' = 2 '&' libprotoc 23.3 '>' 3 '|' = 2 '&' libprotoc 23.3 = 3 '&' 3 '>=' 0
expr: syntax error
+ protoc_version_proper=
+ test '' = 1
+ as_fn_error 1 'protoc version too old libprotoc 23.3 < 2.3.0' 25948 5
+ as_status=1
+ test 1 -eq 0
+ test 5
+ as_lineno=25948
+ as_lineno_stack=as_lineno_stack=
+ printf '%s\n' 'configure:25948: error: protoc version too old libprotoc 23.3 < 2.3.0'
+ printf '%s\n' 'configure: error: protoc version too old libprotoc 23.3 < 2.3.0'
configure: error: protoc version too old libprotoc 23.3 < 2.3.0
```
See https://github.com/OpenLightingProject/ola/actions/runs/5341378456/jobs/9682134019
https://protobuf.dev/support/version-support/ | True | New protoc versioning breaks configure - ```
checking for protoc... /usr/local/bin/protoc
+ test -z /usr/local/bin/protoc
+ test -n 2.3.0
+ printf '%s\n' 'configure:25925: checking protoc version'
+ printf %s 'checking protoc version... '
++ /usr/local/bin/protoc --version
++ grep libprotoc
++ sed 's/.*\([0-9][0-9]*\.[0-9][0-9]*\.[0-9][0-9]*\).*/\1/g'
+ protoc_version='libprotoc 23.3'
+ required=2.3.0
++ echo 2.3.0
++ sed 's/[^0-9].*//'
+ required_major=2
++ echo 2.3.0
++ sed 's/[0-9][0-9]*\.\([0-9][0-9]*\)\.[0-9][0-9]*/\1/'
+ required_minor=3
++ echo 2.3.0
++ sed 's/^.*[^0-9]//'
+ required_patch=0
++ echo libprotoc 23.3
++ sed 's/[^0-9].*//'
+ actual_major=
++ echo libprotoc 23.3
++ sed 's/[0-9][0-9]*\.\([0-9][0-9]*\)\.[0-9][0-9]*/\1/'
+ actual_minor='libprotoc 23.3'
++ echo libprotoc 23.3
++ sed 's/^.*[^0-9]//'
+ actual_patch=3
++ expr '>' 2 '|' = 2 '&' libprotoc 23.3 '>' 3 '|' = 2 '&' libprotoc 23.3 = 3 '&' 3 '>=' 0
expr: syntax error
+ protoc_version_proper=
+ test '' = 1
+ as_fn_error 1 'protoc version too old libprotoc 23.3 < 2.3.0' 25948 5
+ as_status=1
+ test 1 -eq 0
+ test 5
+ as_lineno=25948
+ as_lineno_stack=as_lineno_stack=
+ printf '%s\n' 'configure:25948: error: protoc version too old libprotoc 23.3 < 2.3.0'
+ printf '%s\n' 'configure: error: protoc version too old libprotoc 23.3 < 2.3.0'
configure: error: protoc version too old libprotoc 23.3 < 2.3.0
```
See https://github.com/OpenLightingProject/ola/actions/runs/5341378456/jobs/9682134019
https://protobuf.dev/support/version-support/ | main | new protoc versioning breaks configure checking for protoc usr local bin protoc test z usr local bin protoc test n printf s n configure checking protoc version printf s checking protoc version usr local bin protoc version grep libprotoc sed s g protoc version libprotoc required echo sed s required major echo sed s required minor echo sed s required patch echo libprotoc sed s actual major echo libprotoc sed s actual minor libprotoc echo libprotoc sed s actual patch expr libprotoc libprotoc expr syntax error protoc version proper test as fn error protoc version too old libprotoc as status test eq test as lineno as lineno stack as lineno stack printf s n configure error protoc version too old libprotoc printf s n configure error protoc version too old libprotoc configure error protoc version too old libprotoc see | 1 |
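The root cause visible in the trace: the `sed` pipeline assumes a three-part `x.y.z` string, but modern protoc reports `libprotoc 23.3`, so `expr` receives garbage. A check that tolerates both shapes can be sketched like this (illustrative Python, not the project's actual configure/m4 code):

```python
import re

def parse_version(text):
    """Extract the first dotted version from strings such as
    'libprotoc 23.3' or '2.3.0'; missing components default to zero."""
    match = re.search(r"(\d+(?:\.\d+)*)", text)
    if not match:
        raise ValueError(f"no version number in {text!r}")
    parts = [int(p) for p in match.group(1).split(".")]
    return tuple((parts + [0, 0, 0])[:3])

def at_least(actual, required):
    """Tuple comparison sidesteps the expr arithmetic that broke above."""
    return parse_version(actual) >= parse_version(required)

print(at_least("libprotoc 23.3", "2.3.0"))
```

The same idea works in shell (for example via `sort -V`); the key point is padding missing components with zeros instead of feeding empty strings into arithmetic.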
4,294 | 21,657,187,561 | IssuesEvent | 2022-05-06 15:11:10 | jesus2099/konami-command | https://api.github.com/repos/jesus2099/konami-command | opened | Replace Yahoo! Auction links on the fly | feature USO-to-USMO mb_PREFERRED-MBS lastfm_ALL-LINKS-TO-LOCAL-SITE mb_REVIVE-DELETED-EDITORS mb_ALL-RELEASE-GROUPS maintainability | [Yahoo! Auctions Japan is now geoblocked!](https://community.metabrainz.org/t/yahoo-auctions-japan-is-now-geoblocked/583740)
Still visible through Buyee:
```
https://page.auctions.yahoo.co.jp/jp/auction/k510836318
https://buyee.jp/item/yahoo/auction/k510836318
https://aucview.aucfan.com/yahoo/1040333136/
https://buyee.jp/item/yahoo/auction/1040333136
```
Maybe I should make all such scripts a single customisable script. | True | Replace Yahoo! Auction links on the fly - [Yahoo! Auctions Japan is now geoblocked!](https://community.metabrainz.org/t/yahoo-auctions-japan-is-now-geoblocked/583740)
Still visible through Buyee:
```
https://page.auctions.yahoo.co.jp/jp/auction/k510836318
https://buyee.jp/item/yahoo/auction/k510836318
https://aucview.aucfan.com/yahoo/1040333136/
https://buyee.jp/item/yahoo/auction/1040333136
```
Maybe I should make all such scripts a single customisable script. | main | replace yahoo auction links on the fly still visible through buyee maybe i should make all such scripts a single customisable script | 1 |
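The konami-command scripts are userscripts that rewrite links in the page; the mapping itself, for the two URL shapes quoted in the issue, is a pair of regex rewrites, sketched here in Python for brevity:

```python
import re

# Geoblocked Yahoo! Auctions URL shapes mapped to their Buyee proxy equivalents.
PATTERNS = [
    (re.compile(r"https?://page\.auctions\.yahoo\.co\.jp/jp/auction/(\w+)"),
     r"https://buyee.jp/item/yahoo/auction/\1"),
    (re.compile(r"https?://aucview\.aucfan\.com/yahoo/(\w+)/?"),
     r"https://buyee.jp/item/yahoo/auction/\1"),
]

def to_buyee(url):
    for pattern, replacement in PATTERNS:
        if pattern.match(url):
            return pattern.sub(replacement, url)
    return url  # leave unrelated links untouched

print(to_buyee("https://page.auctions.yahoo.co.jp/jp/auction/k510836318"))
```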
364,575 | 25,495,317,126 | IssuesEvent | 2022-11-27 15:55:40 | jenkins-infra/jenkins.io | https://api.github.com/repos/jenkins-infra/jenkins.io | opened | Add documentation how to reset user & admin passwords | documentation | Based on a quick search, we don't have a guide documenting how to reset Jenkins passwords at all.
According to the helpdesk statistics, this is one, if not, the major topic of the past weeks.
Such a guide should outline how to
- reset the Jenkins password for an admin account, in case an instance administrator locked themselves out and no other admin accounts are available.
- reset other Jenkins' user passwords from an admin perspective.
Bonus points: relief of some strain on the infra team | 1.0 | Add documentation how to reset user & admin passwords - Based on a quick search, we don't have a guide documenting how to reset Jenkins passwords at all.
According to the helpdesk statistics, this is one, if not, the major topic of the past weeks.
Such a guide should outline how to
- reset the Jenkins password for an admin account, in case an instance administrator locked themselves out and no other admin accounts are available.
- reset other Jenkins' user passwords from an admin perspective.
Bonus points: relief of some strain on the infra team | non_main | add documentation how to reset user admin passwords based on a quick search we don t have a guide documenting how to reset jenkins passwords at all according to the helpdesk statistics this is one if not the major topic of the past weeks such a guide should outline how to reset the jenkins password for an admin account in case an instance administrator locked out themselves and no other admin accounts are available reset other jenkins user passwords from an admin perspective bonus points relief of some strain on the infra team | 0 |
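For the admin-lockout case, the widely documented recovery path that such a guide would cover is: stop Jenkins, temporarily disable security in `$JENKINS_HOME/config.xml`, restart, reset the password from the UI, then re-enable security. A fragment as a reminder (treat as an assumption to verify against current Jenkins documentation):

```xml
<!-- $JENKINS_HOME/config.xml, edited while Jenkins is stopped -->
<useSecurity>false</useSecurity>
```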
163,984 | 6,217,772,502 | IssuesEvent | 2017-07-08 18:08:37 | javaee/glassfish | https://api.github.com/repos/javaee/glassfish | closed | Response code 400 on a DELETE request with a body | Component: grizzly-kernel ERR: Assignee Priority: Major Type: Bug | When a browser sends a request with a method DELETE and some body, glassfish replies with the response code 400. No corresponding filters or a servlet are called.
When a DELETE request contains no body, then servlet's methods are called as usual.
Glassfish 4.0 has no such problem.
The sample request is:
```
DELETE /struct/StructureElement/3751?_dc=1413865817878 HTTP/1.1
Host: localhost:8080
Connection: keep-alive
Content-Length: 11
Origin: http://localhost:8080
X-Requested-With: XMLHttpRequest
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/38.0.2125.101 Safari/537.36
Content-Type: application/json
Accept: */*
Referer: http://localhost:8080/struct-client/
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8,ru;q=0.6
Cookie: BPMSESSIONID=8aMBU61U7xn1RFSObP1slBhkvMCC; JSESSIONID=d116d19d27d9a381eef12b5acacd; treeForm_tree-hi=treeForm:tree:resources:JDBC:connectionPoolResources:ypool
{"id":3751}
```
#### Environment
Same behavior on linux x64, windows 7 x64. jdk1.8.0_20
#### Affected Versions
[4.1, 4.1.2] | 1.0 | Response code 400 on a DELETE request with a body - When a browser sends a request with a method DELETE and some body, glassfish replies with the response code 400. No corresponding filters or a servlet are called.
When a DELETE request contains no body, then servlet's methods are called as usual.
Glassfish 4.0 has no such problem.
The sample request is:
```
DELETE /struct/StructureElement/3751?_dc=1413865817878 HTTP/1.1
Host: localhost:8080
Connection: keep-alive
Content-Length: 11
Origin: http://localhost:8080
X-Requested-With: XMLHttpRequest
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/38.0.2125.101 Safari/537.36
Content-Type: application/json
Accept: */*
Referer: http://localhost:8080/struct-client/
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8,ru;q=0.6
Cookie: BPMSESSIONID=8aMBU61U7xn1RFSObP1slBhkvMCC; JSESSIONID=d116d19d27d9a381eef12b5acacd; treeForm_tree-hi=treeForm:tree:resources:JDBC:connectionPoolResources:ypool
{"id":3751}
```
#### Environment
Same behavior on linux x64, windows 7 x64. jdk1.8.0_20
#### Affected Versions
[4.1, 4.1.2] | non_main | response code on a delete request with a body when a browser sends a request with a method delete and some body glassfish replies with the response code no corresponding filters or a servlet are called when a delete request contains no body then servlet s methods are called as usual glassfish has no such problem th sample request is delete struct structureelement dc http host localhost connection keep alive content length origin x requested with xmlhttprequest user agent mozilla linux applewebkit khtml like gecko chrome safari content type application json accept referer accept encoding gzip deflate sdch accept language en us en q ru q cookie bpmsessionid jsessionid treeform tree hi treeform tree resources jdbc connectionpoolresources ypool id environment same behavior on linux windows affected versions | 0 |
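To reproduce the failing request outside a browser, it is enough to issue a DELETE carrying a JSON body. The sketch below only constructs the request (host, path and body are taken from the capture above; nothing is sent):

```python
import urllib.request

body = b'{"id":3751}'
req = urllib.request.Request(
    "http://localhost:8080/struct/StructureElement/3751",
    data=body,
    method="DELETE",
    headers={"Content-Type": "application/json"},
)
# Content-Length is added automatically when the request is actually sent.
print(req.get_method(), req.data)
```

Sending it with `urllib.request.urlopen(req)` against an affected Glassfish should, per the report, return 400 before any filter or servlet runs.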
974 | 4,716,938,747 | IssuesEvent | 2016-10-16 10:27:54 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | reopened | Validation with visudo does not work for lineinfile if | affects_2.1 bug_report waiting_on_maintainer | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lineinfile
##### ANSIBLE VERSION
```
ansible 2.1.1.0
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
```
CentOS Linux release 7.2.1511 (Core)
Linux 3.10.0-327.18.2.el7.x86_64 #1 SMP Thu May 12 11:03:55 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
```
##### SUMMARY
When trying to create or modify a file in _/etc/sudoers.d_ by using the ```lineinfile``` module the validation with visudo fails because a temporary file is not found.
http://docs.ansible.com/ansible/lineinfile_module.html
##### STEPS TO REPRODUCE
```
- name: Setup sudoers permissions
lineinfile: dest=/etc/sudoers.d/icinga2
create=yes
state=present
line='icinga ALL=(ALL) NOPASSWD:/usr/bin/find'
validate='visudo -cf %s'
```
##### EXPECTED RESULTS
A file created under _/etc/sudoers.d/icinga2_ with the content ```icinga ALL=(ALL) NOPASSWD:/usr/bin/find```which passed validation.
##### ACTUAL RESULTS
```
FAILED! => {"changed": false, "cmd": "visudo -cf /tmp/tmpSBsM5A", "failed": true, "msg": "[Errno 2] No such file or directory", "rc": 2
```
| True | Validation with visudo does not work for lineinfile if - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lineinfile
##### ANSIBLE VERSION
```
ansible 2.1.1.0
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
```
CentOS Linux release 7.2.1511 (Core)
Linux 3.10.0-327.18.2.el7.x86_64 #1 SMP Thu May 12 11:03:55 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
```
##### SUMMARY
When trying to create or modify a file in _/etc/sudoers.d_ by using the ```lineinfile``` module the validation with visudo fails because a temporary file is not found.
http://docs.ansible.com/ansible/lineinfile_module.html
##### STEPS TO REPRODUCE
```
- name: Setup sudoers permissions
lineinfile: dest=/etc/sudoers.d/icinga2
create=yes
state=present
line='icinga ALL=(ALL) NOPASSWD:/usr/bin/find'
validate='visudo -cf %s'
```
##### EXPECTED RESULTS
A file created under _/etc/sudoers.d/icinga2_ with the content ```icinga ALL=(ALL) NOPASSWD:/usr/bin/find```which passed validation.
##### ACTUAL RESULTS
```
FAILED! => {"changed": false, "cmd": "visudo -cf /tmp/tmpSBsM5A", "failed": true, "msg": "[Errno 2] No such file or directory", "rc": 2
```
| main | validation with visudo does not work for lineinfile if issue type bug report component name lineinfile ansible version ansible configuration n a os environment centos linux release core linux smp thu may utc gnu linux summary when trying to create or modify a file in etc sudoers d by using the lineinfile module the validation with visudo fails because a temporary file is not found steps to reproduce name setup sudoers permissions lineinfile dest etc sudoers d create yes state present line icinga all all nopasswd usr bin find validate visudo cf s expected results a file created under etc sudoers d with the content icinga all all nopasswd usr bin find which passed validation actual results failed changed false cmd visudo cf tmp failed true msg no such file or directory rc | 1 |
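The failing stage is `validate='visudo -cf %s'`: the module writes the new content to a temporary file, substitutes that path for `%s`, and only installs the file if the validator exits zero; the error shows the validator running against a temp path that no longer exists. A standalone sketch of the intended sequence (Ansible's real implementation lives in its atomic-move machinery, and `visudo` is replaced by a stand-in validator here):

```python
import os
import subprocess
import sys
import tempfile

def write_validated(dest, content, validate_argv):
    """Write content to a temp file, run the validator with the temp path
    substituted for %s, and move the file into place only on success."""
    fd, tmp_path = tempfile.mkstemp()
    try:
        with os.fdopen(fd, "w") as handle:
            handle.write(content)
        argv = [arg.replace("%s", tmp_path) for arg in validate_argv]
        subprocess.run(argv, check=True)  # raises CalledProcessError on failure
        os.replace(tmp_path, dest)
    finally:
        if os.path.exists(tmp_path):
            os.unlink(tmp_path)

# Demo with a stand-in validator that just reads the candidate file back:
demo_dest = os.path.join(tempfile.gettempdir(), "sudoers_icinga_demo")
write_validated(
    demo_dest,
    "icinga ALL=(ALL) NOPASSWD:/usr/bin/find\n",
    [sys.executable, "-c", "open(__import__('sys').argv[1]).read()", "%s"],
)
print(open(demo_dest).read().strip())
```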
5,442 | 27,258,461,523 | IssuesEvent | 2023-02-22 13:18:14 | centerofci/mathesar | https://api.github.com/repos/centerofci/mathesar | closed | Do not show incomplete imports in table select menus across app | type: bug work: frontend status: ready restricted: maintainers | ## Description
* When an import is incomplete, it currently shows up as a table everywhere. It should not be shown as a valid table in:
- Data explorer - base table select menu
- Link table modal
- Constraints modal
Related: [matrix discussion](https://matrix.to/#/!UnujZDUxGuMrYdvgTU:matrix.mathesar.org/$Xhvzqz--AAl7-a73euXfbhggD5axkZPB34bkmjnDD0A?via=matrix.mathesar.org&via=matrix.org). | True | Do not show incomplete imports in table select menus across app - ## Description
* When an import is incomplete, it currently shows up as a table everywhere. It should not be shown as a valid table in:
- Data explorer - base table select menu
- Link table modal
- Constraints modal
Related: [matrix discussion](https://matrix.to/#/!UnujZDUxGuMrYdvgTU:matrix.mathesar.org/$Xhvzqz--AAl7-a73euXfbhggD5axkZPB34bkmjnDD0A?via=matrix.mathesar.org&via=matrix.org). | main | do not show incomplete imports in table select menus across app description when an import is incomplete it currently shows up as a table everywhere it should not be shown as a valid table in data explorer base table select menu link table modal constraints modal related | 1 |
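The fix comes down to filtering on import status before feeding any of those pickers. Sketch below; the `import_verified` flag models Mathesar's notion of a finished import, and the exact field name should be treated as an assumption:

```python
tables = [
    {"name": "Authors", "import_verified": True},
    {"name": "upload_in_progress", "import_verified": False},
]

def selectable_tables(tables):
    """Only fully imported tables may appear in the data explorer,
    link-table and constraints menus."""
    return [t for t in tables if t["import_verified"]]

print([t["name"] for t in selectable_tables(tables)])
```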
4,552 | 23,716,092,742 | IssuesEvent | 2022-08-30 11:55:06 | rdmorganiser/rdmo | https://api.github.com/repos/rdmorganiser/rdmo | opened | Bug in is_site_manager in project/rules when a project has no site | type: bug effort:minor type: maintainance | ### Description / Beschreibung
An `AttributeError` occurs when a user has projects (which do not have a site assigned to them)
and is checked for the `is_site_manager` rule on the /projects page.
From the `error.log`:
```py
File "/srv/rdmo/env/lib/python3.8/site-packages/rdmo/projects/rules.py", line 37, in is_site_manager
return user.role.manager.filter(pk=project.site.pk).exists()
AttributeError: 'NoneType' object has no attribute 'pk'
```
Maybe these were older projects that were not migrated but in any case this error can be prevented in the code.
### Expected behaviour / Erwartetes Verhalten
The landing page /projects opens without error even when a project has no site.
### Steps to fix the error
Assigning a site to the project via the admin interface fixes the error.
### References / Verweise
* [rdmo/projects/rules.py#L37](https://github.com/rdmorganiser/rdmo/blob/c6a930f6adfaa0fc5a3ce3fbd4fbac3c85cea2a1/rdmo/projects/rules.py#L37)
| True | Bug in is_site_manager in project/rules when a project has no site - ### Description / Beschreibung
An `AttributeError` occurs when a user has projects (which do not have a site assigned to them)
and is checked for the `is_site_manager` rule on the /projects page.
From the `error.log`:
```py
File "/srv/rdmo/env/lib/python3.8/site-packages/rdmo/projects/rules.py", line 37, in is_site_manager
return user.role.manager.filter(pk=project.site.pk).exists()
AttributeError: 'NoneType' object has no attribute 'pk'
```
Maybe these were older projects that were not migrated but in any case this error can be prevented in the code.
### Expected behaviour / Erwartetes Verhalten
The landing page /projects opens without error even when a project has no site.
### Steps to fix the error
Assigning a site to the project via the admin interface fixes the error.
### References / Verweise
* [rdmo/projects/rules.py#L37](https://github.com/rdmorganiser/rdmo/blob/c6a930f6adfaa0fc5a3ce3fbd4fbac3c85cea2a1/rdmo/projects/rules.py#L37)
| main | bug in is site manager in project rules when a project has no site description beschreibung an attributeerror occurs when a user has projects which do not have a site assigned to them and is checked for the is site manager rule on the projects page from the error log py file srv rdmo env lib site packages rdmo projects rules py line in is site manager return user role manager filter pk project site pk exists attributeerror nonetype object has no attribute pk maybe these were older projects that were not migrated but in any case this error can be prevented in the code expected behaviour erwartetes verhalten the landing page projects opens without error even when a project has no site steps to fix the error assigning a site to the project via the admin interface fixes the error references verweise | 1 |
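The one-line fix the traceback suggests is a guard before dereferencing `project.site`. Sketched with stand-in objects, since the real rule runs Django's `user.role.manager.filter(pk=...).exists()`:

```python
class Stub:
    """Minimal stand-in for the Django model instances in projects/rules.py."""
    def __init__(self, **attrs):
        self.__dict__.update(attrs)

def is_site_manager(user, project):
    # Projects saved without a site have project.site == None, which is
    # exactly what raised "'NoneType' object has no attribute 'pk'".
    if project.site is None:
        return False
    return project.site.pk in user.managed_site_pks  # stands in for the ORM query

orphan_project = Stub(site=None)
manager = Stub(managed_site_pks={1})
print(is_site_manager(manager, orphan_project))
```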
166,827 | 12,973,078,912 | IssuesEvent | 2020-07-21 13:31:55 | LiskHQ/lisk-sdk | https://api.github.com/repos/LiskHQ/lisk-sdk | opened | Refactor synchronization unit tests | type: refactoring type: test | ### Description
Currently synchronization unit tests (fast chain switching and block synchronization) are using a complex setup of stubs.
If any implementation detail changes, it is very hard to update the tests.
It should be cleaned up to be more modifiable and readable with less mock setups. | 1.0 | Refactor synchronization unit tests - ### Description
Currently synchronization unit tests (fast chain switching and block synchronization) are using a complex setup of stubs.
If any implementation detail changes, it is very hard to update the tests.
It should be cleaned up to be more modifiable and readable with less mock setups. | non_main | refactor synchronization unit tests description currently synchronization unit tests fast chain switching and block syncrhonization are using complex setup of stubs if any of the implementation detail changes it is very hard to update the tests it should be cleaned up to be more modifiable and readable with less mock setups | 0 |
4,960 | 25,461,979,349 | IssuesEvent | 2022-11-24 20:39:37 | MDAnalysis/mdanalysis | https://api.github.com/repos/MDAnalysis/mdanalysis | opened | MAINT, ENH: more pathlib support tasks | enhancement maintainability Component-Readers | Some suggested follow-ups from gh-3935:
1. Add support and testing for `pathlib` object handling to `SingleFrameReaderBase` and/or upstream this to the proto infrastructure. The `pathlib` handling/testing in the connected PR is restricted to `ReaderBase` for now.
2. The testing added in the above PR might eventually benefit from being consolidated to i.e., universe creation over a parametrized set of trajectory formats instead of the copy-paste sampling I did for GRO and DCD formats. | True | MAINT, ENH: more pathlib support tasks - Some suggested follow-ups from gh-3935:
1. Add support and testing for `pathlib` object handling to `SingleFrameReaderBase` and/or upstream this to the proto infrastructure. The `pathlib` handling/testing in the connected PR is restricted to `ReaderBase` for now.
2. The testing added in the above PR might eventually benefit from being consolidated to i.e., universe creation over a parametrized set of trajectory formats instead of the copy-paste sampling I did for GRO and DCD formats. | main | maint enh more pathlib support tasks some suggested follow ups from gh add support and testing for pathlib object handling to singleframereaderbase and or upstream this to the proto infrastructure the pathlib handling testing in the connected pr is restricted to readerbase for now the testing added in the above pr might eventually benefit from being consolidated to i e universe creation over a parametrized set of trajectory formats instead of the copy paste sampling i did for gro and dcd formats | 1 |
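Task 1 mostly comes down to normalising the filename early, mirroring what the connected PR did for `ReaderBase`; the usual pattern looks roughly like this (a sketch, not MDAnalysis's actual code):

```python
import os
from pathlib import Path

def coerce_filename(filename):
    """Accept str, bytes, or any os.PathLike (pathlib.Path included),
    normalising path objects to plain strings up front."""
    if isinstance(filename, os.PathLike):
        return os.fspath(filename)
    return filename

print(coerce_filename(Path("protein.dcd")), coerce_filename("protein.gro"))
```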
951 | 4,697,760,361 | IssuesEvent | 2016-10-12 10:26:02 | duckduckgo/zeroclickinfo-fathead | https://api.github.com/repos/duckduckgo/zeroclickinfo-fathead | opened | NodeJS: Update Parser to latest API | Improvement Maintainer Input Requested Mission: Programming Priority: High Status: Needs a Developer Topic: JavaScript | NodeJS [JSON reference file](https://nodejs.org/api/all.json) has had some changes to its structure in latest versions of its API resulting in the parser not handling the new structure and crashing.
- [ ] Update Parser to be compatible with latest API
Latest v6.7.0 : https://nodejs.org/api/all.json
Previous structure : https://web.archive.org/web/20150908053932/https://nodejs.org/api/all.json
---
This issue is part of the [Programming Mission](https://forum.duckduckhack.com/t/duckduckhack-programming-mission-overview/53): help us improve the results for [JavaScript related searches](https://forum.duckduckhack.com/t/javascript-search-overview)!
------
IA Page: http://duck.co/ia/view/node_js
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @dhruvbird | True | NodeJS: Update Parser to latest API - NodeJS [JSON reference file](https://nodejs.org/api/all.json) has had some changes to its structure in latest versions of its API resulting in the parser not handling the new structure and crashing.
- [ ] Update Parser to be compatible with latest API
Latest v6.7.0 : https://nodejs.org/api/all.json
Previous structure : https://web.archive.org/web/20150908053932/https://nodejs.org/api/all.json
---
This issue is part of the [Programming Mission](https://forum.duckduckhack.com/t/duckduckhack-programming-mission-overview/53): help us improve the results for [JavaScript related searches](https://forum.duckduckhack.com/t/javascript-search-overview)!
------
IA Page: http://duck.co/ia/view/node_js
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @dhruvbird | main | nodejs update parser to latest api nodejs has had some changes to it s structure in latest versions of it s api resulting in the parser not handling the new structure and crashing update parser to be compatible with latest api latest previous structure this issue is part of the help us improve the results for ia page dhruvbird | 1 |
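A parser that survives this kind of schema drift reads `all.json` defensively instead of indexing required keys. The section and field names below mirror the historical layout but should be treated as illustrative:

```python
import json

SAMPLE = json.loads(
    '{"modules": [{"textRaw": "fs", "methods": [{"textRaw": "fs.readFile(path)"}]}]}'
)

def iter_entries(doc):
    """Walk whichever known sections are present, so a changed top-level
    layout degrades to 'no entries' instead of a crash."""
    for section in ("modules", "globals", "miscs"):
        for module in doc.get(section, []):
            for method in module.get("methods", []):
                yield module.get("textRaw", ""), method.get("textRaw", "")

print(list(iter_entries(SAMPLE)))
```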
468,415 | 13,482,235,321 | IssuesEvent | 2020-09-11 01:02:18 | meanmedianmoge/zoia_lib | https://api.github.com/repos/meanmedianmoge/zoia_lib | closed | Search for patches by author name | UI enhancement medium priority | **Is your feature request related to a problem? Please describe.**
Currently, we are able to query the PS API by author id (a unique 4-digit identifier for users) but not by username. Also, add authors as a sorting option.
**Describe the solution you'd like**
Add the ability to search and sort for patches from specific authors. May require creating a lookup table with some fuzzy logic/strings so the search terms don't have to be exact. Afterward, pass the functionality to the front-end search bars.
**Describe alternatives you've considered**
Start memorizing the patch id's of popular patch creators. I've already got two down.
```
2825 is meanmedianmoge (me)
2953 is cmhjacques (christopher)
```
| 1.0 | Search for patches by author name - **Is your feature request related to a problem? Please describe.**
Currently, we are able to query the PS API by author id (a unique 4-digit identifier for users) but not by username. Also, add authors as a sorting option.
**Describe the solution you'd like**
Add the ability to search and sort for patches from specific authors. May require creating a lookup table with some fuzzy logic/strings so the search terms don't have to be exact. Afterward, pass the functionality to the front-end search bars.
**Describe alternatives you've considered**
Start memorizing the patch id's of popular patch creators. I've already got two down.
```
2825 is meanmedianmoge (me)
2953 is cmhjacques (christopher)
```
| non_main | search for patches by author name is your feature request related to a problem please describe currently we are able to query the ps api for author id a unique digit identifier for users and not by username also add authors as a sorting option describe the solution you d like add the ability to search and sort for patches from specific authors may require creating a lookup table with some fuzzy logic strings so the search terms don t have to be exact afterward pass the functionality to the front end search bars describe alternatives you ve considered start memorizing the patch id s of popular patch creators i ve already got two down is meanmedianmoge me is cmhjacques christopher | 0 |
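The "lookup table with some fuzzy logic" could be as small as an alias map plus `difflib`. The two ids below come from the issue itself; everything else is a sketch:

```python
import difflib

AUTHORS = {"meanmedianmoge": 2825, "cmhjacques": 2953}

def find_author_id(query, cutoff=0.6):
    """Resolve a search term to a PatchStorage author id, tolerating
    near-miss spellings so the search term need not be exact."""
    matches = difflib.get_close_matches(query.lower(), AUTHORS, n=1, cutoff=cutoff)
    return AUTHORS[matches[0]] if matches else None

print(find_author_id("meanmedianmog"))
```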
353,376 | 10,552,070,092 | IssuesEvent | 2019-10-03 14:30:48 | ArctosDB/arctos | https://api.github.com/repos/ArctosDB/arctos | closed | easier way to manage attributes in batch | Priority-Normal Type-Form/Function | ```
From Matt Browser in email 2/6/2012:
I have used Enter Data -> Batch Tools -> Bulkload Attributes to change
attributes, but creating these CSV files is time-consuming. Could you
please add similar functionality to search results like Manage... -
> ::Change Stuff:: -> Individual Attributes? This would be much
faster for the user when, for example, a large batch of specimens is
sorted into males and females.
```
Original issue reported on code.google.com by `carla...@gmail.com` on 7 Feb 2012 at 1:01
| 1.0 | easier way to manage attributes in batch - ```
From Matt Browser in email 2/6/2012:
I have used Enter Data -> Batch Tools -> Bulkload Attributes to change
attributes, but creating these CSV files is time-consuming. Could you
please add similar functionality to search results like Manage... ->
::Change Stuff:: -> Individual Attributes? This would be much
faster for the user when, for example, a large batch of specimens is
sorted into males and females.
```
Original issue reported on code.google.com by `carla...@gmail.com` on 7 Feb 2012 at 1:01
| non_main | easier way to manage attributes in batch from matt browser in email i have used enter data batch tools bulkload attributes to change attributes but creating these csv files is time consuming could you please add similar functionality to search results like manage change stuff individual attributes this would be much faster for the user when for example a large batch of specimens is sorted into males and females original issue reported on code google com by carla gmail com on feb at | 0 |
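Generating the bulkload CSV from a result set is mechanical, which is the argument for hiding it behind the Manage menu. A sketch; the header names are illustrative and should be checked against the Arctos bulkloader specification:

```python
import csv
import io

def attributes_csv(guids, attribute, value):
    """Emit a bulkload-attributes style CSV for a batch of records,
    e.g. assigning sex to every specimen in a search result."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["guid", "attribute_type", "attribute_value"])
    for guid in guids:
        writer.writerow([guid, attribute, value])
    return buf.getvalue()

print(attributes_csv(["MVZ:Mamm:1", "MVZ:Mamm:2"], "sex", "female"))
```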
897 | 4,559,605,657 | IssuesEvent | 2016-09-14 03:16:44 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | ec2 module - root volume size customization | affects_2.1 aws bug_report cloud waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
ec2 module
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes below -->
```
ansible 2.1.1.0
```
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say "N/A" for anything that is not platform-specific.
-->
Ubuntu 14.04
##### SUMMARY
<!--- Explain the problem briefly -->
I'm working on a playbook to launch a windows ec2 instance with a custom root volume size. It appears that my play is completely overlooking the 'volumes' attribute when creating the instance though. I'm not getting any errors.. but the instance is created with the default 30GB root volume size.
##### STEPS TO REPRODUCE
<!--- Paste example playbooks or commands between quotes below -->
Below is an example playbook as well as an example role that we have created. Using this configuration it appears that the volumes info is not being taken into consideration when creating an instance.
Playbook:
```
---
- include: aws.yml
- name: create server in availability zone
hosts: control
gather_facts: yes
vars_files:
- group_vars/secret.yml
vars:
- app_name: "TEST"
- app_updates: "Yes"
- app_persistence: "Yes"
- instance_type: m3.medium
- instance_count: 1
- term_protect: yes
- vpc_subnet_id: my-subnet-id
- instance_zone: my-az-info
roles:
- win_launch
```
win_launch role:
```
- name: Launch new instance
ec2:
region: "{{ region }}"
keypair: "{{ keypair }}"
zone: "{{ instance_zone }}"
group_id: [ "{{ sg_out.group_id }}" ]
image: "{{ win_ami_id }}"
instance_type: "{{ instance_type }}"
assign_public_ip: no
termination_protection: "{{ term_protect }}"
vpc_subnet_id: "{{ vpc_subnet_id }}"
wait: yes
exact_count: "{{ instance_count }}"
count_tag:
Name: "{{ app_name }}"
instance_tags:
Name: "{{ app_name }}"
Updates: "{{ app_updates }}"
Persistent: "{{ app_persistence }}"
user_data: "{{ lookup('template','userdata.txt.j2') }}"
volumes:
- device_name: /dev/xvda
volume_size: 80
volume_type: gp2
register: ec2
```
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
Expect a windows root drive to be created with the size of my picking.
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
ec2 instance is created with the default root volume size. | True | ec2 module - root volume size customization - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
ec2 module
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes below -->
```
ansible 2.1.1.0
```
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say "N/A" for anything that is not platform-specific.
-->
Ubuntu 14.04
##### SUMMARY
<!--- Explain the problem briefly -->
I'm working on a playbook to launch a windows ec2 instance with a custom root volume size. It appears that my play is completely overlooking the 'volumes' attribute when creating the instance though. I'm not getting any errors.. but the instance is created with the default 30GB root volume size.
##### STEPS TO REPRODUCE
<!--- Paste example playbooks or commands between quotes below -->
Below is an example playbook as well as an example role that we have created. Using this configuration it appears that the volumes info is not being taken into consideration when creating an instance.
Playbook:
```
---
- include: aws.yml
- name: create server in availability zone
hosts: control
gather_facts: yes
vars_files:
- group_vars/secret.yml
vars:
- app_name: "TEST"
- app_updates: "Yes"
- app_persistence: "Yes"
- instance_type: m3.medium
- instance_count: 1
- term_protect: yes
- vpc_subnet_id: my-subnet-id
- instance_zone: my-az-info
roles:
- win_launch
```
win_launch role:
```
- name: Launch new instance
ec2:
region: "{{ region }}"
keypair: "{{ keypair }}"
zone: "{{ instance_zone }}"
group_id: [ "{{ sg_out.group_id }}" ]
image: "{{ win_ami_id }}"
instance_type: "{{ instance_type }}"
assign_public_ip: no
termination_protection: "{{ term_protect }}"
vpc_subnet_id: "{{ vpc_subnet_id }}"
wait: yes
exact_count: "{{ instance_count }}"
count_tag:
Name: "{{ app_name }}"
instance_tags:
Name: "{{ app_name }}"
Updates: "{{ app_updates }}"
Persistent: "{{ app_persistence }}"
user_data: "{{ lookup('template','userdata.txt.j2') }}"
volumes:
- device_name: /dev/xvda
volume_size: 80
volume_type: gp2
register: ec2
```
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
Expect a windows root drive to be created with the size of my picking.
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
ec2 instance is created with the default root volume size. | main | module root volume size customization issue type bug report component name module ansible version ansible os environment mention the os you are running ansible from and the os you are managing or say βn aβ for anything that is not platform specific ubuntu summary i m working on a playbook to launch a windows instance with a custom root volume size it appears that my play is completely overlooking the volumes attribute when creating the instance though i m not getting any errors but the instance is created with the default root volume size steps to reproduce below is an example playbook as well as an example role that we have created using this configuration it appears that the volumes info is not being taken into consideration when creating an instance playbook include aws yml name create server in availbility zone hosts control gather facts yes vars files group vars secret yml vars app name test app updates yes app persistence yes instance type medium instance count term protect yes vpc subnet id my subnet id instance zone my az info roles win launch win launch role name launch new instance region region keypair keypair zone instance zone group id image win ami id instance type instance type assign public ip no termination protection term protect vpc subnet id vpc subnet id wait yes exact count instance count count tag name app name instance tags name app name updates app updates persistent app persistence user data lookup template userdata txt volumes device name dev xvda volume size volume type register expected results expect a windows root drive to be created with the size of my picking actual results instance is created with the default root volume size | 1 |
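Two things worth checking for this report: with `exact_count`, instances that already match `count_tag` are left untouched, so a `volumes:` spec only applies to instances launched in that run; and for Windows AMIs the root device is usually `/dev/sda1` rather than `/dev/xvda`. The shape that ultimately reaches EC2 is the block-device mapping, sketched boto-style below (not the module's internals):

```python
def root_volume_mapping(device_name="/dev/sda1", size_gb=80, volume_type="gp2"):
    """Request-parameter shape for sizing the root EBS volume at launch."""
    return [{
        "DeviceName": device_name,
        "Ebs": {"VolumeSize": size_gb, "VolumeType": volume_type},
    }]

mapping = root_volume_mapping()
print(mapping)
```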
1,548 | 6,572,237,228 | IssuesEvent | 2017-09-11 00:26:34 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | Bug in error handling Librato Annotation module | affects_2.0 bug_report waiting_on_maintainer | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
monitoring/librato_annotation
##### ANSIBLE VERSION
```
ansible 2.0.2.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### OS / ENVIRONMENT
Ubuntu 14.04
##### SUMMARY
Apparently my Librato credentials are incorrect. But the var 'e' at https://github.com/ansible/ansible-modules-extras/blob/2a0c5e2a8fd7ed3ce6d6eedd08e85e01e1617113/monitoring/librato_annotation.py#L136 seems to be misplaced. I have no knowledge of Python, but this looks like a bug to me.
Result:
```
fatal: [IP]: FAILED! => {"changed": false, "failed": true, "module_stderr": "", "module_stdout": "Traceback (most recent call last):\r\n File \"/home/ubuntu/.ansible/tmp/ansible-tmp-1463652618.03-36649702854390/librato_annotation\", line 3003, in <module>\r\n main()\r\n File \"/home/ubuntu/.ansible/tmp/ansible-tmp-1463652618.03-36649702854390/librato_annotation\", line 157, in main\r\n post_annotation(module)\r\n File \"/home/ubuntu/.ansible/tmp/ansible-tmp-1463652618.03-36649702854390/librato_annotation\", line 137, in post_annotation\r\n module.fail_json(msg=\"Request Failed\", reason=e.reason)\r\nNameError: global name 'e' is not defined\r\n", "msg": "MODULE FAILURE", "parsed": false}
```
##### STEPS TO REPRODUCE
```
- name: Annotate Librato
librato_annotation:
user: "{{ secret_librato_username }}"
api_key: "{{ secret_librato_api_key }}"
title: New deploy
name: app-deploys
source: "{{ application_env }}"
when: '"production" in group_names'
```
##### EXPECTED RESULTS
I expect a 'normal' error message with "Request Failed"
##### ACTUAL RESULTS
Got a stacktrace about "global name 'e' is not defined"
| True | Bug in error handling Librato Annotation module - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
monitoring/librato_annotation
##### ANSIBLE VERSION
```
ansible 2.0.2.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### OS / ENVIRONMENT
Ubuntu 14.04
##### SUMMARY
Apparently my Librato credentials are incorrect. But the var 'e' at https://github.com/ansible/ansible-modules-extras/blob/2a0c5e2a8fd7ed3ce6d6eedd08e85e01e1617113/monitoring/librato_annotation.py#L136 seems to be misplaced. I have no knowledge of Python, but this looks like a bug to me.
Result:
```
fatal: [IP]: FAILED! => {"changed": false, "failed": true, "module_stderr": "", "module_stdout": "Traceback (most recent call last):\r\n File \"/home/ubuntu/.ansible/tmp/ansible-tmp-1463652618.03-36649702854390/librato_annotation\", line 3003, in <module>\r\n main()\r\n File \"/home/ubuntu/.ansible/tmp/ansible-tmp-1463652618.03-36649702854390/librato_annotation\", line 157, in main\r\n post_annotation(module)\r\n File \"/home/ubuntu/.ansible/tmp/ansible-tmp-1463652618.03-36649702854390/librato_annotation\", line 137, in post_annotation\r\n module.fail_json(msg=\"Request Failed\", reason=e.reason)\r\nNameError: global name 'e' is not defined\r\n", "msg": "MODULE FAILURE", "parsed": false}
```
##### STEPS TO REPRODUCE
```
- name: Annotate Librato
librato_annotation:
user: "{{ secret_librato_username }}"
api_key: "{{ secret_librato_api_key }}"
title: New deploy
name: app-deploys
source: "{{ application_env }}"
when: '"production" in group_names'
```
##### EXPECTED RESULTS
I expect a 'normal' error message with "Request Failed"
##### ACTUAL RESULTS
Got a stacktrace about "global name 'e' is not defined"
| main | bug in error handling librato annotation module issue type bug report component name monitoring librato annotation ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides os environment ubuntu summary apparently my librato credentials are incorrect but the var e at seems to be not placed right i have no knowledge about python but seems like a bug to me result fatal failed changed false failed true module stderr module stdout traceback most recent call last r n file home ubuntu ansible tmp ansible tmp librato annotation line in r n main r n file home ubuntu ansible tmp ansible tmp librato annotation line in main r n post annotation module r n file home ubuntu ansible tmp ansible tmp librato annotation line in post annotation r n module fail json msg request failed reason e reason r nnameerror global name e is not defined r n msg module failure parsed false steps to reproduce name annotate librato librato annotation user secret librato username api key secret librato api key title new deploy name app deploys source application env when production in group names expected results i expect a normal error message with request failed actual results got a stacktrace about global name e is not defined | 1 |
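The `NameError: global name 'e' is not defined` in the traceback above is the classic symptom of a handler that references `e` without the `except` clause ever binding it. A minimal reproduction and fix (a simplified, hypothetical shape, not the Librato module's exact code; the simulated `URLError` stands in for a real failed request):

```python
import urllib.error

def post_annotation_buggy(url):
    try:
        raise urllib.error.URLError("connection refused")  # simulate a failed request
    except urllib.error.URLError:
        # BUG: the exception was never bound to a name, so referencing 'e'
        # below raises NameError ("global name 'e' is not defined" on Python 2)
        return {"msg": "Request Failed", "reason": e.reason}  # noqa: F821

def post_annotation_fixed(url):
    try:
        raise urllib.error.URLError("connection refused")
    except urllib.error.URLError as e:  # bind the exception to a name
        return {"msg": "Request Failed", "reason": e.reason}

try:
    post_annotation_buggy("https://metrics-api.librato.com")
except NameError as err:
    print(err)  # name 'e' is not defined

print(post_annotation_fixed("https://metrics-api.librato.com")["reason"])  # connection refused
```

The fix for the reported issue is exactly this one-line scoping change: bind the exception in the `except` clause before reading `e.reason`.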
2,200 | 7,761,676,281 | IssuesEvent | 2018-06-01 10:44:14 | RalfKoban/MiKo-Analyzers | https://api.github.com/repos/RalfKoban/MiKo-Analyzers | reopened | Do not mix several Linq styles in one method | Area: analyzer Area: maintainability feature | Several Linq styles in one method are relatively hard to read. Hence we should not mix them and we should verify that. | True | Do not mix several Linq styles in one method - Several Linq styles in one method are relatively hard to read. Hence we should not mix them and we should verify that. | main | do not mix several linq styles in one method several linq styles in one method are relatively hard to read hence we should not mix them and we should verify that | 1 |
2,584 | 8,784,621,717 | IssuesEvent | 2018-12-20 10:23:13 | dgets/nightMiner | https://api.github.com/repos/dgets/nightMiner | opened | Move the early_blockade() pieces to the right areas in seek_n_nav and analytics | enhancement maintainability | It's a nasty oversight to have put them all in one area. Break things up the way that it should be organized and put them in the right spots. It's already causing issues when searching for the different methods utilized in the algorithm, particularly in debugging #46. | True | Move the early_blockade() pieces to the right areas in seek_n_nav and analytics - It's a nasty oversight to have put them all in one area. Break things up the way that it should be organized and put them in the right spots. It's already causing issues when searching for the different methods utilized in the algorithm, particularly in debugging #46. | main | move the early blockade pieces to the right areas in seek n nav and analytics it s a nasty oversight to have put them all in one area break things up the way that it should be organized and put them in the right spots it s already causing issues when searching for the different methods utilized in the algorithm particularly in debugging | 1 |
222,597 | 17,464,037,942 | IssuesEvent | 2021-08-06 14:25:49 | ComputationalRadiationPhysics/picongpu | https://api.github.com/repos/ComputationalRadiationPhysics/picongpu | opened | CI hangs when determine ocation | question component: tests | Our CI seems to hang (at least for a long time) when running the validation step:

This is caused when installing `clang-format-11` via `$ apt install -y clang-format-11`.
During the install, `tzdata` needs to be configured and seems to request input, which is not given:

Is it a temporary issue or did something change in the CI? | 1.0 | CI hangs when determine ocation - Our CI seems to hang (at least for a long time) when running the validation step:

This is caused when installing `clang-format-11` via `$ apt install -y clang-format-11`.
During the install, `tzdata` needs to be configured and seems to request input, which is not given:

Is it a temporary issue or did something change in the CI? | non_main | ci hangs when determine ocation our ci seems to hang at least for a long time when running the validation step this is caused when installing clang format via apt install y clang format during the install tzdata needs to be configured and seems to request input which is not given is is a temporary issue or did something change in the ci | 0
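The tzdata prompt described in this row is the usual cause of `apt install` hanging in CI, and the standard workaround is to run apt non-interactively. A small sketch of how a CI step could construct such an invocation (illustrative only: the function name is an assumption, and the command is only built here, not executed):

```python
import os          # for merging the env in a real invocation (see comment below)
import subprocess  # imported only for the real invocation shown in the comment

def apt_install_noninteractive(packages):
    """Build an apt-get command plus environment overrides that skip
    tzdata's interactive configuration prompt (DEBIAN_FRONTEND=noninteractive)."""
    env = {"DEBIAN_FRONTEND": "noninteractive"}
    cmd = ["apt-get", "install", "-y", "--no-install-recommends"] + list(packages)
    return cmd, env

cmd, env = apt_install_noninteractive(["clang-format-11"])
print(" ".join(cmd))
# In a real CI step:
#   subprocess.run(cmd, env={**os.environ, **env}, check=True)
```

With `DEBIAN_FRONTEND=noninteractive` set, tzdata falls back to defaults instead of waiting for input, so the step no longer hangs.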
1,617 | 6,572,644,405 | IssuesEvent | 2017-09-11 04:01:38 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | blockinfile: insert text before first match | affects_2.3 feature_idea waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Feature Idea
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
blockinfile
##### ANSIBLE VERSION
<!--- Paste verbatim output from βansible --versionβ between quotes below -->
```
ansible 2.3.0 (devel 8331e915e0) last updated 2016/10/28 13:59:10 (GMT -700)
lib/ansible/modules/core: (devel de7ec946e9) last updated 2016/10/28 14:12:53 (GMT -700)
lib/ansible/modules/extras: (devel 8bd40fb622) last updated 2016/10/28 14:05:12 (GMT -700)
config file =
configured module search path = Default w/o overrides
```
##### CONFIGURATION
none
##### OS / ENVIRONMENT
n/a
##### SUMMARY
<!--- Explain the problem briefly -->
It would be helpful for the `blockinfile` module to support inserting a block of text _before_ the first match of a regular expression. My need for this arises because the `sendmail.mc` configuration file requires some settings be defined before the first `MAILER()` definition.
##### STEPS TO REPRODUCE
The last 4 lines of my `sendmail.mc` file are:
```
$ tail -n 4 sendmail.mc
dnl MASQUERADE_DOMAIN(mydomain.lan)dnl
MAILER(smtp)dnl
MAILER(procmail)dnl
dnl MAILER(cyrusv2)dnl
```
I want to insert a block of text before the first `MAILER()` definition so that the end of the file looks like this:
```
$ tail -n 8 sendmail.mc
dnl MASQUERADE_DOMAIN(mydomain.lan)dnl
dnl # BEGIN ANSIBLE MANAGED BLOCK
new_line1
new_line2
dnl # END ANSIBLE MANAGED BLOCK
MAILER(smtp)dnl
MAILER(procmail)dnl
dnl MAILER(cyrusv2)dnl
```
I am able to get the correct result with this, but I'd rather not hardcode the entire `MAILER` line:
```
$ ansible localhost, -m blockinfile -a '\
dest=sendmail.mc \
marker="dnl # {mark} ANSIBLE MANAGED BLOCK" \
block="new_line1\nnew_line2" \
insertbefore="^MAILER\(smtp\)dnl" \
'
[WARNING]: Host file not found: /etc/ansible/hosts
[WARNING]: provided hosts list is empty, only localhost is available
localhost | SUCCESS => {
"changed": true,
"msg": "Block inserted"
}
$ tail -n 8 sendmail.mc
dnl MASQUERADE_DOMAIN(mydomain.lan)dnl
dnl # BEGIN ANSIBLE MANAGED BLOCK
new_line1
new_line2
dnl # END ANSIBLE MANAGED BLOCK
MAILER(smtp)dnl
MAILER(procmail)dnl
dnl MAILER(cyrusv2)dnl
```
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
Ideally, I would be able to use a regular expression like `^MAILER\(.*` so that it would insert the text before the first match any `MAILER()` definition.
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with high verbosity (-vvvv) -->
I have to use a regular expression like `^MAILER\(smtp\)dnl`, which is brittle because the order of the `MAILER()` definitions may differ.
| True | blockinfile: insert text before first match - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Feature Idea
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
blockinfile
##### ANSIBLE VERSION
<!--- Paste verbatim output from βansible --versionβ between quotes below -->
```
ansible 2.3.0 (devel 8331e915e0) last updated 2016/10/28 13:59:10 (GMT -700)
lib/ansible/modules/core: (devel de7ec946e9) last updated 2016/10/28 14:12:53 (GMT -700)
lib/ansible/modules/extras: (devel 8bd40fb622) last updated 2016/10/28 14:05:12 (GMT -700)
config file =
configured module search path = Default w/o overrides
```
##### CONFIGURATION
none
##### OS / ENVIRONMENT
n/a
##### SUMMARY
<!--- Explain the problem briefly -->
It would be helpful for the `blockinfile` module to support inserting a block of text _before_ the first match of a regular expression. My need for this arises because the `sendmail.mc` configuration file requires some settings be defined before the first `MAILER()` definition.
##### STEPS TO REPRODUCE
The last 4 lines of my `sendmail.mc` file are:
```
$ tail -n 4 sendmail.mc
dnl MASQUERADE_DOMAIN(mydomain.lan)dnl
MAILER(smtp)dnl
MAILER(procmail)dnl
dnl MAILER(cyrusv2)dnl
```
I want to insert a block of text before the first `MAILER()` definition so that the end of the file looks like this:
```
$ tail -n 8 sendmail.mc
dnl MASQUERADE_DOMAIN(mydomain.lan)dnl
dnl # BEGIN ANSIBLE MANAGED BLOCK
new_line1
new_line2
dnl # END ANSIBLE MANAGED BLOCK
MAILER(smtp)dnl
MAILER(procmail)dnl
dnl MAILER(cyrusv2)dnl
```
I am able to get the correct result with this, but I'd rather not hardcode the entire `MAILER` line:
```
$ ansible localhost, -m blockinfile -a '\
dest=sendmail.mc \
marker="dnl # {mark} ANSIBLE MANAGED BLOCK" \
block="new_line1\nnew_line2" \
insertbefore="^MAILER\(smtp\)dnl" \
'
[WARNING]: Host file not found: /etc/ansible/hosts
[WARNING]: provided hosts list is empty, only localhost is available
localhost | SUCCESS => {
"changed": true,
"msg": "Block inserted"
}
$ tail -n 8 sendmail.mc
dnl MASQUERADE_DOMAIN(mydomain.lan)dnl
dnl # BEGIN ANSIBLE MANAGED BLOCK
new_line1
new_line2
dnl # END ANSIBLE MANAGED BLOCK
MAILER(smtp)dnl
MAILER(procmail)dnl
dnl MAILER(cyrusv2)dnl
```
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
Ideally, I would be able to use a regular expression like `^MAILER\(.*` so that it would insert the text before the first match any `MAILER()` definition.
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with high verbosity (-vvvv) -->
I have to use a regular expression like `^MAILER\(smtp\)dnl`, which is brittle because the order of the `MAILER()` definitions may differ.
| main | blockinfile insert text before first match issue type feature idea component name blockinfile ansible version ansible devel last updated gmt lib ansible modules core devel last updated gmt lib ansible modules extras devel last updated gmt config file configured module search path default w o overrides configuration none os environment n a summary it would be helpful for the blockinfile module to support inserting a block of text before the first match of a regular expression my need for this arises because the sendmail mc configuration file requires some settings be defined before the first mailer definition steps to reproduce the last lines of my sendmail mc file are tail n sendmail mc dnl masquerade domain mydomain lan dnl mailer smtp dnl mailer procmail dnl dnl mailer dnl i want to insert a block of text before the first mailer definition so that the end of the file looks like this tail n sendmail mc dnl masquerade domain mydomain lan dnl dnl begin ansible managed block new new dnl end ansible managed block mailer smtp dnl mailer procmail dnl dnl mailer dnl i am able to get the correct result with this but i d rather not hardcode the entire mailer line ansible localhost m blockinfile a dest sendmail mc marker dnl mark ansible managed block block new nnew insertbefore mailer smtp dnl host file not found etc ansible hosts provided hosts list is empty only localhost is available localhost success changed true msg block inserted tail n sendmail mc dnl masquerade domain mydomain lan dnl dnl begin ansible managed block new new dnl end ansible managed block mailer smtp dnl mailer procmail dnl dnl mailer dnl expected results ideally i would be able to use a regular expression like mailer so that it would insert the text before the first match any mailer definition actual results i have to use a regular expression like mailer smtp dnl which is brittle because the order of the mailer definitions may different | 1 |
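The behavior this row requests — insert a managed block before the first line matching a pattern such as `^MAILER\(` — can be sketched in a few lines of Python (illustrative; this is not blockinfile's implementation):

```python
import re

def insert_before_first_match(text, pattern, block):
    """Insert block before the first line matching pattern.

    Returns the text unchanged when no line matches.
    """
    lines = text.splitlines()
    regex = re.compile(pattern)
    for i, line in enumerate(lines):
        if regex.search(line):
            return "\n".join(lines[:i] + block.splitlines() + lines[i:])
    return text

mc = "\n".join([
    "dnl MASQUERADE_DOMAIN(mydomain.lan)dnl",
    "MAILER(smtp)dnl",
    "MAILER(procmail)dnl",
])
block = "\n".join([
    "dnl # BEGIN ANSIBLE MANAGED BLOCK",
    "new_line1",
    "new_line2",
    "dnl # END ANSIBLE MANAGED BLOCK",
])
result = insert_before_first_match(mc, r"^MAILER\(", block)
print(result)  # block lands immediately before MAILER(smtp)dnl
```

Because the pattern matches any `MAILER(` line, the block always lands before whichever mailer happens to come first, which is exactly what the hard-coded `^MAILER\(smtp\)dnl` workaround cannot guarantee.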
3,830 | 16,659,106,632 | IssuesEvent | 2021-06-06 03:07:11 | restqa/restqa | https://api.github.com/repos/restqa/restqa | opened | Jenkins setup configuration | Pair with maintainer good first issue | Hello,
### Background
Currently, during the RestQA initialization (command: `restqa init`), the user is asked if they would like RestQA to prepare the configuration file setup for their continuous integration tool.
Example:

### What is the actual behavior?
As of now, the supported continuous integration tools are:
* Github Action
* Gitlab CI
* Travis CI
* Bitbucket pipeline
* Circle CI
### How to reproduce the current behavior?
Install RestQA and run the command `restqa init`
### What is the expected behavior?
Based on feedback from many users, having **Jenkins** supported by RestQA would help some teams get the test data pipeline ready faster.
### Proposed solution.
Let's add **Jenkins** into the supported continuous integration.
Then when the user will select **Jenkins** from the list, RestQA will create a `Jenkinsfile` containing the expected configuration to be run by jenkins in order to setup the test automation pipeline.
Jenkins file template:
```groovy
pipeline {
agent { label 'master' }
stages {
stage('RestQA') {
steps {
script {
sh "ls -lah"
sh "docker run -v ${env.WORKSPACE}:/app restqa/restqa"
archiveArtifacts artifacts: 'report/'
}
}
}
}
}
```
Cheers.
| True | Jenkins setup configuration - Hello,
### Background
Currently, during the RestQA initialization (command: `restqa init`), the user is asked if they would like RestQA to prepare the configuration file setup for their continuous integration tool.
Example:

### What is the actual behavior?
As of now, the supported continuous integration tools are:
* Github Action
* Gitlab CI
* Travis CI
* Bitbucket pipeline
* Circle CI
### How to reproduce the current behavior?
Install RestQA and run the command `restqa init`
### What is the expected behavior?
Based on feedback from many users, having **Jenkins** supported by RestQA would help some teams get the test data pipeline ready faster.
### Proposed solution.
Let's add **Jenkins** into the supported continuous integration.
Then when the user will select **Jenkins** from the list, RestQA will create a `Jenkinsfile` containing the expected configuration to be run by jenkins in order to setup the test automation pipeline.
Jenkins file template:
```groovy
pipeline {
agent { label 'master' }
stages {
stage('RestQA') {
steps {
script {
sh "ls -lah"
sh "docker run -v ${env.WORKSPACE}:/app restqa/restqa"
archiveArtifacts artifacts: 'report/'
}
}
}
}
}
```
Cheers.
| main | jenkins setup configuration hello π π background currently during the restqa initialization command restqa init the user has been asked if he would like restqa to prepare the configuration file setup for it continuous integration tool example βοΈ what is the actual behavior as of now the supported continuous integration tools are github action gitlab ci travis ci bitbucket pipeline circle ci π΅οΈββοΈ how to reproduce the issue the current behavior install restqa and run the command restqa init π€ what is the expected behavior as a part of many feedbacks from users having jenkins supported by restqa would help on getting the test data pipeline ready faster for some teams π proposed solution let s add jenkins into the supported continuous integration then when the user will select jenkins from the list restqa will create a jenkinsfile containing the expected configuration to be run by jenkins in order to setup the test automation pipeline jenkins file template groovy pipeline agent label master stages stage restqa steps script sh ls lah sh docker run v env workspace app restqa restqa archiveartifacts artifacts report cheers | 1 |
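The scaffolding step this proposal describes amounts to filling a template and writing it out as `Jenkinsfile`. A minimal sketch of that generator (the function name and template wrapper are assumptions for illustration, not restqa's actual code; the template body mirrors the Jenkinsfile shown in the issue):

```python
JENKINSFILE_TEMPLATE = """\
pipeline {{
  agent {{ label 'master' }}
  stages {{
    stage('RestQA') {{
      steps {{
        script {{
          sh "docker run -v ${{env.WORKSPACE}}:/app {image}"
          archiveArtifacts artifacts: '{report_dir}/'
        }}
      }}
    }}
  }}
}}
"""

def render_jenkinsfile(image="restqa/restqa", report_dir="report"):
    """Render the Jenkinsfile content; doubled braces in the template
    survive str.format so the Groovy syntax stays intact."""
    return JENKINSFILE_TEMPLATE.format(image=image, report_dir=report_dir)

print(render_jenkinsfile())
```

A CLI would then write this string to `Jenkinsfile` in the project root, the same way the existing GitHub Actions / GitLab CI generators emit their config files.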
381,078 | 11,273,196,634 | IssuesEvent | 2020-01-14 16:08:36 | dhenry-KCI/FredCo-Post-Go-Live- | https://api.github.com/repos/dhenry-KCI/FredCo-Post-Go-Live- | opened | Planning - resulting EPLANS in IPS - Should not work | High Priority | PW257661 - It appears Health was able to result their review in IPS for an EPlan application. The system is not supposed to allow reviewers to do this. Please fix this for health and any other agency reviewers (except Ashley Reed who needs this ability).
This resulted in a notification being sent from IPS asking the applicant to resubmit, but they can't because Health did not result their review in PDox. IPS is not supposed to send notifications like this for eplans!


Also, the notification that was sent out shows it was sent by VKapoor, but he wasn't the last person to result their review. | 1.0 | Planning - resulting EPLANS in IPS - Should not work - PW257661 - It appears Health was able to result their review in IPS for an EPlan application. The system is not supposed to allow reviewers to do this. Please fix this for health and any other agency reviewers (except Ashley Reed who needs this ability).
This resulted in a notification being sent from IPS asking the applicant to resubmit, but they can't because Health did not result their review in PDox. IPS is not supposed to send notifications like this for eplans!


Also, the notification that was sent out shows it was sent by VKapoor, but he wasn't the last person to result their review. | non_main | planning resulting eplans in ips should not work it appears health was able to result their review in ips for an eplan application they system is not suppose to allow reviewers to do this please fix this for health and any other agency reviewers except ashley reed who needs this ability this resulted in a notification being sent from ips asking the applicant to resubmit but they cant because health did not result their review in pdox ips is not supposed to send notification like this for eplans also the notification that was sent out shows it was sent by vkapoor but he wasnt the last person to result their review | 0
263,755 | 28,056,644,698 | IssuesEvent | 2023-03-29 09:48:30 | tamirverthim/src | https://api.github.com/repos/tamirverthim/src | reopened | CVE-2020-12663 (High) detected in src0aecda14650f9fce8577e43d2a403385b5fa5bcf | Mend: dependency security vulnerability | ## CVE-2020-12663 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>src0aecda14650f9fce8577e43d2a403385b5fa5bcf</b></p></summary>
<p>
<p>Public git conversion mirror of OpenBSD's official CVS src repository. Pull requests not accepted - send diffs to the tech@ mailing list.</p>
<p>Library home page: <a href=https://github.com/openbsd/src.git>https://github.com/openbsd/src.git</a></p>
</p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (9)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/usr.sbin/unbound/iterator/iter_scrub.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/usr.sbin/unbound/iterator/iter_delegpt.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/sbin/unwind/libunbound/iterator/iter_utils.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/usr.sbin/unbound/iterator/iterator.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/usr.sbin/unbound/iterator/iter_delegpt.h</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/usr.sbin/unbound/iterator/iterator.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/usr.sbin/unbound/iterator/iter_delegpt.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/sbin/unwind/libunbound/iterator/iter_utils.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/usr.sbin/unbound/iterator/iter_scrub.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Unbound before 1.10.1 has an infinite loop via malformed DNS answers received from upstream servers.
<p>Publish Date: 2020-05-19
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-12663>CVE-2020-12663</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-12663">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-12663</a></p>
<p>Release Date: 2020-05-19</p>
<p>Fix Resolution: 1.10.1</p>
</p>
</details>
<p></p>
| True | CVE-2020-12663 (High) detected in src0aecda14650f9fce8577e43d2a403385b5fa5bcf - ## CVE-2020-12663 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>src0aecda14650f9fce8577e43d2a403385b5fa5bcf</b></p></summary>
<p>
<p>Public git conversion mirror of OpenBSD's official CVS src repository. Pull requests not accepted - send diffs to the tech@ mailing list.</p>
<p>Library home page: <a href=https://github.com/openbsd/src.git>https://github.com/openbsd/src.git</a></p>
</p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (9)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/usr.sbin/unbound/iterator/iter_scrub.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/usr.sbin/unbound/iterator/iter_delegpt.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/sbin/unwind/libunbound/iterator/iter_utils.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/usr.sbin/unbound/iterator/iterator.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/usr.sbin/unbound/iterator/iter_delegpt.h</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/usr.sbin/unbound/iterator/iterator.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/usr.sbin/unbound/iterator/iter_delegpt.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/sbin/unwind/libunbound/iterator/iter_utils.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/usr.sbin/unbound/iterator/iter_scrub.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Unbound before 1.10.1 has an infinite loop via malformed DNS answers received from upstream servers.
<p>Publish Date: 2020-05-19
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-12663>CVE-2020-12663</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-12663">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-12663</a></p>
<p>Release Date: 2020-05-19</p>
<p>Fix Resolution: 1.10.1</p>
</p>
</details>
<p></p>
| non_main | cve high detected in cve high severity vulnerability vulnerable library public git conversion mirror of openbsd s official cvs src repository pull requests not accepted send diffs to the tech mailing list library home page a href vulnerable source files usr sbin unbound iterator iter scrub c usr sbin unbound iterator iter delegpt c sbin unwind libunbound iterator iter utils c usr sbin unbound iterator iterator c usr sbin unbound iterator iter delegpt h usr sbin unbound iterator iterator c usr sbin unbound iterator iter delegpt c sbin unwind libunbound iterator iter utils c usr sbin unbound iterator iter scrub c vulnerability details unbound before has an infinite loop via malformed dns answers received from upstream servers publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution | 0 |
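The 7.5 rating in the row above follows directly from the listed metrics via the CVSS v3.0 base-score formula. A small calculator for the Scope: Unchanged case (metric weights taken from the CVSS v3.0 specification):

```python
import math

def cvss3_base(av, ac, pr, ui, c, i, a):
    """CVSS v3.0 base score, Scope: Unchanged only (weights from the spec)."""
    AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}
    AC = {"L": 0.77, "H": 0.44}
    PR = {"N": 0.85, "L": 0.62, "H": 0.27}   # Privileges Required, Scope: Unchanged
    UI = {"N": 0.85, "R": 0.62}
    CIA = {"N": 0.0, "L": 0.22, "H": 0.56}

    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    # "Round up" to one decimal place, as the spec defines it
    return math.ceil(min(impact + exploitability, 10) * 10) / 10

# CVE-2020-12663: AV:N / AC:L / PR:N / UI:N / S:U / C:N / I:N / A:H
print(cvss3_base("N", "L", "N", "N", "N", "N", "H"))  # 7.5
```

With only Availability impacted (High), the impact sub-score is 6.42 × 0.56 ≈ 3.60 and the exploitability sub-score is ≈ 3.89, which rounds up to the 7.5 shown in the score details.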
56,194 | 11,527,779,922 | IssuesEvent | 2020-02-16 00:20:10 | joomla/joomla-cms | https://api.github.com/repos/joomla/joomla-cms | closed | [4.0] System Testing for Joomla | No Code Attached Yet | ### Steps to reproduce the issue
I looked at the `read.me` in the `tests` directory and followed the [instructions here](https://github.com/joomla/joomla-cms/blob/4.0-dev/tests/Codeception/README.md).
This is my `acceptance.suite.yml`
```
# Codeception Test Suite Configuration
# Suite for acceptance tests.
# Perform tests in browser using the WebDriver or PhpBrowser.
# If you need both WebDriver and PHPBrowser tests - create a separate suite.
class_name: AcceptanceTester
modules:
enabled:
- Asserts
- JoomlaBrowser
- Helper\Acceptance
- Helper\JoomlaDb
config:
JoomlaBrowser:
url: 'http://localhost/joomla-cms4' # the url that points to the joomla installation at /tests/system/joomla-cms
browser: 'chrome'
window_size: 1920x1080
capabilities:
'goog:chromeOptions':
args: ["whitelisted-ips", "disable-gpu", "no-sandbox", "window-size=1920x1080", "--disable-dev-shm-usage"]
name: 'jane doe' # Name for the Administrator
username: 'admin' # UserName for the Administrator
password: 'admin' # Password for the Administrator
database host: 'localhost' # place where the Application is Hosted #server Address
database user: 'root' # MySQL Server user ID, usually root
database password: 'mypassword' # MySQL Server password, usually empty or root
database name: 'joomla_db4' # DB Name, at the Server
database type: 'mysqli' # type in lowercase one of the options: MySQL\MySQLi\PDO
database prefix: 'jos_' # DB Prefix for tables
install sample data: 'no' # Do you want to Download the Sample Data Along with Joomla Installation, then keep it Yes
sample data: 'Default English (GB) Sample Data' # Default Sample Data
admin email: 'admin@mydomain.com' # email Id of the Admin
language: 'English (United Kingdom)' # Language in which you want the Application to be Installed
timeout: 90 # or 90000 the same result
log_js_errors: true
Helper\JoomlaDb:
dsn: 'mysql:host=mysql;dbname=joomla_db4'
user: 'root'
password: 'mypasswor'
prefix: 'jos_'
Helper\Acceptance:
url: 'http://localhost/joomla-cms4' # the url that points to the joomla installation at /tests/system/joomla-cms - we need it twice here
MicrosoftEdgeInsiders: false # set this to true, if you are on Windows Insiders
cmsPath: '/tests/www/test-install' # ; If you want to setup your test website (document root) in a different folder, you can do that here.
localUser: 'www-data' # (Linux / Mac only) If you want to set a different owner for the CMS test folder
error_level: "E_ALL & ~E_STRICT & ~E_DEPRECATED"
env:
postgres:
modules:
config:
JoomlaBrowser:
database host: 'postgres'
database type: 'pgsql'
Helper\JoomlaDb:
dsn: 'pgsql:host=postgres;dbname=test_joomla'
mysql8:
modules:
config:
JoomlaBrowser:
database host: 'mysql8'
mysql:
# Nothing to change
```
At first everything looks pretty good.
```
/var/www/html/joomla-cms4$ ./node_modules/.bin/selenium-standalone install
----------
selenium-standalone installation starting
----------
---
selenium install:
from: https://selenium-release.storage.googleapis.com/3.141/selenium-server-standalone-3.141.5.jar
to: /var/www/html/joomla-cms4/node_modules/selenium-standalone/.selenium/selenium-server/3.141.5-server.jar
---
chrome install:
from: https://chromedriver.storage.googleapis.com/2.43/chromedriver_linux64.zip
to: /var/www/html/joomla-cms4/node_modules/selenium-standalone/.selenium/chromedriver/2.43-x64-chromedriver
---
firefox install:
from: https://github.com/mozilla/geckodriver/releases/download/v0.23.0/geckodriver-v0.23.0-linux64.tar.gz
to: /var/www/html/joomla-cms4/node_modules/selenium-standalone/.selenium/geckodriver/0.23.0-x64-geckodriver
---
File from https://chromedriver.storage.googleapis.com/2.43/chromedriver_linux64.zip has already been downloaded
---
File from https://selenium-release.storage.googleapis.com/3.141/selenium-server-standalone-3.141.5.jar has already been downloaded
---
File from https://github.com/mozilla/geckodriver/releases/download/v0.23.0/geckodriver-v0.23.0-linux64.tar.gz has already been downloaded
-----
selenium-standalone installation finished
-----
astrid@astrid-TravelMate-5760G:/var/www/html/joomla-cms4$ ./node_modules/.bin/selenium-standalone start
20:07:43.930 INFO [GridLauncherV3.parse] - Selenium server version: 3.141.5, revision: d54ebd709a
20:07:44.026 INFO [GridLauncherV3.lambda$buildLaunchers$3] - Launching a standalone Selenium Server on port 4444
2020-01-24 20:07:44.075:INFO::main: Logging initialized @387ms to org.seleniumhq.jetty9.util.log.StdErrLog
20:07:44.326 INFO [WebDriverServlet.<init>] - Initialising WebDriverServlet
20:07:44.405 INFO [SeleniumServer.boot] - Selenium Server is up and running on port 4444
Selenium started
```
But if I run
` libraries/vendor/bin/codecept run acceptance tests/Codeception/acceptance/install`
I get
```
/var/www/html/joomla-cms4$ libraries/vendor/bin/codecept run acceptance tests/Codeception/acceptance/install
Codeception PHP Testing Framework v3.1.0
Powered by PHPUnit 8.3.4 by Sebastian Bergmann and contributors.
Running with seed:
In Db.php line 555:
Db: SQLSTATE[HY000] [2002] php_network_getaddresses: getaddrinfo failed: Name or service not known while creating PDO connection
run [-o|--override OVERRIDE] [-e|--ext EXT] [--report] [--html [HTML]] [--xml [XML]] [--phpunit-xml [PHPUNIT-XML]] [--tap [TAP]] [--json [JSON]] [--colors] [--no-colors] [--silent] [--steps] [-d|--debug] [--bootstrap [BOOTSTRAP]] [--no-redirect] [--coverage [COVERAGE]] [--coverage-html [COVERAGE-HTML]] [--coverage-xml [COVERAGE-XML]] [--coverage-text [COVERAGE-TEXT]] [--coverage-crap4j [COVERAGE-CRAP4J]] [--coverage-phpunit [COVERAGE-PHPUNIT]] [--no-exit] [-g|--group GROUP] [-s|--skip SKIP] [-x|--skip-group SKIP-GROUP] [--env ENV] [-f|--fail-fast] [--no-rebuild] [--seed SEED] [--] [<suite> [<test>]]
```
What am I doing wrong?
I tested this on Ubuntu 18.04 and PHP 7.2.24 | 1.0 | non_main | 0
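One detail worth checking in the configuration above (an assumption on my part, not something confirmed in the report): the `Helper\JoomlaDb` DSN uses `host=mysql` while `JoomlaBrowser` connects to `localhost`, and `php_network_getaddresses: getaddrinfo failed` is the error PDO raises when the DSN host name does not resolve. A quick shell check for that kind of mismatch:

```shell
# Extract the host from a PDO-style DSN and compare it with the host the
# rest of the suite uses; a mismatch is one plausible cause of the
# "getaddrinfo failed" error quoted above.
dsn='mysql:host=mysql;dbname=joomla_db4'   # DSN from acceptance.suite.yml
suite_host='localhost'                     # host used by JoomlaBrowser

dsn_host=$(printf '%s' "$dsn" | sed -n 's/.*host=\([^;]*\).*/\1/p')

if [ "$dsn_host" = "$suite_host" ]; then
  echo "hosts match: $dsn_host"
else
  echo "host mismatch: DSN uses '$dsn_host', suite uses '$suite_host'"
fi
```

Also note that the two password values in the config differ (`mypassword` vs `mypasswor`), which would fail authentication even once the host resolves.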
1,252 | 5,316,658,170 | IssuesEvent | 2017-02-13 20:27:58 | christoff-buerger/racr | https://api.github.com/repos/christoff-buerger/racr | opened | replace which in Bash scripts by command -v | low maintainability | The `list-scheme-systems.bash` script is using `which` to find installed _R6RS Scheme_ systems that are officially supported by _RACR_. To use `which` in shell scripts is problematic however, since it is not a built-in command:
* External command calls are more expensive than built-ins.
* The semantics of the actual `which` executable/script called is operating system dependent (including different search strategies to find the command, exit codes and output behaviours like formatting and the device printed to).
A good overview of the problem is given at http://unix.stackexchange.com/questions/85249/why-not-use-which-what-to-use-then.
A more portable -- in terms of operating system -- alternative is to use _Bash's_ `command -v` built-in; after all, _RACR's_ scripts are for _Bash_.
Scripts using `which` are:
* `list-scheme-systems.bash`
* `profiling/atomic-petrinets/print-system-configuration.bash` | True | main | 1
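The replacement proposed above can be sketched like this (the function name is just illustrative, not taken from the actual scripts):

```shell
# Look up a command with the POSIX `command -v` built-in instead of the
# external `which`: no extra process is spawned, and the exit status and
# output format are specified by POSIX rather than varying per OS.
have_command() {
  if command -v "$1" > /dev/null 2>&1; then
    echo "found: $1"
  else
    echo "missing: $1"
  fi
}

have_command sh                       # `sh` exists on any POSIX system
have_command some-absent-scheme-impl  # a name that should not resolve
```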
84,701 | 10,554,816,464 | IssuesEvent | 2019-10-03 20:21:23 | eBay/skin | https://api.github.com/repos/eBay/skin | closed | Combobox: investigate options for adding a distinct expand/collapse button | blocked: design module: combobox resolution: wont fix status: backlog | Currently our combobox expand occurs on focus of the textbox. Clicking the chevron icon with mouse or touch simply sets focus on the textbox (which in turn causes the combobox to expand).
<img width="239" alt="Screen Shot 2019-05-17 at 12 13 41 PM" src="https://user-images.githubusercontent.com/38065/57951137-1ab8df00-789e-11e9-8ab9-e8b4e9d1883f.png">
<img width="240" alt="Screen Shot 2019-05-17 at 12 20 17 PM" src="https://user-images.githubusercontent.com/38065/57951169-26a4a100-789e-11e9-8160-603867b614b0.png">
We have a few options regarding adding a separate expand/collapse button element (as defined in MIND patterns and WCAG):
1. Don't add one. Keep the behaviour as is (a button element is not strictly required by WCAG, it is optional).
1. Make the current chevron icon an actual button element (possible discoverability issue though)
1. Create a `no-icon` modifier for the combobox, so that apps can define their own expand/collapse button and behaviour. Or simply just leave out the icon span element from the markup should do it?
1. Create a separate version of combobox which has no icon and a distinct separate button immediately adjacent to the combobox (so there would be white space between the textbox and button).
Option 3 feels like the best to me.
 | 1.0 | non_main | 0
70,311 | 15,083,476,789 | IssuesEvent | 2021-02-05 15:53:04 | gsylvie/vuln-example-apacheds-all | https://api.github.com/repos/gsylvie/vuln-example-apacheds-all | opened | CVE-2015-6420 (High) detected in commons-collections-3.2.1.jar | security vulnerability | ## CVE-2015-6420 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-collections-3.2.1.jar</b></p></summary>
<p>Types that extend and augment the Java Collections Framework.</p>
<p>Path to dependency file: vuln-example-apacheds-all/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/commons-collections/commons-collections/3.2.1/commons-collections-3.2.1.jar,vuln-example-apacheds-all/target/vuln-example-apacheds-all-2021.02.04/WEB-INF/lib/commons-collections-3.2.1.jar</p>
<p>
Dependency Hierarchy:
- apacheds-all-1.5.5.jar (Root Library)
- shared-ldap-0.9.15.jar
- :x: **commons-collections-3.2.1.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/gsylvie/vuln-example-apacheds-all/commit/f52c0954db6d663b5609fc98543749f5d6a147a0">f52c0954db6d663b5609fc98543749f5d6a147a0</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Serialized-object interfaces in certain Cisco Collaboration and Social Media; Endpoint Clients and Client Software; Network Application, Service, and Acceleration; Network and Content Security Devices; Network Management and Provisioning; Routing and Switching - Enterprise and Service Provider; Unified Computing; Voice and Unified Communications Devices; Video, Streaming, TelePresence, and Transcoding Devices; Wireless; and Cisco Hosted Services products allow remote attackers to execute arbitrary commands via a crafted serialized Java object, related to the Apache Commons Collections (ACC) library.
<p>Publish Date: 2015-12-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-6420>CVE-2015-6420</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/apache/commons-collections/tree/collections-3.2.2,https://github.com/apache/commons-collections/tree/collections-4.1">https://github.com/apache/commons-collections/tree/collections-3.2.2,https://github.com/apache/commons-collections/tree/collections-4.1</a></p>
<p>Release Date: 2015-12-15</p>
<p>Fix Resolution: commons-collections:commons-collections3.2.2,org.apache.commons:commons-collections4:4.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | non_main | 0
189,419 | 15,187,960,110 | IssuesEvent | 2021-02-15 14:29:52 | nx10/httpgd | https://api.github.com/repos/nx10/httpgd | opened | API documentation vignette | documentation | With the static IDs implemented, the httpgd core APIs (R+HTTP/WebSockets) should be stable.
The markdown documentation should be updated and compiled into a single easy to understand vignette that contains everything needed to use the httpgd API for application and package developers. | 1.0 | non_main | 0
1,034 | 4,827,596,907 | IssuesEvent | 2016-11-07 14:07:24 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | cloudformation state: absent requires template or template_url | affects_1.9 aws bug_report cloud waiting_on_maintainer | ##### Issue Type: Bug Report
##### Ansible Version: Ansible 1.9.3, cloudformation.py version 9eb0c178ec2f4e94ae29329fa9694a381157b413
##### Environment: Mac OS X El Captain
##### Summary:
Using the latest (9eb0c178ec2f4e94ae29329fa9694a381157b413) cloudformation module, the task below **does not** work:
``` yaml
- name: Removing RDS instances
cloudformation2:
stack_name: "{{ db_stack_name }}"
state: "absent"
region: "{{ aws_region }}"
```
But passing a template or template_url, even a dummy one, works:
``` yaml
- name: Removing RDS instances
cloudformation2:
stack_name: "{{ db_stack_name }}"
state: "absent"
region: "{{ aws_region }}"
template_url: "it's a bug"
```
##### Expected Results:
According to docs, template/template_url should only be required when state: present.
##### Actual Results:
```
TASK: [Removing RDS instances] ************************************************
failed: [localhost] => {"failed": true}
msg: Either template or template_url expected
cloudformation2 [localhost] =>
{
"msg": "Either template or template_url expected",
"failed": true
}
```
| True | cloudformation state: absent requires template or template_url - ##### Issue Type: Bug Report
##### Ansible Version: Ansible 1.9.3, cloudformation.py version 9eb0c178ec2f4e94ae29329fa9694a381157b413
##### Environment: Mac OS X El Captain
##### Summary:
Using the latest (9eb0c178ec2f4e94ae29329fa9694a381157b413) cloudformation module, below **does not** works:
``` yaml
- name: Removing RDS instances
cloudformation2:
stack_name: "{{ db_stack_name }}"
state: "absent"
region: "{{ aws_region }}"
```
But passing a template or template_url, even if dummy, works:
``` yaml
- name: Removing RDS instances
cloudformation2:
stack_name: "{{ db_stack_name }}"
state: "absent"
region: "{{ aws_region }}"
template_url: "it's a bug"
```
##### Expected Results:
According to docs, template/template_url should only be required when state: present.
##### Actual Results:
```
TASK: [Removing RDS instances] ************************************************
failed: [localhost] => {"failed": true}
msg: Either template or template_url expected
cloudformation2 [localhost] =>
{
"msg": "Either template or template_url expected",
"failed": true
}
```
| main | cloudformation state absent requires template or template url issue type bug report ansible version ansible cloudformation py version environment mac os x el captain summary using the latest cloudformation module below does not works yaml name removing rds instances stack name db stack name state absent region aws region but passing a template or template url even if dummy works yaml name removing rds instances stack name db stack name state absent region aws region template url it s a bug expected results according to docs template template url should only be required when state present actual results task failed failed true msg either template or template url expected msg either template or template url expected failed true | 1 |
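The expected behaviour in the cloudformation report above — template/template_url mandatory only when state is present — can be sketched in plain Python. This is an illustrative check only, not the actual Ansible module source; the parameter names simply mirror the task options from the report:

```python
# Sketch of the intended validation: a template is only mandatory when
# creating/updating a stack (state=present), never when deleting it.
# Illustrative plain Python, not the real cloudformation module code.

def validate_params(params):
    """Raise ValueError when a required parameter is missing."""
    if "stack_name" not in params:
        raise ValueError("stack_name is required")
    # Only stack creation/update needs a template; deletion does not.
    if params.get("state", "present") == "present":
        if "template" not in params and "template_url" not in params:
            raise ValueError("Either template or template_url expected")

# state=absent should pass without any template:
validate_params({"stack_name": "db-stack", "state": "absent", "region": "us-east-1"})
```

Under this check, the failing `state: "absent"` task from the report would validate cleanly without a dummy `template_url`.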
467,297 | 13,445,437,646 | IssuesEvent | 2020-09-08 11:24:21 | pints-team/pints | https://api.github.com/repos/pints-team/pints | closed | Add some kind of change log file | priority | > Perhaps to facilitate the writing of release notes, every PR should add a line to a CHANGELOG.md file (divided into sections for features, bug fixes etc)?
Anyone have any ideas how best to structure this?
`##` For every release, with `###` for features/bug-fixes ?
Todo:
- [x] Decide on format
- [x] Retroactively fill in this file with all changes since 0.3.0
- [x] Update CONTRIBUTING.md, saying adding to this file is a requirement
- [ ] Once merged, bother all open PRs with a message telling them to do this | 1.0 | Add some kind of change log file - > Perhaps to facilitate the writing of release notes, every PR should add a line to a CHANGELOG.md file (divided into sections for features, bug fixes etc)?
Anyone have any ideas how best to structure this?
`##` For every release, with `###` for features/bug-fixes ?
Todo:
- [x] Decide on format
- [x] Retroactively fill in this file with all changes since 0.3.0
- [x] Update CONTRIBUTING.md, saying adding to this file is a requirement
- [ ] Once merged, bother all open PRs with a message telling them to do this | non_main | add some kind of change log file perhaps to facilitate the writing of release notes every pr should add a line to a changelog md file divided into sections for features bug fixes etc anyone have any ideas how best to structure this for every release with for features bug fixes todo decide on format retroactively fill in this file with all changes since update contributing md saying adding to this file is a requirement once merged bother all open prs with a message telling them to do this | 0 |
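For illustration, one common shape that matches the `##` per release / `###` per category idea floated above (a hypothetical sketch with made-up entries, not necessarily the format the project adopted):

```markdown
## [0.4.0] - Unreleased
### Added
- New XYZ sampler (#123)
### Fixed
- Crash when log-likelihood returns NaN (#456)
```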
146,973 | 11,764,982,324 | IssuesEvent | 2020-03-14 15:18:20 | basilisp-lang/basilisp | https://api.github.com/repos/basilisp-lang/basilisp | closed | Basilisp Lisp representation `lrepr` function fails if keyword arguments required | bug testing | If you call `basilisp.lang.obj.lrepr` on an object which dispatches via the `@functools.singledispatch` decorator (rather than falling back to the default), then the keyword arguments for e.g. `print_level` and `print_readably` are never supplied or created, which will cause Python to raise a `KeyError` if they are needed (e.g. for nested collections).
I originally encountered this bug with the PyTest plugin (in `src/basilisp/testrunner.py`) calling `lrepr` on a nested Python list object.
The fix is simple. We can move the `@functools.singledispatch` decorator down to `basilisp.lang.obj._lrepr_fallback`, since `lrepr` itself creates and supplies default keyword arguments for all of the necessary arguments. | 1.0 | Basilisp Lisp representation `lrepr` function fails if keyword arguments required - If you call `basilisp.lang.obj.lrepr` on an object which dispatches via the `@functools.singledispatch` decorator (rather than falling back to the default), then the keyword arguments for e.g. `print_level` and `print_readably` are never supplied or created, which will cause Python to raise a `KeyError` if they are needed (e.g. for nested collections).
I originally encountered this bug with the PyTest plugin (in `src/basilisp/testrunner.py`) calling `lrepr` on a nested Python list object.
The fix is simple. We can move the `@functools.singledispatch` decorator down to `basilisp.lang.obj._lrepr_fallback`, since `lrepr` itself creates and supplies default keyword arguments for all of the necessary arguments. | non_main | basilisp lisp representation lrepr function fails if keyword arguments required if you calling basilisp lang obj lrepr on an object which dispatches via the functools singledispatch decorator rather than falling back to the default then the keyword arguments for e g print level and print readably are never supplied or created which will cause python to raise a keyerror if they are needed e g for nested collections i originally encountered this bug with the pytest plugin in src basilisp testrunner py calling lrepr on a nested python list object the fix is simple we can move the functools singledispatch decorator down the basilisp lang obj lrepr fallback since lrepr itself creates and supplies default keyword arguments for all of the necessary arguments | 0
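The fix described in the Basilisp report above can be illustrated with a small self-contained Python sketch (simplified names; this is not the actual Basilisp source): keep `@functools.singledispatch` on the fallback function, while the public `lrepr` entry point fills in default keyword arguments before dispatching, so registered implementations never hit a missing-kwarg `KeyError` on nested collections.

```python
import functools

# Dispatching happens on the *fallback*, while the public entry point
# guarantees that default keyword arguments are always supplied first.

@functools.singledispatch
def _lrepr_fallback(o, print_level=None, print_readably=True):
    # Generic case: fall back to Python's repr.
    return repr(o)

@_lrepr_fallback.register(list)
def _lrepr_list(o, print_level=None, print_readably=True):
    # Nested calls go back through lrepr, so kwargs are always present.
    return "[" + " ".join(lrepr(e, print_level=print_level,
                                print_readably=print_readably) for e in o) + "]"

def lrepr(o, **kwargs):
    # The public entry point creates the defaults before dispatching.
    kwargs.setdefault("print_level", None)
    kwargs.setdefault("print_readably", True)
    return _lrepr_fallback(o, **kwargs)

print(lrepr([1, [2, 3]]))  # prints [1 [2 3]] -- no KeyError on nesting
```

With the decorator on the entry point instead (the buggy arrangement), dispatch would bypass the code that creates the defaults, which is the failure mode the report describes.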
81,969 | 10,265,704,125 | IssuesEvent | 2019-08-22 19:31:59 | automotive-edge-computing-consortium/AECC | https://api.github.com/repos/automotive-edge-computing-consortium/AECC | opened | Missing authorization related requirements. | priority:High status:Approved type:Documentation type:Enhancement | Currently there are no requirements in the URD related to authorization. It's recommended that WG1 adds such in the URD.
WG1 to recommend to Intel and Cisco to bring this issue to WG1 SIG1 (security and privacy).
SIG1 contribution details authorization requirements. In process of submission to WG1 for review and approval
eBallot to approve the contribution has been done. | 1.0 | Missing authorization related requirements. - Currently there are no requirements in the URD related to authorization. It's recommended that WG1 adds such in the URD.
WG1 to recommend to Intel and Cisco to bring this issue to WG1 SIG1 (security and privacy).
SIG1 contribution details authorization requirements. In process of submission to WG1 for review and approval
eBallot to approve the contribution has been done. | non_main | missing authorization related requirements currently there are no requirements in the urd related to authorization it s recommended that adds such in the urd to recommend to intel and cisco to bring this issue to security and privacy contribution details authorization requirements in process of submission to for review and approval eballot to approve the contribution has been done | 0 |
450,253 | 12,992,599,951 | IssuesEvent | 2020-07-23 07:14:16 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | greatist.com - site is not usable | browser-firefox-mobile engine-gecko priority-normal | <!-- @browser: Firefox Mobile 79.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 10; Mobile; rv:79.0) Gecko/79.0 Firefox/79.0 -->
<!-- @reported_with: desktop-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/55760 -->
**URL**: https://greatist.com/health/fruit-juice-increases-risk-diabetes-090313
**Browser / Version**: Firefox Mobile 79.0
**Operating System**: Android
**Tested Another Browser**: Yes Chrome
**Problem type**: Site is not usable
**Description**: Page not loading correctly
**Steps to Reproduce**:
Nothing. Just load the page and it keeps spinning in an infinite loop
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20200713185329</li><li>channel: beta</li><li>hasTouchScreen: true</li>
</ul>
</details>
Submitted in the name of `@kedarmp`
_From [webcompat.com](https://webcompat.com/) with β€οΈ_ | 1.0 | greatist.com - site is not usable - <!-- @browser: Firefox Mobile 79.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 10; Mobile; rv:79.0) Gecko/79.0 Firefox/79.0 -->
<!-- @reported_with: desktop-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/55760 -->
**URL**: https://greatist.com/health/fruit-juice-increases-risk-diabetes-090313
**Browser / Version**: Firefox Mobile 79.0
**Operating System**: Android
**Tested Another Browser**: Yes Chrome
**Problem type**: Site is not usable
**Description**: Page not loading correctly
**Steps to Reproduce**:
Nothing. Just load the page and it keeps spinning in an infinite loop
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20200713185329</li><li>channel: beta</li><li>hasTouchScreen: true</li>
</ul>
</details>
Submitted in the name of `@kedarmp`
_From [webcompat.com](https://webcompat.com/) with β€οΈ_ | non_main | greatist com site is not usable url browser version firefox mobile operating system android tested another browser yes chrome problem type site is not usable description page not loading correctly steps to reproduce nothing just load the page and it keeps spinning in an infinite loop browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel beta hastouchscreen true submitted in the name of kedarmp from with β€οΈ | 0 |
104,625 | 16,619,891,272 | IssuesEvent | 2021-06-02 22:21:59 | rvvergara/haiku-android | https://api.github.com/repos/rvvergara/haiku-android | opened | CVE-2021-32640 (Medium) detected in ws-7.2.1.tgz | security vulnerability | ## CVE-2021-32640 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ws-7.2.1.tgz</b></p></summary>
<p>Simple to use, blazing fast and thoroughly tested websocket client and server for Node.js</p>
<p>Library home page: <a href="https://registry.npmjs.org/ws/-/ws-7.2.1.tgz">https://registry.npmjs.org/ws/-/ws-7.2.1.tgz</a></p>
<p>Path to dependency file: haiku-android/package.json</p>
<p>Path to vulnerable library: haiku-android/node_modules/jsdom/node_modules/ws/package.json</p>
<p>
Dependency Hierarchy:
- jest-25.1.0.tgz (Root Library)
- core-25.1.0.tgz
- jest-config-25.1.0.tgz
- jest-environment-jsdom-25.1.0.tgz
- jsdom-15.2.1.tgz
- :x: **ws-7.2.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/rvvergara/haiku-android/commit/42ffbdf6139415b99e9f81e66dfa9e8abecb19fe">42ffbdf6139415b99e9f81e66dfa9e8abecb19fe</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
ws is an open source WebSocket client and server library for Node.js. A specially crafted value of the `Sec-Websocket-Protocol` header can be used to significantly slow down a ws server. The vulnerability has been fixed in ws@7.4.6 (https://github.com/websockets/ws/commit/00c425ec77993773d823f018f64a5c44e17023ff). In vulnerable versions of ws, the issue can be mitigated by reducing the maximum allowed length of the request headers using the [`--max-http-header-size=size`](https://nodejs.org/api/cli.html#cli_max_http_header_size_size) and/or the [`maxHeaderSize`](https://nodejs.org/api/http.html#http_http_createserver_options_requestlistener) options.
<p>Publish Date: 2021-05-25
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-32640>CVE-2021-32640</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/websockets/ws/security/advisories/GHSA-6fc8-4gx4-v693">https://github.com/websockets/ws/security/advisories/GHSA-6fc8-4gx4-v693</a></p>
<p>Release Date: 2021-05-25</p>
<p>Fix Resolution: ws - 7.4.6</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-32640 (Medium) detected in ws-7.2.1.tgz - ## CVE-2021-32640 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ws-7.2.1.tgz</b></p></summary>
<p>Simple to use, blazing fast and thoroughly tested websocket client and server for Node.js</p>
<p>Library home page: <a href="https://registry.npmjs.org/ws/-/ws-7.2.1.tgz">https://registry.npmjs.org/ws/-/ws-7.2.1.tgz</a></p>
<p>Path to dependency file: haiku-android/package.json</p>
<p>Path to vulnerable library: haiku-android/node_modules/jsdom/node_modules/ws/package.json</p>
<p>
Dependency Hierarchy:
- jest-25.1.0.tgz (Root Library)
- core-25.1.0.tgz
- jest-config-25.1.0.tgz
- jest-environment-jsdom-25.1.0.tgz
- jsdom-15.2.1.tgz
- :x: **ws-7.2.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/rvvergara/haiku-android/commit/42ffbdf6139415b99e9f81e66dfa9e8abecb19fe">42ffbdf6139415b99e9f81e66dfa9e8abecb19fe</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
ws is an open source WebSocket client and server library for Node.js. A specially crafted value of the `Sec-Websocket-Protocol` header can be used to significantly slow down a ws server. The vulnerability has been fixed in ws@7.4.6 (https://github.com/websockets/ws/commit/00c425ec77993773d823f018f64a5c44e17023ff). In vulnerable versions of ws, the issue can be mitigated by reducing the maximum allowed length of the request headers using the [`--max-http-header-size=size`](https://nodejs.org/api/cli.html#cli_max_http_header_size_size) and/or the [`maxHeaderSize`](https://nodejs.org/api/http.html#http_http_createserver_options_requestlistener) options.
<p>Publish Date: 2021-05-25
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-32640>CVE-2021-32640</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/websockets/ws/security/advisories/GHSA-6fc8-4gx4-v693">https://github.com/websockets/ws/security/advisories/GHSA-6fc8-4gx4-v693</a></p>
<p>Release Date: 2021-05-25</p>
<p>Fix Resolution: ws - 7.4.6</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve medium detected in ws tgz cve medium severity vulnerability vulnerable library ws tgz simple to use blazing fast and thoroughly tested websocket client and server for node js library home page a href path to dependency file haiku android package json path to vulnerable library haiku android node modules jsdom node modules ws package json dependency hierarchy jest tgz root library core tgz jest config tgz jest environment jsdom tgz jsdom tgz x ws tgz vulnerable library found in head commit a href vulnerability details ws is an open source websocket client and server library for node js a specially crafted value of the sec websocket protocol header can be used to significantly slow down a ws server the vulnerability has been fixed in ws in vulnerable versions of ws the issue can be mitigated by reducing the maximum allowed length of the request headers using the and or the options publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution ws step up your open source security game with whitesource | 0 |
5,550 | 27,777,277,653 | IssuesEvent | 2023-03-16 18:07:31 | carbon-design-system/carbon | https://api.github.com/repos/carbon-design-system/carbon | closed | [a11y]: FileUploaderDropContainer component has critical violation "The tabbable element's role 'none' is not a widget role" | severity: 2 type: a11y βΏ component: file-uploader status: waiting for maintainer response π¬ adopter: product | ### Package
carbon-components-react
### Browser
Chrome
### Operating System
MacOS
### Package version
7.56.0
### React version
17.0.2
### Automated testing tool and ruleset
IBM Equal Access Accessibility Checker - Latest Deployment
### Assistive technology
_No response_
### Description
The `FileUploaderDropContainer` component has a critical a11y violation `"The tabbable element's role 'none' is not a widget role"` that was found in our product using the IBM Accessibility Checker tool and is reproducible on the [v10]( https://v7-react.carbondesignsystem.com/?path=/story/components-fileuploader--drag-and-drop-upload-container-example-application) and [v11](https://react.carbondesignsystem.com/?path=/story/components-fileuploader--drag-and-drop-upload-container-example-application) storybooks as well.
v10 storybook

v11 storybook

### WCAG 2.1 Violation
_No response_
### Reproduction/example
Carbon v10 storybook - https://v7-react.carbondesignsystem.com/?path=/story/components-fileuploader--drag-and-drop-upload-container-example-application and/or Carbon v11 storybook - https://react.carbondesignsystem.com/?path=/story/components-fileuploader--drag-and-drop-upload-container-example-application
### Steps to reproduce
On the storybook link (either v10 or v11) for the FileUploader -> FileUploaderDropContainer story, run the IBM Accessibility Checker scan. The scan should report the critical violation mentioned above (scroll to the bottom of the violations list).
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/carbon-design-system/carbon/blob/f555616971a03fd454c0f4daea184adf41fff05b/.github/CODE_OF_CONDUCT.md)
- [X] I checked the [current issues](https://github.com/carbon-design-system/carbon/issues) for duplicate problems | True | [a11y]: FileUploaderDropContainer component has critical violation "The tabbable element's role 'none' is not a widget role" - ### Package
carbon-components-react
### Browser
Chrome
### Operating System
MacOS
### Package version
7.56.0
### React version
17.0.2
### Automated testing tool and ruleset
IBM Equal Access Accessibility Checker - Latest Deployment
### Assistive technology
_No response_
### Description
The `FileUploaderDropContainer` component has a critical a11y violation `"The tabbable element's role 'none' is not a widget role"` that was found in our product using the IBM Accessibility Checker tool and is reproducible on the [v10]( https://v7-react.carbondesignsystem.com/?path=/story/components-fileuploader--drag-and-drop-upload-container-example-application) and [v11](https://react.carbondesignsystem.com/?path=/story/components-fileuploader--drag-and-drop-upload-container-example-application) storybooks as well.
v10 storybook

v11 storybook

### WCAG 2.1 Violation
_No response_
### Reproduction/example
Carbon v10 storybook - https://v7-react.carbondesignsystem.com/?path=/story/components-fileuploader--drag-and-drop-upload-container-example-application and/or Carbon v11 storybook - https://react.carbondesignsystem.com/?path=/story/components-fileuploader--drag-and-drop-upload-container-example-application
### Steps to reproduce
On the storybook link (either v10 or v11) for the FileUploader -> FileUploaderDropContainer story, run the IBM Accessibility Checker scan. The scan should report the critical violation mentioned above (scroll to the bottom of the violations list).
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/carbon-design-system/carbon/blob/f555616971a03fd454c0f4daea184adf41fff05b/.github/CODE_OF_CONDUCT.md)
- [X] I checked the [current issues](https://github.com/carbon-design-system/carbon/issues) for duplicate problems | main | fileuploaderdropcontainer component has critical violation the tabbable element s role none is not a widget role package carbon components react browser chrome operating system macos package version react version automated testing tool and ruleset ibm equal access accessibility checker latest deployment assistive technology no response description the fileuploaderdropcontainer component has a critical violation the tabbable element s role none is not a widget role that was found in our product using the ibm accessibility checker tool and is re producible on the and storybook as well storybook storybook wcag violation no response reproduction example carbon storybook and or carbon storybook steps to reproduce on the storybook link either or for the fileuploader fileuploaderdropcontainer story run the ibm accessibility checker scan the scan should report the critical violation mentioned above scroll to the bottom of the violations list code of conduct i agree to follow this project s i checked the for duplicate problems | 1 |
13,934 | 4,790,660,845 | IssuesEvent | 2016-10-31 09:33:51 | joomla/joomla-cms | https://api.github.com/repos/joomla/joomla-cms | closed | Module Position parameter not working with admin template Hathor | No Code Attached Yet | ### Steps to reproduce the issue
Set Hathor as admin template. Edit or create a module and try to set a module position.
### Expected result
Opens a drop down menu to select the position
### Actual result
The drop down of the parameter Position doesn't open. If you click on Select, a list of module positions appears, but if you click on the position links nothing happens.
| 1.0 | Module Position parameter not working with admin template Hathor - ### Steps to reproduce the issue
Set Hathor as admin template. Edit or create a module and try to set a module position.
### Expected result
Opens a drop down menu to select the position
### Actual result
The drop down of the parameter Position doesn't open. If you click on Select, a list of module positions appears, but if you click on the position links nothing happens.
| non_main | module position parameter not working with admin template hathor steps to reproduce the issue set hathor as admin template edit or create a module and try to set a module position expected result opens a drop down menu to select the position actual result the drop down of the parameter position doeasn t open if you click on select a list of module positions appears but if you click on the position links nothing happens | 0 |
1,307 | 5,554,623,446 | IssuesEvent | 2017-03-24 00:52:19 | zaproxy/zaproxy | https://api.github.com/repos/zaproxy/zaproxy | opened | Switch JSON variant parser to use a library instead of 'maual' parsing - The `VariantJSONQuery` parser 'manually' parses JSON content. It works but there are a few issues. Its manual parsing should be replaced leveraging an established library. | True | Switch JSON variant parser to use a library instead of 'maual' parsing - The `VariantJSONQuery` parser 'manually' parses JSON content. It works but there are a few issues. Its manual parsing should be replaced leveraging an established library. | main | switch json variant parser to use a library instead of maual parsing the variantjsonquery parser manually parses json content it works but there are a few issues it s manual parsing should be replaced leveraging an established library | 1
85,585 | 15,755,087,865 | IssuesEvent | 2021-03-31 01:09:42 | taddhopkins/phaser | https://api.github.com/repos/taddhopkins/phaser | opened | CVE-2021-23358 (High) detected in underscore-1.9.1.tgz | security vulnerability | ## CVE-2021-23358 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>underscore-1.9.1.tgz</b></p></summary>
<p>JavaScript's functional programming helper library.</p>
<p>Library home page: <a href="https://registry.npmjs.org/underscore/-/underscore-1.9.1.tgz">https://registry.npmjs.org/underscore/-/underscore-1.9.1.tgz</a></p>
<p>Path to dependency file: phaser/package.json</p>
<p>Path to vulnerable library: phaser/node_modules/underscore/package.json</p>
<p>
Dependency Hierarchy:
- jsdoc-3.6.3.tgz (Root Library)
- :x: **underscore-1.9.1.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package underscore from 1.13.0-0 and before 1.13.0-2, from 1.3.2 and before 1.12.1 are vulnerable to Arbitrary Code Execution via the template function, particularly when a variable property is passed as an argument as it is not sanitized.
<p>Publish Date: 2021-03-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23358>CVE-2021-23358</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23358">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23358</a></p>
<p>Release Date: 2021-03-29</p>
<p>Fix Resolution: underscore - 1.12.1,1.13.0-2</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"underscore","packageVersion":"1.9.1","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"jsdoc:3.6.3;underscore:1.9.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"underscore - 1.12.1,1.13.0-2"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-23358","vulnerabilityDetails":"The package underscore from 1.13.0-0 and before 1.13.0-2, from 1.3.2 and before 1.12.1 are vulnerable to Arbitrary Code Execution via the template function, particularly when a variable property is passed as an argument as it is not sanitized.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23358","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | True | CVE-2021-23358 (High) detected in underscore-1.9.1.tgz - ## CVE-2021-23358 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>underscore-1.9.1.tgz</b></p></summary>
<p>JavaScript's functional programming helper library.</p>
<p>Library home page: <a href="https://registry.npmjs.org/underscore/-/underscore-1.9.1.tgz">https://registry.npmjs.org/underscore/-/underscore-1.9.1.tgz</a></p>
<p>Path to dependency file: phaser/package.json</p>
<p>Path to vulnerable library: phaser/node_modules/underscore/package.json</p>
<p>
Dependency Hierarchy:
- jsdoc-3.6.3.tgz (Root Library)
- :x: **underscore-1.9.1.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package underscore from 1.13.0-0 and before 1.13.0-2, from 1.3.2 and before 1.12.1 are vulnerable to Arbitrary Code Execution via the template function, particularly when a variable property is passed as an argument as it is not sanitized.
<p>Publish Date: 2021-03-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23358>CVE-2021-23358</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23358">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23358</a></p>
<p>Release Date: 2021-03-29</p>
<p>Fix Resolution: underscore - 1.12.1,1.13.0-2</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"underscore","packageVersion":"1.9.1","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"jsdoc:3.6.3;underscore:1.9.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"underscore - 1.12.1,1.13.0-2"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-23358","vulnerabilityDetails":"The package underscore from 1.13.0-0 and before 1.13.0-2, from 1.3.2 and before 1.12.1 are vulnerable to Arbitrary Code Execution via the template function, particularly when a variable property is passed as an argument as it is not sanitized.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23358","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | non_main | cve high detected in underscore tgz cve high severity vulnerability vulnerable library underscore tgz javascript s functional programming helper library library home page a href path to dependency file phaser package json path to vulnerable library phaser node modules underscore package json dependency hierarchy jsdoc tgz root library x underscore tgz vulnerable library found in base branch master vulnerability details the package underscore from and before from and before are vulnerable to arbitrary code execution via the template function particularly when a variable property is passed as an argument as it is not sanitized publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix 
type upgrade version origin a href release date fix resolution underscore isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree jsdoc underscore isminimumfixversionavailable true minimumfixversion underscore basebranches vulnerabilityidentifier cve vulnerabilitydetails the package underscore from and before from and before are vulnerable to arbitrary code execution via the template function particularly when a variable property is passed as an argument as it is not sanitized vulnerabilityurl | 0 |
259,151 | 8,188,587,496 | IssuesEvent | 2018-08-30 02:47:15 | HippieStation/HippieStation | https://api.github.com/repos/HippieStation/HippieStation | closed | drifting in space in crit counts as crawling and saps your health | Priority: High | In-Game report from cacogen:
Server info: 24544 (Hippie Station)
| 1.0 | drifting in space in crit counts as crawling and saps your health - In-Game report from cacogen:
Server info: 24544 (Hippie Station)
| non_main | drifting in space in crit counts as crawling and saps your health in game report from cacogen server info hippie station | 0 |
252,783 | 19,072,364,719 | IssuesEvent | 2021-11-27 05:28:46 | girlscript/winter-of-contributing | https://api.github.com/repos/girlscript/winter-of-contributing | closed | How to implement UDP protocol in Android JAVA | documentation GWOC21 Android-Dev-Java Assigned | ### Description
Describe how to implement udp protocol in android java
### Domain
Android Dev (Java)
### Type of Contribution
Documentation
### Code of Conduct
- [X] I follow [Contributing Guidelines](https://github.com/girlscript/winter-of-contributing/blob/main/.github/CONTRIBUTING.md) & [Code of conduct](https://github.com/girlscript/winter-of-contributing/blob/main/.github/CODE_OF_CONDUCT.md) of this project. | 1.0 | How to implement UDP protocol in Android JAVA - ### Description
Describe how to implement udp protocol in android java
### Domain
Android Dev (Java)
### Type of Contribution
Documentation
### Code of Conduct
- [X] I follow [Contributing Guidelines](https://github.com/girlscript/winter-of-contributing/blob/main/.github/CONTRIBUTING.md) & [Code of conduct](https://github.com/girlscript/winter-of-contributing/blob/main/.github/CODE_OF_CONDUCT.md) of this project. | non_main | how to implement udp protocol in android java description describe how to implement udp protocol in android java domain android dev java type of contribution documentation code of conduct i follow of this project | 0 |
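The record above only requests the documentation; as a concrete sketch of what such a doc typically covers, UDP in Java (Android included) goes through `java.net.DatagramSocket`. The class name and message below are invented for illustration; this minimal loopback round-trip shows the basic send/receive flow:

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class UdpRoundTrip {
    /** Send msg over UDP to a local receiver socket and return what arrives. */
    public static String roundTrip(String msg) throws Exception {
        try (DatagramSocket receiver = new DatagramSocket(0);   // port 0 = OS-assigned
             DatagramSocket sender = new DatagramSocket()) {
            byte[] payload = msg.getBytes(StandardCharsets.UTF_8);
            sender.send(new DatagramPacket(payload, payload.length,
                    InetAddress.getLoopbackAddress(), receiver.getLocalPort()));

            byte[] buf = new byte[1024];
            DatagramPacket incoming = new DatagramPacket(buf, buf.length);
            receiver.setSoTimeout(2000);   // don't block forever if the packet is lost
            receiver.receive(incoming);    // blocking receive
            return new String(incoming.getData(), 0, incoming.getLength(),
                    StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("hello over UDP"));
    }
}
```

On Android the same API applies, but socket work must run off the main thread (otherwise a `NetworkOnMainThreadException` is thrown) and the app needs the `INTERNET` permission; UDP also gives no delivery guarantee, which the receive timeout above acknowledges.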
10 | 2,515,038,306 | IssuesEvent | 2015-01-15 16:05:11 | simplesamlphp/simplesamlphp | https://api.github.com/repos/simplesamlphp/simplesamlphp | opened | Cleanup the SimpleSAML_Session class | enhancement maintainability started | The following must be done:
* Remove the default `NULL` value for the `$authority` parameter in the `getAuthState()` method.
* Remove the `getAttribute()` method.
* Remove the `setAttribute()` method.
* Remove the `setAttributes()` method.
* Remove the `getInstance()` method.
* Refactor.
* The error handling code should disappear.
* `session.disable_fallback` defaults to TRUE and goes away (exception is always thrown)
* Some functionality to avoid recursive loops, maybe solved in `Logger::getTracktId()`.
* Remove the `getAuthority()` method.
* Remove the `getAuthnRequest()` method.
* Remove the `setAuthnRequest()` method.
* Remove the `getIdP()` method.
* Remove the `setIdP()` method.
* Remove the `getSessionIndex()` method.
* Remove the `setSessionIndex()` method.
* Remove the `getNameId()` method.
* Remove the `setNameId()` method.
* Remove the `setSessionDuration()` method.
* Remove the `remainingTime()` method.
* Remove the `isAuthenticated()` method.
* Remove the `getAuthInstant()` method.
* Remove the `getAttributes()` method.
* Remove the `getSize()` method.
* Remove the `get_sp_list()` method.
* Remove the `expireDataLogout()` method.
* Remove the `getLogoutState()` and `setLogoutState()` methods. If there are callers, replace the call with direct access to `$state['LogoutState']`.
* Remove the `$authority` property.
* Remove the `DATA_TIMEOUT_LOGOUT` constant. Check dependencies in:
* `lib/SimpleSAML/Auth/Source.php` and
* `lib/SimpleSAML/IdP.php` | True | Cleanup the SimpleSAML_Session class - The following must be done:
* Remove the default `NULL` value for the `$authority` parameter in the `getAuthState()` method.
* Remove the `getAttribute()` method.
* Remove the `setAttribute()` method.
* Remove the `setAttributes()` method.
* Remove the `getInstance()` method.
* Refactor.
* The error handling code should disappear.
* `session.disable_fallback` defaults to TRUE and goes away (exception is always thrown)
* Some functionality to avoid recursive loops, maybe solved in `Logger::getTracktId()`.
* Remove the `getAuthority()` method.
* Remove the `getAuthnRequest()` method.
* Remove the `setAuthnRequest()` method.
* Remove the `getIdP()` method.
* Remove the `setIdP()` method.
* Remove the `getSessionIndex()` method.
* Remove the `setSessionIndex()` method.
* Remove the `getNameId()` method.
* Remove the `setNameId()` method.
* Remove the `setSessionDuration()` method.
* Remove the `remainingTime()` method.
* Remove the `isAuthenticated()` method.
* Remove the `getAuthInstant()` method.
* Remove the `getAttributes()` method.
* Remove the `getSize()` method.
* Remove the `get_sp_list()` method.
* Remove the `expireDataLogout()` method.
* Remove the `getLogoutState()` and `setLogoutState()` methods. If there are callers, replace the call with direct access to `$state['LogoutState']`.
* Remove the `$authority` property.
* Remove the `DATA_TIMEOUT_LOGOUT` constant. Check dependencies in:
* `lib/SimpleSAML/Auth/Source.php` and
* `lib/SimpleSAML/IdP.php` | main | cleanup the simplesaml session class the following must be done remove the default null value for the authority parameter in the getauthstate method remove the getattribute method remove the setattribute method remove the setattributes method remove the getinstance method refactor the error handling code should disappear session disable fallback defaults to true and goes away exception is always thrown some functionality to avoid recursive loops maybe solved in logger gettracktid remove the getauthority method remove the getauthnrequest method remove the setauthnrequest method remove the getidp method remove the setidp method remove the getsessionindex method remove the setsessionindex method remove the getnameid method remove the setnameid method remove the setsessionduration method remove the remainingtime method remove the isauthenticated method remove the getauthinstant method remove the getattributes method remove the getsize method remove the get sp list method remove the expiredatalogout method remove the getlogoutstate and setlogoutstate methods if there s callers change the call with a direct access to state remove the authority property remove the data timeout logout constant check dependencies in lib simplesaml auth source php and lib simplesaml idp php | 1 |
4,168 | 19,985,325,730 | IssuesEvent | 2022-01-30 15:19:22 | BioArchLinux/Packages | https://api.github.com/repos/BioArchLinux/Packages | opened | [MAINTAIN] r-travel | maintain | <!--
Please report the error of one package in one issue! Use multi issues to report multi bugs.
Thanks!
-->
**Log of the bug**
<details>
```
you forget to ‘#include <cstddef>’?
class_Cache_block.h:14:23: error: expected ‘)’ before ‘size’
14 | Cache_block(size_t size);
| ~ ^~~~~
| )
class_Cache_block.h:21:5: error: ‘size_t’ does not name a type
21 | size_t use_count() const;
| ^~~~~~
class_Cache_block.h:21:5: note: ‘size_t’ is defined in header ‘<cstddef>’; did you forget to ‘#include <cstddef>’?
class_Cache_block.h:23:5: error: ‘size_t’ does not name a type
23 | size_t get_size() const;
| ^~~~~~
class_Cache_block.h:23:5: note: ‘size_t’ is defined in header ‘<cstddef>’; did you forget to ‘#include <cstddef>’?
class_Cache_block.h:27:5: error: ‘size_t’ does not name a type
27 | size_t get_serialize_size() const;
| ^~~~~~
class_Cache_block.h:27:5: note: ‘size_t’ is defined in header ‘<cstddef>’; did you forget to ‘#include <cstddef>’?
g++ -std=gnu++11 -I"/usr/include/R/" -DNDEBUG `pkg-config fuse --cflags` -I'/usr/lib/R/library/Rcpp/include' -D_FORTIFY_SOURCE=2 -fpic -march=x86-64 -mtune=generic -O2 -pipe -fno-plt -c class_Memory_mapped.cpp -o class_Memory_mapped.o
g++ -std=gnu++11 -I"/usr/include/R/" -DNDEBUG `pkg-config fuse --cflags` -I'/usr/lib/R/library/Rcpp/include' -D_FORTIFY_SOURCE=2 -fpic -march=x86-64 -mtune=generic -O2 -pipe -fno-plt -c class_Protect_guard.cpp -o class_Protect_guard.o
class_Filesystem_file_data.cpp: In member function ‘size_t Filesystem_file_data::get_serialize_size()’:
class_Filesystem_file_data.cpp:70:26: error: ‘const class Cache_block’ has no member named ‘get_serialize_size’
70 | size += i.second.get_serialize_size();
| ^~~~~~~~~~~~~~~~~~
class_Filesystem_file_data.cpp: In member function ‘void Filesystem_file_data::serialize(void*)’:
class_Filesystem_file_data.cpp:92:39: error: ‘const class Cache_block’ has no member named ‘get_serialize_size’
92 | size_t buffer_size = i.second.get_serialize_size();
| ^~~~~~~~~~~~~~~~~~
class_Filesystem_file_data.cpp: In member function ‘void Filesystem_cache_index_iterator::compute_block_info()’:
class_Filesystem_file_data.cpp:177:39: error: ‘class Cache_block’ has no member named ‘get_size’
177 | block_length = block_iter->second.get_size() / type_size;
| ^~~~~~~~
make: *** [/usr/lib64/R/etc/Makeconf:177: class_Filesystem_file_data.o] Error 1
make: *** Waiting for unfinished jobs....
ERROR: compilation failed for package ‘Travel’
```
</details>
**Packages (please complete the following information):**
- Package Name: r-travel
**Description**
https://log.bioarchlinux.org/2022-01-29T15%3A51%3A16/r-travel.log
| True | [MAINTAIN] r-travel - <!--
Please report the error of one package in one issue! Use multi issues to report multi bugs.
Thanks!
-->
**Log of the bug**
<details>
```
you forget to ‘#include <cstddef>’?
class_Cache_block.h:14:23: error: expected ‘)’ before ‘size’
14 | Cache_block(size_t size);
| ~ ^~~~~
| )
class_Cache_block.h:21:5: error: ‘size_t’ does not name a type
21 | size_t use_count() const;
| ^~~~~~
class_Cache_block.h:21:5: note: ‘size_t’ is defined in header ‘<cstddef>’; did you forget to ‘#include <cstddef>’?
class_Cache_block.h:23:5: error: ‘size_t’ does not name a type
23 | size_t get_size() const;
| ^~~~~~
class_Cache_block.h:23:5: note: ‘size_t’ is defined in header ‘<cstddef>’; did you forget to ‘#include <cstddef>’?
class_Cache_block.h:27:5: error: ‘size_t’ does not name a type
27 | size_t get_serialize_size() const;
| ^~~~~~
class_Cache_block.h:27:5: note: ‘size_t’ is defined in header ‘<cstddef>’; did you forget to ‘#include <cstddef>’?
g++ -std=gnu++11 -I"/usr/include/R/" -DNDEBUG `pkg-config fuse --cflags` -I'/usr/lib/R/library/Rcpp/include' -D_FORTIFY_SOURCE=2 -fpic -march=x86-64 -mtune=generic -O2 -pipe -fno-plt -c class_Memory_mapped.cpp -o class_Memory_mapped.o
g++ -std=gnu++11 -I"/usr/include/R/" -DNDEBUG `pkg-config fuse --cflags` -I'/usr/lib/R/library/Rcpp/include' -D_FORTIFY_SOURCE=2 -fpic -march=x86-64 -mtune=generic -O2 -pipe -fno-plt -c class_Protect_guard.cpp -o class_Protect_guard.o
class_Filesystem_file_data.cpp: In member function ‘size_t Filesystem_file_data::get_serialize_size()’:
class_Filesystem_file_data.cpp:70:26: error: ‘const class Cache_block’ has no member named ‘get_serialize_size’
70 | size += i.second.get_serialize_size();
| ^~~~~~~~~~~~~~~~~~
class_Filesystem_file_data.cpp: In member function ‘void Filesystem_file_data::serialize(void*)’:
class_Filesystem_file_data.cpp:92:39: error: ‘const class Cache_block’ has no member named ‘get_serialize_size’
92 | size_t buffer_size = i.second.get_serialize_size();
| ^~~~~~~~~~~~~~~~~~
class_Filesystem_file_data.cpp: In member function ‘void Filesystem_cache_index_iterator::compute_block_info()’:
class_Filesystem_file_data.cpp:177:39: error: ‘class Cache_block’ has no member named ‘get_size’
177 | block_length = block_iter->second.get_size() / type_size;
| ^~~~~~~~
make: *** [/usr/lib64/R/etc/Makeconf:177: class_Filesystem_file_data.o] Error 1
make: *** Waiting for unfinished jobs....
ERROR: compilation failed for package ‘Travel’
```
</details>
**Packages (please complete the following information):**
- Package Name: r-travel
**Description**
https://log.bioarchlinux.org/2022-01-29T15%3A51%3A16/r-travel.log
| main | r travel please report the error of one package in one issue use multi issues to report multi bugs thanks log of the bug you forget to β include β class cache block h error expected β β before βsizeβ cache block size t size class cache block h error βsize tβ does not name a type size t use count const class cache block h note βsize tβ is defined in header β β did you forget to β include β class cache block h error βsize tβ does not name a type size t get size const class cache block h note βsize tβ is defined in header β β did you forget to β include β class cache block h error βsize tβ does not name a type size t get serialize size const class cache block h note βsize tβ is defined in header β β did you forget to β include β g std gnu i usr include r dndebug pkg config fuse cflags i usr lib r library rcpp include d fortify source fpic march mtune generic pipe fno plt c class memory mapped cpp o class memory mapped o g std gnu i usr include r dndebug pkg config fuse cflags i usr lib r library rcpp include d fortify source fpic march mtune generic pipe fno plt c class protect guard cpp o class protect guard o class filesystem file data cpp in member function βsize t filesystem file data get serialize size β class filesystem file data cpp error βconst class cache blockβ has no member named βget serialize sizeβ size i second get serialize size class filesystem file data cpp in member function βvoid filesystem file data serialize void β class filesystem file data cpp error βconst class cache blockβ has no member named βget serialize sizeβ size t buffer size i second get serialize size class filesystem file data cpp in member function βvoid filesystem cache index iterator compute block info β class filesystem file data cpp error βclass cache blockβ has no member named βget sizeβ block length block iter second get size type size make error make waiting for unfinished jobs error compilation failed for package βtravelβ packages please complete the following 
information package name r travel description | 1 |
1,132 | 5,146,365,135 | IssuesEvent | 2017-01-13 00:52:55 | GoogleChrome/lighthouse | https://api.github.com/repos/GoogleChrome/lighthouse | opened | Aggregation refactor | architecture question | I'm currently working on a change so you can easily select what sort of audits you want to run.
In reality this means need to build a full config that includes all these things:
> passes, gatherers per pass, audit list, aggregations (both parent and child aggregations), and the audit list within each of those aggregations
There's a lot of relationships here to juggle and generally it works, but one particular sore spot is "aggregations." What are aggregations anyways? :)
-------
I have a proposal for adjusting our `config/default.json`, which will have implications for the pipeline from auditResults => report.
Basically, I'm interested in replacing our [big aggregations blob](https://github.com/GoogleChrome/lighthouse/blob/ce7927307d1d4a15d81eea6b4deda57cb38d6c25/lighthouse-core/config/default.json#L98-L120) with something like the following:
* **`reportCategories` array**. These are basically our parent aggregations like "PWA", "Fancier Stuff" etc. It's very presentational, and how a visual report will be generated with some semblance of order.
* **`auditGroups` array**. The meat of our aggregations, but it's a flat list with no nesting. so `"App can load on offline/flaky connections"` is a sibling to `"Using modern protocols"` and `"Page load performance is fast"`.
* **`auditGroupTags` object**. Describes the few tags that are applied to each `auditGroup`. These will be used for users configuring what they want to evaluate on a given run.
Each `auditGroup` (née aggregation) gains an `id` property, a `reportCategory` property, and a `groupTags` one. They no longer have an `items` property containing 1 or more children, which a lot of code has special handling for. yay. :)
Here's an excerpt of a revised json around the [aggregations part](https://github.com/GoogleChrome/lighthouse/blob/ce7927307d1d4a15d81eea6b4deda57cb38d6c25/lighthouse-core/config/default.json#L98-L120):
```js
], // end of audits array
"reportCategories": [
{
"name": "Progressive Web App",
"description": "These audits validate the aspects of a Progressive Web App.",
"id": "pwa_category",
"scored": true
}, {
"name": "Fancier stuff",
"description": "A list of newer features that you could be using in your app. These audits do not affect your score and are just suggestions.",
"id": "fancy_bp_category",
"scored": false
}, {
"name": "Performance Metrics",
"description": "These encapsulate your app's performance.",
"id": "perf_diagnostics_category",
"scored": false
} // and "Best Practices"..
],
"auditGroupTags": {
"pwa": "Progressive Web App audits",
"perf": "Performance metrics & diagnostics",
"best_practices": "Developer best practices"
},
"auditGroups": [
{
"name": "New JavaScript features",
"id": "fancy_best_practices",
"reportCategory": "fancy_bp_category",
"groupTags": ["best_practices"],
"audits": {
"no-datenow": {
"expectedValue": false
},
"no-console-time": {
"expectedValue": false
}
}
}, {
"name": "App can load on offline/flaky connections",
"description": "Ensuring your web app can respond when the network connection is unavailable or flaky is critical to providing your users a good experience. This is achieved through use of a [Service Worker](https://developers.google.com/web/fundamentals/primers/service-worker/).",
"id": "offline",
"reportCategory": "pwa_category",
"groupTags": ["pwa"],
"audits": {
"service-worker": {
"expectedValue": true,
"weight": 1
},
"works-offline": {
"expectedValue": true,
"weight": 1
}
}
}, {
"name": "Page load performance is fast",
"description": "Users notice if sites and apps don't perform well. These top-level metrics capture the most important perceived performance concerns.",
"id": "perf_metrics",
"reportCategory": "pwa_category",
"groupTags": ["pwa", "perf"],
"audits": {
"first-meaningful-paint": {
"expectedValue": 100,
"weight": 1
},
"speed-index-metric": {
"expectedValue": 100,
"weight": 1
},
"estimated-input-latency": {
"expectedValue": 100,
"weight": 1
},
"time-to-interactive": {
"expectedValue": 100,
"weight": 1
},
"scrolling-60fps": {
"expectedValue": true,
"weight": 0,
"comingSoon": true,
"description": "Content scrolls at 60fps",
"category": "UX"
},
"touch-150ms": {
"expectedValue": true,
"weight": 0,
"comingSoon": true,
"description": "Touch input gets a response in < 150ms",
"category": "UX"
},
"fmp-no-jank": {
"expectedValue": true,
"weight": 0,
"comingSoon": true,
"description": "App is interactive without jank after the first meaningful paint",
"category": "UX"
}
}
}, {
// .. the rest of the auditGroups ....
```
(Side note: we can now nuke `categorizable`, since today it is only there for the "toggle to view report by technology vs user feature".)
---------
I think this approach would really help everyone understand the code quite a bit better. But curious if others think it improves clarity.
WDYT? | 1.0 | Aggregation refactor - I'm currently working on a change so you can easily select what sort of audits you want to run.
In reality this means need to build a full config that includes all these things:
> passes, gatherers per pass, audit list, aggregations (both parent and child aggregations), and the audit list within each of those aggregations
There's a lot of relationships here to juggle and generally it works, but one particular sore spot is "aggregations." What are aggregations anyways? :)
-------
I have a proposal for adjusting our `config/default.json`, which will have implications for the pipeline from auditResults => report.
Basically, I'm interested in replacing our [big aggregations blob](https://github.com/GoogleChrome/lighthouse/blob/ce7927307d1d4a15d81eea6b4deda57cb38d6c25/lighthouse-core/config/default.json#L98-L120) with something like the following:
* **`reportCategories` array**. These are basically our parent aggregations like "PWA", "Fancier Stuff" etc. It's very presentational, and how a visual report will be generated with some semblance of order.
* **`auditGroups` array**. The meat of our aggregations, but it's a flat list with no nesting. so `"App can load on offline/flaky connections"` is a sibling to `"Using modern protocols"` and `"Page load performance is fast"`.
* **`auditGroupTags` object**. Describes the few tags that are applied to each `auditGroup`. These will be used for users configuring what they want to evaluate on a given run.
Each `auditGroup` (née aggregation) gains an `id` property, a `reportCategory` property, and a `groupTags` one. They no longer have an `items` property containing 1 or more children, which a lot of code has special handling for. yay. :)
Here's an excerpt of a revised json around the [aggregations part](https://github.com/GoogleChrome/lighthouse/blob/ce7927307d1d4a15d81eea6b4deda57cb38d6c25/lighthouse-core/config/default.json#L98-L120):
```js
], // end of audits array
"reportCategories": [
{
"name": "Progressive Web App",
"description": "These audits validate the aspects of a Progressive Web App.",
"id": "pwa_category",
"scored": true
}, {
"name": "Fancier stuff",
"description": "A list of newer features that you could be using in your app. These audits do not affect your score and are just suggestions.",
"id": "fancy_bp_category",
"scored": false
}, {
"name": "Performance Metrics",
"description": "These encapsulate your app's performance.",
"id": "perf_diagnostics_category",
"scored": false
} // and "Best Practices"..
],
"auditGroupTags": {
"pwa": "Progressive Web App audits",
"perf": "Performance metrics & diagnostics",
"best_practices": "Developer best practices"
},
"auditGroups": [
{
"name": "New JavaScript features",
"id": "fancy_best_practices",
"reportCategory": "fancy_bp_category",
"groupTags": ["best_practices"],
"audits": {
"no-datenow": {
"expectedValue": false
},
"no-console-time": {
"expectedValue": false
}
}
}, {
"name": "App can load on offline/flaky connections",
"description": "Ensuring your web app can respond when the network connection is unavailable or flaky is critical to providing your users a good experience. This is achieved through use of a [Service Worker](https://developers.google.com/web/fundamentals/primers/service-worker/).",
"id": "offline",
"reportCategory": "pwa_category",
"groupTags": ["pwa"],
"audits": {
"service-worker": {
"expectedValue": true,
"weight": 1
},
"works-offline": {
"expectedValue": true,
"weight": 1
}
}
}, {
"name": "Page load performance is fast",
"description": "Users notice if sites and apps don't perform well. These top-level metrics capture the most important perceived performance concerns.",
"id": "perf_metrics",
"reportCategory": "pwa_category",
"groupTags": ["pwa", "perf"],
"audits": {
"first-meaningful-paint": {
"expectedValue": 100,
"weight": 1
},
"speed-index-metric": {
"expectedValue": 100,
"weight": 1
},
"estimated-input-latency": {
"expectedValue": 100,
"weight": 1
},
"time-to-interactive": {
"expectedValue": 100,
"weight": 1
},
"scrolling-60fps": {
"expectedValue": true,
"weight": 0,
"comingSoon": true,
"description": "Content scrolls at 60fps",
"category": "UX"
},
"touch-150ms": {
"expectedValue": true,
"weight": 0,
"comingSoon": true,
"description": "Touch input gets a response in < 150ms",
"category": "UX"
},
"fmp-no-jank": {
"expectedValue": true,
"weight": 0,
"comingSoon": true,
"description": "App is interactive without jank after the first meaningful paint",
"category": "UX"
}
}
}, {
// .. the rest of the auditGroups ....
```
(Side note: we can now nuke `categorizable`, since today it is only there for the "toggle to view report by technology vs user feature".)
---------
I think this approach would really help everyone understand the code quite a bit better. But curious if others think it improves clarity.
WDYT? | non_main | aggregation refactor i m currently working on a change so you can easily select what sort of audits you want to run in reality this means need to build a full config that includes all these things passes gatherers per pass audit list aggregations both parent and child aggregations and the audit list within each of those aggregations there s a lot of relationships here to juggle and generally it works but one particular sore spot is aggregations what are aggregations anyways i have a proposal for adjusting our config default json which will have implications for the pipeline from auditresults report basically i m interested in replacing our with something like the following reportcategories array these are basically our parent aggregations like pwa fancier stuff etc it s very presentational and how a visual report will be generated with some semblance of order auditgroups array the meat of our aggregations but it s a flat list with no nesting so app can load on offline flaky connections is a sibling to using modern protocols and page load performance is fast auditgrouptags object describes the few tags that are applied to each auditgroup these will be used for users configuring what they want to evaluate on a given run each auditgroup nΓ©e aggregation gains an id property a reportcategory property and a grouptags one they don t have any more items property which has or more children which a lot of code has special handling for yay here s an excerpt of a revised json around the js end of audits array reportcategories name progressive web app description these audits validate the aspects of a progressive web app id pwa category scored true name fancier stuff description a list of newer features that you could be using in your app these audits do not affect your score and are just suggestions id fancy bp category scored false name performance metrics description these encapsulate your app s performance id perf diagnostics category scored false and 
best practices auditgrouptags pwa progressive web app audits perf performance metrics diagnostics best practices developer best practices auditgroups name new javascript features id fancy best practices reportcategory fancy bp category grouptags audits no datenow expectedvalue false no console time expectedvalue false name app can load on offline flaky connections description ensuring your web app can respond when the network connection is unavailable or flaky is critical to providing your users a good experience this is achieved through use of a id offline reportcategory pwa category grouptags audits service worker expectedvalue true weight works offline expectedvalue true weight name page load performance is fast description users notice if sites and apps don t perform well these top level metrics capture the most important perceived performance concerns id perf metrics reportcategory pwa category grouptags audits first meaningful paint expectedvalue weight speed index metric expectedvalue weight estimated input latency expectedvalue weight time to interactive expectedvalue weight scrolling expectedvalue true weight comingsoon true description content scrolls at category ux touch expectedvalue true weight comingsoon true description touch input gets a response in category ux fmp no jank expectedvalue true weight comingsoon true description app is interactive without jank after the first meaningful paint category ux the rest of the auditgroups side note we can now nuke categorizable today as it only was there for the toggle to view report by technology vs user feature i think this approach would really help everyone understand the code quite a bit better but curious if others think it improves clarity wdyt | 0 |
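To make the Lighthouse proposal in the record above concrete: with a flat `auditGroups` list, run-time selection becomes a simple tag filter, and report rendering a group-by on `reportCategory`. A rough sketch, in Java rather than the project's JavaScript, with invented fixture data mirroring the JSON keys:

```java
import java.util.*;
import java.util.stream.Collectors;

public class AuditGroupSelection {
    /** Mirrors one entry of the proposed flat "auditGroups" array. */
    static final class AuditGroup {
        final String id;
        final String reportCategory;
        final Set<String> groupTags;
        AuditGroup(String id, String reportCategory, Set<String> groupTags) {
            this.id = id;
            this.reportCategory = reportCategory;
            this.groupTags = groupTags;
        }
    }

    /** Keep only groups sharing at least one tag with the user's selection. */
    static List<AuditGroup> selectByTags(List<AuditGroup> all, Set<String> wanted) {
        return all.stream()
                .filter(g -> !Collections.disjoint(g.groupTags, wanted))
                .collect(Collectors.toList());
    }

    /** Presentation step: bucket the surviving flat groups by parent category. */
    static Map<String, List<String>> byCategory(List<AuditGroup> groups) {
        return groups.stream().collect(Collectors.groupingBy(
                g -> g.reportCategory,
                Collectors.mapping(g -> g.id, Collectors.toList())));
    }

    public static void main(String[] args) {
        List<AuditGroup> all = Arrays.asList(
                new AuditGroup("offline", "pwa_category", Set.of("pwa")),
                new AuditGroup("perf_metrics", "pwa_category", Set.of("pwa", "perf")),
                new AuditGroup("fancy_best_practices", "fancy_bp_category",
                        Set.of("best_practices")));

        // Selecting the "perf" tag keeps only perf_metrics; no nested items to unwrap.
        System.out.println(byCategory(selectByTags(all, Set.of("perf"))));
    }
}
```

The flat shape is what makes both steps one-liners: there is no child `items` list to recurse into, so the same filter works for user-configured runs and for building each report category in order.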
4,268 | 21,336,612,912 | IssuesEvent | 2022-04-18 15:17:19 | rollerderby/scoreboard | https://api.github.com/repos/rollerderby/scoreboard | opened | Announce v5.0.0 and forward feedback on some issues reported on Facebook | maintainer needed documentation | **CRG 5.0.0 has been [released](https://github.com/rollerderby/scoreboard/releases/tag/v5.0.0).** This should be announced in the Facebook group.
Also when preparing the release I went through the public Facebook group to see if any feedback on the beta was posted there and found a couple of posts where I think I can help the OPs. Since I can't post there myself it'd be nice if these answers could be forwarded to the respective posts.
#### Color Picker Default Color ([Post by Benjamin Doyle on March 17](https://www.facebook.com/groups/derbyscoreboard/posts/4890633761018357/))
This is included in 5.0.0.
#### Operator Page not loading on Chrome v100 ([Post by Dave Almond on March 30](https://www.facebook.com/groups/derbyscoreboard/posts/4923083001106766/))
The files reported as not found have been removed going from 3.x to 4.0.0. Since it works using a different IP address I'm assuming there is a file from 3.x stuck in the browser cache and this causes the issue. If so, clearing the browser cache should resolve it.
#### Disappeared "Box" buttons ([Post by Yvonne Dietrich on April 1](https://www.facebook.com/groups/derbyscoreboard/posts/4927068467374886/))
This is intended behavior when the lineup tracking functionality is used in order to avoid wrong game data when SBO and LT try to start a box trip at about the same time, which is quite annoying to fix. There is an option in the settings to disable LT functionality which will return the buttons.
#### Missing Jam list ([Post by Ioan Wigmore on April 1](https://www.facebook.com/groups/derbyscoreboard/posts/4929261383822261/))
I managed to reproduce this by using "Start New Game" when the prior game had 0 jams. Reloading the screen consistently fixed the problem for me.
Since 5.0.0 forces a reload whenever a new game is started, the problem should not occur there.
#### Keybindings not disabled on other tabs ([Post by Carina Gerry on March 16](https://www.facebook.com/groups/derbyscoreboard/posts/4886098571471876/))
This bug is fixed in 5.0.0. (It was present since 4.1.1 - I'm surprised it wasn't noticed earlier.) | True | Announce v5.0.0 and forward feedback on some issues reported on Facebook - **CRG 5.0.0 has been [released](https://github.com/rollerderby/scoreboard/releases/tag/v5.0.0).** This should be announced in the Facebook group.
Also when preparing the release I went through the public Facebook group to see if any feedback on the beta was posted there and found a couple of posts where I think I can help the OPs. Since I can't post there myself it'd be nice if these answers could be forwarded to the respective posts.
#### Color Picker Default Color ([Post by Benjamin Doyle on March 17](https://www.facebook.com/groups/derbyscoreboard/posts/4890633761018357/))
This is included in 5.0.0.
#### Operator Page not loading on Chrome v100 ([Post by Dave Almond on March 30](https://www.facebook.com/groups/derbyscoreboard/posts/4923083001106766/))
The files reported as not found have been removed going from 3.x to 4.0.0. Since it works using a different IP address I'm assuming there is a file from 3.x stuck in the browser cache and this causes the issue. If so, clearing the browser cache should resolve it.
#### Disappeared "Box" buttons ([Post by Yvonne Dietrich on April 1](https://www.facebook.com/groups/derbyscoreboard/posts/4927068467374886/))
This is intended behavior when the lineup tracking functionality is used in order to avoid wrong game data when SBO and LT try to start a box trip at about the same time, which is quite annoying to fix. There is an option in the settings to disable LT functionality which will return the buttons.
#### Missing Jam list ([Post by Ioan Wigmore on April 1](https://www.facebook.com/groups/derbyscoreboard/posts/4929261383822261/))
I managed to reproduce this by using "Start New Game" when the prior game had 0 jams. Reloading the screen consistently fixed the problem for me.
Since 5.0.0 forces a reload whenever a new game is started, the problem should not occur there.
#### Keybindings not disabled on other tabs ([Post by Carina Gerry on March 16](https://www.facebook.com/groups/derbyscoreboard/posts/4886098571471876/))
This bug is fixed in 5.0.0. (It was present since 4.1.1 - I'm surprised it wasn't noticed earlier.) | main | announce and forward feedback on some issues reported on facebook crg has been this should be announced in the facebook group also when preparing the release i went through the public facebook group to see if any feedback on the beta was posted there and found a couple of posts where i think i can help the ops since i can t post there myself it d be nice if these answers could be forwarded to the respective posts color picker default color this is included in operator page not loading on chrome the files reported as not found have been removed going from x to since it works using a different ip address i m assuming there is a file from x stuck in the browser cache and this causes the issue if so clearing the browser cache should resolve it disappeared box buttons this is intended behavior when the lineup tracking functionality is used in order to avoid wrong game data when sbo and lt try to start a box trip at about the same time which is quite annoying to fix there is an option in the settings to disable lt functionality which will return the buttons missing jam list i managed to reproduce this by using start new game when the prior game had jams reloading the screen consistently fixed the problem for me since forces a reload whenever a new game is started the problem should not occur there keybindings not disabled on other tabs this bug is fixed in it was present since i m surprised it wasn t noticed earlier | 1 |
3,384 | 13,111,993,266 | IssuesEvent | 2020-08-05 00:46:15 | short-d/short | https://api.github.com/repos/short-d/short | closed | [Refactor] Use ptr.String to pass test case strings directly into test cases | maintainability | **What is frustrating you?**
Test case strings should be inserted directly into test cases if possible, but [due to a restriction in the Golang spec](https://golang.org/ref/spec#Address_operators), string literals cannot have their address taken directly.
**Your solution**
In #938, `ptr.String` method was introduced to facilitate getting address out of string literal in a clean way. Update all test cases that use string addresses to use this helper method instead.
| True | [Refactor] Use ptr.String to pass test case strings directly into test cases - **What is frustrating you?**
Test case strings should be inserted directly into test cases if possible, but [due to a restriction in the Golang spec](https://golang.org/ref/spec#Address_operators), string literals cannot have their address taken directly.
**Your solution**
In #938, `ptr.String` method was introduced to facilitate getting address out of string literal in a clean way. Update all test cases that use string addresses to use this helper method instead.
| main | use ptr string to pass test case strings directly into test cases what is frustrating you test case strings should be inserted directly into test cases if possible but string literals cannot have its address taken directly your solution in ptr string method was introduced to facilitate getting address out of string literal in a clean way update all test cases that use string addresses to use this helper method instead | 1 |
5,178 | 26,347,684,691 | IssuesEvent | 2023-01-11 00:12:29 | mozilla/foundation.mozilla.org | https://api.github.com/repos/mozilla/foundation.mozilla.org | closed | Update node container | engineering maintain unplanned | # Description
When running commands like `inv new-env` or `inv catch-up` the `package-lock.json` file keeps getting updated.
I believe this might have something to do with the node version that is used in the Dockerfile (which is 14.13.1).
I see errors like this:
```
npm WARN read-shrinkwrap This version of npm is compatible with lockfileVersion@1, but package-lock.json was generated for lockfileVersion@2. I'll try to do my best with it!
```
We should probably update the node version that is used in the container. We also need to make sure that `inv new-env` or `inv catch-up` can be run without changes to the `package-lock.json` file.
We probably want to make sure that the development node version and the live version also match. Production is currently on `19.2.0`. See also: https://devcenter.heroku.com/articles/nodejs-support
We can set this to something else too in the `package.json`.
Considering the support table, using version `18.x` seems reasonable, LTS supported until 2025.
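For instance (a sketch only; Heroku reads the `engines` field of `package.json` to pick the runtime), pinning the version could look like:

```json
{
  "engines": {
    "node": "18.x"
  }
}
```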
# Acceptance criteria
- [ ] As a developer on my local setup, when I run `inv new-env` or `inv catch-up`, the `package-lock.json` file is not updated but stays the same as before I ran either command.
- [ ] The development container uses the same `node` version as production.
| True | Update node container - # Description
When running commands like `inv new-env` or `inv catch-up` the `package-lock.json` file keeps getting updated.
I believe this might have something to do with the node version that is used in the Dockerfile (which is 14.13.1).
I see errors like this:
```
npm WARN read-shrinkwrap This version of npm is compatible with lockfileVersion@1, but package-lock.json was generated for lockfileVersion@2. I'll try to do my best with it!
```
We should probably update the node version that is used in the container. We also need to make sure that `inv new-env` or `inv catch-up` can be run without changes to the `package-lock.json` file.
We probably want to make sure that the development node version and the live version also match. Production is currently on `19.2.0`. See also: https://devcenter.heroku.com/articles/nodejs-support
We can set this to something else too in the `package.json`.
Considering the support table, using version `18.x` seems reasonable, LTS supported until 2025.
# Acceptance criteria
- [ ] As a developer on my local setup, when I run `inv new-env` or `inv catch-up`, the `package-lock.json` file is not updated but stays the same as before I ran either command.
- [ ] The development container uses the same `node` version as production.
| main | update node container description when running commands like inv new env or inv catch up the package lock json file keeps getting updated i believe this might have something to do with the node version that is used in the dockerfile which is i see errors like this npm warn read shrinkwrap this version of npm is compatible with lockfileversion but package lock json was generated for lockfileversion i ll try to do my best with it we should probably update the node version that is used in the container we also need to make sure that running inv new env or inv catch up can be run without changed to the package lock json file we probably want to make sure that the development node and the live version also match production is currently on see also we can set this to something else too in the package json considering the support table using version x seems reasonable lts supported until acceptance criteria as a developer on my local setup when i run inv new env or inv catchup the package lock json file is not updated but stays the same as before i ran either command the development container uses the node version as we do in production | 1 |
70,707 | 8,575,357,214 | IssuesEvent | 2018-11-12 17:02:52 | pandas-dev/pandas | https://api.github.com/repos/pandas-dev/pandas | closed | Should ExtensionArray.take accept scalar inputs? | API Design ExtensionArray | ndarray.take accepts scalars, and returns a scalar. We should probably make that part of the interface, or document that we don't support it.
```python
In [18]: np.array([1, 2]).take(0)
Out[18]: 1
```
Categorical currently returns an invalid categorical:
```python
In [19]: res = pd.Categorical([0, 1]).take(0)
In [20]: type(res)
Out[20]: pandas.core.arrays.categorical.Categorical
```
```pytb
In [21]: res
Out[21]: ---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
~/Envs/pandas-dev/lib/python3.6/site-packages/IPython/core/formatters.py in __call__(self, obj)
700 type_pprinters=self.type_printers,
701 deferred_pprinters=self.deferred_printers)
--> 702 printer.pretty(obj)
703 printer.flush()
704 return stream.getvalue()
~/Envs/pandas-dev/lib/python3.6/site-packages/IPython/lib/pretty.py in pretty(self, obj)
398 if cls is not object \
399 and callable(cls.__dict__.get('__repr__')):
--> 400 return _repr_pprint(obj, self, cycle)
401
402 return _default_pprint(obj, self, cycle)
~/Envs/pandas-dev/lib/python3.6/site-packages/IPython/lib/pretty.py in _repr_pprint(obj, p, cycle)
693 """A pprint that just redirects to the normal repr function."""
694 # Find newlines and replace them with p.break_()
--> 695 output = repr(obj)
696 for idx,output_line in enumerate(output.splitlines()):
697 if idx:
~/sandbox/pandas/pandas/core/base.py in __repr__(self)
80 Yields Bytestring in Py2, Unicode String in py3.
81 """
---> 82 return str(self)
83
84
~/sandbox/pandas/pandas/core/base.py in __str__(self)
59
60 if compat.PY3:
---> 61 return self.__unicode__()
62 return self.__bytes__()
63
~/sandbox/pandas/pandas/core/arrays/categorical.py in __unicode__(self)
1942 """ Unicode representation. """
1943 _maxlen = 10
-> 1944 if len(self._codes) > _maxlen:
1945 result = self._tidy_repr(_maxlen)
1946 elif len(self._codes) > 0:
TypeError: len() of unsized object
```
- `IntervalArray.take` fails on a scalar `take`
- SparseArray allows it.
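If scalar input were made part of the interface, the semantics could mirror `ndarray.take`: scalar index in, scalar out. A minimal sketch (illustrative only, not pandas code; `scalar_aware_take` is a made-up helper name):

```python
import numpy as np

def scalar_aware_take(values, indices):
    """Mimic ndarray.take semantics: a scalar index returns a scalar.

    Illustrative sketch only; not pandas' actual implementation.
    """
    indices = np.asarray(indices)
    result = np.asarray(values)[indices]
    if indices.ndim == 0:
        # A 0-d index selects a single element; unwrap it like ndarray.take
        return result.item()
    return result
```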
| 1.0 | Should ExtensionArray.take accept scalar inputs? - ndarray.take accepts scalars, and returns a scalar. We should probably make that part of the interface, or document that we don't support it.
```python
In [18]: np.array([1, 2]).take(0)
Out[18]: 1
```
Categorical currently returns an invalid categorical:
```python
In [19]: res = pd.Categorical([0, 1]).take(0)
In [20]: type(res)
Out[20]: pandas.core.arrays.categorical.Categorical
```
```pytb
In [21]: res
Out[21]: ---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
~/Envs/pandas-dev/lib/python3.6/site-packages/IPython/core/formatters.py in __call__(self, obj)
700 type_pprinters=self.type_printers,
701 deferred_pprinters=self.deferred_printers)
--> 702 printer.pretty(obj)
703 printer.flush()
704 return stream.getvalue()
~/Envs/pandas-dev/lib/python3.6/site-packages/IPython/lib/pretty.py in pretty(self, obj)
398 if cls is not object \
399 and callable(cls.__dict__.get('__repr__')):
--> 400 return _repr_pprint(obj, self, cycle)
401
402 return _default_pprint(obj, self, cycle)
~/Envs/pandas-dev/lib/python3.6/site-packages/IPython/lib/pretty.py in _repr_pprint(obj, p, cycle)
693 """A pprint that just redirects to the normal repr function."""
694 # Find newlines and replace them with p.break_()
--> 695 output = repr(obj)
696 for idx,output_line in enumerate(output.splitlines()):
697 if idx:
~/sandbox/pandas/pandas/core/base.py in __repr__(self)
80 Yields Bytestring in Py2, Unicode String in py3.
81 """
---> 82 return str(self)
83
84
~/sandbox/pandas/pandas/core/base.py in __str__(self)
59
60 if compat.PY3:
---> 61 return self.__unicode__()
62 return self.__bytes__()
63
~/sandbox/pandas/pandas/core/arrays/categorical.py in __unicode__(self)
1942 """ Unicode representation. """
1943 _maxlen = 10
-> 1944 if len(self._codes) > _maxlen:
1945 result = self._tidy_repr(_maxlen)
1946 elif len(self._codes) > 0:
TypeError: len() of unsized object
```
- `IntervalArray.take` fails on a scalar `take`
- SparseArray allows it.
| non_main | should extensionarray take accept scalar inputs ndarray take accepts scalars and returns a scalar we should probably make that part of the interface or document that we don t support it python in np array take out categorical currently returns an invalid categorical in res pd categorical take in type res out pandas core arrays categorical categorical pytb in res out typeerror traceback most recent call last envs pandas dev lib site packages ipython core formatters py in call self obj type pprinters self type printers deferred pprinters self deferred printers printer pretty obj printer flush return stream getvalue envs pandas dev lib site packages ipython lib pretty py in pretty self obj if cls is not object and callable cls dict get repr return repr pprint obj self cycle return default pprint obj self cycle envs pandas dev lib site packages ipython lib pretty py in repr pprint obj p cycle a pprint that just redirects to the normal repr function find newlines and replace them with p break output repr obj for idx output line in enumerate output splitlines if idx sandbox pandas pandas core base py in repr self yields bytestring in unicode string in return str self sandbox pandas pandas core base py in str self if compat return self unicode return self bytes sandbox pandas pandas core arrays categorical py in unicode self unicode representation maxlen if len self codes maxlen result self tidy repr maxlen elif len self codes typeerror len of unsized object intervalarray take fails on take sparsearray allows it | 0 |
73,960 | 15,293,783,908 | IssuesEvent | 2021-02-24 01:01:12 | joshnewton31080/IdentityServer4 | https://api.github.com/repos/joshnewton31080/IdentityServer4 | closed | CVE-2019-11358 (Medium) detected in jquery-3.3.1.min.js, jquery-3.3.1.js - autoclosed | security vulnerability | ## CVE-2019-11358 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jquery-3.3.1.min.js</b>, <b>jquery-3.3.1.js</b></p></summary>
<p>
<details><summary><b>jquery-3.3.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js</a></p>
<p>Path to vulnerable library: IdentityServer4/samples/Clients/src/MvcCode/wwwroot/lib/jquery/dist/jquery.min.js,IdentityServer4/samples/Quickstarts/5_EntityFramework/src/MvcClient/wwwroot/lib/jquery/dist/jquery.min.js,IdentityServer4/samples/Quickstarts/4_JavaScriptClient/src/MvcClient/wwwroot/lib/jquery/dist/jquery.min.js,IdentityServer4/samples/Quickstarts/6_AspNetIdentity/src/MvcClient/wwwroot/lib/jquery/dist/jquery.min.js,IdentityServer4/samples/Clients/src/MvcAutomaticTokenManagement/wwwroot/lib/jquery/dist/jquery.min.js,IdentityServer4/samples/Quickstarts/2_InteractiveAspNetCore/src/MvcClient/wwwroot/lib/jquery/dist/jquery.min.js,IdentityServer4/samples/Quickstarts/3_AspNetCoreAndApis/src/MvcClient/wwwroot/lib/jquery/dist/jquery.min.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-3.3.1.min.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-3.3.1.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.js</a></p>
<p>Path to vulnerable library: IdentityServer4/samples/Clients/src/MvcCode/wwwroot/lib/jquery/dist/jquery.js,IdentityServer4/samples/Quickstarts/5_EntityFramework/src/MvcClient/wwwroot/lib/jquery/dist/jquery.js,IdentityServer4/samples/Quickstarts/6_AspNetIdentity/src/MvcClient/wwwroot/lib/jquery/dist/jquery.js,IdentityServer4/samples/Quickstarts/4_JavaScriptClient/src/MvcClient/wwwroot/lib/jquery/dist/jquery.js,IdentityServer4/samples/Quickstarts/2_InteractiveAspNetCore/src/MvcClient/wwwroot/lib/jquery/dist/jquery.js,IdentityServer4/samples/Quickstarts/3_AspNetCoreAndApis/src/MvcClient/wwwroot/lib/jquery/dist/jquery.js,IdentityServer4/samples/Clients/src/MvcAutomaticTokenManagement/wwwroot/lib/jquery/dist/jquery.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-3.3.1.js** (Vulnerable Library)
</details>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jQuery before 3.4.0, as used in Drupal, Backdrop CMS, and other products, mishandles jQuery.extend(true, {}, ...) because of Object.prototype pollution. If an unsanitized source object contained an enumerable __proto__ property, it could extend the native Object.prototype.
<p>Publish Date: 2019-04-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-11358>CVE-2019-11358</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11358">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11358</a></p>
<p>Release Date: 2019-04-20</p>
<p>Fix Resolution: 3.4.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"JavaScript","packageName":"jquery","packageVersion":"3.3.1","packageFilePaths":[],"isTransitiveDependency":false,"dependencyTree":"jquery:3.3.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"3.4.0"},{"packageType":"JavaScript","packageName":"jquery","packageVersion":"3.3.1","packageFilePaths":[],"isTransitiveDependency":false,"dependencyTree":"jquery:3.3.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"3.4.0"}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2019-11358","vulnerabilityDetails":"jQuery before 3.4.0, as used in Drupal, Backdrop CMS, and other products, mishandles jQuery.extend(true, {}, ...) because of Object.prototype pollution. If an unsanitized source object contained an enumerable __proto__ property, it could extend the native Object.prototype.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-11358","cvss3Severity":"medium","cvss3Score":"6.1","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> --> | True | CVE-2019-11358 (Medium) detected in jquery-3.3.1.min.js, jquery-3.3.1.js - autoclosed - ## CVE-2019-11358 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jquery-3.3.1.min.js</b>, <b>jquery-3.3.1.js</b></p></summary>
<p>
<details><summary><b>jquery-3.3.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js</a></p>
<p>Path to vulnerable library: IdentityServer4/samples/Clients/src/MvcCode/wwwroot/lib/jquery/dist/jquery.min.js,IdentityServer4/samples/Quickstarts/5_EntityFramework/src/MvcClient/wwwroot/lib/jquery/dist/jquery.min.js,IdentityServer4/samples/Quickstarts/4_JavaScriptClient/src/MvcClient/wwwroot/lib/jquery/dist/jquery.min.js,IdentityServer4/samples/Quickstarts/6_AspNetIdentity/src/MvcClient/wwwroot/lib/jquery/dist/jquery.min.js,IdentityServer4/samples/Clients/src/MvcAutomaticTokenManagement/wwwroot/lib/jquery/dist/jquery.min.js,IdentityServer4/samples/Quickstarts/2_InteractiveAspNetCore/src/MvcClient/wwwroot/lib/jquery/dist/jquery.min.js,IdentityServer4/samples/Quickstarts/3_AspNetCoreAndApis/src/MvcClient/wwwroot/lib/jquery/dist/jquery.min.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-3.3.1.min.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-3.3.1.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.js</a></p>
<p>Path to vulnerable library: IdentityServer4/samples/Clients/src/MvcCode/wwwroot/lib/jquery/dist/jquery.js,IdentityServer4/samples/Quickstarts/5_EntityFramework/src/MvcClient/wwwroot/lib/jquery/dist/jquery.js,IdentityServer4/samples/Quickstarts/6_AspNetIdentity/src/MvcClient/wwwroot/lib/jquery/dist/jquery.js,IdentityServer4/samples/Quickstarts/4_JavaScriptClient/src/MvcClient/wwwroot/lib/jquery/dist/jquery.js,IdentityServer4/samples/Quickstarts/2_InteractiveAspNetCore/src/MvcClient/wwwroot/lib/jquery/dist/jquery.js,IdentityServer4/samples/Quickstarts/3_AspNetCoreAndApis/src/MvcClient/wwwroot/lib/jquery/dist/jquery.js,IdentityServer4/samples/Clients/src/MvcAutomaticTokenManagement/wwwroot/lib/jquery/dist/jquery.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-3.3.1.js** (Vulnerable Library)
</details>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jQuery before 3.4.0, as used in Drupal, Backdrop CMS, and other products, mishandles jQuery.extend(true, {}, ...) because of Object.prototype pollution. If an unsanitized source object contained an enumerable __proto__ property, it could extend the native Object.prototype.
<p>Publish Date: 2019-04-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-11358>CVE-2019-11358</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11358">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11358</a></p>
<p>Release Date: 2019-04-20</p>
<p>Fix Resolution: 3.4.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"JavaScript","packageName":"jquery","packageVersion":"3.3.1","packageFilePaths":[],"isTransitiveDependency":false,"dependencyTree":"jquery:3.3.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"3.4.0"},{"packageType":"JavaScript","packageName":"jquery","packageVersion":"3.3.1","packageFilePaths":[],"isTransitiveDependency":false,"dependencyTree":"jquery:3.3.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"3.4.0"}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2019-11358","vulnerabilityDetails":"jQuery before 3.4.0, as used in Drupal, Backdrop CMS, and other products, mishandles jQuery.extend(true, {}, ...) because of Object.prototype pollution. If an unsanitized source object contained an enumerable __proto__ property, it could extend the native Object.prototype.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-11358","cvss3Severity":"medium","cvss3Score":"6.1","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> --> | non_main | cve medium detected in jquery min js jquery js autoclosed cve medium severity vulnerability vulnerable libraries jquery min js jquery js jquery min js javascript library for dom operations library home page a href path to vulnerable library samples clients src mvccode wwwroot lib jquery dist jquery min js samples quickstarts entityframework src mvcclient wwwroot lib jquery dist jquery min js samples quickstarts javascriptclient src mvcclient wwwroot lib jquery dist jquery min js samples quickstarts aspnetidentity src mvcclient wwwroot lib jquery dist jquery min js samples clients src mvcautomatictokenmanagement wwwroot lib jquery dist jquery min js samples quickstarts interactiveaspnetcore src mvcclient wwwroot lib jquery dist jquery min js samples quickstarts 
aspnetcoreandapis src mvcclient wwwroot lib jquery dist jquery min js dependency hierarchy x jquery min js vulnerable library jquery js javascript library for dom operations library home page a href path to vulnerable library samples clients src mvccode wwwroot lib jquery dist jquery js samples quickstarts entityframework src mvcclient wwwroot lib jquery dist jquery js samples quickstarts aspnetidentity src mvcclient wwwroot lib jquery dist jquery js samples quickstarts javascriptclient src mvcclient wwwroot lib jquery dist jquery js samples quickstarts interactiveaspnetcore src mvcclient wwwroot lib jquery dist jquery js samples quickstarts aspnetcoreandapis src mvcclient wwwroot lib jquery dist jquery js samples clients src mvcautomatictokenmanagement wwwroot lib jquery dist jquery js dependency hierarchy x jquery js vulnerable library found in base branch main vulnerability details jquery before as used in drupal backdrop cms and other products mishandles jquery extend true because of object prototype pollution if an unsanitized source object contained an enumerable proto property it could extend the native object prototype publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree jquery isminimumfixversionavailable true minimumfixversion packagetype javascript packagename jquery packageversion packagefilepaths istransitivedependency false dependencytree jquery isminimumfixversionavailable true minimumfixversion basebranches vulnerabilityidentifier cve vulnerabilitydetails jquery before as used in 
drupal backdrop cms and other products mishandles jquery extend true because of object prototype pollution if an unsanitized source object contained an enumerable proto property it could extend the native object prototype vulnerabilityurl | 0 |
3,265 | 5,416,556,117 | IssuesEvent | 2017-03-02 00:56:51 | Microsoft/vscode-cpptools | https://api.github.com/repos/Microsoft/vscode-cpptools | closed | Switch Source Header is very slow | bug Language Service | I'm using vscode + cpptools with a very large codebase. When switching from .cpp to .h and back, vscode is very slow to respond the first time when toggling between files, especially when switching from .h back to .cpp. It takes seconds to complete, which makes it feel like the feature isn't working.
Can we have fast switching, or a more prominent visual indicator that signals that the process of switching is still "working on it"?
| 1.0 | Switch Source Header is very slow - I'm using vscode + cpptools with a very large codebase. When switching from .cpp to .h and back, vscode is very slow to respond the first time when toggling between files, especially when switching from .h back to .cpp. It takes seconds to complete, which makes it feel like the feature isn't working.
Can we have fast switching, or a more prominent visual indicator that signals that the process of switching is still "working on it"?
| non_main | switch source header is very slow i m using vscode cpptools with a very large codebase when switching from cpp to h and back vscode is very slow to respond the first time when toggling between files especially when switching from h back to cpp it takes seconds to complete which makes it feel like the feature isn t working can we have fast switching or a more prominent visual indicator that signals that the process of switching is still working on it | 0 |
14,785 | 9,412,341,947 | IssuesEvent | 2019-04-10 03:35:51 | alpersonalwebsite/challenges | https://api.github.com/repos/alpersonalwebsite/challenges | opened | WS-2019-0047 Medium Severity Vulnerability detected by WhiteSource | security vulnerability | ## WS-2019-0047 - Medium Severity Vulnerability
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tar-4.4.1.tgz</b></p></summary>
<p>tar for node</p>
<p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-4.4.1.tgz">https://registry.npmjs.org/tar/-/tar-4.4.1.tgz</a></p>
<p>
Dependency Hierarchy:
- react-scripts-2.1.8.tgz (Root Library)
- fsevents-1.2.4.tgz
- node-pre-gyp-0.10.0.tgz
- :x: **tar-4.4.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/alpersonalwebsite/challenges/commit/0695676bdba146e144cfe2fe27378e1be00d575c">0695676bdba146e144cfe2fe27378e1be00d575c</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Versions of node-tar prior to 4.4.2 are vulnerable to Arbitrary File Overwrite. Extracting tarballs containing a hardlink to a file that already exists in the system, and a file that matches the hardlink will overwrite the system's file with the contents of the extracted file.
<p>Publish Date: 2019-04-05
<p>URL: <a href=>WS-2019-0047</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/803">https://www.npmjs.com/advisories/803</a></p>
<p>Release Date: 2019-04-05</p>
<p>Fix Resolution: 4.4.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
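The class of bug can be illustrated with Python's `tarfile` module: before extracting, drop hardlink members whose target escapes the destination directory. This is a sketch of the kind of validation node-tar 4.4.2 added, not its actual code (`safe_members` is an illustrative name):

```python
import os
import tarfile

def safe_members(tar, dest):
    """Yield tar members, dropping hardlinks whose target escapes dest.

    Sketch of the kind of check node-tar 4.4.2 added; illustrative only.
    """
    root = os.path.realpath(dest)
    for member in tar.getmembers():
        if member.islnk():
            target = os.path.realpath(os.path.join(root, member.linkname))
            if not target.startswith(root + os.sep):
                continue  # hardlink points outside the extraction dir
        yield member
```

A caller would then extract with `tar.extractall(dest, members=safe_members(tar, dest))` instead of a bare `extractall`.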
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isOpenPROnNewVersion":false,"isPackageBased":true,"packages":[{"packageType":"javascript/Node.js","packageName":"tar","packageVersion":"4.4.1","isTransitiveDependency":true,"dependencyTree":"react-scripts:2.1.8;fsevents:1.2.4;node-pre-gyp:0.10.0;tar:4.4.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"4.4.2"}],"vulnerabilityIdentifier":"WS-2019-0047","vulnerabilityDetails":"Versions of node-tar prior to 4.4.2 are vulnerable to Arbitrary File Overwrite. Extracting tarballs containing a hardlink to a file that already exists in the system, and a file that matches the hardlink will overwrite the system\u0027s file with the contents of the extracted file.","cvss2Severity":"medium","cvss2Score":"5.0","extraData":{}}</REMEDIATE> --> | True | WS-2019-0047 Medium Severity Vulnerability detected by WhiteSource - ## WS-2019-0047 - Medium Severity Vulnerability
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tar-4.4.1.tgz</b></p></summary>
<p>tar for node</p>
<p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-4.4.1.tgz">https://registry.npmjs.org/tar/-/tar-4.4.1.tgz</a></p>
<p>
Dependency Hierarchy:
- react-scripts-2.1.8.tgz (Root Library)
- fsevents-1.2.4.tgz
- node-pre-gyp-0.10.0.tgz
- :x: **tar-4.4.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/alpersonalwebsite/challenges/commit/0695676bdba146e144cfe2fe27378e1be00d575c">0695676bdba146e144cfe2fe27378e1be00d575c</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Versions of node-tar prior to 4.4.2 are vulnerable to Arbitrary File Overwrite. Extracting tarballs containing a hardlink to a file that already exists in the system, and a file that matches the hardlink will overwrite the system's file with the contents of the extracted file.
<p>Publish Date: 2019-04-05
<p>URL: <a href=>WS-2019-0047</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/803">https://www.npmjs.com/advisories/803</a></p>
<p>Release Date: 2019-04-05</p>
<p>Fix Resolution: 4.4.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isOpenPROnNewVersion":false,"isPackageBased":true,"packages":[{"packageType":"javascript/Node.js","packageName":"tar","packageVersion":"4.4.1","isTransitiveDependency":true,"dependencyTree":"react-scripts:2.1.8;fsevents:1.2.4;node-pre-gyp:0.10.0;tar:4.4.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"4.4.2"}],"vulnerabilityIdentifier":"WS-2019-0047","vulnerabilityDetails":"Versions of node-tar prior to 4.4.2 are vulnerable to Arbitrary File Overwrite. Extracting tarballs containing a hardlink to a file that already exists in the system, and a file that matches the hardlink will overwrite the system\u0027s file with the contents of the extracted file.","cvss2Severity":"medium","cvss2Score":"5.0","extraData":{}}</REMEDIATE> --> | non_main | ws medium severity vulnerability detected by whitesource ws medium severity vulnerability vulnerable library tar tgz tar for node library home page a href dependency hierarchy react scripts tgz root library fsevents tgz node pre gyp tgz x tar tgz vulnerable library found in head commit a href vulnerability details versions of node tar prior to are vulnerable to arbitrary file overwrite extracting tarballs containing a hardlink to a file that already exists in the system and a file that matches the hardlink will overwrite the system s file with the contents of the extracted file publish date url ws cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource isopenpronvulnerability true isopenpronnewversion false ispackagebased true packages vulnerabilityidentifier ws vulnerabilitydetails versions of node tar prior to are vulnerable to arbitrary file overwrite extracting tarballs containing a hardlink to a file that already exists in the system and a file that matches the hardlink will overwrite the system file with the contents of the extracted file 
medium extradata | 0 |
1,615 | 6,572,638,112 | IssuesEvent | 2017-09-11 03:58:30 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | npm always installing even if package is installed already | affects_2.1 bug_report waiting_on_maintainer | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
npm
##### ANSIBLE VERSION
```
ansible 2.1.2.0
```
##### CONFIGURATION
```
[defaults]
host_key_checking = False
```
##### OS / ENVIRONMENT
N/A
##### SUMMARY
When using the npm module to install a package globally, it doesn't check if the module is installed before installing, making it always "change" the module.
##### STEPS TO REPRODUCE
Run the following task multiple times:
```
- name: update npm
  npm:
    name: npm
    global: yes
    state: present
    version: '3.10.8'
    production: yes
```
##### EXPECTED RESULTS
First run: changed
Second run: ok
##### ACTUAL RESULTS
First run: changed
Second run: changed
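Until the module performs this check itself, a workaround is to do the comparison by hand: query the globally installed version and only install when it differs from the requested one. A sketch of that logic with the lookup stubbed out (in practice the value would come from something like `npm ls -g npm --depth=0`, whose exact output format varies by npm version):

```shell
wanted="3.10.8"
# Stubbed lookup; a real run might use something like:
#   current=$(npm ls -g npm --depth=0 | grep -o 'npm@[0-9.]*' | cut -d@ -f2)
current="3.10.8"
if [ "$current" = "$wanted" ]; then
  result="ok"        # second run: already at the requested version, no change
else
  result="changed"   # first run: install and report a change
fi
echo "$result"
```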
| True | npm always installing even if package is installed already - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
npm
##### ANSIBLE VERSION
```
ansible 2.1.2.0
```
##### CONFIGURATION
```
[defaults]
host_key_checking = False
```
##### OS / ENVIRONMENT
N/A
##### SUMMARY
When using the npm module to install a package globally, it doesn't check if the module is installed before installing, making it always "change" the module.
##### STEPS TO REPRODUCE
Run the following task multiple times:
```
- name: update npm
  npm:
    name: npm
    global: yes
    state: present
    version: '3.10.8'
    production: yes
```
##### EXPECTED RESULTS
First run: changed
Second run: ok
##### ACTUAL RESULTS
First run: changed
Second run: changed
| main | npm always installing even if package is installed already issue type bug report component name npm ansible version ansible configuration host key checking false os environment n a summary when using the npm module to install a package globally it doesn t check if the module is installed before installing making it always change the module steps to reproduce run the follwing task multiple timesl name update npm npm name npm global yes state present version production yes expected results first run changed second run ok actual results first run changed second run changed | 1 |
967 | 4,707,894,660 | IssuesEvent | 2016-10-13 21:31:10 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | VMware Template playbook options | affects_2.2 bug_report cloud P2 vmware waiting_on_maintainer | ##### ISSUE TYPE
Bug Report
##### COMPONENT NAME
vsphere_guest
##### ANSIBLE VERSION
N/A
##### SUMMARY
Hi
I am setting up a new VMware virtual machine from a template using an Ansible playbook.
I want to be able to change the datastore and the network (vm_disk and vm_nic) of the VM during the setup of the VM. But when I add this information into the playbook (see below), nothing happens.
The new VM is created successfully and Ansible returns success, but the datastore and network have not been adjusted to what I requested in the playbook. They have remained the same as in the template.
Am I doing something incorrect in the playbook? Or is this not possible with Ansible?
Playbook (the vm_disk and vm_nic sections below are the ones not being applied)
```
---
- hosts: 127.0.0.1
  connection: local
  user: root
  sudo: false
  gather_facts: false
  serial: 1
  vars:
    vcenter_hostname: server.local
    esxhost: 172.25.10.10
    nic_type: e1000e
    network: Web Servers
    network_type: standard
    vmcluster: UK-CLUSTER
    username: admin
    password: password
    folder: Utilities
    notes: Created by Ansible
  tasks:
    - name: Create VM from template
      vsphere_guest:
        vcenter_hostname: "{{ vcenter_hostname }}"
        username: "{{ username }}"
        password: "{{ password }}"
        guest: "{{ name }}"
        vm_extra_config:
          notes: "{{ notes }}"
          folder: "{{ folder }}"
        from_template: yes
        template_src: "{{ vmtemplate }}"
        cluster: "{{ vmcluster }}"
        vm_disk:
          disk1:
            type: "{{ disktype }}"
            datastore: "{{ datastore }}"
        vm_nic:
          nic1:
            type: "{{ nic_type }}"
            network: "{{ network }}"
            network_type: "{{ network_type }}"
        resource_pool: "/Resources"
        esxi:
          datacenter: UK
          hostname: "{{ esxhost }}"
```
If I look at the example on the Ansible website, it doesn't look like this option is available unless you set up a VM from an ISO file. (http://docs.ansible.com/ansible/vsphere_guest_module.html)
I want to have the same functionality if I use a template.
Cheers
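One way to catch a silently ignored setting like this is to verify the deployed VM after the play runs. A sketch of that comparison with the values stubbed out (in practice the actual datastore could be read from vCenter, for example with the `govc` CLI if it is available; the datastore names here are purely illustrative):

```shell
expected_datastore="datastore2"   # what the playbook's vm_disk requested
actual_datastore="datastore1"     # what the cloned VM actually landed on (stubbed)
if [ "$actual_datastore" = "$expected_datastore" ]; then
  verdict="applied"
else
  verdict="ignored"   # matches the behaviour described in this report
fi
echo "vm_disk datastore setting: $verdict"
```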
| True | VMware Template playbook options - ##### ISSUE TYPE
Bug Report
##### COMPONENT NAME
vsphere_guest
##### ANSIBLE VERSION
N/A
##### SUMMARY
Hi
I am setting up a new VMware virtual machine from a template using an Ansible playbook.
I want to be able to change the datastore and the network (vm_disk and vm_nic) of the VM during the setup of the VM. But when I add this information into the playbook (see below), nothing happens.
The new VM is created successfully and Ansible returns success, but the datastore and network have not been adjusted to what I requested in the playbook. They have remained the same as in the template.
Am I doing something incorrect in the playbook? Or is this not possible with Ansible?
Playbook (the vm_disk and vm_nic sections below are the ones not being applied)
```
---
- hosts: 127.0.0.1
  connection: local
  user: root
  sudo: false
  gather_facts: false
  serial: 1
  vars:
    vcenter_hostname: server.local
    esxhost: 172.25.10.10
    nic_type: e1000e
    network: Web Servers
    network_type: standard
    vmcluster: UK-CLUSTER
    username: admin
    password: password
    folder: Utilities
    notes: Created by Ansible
  tasks:
    - name: Create VM from template
      vsphere_guest:
        vcenter_hostname: "{{ vcenter_hostname }}"
        username: "{{ username }}"
        password: "{{ password }}"
        guest: "{{ name }}"
        vm_extra_config:
          notes: "{{ notes }}"
          folder: "{{ folder }}"
        from_template: yes
        template_src: "{{ vmtemplate }}"
        cluster: "{{ vmcluster }}"
        vm_disk:
          disk1:
            type: "{{ disktype }}"
            datastore: "{{ datastore }}"
        vm_nic:
          nic1:
            type: "{{ nic_type }}"
            network: "{{ network }}"
            network_type: "{{ network_type }}"
        resource_pool: "/Resources"
        esxi:
          datacenter: UK
          hostname: "{{ esxhost }}"
```
If I look at the example on the Ansible website, it doesn't look like this option is available unless you set up a VM from an ISO file. (http://docs.ansible.com/ansible/vsphere_guest_module.html)
I want to have the same functionality if I use a template.
Cheers
| main | vmware template playbook options issue type bug report component name vsphere guest ansible version n a summary hi i am setting up a new vmware virtual machine from a template using a ansible playbook i want to be able to change the datastore and the network vm disk and vm nic of the vm during the setup of the vm but when i add this information into the playbook see below nothing happens the new vm is created successfully and ansible returns a success but the datastore and network have not been adjust to what i requested in the playbook they have remained the same as what the template image is am i doing something incorrect in the playbook or is this not possible with ansible playbook highlighted in bold is what is not being adjusted hosts connection local user root sudo false gather facts false serial vars vcenter hostname server local esxhost nic type network web servers network type standard vmcluster uk cluster username admin password password folder utilities notes created by ansible tasks name create vm from template vsphere guest vcenter hostname vcenter hostname username username password password guest name vm extra config notes notes folder folder from template yes template src vmtemplate cluster vmcluster vm disk type disktype datastore datastore vm nic type nic type network network network type network type resource pool resources esxi datacenter uk hostname esxhost if i look at the example on the ansible website it doesn t look like it gives the option to allow this unless you setup a vm from an iso file i want to have the same functionality if i use a template cheers | 1 |
5,075 | 25,967,942,870 | IssuesEvent | 2022-12-19 08:51:13 | centerofci/mathesar | https://api.github.com/repos/centerofci/mathesar | closed | Improve behavior of breadcrumb selector when no results are present | type: enhancement work: frontend status: ready restricted: maintainers | - It should display a "No results" message or something.
| True | Improve behavior of breadcrumb selector when no results are present - - It should display a "No results" message or something.
| main | improve behavior of breadcrumb selector when no results are present it should display a no results message or something | 1 |
5,555 | 27,804,298,710 | IssuesEvent | 2023-03-17 18:22:19 | MozillaFoundation/foundation.mozilla.org | https://api.github.com/repos/MozillaFoundation/foundation.mozilla.org | opened | Remove npm package "event-stream" | engineering maintain | Related to dependabot PR https://github.com/MozillaFoundation/foundation.mozilla.org/pull/10215
Remove npm package "event-stream" because it's not in use in this repo anymore.
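Before deleting the dependency, it is worth confirming nothing still imports it. A hedged sketch of the removal flow (the `source/` path is an assumption about this repo's layout; the `npm` commands themselves are standard CLI usage):

```shell
pkg="event-stream"
# 1) Check for remaining imports from the repo root, e.g.:
#      grep -rn "event-stream" source/ package.json
# 2) Remove the package and update the lockfile:
cmd="npm uninstall $pkg"
# 3) Afterwards, 'npm ls event-stream' should report the package as absent.
echo "$cmd"
```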
| True | Remove npm package "event-stream" - Related to dependabot PR https://github.com/MozillaFoundation/foundation.mozilla.org/pull/10215
Remove npm package "event-stream" because it's not in use in this repo anymore.
| main | remove npm package event stream related to dependabot pr remove npm package event stream because it s not in use in this repo anymore | 1 |