Unnamed: 0 int64 1 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 7 112 | repo_url stringlengths 36 141 | action stringclasses 3 values | title stringlengths 3 438 | labels stringlengths 4 308 | body stringlengths 7 254k | index stringclasses 7 values | text_combine stringlengths 96 254k | label stringclasses 2 values | text stringlengths 96 246k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
375,207 | 26,152,467,435 | IssuesEvent | 2022-12-30 15:51:38 | redpanda-data/documentation | https://api.github.com/repos/redpanda-data/documentation | opened | Separate tiered storage commands | documentation | ### Describe the Issue
<!--
What problem does this issue solve for customers?
Who is the audience for this update? For example, Infosec admins, cluster admins, or developers.
Do you know the context in which users will likely search for this content? If so, share it.
If this is a new feature, add a label for the version in which the update is expected.
-->
Due to the current behaviour outlined in https://github.com/redpanda-data/redpanda/issues/4499 , these commands need to be issued separately, rather than in a one-liner.
### Updates to existing documentation
<!--
Provide the URL of the page(s) to which the updates apply.
Which topic(s) should be updated?
What is the requested fix? Describe what is wrong in the existing doc and include screenshots if possible. Then provide the correct information.
Is this request to document an existing Redpanda feature that is not currently documented?
-->
https://docs.redpanda.com/docs/platform/data-management/tiered-storage/#enable-tiered-storage-for-a-topic
Instead of specifying:
```
rpk topic alter-config <topic_name> --set redpanda.remote.read=true --set redpanda.remote.write=true
```
It should currently be two commands of:
```
rpk topic alter-config <topic_name> --set redpanda.remote.read=true
```
and
```
rpk topic alter-config <topic_name> --set redpanda.remote.write=true
```
### Additional notes
When the linked core issue is resolved, it would be neater to revert this change; however, having two commands in that case would still be valid, so it's not vital to revert. | 1.0 | Separate tiered storage commands - ### Describe the Issue
<!--
What problem does this issue solve for customers?
Who is the audience for this update? For example, Infosec admins, cluster admins, or developers.
Do you know the context in which users will likely search for this content? If so, share it.
If this is a new feature, add a label for the version in which the update is expected.
-->
Due to the current behaviour outlined in https://github.com/redpanda-data/redpanda/issues/4499 , these commands need to be issued separately, rather than in a one-liner.
### Updates to existing documentation
<!--
Provide the URL of the page(s) to which the updates apply.
Which topic(s) should be updated?
What is the requested fix? Describe what is wrong in the existing doc and include screenshots if possible. Then provide the correct information.
Is this request to document an existing Redpanda feature that is not currently documented?
-->
https://docs.redpanda.com/docs/platform/data-management/tiered-storage/#enable-tiered-storage-for-a-topic
Instead of specifying:
```
rpk topic alter-config <topic_name> --set redpanda.remote.read=true --set redpanda.remote.write=true
```
It should currently be two commands of:
```
rpk topic alter-config <topic_name> --set redpanda.remote.read=true
```
and
```
rpk topic alter-config <topic_name> --set redpanda.remote.write=true
```
### Additional notes
When the linked core issue is resolved, it would be neater to revert this change, however having 2 commands in that case would still be valid, so it's not vital to revert. | non_main | separate tiered storage commands describe the issue what problem does this issue solve for customers who is the audience for this update for example infosec admins cluster admins or developers do you know the context in which users will likely search for this content if so share it if this is a new feature add a label for the version in which the update is expected due to the current behaviour outlined in these commands need to be issued separately rather than in a one liner updates to existing documentation provide the url of the page s to which the updates apply which topic s should be updated what is the requested fix describe what is wrong in the existing doc and include screenshots if possible then provide the correct information is this request to document an existing redpanda feature that is not currently documented instead of specifying rpk topic alter config set redpanda remote read true set redpanda remote write true it should currently be two commands of rpk topic alter config set redpanda remote read true and rpk topic alter config set redpanda remote write true additional notes when the linked core issue is resolved it would be neater to revert this change however having commands in that case would still be valid so it s not vital to revert | 0 |
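For docs tooling or automation that already stores the combined one-liner, the split requested above can be done mechanically. The sketch below is purely illustrative; `split_alter_config` is a hypothetical helper, not part of rpk or the documentation build:

```python
def split_alter_config(command):
    """Split a combined `rpk topic alter-config` call carrying several
    --set flags into one command per flag (hypothetical helper)."""
    tokens = command.split()
    # Everything before the first --set is the shared command prefix.
    first_set = tokens.index("--set")
    prefix, rest = tokens[:first_set], tokens[first_set:]
    # rest alternates: --set, key=value, --set, key=value, ...
    commands = []
    for i in range(0, len(rest), 2):
        commands.append(" ".join(prefix + [rest[i], rest[i + 1]]))
    return commands

combined = ("rpk topic alter-config my_topic "
            "--set redpanda.remote.read=true "
            "--set redpanda.remote.write=true")
for cmd in split_alter_config(combined):
    print(cmd)
```

Each printed line is one of the separate commands the issue asks the docs to show.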
1,719 | 6,574,483,553 | IssuesEvent | 2017-09-11 13:03:37 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | docker_service: api_version related problems | affects_2.1 bug_report cloud docker waiting_on_maintainer | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
* docker_service
##### ANSIBLE VERSION
```
ansible 2.1.2.0
config file = /usr/local/etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
ansible 2.2.0.0
config file = /usr/local/etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
ansible.cfg:
```ini
[defaults]
inventory = inventory.ini
retry_files_enabled = False
```
##### OS / ENVIRONMENT
ansible target:
* Ubuntu Trusty
* docker-compose==1.7.0 and 1.9.0
##### SUMMARY
I'm unable to use the `docker_service` module because the `docker-compose` client version is incompatible with the server.
Setting the `api_version` either in the task or in the environment to "auto" (or to the server version) does not help; maybe related to #5295.
Ubuntu Trusty packages a Docker with API version 1.18, which translates to a `docker-compose` version 1.3.3 (the API version changed just after 1.4.0rc3, for release 1.4.0), incompatible with the Ansible module, which requires a package version ≥ 1.7.
##### STEPS TO REPRODUCE
Sample playbook:
```yaml
---
-
hosts:
- all
tasks:
- name: 'docker compose'
environment:
DOCKER_API_VERSION: '1.18'
docker_service:
# there is a docker-compose.yml inside,
# not included in the example because it fails before...
project_src: '/srv/dockers/traefik'
api_version: '1.18'
pull: yes
...
```
##### EXPECTED RESULTS
Working role :^)
##### ACTUAL RESULTS
```
TASK [docker compose] **********************************************************
fatal: [docker02]: FAILED! => {"changed": false, "failed": true, "module_stderr": "Shared connection to docker02.prd.iaas-manager.m0.p.fti.net closed.\r\n", "module_stdout": "Traceback (most recent call last):\r\n File \"/tmp/ansible_Ow_f99/ansible_module_docker_service.py\", line 929, in <module>\r\n main()\r\n File \"/tmp/ansible_Ow_f99/ansible_module_docker_service.py\", line 924, in main\r\n result = ContainerManager(client).exec_module()\r\n File \"/tmp/ansible_Ow_f99/ansible_module_docker_service.py\", line 575, in exec_module\r\n result = self.cmd_up()\r\n File \"/tmp/ansible_Ow_f99/ansible_module_docker_service.py\", line 627, in cmd_up\r\n result.update(self.cmd_pull())\r\n File \"/tmp/ansible_Ow_f99/ansible_module_docker_service.py\", line 739, in cmd_pull\r\n image = service.image()\r\n File \"/usr/local/lib/python2.7/dist-packages/compose/service.py\", line 307, in image\r\n return self.client.inspect_image(self.image_name)\r\n File \"/usr/local/lib/python2.7/dist-packages/docker/utils/decorators.py\", line 21, in wrapped\r\n return f(self, resource_id, *args, **kwargs)\r\n File \"/usr/local/lib/python2.7/dist-packages/docker/api/image.py\", line 136, in inspect_image\r\n self._get(self._url(\"/images/{0}/json\", image)), True\r\n File \"/usr/local/lib/python2.7/dist-packages/docker/client.py\", line 178, in _result\r\n self._raise_for_status(response)\r\n File \"/usr/local/lib/python2.7/dist-packages/docker/client.py\", line 173, in _raise_for_status\r\n raise errors.NotFound(e, response, explanation=explanation)\r\ndocker.errors.NotFound: 404 Client Error: Not Found (\"client and server don't have same version (client : 1.22, server: 1.18)\")\r\n", "msg": "MODULE FAILURE"}
msg: MODULE FAILURE
```
As you can see, the error message says it tried with `(client : 1.22, server: 1.18)` and totally ignored the parameter. | True | docker_service: api_version related problems - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
* docker_service
##### ANSIBLE VERSION
```
ansible 2.1.2.0
config file = /usr/local/etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
ansible 2.2.0.0
config file = /usr/local/etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
ansible.cfg:
```ini
[defaults]
inventory = inventory.ini
retry_files_enabled = False
```
##### OS / ENVIRONMENT
ansible target:
* Ubuntu Trusty
* docker-compose==1.7.0 and 1.9.0
##### SUMMARY
I'm unable to use the `docker_service` module because the `docker-compose` client version is incompatible with the server.
Setting the `api_version` either in the task or in the environment to "auto" (or to the server version) does not help; maybe related to #5295.
Ubuntu Trusty packages a Docker with API version 1.18, which translates to a `docker-compose` version 1.3.3 (the API version changed just after 1.4.0rc3, for release 1.4.0), incompatible with the Ansible module, which requires a package version ≥ 1.7.
##### STEPS TO REPRODUCE
Sample playbook:
```yaml
---
-
hosts:
- all
tasks:
- name: 'docker compose'
environment:
DOCKER_API_VERSION: '1.18'
docker_service:
# there is a docker-compose.yml inside,
# not included in the example because it fails before...
project_src: '/srv/dockers/traefik'
api_version: '1.18'
pull: yes
...
```
##### EXPECTED RESULTS
Working role :^)
##### ACTUAL RESULTS
```
TASK [docker compose] **********************************************************
fatal: [docker02]: FAILED! => {"changed": false, "failed": true, "module_stderr": "Shared connection to docker02.prd.iaas-manager.m0.p.fti.net closed.\r\n", "module_stdout": "Traceback (most recent call last):\r\n File \"/tmp/ansible_Ow_f99/ansible_module_docker_service.py\", line 929, in <module>\r\n main()\r\n File \"/tmp/ansible_Ow_f99/ansible_module_docker_service.py\", line 924, in main\r\n result = ContainerManager(client).exec_module()\r\n File \"/tmp/ansible_Ow_f99/ansible_module_docker_service.py\", line 575, in exec_module\r\n result = self.cmd_up()\r\n File \"/tmp/ansible_Ow_f99/ansible_module_docker_service.py\", line 627, in cmd_up\r\n result.update(self.cmd_pull())\r\n File \"/tmp/ansible_Ow_f99/ansible_module_docker_service.py\", line 739, in cmd_pull\r\n image = service.image()\r\n File \"/usr/local/lib/python2.7/dist-packages/compose/service.py\", line 307, in image\r\n return self.client.inspect_image(self.image_name)\r\n File \"/usr/local/lib/python2.7/dist-packages/docker/utils/decorators.py\", line 21, in wrapped\r\n return f(self, resource_id, *args, **kwargs)\r\n File \"/usr/local/lib/python2.7/dist-packages/docker/api/image.py\", line 136, in inspect_image\r\n self._get(self._url(\"/images/{0}/json\", image)), True\r\n File \"/usr/local/lib/python2.7/dist-packages/docker/client.py\", line 178, in _result\r\n self._raise_for_status(response)\r\n File \"/usr/local/lib/python2.7/dist-packages/docker/client.py\", line 173, in _raise_for_status\r\n raise errors.NotFound(e, response, explanation=explanation)\r\ndocker.errors.NotFound: 404 Client Error: Not Found (\"client and server don't have same version (client : 1.22, server: 1.18)\")\r\n", "msg": "MODULE FAILURE"}
msg: MODULE FAILURE
```
As you can see, the error message says it tried with `(client : 1.22, server: 1.18)` and totally ignored the parameter. | main | docker service api version related problems issue type bug report component name docker service ansible version ansible config file usr local etc ansible ansible cfg configured module search path default w o overrides ansible config file usr local etc ansible ansible cfg configured module search path default w o overrides configuration ansible cfg ini inventory inventory ini retry files enabled false os environment ansible target ubuntu trusty docker compose and summary i m unable to use the docker service module because the docker compose client version is incompatible with the server setting the api version either in the task or in the environment to auto or to the server version does not help maybe related to ubuntu trusty package a docker with api version which translate to a docker compose version api version changed just after for release incompatible with the ansible module requiring a package version ≥ steps to reproduce sample playbook yaml hosts all tasks name docker compose environment docker api version docker service there is a docker compose yml inside not included in the example because it fail before project src srv dockers traefik api version pull yes expected results working role actual results task fatal failed changed false failed true module stderr shared connection to prd iaas manager p fti net closed r n module stdout traceback most recent call last r n file tmp ansible ow ansible module docker service py line in r n main r n file tmp ansible ow ansible module docker service py line in main r n result containermanager client exec module r n file tmp ansible ow ansible module docker service py line in exec module r n result self cmd up r n file tmp ansible ow ansible module docker service py line in cmd up r n result update self cmd pull r n file tmp ansible ow ansible module docker service py line in cmd pull r n 
image service image r n file usr local lib dist packages compose service py line in image r n return self client inspect image self image name r n file usr local lib dist packages docker utils decorators py line in wrapped r n return f self resource id args kwargs r n file usr local lib dist packages docker api image py line in inspect image r n self get self url images json image true r n file usr local lib dist packages docker client py line in result r n self raise for status response r n file usr local lib dist packages docker client py line in raise for status r n raise errors notfound e response explanation explanation r ndocker errors notfound client error not found client and server don t have same version client server r n msg module failure msg module failure as you can see the error message says it tried with client server and totally ignored the parameter | 1 |
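The traceback above is a version-negotiation gap: the client pinned API 1.22 while the daemon speaks 1.18, and the module's `api_version` parameter never reached the compose client. What `api_version: auto` is expected to do can be sketched as picking the highest mutually supported version, i.e. the lower of the two advertised ones. This is an illustration only, not docker-py's actual implementation:

```python
def negotiate_api_version(client_version, server_version):
    """Pick the highest API version both sides support -- the lower of
    the two advertised versions (hypothetical sketch of what
    api_version=auto should settle on)."""
    def key(v):
        # Compare "1.22" vs "1.18" numerically, not lexically.
        return tuple(int(part) for part in v.split("."))
    return min(client_version, server_version, key=key)

# The error above shows client 1.22 talking to server 1.18;
# negotiation should have settled on 1.18.
print(negotiate_api_version("1.22", "1.18"))  # -> 1.18
```

Note the numeric comparison: a plain string comparison would wrongly rank "1.9" above "1.18".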
4,591 | 23,829,011,154 | IssuesEvent | 2022-09-05 17:52:11 | centerofci/mathesar | https://api.github.com/repos/centerofci/mathesar | closed | Improve behavior for applying filters | type: enhancement work: frontend status: draft restricted: maintainers | ## Current Behavior
* When a number of filters are added, if any of them is invalid, we do not apply any further changes until the state becomes valid again.
* We have 3 valid filters applied -> filtered state (1)
* We add one invalid/empty fourth filter.
* This is not applied. We show an error as expected.
* The state (1) is maintained.
* We add one more valid fifth filter.
* This is not applied. The state (1) is maintained.
* Close and open the filter dropdown
* Filters 4 and 5 are removed, even though 5 is valid.
## Expected Behavior
* Filter 5 should be applied. | True | Improve behavior for applying filters - ## Current Behavior
* When a number of filters are added, if any of them is invalid, we do not apply any further changes until the state becomes valid again.
* We have 3 valid filters applied -> filtered state (1)
* We add one invalid/empty fourth filter.
* This is not applied. We show an error as expected.
* The state (1) is maintained.
* We add one more valid fifth filter.
* This is not applied. The state (1) is maintained.
* Close and open the filter dropdown
* Filters 4 and 5 are removed, even though 5 is valid.
## Expected Behavior
* Filter 5 should be applied. | main | improve behavior for applying filters current behavior when a number of filters are added if any of them is invalid we do not apply any further changes until the state becomes valid again we have valid filters applied filtered state we add one invalid empty fourth filter this is not applied we show an error as expected the state is maintained we add one more valid fifth filter this is not applied the state is maintained close and open the filter dropdown filters and are removed even though is valid expected behavior filter should be applied | 1 |
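The expected behaviour described above — every individually valid filter is applied, and an invalid one is skipped rather than blocking filters added after it — can be sketched as a simple validity partition. The field names here are hypothetical, not Mathesar's actual filter schema:

```python
def effective_filters(filters):
    """Return the filters that should actually be applied: each filter
    is judged on its own validity, so one invalid entry does not block
    later valid ones (hypothetical sketch of the expected behaviour)."""
    return [f for f in filters if f.get("column") and f.get("condition")]

filters = [
    {"column": "name", "condition": "contains", "value": "a"},  # 1: valid
    {"column": "age", "condition": "gt", "value": 3},           # 2: valid
    {"column": "id", "condition": "eq", "value": 7},            # 3: valid
    {"column": None, "condition": None, "value": None},         # 4: invalid
    {"column": "city", "condition": "eq", "value": "x"},        # 5: valid
]
print(len(effective_filters(filters)))  # -> 4
```

With the reported behaviour, only the first three would apply; the expected behaviour applies filters 1, 2, 3, and 5.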
3,794 | 16,218,278,644 | IssuesEvent | 2021-05-06 00:03:09 | Homebrew/homebrew-cask | https://api.github.com/repos/Homebrew/homebrew-cask | closed | Reconsidering the install path of `binary` | awaiting maintainer feedback discussion stale | On Apple Silicon, Homebrew defaults its installation to `/opt/homebrew/bin`. However, `binary` in casks still installs to `/usr/local/bin` (and silently fails if it cannot do it).
We should consider using `/opt/homebrew/bin` for `binary` on Apple Silicon.
Ping @Homebrew/cask. | True | Reconsidering the install path of `binary` - On Apple Silicon, Homebrew defaults its installation to `/opt/homebrew/bin`. However, `binary` in casks still installs to `/usr/local/bin` (and silently fails if it cannot do it).
We should consider using `/opt/homebrew/bin` for `binary` on Apple Silicon.
Ping @Homebrew/cask. | main | reconsidering the install path of binary on apple silicon homebrew defaults its installation to opt homebrew bin however binary in casks still installs to usr local bin and silently fails if it cannot do it we should consider using opt homebrew bin for binary on apple silicon ping homebrew cask | 1 |
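The proposal amounts to making the `binary` target directory follow the active Homebrew prefix, which differs by CPU architecture. A minimal sketch of that selection logic (hypothetical; the actual cask code would consult `HOMEBREW_PREFIX` rather than hard-code paths):

```python
def default_binary_dir(machine):
    """Pick the Homebrew bin directory for `binary` artifacts based on
    architecture: Apple Silicon Homebrew lives under /opt/homebrew,
    Intel under /usr/local (hypothetical sketch)."""
    return "/opt/homebrew/bin" if machine == "arm64" else "/usr/local/bin"

print(default_binary_dir("arm64"))   # -> /opt/homebrew/bin
print(default_binary_dir("x86_64"))  # -> /usr/local/bin
```

In practice one would pass `platform.machine()` (or the brew-reported prefix) rather than a literal string.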
767,966 | 26,948,712,027 | IssuesEvent | 2023-02-08 10:04:55 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | Bluetooth: Host: Bonding information distribution in the non-bondable mode | bug priority: low area: Bluetooth area: Bluetooth Host | **Describe the bug**
The Zephyr Bluetooth Host (SMP module) seems to violate the Bluetooth specification in the following pairing corner case. With the Bonding Flag (BF) cleared in the Pairing Response, the stack should also set the key distribution flags to zero for the key distribution initiator and the responder. In this case, the specification requires no bonding information distribution (IRK, Identity Address, LTK, etc.).
This requirement is based on the 9.4.2 section, Part C, Vol 3 of the Bluetooth Core specification v5.3:
>
> 9.4.2 Non-bondable mode
>
> 9.4.2.1 Description
> A device in the non-bondable mode does not allow a bond to be created with a peer device.
>
> 9.4.2.2 Conditions
> If a device does not support pairing as defined in the Security Manager section then it is considered to be in non-bondable mode. **If Security Manager pairing is supported, the Host shall set the Bonding_Flags to ‘No Bonding’ as defined in [Vol 3] Part H, Section 3.5.1 and bonding information shall not be exchanged or stored.**
**To Reproduce**
Set the non-bondable mode for the Bluetooth Peripheral device using the following configuration ```CONFIG_BT_BONDABLE=n``` and pair with the Central device (e.g. Android device) that requests bonding in the Pairing Request. Observe in the sniffer trace or HCI logs that bonding information is exchanged between devices.
**Expected behavior**
Bonding information shall not be exchanged in the non-bondable mode.
| 1.0 | Bluetooth: Host: Bonding information distribution in the non-bondable mode - **Describe the bug**
The Zephyr Bluetooth Host (SMP module) seems to violate the Bluetooth specification in the following pairing corner case. With the Bonding Flag (BF) cleared in the Pairing Response, the stack should also set the key distribution flags to zero for the key distribution initiator and the responder. In this case, the specification requires no bonding information distribution (IRK, Identity Address, LTK, etc.).
This requirement is based on the 9.4.2 section, Part C, Vol 3 of the Bluetooth Core specification v5.3:
>
> 9.4.2 Non-bondable mode
>
> 9.4.2.1 Description
> A device in the non-bondable mode does not allow a bond to be created with a peer device.
>
> 9.4.2.2 Conditions
> If a device does not support pairing as defined in the Security Manager section then it is considered to be in non-bondable mode. **If Security Manager pairing is supported, the Host shall set the Bonding_Flags to ‘No Bonding’ as defined in [Vol 3] Part H, Section 3.5.1 and bonding information shall not be exchanged or stored.**
**To Reproduce**
Set the non-bondable mode for the Bluetooth Peripheral device using the following configuration ```CONFIG_BT_BONDABLE=n``` and pair with the Central device (e.g. Android device) that requests bonding in the Pairing Request. Observe in the sniffer trace or HCI logs that bonding information is exchanged between devices.
**Expected behavior**
Bonding information shall not be exchanged in the non-bondable mode.
| non_main | bluetooth host bonding information distribution in the non bondable mode describe the bug the zephyr bluetooth host smp module seems to violate the bluetooth specification in the following pairing corner case with the bonding flag bf cleared in the pairing response the stack should also set the key distribution flags to zero for the key distribution initiator and the responder in this case the specification requires no bonding information distribution irk identity address ltk etc this requirement is based on the section part c vol of the bluetooth core specification non bondable mode description a device in the non bondable mode does not allow a bond to be created with a peer device conditions if a device does not support pairing as defined in the security manager section then it is considered to be in non bondable mode if security manager pairing is supported the host shall set the bonding flags to ‘no bonding’ as defined in part h section and bonding information shall not be exchanged or stored to reproduce set the non bondable mode for the bluetooth peripheral device using the following configuration config bt bondable n and pair with the central device e g android device that requests bonding in the pairing request observe in the sniffer trace or hci logs that bonding information is exchanged between devices expected behavior bonding information shall not be exchanged in the non bondable mode | 0 |
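The spec requirement quoted above can be stated compactly: with `Bonding_Flags` set to 'No Bonding', the responder's key-distribution set must be empty regardless of what the initiator requested. A minimal sketch of that rule (illustrative only, not Zephyr's SMP code):

```python
def responder_key_distribution(bonding_flag, requested_keys):
    """Sketch of the Core spec rule (Vol 3, Part C, 9.4.2.2): in
    non-bondable mode no bonding keys (LTK, IRK, Identity Address,
    CSRK) may be distributed, whatever the initiator asked for."""
    if bonding_flag == "No Bonding":
        return set()
    return set(requested_keys)

print(responder_key_distribution("No Bonding",
                                 {"LTK", "IRK", "IdentityAddress"}))  # -> set()
```

The reported bug is equivalent to the stack taking the `return set(requested_keys)` branch even when the flag says 'No Bonding'.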
253,419 | 21,679,275,531 | IssuesEvent | 2022-05-09 03:35:49 | metaplex-foundation/metaplex | https://api.github.com/repos/metaplex-foundation/metaplex | closed | [Bug]: Price (custom SPL token) mistake | needs tests Store Front priority: med bug Stale | ### Which package is this bug report for?
storefront
### Issue description
Hi,
There is a mistake in the display of the price (for instant sale); the price is correct in the listing:
<img width="272" alt="Screenshot 2022-02-06 at 16 14 36" src="https://user-images.githubusercontent.com/5221349/152685045-9bdc2d61-0afa-4b49-b177-2f59d118d2ee.png">
but not in the auction page:
<img width="689" alt="Screenshot 2022-02-06 at 16 13 58" src="https://user-images.githubusercontent.com/5221349/152685010-421eadf2-9567-491b-b482-1a38ccefeee8.png">
### Command
_No response_
### Relevant log output
_No response_
### Operating system
macOS
### Priority this issue should have
High (immediate attention needed)
### Check the Docs First
- [X] I have checked the docs and it didn't solve my issue | 1.0 | [Bug]: Price (custom SPL token) mistake - ### Which package is this bug report for?
storefront
### Issue description
Hi,
There is a mistake in the display of the price (for instant sale); the price is correct in the listing:
<img width="272" alt="Screenshot 2022-02-06 at 16 14 36" src="https://user-images.githubusercontent.com/5221349/152685045-9bdc2d61-0afa-4b49-b177-2f59d118d2ee.png">
but not in the auction page:
<img width="689" alt="Screenshot 2022-02-06 at 16 13 58" src="https://user-images.githubusercontent.com/5221349/152685010-421eadf2-9567-491b-b482-1a38ccefeee8.png">
### Command
_No response_
### Relevant log output
_No response_
### Operating system
macOS
### Priority this issue should have
High (immediate attention needed)
### Check the Docs First
- [X] I have checked the docs and it didn't solve my issue | non_main | price custom spl token mistake which package is this bug report for storefront issue description hi there are a big mistake for display the price for instant sale the price is good in the listing img width alt screenshot at src but not in the auction page img width alt screenshot at src command no response relevant log output no response operating system macos priority this issue should have high immediate attention needed check the docs first i have checked the docs and it didn t solve my issue | 0 |
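Price-display bugs with custom SPL tokens are commonly a decimals-conversion problem: SPL token amounts are stored on-chain as integers in base units, and the UI must divide by 10^decimals of the mint. Whether that is the exact cause here isn't confirmed by the report, but the conversion itself looks like this (hypothetical helper):

```python
def display_amount(raw_amount, decimals):
    """Convert an SPL token amount from integer base units to a
    human-readable value by dividing by 10**decimals (a frequent
    source of price-display mismatches; illustrative helper)."""
    return raw_amount / 10 ** decimals

# e.g. 1_500_000 base units of a 6-decimal token is 1.5 tokens
print(display_amount(1_500_000, 6))  # -> 1.5
```

A listing page and an auction page that disagree on the price often differ in exactly this step — one converts, the other shows raw units or uses the wrong mint's decimals.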
385,786 | 11,425,696,177 | IssuesEvent | 2020-02-03 20:21:30 | kubeflow/kfctl | https://api.github.com/repos/kubeflow/kfctl | closed | update kfctl upgrade test to v1.0 | feature priority/p0 | Currently kfctl upgrade test covers v0.7.0 -> v0.7.1 upgrade.
We should update the test to cover v0.7.1 -> v1.0.0 upgrade. | 1.0 | update kfctl upgrade test to v1.0 - Currently kfctl upgrade test covers v0.7.0 -> v0.7.1 upgrade.
We should update the test to cover v0.7.1 -> v1.0.0 upgrade. | non_main | update kfctl upgrade test to currently kfctl upgrade test covers upgrade we should update the test to cover upgrade | 0 |
88,552 | 11,102,099,276 | IssuesEvent | 2019-12-16 22:58:32 | ipfs/docs | https://api.github.com/repos/ipfs/docs | closed | [NEW CONTENT] Dweb addressing | OKR 1: Content improvement Size: M design-content difficulty:easy docs-ipfs help wanted | At the IPFS developer summit in Berlin in July 2018, we had poster-making sessions where people explored various IPFS concepts. We should expand on the DWeb Addressing poster by adding a doc in the [`content/guides/concepts`](https://github.com/ipfs/docs/tree/master/content/guides/concepts) folder.

This is a subtask of #56. | 1.0 | [NEW CONTENT] Dweb addressing - At the IPFS developer summit in Berlin in July 2018, we had poster-making sessions where people explored various IPFS concepts. We should expand on the DWeb Addressing poster by adding a doc in the [`content/guides/concepts`](https://github.com/ipfs/docs/tree/master/content/guides/concepts) folder.

This is a subtask of #56. | non_main | dweb addressing at the ipfs developer summit in berlin in july we had poster making sessions where people explored various ipfs concepts we should expand on the dweb addressing poster by adding a doc in the folder this is a subtask of | 0 |
1,544 | 6,572,237,062 | IssuesEvent | 2017-09-11 00:26:28 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | mysql_replication: feature - Support Multi-Source Replication | affects_2.3 feature_idea waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Feature Idea
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
[module: database/mysql/mysql_replication.py]
##### SUMMARY
<!--- Explain the problem briefly -->
MySQL supports Multi-Source Replication ( https://dev.mysql.com/doc/refman/5.7/en/replication-multi-source.html ), but this module doesn't support it.
Can you add a new option (called "channel", which accepts a string) to this module?
example
```
# Change master to master server 192.168.1.1 for channel "master-1"
- mysql_replication: mode=changemaster master_host=192.168.1.1 channel=master-1
```
| True | mysql_replication: feature - Support Multi-Source Replication - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Feature Idea
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
[module: database/mysql/mysql_replication.py]
##### SUMMARY
<!--- Explain the problem briefly -->
MySQL supports Multi-Source Replication ( https://dev.mysql.com/doc/refman/5.7/en/replication-multi-source.html ), but this module doesn't support it.
Can you add a new option (called "channel", which accepts a string) to this module?
example
```
# Change master to master server 192.168.1.1 for channel "master-1"
- mysql_replication: mode=changemaster master_host=192.168.1.1 channel=master-1
```
| main | mysql replication feature support multi source replication issue type feature idea component name summary mysql support multi source replication but this module isn t support multi source replication can you add a new option called channel who accept string input to this module exmaple change master to master server for channel master mysql replication mode changemaster master host channel master | 1 |
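The proposed `channel` option would map directly onto MySQL's multi-source syntax: `CHANGE MASTER TO ... FOR CHANNEL 'name'`. A sketch of how the module could build the statement (hypothetical; the real module assembles many more options and uses parameterized execution):

```python
def change_master_sql(master_host, channel=None):
    """Build a CHANGE MASTER statement, appending FOR CHANNEL when a
    replication channel is given -- MySQL's multi-source form
    (illustrative sketch of the requested feature)."""
    sql = "CHANGE MASTER TO MASTER_HOST='%s'" % master_host
    if channel:
        sql += " FOR CHANNEL '%s'" % channel
    return sql

print(change_master_sql("192.168.1.1", "master-1"))
```

Without `channel`, the statement is unchanged, so existing single-source playbooks would keep working.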
5,829 | 30,851,325,728 | IssuesEvent | 2023-08-02 16:58:15 | bazelbuild/intellij | https://api.github.com/repos/bazelbuild/intellij | opened | Test tree view is missing (Golang) | type: bug awaiting-maintainer | ### Description of the bug:
After running tests in my go project, IntelliJ's [test tree view](https://www.jetbrains.com/help/idea/viewing-and-exploring-test-results.html) no longer shows up.
This problem started with plugin version `2023.07.04.0.1-api-version-231` and does not manifest in version `2023.06.13.0.1-api-version-231`
This is the view that's missing:

### What's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible.
_No response_
### Which Intellij IDE are you using? Please provide the specific version.
IntelliJ IDEA 2023.1.4 (Ultimate Edition), Build #IU-231.9225.16, built on July 11, 2023
### What programming languages and tools are you using? Please provide specific versions.
go1.19.5 darwin/arm64; bazel 5.4.0
### What Bazel plugin version are you using?
2023.07.04.0.1-api-version-231
### Have you found anything relevant by searching the web?
_No response_
### Any other information, logs, or outputs that you want to share?
_No response_ | True | Test tree view is missing (Golang) - ### Description of the bug:
After running tests in my go project, IntelliJ's [test tree view](https://www.jetbrains.com/help/idea/viewing-and-exploring-test-results.html) no longer shows up.
This problem started with plugin version `2023.07.04.0.1-api-version-231` and does not manifest in version `2023.06.13.0.1-api-version-231`
This is the view that's missing:

### What's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible.
_No response_
### Which Intellij IDE are you using? Please provide the specific version.
IntelliJ IDEA 2023.1.4 (Ultimate Edition), Build #IU-231.9225.16, built on July 11, 2023
### What programming languages and tools are you using? Please provide specific versions.
go1.19.5 darwin/arm64; bazel 5.4.0
### What Bazel plugin version are you using?
2023.07.04.0.1-api-version-231
### Have you found anything relevant by searching the web?
_No response_
### Any other information, logs, or outputs that you want to share?
_No response_ | main | test tree view is missing golang description of the bug after running tests in my go project intellij s no longer shows up this problem started with plugin version api version and does not manifest in version api version this is the view that s missing what s the simplest easiest way to reproduce this bug please provide a minimal example if possible no response which intellij ide are you using please provide the specific version intellij idea ultimate edition build iu built on july what programming languages and tools are you using please provide specific versions darwin bazel what bazel plugin version are you using api version have you found anything relevant by searching the web no response any other information logs or outputs that you want to share no response | 1 |
1,176 | 5,096,330,755 | IssuesEvent | 2017-01-03 17:51:19 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | [FeatureRequest] file module with recurse: differentiate file mode and directory mode, exclude options | affects_2.1 feature_idea waiting_on_maintainer | ##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
file module
##### ANSIBLE VERSION
```
$ ansible --version
ansible 2.1.2.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### OS / ENVIRONMENT
Ubuntu host to various guest
##### SUMMARY
When using file module with recurse, settings are set for all contents regardless of type.
Ideally, mode should be given the option to be set differently for file or directory like rsync
ex:
```
- file: path=/etc/some_directory state=directory mode=D0755,F0644 recurse=yes
[vs]
$ rsync --chmod=D0755,F564 ...
```
Also in many web tree, you have a tmp/cache folder with larger permissions (and some with stricter). Usual playbook now can't be idempotent easily without listing everything, especially as it seems with_fileglob is file only, not directory
```
[current playbook]
- file: path=/var/www/html/app state=directory mode=0755 recurse=yes
- file: path=/var/www/html/app/tmp state=directory mode=0775 owner=www-data
- file: path=/var/www/html/app/data state=directory mode=0775 owner=www-data
- file: path=/var/www/html/app/config state=directory mode=0640 group=www-data
[wish]
- file: path=/var/www/html/app state=directory mode=0755 recurse=yes exclude='(tmp|data|config)'
- file: path=/var/www/html/app/tmp state=directory mode=0775 owner=www-data
- file: path=/var/www/html/app/data state=directory mode=0775 owner=www-data
- file: path=/var/www/html/app/config state=directory mode=0640 group=www-data
```
this way playbook can be idempotent and still easily maintained.
If there is a better way, please advise.
Thanks
| True | [FeatureRequest] file module with recurse: differentiate file mode and directory mode, exclude options - ##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
file module
##### ANSIBLE VERSION
```
$ ansible --version
ansible 2.1.2.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### OS / ENVIRONMENT
Ubuntu host to various guest
##### SUMMARY
When using file module with recurse, settings are set for all contents regardless of type.
Ideally, mode should be given the option to be set differently for file or directory like rsync
ex:
```
- file: path=/etc/some_directory state=directory mode=D0755,F0644 recurse=yes
[vs]
$ rsync --chmod=D0755,F564 ...
```
Also in many web tree, you have a tmp/cache folder with larger permissions (and some with stricter). Usual playbook now can't be idempotent easily without listing everything, especially as it seems with_fileglob is file only, not directory
```
[current playbook]
- file: path=/var/www/html/app state=directory mode=0755 recurse=yes
- file: path=/var/www/html/app/tmp state=directory mode=0775 owner=www-data
- file: path=/var/www/html/app/data state=directory mode=0775 owner=www-data
- file: path=/var/www/html/app/config state=directory mode=0640 group=www-data
[wish]
- file: path=/var/www/html/app state=directory mode=0755 recurse=yes exclude='(tmp|data|config)'
- file: path=/var/www/html/app/tmp state=directory mode=0775 owner=www-data
- file: path=/var/www/html/app/data state=directory mode=0775 owner=www-data
- file: path=/var/www/html/app/config state=directory mode=0640 group=www-data
```
this way playbook can be idempotent and still easily maintained.
If there is a better way, please advised.
Thanks
| main | file module with recurse differentiate file mode and directory mode exclude options issue type feature idea component name file module ansible version ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides os environment ubuntu host to various guest summary when using file module with recurse settings are set for all contents regardless of type ideally mode should be given the option to be set differently for file or directory like rsync ex file path etc some directory state directory mode recurse yes rsync chmod also in many web tree you have a tmp cache folder with larger permissions and some with stricter usual playbook now can t be idempotent easily without listing everything especially as it seems with fileglob is file only not directory file path var www html app state directory mode recurse yes file path var www html app tmp state directory mode owner www data file path var www html app data state directory mode owner www data file path var www html app config state directory mode group www data file path var www html app state directory mode recurse yes exclude tmp data config file path var www html app tmp state directory mode owner www data file path var www html app data state directory mode owner www data file path var www html app config state directory mode group www data this way playbook can be idempotent and still easily maintained if there is a better way please advised thanks | 1 |
583 | 4,074,672,898 | IssuesEvent | 2016-05-28 16:26:58 | Homebrew/legacy-homebrew | https://api.github.com/repos/Homebrew/legacy-homebrew | closed | Deprecate old compiler support? | maintainer feedback | More specifically, as discussed elsewhere, should we mandate installing *everything* with either a modern Clang or modern GCC, rather than attempting to debug issues for compilers that no maintainer (I believe) uses on a daily basis?
Potential upsides:
* Would provide a consistent experience.
* Could reduce maintainer burden on edge cases and old compilers breaking formulae without us knowing because we don't test those compilers any more, which in turn might be nicer for end users as they should discover less of those breakages.
* Could make it easier to support older versions of OS X as one of the primarily breakage points would be reduced.
Potential downsides:
* Slightly clashes with Homebrew's nigh-adamant usage of system utilities unless they are insecure, lacking required features or very broken in other ways.
* Mandates everyone on old OS X versions installing a modern GCC prior to being able to build anything not bottled. Compiling GCC on a modern machine is a fairly long build, could be a lot worse on old Macs. | True | Deprecate old compiler support? - More specifically, as discussed elsewhere, should we mandate installing *everything* with either a modern Clang or modern GCC, rather than attempting to debug issues for compilers that no maintainer (I believe) uses on a daily basis?
Potential upsides:
* Would provide a consistent experience.
* Could reduce maintainer burden on edge cases and old compilers breaking formulae without us knowing because we don't test those compilers any more, which in turn might be nicer for end users as they should discover less of those breakages.
* Could make it easier to support older versions of OS X as one of the primarily breakage points would be reduced.
Potential downsides:
* Slightly clashes with Homebrew's nigh-adamant usage of system utilities unless they are insecure, lacking required features or very broken in other ways.
* Mandates everyone on old OS X versions installing a modern GCC prior to being able to build anything not bottled. Compiling GCC on a modern machine is a fairly long build, could be a lot worse on old Macs. | main | deprecate old compiler support more specifically as discussed elsewhere should we mandate installing everything with either a modern clang or modern gcc rather than attempting to debug issues for compilers that no maintainer i believe uses on a daily basis potential upsides would provide a consistent experience could reduce maintainer burden on edge cases and old compilers breaking formulae without us knowing because we don t test those compilers any more which in turn might be nicer for end users as they should discover less of those breakages could make it easier to support older versions of os x as one of the primarily breakage points would be reduced potential downsides slightly clashes with homebrew s nigh adamant usage of system utilities unless they are insecure lacking required features or very broken in other ways mandates everyone on old os x versions installing a modern gcc prior to being able to build anything not bottled compiling gcc on a modern machine is a fairly long build could be a lot worse on old macs | 1 |
60,519 | 12,125,719,239 | IssuesEvent | 2020-04-22 15:56:24 | RRZE-Webteam/fau-person | https://api.github.com/repos/RRZE-Webteam/fau-person | closed | Switch placeholder images to FA | Codeoptimierung enhancement | Provide them either via FontAwesome or as SVG.
(SVG preferred). | 1.0 | Switch placeholder images to FA - Provide them either via FontAwesome or as SVG.
(SVG preferred). | non_main | switch placeholder images to fa provide them either via fontawesome or as svg svg preferred | 0 |
4,643 | 24,038,574,703 | IssuesEvent | 2022-09-15 21:51:15 | tModLoader/tModLoader | https://api.github.com/repos/tModLoader/tModLoader | closed | IO save more fine-grained | Requestor-TML Maintainers Type: Change/Feature Request NEW ISSUE | ### Do you intend to personally contribute/program this feature?
Yes
### I would like to see this change made to improve my experience with
tModLoader code as a Contributor/Maintainer
### Description
Currently, when saving mod data, if there is an error in saving mod data, all mod data will be lost.
### What does this proposal attempt to solve or improve?
Catch errors so that they do not affect the saving of other mod data.
### Which (other) solutions should be considered?
_No response_ | True | IO save more fine-grained - ### Do you intend to personally contribute/program this feature?
Yes
### I would like to see this change made to improve my experience with
tModLoader code as a Contributor/Maintainer
### Description
Currently, when saving mod data, if there is an error in saving mod data, all mod data will be lost.
### What does this proposal attempt to solve or improve?
Catch errors so that they do not affect the saving of other mod data.
### Which (other) solutions should be considered?
_No response_ | main | io save more fine grained do you intend to personally contribute program this feature yes i would like to see this change made to improve my experience with tmodloader code as a contributor maintainer description currently when saving mod data if there is an error in saving mod data all mod data will be lost what does this proposal attempt to solve or improve catch errors so that they do not affect the saving of other mod data which other solutions should be considered no response | 1 |
295,020 | 22,173,750,924 | IssuesEvent | 2022-06-06 05:46:01 | splintered-reality/py_trees | https://api.github.com/repos/splintered-reality/py_trees | closed | Sphinx build is broken | type:documentation component:infra | ```
PY_TREES_DISABLE_COLORS=1 sphinx-build -E -b html doc doc/html
Running Sphinx v1.8.5
Extension error:
Could not import extension sphinx.builders.latex (exception: cannot import name 'contextfunction' from 'jinja2' (/mnt/mervin/workspaces/devel/py_trees/src/py_trees/.venv/lib/python3.8/site-packages/jinja2/__init__.py))
make: *** [Makefile:21: docs] Error 2
``` | 1.0 | Sphinx build is broken - ```
PY_TREES_DISABLE_COLORS=1 sphinx-build -E -b html doc doc/html
Running Sphinx v1.8.5
Extension error:
Could not import extension sphinx.builders.latex (exception: cannot import name 'contextfunction' from 'jinja2' (/mnt/mervin/workspaces/devel/py_trees/src/py_trees/.venv/lib/python3.8/site-packages/jinja2/__init__.py))
make: *** [Makefile:21: docs] Error 2
``` | non_main | sphinx build is broken py trees disable colors sphinx build e b html doc doc html running sphinx extension error could not import extension sphinx builders latex exception cannot import name contextfunction from mnt mervin workspaces devel py trees src py trees venv lib site packages init py make error | 0 |
12,699 | 3,640,677,079 | IssuesEvent | 2016-02-13 02:35:52 | gheber/kenzo | https://api.github.com/repos/gheber/kenzo | closed | Subsection 1.4.4 in Chapter1.ipynb | documentation | The functions of the subsection "1.4.4 Accessing objects" are used quite often in Kenzo, so it would be nice to include the documentation about them and not only the examples. | 1.0 | Subsection 1.4.4 in Chapter1.ipynb - The functions of the subsection "1.4.4 Accessing objects" are used quite often in Kenzo, so it would be nice to include the documentation about them and not only the examples. | non_main | subsection in ipynb the functions of the subsection accessing objects are used quite often in kenzo so it would be nice to include the documentation about them and not only the examples | 0 |
59,038 | 14,525,434,568 | IssuesEvent | 2020-12-14 12:56:39 | panda3d/panda3d | https://api.github.com/repos/panda3d/panda3d | closed | macOS: Thousand warnings from eigen when compiling with latest Xcode | build macos | ## Description
With the latest Xcode 12 on macOS, the compilation generates thousands of warnings due to some invalid code in libeigen:
<pre>
In file included from panda/src/cocoadisplay/p3cocoadisplay_composite1.mm:1:
In file included from panda/src/cocoadisplay/config_cocoadisplay.mm:15:
In file included from panda/src/cocoadisplay/cocoaGraphicsBuffer.h:18:
In file included from built/include/glgsg.h:90:
In file included from built/include/glstuff_src.h:32:
In file included from built/include/glTextureContext_src.h:15:
In file included from built/include/textureContext.h:20:
In file included from built/include/texture.h:25:
In file included from built/include/graphicsStateGuardianBase.h:21:
In file included from built/include/luse.h:41:
In file included from built/include/aa_luse.h:24:
In file included from built/include/lsimpleMatrix.h:20:
In file included from built/include/Eigen/Dense:1:
In file included from built/include/Eigen/Core:284:
built/include/Eigen/src/Core/Assign.h:57:41: warning: converting the enum constant to a boolean [-Wint-in-bool-context]
MaySliceVectorize = MightVectorize && DstHasDirectAccess
^
built/include/Eigen/src/Core/Assign.h:53:57: warning: converting the enum constant to a boolean [-Wint-in-bool-context]
MayLinearVectorize = MightVectorize && MayLinearize && DstHasDirectAccess
^
built/include/Eigen/src/Core/Assign.h:506:78: note: in instantiation of template class 'Eigen::internal::assign_traits<Eigen::Matrix<double, 1, 2, 3, 1, 2>, Eigen::Matrix<double, 1, 2, 1, 1, 2> >' requested here
internal::assign_impl<Derived, OtherDerived, int(SameType) ? int(internal::assign_traits<Derived, OtherDerived>::Traversal)
^
built/include/Eigen/src/Core/PlainObjectBase.h:414:20: note: in instantiation of function template specialization 'Eigen::DenseBase<Eigen::Matrix<double, 1, 2, 3, 1, 2> >::lazyAssign<Eigen::Matrix<double, 1, 2, 1, 1, 2> >' requested here
return Base::lazyAssign(other.derived());
^
built/include/Eigen/src/Core/Assign.h:527:97: note: in instantiation of function template specialization 'Eigen::PlainObjectBase<Eigen::Matrix<double, 1, 2, 3, 1, 2> >::lazyAssign<Eigen::Matrix<double, 1, 2, 1, 1, 2> >' requested here
static EIGEN_STRONG_INLINE Derived& run(Derived& dst, const OtherDerived& other) { return dst.lazyAssign(other.derived()); }
^
built/include/Eigen/src/Core/PlainObjectBase.h:653:69: note: in instantiation of member function 'Eigen::internal::assign_selector<Eigen::Matrix<double, 1, 2, 3, 1, 2>, Eigen::Matrix<double, 1, 2, 1, 1, 2>, false, false>::run' requested here
return internal::assign_selector<Derived,OtherDerived,false>::run(this->derived(), other.derived());
^
built/include/Eigen/src/Core/PlainObjectBase.h:635:101: note: in instantiation of function template specialization 'Eigen::PlainObjectBase<Eigen::Matrix<double, 1, 2, 3, 1, 2> >::_set_noalias<Eigen::Matrix<double, 1, 2, 1, 1, 2> >' requested
here
EIGEN_STRONG_INLINE void _set_selector(const OtherDerived& other, const internal::true_type&) { _set_noalias(other.eval()); }
^
built/include/Eigen/src/Core/PlainObjectBase.h:630:7: note: in instantiation of function template specialization 'Eigen::PlainObjectBase<Eigen::Matrix<double, 1, 2, 3, 1, 2> >::_set_selector<Eigen::CoeffBasedProduct<const Eigen::Matrix<double,
1, 2, 3, 1, 2> &, const Eigen::Block<const Eigen::Matrix<double, 3, 3, 3, 3, 3>, 2, 2, false>, 6> >' requested here
_set_selector(other.derived(), typename internal::conditional<static_cast<bool>(int(OtherDerived::Flags) & EvalBeforeAssigningBit), internal::true_type, internal::false_type>::type());
^
built/include/Eigen/src/Core/Matrix.h:172:20: note: in instantiation of function template specialization 'Eigen::PlainObjectBase<Eigen::Matrix<double, 1, 2, 3, 1, 2> >::_set<Eigen::CoeffBasedProduct<const Eigen::Matrix<double, 1, 2, 3, 1, 2> &,
const Eigen::Block<const Eigen::Matrix<double, 3, 3, 3, 3, 3>, 2, 2, false>, 6> >' requested here
return Base::_set(other);
^
built/include/lmatrix3_src.I:655:8: note: in instantiation of function template specialization 'Eigen::Matrix<double, 1, 2, 3, 1, 2>::operator=<Eigen::CoeffBasedProduct<const Eigen::Matrix<double, 1, 2, 3, 1, 2> &, const Eigen::Block<const
Eigen::Matrix<double, 3, 3, 3, 3, 3>, 2, 2, false>, 6> >' requested here
v._v = v._v * _m.block<2, 2>(0, 0);
^
</pre>
This is fixed in libeigen 3.3.5, see https://gitlab.com/libeigen/eigen/-/issues/1402 (The needed commits are referenced in the comments)
## Steps to Reproduce
Switch to release/1.10.x branch and download third party tools from https://www.panda3d.org/download/panda3d-1.10.7/panda3d-1.10.7-tools-mac.tar.gz
## Environment
* Operating system: macOS
* System architecture: x64
* Panda3D version: 1.10.7
* Installation method: built from source
* Python version (if using Python): 3.7.9
* Compiler (if using C++): Xcode 12
| 1.0 | macOS: Thousand warnings from eigen when compiling with latest Xcode - ## Description
With the latest Xcode 12 on macOS, the compilation generates thousands of warnings due to some invalid code in libeigen:
<pre>
In file included from panda/src/cocoadisplay/p3cocoadisplay_composite1.mm:1:
In file included from panda/src/cocoadisplay/config_cocoadisplay.mm:15:
In file included from panda/src/cocoadisplay/cocoaGraphicsBuffer.h:18:
In file included from built/include/glgsg.h:90:
In file included from built/include/glstuff_src.h:32:
In file included from built/include/glTextureContext_src.h:15:
In file included from built/include/textureContext.h:20:
In file included from built/include/texture.h:25:
In file included from built/include/graphicsStateGuardianBase.h:21:
In file included from built/include/luse.h:41:
In file included from built/include/aa_luse.h:24:
In file included from built/include/lsimpleMatrix.h:20:
In file included from built/include/Eigen/Dense:1:
In file included from built/include/Eigen/Core:284:
built/include/Eigen/src/Core/Assign.h:57:41: warning: converting the enum constant to a boolean [-Wint-in-bool-context]
MaySliceVectorize = MightVectorize && DstHasDirectAccess
^
built/include/Eigen/src/Core/Assign.h:53:57: warning: converting the enum constant to a boolean [-Wint-in-bool-context]
MayLinearVectorize = MightVectorize && MayLinearize && DstHasDirectAccess
^
built/include/Eigen/src/Core/Assign.h:506:78: note: in instantiation of template class 'Eigen::internal::assign_traits<Eigen::Matrix<double, 1, 2, 3, 1, 2>, Eigen::Matrix<double, 1, 2, 1, 1, 2> >' requested here
internal::assign_impl<Derived, OtherDerived, int(SameType) ? int(internal::assign_traits<Derived, OtherDerived>::Traversal)
^
built/include/Eigen/src/Core/PlainObjectBase.h:414:20: note: in instantiation of function template specialization 'Eigen::DenseBase<Eigen::Matrix<double, 1, 2, 3, 1, 2> >::lazyAssign<Eigen::Matrix<double, 1, 2, 1, 1, 2> >' requested here
return Base::lazyAssign(other.derived());
^
built/include/Eigen/src/Core/Assign.h:527:97: note: in instantiation of function template specialization 'Eigen::PlainObjectBase<Eigen::Matrix<double, 1, 2, 3, 1, 2> >::lazyAssign<Eigen::Matrix<double, 1, 2, 1, 1, 2> >' requested here
static EIGEN_STRONG_INLINE Derived& run(Derived& dst, const OtherDerived& other) { return dst.lazyAssign(other.derived()); }
^
built/include/Eigen/src/Core/PlainObjectBase.h:653:69: note: in instantiation of member function 'Eigen::internal::assign_selector<Eigen::Matrix<double, 1, 2, 3, 1, 2>, Eigen::Matrix<double, 1, 2, 1, 1, 2>, false, false>::run' requested here
return internal::assign_selector<Derived,OtherDerived,false>::run(this->derived(), other.derived());
^
built/include/Eigen/src/Core/PlainObjectBase.h:635:101: note: in instantiation of function template specialization 'Eigen::PlainObjectBase<Eigen::Matrix<double, 1, 2, 3, 1, 2> >::_set_noalias<Eigen::Matrix<double, 1, 2, 1, 1, 2> >' requested
here
EIGEN_STRONG_INLINE void _set_selector(const OtherDerived& other, const internal::true_type&) { _set_noalias(other.eval()); }
^
built/include/Eigen/src/Core/PlainObjectBase.h:630:7: note: in instantiation of function template specialization 'Eigen::PlainObjectBase<Eigen::Matrix<double, 1, 2, 3, 1, 2> >::_set_selector<Eigen::CoeffBasedProduct<const Eigen::Matrix<double,
1, 2, 3, 1, 2> &, const Eigen::Block<const Eigen::Matrix<double, 3, 3, 3, 3, 3>, 2, 2, false>, 6> >' requested here
_set_selector(other.derived(), typename internal::conditional<static_cast<bool>(int(OtherDerived::Flags) & EvalBeforeAssigningBit), internal::true_type, internal::false_type>::type());
^
built/include/Eigen/src/Core/Matrix.h:172:20: note: in instantiation of function template specialization 'Eigen::PlainObjectBase<Eigen::Matrix<double, 1, 2, 3, 1, 2> >::_set<Eigen::CoeffBasedProduct<const Eigen::Matrix<double, 1, 2, 3, 1, 2> &,
const Eigen::Block<const Eigen::Matrix<double, 3, 3, 3, 3, 3>, 2, 2, false>, 6> >' requested here
return Base::_set(other);
^
built/include/lmatrix3_src.I:655:8: note: in instantiation of function template specialization 'Eigen::Matrix<double, 1, 2, 3, 1, 2>::operator=<Eigen::CoeffBasedProduct<const Eigen::Matrix<double, 1, 2, 3, 1, 2> &, const Eigen::Block<const
Eigen::Matrix<double, 3, 3, 3, 3, 3>, 2, 2, false>, 6> >' requested here
v._v = v._v * _m.block<2, 2>(0, 0);
^
</pre>
This is fixed in libeigen 3.3.5, see https://gitlab.com/libeigen/eigen/-/issues/1402 (The needed commits are referenced in the comments)
## Steps to Reproduce
Switch to release/1.10.x branch and download third party tools from https://www.panda3d.org/download/panda3d-1.10.7/panda3d-1.10.7-tools-mac.tar.gz
## Environment
* Operating system: macOS
* System architecture: x64
* Panda3D version: 1.10.7
* Installation method: built from source
* Python version (if using Python): 3.7.9
* Compiler (if using C++): Xcode 12
| non_main | macos thousand warnings from eigen when compiling with latest xcode description with the latest xcode on macos the compilation generates thousands of warnings due to some invalid code in libeigen in file included from panda src cocoadisplay mm in file included from panda src cocoadisplay config cocoadisplay mm in file included from panda src cocoadisplay cocoagraphicsbuffer h in file included from built include glgsg h in file included from built include glstuff src h in file included from built include gltexturecontext src h in file included from built include texturecontext h in file included from built include texture h in file included from built include graphicsstateguardianbase h in file included from built include luse h in file included from built include aa luse h in file included from built include lsimplematrix h in file included from built include eigen dense in file included from built include eigen core built include eigen src core assign h warning converting the enum constant to a boolean mayslicevectorize mightvectorize dsthasdirectaccess built include eigen src core assign h warning converting the enum constant to a boolean maylinearvectorize mightvectorize maylinearize dsthasdirectaccess built include eigen src core assign h note in instantiation of template class eigen internal assign traits eigen matrix requested here internal assign impl traversal built include eigen src core plainobjectbase h note in instantiation of function template specialization eigen densebase lazyassign requested here return base lazyassign other derived built include eigen src core assign h note in instantiation of function template specialization eigen plainobjectbase lazyassign requested here static eigen strong inline derived run derived dst const otherderived other return dst lazyassign other derived built include eigen src core plainobjectbase h note in instantiation of member function eigen internal assign selector eigen matrix false false run requested here return internal assign selector run this derived other derived built include eigen src core plainobjectbase h note in instantiation of function template specialization eigen plainobjectbase set noalias requested here eigen strong inline void set selector const otherderived other const internal true type set noalias other eval built include eigen src core plainobjectbase h note in instantiation of function template specialization eigen plainobjectbase set selector eigen coeffbasedproduct const eigen matrix double const eigen block false requested here set selector other derived typename internal conditional int otherderived flags evalbeforeassigningbit internal true type internal false type type built include eigen src core matrix h note in instantiation of function template specialization eigen plainobjectbase set const eigen block false requested here return base set other built include src i note in instantiation of function template specialization eigen matrix operator const eigen block const eigen matrix false requested here v v v v m block this is fixed in libeigen see the needed commits are referenced in the comments steps to reproduce switch to release x branch and download third party tools from environment operating system macos system architecture version installation method built from source python version if using python compiler if using c xcode | 0 |
202,483 | 23,077,344,183 | IssuesEvent | 2022-07-26 01:50:21 | directoryxx/Inventory-SISI | https://api.github.com/repos/directoryxx/Inventory-SISI | opened | CVE-2022-31129 (High) detected in moment-2.24.0.tgz | security vulnerability | ## CVE-2022-31129 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>moment-2.24.0.tgz</b></p></summary>
<p>Parse, validate, manipulate, and display dates</p>
<p>Library home page: <a href="https://registry.npmjs.org/moment/-/moment-2.24.0.tgz">https://registry.npmjs.org/moment/-/moment-2.24.0.tgz</a></p>
<p>Path to dependency file: /assets/adminlte/bower_components/bootstrap-daterangepicker/package.json</p>
<p>Path to vulnerable library: /assets/adminlte/bower_components/bootstrap-daterangepicker/node_modules/moment/package.json</p>
<p>
Dependency Hierarchy:
- :x: **moment-2.24.0.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
moment is a JavaScript date library for parsing, validating, manipulating, and formatting dates. Affected versions of moment were found to use an inefficient parsing algorithm. Specifically using string-to-date parsing in moment (more specifically rfc2822 parsing, which is tried by default) has quadratic (N^2) complexity on specific inputs. Users may notice a noticeable slowdown is observed with inputs above 10k characters. Users who pass user-provided strings without sanity length checks to moment constructor are vulnerable to (Re)DoS attacks. The problem is patched in 2.29.4, the patch can be applied to all affected versions with minimal tweaking. Users are advised to upgrade. Users unable to upgrade should consider limiting date lengths accepted from user input.
<p>Publish Date: 2022-07-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-31129>CVE-2022-31129</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/moment/moment/security/advisories/GHSA-wc69-rhjr-hc9g">https://github.com/moment/moment/security/advisories/GHSA-wc69-rhjr-hc9g</a></p>
<p>Release Date: 2022-07-06</p>
<p>Fix Resolution: moment - 2.29.4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2022-31129 (High) detected in moment-2.24.0.tgz - ## CVE-2022-31129 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>moment-2.24.0.tgz</b></p></summary>
<p>Parse, validate, manipulate, and display dates</p>
<p>Library home page: <a href="https://registry.npmjs.org/moment/-/moment-2.24.0.tgz">https://registry.npmjs.org/moment/-/moment-2.24.0.tgz</a></p>
<p>Path to dependency file: /assets/adminlte/bower_components/bootstrap-daterangepicker/package.json</p>
<p>Path to vulnerable library: /assets/adminlte/bower_components/bootstrap-daterangepicker/node_modules/moment/package.json</p>
<p>
Dependency Hierarchy:
- :x: **moment-2.24.0.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
moment is a JavaScript date library for parsing, validating, manipulating, and formatting dates. Affected versions of moment were found to use an inefficient parsing algorithm. Specifically using string-to-date parsing in moment (more specifically rfc2822 parsing, which is tried by default) has quadratic (N^2) complexity on specific inputs. Users may notice a noticeable slowdown is observed with inputs above 10k characters. Users who pass user-provided strings without sanity length checks to moment constructor are vulnerable to (Re)DoS attacks. The problem is patched in 2.29.4, the patch can be applied to all affected versions with minimal tweaking. Users are advised to upgrade. Users unable to upgrade should consider limiting date lengths accepted from user input.
<p>Publish Date: 2022-07-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-31129>CVE-2022-31129</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/moment/moment/security/advisories/GHSA-wc69-rhjr-hc9g">https://github.com/moment/moment/security/advisories/GHSA-wc69-rhjr-hc9g</a></p>
<p>Release Date: 2022-07-06</p>
<p>Fix Resolution: moment - 2.29.4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve high detected in moment tgz cve high severity vulnerability vulnerable library moment tgz parse validate manipulate and display dates library home page a href path to dependency file assets adminlte bower components bootstrap daterangepicker package json path to vulnerable library assets adminlte bower components bootstrap daterangepicker node modules moment package json dependency hierarchy x moment tgz vulnerable library vulnerability details moment is a javascript date library for parsing validating manipulating and formatting dates affected versions of moment were found to use an inefficient parsing algorithm specifically using string to date parsing in moment more specifically parsing which is tried by default has quadratic n complexity on specific inputs users may notice a noticeable slowdown is observed with inputs above characters users who pass user provided strings without sanity length checks to moment constructor are vulnerable to re dos attacks the problem is patched in the patch can be applied to all affected versions with minimal tweaking users are advised to upgrade users unable to upgrade should consider limiting date lengths accepted from user input publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution moment step up your open source security game with mend | 0 |
1,118 | 4,989,292,599 | IssuesEvent | 2016-12-08 11:18:52 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | vsphere_portgroup only updates 1 host. | affects_2.2 bug_report cloud vmware waiting_on_maintainer | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
vmware_portgroup
##### ANSIBLE VERSION
```
2.2
```
##### OS / ENVIRONMENT
Ubuntu 14.04
##### SUMMARY
When specifying a vCenter server, only 1 host of your datacenter is changed.
##### STEPS TO REPRODUCE
```
- name: Add Management Network VM Portgroup
local_action:
module: vmware_portgroup
hostname: vcenter_hostname
username: vcenter_username
password: vcenter_password
switch_name: vswitch_name
portgroup_name: portgroup_name
vlan_id: vlan_id
```
##### EXPECTED RESULTS
When specifying a vCenter server, you should be able to specify a cluster containing all the hosts you want changed.
##### ACTUAL RESULTS
```
host = get_all_objs(content, [vim.HostSystem])
if not host:
raise SystemExit("Unable to locate Physical Host.")
host_system = host.keys()[0] <<
```
only the first item in host.keys is changed with the new portgroup
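A sketch of the requested behavior (hypothetical code, not the actual module patch): iterate over every host that `get_all_objs` returns instead of taking only `host.keys()[0]`, optionally filtering to a named cluster. The function name is made up, and `add_portgroup` is a duck-typed stand-in for the real vSphere `AddPortGroup` call:

```python
# Hedged sketch of the fix, not the actual vmware_portgroup patch:
# apply the change to every discovered host rather than host.keys()[0].

def add_portgroup_to_all_hosts(hosts, portgroup_name, cluster_name=None):
    """hosts: mapping keyed by HostSystem-like objects (the shape
    get_all_objs returns). Returns the list of hosts that were changed."""
    if not hosts:
        raise SystemExit("Unable to locate Physical Host.")
    changed = []
    for host_system in hosts.keys():
        # Optionally restrict the change to the hosts of one cluster.
        if cluster_name and getattr(host_system, "cluster", None) != cluster_name:
            continue
        host_system.add_portgroup(portgroup_name)  # stand-in for AddPortGroup
        changed.append(host_system)
    return changed
```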
| True | vsphere_portgroup only updates 1 host. - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
vmware_portgroup
##### ANSIBLE VERSION
```
2.2
```
##### OS / ENVIRONMENT
Ubuntu 14.04
##### SUMMARY
When specifying a vCenter server, only 1 host of your datacenter is changed.
##### STEPS TO REPRODUCE
```
- name: Add Management Network VM Portgroup
local_action:
module: vmware_portgroup
hostname: vcenter_hostname
username: vcenter_username
password: vcenter_password
switch_name: vswitch_name
portgroup_name: portgroup_name
vlan_id: vlan_id
```
##### EXPECTED RESULTS
When specifying a vCenter server, you should be able to specify a cluster containing all the hosts you want changed.
##### ACTUAL RESULTS
```
host = get_all_objs(content, [vim.HostSystem])
if not host:
raise SystemExit("Unable to locate Physical Host.")
host_system = host.keys()[0] <<
```
only the first item in host.keys is changed with the new portgroup
| main | vsphere portgroup only updates host issue type bug report component name vmware portgroup ansible version os environment ubuntu summary when specifiying a vcenter server only host of your datacenter is changed steps to reproduce name add management network vm portgroup local action module vmware portgroup hostname vcenter hostname username vcenter username password vcenter password switch name vswitch name portgroup name portgroup name vlan id vlan id expected results when specifying a vcenter server you should be able to specify a cluster of all the hosts you want changed actual results host get all objs content if not host raise systemexit unable to locate physical host host system host keys only the first item in host keys is changed with the new portgroup | 1 |
10,890 | 8,788,401,086 | IssuesEvent | 2018-12-20 22:03:02 | firebase/firebase-ios-sdk | https://api.github.com/repos/firebase/firebase-ios-sdk | closed | SDK Download stuck at 5.14 | Infrastructure | Please update the manual iOS SDK download to Firebase 5.15. Thanks! cc @paulb777 | 1.0 | SDK Download stuck at 5.14 - Please update the manual iOS SDK download to Firebase 5.15. Thanks! cc @paulb777 | non_main | sdk download stuck at please update the manual ios sdk download to firebase thanks cc | 0 |
2,762 | 9,872,946,623 | IssuesEvent | 2019-06-22 09:42:22 | arcticicestudio/snowsaw | https://api.github.com/repos/arcticicestudio/snowsaw | opened | Husky | context-workflow scope-maintainability type-feature | <p align="center"><img src="https://user-images.githubusercontent.com/7836623/48658801-30ad2a80-ea48-11e8-9323-16bb0b25002b.png" width="20%" /></p>
> Must be resolved **after** #44
Integrate [Husky][gh-husky], the tool that makes Git hooks easy and can prevent bad Git commits, pushes and more :dog: _woof_!
### Configuration
The configuration file `.huskyrc.js` will be placed in the project root and includes the command to run for any [supported Git hook][gh-husky-docs-hooks]. It will at least contain configs for the following hooks:
- `pre-commit` - Run lint-staged (#44) before each commit (via `lint-staged` command) to ensure all staged files are compliant to all style guides.
## Tasks
- [ ] Install [husky][npm-husky] package.
- [ ] Implement `.huskyrc.js` configuration file.
[gh-husky]: https://github.com/typicode/husky
[gh-husky-docs-hooks]: https://github.com/typicode/husky/blob/master/DOCS.md#supported-hooks
[npm-husky]: https://www.npmjs.com/package/husky | True | Husky - <p align="center"><img src="https://user-images.githubusercontent.com/7836623/48658801-30ad2a80-ea48-11e8-9323-16bb0b25002b.png" width="20%" /></p>
> Must be resolved **after** #44
Integrate [Husky][gh-husky], the tool that makes Git hooks easy and can prevent bad Git commits, pushes and more :dog: _woof_!
### Configuration
The configuration file `.huskyrc.js` will be placed in the project root and includes the command to run for any [supported Git hook][gh-husky-docs-hooks]. It will at least contain configs for the following hooks:
- `pre-commit` - Run lint-staged (#44) before each commit (via `lint-staged` command) to ensure all staged files are compliant to all style guides.
## Tasks
- [ ] Install [husky][npm-husky] package.
- [ ] Implement `.huskyrc.js` configuration file.
[gh-husky]: https://github.com/typicode/husky
[gh-husky-docs-hooks]: https://github.com/typicode/husky/blob/master/DOCS.md#supported-hooks
[npm-husky]: https://www.npmjs.com/package/husky | main | husky must be resolved after integrate the tool that make git hooks easy and can prevent bad git commits pushes and more dog woof configuration the configuration file huskyrc js will be placed in the project root and includes the command to run for any it will at least contain configs for the following hooks pre commit run lint staged before each commit via lint staged command to ensure all staged files are compliant to all style guides tasks install package implement huskyrc js configuration file | 1 |
176,810 | 13,654,229,261 | IssuesEvent | 2020-09-27 16:25:16 | tarantool/tarantool | https://api.github.com/repos/tarantool/tarantool | opened | test: flaky app-tap/http_client.test.lua test | flaky test qa | Tarantool version:
Tarantool 2.6.0-114-g6c04687566
Target: Linux-x86_64-RelWithDebInfo
Build options: cmake . -DCMAKE_INSTALL_PREFIX=/builds/M4RrgQZ3/0/tarantool/tarantool/static-build/tarantool-prefix -DENABLE_BACKTRACE=TRUE
Compiler: /usr/bin/cc /usr/bin/c++
C_FLAGS: -static-libstdc++ -fexceptions -funwind-tables -fno-omit-frame-pointer -fno-stack-protector -fno-common -fopenmp -msse2 -std=c11 -Wall -Wextra -Wno-strict-aliasing -Wno-char-subscripts -Wno-format-truncation -Wno-gnu-alignof-expression -fno-gnu89-inline -Wno-cast-function-type -Werror
CXX_FLAGS: -static-libstdc++ -fexceptions -funwind-tables -fno-omit-frame-pointer -fno-stack-protector -fno-common -fopenmp -msse2 -std=c++11 -Wall -Wextra -Wno-strict-aliasing -Wno-char-subscripts -Wno-format-truncation -Wno-invalid-offsetof -Wno-gnu-alignof-expression -Wno-cast-function-type -Werror
OS version:
CentOS 7
Bug description:
https://gitlab.com/tarantool/tarantool/-/jobs/759207861#L4888
https://gitlab.com/tarantool/tarantool/-/jobs/759236803#L2293
https://gitlab.com/tarantool/tarantool/-/jobs/759476184#L2185
It is not possible to use the results file for checksum creation because it prints a lot of changing information.
```
[034] app-tap/http_client.test.lua [ fail ]
[034] Test failed! Output from reject file app-tap/http_client.reject:
[034] TAP version 13
[034] # TARANTOOL_SRC_DIR=/builds/M4RrgQZ3/0/tarantool/tarantool
[034] 1..2
[034] # http over AF_INET
[034] 1..11
[034] # starting HTTP server on 127.0.0.1:34324...
[034] not ok - server started
[034] ---
[034] filename: /builds/M4RrgQZ3/0/tarantool/tarantool/test/app-tap/http_client.test.lua
[034] line: 0
[034] expected: heartbeat
[034] trace:
[034] - line: 25
[034] source: '@/builds/M4RrgQZ3/0/tarantool/tarantool/test/app-tap/http_client.test.lua'
[034] filename: /builds/M4RrgQZ3/0/tarantool/tarantool/test/app-tap/http_client.test.lua
[034] what: Lua
[034] namewhat: upvalue
[034] name: start_server
[034] src: '.../0/tarantool/tarantool/test/app-tap/http_client.test.lua'
[034] - line: 585
[034] source: '@/builds/M4RrgQZ3/0/tarantool/tarantool/test/app-tap/http_client.test.lua'
[034] filename: /builds/M4RrgQZ3/0/tarantool/tarantool/test/app-tap/http_client.test.lua
[034] what: Lua
[034] namewhat: global
[034] name: run_tests
[034] src: '.../0/tarantool/tarantool/test/app-tap/http_client.test.lua'
[034] - line: 616
[034] source: '@/builds/M4RrgQZ3/0/tarantool/tarantool/test/app-tap/http_client.test.lua'
[034] filename: /builds/M4RrgQZ3/0/tarantool/tarantool/test/app-tap/http_client.test.lua
[034] what: Lua
[034] namewhat: local
[034] name: fun
[034] src: '.../0/tarantool/tarantool/test/app-tap/http_client.test.lua'
[034] - line: 218
[034] source: '@builtin/tap.lua'
[034] filename: builtin/tap.lua
[034] what: Lua
[034] namewhat: method
[034] name: test
[034] src: builtin/tap.lua
[034] - line: 0
[034] source: '@/builds/M4RrgQZ3/0/tarantool/tarantool/test/app-tap/http_client.test.lua'
[034] filename: /builds/M4RrgQZ3/0/tarantool/tarantool/test/app-tap/http_client.test.lua
[034] what: main
[034] namewhat:
[034] src: '.../0/tarantool/tarantool/test/app-tap/http_client.test.lua'
[034] ...
[034] # trying to connect to http://127.0.0.1:34324/
[034] not ok - connection is ok
[034] ---
[034] filename: /builds/M4RrgQZ3/0/tarantool/tarantool/test/app-tap/http_client.test.lua
[034] trace:
[034] - line: 25
[034] source: '@/builds/M4RrgQZ3/0/tarantool/tarantool/test/app-tap/http_client.test.lua'
[034] filename: /builds/M4RrgQZ3/0/tarantool/tarantool/test/app-tap/http_client.test.lua
[034] what: Lua
[034] namewhat: upvalue
[034] name: start_server
[034] src: '.../0/tarantool/tarantool/test/app-tap/http_client.test.lua'
[034] - line: 585
[034] source: '@/builds/M4RrgQZ3/0/tarantool/tarantool/test/app-tap/http_client.test.lua'
[034] filename: /builds/M4RrgQZ3/0/tarantool/tarantool/test/app-tap/http_client.test.lua
[034] what: Lua
[034] namewhat: global
[034] name: run_tests
[034] src: '.../0/tarantool/tarantool/test/app-tap/http_client.test.lua'
[034] - line: 616
[034] source: '@/builds/M4RrgQZ3/0/tarantool/tarantool/test/app-tap/http_client.test.lua'
[034] filename: /builds/M4RrgQZ3/0/tarantool/tarantool/test/app-tap/http_client.test.lua
[034] what: Lua
[034] namewhat: local
[034] name: fun
[034] src: '.../0/tarantool/tarantool/test/app-tap/http_client.test.lua'
[034] - line: 218
[034] source: '@builtin/tap.lua'
[034] filename: builtin/tap.lua
[034] what: Lua
[034] namewhat: method
[034] name: test
[034] src: builtin/tap.lua
[034] - line: 0
[034] source: '@/builds/M4RrgQZ3/0/tarantool/tarantool/test/app-tap/http_client.test.lua'
[034] filename: /builds/M4RrgQZ3/0/tarantool/tarantool/test/app-tap/http_client.test.lua
[034] what: main
[034] namewhat:
[034] src: '.../0/tarantool/tarantool/test/app-tap/http_client.test.lua'
[034] line: 0
[034] expected: 200
[034] got: 595
[034] ...
[034]
[034] Last 15 lines of Tarantool Log file [Instance "app_server"][/builds/M4RrgQZ3/0/tarantool/tarantool/test/var/034_app-tap/http_client.test.lua.tarantool.log]:
[034] Traceback (most recent call last):
[034] File "/builds/M4RrgQZ3/0/tarantool/tarantool/test/app-tap/httpd.py", line 141, in <module>
[034] sock.bind(sock_addr)
[034] File "<string>", line 1, in bind
[034] socket.error: [Errno 98] Address already in use
```
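The `Errno 98` at the end of the log is the classic flaky-port pattern: the test server binds a fixed port that a previous run or a parallel worker still holds. A common fix — a hedged sketch, not the actual httpd.py patch — is to bind to port 0 so the kernel assigns a free ephemeral port, with `SO_REUSEADDR` so a socket lingering in TIME_WAIT does not block the bind:

```python
# Sketch of a common fix for "Address already in use" in test servers
# (illustrative; not the actual httpd.py change).
import socket

def bind_test_server(host="127.0.0.1"):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Tolerate a previous run's socket still sitting in TIME_WAIT.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind((host, 0))          # port 0 -> kernel picks a free port
    sock.listen(1)
    port = sock.getsockname()[1]  # report the chosen port back to the test
    return sock, port
```

The test harness then reads the chosen port from the server instead of hard-coding one.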
Steps to reproduce:
Optional (but very desirable):
* coredump
* backtrace
* netstat
| 1.0 | test: flaky app-tap/http_client.test.lua test - Tarantool version:
Tarantool 2.6.0-114-g6c04687566
Target: Linux-x86_64-RelWithDebInfo
Build options: cmake . -DCMAKE_INSTALL_PREFIX=/builds/M4RrgQZ3/0/tarantool/tarantool/static-build/tarantool-prefix -DENABLE_BACKTRACE=TRUE
Compiler: /usr/bin/cc /usr/bin/c++
C_FLAGS: -static-libstdc++ -fexceptions -funwind-tables -fno-omit-frame-pointer -fno-stack-protector -fno-common -fopenmp -msse2 -std=c11 -Wall -Wextra -Wno-strict-aliasing -Wno-char-subscripts -Wno-format-truncation -Wno-gnu-alignof-expression -fno-gnu89-inline -Wno-cast-function-type -Werror
CXX_FLAGS: -static-libstdc++ -fexceptions -funwind-tables -fno-omit-frame-pointer -fno-stack-protector -fno-common -fopenmp -msse2 -std=c++11 -Wall -Wextra -Wno-strict-aliasing -Wno-char-subscripts -Wno-format-truncation -Wno-invalid-offsetof -Wno-gnu-alignof-expression -Wno-cast-function-type -Werror
OS version:
CentOS 7
Bug description:
https://gitlab.com/tarantool/tarantool/-/jobs/759207861#L4888
https://gitlab.com/tarantool/tarantool/-/jobs/759236803#L2293
https://gitlab.com/tarantool/tarantool/-/jobs/759476184#L2185
It is not possible to use the results file for checksum creation because it prints a lot of changing information.
```
[034] app-tap/http_client.test.lua [ fail ]
[034] Test failed! Output from reject file app-tap/http_client.reject:
[034] TAP version 13
[034] # TARANTOOL_SRC_DIR=/builds/M4RrgQZ3/0/tarantool/tarantool
[034] 1..2
[034] # http over AF_INET
[034] 1..11
[034] # starting HTTP server on 127.0.0.1:34324...
[034] not ok - server started
[034] ---
[034] filename: /builds/M4RrgQZ3/0/tarantool/tarantool/test/app-tap/http_client.test.lua
[034] line: 0
[034] expected: heartbeat
[034] trace:
[034] - line: 25
[034] source: '@/builds/M4RrgQZ3/0/tarantool/tarantool/test/app-tap/http_client.test.lua'
[034] filename: /builds/M4RrgQZ3/0/tarantool/tarantool/test/app-tap/http_client.test.lua
[034] what: Lua
[034] namewhat: upvalue
[034] name: start_server
[034] src: '.../0/tarantool/tarantool/test/app-tap/http_client.test.lua'
[034] - line: 585
[034] source: '@/builds/M4RrgQZ3/0/tarantool/tarantool/test/app-tap/http_client.test.lua'
[034] filename: /builds/M4RrgQZ3/0/tarantool/tarantool/test/app-tap/http_client.test.lua
[034] what: Lua
[034] namewhat: global
[034] name: run_tests
[034] src: '.../0/tarantool/tarantool/test/app-tap/http_client.test.lua'
[034] - line: 616
[034] source: '@/builds/M4RrgQZ3/0/tarantool/tarantool/test/app-tap/http_client.test.lua'
[034] filename: /builds/M4RrgQZ3/0/tarantool/tarantool/test/app-tap/http_client.test.lua
[034] what: Lua
[034] namewhat: local
[034] name: fun
[034] src: '.../0/tarantool/tarantool/test/app-tap/http_client.test.lua'
[034] - line: 218
[034] source: '@builtin/tap.lua'
[034] filename: builtin/tap.lua
[034] what: Lua
[034] namewhat: method
[034] name: test
[034] src: builtin/tap.lua
[034] - line: 0
[034] source: '@/builds/M4RrgQZ3/0/tarantool/tarantool/test/app-tap/http_client.test.lua'
[034] filename: /builds/M4RrgQZ3/0/tarantool/tarantool/test/app-tap/http_client.test.lua
[034] what: main
[034] namewhat:
[034] src: '.../0/tarantool/tarantool/test/app-tap/http_client.test.lua'
[034] ...
[034] # trying to connect to http://127.0.0.1:34324/
[034] not ok - connection is ok
[034] ---
[034] filename: /builds/M4RrgQZ3/0/tarantool/tarantool/test/app-tap/http_client.test.lua
[034] trace:
[034] - line: 25
[034] source: '@/builds/M4RrgQZ3/0/tarantool/tarantool/test/app-tap/http_client.test.lua'
[034] filename: /builds/M4RrgQZ3/0/tarantool/tarantool/test/app-tap/http_client.test.lua
[034] what: Lua
[034] namewhat: upvalue
[034] name: start_server
[034] src: '.../0/tarantool/tarantool/test/app-tap/http_client.test.lua'
[034] - line: 585
[034] source: '@/builds/M4RrgQZ3/0/tarantool/tarantool/test/app-tap/http_client.test.lua'
[034] filename: /builds/M4RrgQZ3/0/tarantool/tarantool/test/app-tap/http_client.test.lua
[034] what: Lua
[034] namewhat: global
[034] name: run_tests
[034] src: '.../0/tarantool/tarantool/test/app-tap/http_client.test.lua'
[034] - line: 616
[034] source: '@/builds/M4RrgQZ3/0/tarantool/tarantool/test/app-tap/http_client.test.lua'
[034] filename: /builds/M4RrgQZ3/0/tarantool/tarantool/test/app-tap/http_client.test.lua
[034] what: Lua
[034] namewhat: local
[034] name: fun
[034] src: '.../0/tarantool/tarantool/test/app-tap/http_client.test.lua'
[034] - line: 218
[034] source: '@builtin/tap.lua'
[034] filename: builtin/tap.lua
[034] what: Lua
[034] namewhat: method
[034] name: test
[034] src: builtin/tap.lua
[034] - line: 0
[034] source: '@/builds/M4RrgQZ3/0/tarantool/tarantool/test/app-tap/http_client.test.lua'
[034] filename: /builds/M4RrgQZ3/0/tarantool/tarantool/test/app-tap/http_client.test.lua
[034] what: main
[034] namewhat:
[034] src: '.../0/tarantool/tarantool/test/app-tap/http_client.test.lua'
[034] line: 0
[034] expected: 200
[034] got: 595
[034] ...
[034]
[034] Last 15 lines of Tarantool Log file [Instance "app_server"][/builds/M4RrgQZ3/0/tarantool/tarantool/test/var/034_app-tap/http_client.test.lua.tarantool.log]:
[034] Traceback (most recent call last):
[034] File "/builds/M4RrgQZ3/0/tarantool/tarantool/test/app-tap/httpd.py", line 141, in <module>
[034] sock.bind(sock_addr)
[034] File "<string>", line 1, in bind
[034] socket.error: [Errno 98] Address already in use
```
Steps to reproduce:
Optional (but very desirable):
* coredump
* backtrace
* netstat
| non_main | test flaky app tap http client test lua test tarantool version tarantool target linux relwithdebinfo build options cmake dcmake install prefix builds tarantool tarantool static build tarantool prefix denable backtrace true compiler usr bin cc usr bin c c flags static libstdc fexceptions funwind tables fno omit frame pointer fno stack protector fno common fopenmp std wall wextra wno strict aliasing wno char subscripts wno format truncation wno gnu alignof expression fno inline wno cast function type werror cxx flags static libstdc fexceptions funwind tables fno omit frame pointer fno stack protector fno common fopenmp std c wall wextra wno strict aliasing wno char subscripts wno format truncation wno invalid offsetof wno gnu alignof expression wno cast function type werror os version centos bug description not possible to use results file for checksum creation due to a lot of changing information printing app tap http client test lua test failed output from reject file app tap http client reject tap version tarantool src dir builds tarantool tarantool http over af inet starting http server on not ok server started filename builds tarantool tarantool test app tap http client test lua line expected heartbeat trace line source builds tarantool tarantool test app tap http client test lua filename builds tarantool tarantool test app tap http client test lua what lua namewhat upvalue name start server src tarantool tarantool test app tap http client test lua line source builds tarantool tarantool test app tap http client test lua filename builds tarantool tarantool test app tap http client test lua what lua namewhat global name run tests src tarantool tarantool test app tap http client test lua line source builds tarantool tarantool test app tap http client test lua filename builds tarantool tarantool test app tap http client test lua what lua namewhat local name fun src tarantool tarantool test app tap http client test lua line source builtin tap lua 
filename builtin tap lua what lua namewhat method name test src builtin tap lua line source builds tarantool tarantool test app tap http client test lua filename builds tarantool tarantool test app tap http client test lua what main namewhat src tarantool tarantool test app tap http client test lua trying to connect to not ok connection is ok filename builds tarantool tarantool test app tap http client test lua trace line source builds tarantool tarantool test app tap http client test lua filename builds tarantool tarantool test app tap http client test lua what lua namewhat upvalue name start server src tarantool tarantool test app tap http client test lua line source builds tarantool tarantool test app tap http client test lua filename builds tarantool tarantool test app tap http client test lua what lua namewhat global name run tests src tarantool tarantool test app tap http client test lua line source builds tarantool tarantool test app tap http client test lua filename builds tarantool tarantool test app tap http client test lua what lua namewhat local name fun src tarantool tarantool test app tap http client test lua line source builtin tap lua filename builtin tap lua what lua namewhat method name test src builtin tap lua line source builds tarantool tarantool test app tap http client test lua filename builds tarantool tarantool test app tap http client test lua what main namewhat src tarantool tarantool test app tap http client test lua line expected got last lines of tarantool log file traceback most recent call last file builds tarantool tarantool test app tap httpd py line in sock bind sock addr file line in bind socket error address already in use steps to reproduce optional but very desirable coredump backtrace netstat | 0 |
190,613 | 15,252,284,077 | IssuesEvent | 2021-02-20 02:15:56 | machineagency/jubilee | https://api.github.com/repos/machineagency/jubilee | opened | XY Frame Assembly instructions PDF corrections | assembly_instructions documentation | XY Frame Assembly instructions PDF:
Sheet 13 of 18, the written instructions cite different item numbers than the legend:
<img width="831" alt="Screen Shot 2021-02-19 at 6 06 41 PM" src="https://user-images.githubusercontent.com/49258142/108579840-bf6f1880-72dd-11eb-8006-669535435913.png">
Sheet 17 of 18, the legend calls for 2300mm belt lengths, which is excessively long. I trimmed mine to 2155mm to start with:
<img width="827" alt="Screen Shot 2021-02-19 at 6 11 31 PM" src="https://user-images.githubusercontent.com/49258142/108579984-54721180-72de-11eb-8bb5-0a31ede0a026.png">
| 1.0 | XY Frame Assembly instructions PDF corrections - XY Frame Assembly instructions PDF:
Sheet 13 of 18, the written instructions cite different item numbers than the legend:
<img width="831" alt="Screen Shot 2021-02-19 at 6 06 41 PM" src="https://user-images.githubusercontent.com/49258142/108579840-bf6f1880-72dd-11eb-8006-669535435913.png">
Sheet 17 of 18, the legend calls for 2300mm belt lengths, which is excessively long. I trimmed mine to 2155mm to start with:
<img width="827" alt="Screen Shot 2021-02-19 at 6 11 31 PM" src="https://user-images.githubusercontent.com/49258142/108579984-54721180-72de-11eb-8bb5-0a31ede0a026.png">
| non_main | xy frame assembly instructions pdf corrections xy frame assembly instructions pdf sheet of the written instructions cite different item numbers than the legend img width alt screen shot at pm src sheet of the legend calls for belt lengths which is excessively long i trimmed mine to to start with img width alt screen shot at pm src | 0 |
36,137 | 17,467,866,825 | IssuesEvent | 2021-08-06 19:48:27 | flutter/flutter | https://api.github.com/repos/flutter/flutter | opened | Consider allowing raster cache entries to survive 1+ frames without usage | engine severe: performance P4 | Currently the raster cache clears all entries that have not been used at the end of the frame. In the case of SVGs, this can lead to repeated rendering jank in scenarios like a scrolling list or tabbar view where the same picture (pending flutter_svg fix) is continually re-rasterized as a user interacts with the application.
For especially complex pictures (https://github.com/flutter/flutter/issues/87826), we should support some method of keeping the cache entry alive past one frame. Some ideas:
* We could tune a threshold for lack of access. Probably flaky, and it risks optimizing for benchmarks.
* We could tie the entry to the lifetime of the engine Picture object. If the picture isn't disposed, then the framework must be keeping it alive intentionally
* We could provide a new API that returned some sort of raster handle that the framework could manage | True | Consider allowing raster cache entries to survive 1+ frames without usage - Currently the raster cache clears all entries that have not been used at the end of the frame. In the case of SVGs, this can lead to repeated rendering jank in scenarios like a scrolling list or tabbar view where the same picture (pending flutter_svg fix) is continually re-rasterized as a user interacts with the application.
For especially complex pictures (https://github.com/flutter/flutter/issues/87826), we should support some method of keeping the cache entry alive past one frame. Some ideas:
* We could tune a threshold for lack of access. Probably flaky, and it risks optimizing for benchmarks.
* We could tie the entry to the lifetime of the engine Picture object. If the picture isn't disposed, then the framework must be keeping it alive intentionally
* We could provide a new API that returned some sort of raster handle that the framework could manage | non_main | consider allowing raster cache entries to survive frames without usage currently the raster cache clears all entries that have not been used at the end of the frame in the case of svgs this can lead to repeated rendering jank in scenarios like a scrolling list or tabbar view where the same picture pending flutter svg fix is continually re rasterized as a user interacts with the application for especially complex pictures we should support some method of keeping the cache entry alive past one frame some ideas we could tune a threshold for lack of access probably flaky risk optimizing for benchmarks we could tie the entry to the lifetime of the engine picture object if the picture isn t disposed then the framework must be keeping it alive intentionally we could provide a new api that returned some sort of raster handle that the framework could manage | 0 |
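The first idea above — a tuned threshold for lack of access — can be modeled as a cache that evicts an entry only after it has gone unused for N consecutive frames. This is a hypothetical Python model of the policy, not Flutter's actual C++ raster cache:

```python
# Hypothetical model of the "threshold for lack of access" idea: entries
# survive up to `grace_frames` frames without being touched before they
# are evicted. Illustration only, not Flutter's raster cache.

class GraceRasterCache:
    def __init__(self, grace_frames=2):
        self.grace = grace_frames
        self.entries = {}  # key -> frames since last access

    def get(self, key):
        """Return True on a cache hit (no re-rasterization needed)."""
        if key in self.entries:
            self.entries[key] = 0  # touched this frame
            return True
        self.entries[key] = 0      # rasterize and cache
        return False

    def end_frame(self):
        # Age every entry; drop the ones idle longer than the grace period.
        for key in list(self.entries):
            self.entries[key] += 1
            if self.entries[key] > self.grace:
                del self.entries[key]
```

With `grace_frames=0` this degenerates to the current behavior (anything unused for one frame is dropped); a small positive grace period lets the same SVG survive a scroll fling without repeated re-rasterization.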
5,601 | 28,048,246,883 | IssuesEvent | 2023-03-29 02:02:59 | cncf/glossary | https://api.github.com/repos/cncf/glossary | closed | Need to rebase all dev-xx (branch for l10n) with main | lang/en lang/pt lang/it lang/hi lang/ko lang/de lang/ar lang/bn lang/es maintainers lang/zh triage/accepted lang/fr lang/ur | Hello, this issue is from the maintainers.
Since there are several major changes that affect the development branches for localization (Tags for terms, Deprecated terms, Labeler workflow), all localization branches (dev-xx) need to be updated.
This is an action item for all localization teams (especially for the members who are managing the development branch itself).
Let me check that all localization branches are updated.
- [x] dev-ko
- [x] dev-bn
- [x] dev-hi
- [x] dev-es
- [x] dev-fr
- [x] dev-pt
- [x] dev-zh
- [x] dev-it
- [x] dev-de
- [x] dev-ur
- [x] dev-ar
Please note that there is a target date to complete this issue. After the target date, some PRs to a dev-xx branch that has not been updated may fail with `Some checks were not successful`.
Target date: Feb. 12 | True | Need to rebase all dev-xx (branch for l10n) with main - Hello, this issue is from the maintainers.
Since there are several major changes that affect the development branches for localization (Tags for terms, Deprecated terms, Labeler workflow), all localization branches (dev-xx) need to be updated.
This is an action item for all localization teams (especially for the members who are managing the development branch itself).
Let me check that all localization branches are updated.
- [x] dev-ko
- [x] dev-bn
- [x] dev-hi
- [x] dev-es
- [x] dev-fr
- [x] dev-pt
- [x] dev-zh
- [x] dev-it
- [x] dev-de
- [x] dev-ur
- [x] dev-ar
Please note that there is a target date to complete this issue. After the target date, some PRs to a dev-xx branch that has not been updated may fail with `Some checks were not successful`.
Target date: Feb. 12 | main | need to rebase all dev xx branch for with main hello this issue is from the maintainers since there are several major changes that affect to development branches for localization tags for terms deprecated terms labeler workflow all localization branches dev xx needs to be updated this is an action item to all localization teams especially for the members who are managing development branch itself let me check all localization branches are updated dev ko dev bn dev hi dev es dev fr dev pt dev zh dev it dev de dev ur dev ar please note that there is a target date to complete this issue after the target date some prs to a dev xx branch which is not updated can face some checks were not successful target date feb | 1 |
169,672 | 6,414,214,151 | IssuesEvent | 2017-08-08 09:36:37 | spring-projects/spring-boot | https://api.github.com/repos/spring-projects/spring-boot | closed | Update the Flyway web endpoint to return a Map and include all of each migration's properties | priority: normal theme: actuator | The following properties are missing:
- `installedBy`
- `installedRank` | 1.0 | Update the Flyway web endpoint to return a Map and include all of each migration's properties - The following properties are missing:
- `installedBy`
- `installedRank` | non_main | update the flyway web endpoint to return a map and include all of each migration s properties the following properties are missing installedby installedrank | 0 |
172,439 | 14,360,817,544 | IssuesEvent | 2020-11-30 17:21:41 | 2Abendsegler/GClh | https://api.github.com/repos/2Abendsegler/GClh | closed | [New Map] Discussion | documentation | The time has come: the new map is now available to everyone, and I expect that in a few months it will also replace the current view:
https://forums.geocaching.com/GC/index.php?/topic/351038-release-notes-website-new-searchmap-%EF%BB%BF-january-17-2019/
So we now need to think about which features we want to implement in the new map, and above all how. I am in favor of opening an issue for each feature, so that several people can work on it at the same time. To make things easier to find, I suggest the following title as an example for the issues:
"[New Map] Show Owner Name in Cache Details"
I think it is best if everyone opens their ideas as issues. Then we can discuss each topic in its own issue. | 1.0 | [New Map] Discussion - The time has come: the new map is now available to everyone, and I expect that in a few months it will also replace the current view:
https://forums.geocaching.com/GC/index.php?/topic/351038-release-notes-website-new-searchmap-%EF%BB%BF-january-17-2019/
Wir müssen uns nun also Gedanken machen, welche Features wir in der neue Map umsetzen wollen, und vor allem wie. Ich bin dafür, für jedes Feature eine Issue aufzumachen, so dass auch mehrere Leute gleichzeitig an der Sache arbeiten können. Damit man die Sachen leichter findet schlage ich folgenden Titel als Beispiel für die Issues vor:
"[New Map] Show Owner Name in Cache Details"
Ich denke es ist das beste, wenn jeder seine Ideen als Issue auf macht. Dann können wir in den jeweiligen Issues themenspezifisch diskutieren. | non_main | discussion es ist soweit die neue map ist nun für alle verfügbar und ich gehe davon aus dass sie in ein paar monaten auch die aktuelle ansicht ablöst wir müssen uns nun also gedanken machen welche features wir in der neue map umsetzen wollen und vor allem wie ich bin dafür für jedes feature eine issue aufzumachen so dass auch mehrere leute gleichzeitig an der sache arbeiten können damit man die sachen leichter findet schlage ich folgenden titel als beispiel für die issues vor show owner name in cache details ich denke es ist das beste wenn jeder seine ideen als issue auf macht dann können wir in den jeweiligen issues themenspezifisch diskutieren | 0 |
134,010 | 12,559,722,309 | IssuesEvent | 2020-06-07 19:50:10 | andrealexandre/dark-souls-dead-counter | https://api.github.com/repos/andrealexandre/dark-souls-dead-counter | opened | Investigate forked code and rewrite in C/C++ | documentation | Investigate forked code and learn from forked code.
The current code is written in C#, we want to rewrite this code C/C++ (possibly C to avoid C++ complexity).
Produce any documentation necessary. | 1.0 | Investigate forked code and rewrite in C/C++ - Investigate forked code and learn from forked code.
The current code is written in C#, we want to rewrite this code C/C++ (possibly C to avoid C++ complexity).
Produce any documentation necessary. | non_main | investigate forked code and rewrite in c c investigate forked code and learn from forked code the current code is written in c we want to rewrite this code c c possibly c to avoid c complexity produce any documentation necessary | 0 |
8,540 | 2,611,517,053 | IssuesEvent | 2015-02-27 05:51:53 | chrsmith/hedgewars | https://api.github.com/repos/chrsmith/hedgewars | closed | connection hanging after entering user and password | auto-migrated Component-QtFrontend Milestone-NextRelease Priority-Medium Type-Defect | ```
Some strange behaviour, on a clean config, when you connect to the server you
have to enter your nick and pwd, but then the frontend indefinitely hangs...
Here is the log I saved
Server: ("CONNECTED", "Hedgewars server http://www.hedgewars.org/", "1")
Client: ("NICK", "")
Client: ("PROTO", "44")
Server: ("ERROR", "Incorrect command (state: not entered)")
Server: ("ERROR", "Incorrect command (state: not entered)")
Then you click back and trigger bug 570
The next time you try connecting all goes smooth
Server: ("CONNECTED", "Hedgewars server http://www.hedgewars.org/", "1")
Client: ("NICK", "koda")
Client: ("PROTO", "44")
Server: ("NICK", "koda")
Server: ("PROTO", "44")
Server: ("ASKPASSWORD")
Client: ("PASSWORD", "<trimmed>")
Server: ("LOBBY:JOINED", "koda", and other peeps...)
```
Original issue reported on code.google.com by `vittorio...@gmail.com` on 27 Mar 2013 at 10:42
* Blocking: #580 | 1.0 | connection hanging after entering user and password - ```
Some strange behaviour, on a clean config, when you connect to the server you
have to enter your nick and pwd, but then the frontend indefinitely hangs...
Here is the log I saved
Server: ("CONNECTED", "Hedgewars server http://www.hedgewars.org/", "1")
Client: ("NICK", "")
Client: ("PROTO", "44")
Server: ("ERROR", "Incorrect command (state: not entered)")
Server: ("ERROR", "Incorrect command (state: not entered)")
Then you click back and trigger bug 570
The next time you try connecting all goes smooth
Server: ("CONNECTED", "Hedgewars server http://www.hedgewars.org/", "1")
Client: ("NICK", "koda")
Client: ("PROTO", "44")
Server: ("NICK", "koda")
Server: ("PROTO", "44")
Server: ("ASKPASSWORD")
Client: ("PASSWORD", "<trimmed>")
Server: ("LOBBY:JOINED", "koda", and other peeps...)
```
Original issue reported on code.google.com by `vittorio...@gmail.com` on 27 Mar 2013 at 10:42
* Blocking: #580 | non_main | connection hanging after entering user and password some strange behaviour on a clean config when you connect to the server you have to enter your nick and pwd but then the frontend indefinitely hangs here is the log i saved server connected hedgewars server client nick client proto server error incorrect command state not entered server error incorrect command state not entered then you click back and trigger bug the next time you try connecting all goes smooth server connected hedgewars server client nick koda client proto server nick koda server proto server askpassword client password server lobby joined koda and other peeps original issue reported on code google com by vittorio gmail com on mar at blocking | 0 |
1,928 | 6,599,830,367 | IssuesEvent | 2017-09-17 02:01:33 | caskroom/homebrew-cask | https://api.github.com/repos/caskroom/homebrew-cask | closed | android-sdk install fails: `staged_path.to_s` | awaiting maintainer feedback | #### General troubleshooting steps
- [X] I have checked the instructions for [reporting bugs](https://github.com/caskroom/homebrew-cask#reporting-bugs) (or [making requests](https://github.com/caskroom/homebrew-cask#requests)) before opening the issue.
- [x] None of the templates was appropriate for my issue, or I’m not sure.
- [X] I ran `brew update-reset && brew update` and retried my command.
- [X] I ran `brew doctor`, fixed as many issues as possible and retried my command.
- [X] I understand that [if I ignore these instructions, my issue may be closed without review](https://github.com/caskroom/homebrew-cask/blob/master/doc/faq/closing_issues_without_review.md).
#### Description of issue
I attempted to install `android-sdk` and had some transient wifi problems. I stopped the install process, restarted my network, and continued the install, which appeared to complete successfully. I am now in a position where the SDK appears to be installed but `/usr/local/share/android-sdk` does not exist. I can't uninstall or reinstall because the uninstall process fails repeatedly.
#### Output of your command with `--verbose --debug`
```
==> Uninstalling Cask android-sdk
==> Uninstalling Cask android-sdk
==> Un-installing artifacts
==> Determining which artifacts are present in Cask android-sdk
==> 47 artifact/s defined
Error: undefined local variable or method `summarize' for #<Hbc::Artifact::PreflightBlock:0x007ff054219160>
Follow the instructions here:
https://github.com/caskroom/homebrew-cask#reporting-bugs
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/artifact/abstract_artifact.rb:67:in `to_s'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/utils.rb:23:in `puts'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/utils.rb:23:in `puts'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/utils.rb:23:in `odebug'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/installer.rb:379:in `uninstall_artifacts'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/installer.rb:370:in `uninstall'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli/uninstall.rb:22:in `block in run'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli/uninstall.rb:12:in `each'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli/uninstall.rb:12:in `run'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli/abstract_command.rb:35:in `run'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:97:in `run_command'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:167:in `run'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:131:in `run'
/usr/local/Homebrew/Library/Homebrew/cmd/cask.rb:8:in `cask'
/usr/local/Homebrew/Library/Homebrew/brew.rb:95:in `<main>'
Error: Kernel.exit
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:178:in `exit'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:178:in `rescue in run'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:155:in `run'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:131:in `run'
/usr/local/Homebrew/Library/Homebrew/cmd/cask.rb:8:in `cask'
/usr/local/Homebrew/Library/Homebrew/brew.rb:95:in `<main>'
```
#### Output of `brew cask doctor`
```
==> Homebrew-Cask Version
Homebrew-Cask 1.3.2-60-gaeab091
caskroom/homebrew-cask (git revision d45ec; last commit 2017-09-15)
==> Homebrew-Cask Install Location
<NONE>
==> Homebrew-Cask Staging Location
/usr/local/Caskroom
==> Homebrew-Cask Cached Downloads
~/Library/Caches/Homebrew/Cask (1 files, 49.3MB)
==> Homebrew-Cask Taps:
/usr/local/Homebrew/Library/Taps/caskroom/homebrew-cask (3731 casks)
/usr/local/Homebrew/Library/Taps/caskroom/homebrew-versions (166 casks)
==> Contents of $LOAD_PATH
/usr/local/Homebrew/Library/Homebrew/cask/lib
/usr/local/Homebrew/Library/Homebrew
/Library/Ruby/Site/2.0.0
/Library/Ruby/Site/2.0.0/x86_64-darwin16
/Library/Ruby/Site/2.0.0/universal-darwin16
/Library/Ruby/Site
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/vendor_ruby/2.0.0
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/vendor_ruby/2.0.0/x86_64-darwin16
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/vendor_ruby/2.0.0/universal-darwin16
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/vendor_ruby
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/x86_64-darwin16
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/universal-darwin16
==> Environment Variables
LANG="en_US.UTF-8"
PATH="~/.sdkman/candidates/gradle/current/bin:~/bin:/usr/local/opt/mysql51/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/opt/X11/bin:/usr/local/Homebrew/Library/Taps/homebrew/homebrew-services/cmd:/usr/local/Homebrew/Library/Homebrew/shims/scm"
SHELL="/usr/local/bin/fish"
```
| True | android-sdk install fails: `staged_path.to_s` - #### General troubleshooting steps
- [X] I have checked the instructions for [reporting bugs](https://github.com/caskroom/homebrew-cask#reporting-bugs) (or [making requests](https://github.com/caskroom/homebrew-cask#requests)) before opening the issue.
- [x] None of the templates was appropriate for my issue, or I’m not sure.
- [X] I ran `brew update-reset && brew update` and retried my command.
- [X] I ran `brew doctor`, fixed as many issues as possible and retried my command.
- [X] I understand that [if I ignore these instructions, my issue may be closed without review](https://github.com/caskroom/homebrew-cask/blob/master/doc/faq/closing_issues_without_review.md).
#### Description of issue
I attempted to install `android-sdk` and had some transient wifi problems. I stopped the install process, restarted my network, and continued the install, which appeared to complete successfully. I am now in a position where the SDK appears to be installed but `/usr/local/share/android-sdk` does not exist. I can't uninstall or reinstall because the uninstall process fails repeatedly.
#### Output of your command with `--verbose --debug`
```
==> Uninstalling Cask android-sdk
==> Uninstalling Cask android-sdk
==> Un-installing artifacts
==> Determining which artifacts are present in Cask android-sdk
==> 47 artifact/s defined
Error: undefined local variable or method `summarize' for #<Hbc::Artifact::PreflightBlock:0x007ff054219160>
Follow the instructions here:
https://github.com/caskroom/homebrew-cask#reporting-bugs
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/artifact/abstract_artifact.rb:67:in `to_s'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/utils.rb:23:in `puts'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/utils.rb:23:in `puts'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/utils.rb:23:in `odebug'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/installer.rb:379:in `uninstall_artifacts'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/installer.rb:370:in `uninstall'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli/uninstall.rb:22:in `block in run'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli/uninstall.rb:12:in `each'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli/uninstall.rb:12:in `run'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli/abstract_command.rb:35:in `run'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:97:in `run_command'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:167:in `run'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:131:in `run'
/usr/local/Homebrew/Library/Homebrew/cmd/cask.rb:8:in `cask'
/usr/local/Homebrew/Library/Homebrew/brew.rb:95:in `<main>'
Error: Kernel.exit
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:178:in `exit'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:178:in `rescue in run'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:155:in `run'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:131:in `run'
/usr/local/Homebrew/Library/Homebrew/cmd/cask.rb:8:in `cask'
/usr/local/Homebrew/Library/Homebrew/brew.rb:95:in `<main>'
```
#### Output of `brew cask doctor`
```
==> Homebrew-Cask Version
Homebrew-Cask 1.3.2-60-gaeab091
caskroom/homebrew-cask (git revision d45ec; last commit 2017-09-15)
==> Homebrew-Cask Install Location
<NONE>
==> Homebrew-Cask Staging Location
/usr/local/Caskroom
==> Homebrew-Cask Cached Downloads
~/Library/Caches/Homebrew/Cask (1 files, 49.3MB)
==> Homebrew-Cask Taps:
/usr/local/Homebrew/Library/Taps/caskroom/homebrew-cask (3731 casks)
/usr/local/Homebrew/Library/Taps/caskroom/homebrew-versions (166 casks)
==> Contents of $LOAD_PATH
/usr/local/Homebrew/Library/Homebrew/cask/lib
/usr/local/Homebrew/Library/Homebrew
/Library/Ruby/Site/2.0.0
/Library/Ruby/Site/2.0.0/x86_64-darwin16
/Library/Ruby/Site/2.0.0/universal-darwin16
/Library/Ruby/Site
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/vendor_ruby/2.0.0
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/vendor_ruby/2.0.0/x86_64-darwin16
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/vendor_ruby/2.0.0/universal-darwin16
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/vendor_ruby
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/x86_64-darwin16
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/universal-darwin16
==> Environment Variables
LANG="en_US.UTF-8"
PATH="~/.sdkman/candidates/gradle/current/bin:~/bin:/usr/local/opt/mysql51/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/opt/X11/bin:/usr/local/Homebrew/Library/Taps/homebrew/homebrew-services/cmd:/usr/local/Homebrew/Library/Homebrew/shims/scm"
SHELL="/usr/local/bin/fish"
```
| main | android sdk install fails staged path to s general troubleshooting steps i have checked the instructions for or before opening the issue none of the templates was appropriate for my issue or i’m not sure i ran brew update reset brew update and retried my command i ran brew doctor fixed as many issues as possible and retried my command i understand that description of issue i attempted to install android sdk and had some transient wifi problems i stopped the install process restarted my network and continued the install which appeared to complete successfully i am now in a position where the sdk appears to be installed but usr local share android sdk does not exist i can t uninstall or reinstall because the uninstall process fails repeatedly output of your command with verbose debug uninstalling cask android sdk uninstalling cask android sdk un installing artifacts determining which artifacts are present in cask android sdk artifact s defined error undefined local variable or method summarize for follow the instructions here usr local homebrew library homebrew cask lib hbc artifact abstract artifact rb in to s usr local homebrew library homebrew cask lib hbc utils rb in puts usr local homebrew library homebrew cask lib hbc utils rb in puts usr local homebrew library homebrew cask lib hbc utils rb in odebug usr local homebrew library homebrew cask lib hbc installer rb in uninstall artifacts usr local homebrew library homebrew cask lib hbc installer rb in uninstall usr local homebrew library homebrew cask lib hbc cli uninstall rb in block in run usr local homebrew library homebrew cask lib hbc cli uninstall rb in each usr local homebrew library homebrew cask lib hbc cli uninstall rb in run usr local homebrew library homebrew cask lib hbc cli abstract command rb in run usr local homebrew library homebrew cask lib hbc cli rb in run command usr local homebrew library homebrew cask lib hbc cli rb in run usr local homebrew library homebrew cask lib hbc cli rb in 
run usr local homebrew library homebrew cmd cask rb in cask usr local homebrew library homebrew brew rb in error kernel exit usr local homebrew library homebrew cask lib hbc cli rb in exit usr local homebrew library homebrew cask lib hbc cli rb in rescue in run usr local homebrew library homebrew cask lib hbc cli rb in run usr local homebrew library homebrew cask lib hbc cli rb in run usr local homebrew library homebrew cmd cask rb in cask usr local homebrew library homebrew brew rb in output of brew cask doctor homebrew cask version homebrew cask caskroom homebrew cask git revision last commit homebrew cask install location homebrew cask staging location usr local caskroom homebrew cask cached downloads library caches homebrew cask files homebrew cask taps usr local homebrew library taps caskroom homebrew cask casks usr local homebrew library taps caskroom homebrew versions casks contents of load path usr local homebrew library homebrew cask lib usr local homebrew library homebrew library ruby site library ruby site library ruby site universal library ruby site system library frameworks ruby framework versions usr lib ruby vendor ruby system library frameworks ruby framework versions usr lib ruby vendor ruby system library frameworks ruby framework versions usr lib ruby vendor ruby universal system library frameworks ruby framework versions usr lib ruby vendor ruby system library frameworks ruby framework versions usr lib ruby system library frameworks ruby framework versions usr lib ruby system library frameworks ruby framework versions usr lib ruby universal environment variables lang en us utf path sdkman candidates gradle current bin bin usr local opt bin usr local bin usr bin bin usr sbin sbin opt bin usr local homebrew library taps homebrew homebrew services cmd usr local homebrew library homebrew shims scm shell usr local bin fish | 1 |
175,759 | 21,329,768,359 | IssuesEvent | 2022-04-18 06:36:37 | LaudateCorpus1/JQuery-Mobile | https://api.github.com/repos/LaudateCorpus1/JQuery-Mobile | opened | CVE-2021-43138 (High) detected in multiple libraries | security vulnerability | ## CVE-2021-43138 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>async-1.0.0.tgz</b>, <b>async-3.2.0.tgz</b>, <b>async-2.6.3.tgz</b>, <b>async-0.9.2.tgz</b>, <b>async-0.2.10.tgz</b>, <b>async-2.5.0.tgz</b>, <b>async-2.0.1.tgz</b></p></summary>
<p>
<details><summary><b>async-1.0.0.tgz</b></p></summary>
<p>Higher-order functions and common patterns for asynchronous code</p>
<p>Library home page: <a href="https://registry.npmjs.org/async/-/async-1.0.0.tgz">https://registry.npmjs.org/async/-/async-1.0.0.tgz</a></p>
<p>Path to dependency file: /Application/package.json</p>
<p>Path to vulnerable library: /Application/node_modules/winston/node_modules/async/package.json</p>
<p>
Dependency Hierarchy:
- sails-1.5.2.tgz (Root Library)
- prompt-1.2.1.tgz
- winston-2.4.5.tgz
- :x: **async-1.0.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>async-3.2.0.tgz</b></p></summary>
<p>Higher-order functions and common patterns for asynchronous code</p>
<p>Library home page: <a href="https://registry.npmjs.org/async/-/async-3.2.0.tgz">https://registry.npmjs.org/async/-/async-3.2.0.tgz</a></p>
<p>Path to dependency file: /Application/package.json</p>
<p>Path to vulnerable library: /Application/node_modules/grunt-legacy-util/node_modules/async/package.json,/Application/node_modules/grunt-contrib-less/node_modules/async/package.json</p>
<p>
Dependency Hierarchy:
- grunt-contrib-less-3.0.0.tgz (Root Library)
- :x: **async-3.2.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>async-2.6.3.tgz</b></p></summary>
<p>Higher-order functions and common patterns for asynchronous code</p>
<p>Library home page: <a href="https://registry.npmjs.org/async/-/async-2.6.3.tgz">https://registry.npmjs.org/async/-/async-2.6.3.tgz</a></p>
<p>Path to dependency file: /Application/package.json</p>
<p>Path to vulnerable library: /Application/node_modules/grunt-contrib-watch/node_modules/async/package.json,/Application/node_modules/grunt-contrib-clean/node_modules/async/package.json</p>
<p>
Dependency Hierarchy:
- grunt-contrib-watch-1.1.0.tgz (Root Library)
- :x: **async-2.6.3.tgz** (Vulnerable Library)
</details>
<details><summary><b>async-0.9.2.tgz</b></p></summary>
<p>Higher-order functions and common patterns for asynchronous code</p>
<p>Library home page: <a href="https://registry.npmjs.org/async/-/async-0.9.2.tgz">https://registry.npmjs.org/async/-/async-0.9.2.tgz</a></p>
<p>Path to dependency file: /Application/package.json</p>
<p>Path to vulnerable library: /Application/node_modules/jake/node_modules/async/package.json,/Application/node_modules/prompt/node_modules/async/package.json</p>
<p>
Dependency Hierarchy:
- ejs-3.1.6.tgz (Root Library)
- jake-10.8.2.tgz
- :x: **async-0.9.2.tgz** (Vulnerable Library)
</details>
<details><summary><b>async-0.2.10.tgz</b></p></summary>
<p>Higher-order functions and common patterns for asynchronous code</p>
<p>Library home page: <a href="https://registry.npmjs.org/async/-/async-0.2.10.tgz">https://registry.npmjs.org/async/-/async-0.2.10.tgz</a></p>
<p>Path to dependency file: /Application/package.json</p>
<p>Path to vulnerable library: /Application/node_modules/@sailshq/nedb/node_modules/async/package.json</p>
<p>
Dependency Hierarchy:
- sails-disk-2.1.1.tgz (Root Library)
- nedb-1.8.1.tgz
- :x: **async-0.2.10.tgz** (Vulnerable Library)
</details>
<details><summary><b>async-2.5.0.tgz</b></p></summary>
<p>Higher-order functions and common patterns for asynchronous code</p>
<p>Library home page: <a href="https://registry.npmjs.org/async/-/async-2.5.0.tgz">https://registry.npmjs.org/async/-/async-2.5.0.tgz</a></p>
<p>Path to dependency file: /Application/package.json</p>
<p>Path to vulnerable library: /Application/node_modules/sails/node_modules/async/package.json</p>
<p>
Dependency Hierarchy:
- sails-1.5.2.tgz (Root Library)
- :x: **async-2.5.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>async-2.0.1.tgz</b></p></summary>
<p>Higher-order functions and common patterns for asynchronous code</p>
<p>Library home page: <a href="https://registry.npmjs.org/async/-/async-2.0.1.tgz">https://registry.npmjs.org/async/-/async-2.0.1.tgz</a></p>
<p>Path to dependency file: /Application/package.json</p>
<p>Path to vulnerable library: /Application/node_modules/async/package.json</p>
<p>
Dependency Hierarchy:
- sails-disk-2.1.1.tgz (Root Library)
- :x: **async-2.0.1.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/LaudateCorpus1/JQuery-Mobile/commit/48591fdaa1c416f0fbbfb879e119fee8ac7e396a">48591fdaa1c416f0fbbfb879e119fee8ac7e396a</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A vulnerability exists in Async through 3.2.1 (fixed in 3.2.2) , which could let a malicious user obtain privileges via the mapValues() method.
<p>Publish Date: 2022-04-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-43138>CVE-2021-43138</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-43138">https://nvd.nist.gov/vuln/detail/CVE-2021-43138</a></p>
<p>Release Date: 2022-04-06</p>
<p>Fix Resolution: async - v3.2.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-43138 (High) detected in multiple libraries - ## CVE-2021-43138 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>async-1.0.0.tgz</b>, <b>async-3.2.0.tgz</b>, <b>async-2.6.3.tgz</b>, <b>async-0.9.2.tgz</b>, <b>async-0.2.10.tgz</b>, <b>async-2.5.0.tgz</b>, <b>async-2.0.1.tgz</b></p></summary>
<p>
<details><summary><b>async-1.0.0.tgz</b></p></summary>
<p>Higher-order functions and common patterns for asynchronous code</p>
<p>Library home page: <a href="https://registry.npmjs.org/async/-/async-1.0.0.tgz">https://registry.npmjs.org/async/-/async-1.0.0.tgz</a></p>
<p>Path to dependency file: /Application/package.json</p>
<p>Path to vulnerable library: /Application/node_modules/winston/node_modules/async/package.json</p>
<p>
Dependency Hierarchy:
- sails-1.5.2.tgz (Root Library)
- prompt-1.2.1.tgz
- winston-2.4.5.tgz
- :x: **async-1.0.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>async-3.2.0.tgz</b></p></summary>
<p>Higher-order functions and common patterns for asynchronous code</p>
<p>Library home page: <a href="https://registry.npmjs.org/async/-/async-3.2.0.tgz">https://registry.npmjs.org/async/-/async-3.2.0.tgz</a></p>
<p>Path to dependency file: /Application/package.json</p>
<p>Path to vulnerable library: /Application/node_modules/grunt-legacy-util/node_modules/async/package.json,/Application/node_modules/grunt-contrib-less/node_modules/async/package.json</p>
<p>
Dependency Hierarchy:
- grunt-contrib-less-3.0.0.tgz (Root Library)
- :x: **async-3.2.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>async-2.6.3.tgz</b></p></summary>
<p>Higher-order functions and common patterns for asynchronous code</p>
<p>Library home page: <a href="https://registry.npmjs.org/async/-/async-2.6.3.tgz">https://registry.npmjs.org/async/-/async-2.6.3.tgz</a></p>
<p>Path to dependency file: /Application/package.json</p>
<p>Path to vulnerable library: /Application/node_modules/grunt-contrib-watch/node_modules/async/package.json,/Application/node_modules/grunt-contrib-clean/node_modules/async/package.json</p>
<p>
Dependency Hierarchy:
- grunt-contrib-watch-1.1.0.tgz (Root Library)
- :x: **async-2.6.3.tgz** (Vulnerable Library)
</details>
<details><summary><b>async-0.9.2.tgz</b></p></summary>
<p>Higher-order functions and common patterns for asynchronous code</p>
<p>Library home page: <a href="https://registry.npmjs.org/async/-/async-0.9.2.tgz">https://registry.npmjs.org/async/-/async-0.9.2.tgz</a></p>
<p>Path to dependency file: /Application/package.json</p>
<p>Path to vulnerable library: /Application/node_modules/jake/node_modules/async/package.json,/Application/node_modules/prompt/node_modules/async/package.json</p>
<p>
Dependency Hierarchy:
- ejs-3.1.6.tgz (Root Library)
- jake-10.8.2.tgz
- :x: **async-0.9.2.tgz** (Vulnerable Library)
</details>
<details><summary><b>async-0.2.10.tgz</b></p></summary>
<p>Higher-order functions and common patterns for asynchronous code</p>
<p>Library home page: <a href="https://registry.npmjs.org/async/-/async-0.2.10.tgz">https://registry.npmjs.org/async/-/async-0.2.10.tgz</a></p>
<p>Path to dependency file: /Application/package.json</p>
<p>Path to vulnerable library: /Application/node_modules/@sailshq/nedb/node_modules/async/package.json</p>
<p>
Dependency Hierarchy:
- sails-disk-2.1.1.tgz (Root Library)
- nedb-1.8.1.tgz
- :x: **async-0.2.10.tgz** (Vulnerable Library)
</details>
<details><summary><b>async-2.5.0.tgz</b></p></summary>
<p>Higher-order functions and common patterns for asynchronous code</p>
<p>Library home page: <a href="https://registry.npmjs.org/async/-/async-2.5.0.tgz">https://registry.npmjs.org/async/-/async-2.5.0.tgz</a></p>
<p>Path to dependency file: /Application/package.json</p>
<p>Path to vulnerable library: /Application/node_modules/sails/node_modules/async/package.json</p>
<p>
Dependency Hierarchy:
- sails-1.5.2.tgz (Root Library)
- :x: **async-2.5.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>async-2.0.1.tgz</b></p></summary>
<p>Higher-order functions and common patterns for asynchronous code</p>
<p>Library home page: <a href="https://registry.npmjs.org/async/-/async-2.0.1.tgz">https://registry.npmjs.org/async/-/async-2.0.1.tgz</a></p>
<p>Path to dependency file: /Application/package.json</p>
<p>Path to vulnerable library: /Application/node_modules/async/package.json</p>
<p>
Dependency Hierarchy:
- sails-disk-2.1.1.tgz (Root Library)
- :x: **async-2.0.1.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/LaudateCorpus1/JQuery-Mobile/commit/48591fdaa1c416f0fbbfb879e119fee8ac7e396a">48591fdaa1c416f0fbbfb879e119fee8ac7e396a</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A vulnerability exists in Async through 3.2.1 (fixed in 3.2.2) , which could let a malicious user obtain privileges via the mapValues() method.
<p>Publish Date: 2022-04-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-43138>CVE-2021-43138</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-43138">https://nvd.nist.gov/vuln/detail/CVE-2021-43138</a></p>
<p>Release Date: 2022-04-06</p>
<p>Fix Resolution: async - v3.2.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve high detected in multiple libraries cve high severity vulnerability vulnerable libraries async tgz async tgz async tgz async tgz async tgz async tgz async tgz async tgz higher order functions and common patterns for asynchronous code library home page a href path to dependency file application package json path to vulnerable library application node modules winston node modules async package json dependency hierarchy sails tgz root library prompt tgz winston tgz x async tgz vulnerable library async tgz higher order functions and common patterns for asynchronous code library home page a href path to dependency file application package json path to vulnerable library application node modules grunt legacy util node modules async package json application node modules grunt contrib less node modules async package json dependency hierarchy grunt contrib less tgz root library x async tgz vulnerable library async tgz higher order functions and common patterns for asynchronous code library home page a href path to dependency file application package json path to vulnerable library application node modules grunt contrib watch node modules async package json application node modules grunt contrib clean node modules async package json dependency hierarchy grunt contrib watch tgz root library x async tgz vulnerable library async tgz higher order functions and common patterns for asynchronous code library home page a href path to dependency file application package json path to vulnerable library application node modules jake node modules async package json application node modules prompt node modules async package json dependency hierarchy ejs tgz root library jake tgz x async tgz vulnerable library async tgz higher order functions and common patterns for asynchronous code library home page a href path to dependency file application 
package json path to vulnerable library application node modules sailshq nedb node modules async package json dependency hierarchy sails disk tgz root library nedb tgz x async tgz vulnerable library async tgz higher order functions and common patterns for asynchronous code library home page a href path to dependency file application package json path to vulnerable library application node modules sails node modules async package json dependency hierarchy sails tgz root library x async tgz vulnerable library async tgz higher order functions and common patterns for asynchronous code library home page a href path to dependency file application package json path to vulnerable library application node modules async package json dependency hierarchy sails disk tgz root library x async tgz vulnerable library found in head commit a href found in base branch master vulnerability details a vulnerability exists in async through fixed in which could let a malicious user obtain privileges via the mapvalues method publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution async step up your open source security game with whitesource | 0 |
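The advisory in the row above concerns the prototype-pollution issue in `async` (reachable via the `mapValues` method; reported fixed in 2.6.4 on the 2.x line and 3.2.2 on 3.x). Since every affected copy in that dependency tree is transitive, one hedged way to force the fixed version without waiting on each parent package is an npm `overrides` entry — this assumes npm 8.3+ (yarn users would use `resolutions`), and is a sketch rather than a verified fix for this particular tree:

```json
{
  "overrides": {
    "async": "^2.6.4"
  }
}
```

After adding the override, a fresh `npm install` should rewrite the lockfile so all nested `async` copies resolve to the patched range.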
2,695 | 9,413,636,759 | IssuesEvent | 2019-04-10 08:18:24 | IPVS-AS/MBP | https://api.github.com/repos/IPVS-AS/MBP | opened | Clean up plugin directory | maintainance | There is a "plugin" directory that contains lots of javascript plugins which are not used by the project. I don't see a reason for keeping all these plugins in the repo since they are not required, so this directory should be cleaned up. | True | Clean up plugin directory - There is a "plugin" directory that contains lots of javascript plugins which are not used by the project. I don't see a reason for keeping all these plugins in the repo since they are not required, so this directory should be cleaned up. | main | clean up plugin directory there is a plugin directory that contains lots of javascript plugins which are not used by the project i don t see a reason for keeping all these plugins in the repo since they are not required so this directory should be cleaned up | 1 |
31,383 | 4,705,009,971 | IssuesEvent | 2016-10-13 13:24:55 | DBCDK/biblo | https://api.github.com/repos/DBCDK/biblo | closed | User gets an "Oops" error on login | bug Fixed Test please | A user gets an error page saying "Oops, the page does not exist" when she logs in. AFB and I have verified it and get the same result. Her user id is frejxxxx | 1.0 | User gets an "Oops" error on login - A user gets an error page saying "Oops, the page does not exist" when she logs in. AFB and I have verified it and get the same result. Her user id is frejxxxx | non_main | user gets an oops error on login a user gets an error page saying oops the page does not exist when she logs in afb and i have verified it and get the same result her user id is frejxxxx | 0
776 | 4,383,247,190 | IssuesEvent | 2016-08-07 12:08:10 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | lxc_container: not supporting lxc 2.0/lxd - ubuntu xenial ? | bug_report cloud waiting_on_maintainer |
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lxc_container
##### ANSIBLE VERSION
```
ansible 2.1.1.0
config file = /opt/tmp/vagrant/homelab/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
```
[defaults]
log_path=ansible.log
roles_path = ./
transport = ssh
forks=5
callback_plugins = callback_plugins/
[ssh_connection]
ssh_args = -o ForwardAgent=yes
pipelining=True
scp_if_ssh=True
```
##### OS / ENVIRONMENT
Orchestrator: Ubuntu Trusty LTS
Target: Ubuntu Xenial LTS
##### SUMMARY
Creating container is not working. Command is not found
##### STEPS TO REPRODUCE
Orchestrator
```
$ cat test.yml
---
- hosts: all
tasks:
- name: Create a started container
lxc_container:
name: test
container_log: true
template: ubuntu-16.04
state: started
# container_command: |
# apt-get update
# apt-get install -y python
$ time ansible-playbook -i inventory test.yml
PLAY [all] *********************************************************************
TASK [setup] *******************************************************************
ok: [A]
TASK [Create a started container] **********************************************
fatal: [A]: FAILED! => {"changed": false, "failed": true, "msg": "Failed to find required executable lxc-create"}
NO MORE HOSTS LEFT *************************************************************
[WARNING]: Could not create retry file 'test.retry'. [Errno 2] No such file or directory: ''
PLAY RECAP *********************************************************************
A : ok=1 changed=0 unreachable=0 failed=1
```
Target
```
# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.1 LTS
Release: 16.04
Codename: xenial
# dpkg -l |egrep '(lxc|lxd)'
ii liblxc1 2.0.3-0ubuntu1~ubuntu16.04.1 amd64 Linux Containers userspace tools (library)
ii lxc-common 2.0.3-0ubuntu1~ubuntu16.04.1 amd64 Linux Containers userspace tools (common tools)
ii lxcfs 2.0.2-0ubuntu1~ubuntu16.04.1 amd64 FUSE based filesystem for LXC
ii lxd 2.0.3-0ubuntu1~ubuntu16.04.2 amd64 Container hypervisor based on LXC - daemon
ii lxd-client 2.0.3-0ubuntu1~ubuntu16.04.2 amd64 Container hypervisor based on LXC - client
ii python-lxc 0.1-0ubuntu6 amd64 Linux container userspace tools (Python 2.x bindings)
```
in lxc2/lxd, creation of container is not done with lxc-create but with lxc init/launch/start/...
see
https://linuxcontainers.org/lxd/getting-started-cli/
Online demonstrator
https://linuxcontainers.org/lxd/try-it/
##### EXPECTED RESULTS
creation of a container.
##### ACTUAL RESULTS
fail to create container. need to use manual command to do so.
| True | lxc_container: not supporting lxc 2.0/lxd - ubuntu xenial ? -
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lxc_container
##### ANSIBLE VERSION
```
ansible 2.1.1.0
config file = /opt/tmp/vagrant/homelab/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
```
[defaults]
log_path=ansible.log
roles_path = ./
transport = ssh
forks=5
callback_plugins = callback_plugins/
[ssh_connection]
ssh_args = -o ForwardAgent=yes
pipelining=True
scp_if_ssh=True
```
##### OS / ENVIRONMENT
Orchestrator: Ubuntu Trusty LTS
Target: Ubuntu Xenial LTS
##### SUMMARY
Creating container is not working. Command is not found
##### STEPS TO REPRODUCE
Orchestrator
```
$ cat test.yml
---
- hosts: all
tasks:
- name: Create a started container
lxc_container:
name: test
container_log: true
template: ubuntu-16.04
state: started
# container_command: |
# apt-get update
# apt-get install -y python
$ time ansible-playbook -i inventory test.yml
PLAY [all] *********************************************************************
TASK [setup] *******************************************************************
ok: [A]
TASK [Create a started container] **********************************************
fatal: [A]: FAILED! => {"changed": false, "failed": true, "msg": "Failed to find required executable lxc-create"}
NO MORE HOSTS LEFT *************************************************************
[WARNING]: Could not create retry file 'test.retry'. [Errno 2] No such file or directory: ''
PLAY RECAP *********************************************************************
A : ok=1 changed=0 unreachable=0 failed=1
```
Target
```
# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.1 LTS
Release: 16.04
Codename: xenial
# dpkg -l |egrep '(lxc|lxd)'
ii liblxc1 2.0.3-0ubuntu1~ubuntu16.04.1 amd64 Linux Containers userspace tools (library)
ii lxc-common 2.0.3-0ubuntu1~ubuntu16.04.1 amd64 Linux Containers userspace tools (common tools)
ii lxcfs 2.0.2-0ubuntu1~ubuntu16.04.1 amd64 FUSE based filesystem for LXC
ii lxd 2.0.3-0ubuntu1~ubuntu16.04.2 amd64 Container hypervisor based on LXC - daemon
ii lxd-client 2.0.3-0ubuntu1~ubuntu16.04.2 amd64 Container hypervisor based on LXC - client
ii python-lxc 0.1-0ubuntu6 amd64 Linux container userspace tools (Python 2.x bindings)
```
in lxc2/lxd, creation of container is not done with lxc-create but with lxc init/launch/start/...
see
https://linuxcontainers.org/lxd/getting-started-cli/
Online demonstrator
https://linuxcontainers.org/lxd/try-it/
##### EXPECTED RESULTS
creation of a container.
##### ACTUAL RESULTS
fail to create container. need to use manual command to do so.
| main | lxc container not supporting lxc lxd ubuntu xenial issue type bug report component name lxc container ansible version ansible config file opt tmp vagrant homelab ansible cfg configured module search path default w o overrides configuration log path ansible log roles path transport ssh forks callback plugins callback plugins ssh args o forwardagent yes pipelining true scp if ssh true os environment orchestrator ubuntu trusty lts target ubuntu xenial lts summary creating container is not working command is not found steps to reproduce orchestrator cat test yml hosts all tasks name create a started container lxc container name test container log true template ubuntu state started container command apt get update apt get install y python time ansible playbook i inventory test yml play task ok task fatal failed changed false failed true msg failed to find required executable lxc create no more hosts left could not create retry file test retry no such file or directory play recap a ok changed unreachable failed target lsb release a no lsb modules are available distributor id ubuntu description ubuntu lts release codename xenial dpkg l egrep lxc lxd ii linux containers userspace tools library ii lxc common linux containers userspace tools common tools ii lxcfs fuse based filesystem for lxc ii lxd container hypervisor based on lxc daemon ii lxd client container hypervisor based on lxc client ii python lxc linux container userspace tools python x bindings in lxd creation of container is not done with lxc create but with lxc init launch start see online demonstrator expected results creation of a container actual results fail to create container need to use manual command to do so | 1 |
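The report above boils down to the module shelling out to `lxc-create`, which the LXD 2.x packages on Xenial no longer ship; the replacement CLI is `lxc init` / `lxc launch` / `lxc start`. A minimal sketch of that mapping follows — names are illustrative, and it only builds the argv lists (actually running them requires an LXD host):

```python
# Map the legacy lxc-* tools the module expects onto the LXD 2.x CLI
# described in the issue. This only constructs command lines; it does
# not execute anything.
LEGACY_TO_LXD = {
    "lxc-create": ["lxc", "init"],   # create without starting
    "lxc-start": ["lxc", "start"],   # "lxc launch" combines both steps
}

def lxd_command(legacy_tool, image, name):
    """Return the LXD 2.x argv equivalent of a legacy lxc-* invocation."""
    base = list(LEGACY_TO_LXD[legacy_tool])
    if base[-1] == "init":
        return base + [image, name]  # init needs the image source
    return base + [name]             # start only needs the container name
```

In other words, supporting Xenial would mean the module detecting which toolchain is installed and emitting the LXD-style commands instead of failing on the missing `lxc-create` executable.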
752 | 4,351,489,927 | IssuesEvent | 2016-07-31 22:00:21 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | unarchive permission issues with become and http src | bug_report waiting_on_maintainer | ##### ISSUE TYPE
Bug Report
##### COMPONENT NAME
unarchive module
##### ANSIBLE VERSION
N/A
##### SUMMARY
The unarchive module does not appear to honor become permissions when downloading a remote file. It looks like the module tries to download the file to ansible tmp directory as the become_user however that user does not have permissions to write there since they are owned by the user ansible is running as. It only appears to work when become_user is root.
Here is an example from my playbook that fails.
```
- name: Extract Zip
unarchive: src=http://someserver.com/files/zipfile.zip dest=/opt/exploded copy=no
become: yes
become_user: user2
```
| True | unarchive permission issues with become and http src - ##### ISSUE TYPE
Bug Report
##### COMPONENT NAME
unarchive module
##### ANSIBLE VERSION
N/A
##### SUMMARY
The unarchive module does not appear to honor become permissions when downloading a remote file. It looks like the module tries to download the file to ansible tmp directory as the become_user however that user does not have permissions to write there since they are owned by the user ansible is running as. It only appears to work when become_user is root.
Here is an example from my playbook that fails.
```
- name: Extract Zip
unarchive: src=http://someserver.com/files/zipfile.zip dest=/opt/exploded copy=no
become: yes
become_user: user2
```
| main | unarchive permission issues with become and http src issue type bug report component name unarchive module ansible version n a summary the unarchive module does not appear to honor become permissions when downloading a remote file it looks like the module tries to download the file to ansible tmp directory as the become user however that user does not have permissions to write there since they are owned by the user ansible is running as it only appears to work when become user is root here is an example from my playbook that fails name extract zip unarchive src dest opt exploded copy no become yes become user | 1 |
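A commonly suggested workaround for the behaviour described above is to split the download from the extraction, so the fetch no longer happens inside a temp directory the `become_user` cannot write to. This is a sketch only, with illustrative paths, not a verified fix for the module itself:

```yaml
# Sketch of a split download/extract workaround (paths are illustrative).
- name: Download the archive to a location the unprivileged user can read
  get_url:
    url: http://someserver.com/files/zipfile.zip
    dest: /tmp/zipfile.zip
    mode: '0644'

- name: Extract the already-downloaded file as the unprivileged user
  unarchive:
    src: /tmp/zipfile.zip
    dest: /opt/exploded
    copy: no            # the file is already on the target host
  become: yes
  become_user: user2
```

The first task runs with default privileges, so the temp-directory ownership problem never arises; only the extraction step is escalated to `user2`.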
382,439 | 11,306,342,319 | IssuesEvent | 2020-01-18 13:24:57 | redeclipse/base | https://api.github.com/repos/redeclipse/base | opened | Improve keybind menu | difficulty: medium priority: low status: enhancement | Review of the keybind menu has shown some usability issues compared to the previous iterations. The areas of input need to be highlighted better (like a button that should be pressed) and an option needs to be added to clear a particular bind (unbind all keys associated with that command). | 1.0 | Improve keybind menu - Review of the keybind menu has shown some usability issues compared to the previous iterations. The areas of input need to be highlighted better (like a button that should be pressed) and an option needs to be added to clear a particular bind (unbind all keys associated with that command). | non_main | improve keybind menu review of the keybind menu has shown some usability issues compared to the previous iterations the areas of input need to be highlighted better like a button that should be pressed and an option needs to be added to clear a particular bind unbind all keys associated with that command | 0 |
2,162 | 7,524,340,059 | IssuesEvent | 2018-04-13 06:38:48 | RalfKoban/MiKo-Analyzers | https://api.github.com/repos/RalfKoban/MiKo-Analyzers | closed | Commands should invoke named methods | Area: analyzer Area: maintainability feature in progress | If a command invokes a lambda, then the multiple parameters get captured.
So instead, a command should invoke a named method. In addition, that makes it much easier to understand. | True | Commands should invoke named methods - If a command invokes a lambda, then the multiple parameters get captured.
So instead, a command should invoke a named method. In addition, that makes it much easier to understand. | main | commands should invoke named methods if a command invokes a lambda then the multiple parameters get captured so instead a command should invoke a named method in addition that makes it much easier to understand | 1 |
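The rule above targets C#/WPF command bindings, but the capture hazard it guards against exists in any language with late-binding closures. As a hedged transposition (the analogy, not the analyzer's actual check), here is the same pitfall in Python — lambdas created in a loop all share the loop variable, while a named function bound per-item does not:

```python
# Illustrative sketch of the capture pitfall behind the analyzer rule,
# transposed to Python.
from functools import partial

def make_commands_with_lambdas(items):
    # Every lambda closes over the same loop variable, so after the
    # comprehension finishes they all see the final item.
    return [lambda: item for item in items]

def handle(item):
    return item

def make_commands_with_named(items):
    # Binding a named function per item freezes each value at creation time.
    return [partial(handle, item) for item in items]
```

Beyond correctness, the named-method form is also what makes the call site readable — which is the maintainability point the issue is making.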
220,249 | 7,354,599,031 | IssuesEvent | 2018-03-09 07:43:50 | cuappdev/podcast-ios | https://api.github.com/repos/cuappdev/podcast-ios | closed | Update logo on loginviewcontroller | BUG BASHHH!!! Priority: Critical Status: In Progress | make sure you upload the one with the transparent background
The icon should be switched for that to what eileen uploaded on Zeplin
<img width="337" alt="screen shot 2018-03-07 at 6 25 56 pm" src="https://user-images.githubusercontent.com/14966713/37124123-fdc21102-2234-11e8-9c16-603388397615.png">
| 1.0 | Update logo on loginviewcontroller - make sure you upload the one with the transparent background
The icon should be switched for that to what eileen uploaded on Zeplin
<img width="337" alt="screen shot 2018-03-07 at 6 25 56 pm" src="https://user-images.githubusercontent.com/14966713/37124123-fdc21102-2234-11e8-9c16-603388397615.png">
| non_main | update logo on loginviewcontroller make sure you upload the one with the transparent background the icon should be switched for that to what eileen uploaded on zeplin img width alt screen shot at pm src | 0 |
650,361 | 21,388,676,230 | IssuesEvent | 2022-04-21 03:31:45 | wso2/product-is | https://api.github.com/repos/wso2/product-is | closed | Getting Something went wrong and Retrieval Error when trying to terminate admin user active session | ui Priority/Highest Severity/Critical bug console Affected-5.12.0 QA-Reported | **How to reproduce:**
1. Access console
2. List admin user
3. Go to Active sessions tab
4. Click on Terminate session by listing the current session
5. There will be 2 alerts one as successful and 1 as a error in terminating session
6. Then try to list users or try to do any operation in the console
7. Will end up in something went wrong errors


https://user-images.githubusercontent.com/31848014/164158612-f82e296f-bc21-46f4-b319-6dba0d6cbd33.mp4
**Environment information** (_Please complete the following information; remove any unnecessary fields_) **:**
IS 5.12.0 alpha 20 | 1.0 | Getting Something went wrong and Retrieval Error when trying to terminate admin user active session - **How to reproduce:**
1. Access console
2. List admin user
3. Go to Active sessions tab
4. Click on Terminate session by listing the current session
5. There will be 2 alerts one as successful and 1 as a error in terminating session
6. Then try to list users or try to do any operation in the console
7. Will end up in something went wrong errors


https://user-images.githubusercontent.com/31848014/164158612-f82e296f-bc21-46f4-b319-6dba0d6cbd33.mp4
**Environment information** (_Please complete the following information; remove any unnecessary fields_) **:**
IS 5.12.0 alpha 20 | non_main | getting something went wrong and retrieval error when trying to terminate admin user active session how to reproduce access console list admin user go to active sessions tab click on terminate session by listing the current session there will be alerts one as successful and as a error in terminating session then try to list users or try to do any operation in the console will end up in something went wrong errors environment information please complete the following information remove any unnecessary fields is alpha | 0 |
95,199 | 3,940,373,329 | IssuesEvent | 2016-04-27 00:27:25 | TranslationWMcs435/TranslationWMcs435 | https://api.github.com/repos/TranslationWMcs435/TranslationWMcs435 | closed | Try to get Robolectric to work | Medium Priority New Feature | As of now, RobolectricTranslator has been tried to be implemented with no success. I'll try my hand at it and see if I can get it to work. | 1.0 | Try to get Robolectric to work - As of now, RobolectricTranslator has been tried to be implemented with no success. I'll try my hand at it and see if I can get it to work. | non_main | try to get robolectric to work as of now robolectrictranslator has been tried to be implemented with no success i ll try my hand at it and see if i can get it to work | 0 |
336,344 | 24,494,333,394 | IssuesEvent | 2022-10-10 07:14:08 | datakaveri/iudx-deployment | https://api.github.com/repos/datakaveri/iudx-deployment | closed | Add guidelines/docs to use git based working | documentation | - Like how to contribute
- remotes, PRs, linking of issue.
- milestones, projects | 1.0 | Add guidelines/docs to use git based working - - Like how to contribute
- remotes, PRs, linking of issue.
- milestones, projects | non_main | add guidelines docs to use git based working like how to contribute remotes prs linking of issue milestones projects | 0 |
5,270 | 26,635,952,206 | IssuesEvent | 2023-01-24 21:58:24 | centerofci/mathesar | https://api.github.com/repos/centerofci/mathesar | opened | Layout overflow in Data Explorer page | type: bug work: frontend status: ready restricted: maintainers | ## Description
* In the data explorer, notice that when the window size is reduced, content overflows and we're unable to access them.
* 
* Notice that the inspector overflows and half of it is hidden, and the action pane buttons are hidden. We don't have any scrollbars or ways to access them.
* Not having a horizontal scrollbar for the content overflow is a regression introduced in https://github.com/centerofci/mathesar/pull/2249.
* The action pane issue has existed for a while.
* For reference, this was the previous UX. We need the same UX for the content, where we have a larger horizontal scrollbar at the bottom to scroll through the content.
* 
* For the action pane, we want a similar approach as we have for the table page, as done in https://github.com/centerofci/mathesar/pull/2249, where we show icons and try to fit it all in the viewport.
Related to: https://github.com/centerofci/mathesar/issues/2150 | True | Layout overflow in Data Explorer page - ## Description
* In the data explorer, notice that when the window size is reduced, content overflows and we're unable to access them.
* 
* Notice that the inspector overflows and half of it is hidden, and the action pane buttons are hidden. We don't have any scrollbars or ways to access them.
* Not having a horizontal scrollbar for the content overflow is a regression introduced in https://github.com/centerofci/mathesar/pull/2249.
* The action pane issue has existed for a while.
* For reference, this was the previous UX. We need the same UX for the content, where we have a larger horizontal scrollbar at the bottom to scroll through the content.
* 
* For the action pane, we want a similar approach as we have for the table page, as done in https://github.com/centerofci/mathesar/pull/2249, where we show icons and try to fit it all in the viewport.
Related to: https://github.com/centerofci/mathesar/issues/2150 | main | layout overflow in data explorer page description in the data explorer notice that when the window size is reduced content overflows and we re unable to access them notice that the inspector overflows and half of it is hidden and the action pane buttons are hidden we don t have any scrollbars or ways to access them not having a horizontal scrollbar for the content overflow is a regression introduced in the action pane issue has existed for a while for reference this was the previous ux we need the same ux for the content where we have a larger horizontal scrollbar at the bottom to scroll through the content for the action pane we want a similar approach as we have for the table page as done in where we show icons and try to fit it all in the viewport related to | 1 |
5,572 | 27,882,329,574 | IssuesEvent | 2023-03-21 20:27:08 | carbon-design-system/carbon | https://api.github.com/repos/carbon-design-system/carbon | closed | [Bug]: datatable state manager not updating with new `rows` values such as `isSelected` | type: bug 🐛 status: needs triage 🕵️♀️ status: waiting for maintainer response 💬 status: needs reproduction | ### Package
carbon-components, carbon-components-react, @carbon/colors, @carbon/icons-react, @carbon/motion, @carbon/themes, @carbon/type
### Browser
Chrome, Safari, Firefox, Edge
### Package version
colors: 10.33.0, icons-react: 10.49.0, motion: 10.25.0, themes: 10.48.0, type: 10.39.0, carbon-components-react: 7.50.0, carbon-components: 10.50.0
### React version
16.12.0
### Description
We also use
"@carbon/ibm-cloud-cognitive" version being "0.87.3"
We are trying to make it so user can add a column above the column selected

But the issue is when we update our code properly with the new `isSelected` value it doesn't get updated on the state manager for datatable. I left a more thorough explanation video in the carbon slack repo https://ibm-analytics.slack.com/archives/C2K6RFJ1G/p1678392304145939 please use this video for understanding the issue as it's hard to explain in text. If needed, feel free to message me on slack and we can webex to go over issue more.
The slack link should explain most of the issue and I hope a carbon dev can understand what I mean.
Select column

add new column for above, this is what we see from carbon

issue - new column is being selected
expected display

I go more into detail on steps to reproduce on why this is carbon issue. Everything from our end in the code appears to all be correct for the `rows` render of DataTable, I verified this numerous times and the conclusion I ended up with that this is an issue with the state manager for the `isSelected` property and how it is handled in `DataTable.js`
I checked this issue
https://github.com/carbon-design-system/carbon/issues/6241 although it looks similar, I believe it is actually different than what I am going over. Also not duplicate because I go a bit more into detail for my scenario and usecase, thanks!
I saw this reply https://github.com/carbon-design-system/carbon/issues/6241#issuecomment-643283008 and I am hoping that this isn't the same for this scenario because it doesn't make sense to why the user would have to set up their own state manager than the one built into `DataTable` when this is carbon code having this issue. Just hoping this can be fixed and addressed, let me know if any questions. Thanks!
### Reproduction/example
-
### Steps to reproduce
I spent some time trying to get a sandbox set up and I couldn't get it and I also don't have a lot of time to spend on trying to do so since there is some custom parts that we have in our code that I would need to try to re-create on sandbox and I don't have the time to do all of this currently. But the issue remains to be on carbon end and I can't resolve the issue on my end due to this blockage. I'll do my best to explain without a sandbox, if more info is needed please contact me on slack.
Essentially though
- Have a datatable, have it be able to add rows
- select a row. on the data table and have a custom batch action to appends a new row
- The new row should be appended above the selected row (which is per design on our issue)
- The value `isSelected` should remain on the row selected and not on the newly appended row.
So if I have
<table>
<tr>
<th> Id </th>
<th> Column </th>
</tr>
<tr>
<td> 1 </td>
<td> row 1 </td>
</tr>
<tr>
<td> 2 </td>
<td> row 2 </td>
</tr>
<tr>
<td> 3 </td>
<td> row 3 </td>
</tr>
</table>
Say I select row 2 and add a new row
Step 1
<table>
<tr>
<th> Id </th>
<th> Column </th>
</tr>
<tr>
<td> 1 </td>
<td> row 1 </td>
</tr>
<tr>
<td> 2 </td>
<td> row 2 (isSelected set to true)</td>
</tr>
<tr>
<td> 3 </td>
<td> row 3 </td>
</tr>
</table>
Step 2 - add new row
<table>
<tr>
<th> Id </th>
<th> Column </th>
</tr>
<tr>
<td> 1 </td>
<td> row 1 </td>
</tr>
<tr>
<td> 2 </td>
<td> row 4 (New row) </td>
</tr>
<tr>
<td> 3 </td>
<td> row 2 (Should still have `isSelected` set to true in the rows)</td>
</tr>
<tr>
<td> 4 </td>
<td> row 3 </td>
</tr>
</table>
The output I am seeing is
<table>
<tr>
<th> Id </th>
<th> Column </th>
</tr>
<tr>
<td> 1 </td>
<td> row 1 </td>
</tr>
<tr>
<td> 2 </td>
<td> row 4 (New row) - has `this.state.rowsById` for `isSelected` set to true, but this shouldn't be the case </td>
</tr>
<tr>
<td> 3 </td>
<td> row 2 (No longer has `isSelected` set to true visually, but in the data for `this.props.rows` it does)</td>
</tr>
<tr>
<td> 4 </td>
<td> row 3 </td>
</tr>
</table>
I put breakpoints on DataTable.js and went through, here is the first instance that we see `this.state.rowsById` get the new row added using the same scenario provided above

Now this doesn't make sense (to me at least) why for index `1` why `isSelected` is being set to `true` because this doesn't reflect accurately to the data being sent and no where through the various breakpoints does this `isSelected` ever gets updated.
See `this.props.rows`

`column_4` at index 1 says `isSelected` is false, but for index 2, column_2 `isSelected` is set to true as it should be. This is correct.
The state manager never updates with this new info as `isSelected` is not correct.
This is the best example I can give, if more info is needed. Please slack me (slack is on that thread) and we can webex or discuss further on the issue.
### Suggested Severity
Severity 2 = User cannot complete task, and/or no workaround within the user experience of a given component.
### Application/PAL
IBM DataStage
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/carbon-design-system/carbon/blob/f555616971a03fd454c0f4daea184adf41fff05b/.github/CODE_OF_CONDUCT.md)
- [X] I checked the [current issues](https://github.com/carbon-design-system/carbon/issues) for duplicate problems | True | [Bug]: datatable state manager not updating with new `rows` values such as `isSelected` - ### Package
carbon-components, carbon-components-react, @carbon/colors, @carbon/icons-react, @carbon/motion, @carbon/themes, @carbon/type
### Browser
Chrome, Safari, Firefox, Edge
### Package version
colors: 10.33.0, icons-react: 10.49.0, motion: 10.25.0, themes: 10.48.0, type: 10.39.0, carbon-components-react: 7.50.0, carbon-components: 10.50.0
### React version
16.12.0
### Description
We also use
"@carbon/ibm-cloud-cognitive" version being "0.87.3"
We are trying to make it so user can add a column above the column selected

But the issue is when we update our code properly with the new `isSelected` value it doesn't get updated on the state manager for datatable. I left a more thorough explanation video in the carbon slack repo https://ibm-analytics.slack.com/archives/C2K6RFJ1G/p1678392304145939 please use this video for understanding the issue as it's hard to explain in text. If needed, feel free to message me on slack and we can webex to go over issue more.
The slack link should explain most of the issue and I hope a carbon dev can understand what I mean.
Select column

add new column for above, this is what we see from carbon

issue - new column is being selected
expected display

I go more into detail on steps to reproduce on why this is carbon issue. Everything from our end in the code appears to all be correct for the `rows` render of DataTable, I verified this numerous times and the conclusion I ended up with that this is an issue with the state manager for the `isSelected` property and how it is handled in `DataTable.js`
I checked this issue
https://github.com/carbon-design-system/carbon/issues/6241 although it looks similar, I believe it is actually different than what I am going over. Also not duplicate because I go a bit more into detail for my scenario and usecase, thanks!
I saw this reply https://github.com/carbon-design-system/carbon/issues/6241#issuecomment-643283008 and I am hoping that this isn't the same for this scenario because it doesn't make sense to why the user would have to set up their own state manager than the one built into `DataTable` when this is carbon code having this issue. Just hoping this can be fixed and addressed, let me know if any questions. Thanks!
### Reproduction/example
-
### Steps to reproduce
I spent some time trying to get a sandbox set up and I couldn't get it and I also don't have a lot of time to spend on trying to do so since there is some custom parts that we have in our code that I would need to try to re-create on sandbox and I don't have the time to do all of this currently. But the issue remains to be on carbon end and I can't resolve the issue on my end due to this blockage. I'll do my best to explain without a sandbox, if more info is needed please contact me on slack.
Essentially though
- Have a datatable, have it be able to add rows
- select a row. on the data table and have a custom batch action to appends a new row
- The new row should be appended above the selected row (which is per design on our issue)
- The value `isSelected` should remain on the row selected and not on the newly appended row.
So if I have
<table>
<tr>
<th> Id </th>
<th> Column </th>
</tr>
<tr>
<td> 1 </td>
<td> row 1 </td>
</tr>
<tr>
<td> 2 </td>
<td> row 2 </td>
</tr>
<tr>
<td> 3 </td>
<td> row 3 </td>
</tr>
</table>
Say I select row 2 and add a new row
Step 1
<table>
<tr>
<th> Id </th>
<th> Column </th>
</tr>
<tr>
<td> 1 </td>
<td> row 1 </td>
</tr>
<tr>
<td> 2 </td>
<td> row 2 (isSelected set to true)</td>
</tr>
<tr>
<td> 3 </td>
<td> row 3 </td>
</tr>
</table>
Step 2 - add new row
<table>
<tr>
<th> Id </th>
<th> Column </th>
</tr>
<tr>
<td> 1 </td>
<td> row 1 </td>
</tr>
<tr>
<td> 2 </td>
<td> row 4 (New row) </td>
</tr>
<tr>
<td> 3 </td>
<td> row 2 (Should still have `isSelected` set to true in the rows)</td>
</tr>
<tr>
<td> 4 </td>
<td> row 3 </td>
</tr>
</table>
The output I am seeing is
<table>
<tr>
<th> Id </th>
<th> Column </th>
</tr>
<tr>
<td> 1 </td>
<td> row 1 </td>
</tr>
<tr>
<td> 2 </td>
<td> row 4 (New row) - has `this.state.rowsById` for `isSelected` set to true, but this shouldn't be the case </td>
</tr>
<tr>
<td> 3 </td>
<td> row 2 (No longer has `isSelected` set to true visually, but in the data for `this.props.rows` it does)</td>
</tr>
<tr>
<td> 4 </td>
<td> row 3 </td>
</tr>
</table>
I put breakpoints in DataTable.js and stepped through; here is the first instance where we see `this.state.rowsById` get the new row added, using the same scenario provided above:

Now it doesn't make sense (to me at least) why `isSelected` is being set to `true` for index `1`, because this doesn't accurately reflect the data being sent, and nowhere through the various breakpoints does this `isSelected` ever get updated.
See `this.props.rows`

`column_4` at index 1 says `isSelected` is false, but for index 2, `column_2` has `isSelected` set to true, as it should be. This is correct.
The state manager, however, never updates with this new info, so its `isSelected` values are not correct.
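To make the expected bookkeeping concrete (this is an illustrative sketch only, not Carbon's actual DataTable internals), here is a minimal model where selection is tracked per row id rather than per index, so inserting a row above the selected one leaves the selection on the original row:

```javascript
// Illustrative only: not Carbon's real DataTable code. Selection is keyed
// by row id (never by index), so an insertion above the selected row must
// not move the isSelected flag onto the new row.
function makeTable(rows) {
  return { rows: rows.slice(), selected: new Set() };
}

function selectRow(table, id) {
  table.selected.add(id);
}

// Insert `newRow` directly above the row whose id is `targetId`.
function insertAbove(table, targetId, newRow) {
  const i = table.rows.findIndex((r) => r.id === targetId);
  table.rows.splice(i, 0, newRow);
}

function isSelected(table, id) {
  return table.selected.has(id);
}

// Scenario from the tables above: select row 2, then append row 4 above it.
const table = makeTable([{ id: 'row 1' }, { id: 'row 2' }, { id: 'row 3' }]);
selectRow(table, 'row 2');
insertAbove(table, 'row 2', { id: 'row 4' });

console.log(table.rows.map((r) => r.id)); // [ 'row 1', 'row 4', 'row 2', 'row 3' ]
console.log(isSelected(table, 'row 2')); // true: selection stayed with row 2
console.log(isSelected(table, 'row 4')); // false: the new row is not selected
```

If anything instead keys the flag by row index (which is what `this.state.rowsById` appears to end up doing after the insert), the selection effectively travels to whatever row now occupies that position, matching the incorrect output shown above.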
This is the best example I can give; if more info is needed, please Slack me (my Slack is on that thread) and we can WebEx or discuss the issue further.
### Suggested Severity
Severity 2 = User cannot complete task, and/or no workaround within the user experience of a given component.
### Application/PAL
IBM DataStage
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/carbon-design-system/carbon/blob/f555616971a03fd454c0f4daea184adf41fff05b/.github/CODE_OF_CONDUCT.md)
- [X] I checked the [current issues](https://github.com/carbon-design-system/carbon/issues) for duplicate problems
304,247 | 9,329,362,275 | IssuesEvent | 2019-03-28 02:02:59 | milleniumbug/DidacticalEnigma | https://api.github.com/repos/milleniumbug/DidacticalEnigma | closed | Optimize resource usage | high-priority | Currently on startup, the program creates dictionary lookup files from the JMdict and JNedict files, which take ~1GB of disk space. This could be significantly reduced because there's a lot of redundant information stored there. Also, during creation of these files the application can take up to 2GB RAM. The creation of these files can take up to several minutes even on highly performant machines, and it could be even slower otherwise.
These all need to be fixed.
2,983 | 10,757,908,848 | IssuesEvent | 2019-10-31 14:07:31 | ansible/ansible | https://api.github.com/repos/ansible/ansible | closed | Go module to install packages | affects_2.5 feature has_pr needs_maintainer support:core | ##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
Go module
##### ANSIBLE VERSION
```
Ansible 2.5.0
```
##### CONFIGURATION
Default.
##### OS / ENVIRONMENT
Docker OS: Ubuntu
Host OS: macOS
##### SUMMARY
I'd like to see a `go` module to handle package installation, similar to `apt` or `pip`.
```
tasks:
  - name: Install pup (CLI HTML processor)
    go: name=pup state=present
```
which would be equivalent to the `go get github.com/ericchiang/pup` command.
##### STEPS TO REPRODUCE
`provision.yml`
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- hosts: localhost
  tasks:
    - name: Install GO (Google's Programming Language)
      apt: name=golang state=present
    - name: Install pup (CLI HTML processor)
      go: name=pup state=present
```
##### EXPECTED RESULTS
> Step 4/9 : RUN go get github.com/ericchiang/pup
##### ACTUAL RESULTS
> ERROR! no action detected in task. This often indicates a misspelled module name, or incorrect module path.
1,475 | 6,400,962,270 | IssuesEvent | 2017-08-05 16:22:35 | beefproject/beef | https://api.github.com/repos/beefproject/beef | closed | Don't hardcode specific versions in Gemfile | Maintainability | Hello, I'm trying to improve the beef-xss package for Kali and in the process I want to use system-wide ruby gems provided by proper debian packages. I packaged most of the required gems but when I do that I package the latest version of each gem (or keep the version already available in Kali, which on the contrary might be older than the one that you are requiring).
Please fix the Gemfile to avoid exact version requirements (like you do on eventmachine, sinatra, rack and now rubydns) and favor relationships that match the reality of beef-xss requirements (i.e. >= a specific minimal version that is required). FWIW Kali still has eventmachine 0.12, rack 1.4.1, sinatra 1.3.2. Do you know if beef works with them?
Furthermore you have some "~>" relationships that impose the use of an older version. em-websocket for instance is locked to 0.3.X while we now have 0.5.1 in Kali. Would beef work with 0.5.1?
In general, it's best to assume that the code will continue to work with new versions of the gems unless you have prior experience with a specific gem breaking backwards compatibility all the time (or if the upstream author announced a clear versioning scheme reflecting the compatibility breaks).
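To make the constraint semantics concrete (an illustrative JavaScript sketch only, not beef's or RubyGems' actual tooling, and assuming the Gemfile pin is something like `~> 0.3.0`), here is why a pessimistic pin rejects em-websocket 0.5.1 while a plain `>= 0.3.0` minimal bound, as requested above, would accept it:

```javascript
// Illustrative sketch of Ruby's "~>" (pessimistic) operator vs ">=".
// Not the actual RubyGems implementation.
const parse = (v) => v.split('.').map(Number);

// Compare two parsed versions segment by segment (missing segments are 0).
function cmp(a, b) {
  for (let i = 0; i < Math.max(a.length, b.length); i++) {
    const d = (a[i] || 0) - (b[i] || 0);
    if (d !== 0) return d;
  }
  return 0;
}

// "~> x.y.z" allows >= x.y.z and < x.(y+1): drop the last segment of the
// requirement and bump the new last segment to build the upper bound.
function pessimisticMatch(req, candidate) {
  const lo = parse(req);
  const hi = lo.slice(0, -1);
  hi[hi.length - 1] += 1;
  const c = parse(candidate);
  return cmp(c, lo) >= 0 && cmp(c, hi) < 0;
}

// ">= x.y.z" only enforces a minimal version.
function atLeast(req, candidate) {
  return cmp(parse(candidate), parse(req)) >= 0;
}

console.log(pessimisticMatch('0.3.0', '0.3.8')); // true: still in the 0.3 series
console.log(pessimisticMatch('0.3.0', '0.5.1')); // false: Kali's 0.5.1 is rejected
console.log(atLeast('0.3.0', '0.5.1')); // true: a ">= 0.3.0" bound would accept it
```

The same trade-off applies to the exact pins on eventmachine, sinatra, rack and rubydns: a minimal `>=` bound lets downstream packagers ship newer gems, while pins and `~>` ranges force them to carry the exact series the Gemfile names.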
22,384 | 7,165,256,871 | IssuesEvent | 2018-01-29 13:56:58 | angular/angular-cli | https://api.github.com/repos/angular/angular-cli | closed | ng serve reloads fail with cli 1.7.0 beta | comp: cli/build | ### Versions
```
Angular CLI: 1.7.0-beta.1
Node: 9.4.0
OS: win32 x64
Angular: 6.0.0-beta.0
... animations, common, compiler, compiler-cli, core, forms
... language-service, platform-browser, platform-browser-dynamic
... router
@angular/cli: 1.7.0-beta.1
@ngtools/json-schema: 1.1.0
typescript: 2.6.2
webpack: 3.10.0
```
### Repro steps
create app with routing
ng serve, load home page, navigate away from home url
reload page in browser - or make an edit and save for auto reload
### Observed behavior
```
Load fails as it tries to load everything from the wrong path (... below represents the path that was navigated to)
GET http://localhost:4200/.../inline.bundle.js net::ERR_ABORTED
GET http://localhost:4200/.../polyfills.bundle.js net::ERR_ABORTED
GET http://localhost:4200/.../styles.bundle.js net::ERR_ABORTED
GET http://localhost:4200/.../vendor.bundle.js net::ERR_ABORTED
GET http://localhost:4200/.../main.bundle.js net::ERR_ABORTED
```
### Desired behavior
It should work as it does with cli version < 1.7.0
### Mention any other details that might be useful (optional)
deployed dist continues to work, it's only ng serve that is failing
553,003 | 16,332,804,168 | IssuesEvent | 2021-05-12 11:23:21 | lutraconsulting/input | https://api.github.com/repos/lutraconsulting/input | closed | Refactor form-related models | enhancement forms high priority | There are several models linked together making things quite complex - and buggy in some more advanced scenarios, e.g. when using conditional visibility. It would be good to simplify the whole approach - maybe something like this:
- do not link models together - e.g. the dreaded QgsQuickSubModel
- use just simple list models (not hierarchical)
- have one central controller class for all the form logic to avoid spaghetti of signals
- remove QgsQuickAttributeModel if possible - if we don't need that item model then let's keep the logic in the central controller

181,710 | 30,728,309,483 | IssuesEvent | 2023-07-27 21:48:49 | 18F/TLC-crew | https://api.github.com/repos/18F/TLC-crew | closed | Identify and implement method for adding contact form to 18F site | design Initiative 1 | - [x] Identify options for adding a contact form to the 18F site (Google form, Touchpoints, etc.)
- [ ] Confirm form fields and email address with 18F LT / business development
- [x] Add form
- [x] Test
- [ ] Add to production
[PR](https://github.com/18F/18f.gsa.gov/pull/3716)
3,894 | 17,330,206,989 | IssuesEvent | 2021-07-28 00:22:38 | walbourn/directx-vs-templates | https://api.github.com/repos/walbourn/directx-vs-templates | closed | VS 2022 VSIX Support | maintainence | Add support for the template VSIX to install on VS 2022.
See [Microsoft Docs](https://docs.microsoft.com/en-us/visualstudio/extensibility/migration/update-visual-studio-extension?view=vs-2022) for details.
15 | 2,515,192,952 | IssuesEvent | 2015-01-15 17:00:17 | simplesamlphp/simplesamlphp | https://api.github.com/repos/simplesamlphp/simplesamlphp | opened | Extract the aselect module out of the repository | enhancement low maintainability | It should get its own repository and allow installation through composer. Is it still in use? @pmeulen @joostd is there anybody still using this? | True | Extract the aselect module out of the repository - It should get its own repository and allow installation through composer. Is it still in use? @pmeulen @joostd is there anybody still using this? | main | extract the aselect module out of the repository it should get its own repository and allow installation through composer is it still in use pmeulen joostd is there anybody still using this | 1 |
2,457 | 8,639,880,948 | IssuesEvent | 2018-11-23 22:20:24 | F5OEO/rpitx | https://api.github.com/repos/F5OEO/rpitx | closed | is it possible to use another GPIO pin ? | V1 related (not maintained) | i recently working on a project (actually an assignment from my lecturer). i tried to make my raspberry pi 3 works with a 5" touchscreen, unfortunately the touchscreen requires all the first 26 gpio pins, in other word it covers the pin 12 that supposed to be connected to a wire antenna. so is there any possible way to use another pin for the output signal ? and how to do that | True | is it possible to use another GPIO pin ? - i recently working on a project (actually an assignment from my lecturer). i tried to make my raspberry pi 3 works with a 5" touchscreen, unfortunately the touchscreen requires all the first 26 gpio pins, in other word it covers the pin 12 that supposed to be connected to a wire antenna. so is there any possible way to use another pin for the output signal ? and how to do that | main | is it possible to use another gpio pin i recently working on a project actually an assignment from my lecturer i tried to make my raspberry pi works with a touchscreen unfortunately the touchscreen requires all the first gpio pins in other word it covers the pin that supposed to be connected to a wire antenna so is there any possible way to use another pin for the output signal and how to do that | 1 |
90,436 | 10,681,562,852 | IssuesEvent | 2019-10-22 01:21:45 | dotnet/corefx | https://api.github.com/repos/dotnet/corefx | closed | HttpClient.DefaultRequestVersion of HTTP/2 is not honored if using SendAsync | area-System.Net.Http documentation | This will send a request via HTTP/1.1:
```c#
var client = new HttpClient() { DefaultRequestVersion = HttpVersion.Version20 };
var msg = new HttpRequestMessage { Method = HttpMethod.Get, RequestUri = new Uri("https://contoso.com") };
client.SendAsync(msg);
```
The default version is only being used for the convenience methods (`GetAsync` etc.). | 1.0 | HttpClient.DefaultRequestVersion of HTTP/2 is not honored if using SendAsync - This will send a request via HTTP/1.1:
```c#
var client = new HttpClient() { DefaultRequestVersion = HttpVersion.Version20 };
var msg = new HttpRequestMessage { Method = HttpMethod.Get, RequestUri = new Uri("https://contoso.com") };
client.SendAsync(msg);
```
The default version is only being used for the convenience methods (`GetAsync` etc.). | non_main | httpclient defaultrequestversion of http is not honored if using sendasync this will send a request via http c var client new httpclient defaultrequestversion httpversion var msg new httprequestmessage method httpmethod get requesturi new uri client sendasync msg the default version is only being used for the convenience methods getasync etc | 0 |
595 | 4,097,214,483 | IssuesEvent | 2016-06-03 00:10:34 | Particular/ServiceControl | https://api.github.com/repos/Particular/ServiceControl | closed | Slider for audit and retention period is not precise | Tag: Installer Tag: Maintainer Prio Type: Improvement |
The slider for the audit and retention period is not precise. If you know that you can use the arrow keys to inc/dec the value, then you see that it has a precision of one hour.

Improvements:
- Make it obvious that the user can use the arrow keys
- Add plus and minus / arrow buttons to inc / dec the sliders.
- Pop up a warning if the user wants to archive for more than two months, stating that SC is not meant for archiving and that the audit forwarding feature should be used.
13,858 | 8,380,956,544 | IssuesEvent | 2018-10-07 19:52:19 | JuliaLang/julia | https://api.github.com/repos/JuliaLang/julia | closed | sortrows() is slow for numerical arrays | help wanted performance | Julia sorting functions are slow compared to Matlab.
Matlab R2012b sorting of integer arrays:
```
>> a=randi(30000000,30000000,2);
>> tic; b=sort(a,2); toc
Elapsed time is 0.882043 seconds.
>> tic; b=sort(a,2); toc
Elapsed time is 0.903528 seconds.
>> tic; b=sort(a,2); toc
Elapsed time is 0.900367 seconds.
```
Julia sorting of integer arrays:
```
julia> a=rand(Int64,30000000,2);
julia> @time sort(a,2);
elapsed time: 46.625749977 seconds (17279978752 bytes allocated, 19.15% gc time)
```
The Julia sort() is more than 40 times slower than Matlab.
The sortrows() has a similar problem. Still working with the same 30,000,000 x 2 array size:
```
>> tic; c=sortrows(b); toc
Elapsed time is 12.071053 seconds.
```
Compare with
```
julia> @time sortrows(a);
elapsed time: 158.761497384 seconds (5765741164 bytes allocated, 3.42% gc time)
```
[ViralBShah note: The sort performance discussed here is no longer an issue, only the sortrows]
716,080 | 24,620,456,558 | IssuesEvent | 2022-10-15 21:26:14 | bitfoundation/bitplatform | https://api.github.com/repos/bitfoundation/bitplatform | closed | Incorrect `Value` handling in the `BitInputBase` | area / components high priority | The current implementation in the `BitInputBase` does not update the UI when only the `Value` parameter is changed from outside the input component.
4,530 | 23,545,888,748 | IssuesEvent | 2022-08-21 05:06:09 | web3phl/directory | https://api.github.com/repos/web3phl/directory | closed | add footer info | chore maintainers only tweak | ### 🤔 Not Existing Feature Request?
- [X] Yes, I'm sure, this is a new requested feature!
### 🤔 Not an Idea or Suggestion?
- [X] Yes, I'm sure, this is not an idea or suggestion!
### 📋 Request Details
Add the name of the dev team who is responsible for the development of this directory. It should be simple, and can be improved later. I'll take this one btw.
### 📜 Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/web3phl/directory/blob/main/CODE_OF_CONDUCT.md).
2,151 | 7,345,448,378 | IssuesEvent | 2018-03-07 17:26:31 | department-of-veterans-affairs/compliance | https://api.github.com/repos/department-of-veterans-affairs/compliance | opened | Human Interface | Design Engineering Architecture (DEA) In Scope | **Description**
Interfaces with Common Look and Feel, that are compliant with Section 508 of the Rehabilitation Act of 1998
**Success Criteria**
| 1.0 | Human Interface - **Description**
Interfaces with Common Look and Feel, that are compliant with Section 508 of the Rehabilitation Act of 1998
**Success Criteria**
| non_main | human interface description interfaces with common look and feel that are compliant with section of the rehabilitation act of success criteria | 0 |
2,088 | 7,105,604,811 | IssuesEvent | 2018-01-16 14:14:00 | tgstation/tgstation | https://api.github.com/repos/tgstation/tgstation | closed | Chemistry Macros not Functioning Correctly | Maintainability/Hinders improvements UI | Issue reported from Round ID: 81551 (/tg/Station Sybil [ENGLISH] [US-EAST] [100% LAG FREE])
Using the macro:
nitrogen=10;silicon=10;potassium=10;carbon=10;oxygen=10;sugar=10;carbon=15;silicon=15
Expected output is
90 units of Tricord (30 Anti-Tox, 30 Bicaridine, 30 Kelotane)
Instead I get:
90 units of Tricord
10 units of Kelotane
Using the macro:
nitrogen=10;silicon=10;potassium=10;carbon=10;oxygen=10;sugar=10;carbon=20;silicon=20
Expected output is:
90 units of Tricord
10 units of Kelotane
Instead I get:
90 units of Tricord
30 units of Kelotane | True | Chemistry Macros not Functioning Correctly - Issue reported from Round ID: 81551 (/tg/Station Sybil [ENGLISH] [US-EAST] [100% LAG FREE])
Using the macro:
nitrogen=10;silicon=10;potassium=10;carbon=10;oxygen=10;sugar=10;carbon=15;silicon=15
Expected output is
90 units of Tricord (30 Anti-Tox, 30 Bicaridine, 30 Kelotane)
Instead I get:
90 units of Tricord
10 units of Kelotane
Using the macro:
nitrogen=10;silicon=10;potassium=10;carbon=10;oxygen=10;sugar=10;carbon=20;silicon=20
Expected output is:
90 units of Tricord
10 units of Kelotane
Instead I get:
90 units of Tricord
30 units of Kelotane | main | chemistry macros not functioning correctly issue reported from round id tg station sybil using the macro nitrogen silicon potassium carbon oxygen sugar carbon silicon expected output is units of tricord anti tox bicaridine kelotane instead i get units of tricord units of kelotane using the macro nitrogen silicon potassium carbon oxygen sugar carbon silicon expected output is units of tricord units of kelotane instead i get units of tricord units of kelotane | 1 |
1,360 | 5,872,483,397 | IssuesEvent | 2017-05-15 11:39:47 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | win_stat returns md5, but the data is sha1 | affects_1.9 bug_report docs_report waiting_on_maintainer windows | ##### Issue Type:
- Bug Report
##### Component Name:
win_stat module
##### Ansible Version:
ansible 1.9.3
(even though I'm on 1.9.3, I provide links to code in the devel branch)
##### Ansible Configuration:
N/A
##### Environment:
N/A
##### Summary:
The win_stat module reportedly returns a checksum and md5, but in reality it is a sha1.
##### Steps To Reproduce:
Run win_stat on a file and take note of 'md5' result (and the 'checksum' result if desired).
##### Expected Results:
I would expect 'md5' result to be the actual md5sum of the file.
Additionally, if `sha1` was returned in the result, I would expect it to be the actual sha1sum of the file.
##### Actual Results:
In actuality, the 'md5' result is the sha1sum of the file.
You can easily see the bug in the devel branch at these two links:
https://github.com/ansible/ansible-modules-core/blob/devel/windows/win_stat.ps1#L68
https://github.com/ansible/ansible/blob/devel/lib/ansible/module_utils/powershell.ps1#L213
| True | win_stat returns md5, but the data is sha1 - ##### Issue Type:
- Bug Report
##### Component Name:
win_stat module
##### Ansible Version:
ansible 1.9.3
(even though I'm on 1.9.3, I provide links to code in the devel branch)
##### Ansible Configuration:
N/A
##### Environment:
N/A
##### Summary:
The win_stat module reportedly returns a checksum and md5, but in reality it is a sha1.
##### Steps To Reproduce:
Run win_stat on a file and take note of 'md5' result (and the 'checksum' result if desired).
##### Expected Results:
I would expect 'md5' result to be the actual md5sum of the file.
Additionally, if `sha1` was returned in the result, I would expect it to be the actual sha1sum of the file.
##### Actual Results:
In actuality, the 'md5' result is the sha1sum of the file.
You can easily see the bug in the devel branch at these two links:
https://github.com/ansible/ansible-modules-core/blob/devel/windows/win_stat.ps1#L68
https://github.com/ansible/ansible/blob/devel/lib/ansible/module_utils/powershell.ps1#L213
| main | win stat returns but the data is issue type bug report component name win stat module ansible version ansible even though i m on i provide links to code in the devel branch ansible configuration n a environment n a summary the win stat modules reportedly returns a checksum and but in all reality it is a steps to reproduce run win stat on a file and take note of result and the checksum result if desired expected results i would expect result to be the actual of the file additionally if was returned in the result i would expect it to be the actual of the file actual results in actuality the result is the of the file you can easily see the bug in the devel branch at these two links | 1 |
3,745 | 15,764,579,442 | IssuesEvent | 2021-03-31 13:22:28 | arcticicestudio/styleguide-javascript | https://api.github.com/repos/arcticicestudio/styleguide-javascript | opened | Update Node package dependencies & GitHub Action versions | context-workflow scope-compatibility scope-maintainability scope-quality scope-stability type-task | In #32 all ESLint packages and dependencies have been updated to the latest version.
This issue updates all repository development packages and GitHub Actions to the latest versions and adapts to the changes:
- **Update to ESLint v7** — bump package version from [`v6.2.0` to `v7.23.0`][gh-eslint/eslint-comp-v6.2.0_v7.23.0]. See #32 and the [official v7 migration guide][esl-docs-guides-mig_v7] for more details.
- **Remove `--ext` option for ESLint tasks** — as of ESLint v7, [files matched by `overrides[].files` are now linted by default][esl-docs-guides-mig_v7#override_file_match] which makes it obsolete to explicitly define file extensions like `*.js`.
- [del-cli][gh-sindresorhus/del-cli] — Bump minimum version from [`v2.0.0` to `v3.0.1`][gh-sindresorhus/del-cli-comp-v2.0.0_v3.0.1].
- [eslint-config-prettier][gh-prettier/eslint-config-prettier] — Bump version from [`v6.1.0` to `v8.1.0`][gh-prettier/eslint-config-prettier-comp-v6.1.0_v8.1.0].
- [eslint-plugin-prettier][gh-prettier/eslint-plugin-prettier] — Bump minimum version from [`v3.1.0` to `v3.3.1`][gh-prettier/eslint-plugin-prettier-comp-v3.1.0_v3.3.1].
- [eslint-plugin-import][gh-benmosher/eslint-plugin-import] — Bump minimum version from [`v2.18.2` to `v2.22.1`][gh-benmosher/eslint-plugin-import-comp-v2.18.2_v2.22.1].
- [husky][gh-typicode/husky] — Bump minimum version from [`v3.0.4` to `v6.0.0`][gh-typicode/husky-comp-v3.0.4_v6.0.0]. This also includes some breaking changes that require migrations. Run the official migration CLI to automatically migrate from v4 to v6: `npx husky-init && npm exec -- github:typicode/husky-4-to-6 --remove-v4-config`
- [lint-staged][gh-okonet/lint-staged] — Bump minimum version from [`v9.2.3` to `v10.5.4`][gh-okonet/lint-staged-comp-v9.2.3_v10.5.4].
- [prettier][gh-prettier/prettier] — Bump minimum version from [`v1.18.2` to `v2.2.1`][gh-prettier/prettier-comp-v1.18.2_v2.2.1].
- [remark-cli][gh-remarkjs/remark] — Bump minimum version from [`v7.0.0` to `v9.0.0`][gh-remarkjs/remark-comp-v7.0.0_v9.0.0].
[esl-docs-guides-mig_v7]: https://eslint.org/docs/user-guide/migrating-to-7.0.0
[esl-docs-guides-mig_v7#override_file_match]: https://eslint.org/docs/user-guide/migrating-to-7.0.0#lint-files-matched-by-overridesfiles-by-default
[gh-benmosher/eslint-plugin-import-comp-v2.18.2_v2.22.1]: https://github.com/benmosher/eslint-plugin-import/compare/v2.18.2...v2.22.1
[gh-benmosher/eslint-plugin-import]: https://github.com/benmosher/eslint-plugin-import
[gh-eslint/eslint-comp-v6.2.0_v7.23.0]: https://github.com/eslint/eslint/compare/v6.2.0...v7.23.0
[gh-okonet/lint-staged-comp-v9.2.3_v10.5.4]: https://github.com/okonet/lint-staged/compare/v9.2.3...v10.5.4
[gh-okonet/lint-staged]: https://github.com/okonet/lint-staged
[gh-prettier/eslint-config-prettier-comp-v6.1.0_v8.1.0]: https://github.com/prettier/eslint-config-prettier/compare/v6.1.0...v8.1.0
[gh-prettier/eslint-config-prettier]: https://github.com/prettier/eslint-config-prettier
[gh-prettier/eslint-plugin-prettier-comp-v3.1.0_v3.3.1]: https://github.com/prettier/eslint-plugin-prettier/compare/v3.1.0...v3.3.1
[gh-prettier/eslint-plugin-prettier]: https://github.com/prettier/eslint-plugin-prettier
[gh-prettier/prettier-comp-v1.18.2_v2.2.1]: https://github.com/prettier/prettier/compare/v1.18.2...v2.2.1
[gh-prettier/prettier]: https://github.com/prettier/prettier
[gh-remarkjs/remark-comp-v7.0.0_v9.0.0]: https://github.com/remarkjs/remark/compare/v7.0.0...v9.0.0
[gh-remarkjs/remark]: https://github.com/remarkjs/remark/releases
[gh-sindresorhus/del-cli-comp-v2.0.0_v3.0.1]: https://github.com/sindresorhus/del-cli/compare/v2.0.0...v3.0.1
[gh-sindresorhus/del-cli]: https://github.com/sindresorhus/del-cli
[gh-typicode/husky-comp-v3.0.4_v6.0.0]: https://github.com/typicode/husky/compare/v3.0.4...v6.0.0
[gh-typicode/husky]: https://github.com/typicode/husky
| True | Update Node package dependencies & GitHub Action versions - In #32 all ESLint packages and dependencies have been updated to the latest version.
This issue updates all repository development packages and GitHub Actions to the latest versions and adapts to the changes:
- **Update to ESLint v7** — bump package version from [`v6.2.0` to `v7.23.0`][gh-eslint/eslint-comp-v6.2.0_v7.23.0]. See #32 and the [official v7 migration guide][esl-docs-guides-mig_v7] for more details.
- **Remove `--ext` option for ESLint tasks** — as of ESLint v7, [files matched by `overrides[].files` are now linted by default][esl-docs-guides-mig_v7#override_file_match] which makes it obsolete to explicitly define file extensions like `*.js`.
- [del-cli][gh-sindresorhus/del-cli] — Bump minimum version from [`v2.0.0` to `v3.0.1`][gh-sindresorhus/del-cli-comp-v2.0.0_v3.0.1].
- [eslint-config-prettier][gh-prettier/eslint-config-prettier] — Bump version from [`v6.1.0` to `v8.1.0`][gh-prettier/eslint-config-prettier-comp-v6.1.0_v8.1.0].
- [eslint-plugin-prettier][gh-prettier/eslint-plugin-prettier] — Bump minimum version from [`v3.1.0` to `v3.3.1`][gh-prettier/eslint-plugin-prettier-comp-v3.1.0_v3.3.1].
- [eslint-plugin-import][gh-benmosher/eslint-plugin-import] — Bump minimum version from [`v2.18.2` to `v2.22.1`][gh-benmosher/eslint-plugin-import-comp-v2.18.2_v2.22.1].
- [husky][gh-typicode/husky] — Bump minimum version from [`v3.0.4` to `v6.0.0`][gh-typicode/husky-comp-v3.0.4_v6.0.0]. This also includes some breaking changes that require migrations. Run the official migration CLI to automatically migrate from v4 to v6: `npx husky-init && npm exec -- github:typicode/husky-4-to-6 --remove-v4-config`
- [lint-staged][gh-okonet/lint-staged] — Bump minimum version from [`v9.2.3` to `v10.5.4`][gh-okonet/lint-staged-comp-v9.2.3_v10.5.4].
- [prettier][gh-prettier/prettier] — Bump minimum version from [`v1.18.2` to `v2.2.1`][gh-prettier/prettier-comp-v1.18.2_v2.2.1].
- [remark-cli][gh-remarkjs/remark] — Bump minimum version from [`v7.0.0` to `v9.0.0`][gh-remarkjs/remark-comp-v7.0.0_v9.0.0].
[esl-docs-guides-mig_v7]: https://eslint.org/docs/user-guide/migrating-to-7.0.0
[esl-docs-guides-mig_v7#override_file_match]: https://eslint.org/docs/user-guide/migrating-to-7.0.0#lint-files-matched-by-overridesfiles-by-default
[gh-benmosher/eslint-plugin-import-comp-v2.18.2_v2.22.1]: https://github.com/benmosher/eslint-plugin-import/compare/v2.18.2...v2.22.1
[gh-benmosher/eslint-plugin-import]: https://github.com/benmosher/eslint-plugin-import
[gh-eslint/eslint-comp-v6.2.0_v7.23.0]: https://github.com/eslint/eslint/compare/v6.2.0...v7.23.0
[gh-okonet/lint-staged-comp-v9.2.3_v10.5.4]: https://github.com/okonet/lint-staged/compare/v9.2.3...v10.5.4
[gh-okonet/lint-staged]: https://github.com/okonet/lint-staged
[gh-prettier/eslint-config-prettier-comp-v6.1.0_v8.1.0]: https://github.com/prettier/eslint-config-prettier/compare/v6.1.0...v8.1.0
[gh-prettier/eslint-config-prettier]: https://github.com/prettier/eslint-config-prettier
[gh-prettier/eslint-plugin-prettier-comp-v3.1.0_v3.3.1]: https://github.com/prettier/eslint-plugin-prettier/compare/v3.1.0...v3.3.1
[gh-prettier/eslint-plugin-prettier]: https://github.com/prettier/eslint-plugin-prettier
[gh-prettier/prettier-comp-v1.18.2_v2.2.1]: https://github.com/prettier/prettier/compare/v1.18.2...v2.2.1
[gh-prettier/prettier]: https://github.com/prettier/prettier
[gh-remarkjs/remark-comp-v7.0.0_v9.0.0]: https://github.com/remarkjs/remark/compare/v7.0.0...v9.0.0
[gh-remarkjs/remark]: https://github.com/remarkjs/remark/releases
[gh-sindresorhus/del-cli-comp-v2.0.0_v3.0.1]: https://github.com/sindresorhus/del-cli/compare/v2.0.0...v3.0.1
[gh-sindresorhus/del-cli]: https://github.com/sindresorhus/del-cli
[gh-typicode/husky-comp-v3.0.4_v6.0.0]: https://github.com/typicode/husky/compare/v3.0.4...v6.0.0
[gh-typicode/husky]: https://github.com/typicode/husky
| main | update node package dependencies github action versions in all eslint packages and dependencies have been updated to the latest version this issue updates all repository development packages and github actions to the latest versions and adapts to the changes update to eslint — bump package version from see and the for more details remove ext option for eslint tasks — as of eslint files are now linted by default which makes it obsolete to explicitly define file extensions like js — bump minimum version from — bump version from — bump minimum version from — bump minimum version from — bump minimum version from this also includes some breaking changes that require migrations run the official migration cli to automatically migrate from to npx husky init npm exec github typicode husky to remove config — bump minimum version from — bump minimum version from — bump minimum version from | 1 |
126,041 | 10,374,143,963 | IssuesEvent | 2019-09-09 08:58:07 | QubesOS/updates-status | https://api.github.com/repos/QubesOS/updates-status | closed | core-qubesdb v4.1.0 (r4.1) | r4.1-buster-cur-test r4.1-centos7-cur-test r4.1-dom0-cur-test r4.1-fc29-cur-test r4.1-fc30-cur-test r4.1-stretch-cur-test | Update of core-qubesdb to v4.1.0 for Qubes r4.1, see comments below for details.
Built from: https://github.com/QubesOS/qubes-core-qubesdb/commit/cd1472fdefd23684123ae21192c5cd02108328ff
[Changes since previous version](https://github.com/QubesOS/qubes-core-qubesdb/compare/v4.0.10...v4.1.0):
QubesOS/qubes-core-qubesdb@cd1472f version 4.1.0
QubesOS/qubes-core-qubesdb@64c6428 travis: update to R4.1
QubesOS/qubes-core-qubesdb@0235b68 rpm: drop direct xen-libs dependency
Referenced issues:
QubesOS/qubes-issues#3945
If you're release manager, you can issue GPG-inline signed command:
* `Upload core-qubesdb cd1472fdefd23684123ae21192c5cd02108328ff r4.1 current repo` (available 7 days from now)
* `Upload core-qubesdb cd1472fdefd23684123ae21192c5cd02108328ff r4.1 current (dists) repo`, you can choose subset of distributions, like `vm-fc24 vm-fc25` (available 7 days from now)
* `Upload core-qubesdb cd1472fdefd23684123ae21192c5cd02108328ff r4.1 security-testing repo`
Above commands will work only if packages in current-testing repository were built from given commit (i.e. no new version superseded it).
| 6.0 | core-qubesdb v4.1.0 (r4.1) - Update of core-qubesdb to v4.1.0 for Qubes r4.1, see comments below for details.
Built from: https://github.com/QubesOS/qubes-core-qubesdb/commit/cd1472fdefd23684123ae21192c5cd02108328ff
[Changes since previous version](https://github.com/QubesOS/qubes-core-qubesdb/compare/v4.0.10...v4.1.0):
QubesOS/qubes-core-qubesdb@cd1472f version 4.1.0
QubesOS/qubes-core-qubesdb@64c6428 travis: update to R4.1
QubesOS/qubes-core-qubesdb@0235b68 rpm: drop direct xen-libs dependency
Referenced issues:
QubesOS/qubes-issues#3945
If you're release manager, you can issue GPG-inline signed command:
* `Upload core-qubesdb cd1472fdefd23684123ae21192c5cd02108328ff r4.1 current repo` (available 7 days from now)
* `Upload core-qubesdb cd1472fdefd23684123ae21192c5cd02108328ff r4.1 current (dists) repo`, you can choose subset of distributions, like `vm-fc24 vm-fc25` (available 7 days from now)
* `Upload core-qubesdb cd1472fdefd23684123ae21192c5cd02108328ff r4.1 security-testing repo`
Above commands will work only if packages in current-testing repository were built from given commit (i.e. no new version superseded it).
| non_main | core qubesdb update of core qubesdb to for qubes see comments below for details built from qubesos qubes core qubesdb version qubesos qubes core qubesdb travis update to qubesos qubes core qubesdb rpm drop direct xen libs dependency referenced issues qubesos qubes issues if you re release manager you can issue gpg inline signed command upload core qubesdb current repo available days from now upload core qubesdb current dists repo you can choose subset of distributions like vm vm available days from now upload core qubesdb security testing repo above commands will work only if packages in current testing repository were built from given commit i e no new version superseded it | 0 |
62,404 | 17,023,915,875 | IssuesEvent | 2021-07-03 04:32:21 | tomhughes/trac-tickets | https://api.github.com/repos/tomhughes/trac-tickets | closed | HTML and link export not shown in Firefox | Component: website Priority: minor Resolution: worksforme Type: defect | **[Submitted to the original trac issue database at 11.30am, Wednesday, 31st December 2014]**
Hi everyone,
in Firefox, the "share" tab on openstreetmap.org doesnt show the options to export a link or embeddable HTML, but only the option to export a static image. Chromium shows both options as expected, whereas in Firefox the HTML/link export simply doesnt show up at all; see the attached screenshots where it is easily noticable that the share tab is much shorter in Firefox than in Chromium.
This is on Arch Linux 64bit, with Firefox 34.0.5 and Chromium 39.0.2171.95.
Steps to reproduce:
1. Install Firefox
2. Browse to https://openstreetmap.org
3. Click on the "share" tab
4. Wonder where the HTML and link export option is
Greetings
Marvin | 1.0 | HTML and link export not shown in Firefox - **[Submitted to the original trac issue database at 11.30am, Wednesday, 31st December 2014]**
Hi everyone,
in Firefox, the "share" tab on openstreetmap.org doesnt show the options to export a link or embeddable HTML, but only the option to export a static image. Chromium shows both options as expected, whereas in Firefox the HTML/link export simply doesnt show up at all; see the attached screenshots where it is easily noticable that the share tab is much shorter in Firefox than in Chromium.
This is on Arch Linux 64bit, with Firefox 34.0.5 and Chromium 39.0.2171.95.
Steps to reproduce:
1. Install Firefox
2. Browse to https://openstreetmap.org
3. Click on the "share" tab
4. Wonder where the HTML and link export option is
Greetings
Marvin | non_main | html and link export not shown in firefox hi everyone in firefox the share tab on openstreetmap org doesnt show the options to export a link or embeddable html but only the option to export a static image chromium shows both options as expected whereas in firefox the html link export simply doesnt show up at all see the attached screenshots where it is easily noticable that the share tab is much shorter in firefox than in chromium this is on arch linux with firefox and chromium steps to reproduce install firefox browse to click on the share tab wonder where the html and link export option is greetings marvin | 0 |
246,777 | 18,853,695,635 | IssuesEvent | 2021-11-12 01:32:06 | hackforla/design-systems | https://api.github.com/repos/hackforla/design-systems | closed | Revisit the problem statement of the project | documentation enhancement Role: Research size: 1pts Feature User Journey | ### Overview
After obtaining the first research insights the problem statement needs to be revisited.
### Action Items
- [x] Revisit the current problem statement document.
- [x] Create and fill out a Problem Statement table and add to the Research Plan
### Resources/Instructions
[Problem statement](https://docs.google.com/document/d/16fLMrwmJI6y0Hdh5sij_wKvwGT7eCuBvX9kIswmL7vQ/edit#heading=h.qp2vcpubfz0j)
[Research plan] | 1.0 | Revisit the problem statement of the project - ### Overview
After obtaining the first research insights the problem statement needs to be revisited.
### Action Items
- [x] Revisit the current problem statement document.
- [x] Create and fill out a Problem Statement table and add to the Research Plan
### Resources/Instructions
[Problem statement](https://docs.google.com/document/d/16fLMrwmJI6y0Hdh5sij_wKvwGT7eCuBvX9kIswmL7vQ/edit#heading=h.qp2vcpubfz0j)
[Research plan] | non_main | revisit the problem statement of the project overview after obtaining the first research insights the problem statement needs to be revisited action items revisit the current problem statement document create and fill out a problem statement table and add to the research plan resources instructions | 0 |
120,157 | 10,101,226,071 | IssuesEvent | 2019-07-29 08:13:56 | gap-packages/PackageManager | https://api.github.com/repos/gap-packages/PackageManager | closed | Improve test robustness | bug in tests | GAPDoc updated from 1.6.2 to 1.6.3, so we have a new test failure:
```
########> Diff in /home/gap/inst/gap-stable-4.10/pkg/PackageManager-0.4/tst/Pa\
ckageManager.tst:664
# Input is:
InstallPackage("https://gap-packages.github.io/PackageManager/dummy/uuid-too-n\
ew.tar.gz");
# Expected output:
#I Package GAPDoc >= 999.0 unavailable: only version 1.6.2 was found
#I Dependencies not satisfied for uuid-0.6
false
# But found:
#I Package GAPDoc >= 999.0 unavailable: only version 1.6.3 was found
#I Dependencies not satisfied for uuid-0.6
false
########
```
(see https://travis-ci.org/gap-infra/gap-docker-pkg-tests-stable-4.10/jobs/559464647). Perhaps one could invent a way to reduce reliance of this test on a specific version number.
| 1.0 | Improve test robustness - GAPDoc updated from 1.6.2 to 1.6.3, so we have a new test failure:
```
########> Diff in /home/gap/inst/gap-stable-4.10/pkg/PackageManager-0.4/tst/Pa\
ckageManager.tst:664
# Input is:
InstallPackage("https://gap-packages.github.io/PackageManager/dummy/uuid-too-n\
ew.tar.gz");
# Expected output:
#I Package GAPDoc >= 999.0 unavailable: only version 1.6.2 was found
#I Dependencies not satisfied for uuid-0.6
false
# But found:
#I Package GAPDoc >= 999.0 unavailable: only version 1.6.3 was found
#I Dependencies not satisfied for uuid-0.6
false
########
```
(see https://travis-ci.org/gap-infra/gap-docker-pkg-tests-stable-4.10/jobs/559464647). Perhaps one could invent a way to reduce reliance of this test on a specific version number.
| non_main | improve test robustness gapdoc updated from to so we have a new test failure diff in home gap inst gap stable pkg packagemanager tst pa ckagemanager tst input is installpackage ew tar gz expected output i package gapdoc unavailable only version was found i dependencies not satisfied for uuid false but found i package gapdoc unavailable only version was found i dependencies not satisfied for uuid false see perhaps one could invent a way to reduce reliance of this test on a specific version number | 0 |
118,753 | 11,991,877,146 | IssuesEvent | 2020-04-08 09:08:26 | python-pillow/Pillow | https://api.github.com/repos/python-pillow/Pillow | closed | I cannot compile docs | Documentation | <!--
Thank you for reporting an issue.
Follow these guidelines to ensure your issue is handled properly.
If you have a ...
1. General question: consider asking the question on Stack Overflow
with the python-imaging-library tag:
* https://stackoverflow.com/questions/tagged/python-imaging-library
Do not ask a question in both places.
If you think you have found a bug or have an unexplained exception
then file a bug report here.
2. Bug report: include a self-contained, copy-pastable example that
generates the issue if possible. Be concise with code posted.
Guidelines on how to provide a good bug report:
* https://stackoverflow.com/help/mcve
Bug reports which follow these guidelines are easier to diagnose,
and are often handled much more quickly.
3. Feature request: do a quick search of existing issues
to make sure this has not been asked before.
We know asking good questions takes effort, and we appreciate your time.
Thank you.
-->
### What did you do?
I am creating a Linux package of Pillow 7.1.1 for PisiLinux. I can compile Pillow with no problems, but I cannot compile the docs.
### What did you expect to happen?
Creating html documentation.
### What actually happened?
The documentation build finished with problems (7 warnings), then stopped.
### What are your OS, Python and Pillow versions?
* OS: PisiLinux 2.1.2
* Python: 3.8.2
* Pillow: 7.1.1
* Sphinx: 3.0.0
<!--
Please include **code** that reproduces the issue and whenever possible, an **image** that demonstrates the issue. Please upload images to GitHub, not to third-party file hosting sites. If necessary, add the image to a zip or tar archive.
The best reproductions are self-contained scripts with minimal dependencies. If you are using a framework such as Plone, Django, or Buildout, try to replicate the issue just using Pillow.
-->
```python
def build():
# suppress warnings
pisitools.cflags.add("-Wno-sign-compare -Wno-pointer-sign")
# fix unused direct dependency analysis
shelltools.export("LDSHARED", "x86_64-pc-linux-gnu-gcc -Wl,-O1,--as-needed -shared -lpthread")
pythonmodules.compile(pyVer="3")
# build documentation
shelltools.export("PYTHONPATH", "%s/Pillow-%s/build/lib.linux-x86_64-3.8" % (get.workDIR(), get.srcVERSION()))
shelltools.system("make -C docs html")
```
And This was the error:
```
Running make -C docs html
make: Entering directory '/var/pisi/python3-pillow-7.1.1-6/work/Pillow-7.1.1/docs'
sphinx-build -b html -W --keep-going -d _build/doctrees . _build/html
Running Sphinx v3.0.0
making output directory... done
/var/pisi/python3-pillow-7.1.1-6/work/Pillow-7.1.1/docs/conf.py:292: RemovedInSphinx40Warning: The app.add_javascript() is deprecated. Please use app.add_js_file() instead.
app.add_javascript("js/script.js")
building [mo]: targets for 0 po files that are out of date
building [html]: targets for 75 source files that are out of date
updating environment: [new config] 75 added, 0 changed, 0 removed
reading sources... [100%] releasenotes/index
/var/pisi/python3-pillow-7.1.1-6/work/Pillow-7.1.1/build/lib.linux-x86_64-3.8/PIL/PngImagePlugin.py:docstring of PIL.PngImagePlugin.PngImageFile:1: WARNING: duplicate object description of PIL.PngImagePlugin.PngImageFile, other instance in reference/plugins, use :noindex: for one of them
/var/pisi/python3-pillow-7.1.1-6/work/Pillow-7.1.1/build/lib.linux-x86_64-3.8/PIL/PngImagePlugin.py:docstring of PIL.PngImagePlugin.PngImageFile.load_end:1: WARNING: duplicate object description of PIL.PngImagePlugin.PngImageFile.load_end, other instance in reference/plugins, use :noindex: for one of them
/var/pisi/python3-pillow-7.1.1-6/work/Pillow-7.1.1/build/lib.linux-x86_64-3.8/PIL/PngImagePlugin.py:docstring of PIL.PngImagePlugin.PngImageFile.load_prepare:1: WARNING: duplicate object description of PIL.PngImagePlugin.PngImageFile.load_prepare, other instance in reference/plugins, use :noindex: for one of them
/var/pisi/python3-pillow-7.1.1-6/work/Pillow-7.1.1/build/lib.linux-x86_64-3.8/PIL/PngImagePlugin.py:docstring of PIL.PngImagePlugin.PngImageFile.load_read:1: WARNING: duplicate object description of PIL.PngImagePlugin.PngImageFile.load_read, other instance in reference/plugins, use :noindex: for one of them
/var/pisi/python3-pillow-7.1.1-6/work/Pillow-7.1.1/build/lib.linux-x86_64-3.8/PIL/PngImagePlugin.py:docstring of PIL.PngImagePlugin.PngImageFile.seek:1: WARNING: duplicate object description of PIL.PngImagePlugin.PngImageFile.seek, other instance in reference/plugins, use :noindex: for one of them
/var/pisi/python3-pillow-7.1.1-6/work/Pillow-7.1.1/build/lib.linux-x86_64-3.8/PIL/PngImagePlugin.py:docstring of PIL.PngImagePlugin.PngImageFile.tell:1: WARNING: duplicate object description of PIL.PngImagePlugin.PngImageFile.tell, other instance in reference/plugins, use :noindex: for one of them
/var/pisi/python3-pillow-7.1.1-6/work/Pillow-7.1.1/build/lib.linux-x86_64-3.8/PIL/PngImagePlugin.py:docstring of PIL.PngImagePlugin.PngImageFile.verify:1: WARNING: duplicate object description of PIL.PngImagePlugin.PngImageFile.verify, other instance in reference/plugins, use :noindex: for one of them
looking for now-outdated files... none found
pickling environment... done
checking consistency... done
preparing documents... done
writing output... [100%] releasenotes/index
generating indices... genindex py-modindexdone
highlighting module code... [100%] PIL._binary
writing additional pages... searchdone
copying images... [100%] releasenotes/../../Tests/images/imagedraw_stroke_different.png
copying static files... ... done
copying extra files... done
dumping search index in English (code: en)... done
dumping object inventory... done
build finished with problems, 7 warnings.
make: *** [Makefile:45: html] Error 1
make: Leaving directory '/var/pisi/python3-pillow-7.1.1-6/work/Pillow-7.1.1/docs'
```
| 1.0 | I cannot compile docs - <!--
Thank you for reporting an issue.
Follow these guidelines to ensure your issue is handled properly.
If you have a ...
1. General question: consider asking the question on Stack Overflow
with the python-imaging-library tag:
* https://stackoverflow.com/questions/tagged/python-imaging-library
Do not ask a question in both places.
If you think you have found a bug or have an unexplained exception
then file a bug report here.
2. Bug report: include a self-contained, copy-pastable example that
generates the issue if possible. Be concise with code posted.
Guidelines on how to provide a good bug report:
* https://stackoverflow.com/help/mcve
Bug reports which follow these guidelines are easier to diagnose,
and are often handled much more quickly.
3. Feature request: do a quick search of existing issues
to make sure this has not been asked before.
We know asking good questions takes effort, and we appreciate your time.
Thank you.
-->
### What did you do?
I am creating a Linux package of Pillow 7.1.1 for PisiLinux. I can compile Pillow with no problems, but I cannot compile the docs.
### What did you expect to happen?
Creating html documentation.
### What actually happened?
The documentation build finished with problems (7 warnings), then stopped.
### What are your OS, Python and Pillow versions?
* OS: PisiLinux 2.1.2
* Python: 3.8.2
* Pillow: 7.1.1
* Sphinx: 3.0.0
<!--
Please include **code** that reproduces the issue and whenever possible, an **image** that demonstrates the issue. Please upload images to GitHub, not to third-party file hosting sites. If necessary, add the image to a zip or tar archive.
The best reproductions are self-contained scripts with minimal dependencies. If you are using a framework such as Plone, Django, or Buildout, try to replicate the issue just using Pillow.
-->
```python
def build():
# suppress warnings
pisitools.cflags.add("-Wno-sign-compare -Wno-pointer-sign")
# fix unused direct dependency analysis
shelltools.export("LDSHARED", "x86_64-pc-linux-gnu-gcc -Wl,-O1,--as-needed -shared -lpthread")
pythonmodules.compile(pyVer="3")
# build documentation
shelltools.export("PYTHONPATH", "%s/Pillow-%s/build/lib.linux-x86_64-3.8" % (get.workDIR(), get.srcVERSION()))
shelltools.system("make -C docs html")
```
And this was the error:
```
Running make -C docs html
make: Entering directory '/var/pisi/python3-pillow-7.1.1-6/work/Pillow-7.1.1/docs'
sphinx-build -b html -W --keep-going -d _build/doctrees . _build/html
Running Sphinx v3.0.0
making output directory... done
/var/pisi/python3-pillow-7.1.1-6/work/Pillow-7.1.1/docs/conf.py:292: RemovedInSphinx40Warning: The app.add_javascript() is deprecated. Please use app.add_js_file() instead.
app.add_javascript("js/script.js")
building [mo]: targets for 0 po files that are out of date
building [html]: targets for 75 source files that are out of date
updating environment: [new config] 75 added, 0 changed, 0 removed
reading sources... [100%] releasenotes/index
/var/pisi/python3-pillow-7.1.1-6/work/Pillow-7.1.1/build/lib.linux-x86_64-3.8/PIL/PngImagePlugin.py:docstring of PIL.PngImagePlugin.PngImageFile:1: WARNING: duplicate object description of PIL.PngImagePlugin.PngImageFile, other instance in reference/plugins, use :noindex: for one of them
/var/pisi/python3-pillow-7.1.1-6/work/Pillow-7.1.1/build/lib.linux-x86_64-3.8/PIL/PngImagePlugin.py:docstring of PIL.PngImagePlugin.PngImageFile.load_end:1: WARNING: duplicate object description of PIL.PngImagePlugin.PngImageFile.load_end, other instance in reference/plugins, use :noindex: for one of them
/var/pisi/python3-pillow-7.1.1-6/work/Pillow-7.1.1/build/lib.linux-x86_64-3.8/PIL/PngImagePlugin.py:docstring of PIL.PngImagePlugin.PngImageFile.load_prepare:1: WARNING: duplicate object description of PIL.PngImagePlugin.PngImageFile.load_prepare, other instance in reference/plugins, use :noindex: for one of them
/var/pisi/python3-pillow-7.1.1-6/work/Pillow-7.1.1/build/lib.linux-x86_64-3.8/PIL/PngImagePlugin.py:docstring of PIL.PngImagePlugin.PngImageFile.load_read:1: WARNING: duplicate object description of PIL.PngImagePlugin.PngImageFile.load_read, other instance in reference/plugins, use :noindex: for one of them
/var/pisi/python3-pillow-7.1.1-6/work/Pillow-7.1.1/build/lib.linux-x86_64-3.8/PIL/PngImagePlugin.py:docstring of PIL.PngImagePlugin.PngImageFile.seek:1: WARNING: duplicate object description of PIL.PngImagePlugin.PngImageFile.seek, other instance in reference/plugins, use :noindex: for one of them
/var/pisi/python3-pillow-7.1.1-6/work/Pillow-7.1.1/build/lib.linux-x86_64-3.8/PIL/PngImagePlugin.py:docstring of PIL.PngImagePlugin.PngImageFile.tell:1: WARNING: duplicate object description of PIL.PngImagePlugin.PngImageFile.tell, other instance in reference/plugins, use :noindex: for one of them
/var/pisi/python3-pillow-7.1.1-6/work/Pillow-7.1.1/build/lib.linux-x86_64-3.8/PIL/PngImagePlugin.py:docstring of PIL.PngImagePlugin.PngImageFile.verify:1: WARNING: duplicate object description of PIL.PngImagePlugin.PngImageFile.verify, other instance in reference/plugins, use :noindex: for one of them
looking for now-outdated files... none found
pickling environment... done
checking consistency... done
preparing documents... done
writing output... [100%] releasenotes/index
generating indices... genindex py-modindexdone
highlighting module code... [100%] PIL._binary
writing additional pages... searchdone
copying images... [100%] releasenotes/../../Tests/images/imagedraw_stroke_different.png
copying static files... ... done
copying extra files... done
dumping search index in English (code: en)... done
dumping object inventory... done
build finished with problems, 7 warnings.
make: *** [Makefile:45: html] Error 1
make: Leaving directory '/var/pisi/python3-pillow-7.1.1-6/work/Pillow-7.1.1/docs'
```
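The log above shows the build being run as `sphinx-build -b html -W --keep-going`, so the seven duplicate-object-description warnings (likely surfaced by the Sphinx 3.0 upgrade) are promoted to a failing exit status. A possible packaging workaround is to build without `-W` — a sketch, assuming the `docs/Makefile` either reads `SPHINXOPTS` or can be bypassed by calling `sphinx-build` directly:

```
# If docs/Makefile passes $(SPHINXOPTS) through, override it on the command line
# so -W is dropped and the 7 warnings no longer fail the build:
make -C docs html SPHINXOPTS=""

# Otherwise, invoke sphinx-build directly with the same arguments minus -W:
sphinx-build -b html -d docs/_build/doctrees docs docs/_build/html
```

This only suppresses the failure for packaging purposes; the duplicate-description warnings themselves would still need a `:noindex:` fix upstream.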
| non_main | i cannot compile docs thank you for reporting an issue follow these guidelines to ensure your issue is handled properly if you have a general question consider asking the question on stack overflow with the python imaging library tag do not ask a question in both places if you think you have found a bug or have an unexplained exception then file a bug report here bug report include a self contained copy pastable example that generates the issue if possible be concise with code posted guidelines on how to provide a good bug report bug reports which follow these guidelines are easier to diagnose and are often handled much more quickly feature request do a quick search of existing issues to make sure this has not been asked before we know asking good questions takes effort and we appreciate your time thank you what did you do i am creating a linux package for pisilinux of pillow i can compile pillow with no problems i cannot compile docs what did you expect to happen creating html documentation what actually happened building of documentation finished with problems warnings then stop what are your os python and pillow versions os pisilinux python pillow sphinx please include code that reproduces the issue and whenever possible an image that demonstrates the issue please upload images to github not to third party file hosting sites if necessary add the image to a zip or tar archive the best reproductions are self contained scripts with minimal dependencies if you are using a framework such as plone django or buildout try to replicate the issue just using pillow python def build suppress warnings pisitools cflags add wno sign compare wno pointer sign fix unused direct dependency analysis shelltools export ldshared pc linux gnu gcc wl as needed shared lpthread pythonmodules compile pyver build documentation shelltools export pythonpath s pillow s build lib linux get workdir get srcversion shelltools system make c docs html and this was the error running make 
c docs html make entering directory var pisi pillow work pillow docs sphinx build b html w keep going d build doctrees build html running sphinx making output directory done var pisi pillow work pillow docs conf py the app add javascript is deprecated please use app add js file instead app add javascript js script js building targets for po files that are out of date building targets for source files that are out of date updating environment added changed removed reading sources releasenotes index var pisi pillow work pillow build lib linux pil pngimageplugin py docstring of pil pngimageplugin pngimagefile warning duplicate object description of pil pngimageplugin pngimagefile other instance in reference plugins use noindex for one of them var pisi pillow work pillow build lib linux pil pngimageplugin py docstring of pil pngimageplugin pngimagefile load end warning duplicate object description of pil pngimageplugin pngimagefile load end other instance in reference plugins use noindex for one of them var pisi pillow work pillow build lib linux pil pngimageplugin py docstring of pil pngimageplugin pngimagefile load prepare warning duplicate object description of pil pngimageplugin pngimagefile load prepare other instance in reference plugins use noindex for one of them var pisi pillow work pillow build lib linux pil pngimageplugin py docstring of pil pngimageplugin pngimagefile load read warning duplicate object description of pil pngimageplugin pngimagefile load read other instance in reference plugins use noindex for one of them var pisi pillow work pillow build lib linux pil pngimageplugin py docstring of pil pngimageplugin pngimagefile seek warning duplicate object description of pil pngimageplugin pngimagefile seek other instance in reference plugins use noindex for one of them var pisi pillow work pillow build lib linux pil pngimageplugin py docstring of pil pngimageplugin pngimagefile tell warning duplicate object description of pil pngimageplugin pngimagefile 
tell other instance in reference plugins use noindex for one of them var pisi pillow work pillow build lib linux pil pngimageplugin py docstring of pil pngimageplugin pngimagefile verify warning duplicate object description of pil pngimageplugin pngimagefile verify other instance in reference plugins use noindex for one of them looking for now outdated files none found pickling environment done checking consistency done preparing documents done writing output releasenotes index generating indices genindex py modindexdone highlighting module code pil binary writing additional pages searchdone copying images releasenotes tests images imagedraw stroke different png copying static files done copying extra files done dumping search index in english code en done dumping object inventory done build finished with problems warnings make error make leaving directory var pisi pillow work pillow docs | 0 |
191,136 | 22,208,363,340 | IssuesEvent | 2022-06-07 16:44:18 | TIBCOSoftware/labs-air | https://api.github.com/repos/TIBCOSoftware/labs-air | opened | CVE-2021-23368 (Medium) detected in postcss-7.0.24.tgz | security vulnerability | ## CVE-2021-23368 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>postcss-7.0.24.tgz</b></p></summary>
<p>Tool for transforming styles with JS plugins</p>
<p>Library home page: <a href="https://registry.npmjs.org/postcss/-/postcss-7.0.24.tgz">https://registry.npmjs.org/postcss/-/postcss-7.0.24.tgz</a></p>
<p>
Dependency Hierarchy:
- :x: **postcss-7.0.24.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/TIBCOSoftware/labs-air/commit/2b36f19c6531f1a3964d83923e752838cd9d62cb">2b36f19c6531f1a3964d83923e752838cd9d62cb</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package postcss from 7.0.0 and before 8.2.10 are vulnerable to Regular Expression Denial of Service (ReDoS) during source map parsing.
<p>Publish Date: 2021-04-12
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23368>CVE-2021-23368</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23368">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23368</a></p>
<p>Release Date: 2021-04-12</p>
<p>Fix Resolution: 7.0.36</p>
</p>
</details>
<p></p>
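The suggested fix above (upgrade to 7.0.36) could be applied with something like the following — a sketch, assuming `postcss` is a direct dependency, as the dependency hierarchy above indicates:

```
# Bump postcss to the first patched 7.x release named by the advisory:
npm install postcss@7.0.36
```

If it were instead a transitive dependency, a `resolutions` (yarn) or `overrides` (npm 8+) entry in `package.json` would be the usual way to pin it.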
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"postcss","packageVersion":"7.0.24","packageFilePaths":[],"isTransitiveDependency":false,"dependencyTree":"postcss:7.0.24","isMinimumFixVersionAvailable":true,"minimumFixVersion":"7.0.36","isBinary":true}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-23368","vulnerabilityDetails":"The package postcss from 7.0.0 and before 8.2.10 are vulnerable to Regular Expression Denial of Service (ReDoS) during source map parsing.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23368","cvss3Severity":"medium","cvss3Score":"5.3","cvss3Metrics":{"A":"Low","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | True | CVE-2021-23368 (Medium) detected in postcss-7.0.24.tgz - ## CVE-2021-23368 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>postcss-7.0.24.tgz</b></p></summary>
<p>Tool for transforming styles with JS plugins</p>
<p>Library home page: <a href="https://registry.npmjs.org/postcss/-/postcss-7.0.24.tgz">https://registry.npmjs.org/postcss/-/postcss-7.0.24.tgz</a></p>
<p>
Dependency Hierarchy:
- :x: **postcss-7.0.24.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/TIBCOSoftware/labs-air/commit/2b36f19c6531f1a3964d83923e752838cd9d62cb">2b36f19c6531f1a3964d83923e752838cd9d62cb</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package postcss from 7.0.0 and before 8.2.10 are vulnerable to Regular Expression Denial of Service (ReDoS) during source map parsing.
<p>Publish Date: 2021-04-12
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23368>CVE-2021-23368</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23368">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23368</a></p>
<p>Release Date: 2021-04-12</p>
<p>Fix Resolution: 7.0.36</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"postcss","packageVersion":"7.0.24","packageFilePaths":[],"isTransitiveDependency":false,"dependencyTree":"postcss:7.0.24","isMinimumFixVersionAvailable":true,"minimumFixVersion":"7.0.36","isBinary":true}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-23368","vulnerabilityDetails":"The package postcss from 7.0.0 and before 8.2.10 are vulnerable to Regular Expression Denial of Service (ReDoS) during source map parsing.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23368","cvss3Severity":"medium","cvss3Score":"5.3","cvss3Metrics":{"A":"Low","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | non_main | cve medium detected in postcss tgz cve medium severity vulnerability vulnerable library postcss tgz tool for transforming styles with js plugins library home page a href dependency hierarchy x postcss tgz vulnerable library found in head commit a href found in base branch master vulnerability details the package postcss from and before are vulnerable to regular expression denial of service redos during source map parsing publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree postcss isminimumfixversionavailable true minimumfixversion isbinary true basebranches vulnerabilityidentifier cve vulnerabilitydetails the package postcss from 
and before are vulnerable to regular expression denial of service redos during source map parsing vulnerabilityurl | 0 |
5,083 | 25,995,709,100 | IssuesEvent | 2022-12-20 11:22:22 | precice/precice | https://api.github.com/repos/precice/precice | opened | Use data structure similar to `time::Storage` in mesh::Data::_values | maintainability | In #1504 I introduce `time::Storage` in `CouplingData` to be able to store multiple samples per time window. This will become even more important when we move to subcycling (see #1414).
I'm currently restricting the use of `time::Storage` to `cplscheme::CouplingData`. This requires manually moving data from `cplscheme::CouplingData` to `mesh::Data::_values` and back again before and after mapping (or acceleration). This is not ideal, but in the current situation a reasonable solution with limited scope.
Ideally we would use this datastructure everywhere in preCICE where `mesh::Data::_values` is accessed (acceleration, mapping, communication ...). | True | Use data structure similar to `time::Storage` in mesh::Data::_values - In #1504 I introduce `time::Storage` in `CouplingData` to be able to store multiple samples per time window. This will become even more important when we move to subcycling (see #1414).
I'm currently restricting the use of `time::Storage` to `cplscheme::CouplingData`. This requires manually moving data from `cplscheme::CouplingData` to `mesh::Data::_values` and back again before and after mapping (or acceleration). This is not ideal, but in the current situation a reasonable solution with limited scope.
Ideally we would use this datastructure everywhere in preCICE where `mesh::Data::_values` is accessed (acceleration, mapping, communication ...). | main | use data structure similar to time storage in mesh data values in i introduce time storage in couplingdata to be able to store multiple samples per time window this will become even more important when we move to subcycling see i m currently restricting the use of time storage to cplscheme couplingdata this requires manually moving data from cplscheme couplingdata to mesh data values and back again before and after mapping or acceleration this is not ideal but in the current situation a reasonable solution with limited scope ideally we would use this datastructure everywhere in precice where mesh data values is accessed acceleration mapping communication | 1 |
16,173 | 21,677,441,166 | IssuesEvent | 2022-05-08 23:20:28 | Eriku33/Foundry-VTT-Image-Hover | https://api.github.com/repos/Eriku33/Foundry-VTT-Image-Hover | closed | Image shows up under Small Time | Monitor module compatibility | Would it be possible to up the z-index for the image? Right now, if you have Small Time module open, it is on top of the image. I think it has a z-index of around 70 IIRC. | True | Image shows up under Small Time - Would it be possible to up the z-index for the image? Right now, if you have Small Time module open, it is on top of the image. I think it has a z-index of around 70 IIRC. | non_main | image shows up under small time would it be possible to up the z index for the image right now if you have small time module open it is on top of the image i think it has a z index of around iirc | 0 |
20,193 | 3,314,629,942 | IssuesEvent | 2015-11-06 06:55:02 | yinheli/lightweight-java-profiler | https://api.github.com/repos/yinheli/lightweight-java-profiler | closed | invalid suffix on literal | auto-migrated Priority-Medium Type-Defect | ```
display.cc:23:22: warning: invalid suffix on literal; C++11 requires a space
between literal and identifier [-Wliteral-suffix]
fprintf(file_, "%"PRIdPTR" ", traces[i].count);
```
Original issue reported on code.google.com by `Sam.Hall...@gmail.com` on 29 Sep 2014 at 1:08 | 1.0 | invalid suffix on literal - ```
display.cc:23:22: warning: invalid suffix on literal; C++11 requires a space
between literal and identifier [-Wliteral-suffix]
fprintf(file_, "%"PRIdPTR" ", traces[i].count);
```
Original issue reported on code.google.com by `Sam.Hall...@gmail.com` on 29 Sep 2014 at 1:08 | non_main | invalid suffix on literal display cc warning invalid suffix on literal c requires a space between literal and identifier fprintf file pridptr traces count original issue reported on code google com by sam hall gmail com on sep at | 0 |
925 | 4,629,279,646 | IssuesEvent | 2016-09-28 08:45:01 | Particular/ServicePulse | https://api.github.com/repos/Particular/ServicePulse | closed | Confirmation messages should be unique and more detailed about what's exactly affected | Impact: S Size: S Tag: Maintainer Prio Tag: Triaged Type: Feature | The current confirmation messages are too generic

In this example, the title should indicate what's being done exactly (Retry messages) and which items are affected exactly (number of messages)
CC // @mauroservienti | True | Confirmation messages should be unique and more detailed about what's exactly affected - The current confirmation messages are too generic

In this example, the title should indicate what's being done exactly (Retry messages) and which items are affected exactly (number of messages)
CC // @mauroservienti | main | confirmation messages should be unique and more detailed about what s exactly affected the current confirmation messages are too generic in this example the title should indicate what s being done exactly retry messages and which items are affected exactly number of messages cc mauroservienti | 1 |
300,626 | 22,689,201,819 | IssuesEvent | 2022-07-04 17:28:12 | alexrold/avengers | https://api.github.com/repos/alexrold/avengers | closed | Tereas de equipo | bug documentation | ### TODO
- [x] Tarea 1
- [x] Tarea 2
- [x] Tarea 3
- [x] Tarea 4
- [x] Tarea 5
- [x] Tarea 6 | 1.0 | Tereas de equipo - ### TODO
- [x] Tarea 1
- [x] Tarea 2
- [x] Tarea 3
- [x] Tarea 4
- [x] Tarea 5
- [x] Tarea 6 | non_main | tereas de equipo todo tarea tarea tarea tarea tarea tarea | 0 |
3,704 | 15,115,668,787 | IssuesEvent | 2021-02-09 05:04:30 | geolexica/geolexica-server | https://api.github.com/repos/geolexica/geolexica-server | opened | Extract Jbuilder tag to a separate gem | maintainability | The Liquid tag which has been introduced in #157 is generally useful and should be extracted to a separate gem. | True | Extract Jbuilder tag to a separate gem - The Liquid tag which has been introduced in #157 is generally useful and should be extracted to a separate gem. | main | extract jbuilder tag to a separate gem the liquid tag which has been introduced in is generally useful and should be extracted to a separate gem | 1 |
5,015 | 25,763,267,708 | IssuesEvent | 2022-12-08 22:37:35 | aws/aws-sam-cli | https://api.github.com/repos/aws/aws-sam-cli | closed | Globals --> Support Fn::Transform | type/feature stage/pm-review maintainer/need-response | ## Feature Request/Revision
### Describe your idea/feature/enhancement
Earlier versions of SAM CLI (0.47.0) used to support the following:
```
Globals:
Fn::Transform:
Name: AWS::Include
Parameters:
Location: !Sub
- s3://{EnvironmentURL}/function_globals.yaml
- { EnvironmentURL: !Ref EnvironmentURL }
```
```
# function_globals.yaml
Function:
Tags:
......
Environment:
.......
VpcConfig:
..........
```
### Proposal
Update the validation array `['API', 'Function', 'HttpApi', 'SimpleTable']` to allow CloudFormation functions that import files from remote sources.
Things to consider:
1. Will this require any updates to the [SAM Spec](https://github.com/awslabs/serverless-application-model)
I think we can talk through this feature on this thread to formalize it in the spec.
If you pull down an older version you'll see this functionality already existed before the validation was added.
### Additional Details
Here is the schema from [serverless-application-model](https://github.com/aws/serverless-application-model/blob/develop/samtranslator/validator/sam_schema/schema.json)
I think we would have to update the spec to allow for CloudFormation functions to be included as a valid SAM document [here](https://github.com/aws/serverless-application-model/blob/1ba4e567901bf53f104f853fcdfae5fad7a2bda9/samtranslator/validator/sam_schema/schema.json#L1039-L1060)
If there is a better way to allow for `Fn::Transform` to be allowed in this section let me know. | True | Globals --> Support Fn::Transform - ## Feature Request/Revision
### Describe your idea/feature/enhancement
Earlier versions of SAM CLI (0.47.0) used to support the following:
```
Globals:
Fn::Transform:
Name: AWS::Include
Parameters:
Location: !Sub
- s3://{EnvironmentURL}/function_globals.yaml
- { EnvironmentURL: !Ref EnvironmentURL }
```
```
# function_globals.yaml
Function:
Tags:
......
Environment:
.......
VpcConfig:
..........
```
### Proposal
Update validation array of `['API', 'Function', 'HttpApi', 'SimpleTable']` to allow cloudformation functions that import files from remote sources.
Things to consider:
1. Will this require any updates to the [SAM Spec](https://github.com/awslabs/serverless-application-model)
I think we can talk through this feature on this thread to formalize it in the spec.
If you pull down an older version you'll see this functionality already existed before the validation was added.
### Additional Details
Here is the schema from [serverless-application-model](https://github.com/aws/serverless-application-model/blob/develop/samtranslator/validator/sam_schema/schema.json)
I think we would have to update the spec to allow for cloudformation functions to be included as a valid sam document [here](https://github.com/aws/serverless-application-model/blob/1ba4e567901bf53f104f853fcdfae5fad7a2bda9/samtranslator/validator/sam_schema/schema.json#L1039-L1060)
If there is a better way to allow for `Fn::Transform` to be allowed in this section let me know. | main | globals support fn transform feature request revision describe your idea feature enhancement earlier versions of sam cli used to support the following globals fn transform name aws include parameters location sub environmenturl function globals yaml environmenturl ref environmenturl function globals yaml function tags environment vpcconfig proposal update validation array of to allow cloudformation functions that import files from remote sources things to consider will this require any updates to the i think we can talk through this feature on this thread to formalize it in the spec if you pull down an older version you ll see this functionality already existed before the validation was added additional details here is the schema from i think we would have to update the spec to allow for cloudformation functions to be included as a valid sam document if there is a better way to allow for fn transform to be allowed in this section let me know | 1 |
93,513 | 26,973,375,919 | IssuesEvent | 2023-02-09 07:34:36 | 202212-GIZ-YE-FEW/movie-project-parties-of-the-caribbean | https://api.github.com/repos/202212-GIZ-YE-FEW/movie-project-parties-of-the-caribbean | closed | Apply dropdown filter according to other features | build-functionality | A filter dropdown to filter the displayed movies in the home page, based on (popular, relase date, top rated, now playing and up coming) | 1.0 | Apply dropdown filter according to other features - A filter dropdown to filter the displayed movies in the home page, based on (popular, relase date, top rated, now playing and up coming) | non_main | apply dropdown filter according to other features a filter dropdown to filter the displayed movies in the home page based on popular relase date top rated now playing and up coming | 0 |
320,970 | 23,833,141,768 | IssuesEvent | 2022-09-06 01:07:16 | SaxyPandaBear/TwitchSongRequests | https://api.github.com/repos/SaxyPandaBear/TwitchSongRequests | closed | Add instructions in UI to document how a user should test their connection | documentation enhancement good first issue ui | We should include a page/step where the user can see validation instructions to make sure their stream is set up to queue songs via channel points. This could also include screenshots/gif/video instructions for validation as well. Consider adding a FAQ maybe? | 1.0 | Add instructions in UI to document how a user should test their connection - We should include a page/step where the user can see validation instructions to make sure their stream is set up to queue songs via channel points. This could also include screenshots/gif/video instructions for validation as well. Consider adding a FAQ maybe? | non_main | add instructions in ui to document how a user should test their connection we should include a page step where the user can see validation instructions to make sure their stream is set up to queue songs via channel points this could also include screenshots gif video instructions for validation as well consider adding a faq maybe | 0 |
1,067 | 4,889,235,461 | IssuesEvent | 2016-11-18 09:31:47 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | include_vars - directory inconsistency | affects_2.1 bug_report waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
include_vars
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.1.2.0
config file = /home/user/project/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
N/A
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
N/A
##### SUMMARY
<!--- Explain the problem briefly -->
Inconsistent directory handling by include_vars module
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes below -->
```
structure:
site.yml -> include: /playbooks/test.yml
/playbooks/test.yml -> roles: test
/roles/test/tasks/main.yml -> include_vars: "{{ vars_file | default() }}"
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
Consistent relative-path resolution, regardless of how the role is included.
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
include_vars: "{{ vars_file | default() }}"
fatal: [hostname]: FAILED! => {"failed": true, "msg": "the file_name '/home/user/project/roles/test/tasks' does not exist, or is not readable"}
include_vars: "{{ vars_file | default('../file.yml') }}"
fatal: [hostname]: FAILED! => {"changed": false, "failed": true, "file": "/home/user/project/file.yml", "invocation": {"module_args": {"_raw_params": "../file.yml"}, "module_name": "include_vars"}, "msg": "Source file not found."}
include_vars: "{{ vars_file | default('file.yml') }}"
fatal: [hostname]: FAILED! => {"changed": false, "failed": true, "file": "/home/user/project/playbooks/file.yml", "invocation": {"module_args": {"_raw_params": "file.yml"}, "module_name": "include_vars"}, "msg": "Source file not found."}
works as expected with absolute path
```
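Since absolute paths work as expected, one interim workaround is to anchor the lookup explicitly — a sketch; `playbook_dir` is one of Ansible's built-in variables (`role_path` would be an alternative anchor), and `file.yml` is the example file from this report:

```yaml
# roles/test/tasks/main.yml — build an absolute path instead of relying on
# the inconsistent relative lookup described above
- include_vars: "{{ vars_file | default(playbook_dir + '/file.yml') }}"
```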
| True | include_vars - directory inconsistency - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
include_vars
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.1.2.0
config file = /home/user/project/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
N/A
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
N/A
##### SUMMARY
<!--- Explain the problem briefly -->
Inconsistent directory handling by include_vars module
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes below -->
```
structure:
site.yml -> include: /playbooks/test.yml
/playbooks/test.yml -> roles: test
/roles/test/tasks/main.yml -> include_vars: "{{ vars_file | default() }}"
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
consistency
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
include_vars: "{{ vars_file | default() }}"
fatal: [hostname]: FAILED! => {"failed": true, "msg": "the file_name '/home/user/project/roles/test/tasks' does not exist, or is not readable"}
include_vars: "{{ vars_file | default('../file.yml') }}"
fatal: [hostname]: FAILED! => {"changed": false, "failed": true, "file": "/home/user/project/file.yml", "invocation": {"module_args": {"_raw_params": "../file.yml"}, "module_name": "include_vars"}, "msg": "Source file not found."}
include_vars: "{{ vars_file | default('file.yml') }}"
fatal: [hostname]: FAILED! => {"changed": false, "failed": true, "file": "/home/user/project/playbooks/file.yml", "invocation": {"module_args": {"_raw_params": "file.yml"}, "module_name": "include_vars"}, "msg": "Source file not found."}
works as expected with absolute path
```
| main | include vars directory inconsistency issue type bug report component name include vars ansible version ansible config file home user project ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables n a os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific n a summary inconsistent directory handling by include vars module steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used structure site yml include playbooks test yml playbooks test yml roles test roles test tasks main yml include vars vars file default expected results consistency actual results include vars vars file default fatal failed failed true msg the file name home user project roles test tasks does not exist or is not readable include vars vars file default file yml fatal failed changed false failed true file home user project file yml invocation module args raw params file yml module name include vars msg source file not found include vars vars file default file yml fatal failed changed false failed true file home user project playbooks file yml invocation module args raw params file yml module name include vars msg source file not found works as expected with absolute path | 1 |
45,242 | 13,108,483,565 | IssuesEvent | 2020-08-04 16:55:12 | phunware/react-select | https://api.github.com/repos/phunware/react-select | opened | WS-2017-3757 (Medium) detected in content-type-parser-1.0.2.tgz | security vulnerability | ## WS-2017-3757 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>content-type-parser-1.0.2.tgz</b></p></summary>
<p>Parse the value of the Content-Type header</p>
<p>Library home page: <a href="https://registry.npmjs.org/content-type-parser/-/content-type-parser-1.0.2.tgz">https://registry.npmjs.org/content-type-parser/-/content-type-parser-1.0.2.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/react-select/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/react-select/node_modules/content-type-parser/package.json</p>
<p>
Dependency Hierarchy:
- jsdom-9.12.0.tgz (Root Library)
- :x: **content-type-parser-1.0.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/phunware/react-select/commit/7b7ee4fda1530a8aba251e15f46bd683a40393d8">7b7ee4fda1530a8aba251e15f46bd683a40393d8</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
All versions of the content-type-parser npm package prior to 2.0.0 are vulnerable to ReDoS via the user agent parser. The vulnerability was fixed by reintroducing a new parser and deleting the old one.
<p>Publish Date: 2017-12-10
<p>URL: <a href="https://github.com/jsdom/whatwg-mimetype/commit/26c539a699778f8743b8319c298b5fb28a4328d0">WS-2017-3757</a></p>
</p>
</details>
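For background, ReDoS (regular-expression denial of service) arises when a pattern with nested quantifiers backtracks exponentially on a near-matching input. The pattern below is a generic illustration of the vulnerable shape — it is not the actual regex from content-type-parser, whose fix (per the advisory above) replaced the parser entirely:

```python
import re
import time

# Classic catastrophic-backtracking shape: nested quantifiers over the
# same character. Illustrative only; not the content-type-parser regex.
EVIL = re.compile(r"^(a+)+$")

def match_time(n):
    """Time a failing match against n 'a's plus one non-matching character."""
    payload = "a" * n + "!"
    start = time.perf_counter()
    assert EVIL.match(payload) is None  # fails only after ~2^n backtracking steps
    return time.perf_counter() - start

# Each extra 'a' roughly doubles the work; keep n small to stay responsive.
print(match_time(10))
print(match_time(18))
```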
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
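The 5.3 above follows from the CVSS v3.0 base-score formula applied to the listed metrics (vector AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L). A sketch hard-coding only the weights this vector needs, taken from the CVSS v3.0 specification:

```python
import math

def roundup(x):
    """CVSS 'round up to one decimal place' helper."""
    return math.ceil(x * 10) / 10

def cvss3_base_scope_unchanged(c, i, a, av, ac, pr, ui):
    """CVSS v3.0 base score when Scope is Unchanged."""
    iss = 1 - (1 - c) * (1 - i) * (1 - a)   # impact sub-score
    impact = 6.42 * iss
    exploitability = 8.22 * av * ac * pr * ui
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# Weights for this vector: AV:N=0.85, AC:L=0.77, PR:N=0.85, UI:N=0.85,
# C:None=0, I:None=0, A:Low=0.22.
print(cvss3_base_scope_unchanged(0.0, 0.0, 0.22, 0.85, 0.77, 0.85, 0.85))  # 5.3
```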
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/jsdom/whatwg-mimetype/issues/3">https://github.com/jsdom/whatwg-mimetype/issues/3</a></p>
<p>Release Date: 2020-04-30</p>
<p>Fix Resolution: v2.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | WS-2017-3757 (Medium) detected in content-type-parser-1.0.2.tgz - ## WS-2017-3757 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>content-type-parser-1.0.2.tgz</b></p></summary>
<p>Parse the value of the Content-Type header</p>
<p>Library home page: <a href="https://registry.npmjs.org/content-type-parser/-/content-type-parser-1.0.2.tgz">https://registry.npmjs.org/content-type-parser/-/content-type-parser-1.0.2.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/react-select/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/react-select/node_modules/content-type-parser/package.json</p>
<p>
Dependency Hierarchy:
- jsdom-9.12.0.tgz (Root Library)
- :x: **content-type-parser-1.0.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/phunware/react-select/commit/7b7ee4fda1530a8aba251e15f46bd683a40393d8">7b7ee4fda1530a8aba251e15f46bd683a40393d8</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
All versions of the content-type-parser npm package prior to 2.0.0 are vulnerable to ReDoS via the user agent parser. The vulnerability was fixed by reintroducing a new parser and deleting the old one.
<p>Publish Date: 2017-12-10
<p>URL: <a href="https://github.com/jsdom/whatwg-mimetype/commit/26c539a699778f8743b8319c298b5fb28a4328d0">WS-2017-3757</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/jsdom/whatwg-mimetype/issues/3">https://github.com/jsdom/whatwg-mimetype/issues/3</a></p>
<p>Release Date: 2020-04-30</p>
<p>Fix Resolution: v2.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | ws medium detected in content type parser tgz ws medium severity vulnerability vulnerable library content type parser tgz parse the value of the content type header library home page a href path to dependency file tmp ws scm react select package json path to vulnerable library tmp ws scm react select node modules content type parser package json dependency hierarchy jsdom tgz root library x content type parser tgz vulnerable library found in head commit a href vulnerability details all versions prior to of content type parser npm package are vulnerable to redos via the user agent parser the vulnerability was fixed by reintroducing a new parser and deleting the old one publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
288,425 | 24,905,144,987 | IssuesEvent | 2022-10-29 06:16:26 | Team-Gabozago/gabozago_frontend | https://api.github.com/repos/Team-Gabozago/gabozago_frontend | opened | [FE] Component Separation | feature test | ## 🤷♂️ Description
Let's separate out the components that are reused.
## 📝 Primary Commits
- [ ] Component abstraction work
- [ ] Icon and Fonts componentization work
- [ ] Button component work
- [ ] Header component work
- [ ] Input component work
- [ ] Popup component work
- [ ] Comment component work
- [ ] ?
Which components will be needed?
## 📷 Screenshots
In preparation... | 1.0 | [FE] Component Separation - ## 🤷♂️ Description
Let's separate out the components that are reused.
## 📝 Primary Commits
- [ ] Component abstraction work
- [ ] Icon and Fonts componentization work
- [ ] Button component work
- [ ] Header component work
- [ ] Input component work
- [ ] Popup component work
- [ ] Comment component work
- [ ] ?
Which components will be needed?
## 📷 Screenshots
준비중.... | non_main | 컴포넌트 분리 🤷♂️ description 재사용되어지는 컴포넌트들을 분리하자 📝 primary commits 컴포넌트 추상화 작업 icon fonts 컴포넌트화 작업 button 컴포넌트 작업 header 컴포넌트 작업 input 컴포넌트 작업 popup 컴포넌트 작업 comment 컴포넌트 작업 어떤 컴포넌트들이 필요할까 📷 screenshots 준비중 | 0 |
1,820 | 6,577,329,226 | IssuesEvent | 2017-09-12 00:08:56 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | Compute Engine: Provide a way to specify initial boot disk type and size | affects_2.1 cloud feature_idea gce waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Feature Idea
##### COMPONENT NAME
gce module
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.1.0.0
config file = /Users/lihanli/projects/gce/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_\* environment variables).
-->
##### OS / ENVIRONMENT
<!---
-->
OS X El Capitan
##### SUMMARY
<!--- Explain the problem briefly -->
`gce` module cannot specify boot disk size and type.
See http://docs.ansible.com/ansible/gce_module.html
The **disks** parameter
a list of persistent disks to attach to the instance; a string value gives the name of the disk; alternatively, a dictionary value can define 'name' and 'mode' ('READ_ONLY' or 'READ_WRITE'). The first entry will be the boot disk (which must be READ_WRITE).
It does not let you specify the size and the type.
##### STEPS TO REPRODUCE
<!---
-->
<!--- Paste example playbooks or commands between quotes below -->
```
```
| True | Compute Engine: Provide a way to specify initial boot disk type and size - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Feature Idea
##### COMPONENT NAME
gce module
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.1.0.0
config file = /Users/lihanli/projects/gce/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_\* environment variables).
-->
##### OS / ENVIRONMENT
<!---
-->
OS X El Capitan
##### SUMMARY
<!--- Explain the problem briefly -->
`gce` module cannot specify boot disk size and type.
See http://docs.ansible.com/ansible/gce_module.html
The **disks** parameter
a list of persistent disks to attach to the instance; a string value gives the name of the disk; alternatively, a dictionary value can define 'name' and 'mode' ('READ_ONLY' or 'READ_WRITE'). The first entry will be the boot disk (which must be READ_WRITE).
It does not let you specify the size and the type.
##### STEPS TO REPRODUCE
<!---
-->
<!--- Paste example playbooks or commands between quotes below -->
```
```
| main | compute engine provide a way to specify initial boot disk type and size issue type feature idea component name gce module ansible version ansible config file users lihanli projects gce ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment os x el capitan summary gce module cannot specify boot disk size and type see the disks parameter a list of persistent disks to attach to the instance a string value gives the name of the disk alternatively a dictionary value can define name and mode read only or read write the first entry will be the boot disk which must be read write it does not let you specify the size and the type steps to reproduce | 1 |
102,756 | 22,087,302,848 | IssuesEvent | 2022-06-01 01:01:45 | Matheus-Msantos/Cultura.i_web | https://api.github.com/repos/Matheus-Msantos/Cultura.i_web | closed | [Q.A] - Administrative area / category | Code | **To do**
- [x] When registering a category, the title shows "ATUALIZAR CATEGORIA" but the button should be "cadastrar" | 1.0 | [Q.A] - Administrative area / category - **To do**
- [x] When registering a category, the title shows "ATUALIZAR CATEGORIA" but the button should be "cadastrar" | non_main | área administrativa categoria para fazer ao cadastrar uma categoria o titulo aparece atualizar categoria e o botão deve ser cadastrar | 0
16,254 | 21,870,275,787 | IssuesEvent | 2022-05-19 04:01:02 | fixrtm/fixRTM | https://api.github.com/repos/fixrtm/fixRTM | opened | 'S' key may not work | bug compatibility not-checked | ### Before submitting a bug, please make sure following checks.
- [X] You have finished loading all model packs before login to world/server.
- [X] You're using the latest stable, snapshot, or daily-snapshot version of fixRTM.
- [X] You're using correct version of RTM and NGTLib for your fixRTM.
- [X] You couldn't find the same bug in the issues.
### Describe the bug
'S' key for notch may not work
### To Reproduce
install the following mods
```
CreativeCore_v1.10.70_mc1.12.2.jar
Decocraft-2.6.3.7_1.12.2.jar
DynamicSurroundings-1.12.2-3.6.3.jar
effortlessbuilding-1.12.2-2.16.jar
FCL-1.12.73.jar
fixRtm-2.0.20.jar
FRSM-4.4.1.jar
Furenikus_Roads-1.2.0-dev-b21.jar
Golden+Airport+Pack+[IV]+3.2.6.jar
Immersive+Vehicles-1.12.2-20.5.0.jar
ImmersiveEngineering-0.12-92.jar
ImmersiveRailroading-1.12.2-forge-1.9.1-500205.jar
journeymap-1.12.2-5.7.1.jar
kirosblocks-1.2.1.jar
LittleTiles_v1.5.57_mc1.12.2 (1).jar
malisiscore-1.12.2-6.5.1.jar
malisisdoors-1.12.2-7.3.0.jar
MCTE2.4.11-24_forge-1.12.2-14.23.2.2611.jar
mcw-bridges-1.0.6b-mc1.12.2.jar
mcw-doors-1.0.3-mc1.12.2.jar
mcw-roofs-1.0.2-mc1.12.2.jar
mcw-windows-1.0.0-mc1.12.2.jar
Miszko's+Polish+Trackside+Decor+Pack+0.4.jar
modroadworksreborn-0.0.3.jar
MTS_Official_Pack_V23.jar
NGTLib2.4.19-35_forge-1.12.2-14.23.2.2611.jar
OptiFine_1.12.2_HD_U_G5.jar
OreLib-1.12.2-3.6.0.1.jar
PTRLib-1.0.5.jar
replaymod-1.12.2-2.6.4.jar
RTM2.4.22-40_forge-1.12.2-14.23.2.2611.jar
terrarium-1.1.5.jar
TrackAPI-1.2.jar
trafficcontrol-0.4.1.jar
UniversalModCore-1.12.2-forge-1.1.3-2ef602.jar
UNU+Civilian+Pack+[MTS]+1.12.2-20.5.0-5.2.0.jar
UNU+Parts+Pack+[MTS]+1.12.2-20.5.0-5.1.0.jar
worldedit-forge-mc1.12.2-6.1.10-dist.jar
```
ride train & press 'S'
### Expected Behavior
Notch move
### Actual Behavior
Nothing
### OS
windows
### Minecraft Forge Version
N/A
### fixRTM Version
2.0.20
### Other Mods
see To Reproduce | True | 'S' key may not work - ### Before submitting a bug, please make sure following checks.
- [X] You have finished loading all model packs before login to world/server.
- [X] You're using the latest stable, snapshot, or daily-snapshot version of fixRTM.
- [X] You're using correct version of RTM and NGTLib for your fixRTM.
- [X] You couldn't find the same bug in the issues.
### Describe the bug
'S' key for notch may not work
### To Reproduce
install the following mods
```
CreativeCore_v1.10.70_mc1.12.2.jar
Decocraft-2.6.3.7_1.12.2.jar
DynamicSurroundings-1.12.2-3.6.3.jar
effortlessbuilding-1.12.2-2.16.jar
FCL-1.12.73.jar
fixRtm-2.0.20.jar
FRSM-4.4.1.jar
Furenikus_Roads-1.2.0-dev-b21.jar
Golden+Airport+Pack+[IV]+3.2.6.jar
Immersive+Vehicles-1.12.2-20.5.0.jar
ImmersiveEngineering-0.12-92.jar
ImmersiveRailroading-1.12.2-forge-1.9.1-500205.jar
journeymap-1.12.2-5.7.1.jar
kirosblocks-1.2.1.jar
LittleTiles_v1.5.57_mc1.12.2 (1).jar
malisiscore-1.12.2-6.5.1.jar
malisisdoors-1.12.2-7.3.0.jar
MCTE2.4.11-24_forge-1.12.2-14.23.2.2611.jar
mcw-bridges-1.0.6b-mc1.12.2.jar
mcw-doors-1.0.3-mc1.12.2.jar
mcw-roofs-1.0.2-mc1.12.2.jar
mcw-windows-1.0.0-mc1.12.2.jar
Miszko's+Polish+Trackside+Decor+Pack+0.4.jar
modroadworksreborn-0.0.3.jar
MTS_Official_Pack_V23.jar
NGTLib2.4.19-35_forge-1.12.2-14.23.2.2611.jar
OptiFine_1.12.2_HD_U_G5.jar
OreLib-1.12.2-3.6.0.1.jar
PTRLib-1.0.5.jar
replaymod-1.12.2-2.6.4.jar
RTM2.4.22-40_forge-1.12.2-14.23.2.2611.jar
terrarium-1.1.5.jar
TrackAPI-1.2.jar
trafficcontrol-0.4.1.jar
UniversalModCore-1.12.2-forge-1.1.3-2ef602.jar
UNU+Civilian+Pack+[MTS]+1.12.2-20.5.0-5.2.0.jar
UNU+Parts+Pack+[MTS]+1.12.2-20.5.0-5.1.0.jar
worldedit-forge-mc1.12.2-6.1.10-dist.jar
```
ride train & press 'S'
### Expected Behavior
Notch move
### Actual Behavior
Nothing
### OS
windows
### Minecraft Forge Version
N/A
### fixRTM Version
2.0.20
### Other Mods
see ToReporduce | non_main | s key may not work before submitting a bug please make sure following checks you have finished loading all model packs before login to world server you re using the latest stable snapshot or daily snapshot version of fixrtm you re using correct version of rtm and ngtlib for your fixrtm you couldn t find same bag in the issues descrive the bug s key for notch may not work to reproduce install the following mods creativecore jar decocraft jar dynamicsurroundings jar effortlessbuilding jar fcl jar fixrtm jar frsm jar furenikus roads dev jar golden airport pack jar immersive vehicles jar immersiveengineering jar immersiverailroading forge jar journeymap jar kirosblocks jar littletiles jar malisiscore jar malisisdoors jar forge jar mcw bridges jar mcw doors jar mcw roofs jar mcw windows jar miszko s polish trackside decor pack jar modroadworksreborn jar mts official pack jar forge jar optifine hd u jar orelib jar ptrlib jar replaymod jar forge jar terrarium jar trackapi jar trafficcontrol jar universalmodcore forge jar unu civilian pack jar unu parts pack jar worldedit forge dist jar ride train press s expected behavior notch move actual behavior nothing os windows minecraft forge version n a fixrtm version other mods see toreporduce | 0 |
3,657 | 14,940,188,597 | IssuesEvent | 2021-01-25 17:55:58 | carbon-design-system/carbon | https://api.github.com/repos/carbon-design-system/carbon | reopened | Enhancement tab component padding | status: needs triage 🕵️♀️ status: waiting for maintainer response 💬 type: bug 🐛 | ## Title line template:
Enhancement tab component padding
## What package(s) are you using?
carbon-components-react
## Detailed description
The current tab component doesn't adjust the spacing according to the text. Instead, the tabs are rendered with unequal widths. Short or long names in the tabs can lead to unequal spacing, especially in foreign languages (Chinese, ...).
Here is an example:
https://github.ibm.com/wdp-gov/tracker/issues/37720


Unequal spacing in tabs doesn’t look professional for customers. Equal spacing improves legibility and helps customers navigate. We always try to use short and precise labels, but we have to consider languages that may require more or fewer characters. We want to avoid overrides or custom CSS on the tab component and hope this can be addressed and fixed by Carbon so more teams can benefit.
**Is this issue related to a specific component?**
Tabs
**What did you expect to happen? What happened instead? What would you like to see changed?**
We did different explorations and wanted to allow the users to have equal spacing (48 px) to the left of the text.
Our proposal is to use 48 px always as fix padding between name and next tab – regardless if it is a short or long name.

Currently our development can only solve this with custom CSS, which shouldn’t be the case; we want to avoid overrides.
**What browser are you working in?**
Firefox/Chrome
**What version of the Carbon Design System are you using?**
v10.27.0
**What offering/product do you work on? Any pressing ship or release dates we should be aware of?**
Offering: Watson Knowledge Catalog
Note: This issue is reflected in many services in Cloud Pak for Data and brought up by other teams in the platform too.
It would be great to get this fixed for one of our upcoming patches/release.
February update GA: 02/16/21
March update GA: 03/16/21
| True | Enhancement tab component padding - ## Title line template:
Enhancement tab component padding
## What package(s) are you using?
carbon-components-react
## Detailed description
The current tab component doesn't adjust the spacing according to the text. Instead, the tabs are rendered with unequal widths. Short or long names in the tabs can lead to unequal spacing, especially in foreign languages (Chinese, ...).
Here is an example:
https://github.ibm.com/wdp-gov/tracker/issues/37720


Unequal spacing in tabs doesn’t look professional for customers. Equal spacing improves legibility and helps customers navigate. We always try to use short and precise labels, but we have to consider languages that may require more or fewer characters. We want to avoid overrides or custom CSS on the tab component and hope this can be addressed and fixed by Carbon so more teams can benefit.
**Is this issue related to a specific component?**
Tabs
**What did you expect to happen? What happened instead? What would you like to see changed?**
We did different explorations and wanted to allow the users to have equal spacing (48 px) to the left of the text.
Our proposal is to use 48 px always as fix padding between name and next tab – regardless if it is a short or long name.

Currently our development can only solve this with custom CSS, which shouldn’t be the case; we want to avoid overrides.
**What browser are you working in?**
Firefox/Chrome
**What version of the Carbon Design System are you using?**
v10.27.0
**What offering/product do you work on? Any pressing ship or release dates we should be aware of?**
Offering: Watson Knowledge Catalog
Note: This issue is reflected in many services in Cloud Pak for Data and brought up by other teams in the platform too.
It would be great to get this fixed for one of our upcoming patches/release.
February update GA: 02/16/21
March update GA: 03/16/21
| main | enhancement tab component padding title line template enhancement tab component padding what package s are you using carbon components react detailed description the current tab component doesn t adjust the spacing according to the text instead the tabs are rendered with unequal width when we have short or long names in the tabs this can lead to unequal spacing especially in foreign languages chinese here is an example unequal spacing in tabs doesn’t look professional for customers equal spacing can improve legibility and helps customers better to navigate we always try to use short and precise labels but we have to consider languages that can require more characters and less characters we want to avoid overrides or custom css to the tab component and hope this can be addressed and fixed by carbon so more teams can benefit from is this issue related to a specific component tabs what did you expect to happen what happened instead what would you like to see changed we did different explorations and wanted to allow the users to have equal spacing px to the left of the text our proposal is to use px always as fix padding between name and next tab – regardless if it is a short or long name currently our development can only solve this by custom css which shouldn’t be the case and we want to avoid overrides what browser are you working in firefox chrome what version of the carbon design system are you using what offering product do you work on any pressing ship or release dates we should be aware of offering watson knowledge catalog note this issue is reflected in many services in cloud pak for data and brought up by other teams in the platform too it would be great to get this fixed for one of our upcoming patches release february update ga march update ga | 1 |
4,538 | 23,618,866,124 | IssuesEvent | 2022-08-24 18:27:56 | centerofci/mathesar | https://api.github.com/repos/centerofci/mathesar | opened | Dropped schemas not reflected correctly when dropped outside Mathesar | type: bug work: backend status: ready restricted: maintainers | ## Reproduce
1. Confirm that `DB_REFLECTION_INTERVAL` is set to `1`. Expect changes made via `psql` to be reflected quickly.
1. In psql, execute
```sql
create schema "another one";
```
1. Via the API, GET `http://localhost:8000/api/db/v0/schemas/`
1. Observe the results to contain an entry like the following. (Good)
```json
{
"id": 7,
"name": "another one",
"database": "mathesar_tables",
"has_dependencies": true
}
```
1. In psql, execute
```sql
drop schema "another one";
```
1. Via the API, GET `http://localhost:8000/api/db/v0/schemas/`
1. Expect the results array _not_ to contain an entry for the schema.
1. Observe the results array to contain exactly the entry as before, as if no reflection has been performed (even though it's been longer than the 1s interval).
1. Restart Mathesar.
1. Via the API, GET `http://localhost:8000/api/db/v0/schemas/`, with the same expectation as prior.
1. Observe the results array to contain:
```json
{
"id": 7,
"name": null,
"database": "mathesar_tables",
"has_dependencies": true
}
```
Notice here than `name` is `null`. This null value eventually manifests into an error in the front end if the user attempts to filter the list of schemas by a search query because the front end does not expect the `name` value to ever be anything but a string.
| True | Dropped schemas not reflected correctly when dropped outside Mathesar - ## Reproduce
1. Confirm that `DB_REFLECTION_INTERVAL` is set to `1`. Expect changes made via `psql` to be reflected quickly.
1. In psql, execute
```sql
create schema "another one";
```
1. Via the API, GET `http://localhost:8000/api/db/v0/schemas/`
1. Observe the results to contain an entry like the following. (Good)
```json
{
"id": 7,
"name": "another one",
"database": "mathesar_tables",
"has_dependencies": true
}
```
1. In psql, execute
```sql
drop schema "another one";
```
1. Via the API, GET `http://localhost:8000/api/db/v0/schemas/`
1. Expect the results array _not_ to contain an entry for the schema.
1. Observe the results array to contain exactly the entry as before, as if no reflection has been performed (even though it's been longer than the 1s interval).
1. Restart Mathesar.
1. Via the API, GET `http://localhost:8000/api/db/v0/schemas/`, with the same expectation as prior.
1. Observe the results array to contain:
```json
{
"id": 7,
"name": null,
"database": "mathesar_tables",
"has_dependencies": true
}
```
Notice here than `name` is `null`. This null value eventually manifests into an error in the front end if the user attempts to filter the list of schemas by a search query because the front end does not expect the `name` value to ever be anything but a string.
| main | dropped schemas not reflected correctly when dropped outside mathesar reproduce confirm that db reflection interval is set to expect changes made via psql to be reflected quickly in psql execute sql create schema another one via the api get observe the results to contain an entry like the following good json id name another one database mathesar tables has dependencies true in psql execute sql drop schema another one via the api get expect the results array not to contain an entry for the schema observe the results array to contain exactly the entry as before as if no reflection has been performed even though it s been longer than the interval restart mathesar via the api get with the same expectation as prior observe the results array to contain json id name null database mathesar tables has dependencies true notice here than name is null this null value eventually manifests into an error in the front end if the user attempts to filter the list of schemas by a search query because the front end does not expect the name value to ever be anything but a string | 1 |
258,122 | 22,284,675,734 | IssuesEvent | 2022-06-11 12:31:42 | xbmc/xbmc | https://api.github.com/repos/xbmc/xbmc | closed | Media Fails to play "&" | Triage: Tested and not reproduced | <!--- Please fill out this template to the best of your ability. You can always edit this issue once you have created it. -->
<!--- Read the following link before you create a new problem report: https://kodi.wiki/view/HOW-TO:Submit_a_bug_report -->
## Bug report
### Describe the bug
Here is a clear and concise description of what the problem is:
<!--- Provide a more detailed introduction to the issue itself, and why you consider it to be a bug -->
<!--- A bug report that is not clear will be closed -->
<!--- Put your text below this line -->
Kodi fails to play media when the file name contains an ampersand. For example, a media container named Inspection & Testing.mp4 fails to play; removing the ampersand or simplifying the name to Inspection and Testing.mp4 allows the file to be played.
## Expected Behavior
Here is a clear and concise description of what was expected to happen:
<!--- Tell us what should happen -->
<!--- Put your text below this line -->
The video plays.
## Actual Behavior
<!--- Tell us what happens instead -->
<!--- Put your text below this line -->
The video fails to play.
## Possible Fix
removing the ampersand symbol or simplifying the name to Inspection and Testing.mp4 allows the file to be played
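Until the underlying bug is fixed, affected files can be batch-renamed before Kodi scans them. A minimal sketch of that workaround (the directory path and replacement text are illustrative only, not part of Kodi):

```python
import os

def strip_ampersands(directory):
    """Rename files in `directory`, replacing '&' with 'and' so Kodi can play them."""
    renamed = []
    for name in os.listdir(directory):
        if "&" in name:
            # Prefer " & " -> " and " to keep natural spacing, then catch bare "&".
            new_name = name.replace(" & ", " and ").replace("&", "and")
            os.rename(os.path.join(directory, name),
                      os.path.join(directory, new_name))
            renamed.append((name, new_name))
    return renamed
```

For example, running `strip_ampersands("/media/videos")` would turn `Inspection & Testing.mp4` into `Inspection and Testing.mp4`.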
### To Reproduce
Steps to reproduce the behavior:
1. Add a media file whose name contains an ampersand, e.g. `Inspection & Testing.mp4`.
2. Attempt to play the file in Kodi.
3. Playback fails; rename the file to `Inspection and Testing.mp4` and it plays.
### Debuglog
The debuglog can be found here:
### Screenshots
Here are some links or screenshots to help explain the problem:
## Additional context or screenshots (if appropriate)
Here is some additional context or explanation that might help:
### Your Environment
Used Operating system:
- [ ] Android
- [ ] iOS
- [ ] tvOS
- [x] Linux
- [x] OSX
- [x] Windows
- [ ] Windows UWP
- Operating system version/name:
- Kodi version: 19.4
*note: Once the issue is made we require you to update it with new information or Kodi versions should that be required.
Team Kodi will consider your problem report however, we will not make any promises the problem will be solved.*
4,849 | 24,973,663,962 | IssuesEvent | 2022-11-02 05:06:56 | rthadur/bazel | https://api.github.com/repos/rthadur/bazel | reopened | Dummy Issue | bug dependencies awaiting-review awaiting-maintainer P2
### Description
For testing purposes.
### Issue Type
_No response_
### Operating System
_No response_
### Coral Device
_No response_
### Other Devices
_No response_
### Programming Language
_No response_
### Relevant Log Output
_No response_
435,776 | 12,541,143,971 | IssuesEvent | 2020-06-05 11:46:07 | GilesStrong/lumin | https://api.github.com/repos/GilesStrong/lumin | opened | Multi-threaded data loading and augmentation? | improvement low priority
## Current state
The current process for loading data during training is:
1. A complete fold of data is loaded from hard-drive (hdf5) by a `FoldYielder`
1. Any requested data augmentation is applied to the fold
1. The fold is then passed to a `BatchYielder`. Either the entire fold is then loaded to device at once, or mini-batches are loaded to device one at a time
1. Mini-batches are passed through the model and parameters are updated
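Ignoring device transfer, the fold-to-mini-batch step above amounts to a generator like the following sketch (`yield_batches` is an illustrative name, not the actual `BatchYielder` API):

```python
def yield_batches(fold, batch_size):
    """Split one loaded fold into mini-batches, BatchYielder-style.

    `fold` is any sliceable sequence of examples; the last batch may be short.
    """
    for start in range(0, len(fold), batch_size):
        yield fold[start:start + batch_size]
```

For example, a 10-row fold with `batch_size=4` yields chunks of 4, 4, and 2 rows.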
The current process for loading data during predicting is:
1. A complete fold of data is loaded from hard-drive (hdf5) by a `FoldYielder`
1. Any requested data augmentation is applied to the fold
1. The entire fold is passed through the model, or mini-batches are passed separately
## Problems
- The use of data augmentation currently causes perceptible slow-downs during training and testing
- Loading data to device can be slow: quicker to load entire fold at once, but requires large memory
## Possible solutions
- Data augmentation is applied using multi-threading. Should be trivial, but splitting and concatenating of DataFrames may actually slow down process. Maybe Dask could be useful?
- Worker processes are used by `BatchYielder` to load minibatches to device in the background, reducing the memory overhead whilst not leading to delays.
- Could perhaps replace `BatchYielder` with, or inherit from, a PyTorch Dataloader, which includes multi-threaded workers (although I find that they're slower than single-core...)
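For the first solution, a minimal multi-threaded augmentation sketch using only the standard library (the augmentation function itself is a placeholder):

```python
from concurrent.futures import ThreadPoolExecutor

def augment_fold(rows, augment_fn, n_workers=4):
    """Apply `augment_fn` to every row of a fold across worker threads.

    `pool.map` preserves input order, so downstream batching is unaffected.
    """
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(augment_fn, rows))
```

Whether this beats the single-threaded version depends on whether `augment_fn` releases the GIL (NumPy/Pandas ops largely do); for pure-Python transforms a process pool, or Dask as suggested above, may be needed instead.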
204,222 | 23,223,483,380 | IssuesEvent | 2022-08-02 20:42:49 | Tim-Demo/JuiceShop | https://api.github.com/repos/Tim-Demo/JuiceShop | reopened | CVE-2020-24977 (Medium) detected in gettextv0.20.1 | security vulnerability
## CVE-2020-24977 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>gettextv0.20.1</b></p></summary>
<p>
<p>git://git.savannah.gnu.org/gettext.git </p>
<p>Library home page: <a href=https://github.com/autotools-mirror/gettext.git>https://github.com/autotools-mirror/gettext.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Tim-Demo/JuiceShop/commit/ba236fd18ec3e6450d68d675bce1609d2e5d3230">ba236fd18ec3e6450d68d675bce1609d2e5d3230</a></p>
<p>Found in base branch: <b>main</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/node_modules/libxmljs2/vendor/libxml/xmlschemastypes.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
GNOME project libxml2 v2.9.10 has a global buffer over-read vulnerability in xmlEncodeEntitiesInternal at libxml2/entities.c. The issue has been fixed in commit 50f06b3e.
<p>Publish Date: 2020-09-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-24977>CVE-2020-24977</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
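The 6.5 rating above can be reproduced from the listed metrics with the CVSS v3.0 base-score equations; a quick check (weight constants taken from the CVSS v3.0 specification):

```python
import math

def cvss3_base_score():
    """CVSS v3.0 base score for AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:N/A:L."""
    av, ac, pr, ui = 0.85, 0.77, 0.85, 0.85   # Network / Low / None / None
    c, i, a = 0.22, 0.0, 0.22                 # Low / None / Low
    iss = 1 - (1 - c) * (1 - i) * (1 - a)     # impact sub-score
    impact = 6.42 * iss                       # scope unchanged
    exploitability = 8.22 * av * ac * pr * ui
    if impact <= 0:
        return 0.0
    # CVSS "round up" to one decimal place.
    return math.ceil(min(impact + exploitability, 10) * 10) / 10
```

This evaluates to 6.5, matching the score reported above.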
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/2NQ5GTDYOVH26PBCPYXXMGW5ZZXWMGZC/">https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/2NQ5GTDYOVH26PBCPYXXMGW5ZZXWMGZC/</a></p>
<p>Release Date: 2020-09-04</p>
<p>Fix Resolution: 2.9.10-7</p>
</p>
</details>
<p></p>
4,594 | 23,830,482,328 | IssuesEvent | 2022-09-05 20:02:48 | centerofci/mathesar | https://api.github.com/repos/centerofci/mathesar | opened | Data Explorer frontend - Demo video readiness | work: frontend status: ready restricted: maintainers type: meta
The following checklist is a bunch of bugs and enhancements for Data Explorer's demo video readiness:
- [ ] Implement `_array` mathesar type
- [ ] Cell component for array type
- [ ] Type specific handling for items within array
- [ ] Filtering, grouping
- [ ] Fix alias generation for columns in transformations
- There's a bug where output aliases aren't unique when transforms are applied
- [ ] Decide and implement default summarization when user navigates from grouped table result to Data Explorer, showing record summaries
- Related meeting notes: [Record summaries in data explorer](https://wiki.mathesar.org/en/meeting-notes/2022-09#h-2022-09-02-record-summaries-in-data-explorer)
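The alias-uniqueness item above essentially calls for a deduplication pass over generated output aliases. A hypothetical sketch of that fix (not Mathesar's actual code; it also assumes no pre-existing `_n`-suffixed aliases):

```python
def make_unique_aliases(aliases):
    """Suffix repeated column aliases with a counter so every output alias is unique."""
    seen = {}
    unique = []
    for alias in aliases:
        if alias in seen:
            seen[alias] += 1
            unique.append(f"{alias}_{seen[alias]}")
        else:
            seen[alias] = 0
            unique.append(alias)
    return unique
```

For example, `["id", "name", "id"]` becomes `["id", "name", "id_1"]`, so two transforms emitting `id` no longer collide.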
1,723 | 6,574,505,549 | IssuesEvent | 2017-09-11 13:08:30 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | Unarchive module not working with "mode" option set and tar.gz archives | affects_2.1 bug_report waiting_on_maintainer
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
unarchive
##### ANSIBLE VERSION
```
ansible-playbook 2.1.1.0
config file =
configured module search path = Default w/o overrides
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### SUMMARY
Unarchive does not work with the `mode:` option set for a tar.gz archive; it seems to work fine for a sample zip archive.
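The expected behaviour, in plain-Python terms, is roughly "extract the archive, then apply the requested mode to the extracted paths". A sketch of that intent (not the module's actual implementation):

```python
import os
import tarfile

def unarchive_with_mode(src, dest, mode=0o750):
    """Extract a .tar.gz to `dest` and apply `mode` to every extracted entry."""
    os.makedirs(dest, exist_ok=True)
    with tarfile.open(src, "r:gz") as tar:
        members = tar.getmembers()
        tar.extractall(dest)
    # Apply the requested permissions after extraction, like `mode:` should.
    for member in members:
        os.chmod(os.path.join(dest, member.name), mode)
    return [m.name for m in members]
```

With the reproduction below, the equivalent call would be `unarchive_with_mode("/home/ec2-user/foo.tar.gz", "/home/ec2-user/foo", 0o750)`.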
##### STEPS TO REPRODUCE
```
✗ cat > inv3.yml <<EOF
[servicetest]
${MYSERVERIP}
[servicetest:vars]
ansible_ssh_user=${MYUSER}
ansible_ssh_private_key_file=${MYSSHPEM}
EOF
✗ tar -cvz -f foo.tar.gz inv3.yml
a inv3.yml
✗ scp -i ${MYSSHPEM} foo.tar.gz ${MYUSER}@${MYSERVERIP}:~/foo.tar.gz
foo.tar.gz
✗ cat baz.yml
- hosts: servicetest
  tasks:
    - unarchive:
        src: /home/ec2-user/foo.tar.gz
        dest: /home/ec2-user/foo
        copy: no
        mode: 0750
✗ ansible-playbook -i inv3.yml baz.yml -vvvv
No config file found; using defaults
Loaded callback default of type stdout, v2.0
PLAYBOOK: baz.yml **************************************************************
1 plays in baz.yml
PLAY [servicetest] *************************************************************
TASK [setup] *******************************************************************
<10.11.12.13> ESTABLISH SSH CONNECTION FOR USER: ansible-user
<10.11.12.13> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/Users/alxnov/.ssh/ansible-user.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ansible-user -o ConnectTimeout=10 -o ControlPath=/Users/alxnov/.ansible/cp/ansible-ssh-%h-%p-%r 10.11.12.13 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1477938800.66-23242196276095 `" && echo ansible-tmp-1477938800.66-23242196276095="` echo $HOME/.ansible/tmp/ansible-tmp-1477938800.66-23242196276095 `" ) && sleep 0'"'"''
<10.11.12.13> PUT /var/folders/rs/rf66zygj1tqgx6whbtstr38w0000gp/T/tmpirT5Ap TO /home/ansible-user/.ansible/tmp/ansible-tmp-1477938800.66-23242196276095/setup
<10.11.12.13> SSH: EXEC sftp -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/Users/alxnov/.ssh/ansible-user.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ansible-user -o ConnectTimeout=10 -o ControlPath=/Users/alxnov/.ansible/cp/ansible-ssh-%h-%p-%r '[10.11.12.13]'
<10.11.12.13> ESTABLISH SSH CONNECTION FOR USER: ansible-user
<10.11.12.13> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/Users/alxnov/.ssh/ansible-user.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ansible-user -o ConnectTimeout=10 -o ControlPath=/Users/alxnov/.ansible/cp/ansible-ssh-%h-%p-%r -tt 10.11.12.13 '/bin/sh -c '"'"'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/ansible-user/.ansible/tmp/ansible-tmp-1477938800.66-23242196276095/setup; rm -rf "/home/ansible-user/.ansible/tmp/ansible-tmp-1477938800.66-23242196276095/" > /dev/null 2>&1 && sleep 0'"'"''
ok: [10.11.12.13]
TASK [unarchive] ***************************************************************
task path: /Users/alxnov/Workspace/firewalkwithme/playbooks/fluffy/baz.yml:3
<10.11.12.13> ESTABLISH SSH CONNECTION FOR USER: ansible-user
<10.11.12.13> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/Users/alxnov/.ssh/ansible-user.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ansible-user -o ConnectTimeout=10 -o ControlPath=/Users/alxnov/.ansible/cp/ansible-ssh-%h-%p-%r 10.11.12.13 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1477938801.5-158057866662852 `" && echo ansible-tmp-1477938801.5-158057866662852="` echo $HOME/.ansible/tmp/ansible-tmp-1477938801.5-158057866662852 `" ) && sleep 0'"'"''
<10.11.12.13> ESTABLISH SSH CONNECTION FOR USER: ansible-user
<10.11.12.13> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/Users/alxnov/.ssh/ansible-user.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ansible-user -o ConnectTimeout=10 -o ControlPath=/Users/alxnov/.ansible/cp/ansible-ssh-%h-%p-%r 10.11.12.13 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1477938801.61-150668743544373 `" && echo ansible-tmp-1477938801.61-150668743544373="` echo $HOME/.ansible/tmp/ansible-tmp-1477938801.61-150668743544373 `" ) && sleep 0'"'"''
<10.11.12.13> PUT /var/folders/rs/rf66zygj1tqgx6whbtstr38w0000gp/T/tmpvDaqWq TO /home/ansible-user/.ansible/tmp/ansible-tmp-1477938801.61-150668743544373/stat
<10.11.12.13> SSH: EXEC sftp -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/Users/alxnov/.ssh/ansible-user.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ansible-user -o ConnectTimeout=10 -o ControlPath=/Users/alxnov/.ansible/cp/ansible-ssh-%h-%p-%r '[10.11.12.13]'
<10.11.12.13> ESTABLISH SSH CONNECTION FOR USER: ansible-user
<10.11.12.13> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/Users/alxnov/.ssh/ansible-user.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ansible-user -o ConnectTimeout=10 -o ControlPath=/Users/alxnov/.ansible/cp/ansible-ssh-%h-%p-%r -tt 10.11.12.13 '/bin/sh -c '"'"'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/ansible-user/.ansible/tmp/ansible-tmp-1477938801.61-150668743544373/stat; rm -rf "/home/ansible-user/.ansible/tmp/ansible-tmp-1477938801.61-150668743544373/" > /dev/null 2>&1 && sleep 0'"'"''
<10.11.12.13> ESTABLISH SSH CONNECTION FOR USER: ansible-user
<10.11.12.13> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/Users/alxnov/.ssh/ansible-user.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ansible-user -o ConnectTimeout=10 -o ControlPath=/Users/alxnov/.ansible/cp/ansible-ssh-%h-%p-%r 10.11.12.13 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1477938802.07-258573422285916 `" && echo ansible-tmp-1477938802.07-258573422285916="` echo $HOME/.ansible/tmp/ansible-tmp-1477938802.07-258573422285916 `" ) && sleep 0'"'"''
<10.11.12.13> PUT /var/folders/rs/rf66zygj1tqgx6whbtstr38w0000gp/T/tmpjZnZzw TO /home/ansible-user/.ansible/tmp/ansible-tmp-1477938802.07-258573422285916/unarchive
<10.11.12.13> SSH: EXEC sftp -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/Users/alxnov/.ssh/ansible-user.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ansible-user -o ConnectTimeout=10 -o ControlPath=/Users/alxnov/.ansible/cp/ansible-ssh-%h-%p-%r '[10.11.12.13]'
<10.11.12.13> ESTABLISH SSH CONNECTION FOR USER: ansible-user
<10.11.12.13> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/Users/alxnov/.ssh/ansible-user.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ansible-user -o ConnectTimeout=10 -o ControlPath=/Users/alxnov/.ansible/cp/ansible-ssh-%h-%p-%r -tt 10.11.12.13 '/bin/sh -c '"'"'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/ansible-user/.ansible/tmp/ansible-tmp-1477938802.07-258573422285916/unarchive; rm -rf "/home/ansible-user/.ansible/tmp/ansible-tmp-1477938802.07-258573422285916/" > /dev/null 2>&1 && sleep 0'"'"''
<10.11.12.13> ESTABLISH SSH CONNECTION FOR USER: ansible-user
<10.11.12.13> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/Users/alxnov/.ssh/ansible-user.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ansible-user -o ConnectTimeout=10 -o ControlPath=/Users/alxnov/.ansible/cp/ansible-ssh-%h-%p-%r 10.11.12.13 '/bin/sh -c '"'"'rm -f -r /home/ansible-user/.ansible/tmp/ansible-tmp-1477938801.5-158057866662852/ > /dev/null 2>&1 && sleep 0'"'"''
fatal: [10.11.12.13]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_args": {"backup": null, "content": null, "copy": false, "creates": null, "delimiter": null, "dest": "/home/ansible-user/foo", "directory_mode": null, "exclude": [], "extra_opts": [], "follow": false, "force": null, "group": null, "keep_newer": false, "list_files": false, "mode": 488, "original_basename": "foo.tar.gz", "owner": null, "regexp": null, "remote_src": null, "selevel": null, "serole": null, "setype": null, "seuser": null, "src": "/home/ansible-user/foo.tar.gz"}}, "msg": "path /home/ansible-user/foo/inv3.yml does not exist", "path": "/home/ansible-user/foo/inv3.yml", "state": "absent"}
NO MORE HOSTS LEFT *************************************************************
[WARNING]: Could not create retry file 'baz.retry'. [Errno 2] No such file or directory: ''
PLAY RECAP *********************************************************************
10.11.12.13 : ok=1 changed=0 unreachable=0 failed=1
```
##### EXPECTED RESULTS
```
✗ grep -v mode baz.yml > baz2.yml
✗ ansible-playbook -i inv3.yml baz2.yml -vvvv
No config file found; using defaults
Loaded callback default of type stdout, v2.0
PLAYBOOK: baz2.yml *************************************************************
1 plays in baz2.yml
PLAY [servicetest] *************************************************************
TASK [setup] *******************************************************************
```

Unarchive module not working with "mode" option set and tar.gz archives

##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
unarchive
##### ANSIBLE VERSION
```
ansible-playbook 2.1.1.0
config file =
configured module search path = Default w/o overrides
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### SUMMARY
The unarchive module does not work with the `mode:` option set for a tar.gz archive; it seems to work fine for a sample zip archive.
##### STEPS TO REPRODUCE
```
✗ cat > inv3.yml <<EOF
[servicetest]
${MYSERVERIP}
[servicetest:vars]
ansible_ssh_user=${MYUSER}
ansible_ssh_private_key_file=${MYSSHPEM}
EOF
✗ tar -cvz -f foo.tar.gz inv3.yml
a inv3.yml
✗ scp -i ${MYSSHPEM} foo.tar.gz ${MYUSER}@${MYSERVERIP}:~/foo.tar.gz
foo.tar.gz
✗ cat baz.yml
- hosts: servicetest
  tasks:
    - unarchive:
        src: /home/ansible-user/foo.tar.gz
        dest: /home/ansible-user/foo
        copy: no
        mode: 0750
✗ ansible-playbook -i inv3.yml baz.yml -vvvv
No config file found; using defaults
Loaded callback default of type stdout, v2.0
PLAYBOOK: baz.yml **************************************************************
1 plays in baz.yml
PLAY [servicetest] *************************************************************
TASK [setup] *******************************************************************
<10.11.12.13> ESTABLISH SSH CONNECTION FOR USER: ansible-user
<10.11.12.13> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/Users/alxnov/.ssh/ansible-user.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ansible-user -o ConnectTimeout=10 -o ControlPath=/Users/alxnov/.ansible/cp/ansible-ssh-%h-%p-%r 10.11.12.13 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1477938800.66-23242196276095 `" && echo ansible-tmp-1477938800.66-23242196276095="` echo $HOME/.ansible/tmp/ansible-tmp-1477938800.66-23242196276095 `" ) && sleep 0'"'"''
<10.11.12.13> PUT /var/folders/rs/rf66zygj1tqgx6whbtstr38w0000gp/T/tmpirT5Ap TO /home/ansible-user/.ansible/tmp/ansible-tmp-1477938800.66-23242196276095/setup
<10.11.12.13> SSH: EXEC sftp -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/Users/alxnov/.ssh/ansible-user.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ansible-user -o ConnectTimeout=10 -o ControlPath=/Users/alxnov/.ansible/cp/ansible-ssh-%h-%p-%r '[10.11.12.13]'
<10.11.12.13> ESTABLISH SSH CONNECTION FOR USER: ansible-user
<10.11.12.13> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/Users/alxnov/.ssh/ansible-user.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ansible-user -o ConnectTimeout=10 -o ControlPath=/Users/alxnov/.ansible/cp/ansible-ssh-%h-%p-%r -tt 10.11.12.13 '/bin/sh -c '"'"'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/ansible-user/.ansible/tmp/ansible-tmp-1477938800.66-23242196276095/setup; rm -rf "/home/ansible-user/.ansible/tmp/ansible-tmp-1477938800.66-23242196276095/" > /dev/null 2>&1 && sleep 0'"'"''
ok: [10.11.12.13]
TASK [unarchive] ***************************************************************
task path: /Users/alxnov/Workspace/firewalkwithme/playbooks/fluffy/baz.yml:3
<10.11.12.13> ESTABLISH SSH CONNECTION FOR USER: ansible-user
<10.11.12.13> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/Users/alxnov/.ssh/ansible-user.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ansible-user -o ConnectTimeout=10 -o ControlPath=/Users/alxnov/.ansible/cp/ansible-ssh-%h-%p-%r 10.11.12.13 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1477938801.5-158057866662852 `" && echo ansible-tmp-1477938801.5-158057866662852="` echo $HOME/.ansible/tmp/ansible-tmp-1477938801.5-158057866662852 `" ) && sleep 0'"'"''
<10.11.12.13> ESTABLISH SSH CONNECTION FOR USER: ansible-user
<10.11.12.13> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/Users/alxnov/.ssh/ansible-user.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ansible-user -o ConnectTimeout=10 -o ControlPath=/Users/alxnov/.ansible/cp/ansible-ssh-%h-%p-%r 10.11.12.13 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1477938801.61-150668743544373 `" && echo ansible-tmp-1477938801.61-150668743544373="` echo $HOME/.ansible/tmp/ansible-tmp-1477938801.61-150668743544373 `" ) && sleep 0'"'"''
<10.11.12.13> PUT /var/folders/rs/rf66zygj1tqgx6whbtstr38w0000gp/T/tmpvDaqWq TO /home/ansible-user/.ansible/tmp/ansible-tmp-1477938801.61-150668743544373/stat
<10.11.12.13> SSH: EXEC sftp -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/Users/alxnov/.ssh/ansible-user.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ansible-user -o ConnectTimeout=10 -o ControlPath=/Users/alxnov/.ansible/cp/ansible-ssh-%h-%p-%r '[10.11.12.13]'
<10.11.12.13> ESTABLISH SSH CONNECTION FOR USER: ansible-user
<10.11.12.13> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/Users/alxnov/.ssh/ansible-user.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ansible-user -o ConnectTimeout=10 -o ControlPath=/Users/alxnov/.ansible/cp/ansible-ssh-%h-%p-%r -tt 10.11.12.13 '/bin/sh -c '"'"'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/ansible-user/.ansible/tmp/ansible-tmp-1477938801.61-150668743544373/stat; rm -rf "/home/ansible-user/.ansible/tmp/ansible-tmp-1477938801.61-150668743544373/" > /dev/null 2>&1 && sleep 0'"'"''
<10.11.12.13> ESTABLISH SSH CONNECTION FOR USER: ansible-user
<10.11.12.13> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/Users/alxnov/.ssh/ansible-user.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ansible-user -o ConnectTimeout=10 -o ControlPath=/Users/alxnov/.ansible/cp/ansible-ssh-%h-%p-%r 10.11.12.13 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1477938802.07-258573422285916 `" && echo ansible-tmp-1477938802.07-258573422285916="` echo $HOME/.ansible/tmp/ansible-tmp-1477938802.07-258573422285916 `" ) && sleep 0'"'"''
<10.11.12.13> PUT /var/folders/rs/rf66zygj1tqgx6whbtstr38w0000gp/T/tmpjZnZzw TO /home/ansible-user/.ansible/tmp/ansible-tmp-1477938802.07-258573422285916/unarchive
<10.11.12.13> SSH: EXEC sftp -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/Users/alxnov/.ssh/ansible-user.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ansible-user -o ConnectTimeout=10 -o ControlPath=/Users/alxnov/.ansible/cp/ansible-ssh-%h-%p-%r '[10.11.12.13]'
<10.11.12.13> ESTABLISH SSH CONNECTION FOR USER: ansible-user
<10.11.12.13> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/Users/alxnov/.ssh/ansible-user.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ansible-user -o ConnectTimeout=10 -o ControlPath=/Users/alxnov/.ansible/cp/ansible-ssh-%h-%p-%r -tt 10.11.12.13 '/bin/sh -c '"'"'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/ansible-user/.ansible/tmp/ansible-tmp-1477938802.07-258573422285916/unarchive; rm -rf "/home/ansible-user/.ansible/tmp/ansible-tmp-1477938802.07-258573422285916/" > /dev/null 2>&1 && sleep 0'"'"''
<10.11.12.13> ESTABLISH SSH CONNECTION FOR USER: ansible-user
<10.11.12.13> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/Users/alxnov/.ssh/ansible-user.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ansible-user -o ConnectTimeout=10 -o ControlPath=/Users/alxnov/.ansible/cp/ansible-ssh-%h-%p-%r 10.11.12.13 '/bin/sh -c '"'"'rm -f -r /home/ansible-user/.ansible/tmp/ansible-tmp-1477938801.5-158057866662852/ > /dev/null 2>&1 && sleep 0'"'"''
fatal: [10.11.12.13]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_args": {"backup": null, "content": null, "copy": false, "creates": null, "delimiter": null, "dest": "/home/ansible-user/foo", "directory_mode": null, "exclude": [], "extra_opts": [], "follow": false, "force": null, "group": null, "keep_newer": false, "list_files": false, "mode": 488, "original_basename": "foo.tar.gz", "owner": null, "regexp": null, "remote_src": null, "selevel": null, "serole": null, "setype": null, "seuser": null, "src": "/home/ansible-user/foo.tar.gz"}}, "msg": "path /home/ansible-user/foo/inv3.yml does not exist", "path": "/home/ansible-user/foo/inv3.yml", "state": "absent"}
NO MORE HOSTS LEFT *************************************************************
[WARNING]: Could not create retry file 'baz.retry'. [Errno 2] No such file or directory: ''
PLAY RECAP *********************************************************************
10.11.12.13 : ok=1 changed=0 unreachable=0 failed=1
```
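One detail from the failing run above: the playbook sets `mode: 0750`, yet the module args report `"mode": 488`. That is not corruption — YAML 1.1 loaders (and POSIX `printf`) read a leading-zero integer literal as octal, and octal 0750 is 488 in decimal. A quick check in the shell:

```shell
# A leading-zero integer is parsed as octal, so 0750 -> 488 decimal.
# This matches the "mode": 488 seen in the failed invocation output.
printf '%d\n' 0750
# 488
```

The same permission bits are intended either way; quoting the mode (`"0750"`) in YAML avoids the ambiguity.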
##### EXPECTED RESULTS
```
✗ grep -v mode baz.yml > baz2.yml
✗ ansible-playbook -i inv3.yml baz2.yml -vvvv
No config file found; using defaults
Loaded callback default of type stdout, v2.0
PLAYBOOK: baz2.yml *************************************************************
1 plays in baz2.yml
PLAY [servicetest] *************************************************************
TASK [setup] *******************************************************************
<10.11.12.13> ESTABLISH SSH CONNECTION FOR USER: ansible-user
<10.11.12.13> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/Users/alxnov/.ssh/ansible-user.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ansible-user -o ConnectTimeout=10 -o ControlPath=/Users/alxnov/.ansible/cp/ansible-ssh-%h-%p-%r 10.11.12.13 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1477939280.81-250171781459807 `" && echo ansible-tmp-1477939280.81-250171781459807="` echo $HOME/.ansible/tmp/ansible-tmp-1477939280.81-250171781459807 `" ) && sleep 0'"'"''
<10.11.12.13> PUT /var/folders/rs/rf66zygj1tqgx6whbtstr38w0000gp/T/tmpqYQRYi TO /home/ansible-user/.ansible/tmp/ansible-tmp-1477939280.81-250171781459807/setup
<10.11.12.13> SSH: EXEC sftp -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/Users/alxnov/.ssh/ansible-user.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ansible-user -o ConnectTimeout=10 -o ControlPath=/Users/alxnov/.ansible/cp/ansible-ssh-%h-%p-%r '[10.11.12.13]'
<10.11.12.13> ESTABLISH SSH CONNECTION FOR USER: ansible-user
<10.11.12.13> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/Users/alxnov/.ssh/ansible-user.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ansible-user -o ConnectTimeout=10 -o ControlPath=/Users/alxnov/.ansible/cp/ansible-ssh-%h-%p-%r -tt 10.11.12.13 '/bin/sh -c '"'"'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/ansible-user/.ansible/tmp/ansible-tmp-1477939280.81-250171781459807/setup; rm -rf "/home/ansible-user/.ansible/tmp/ansible-tmp-1477939280.81-250171781459807/" > /dev/null 2>&1 && sleep 0'"'"''
ok: [10.11.12.13]
TASK [unarchive] ***************************************************************
task path: /Users/alxnov/Workspace/firewalkwithme/playbooks/fluffy/baz2.yml:3
<10.11.12.13> ESTABLISH SSH CONNECTION FOR USER: ansible-user
<10.11.12.13> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/Users/alxnov/.ssh/ansible-user.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ansible-user -o ConnectTimeout=10 -o ControlPath=/Users/alxnov/.ansible/cp/ansible-ssh-%h-%p-%r 10.11.12.13 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1477939281.62-278860514056026 `" && echo ansible-tmp-1477939281.62-278860514056026="` echo $HOME/.ansible/tmp/ansible-tmp-1477939281.62-278860514056026 `" ) && sleep 0'"'"''
<10.11.12.13> ESTABLISH SSH CONNECTION FOR USER: ansible-user
<10.11.12.13> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/Users/alxnov/.ssh/ansible-user.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ansible-user -o ConnectTimeout=10 -o ControlPath=/Users/alxnov/.ansible/cp/ansible-ssh-%h-%p-%r 10.11.12.13 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1477939281.73-224375086385264 `" && echo ansible-tmp-1477939281.73-224375086385264="` echo $HOME/.ansible/tmp/ansible-tmp-1477939281.73-224375086385264 `" ) && sleep 0'"'"''
<10.11.12.13> PUT /var/folders/rs/rf66zygj1tqgx6whbtstr38w0000gp/T/tmptlnx43 TO /home/ansible-user/.ansible/tmp/ansible-tmp-1477939281.73-224375086385264/stat
<10.11.12.13> SSH: EXEC sftp -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/Users/alxnov/.ssh/ansible-user.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ansible-user -o ConnectTimeout=10 -o ControlPath=/Users/alxnov/.ansible/cp/ansible-ssh-%h-%p-%r '[10.11.12.13]'
<10.11.12.13> ESTABLISH SSH CONNECTION FOR USER: ansible-user
<10.11.12.13> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/Users/alxnov/.ssh/ansible-user.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ansible-user -o ConnectTimeout=10 -o ControlPath=/Users/alxnov/.ansible/cp/ansible-ssh-%h-%p-%r -tt 10.11.12.13 '/bin/sh -c '"'"'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/ansible-user/.ansible/tmp/ansible-tmp-1477939281.73-224375086385264/stat; rm -rf "/home/ansible-user/.ansible/tmp/ansible-tmp-1477939281.73-224375086385264/" > /dev/null 2>&1 && sleep 0'"'"''
<10.11.12.13> ESTABLISH SSH CONNECTION FOR USER: ansible-user
<10.11.12.13> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/Users/alxnov/.ssh/ansible-user.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ansible-user -o ConnectTimeout=10 -o ControlPath=/Users/alxnov/.ansible/cp/ansible-ssh-%h-%p-%r 10.11.12.13 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1477939282.19-225224644675241 `" && echo ansible-tmp-1477939282.19-225224644675241="` echo $HOME/.ansible/tmp/ansible-tmp-1477939282.19-225224644675241 `" ) && sleep 0'"'"''
<10.11.12.13> PUT /var/folders/rs/rf66zygj1tqgx6whbtstr38w0000gp/T/tmpsMHgOg TO /home/ansible-user/.ansible/tmp/ansible-tmp-1477939282.19-225224644675241/unarchive
<10.11.12.13> SSH: EXEC sftp -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/Users/alxnov/.ssh/ansible-user.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ansible-user -o ConnectTimeout=10 -o ControlPath=/Users/alxnov/.ansible/cp/ansible-ssh-%h-%p-%r '[10.11.12.13]'
<10.11.12.13> ESTABLISH SSH CONNECTION FOR USER: ansible-user
<10.11.12.13> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/Users/alxnov/.ssh/ansible-user.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ansible-user -o ConnectTimeout=10 -o ControlPath=/Users/alxnov/.ansible/cp/ansible-ssh-%h-%p-%r -tt 10.11.12.13 '/bin/sh -c '"'"'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/ansible-user/.ansible/tmp/ansible-tmp-1477939282.19-225224644675241/unarchive; rm -rf "/home/ansible-user/.ansible/tmp/ansible-tmp-1477939282.19-225224644675241/" > /dev/null 2>&1 && sleep 0'"'"''
<10.11.12.13> ESTABLISH SSH CONNECTION FOR USER: ansible-user
<10.11.12.13> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/Users/alxnov/.ssh/ansible-user.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ansible-user -o ConnectTimeout=10 -o ControlPath=/Users/alxnov/.ansible/cp/ansible-ssh-%h-%p-%r 10.11.12.13 '/bin/sh -c '"'"'rm -f -r /home/ansible-user/.ansible/tmp/ansible-tmp-1477939281.62-278860514056026/ > /dev/null 2>&1 && sleep 0'"'"''
changed: [10.11.12.13] => {"changed": true, "dest": "/home/ansible-user/foo", "extract_results": {"cmd": "/usr/bin/gtar -C \"/home/ansible-user/foo\" -xz -f \"/home/ansible-user/foo.tar.gz\"", "err": "", "out": "", "rc": 0}, "gid": 1000, "group": "ansible-user", "handler": "TgzArchive", "invocation": {"module_args": {"backup": null, "content": null, "copy": false, "creates": null, "delimiter": null, "dest": "/home/ansible-user/foo", "directory_mode": null, "exclude": [], "extra_opts": [], "follow": false, "force": null, "group": null, "keep_newer": false, "list_files": false, "mode": null, "original_basename": "foo.tar.gz", "owner": null, "regexp": null, "remote_src": null, "selevel": null, "serole": null, "setype": null, "seuser": null, "src": "/home/ansible-user/foo.tar.gz"}}, "mode": "0775", "owner": "ansible-user", "secontext": "unconfined_u:object_r:user_home_t:s0", "size": 4096, "src": "/home/ansible-user/foo.tar.gz", "state": "directory", "uid": 1000}
PLAY RECAP *********************************************************************
10.11.12.13 : ok=2 changed=1 unreachable=0 failed=0
```
user o connecttimeout o controlpath users alxnov ansible cp ansible ssh h p r tt bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python home ansible user ansible tmp ansible tmp unarchive rm rf home ansible user ansible tmp ansible tmp dev null sleep establish ssh connection for user ansible user ssh exec ssh c vvv o controlmaster auto o controlpersist o identityfile users alxnov ssh ansible user pem o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user ansible user o connecttimeout o controlpath users alxnov ansible cp ansible ssh h p r bin sh c rm f r home ansible user ansible tmp ansible tmp dev null sleep changed changed true dest home ansible user foo extract results cmd usr bin gtar c home ansible user foo xz f home ansible user foo tar gz err out rc gid group ansible user handler tgzarchive invocation module args backup null content null copy false creates null delimiter null dest home ansible user foo directory mode null exclude extra opts follow false force null group null keep newer false list files false mode null original basename foo tar gz owner null regexp null remote src null selevel null serole null setype null seuser null src home ansible user foo tar gz mode owner ansible user secontext unconfined u object r user home t size src home ansible user foo tar gz state directory uid play recap ok changed unreachable failed | 1 |
9,248 | 13,026,118,984 | IssuesEvent | 2020-07-27 14:32:17 | snek-at/ops-engine | https://api.github.com/repos/snek-at/ops-engine | closed | Template Repository for Wagtail | Requirement | **Describe the feature or change you'd like**
A template repository for using Wagtail should be created.
| 1.0 | Template Repository for Wagtail - **Describe the feature or change you'd like**
A template repository for using Wagtail should be created.
| non_main | template repository for wagtail describe the feature or change you d like a template repository for using wagtail should be created | 0 |
626,744 | 19,832,424,380 | IssuesEvent | 2022-01-20 13:23:59 | input-output-hk/cardano-ledger | https://api.github.com/repos/input-output-hk/cardano-ledger | closed | Inconsistent variable naming between ledger executable and formal specs in Ch8.3 UTxO Properties | wontfix formal-spec :scroll: executable-model :gear: byron priority low | In Chapter 8.3 of the [formal ledger spec](https://hydra.iohk.io/build/1243914/download/1/ledger-spec.pdf), Property 1 (No double spending), there is a small inconsistency between the variable names used compared to the ones used in the [executable spec](https://github.com/input-output-hk/cardano-ledger-specs/blob/01f1f22f707d990ee765a1d351ef75cc9ff75a17/byron/ledger/executable-spec/test/Ledger/UTxO/Properties.hs#L37). Functionality isn't affected.
In the formal spec `i<j` is used, as highlighted in the image below, where as in the executable spec the code works as `j<i`.

| 1.0 | Inconsistent variable naming between ledger executable and formal specs in Ch8.3 UTxO Properties - In Chapter 8.3 of the [formal ledger spec](https://hydra.iohk.io/build/1243914/download/1/ledger-spec.pdf), Property 1 (No double spending), there is a small inconsistency between the variable names used compared to the ones used in the [executable spec](https://github.com/input-output-hk/cardano-ledger-specs/blob/01f1f22f707d990ee765a1d351ef75cc9ff75a17/byron/ledger/executable-spec/test/Ledger/UTxO/Properties.hs#L37). Functionality isn't affected.
In the formal spec `i<j` is used, as highlighted in the image below, where as in the executable spec the code works as `j<i`.

| non_main | inconsistent variable naming between ledger executable and formal specs in utxo properties in chapter of the property no double spending there is a small inconsistency between the variable names used compared to the ones used in the functionality isn t affected in the formal spec i j is used as highlighted in the image below where as in the executable spec the code works as j i | 0 |
5,598 | 28,041,106,592 | IssuesEvent | 2023-03-28 18:35:10 | bazelbuild/intellij | https://api.github.com/repos/bazelbuild/intellij | closed | $(location //:target) doesn't expand from plugin | type: bug product: CLion os: linux lang: c++ awaiting-maintainer | My intellij log shows:
```
2019-02-28 14:16:52,310 [17223181] WARN - idea.blaze.cpp.BlazeCWorkspace - Issues collecting info from C++ compiler. Showing first few out of 2:
[Compiler exited with error code 1: /tmp/blaze_compiler2.sh -xc++ -nostdinc -isystem external/com_ubuntu_sysroot/opt/libcxx/include/c++/v1 -idirafter external/com_ubuntu_sysroot/usr/include -idirafter external/com_ubuntu_sysroot/usr/include/x86_64-linux-gnu -idirafter external/clang/lib/clang/6.0.0/include
[.... eradicated ...]
(location\ //tools/cruise_tidy:CruiseTidy.so) -Xclang -add-plugin -Xclang cruise-tidy -Xclang -plugin-arg-cruise-tidy -Xclang -packages-file -Xclang -plugin-arg-cruise-tidy -Xclang autogenerated/list_cruise_pkgs.bzl -include $(location\ //tools/cruise_tidy:cruise_tidy_dependency_hack.h) -D__CLANG_SUPPORT_DYN_ANNOTATION__ -DROSCONSOLE_BACKEND=log4cxx -DPYTHON_BAZEL_HACK -DTIXML_USE_STL -DROSPACK_API_BACKCOMPAT_V1 -DYAML_CPP_NO_CONTRIB -DEIGEN_MPL2_ONLY -DOPENCV_TINY_GPU_MODULE -DHAVE_NEW_YAMLCPP -DSQLITE_OMIT_DEPRECATED -fpch-preprocess -v -dD -E
clang version 6.0.0 (tags/RELEASE_600/final)
Target: x86_64-unknown-linux-gnu
Thread model: posix
InstalledDir: /home/tdeegan/.cache/bazel/_bazel_tdeegan/a8696b2acbcb6442362edeb92dd13e97/external/clang/bin
Found CUDA installation: /usr/local/cuda-9.0, version 9.0
clang: [0;1;31merror: [0mno such file or directory: '//tools/cruise_tidy:CruiseTidy.so)'[0m
clang: [0;1;31merror: [0mno such file or directory: '//tools/cruise_tidy:cruise_tidy_dependency_hack.h)'[0m
```
I've added the following copts to my cc_library:
```
[
"-Xclang",
"$(location //tools/cruise_tidy:CruiseTidy.so)",
]
```
It seems as though the command line invocation created by the plugin does not expand the `$(location //tools/cruise_tidy:CruiseTidy.so)` things from skylark... Any ideas how to overcome this? | True | $(location //:target) doesn't expand from plugin - My intellij log shows:
```
2019-02-28 14:16:52,310 [17223181] WARN - idea.blaze.cpp.BlazeCWorkspace - Issues collecting info from C++ compiler. Showing first few out of 2:
[Compiler exited with error code 1: /tmp/blaze_compiler2.sh -xc++ -nostdinc -isystem external/com_ubuntu_sysroot/opt/libcxx/include/c++/v1 -idirafter external/com_ubuntu_sysroot/usr/include -idirafter external/com_ubuntu_sysroot/usr/include/x86_64-linux-gnu -idirafter external/clang/lib/clang/6.0.0/include
[.... eradicated ...]
(location\ //tools/cruise_tidy:CruiseTidy.so) -Xclang -add-plugin -Xclang cruise-tidy -Xclang -plugin-arg-cruise-tidy -Xclang -packages-file -Xclang -plugin-arg-cruise-tidy -Xclang autogenerated/list_cruise_pkgs.bzl -include $(location\ //tools/cruise_tidy:cruise_tidy_dependency_hack.h) -D__CLANG_SUPPORT_DYN_ANNOTATION__ -DROSCONSOLE_BACKEND=log4cxx -DPYTHON_BAZEL_HACK -DTIXML_USE_STL -DROSPACK_API_BACKCOMPAT_V1 -DYAML_CPP_NO_CONTRIB -DEIGEN_MPL2_ONLY -DOPENCV_TINY_GPU_MODULE -DHAVE_NEW_YAMLCPP -DSQLITE_OMIT_DEPRECATED -fpch-preprocess -v -dD -E
clang version 6.0.0 (tags/RELEASE_600/final)
Target: x86_64-unknown-linux-gnu
Thread model: posix
InstalledDir: /home/tdeegan/.cache/bazel/_bazel_tdeegan/a8696b2acbcb6442362edeb92dd13e97/external/clang/bin
Found CUDA installation: /usr/local/cuda-9.0, version 9.0
clang: [0;1;31merror: [0mno such file or directory: '//tools/cruise_tidy:CruiseTidy.so)'[0m
clang: [0;1;31merror: [0mno such file or directory: '//tools/cruise_tidy:cruise_tidy_dependency_hack.h)'[0m
```
I've added the following copts to my cc_library:
```
[
"-Xclang",
"$(location //tools/cruise_tidy:CruiseTidy.so)",
]
```
It seems as though the command line invocation created by the plugin does not expand the `$(location //tools/cruise_tidy:CruiseTidy.so)` things from skylark... Any ideas how to overcome this? | main | location target doesn t expand from plugin my intellij log shows warn idea blaze cpp blazecworkspace issues collecting info from c compiler showing first few out of compiler exited with error code tmp blaze sh xc nostdinc isystem external com ubuntu sysroot opt libcxx include c idirafter external com ubuntu sysroot usr include idirafter external com ubuntu sysroot usr include linux gnu idirafter external clang lib clang include location tools cruise tidy cruisetidy so xclang add plugin xclang cruise tidy xclang plugin arg cruise tidy xclang packages file xclang plugin arg cruise tidy xclang autogenerated list cruise pkgs bzl include location tools cruise tidy cruise tidy dependency hack h d clang support dyn annotation drosconsole backend dpython bazel hack dtixml use stl drospack api backcompat dyaml cpp no contrib deigen only dopencv tiny gpu module dhave new yamlcpp dsqlite omit deprecated fpch preprocess v dd e clang version tags release final target unknown linux gnu thread model posix installeddir home tdeegan cache bazel bazel tdeegan external clang bin found cuda installation usr local cuda version clang such file or directory tools cruise tidy cruisetidy so clang such file or directory tools cruise tidy cruise tidy dependency hack h i ve added the following copts to my cc library xclang location tools cruise tidy cruisetidy so it seems as though the command line invocation created by the plugin does not expand the location tools cruise tidy cruisetidy so things from skylark any ideas how to overcome this | 1 |
375,796 | 11,134,803,794 | IssuesEvent | 2019-12-20 12:49:09 | visual-framework/vf-core | https://api.github.com/repos/visual-framework/vf-core | closed | ebi-vf1-integration component additions | Priority: High Status: WIP Type: Bug | In the `vf-wp` WordPress theme I'm seeing CSS specificity issues with `ebi-global.css` overriding the friendlier `vf-core` styles.
Logging those as I notice them in this issue below. | 1.0 | ebi-vf1-integration component additions - In the `vf-wp` WordPress theme I'm seeing CSS specificity issues with `ebi-global.css` overriding the friendlier `vf-core` styles.
Logging those as I notice them in this issue below. | non_main | ebi integration component additions in the vf wp wordpress theme i m seeing css specificity issues with ebi global css overriding the friendlier vf core styles logging those as i notice them in this issue below | 0 |
2,471 | 8,639,905,167 | IssuesEvent | 2018-11-23 22:34:12 | F5OEO/rpitx | https://api.github.com/repos/F5OEO/rpitx | closed | Does not work on OrangePI | V1 related (not maintained) | Hello,
today I´ve discovered Rpitx, and wanted to try it out on my Raspberry compatible board: **OrangePI PC**.
I´ve followed the manual for FM Modulation.
When running `sudo ./rpitx -m RF -i fm.ft -f 100000 -l`
Following happens:
```
rpitx Version 0.2 compiled Sep 16 2016 (F5OEO Evariste) running on Master PLL = 1000000000
Wheezy
Failed to open mailbox
MASH 1 Freq PLL# 6
END OF PiTx
root@orangepi:~/rpitx#
```
I guess the error is `Failed to open mailbox` - but I have no idea how to fix this error or which problem causes it. Any ideas?
| True | Does not work on OrangePI - Hello,
today I´ve discovered Rpitx, and wanted to try it out on my Raspberry compatible board: **OrangePI PC**.
I´ve followed the manual for FM Modulation.
When running `sudo ./rpitx -m RF -i fm.ft -f 100000 -l`
Following happens:
```
rpitx Version 0.2 compiled Sep 16 2016 (F5OEO Evariste) running on Master PLL = 1000000000
Wheezy
Failed to open mailbox
MASH 1 Freq PLL# 6
END OF PiTx
root@orangepi:~/rpitx#
```
I guess the error is `Failed to open mailbox` - but I have no idea how to fix this error or which problem causes it. Any ideas?
| main | does not work on orangepi hello today i´ve discovered rpitx and wanted to try it out on my raspberry compatible board orangepi pc i´ve followed the manual for fm modulation when running sudo rpitx m rf i fm ft f l following happens rpitx version compiled sep evariste running on master pll wheezy failed to open mailbox mash freq pll end of pitx root orangepi rpitx i guess the error is failed to open mailbox but i have no idea how to fix this error or which problem causes it any ideas | 1 |
37,448 | 15,297,239,898 | IssuesEvent | 2021-02-24 08:08:25 | Tencent/bk-bcs | https://api.github.com/repos/Tencent/bk-bcs | closed | [feature][bcs-logbeat-sidecar] 支持配置采集器打包上报功能 | feature inner service | **feature相关背景与描述**
每次上报日志时会携带重复的meta数据,业务可能会有大量的日志上报,导致不必要的网络资源浪费,采集器支持日志打包上报的功能,通过开启这个功能,可以有效减少网络资源的浪费
**解决方案描述**
通过配置采集器的package字段即可开启该功能,经测试可用 | 1.0 | [feature][bcs-logbeat-sidecar] 支持配置采集器打包上报功能 - **feature相关背景与描述**
每次上报日志时会携带重复的meta数据,业务可能会有大量的日志上报,导致不必要的网络资源浪费,采集器支持日志打包上报的功能,通过开启这个功能,可以有效减少网络资源的浪费
**解决方案描述**
通过配置采集器的package字段即可开启该功能,经测试可用 | non_main | 支持配置采集器打包上报功能 feature相关背景与描述 每次上报日志时会携带重复的meta数据,业务可能会有大量的日志上报,导致不必要的网络资源浪费,采集器支持日志打包上报的功能,通过开启这个功能,可以有效减少网络资源的浪费 解决方案描述 通过配置采集器的package字段即可开启该功能,经测试可用 | 0 |
22,421 | 15,171,664,014 | IssuesEvent | 2021-02-13 04:31:22 | intel/dffml | https://api.github.com/repos/intel/dffml | opened | ci: Run tests for latest release in GitHub actions using schedule | enhancement kind/infrastructure p0 | We need to copy the existing targets. We probably also want to toggle consoletest to not use `-e` when we're testing the latest release.
References:
- https://futurestud.io/tutorials/github-actions-trigger-builds-on-schedule-cron | 1.0 | ci: Run tests for latest release in GitHub actions using schedule - We need to copy the existing targets. We probably also want to toggle consoletest to not use `-e` when we're testing the latest release.
References:
- https://futurestud.io/tutorials/github-actions-trigger-builds-on-schedule-cron | non_main | ci run tests for latest release in github actions using schedule we need to copy the existing targets we probably also want to toggle consoletest to not use e when we re testing the latest release references | 0 |
2,858 | 10,268,731,810 | IssuesEvent | 2019-08-23 07:15:06 | arcticicestudio/arctic | https://api.github.com/repos/arcticicestudio/arctic | opened | yarn/npm configuration file | context-workflow scope-dx scope-maintainability scope-stability type-task | <p align="center"><img src="https://upload.wikimedia.org/wikipedia/commons/d/db/Npm-logo.svg?sanitize=true" width="20%" /></p>
Add the [`.yarnrc`][y-d-rc] and [`.npmrc`][npm-d-rc] configuration files to enforce project-wide standards for all core team members and contributors. This includes the usage of the [Yarn][y-d-c] and [npm][npm-d-ex] `exact` version resolution as well as disabling the creation of the npm [`package-lock.json`][npm-d-lock] file in order to prevent conflicts with the [`yarn.lock`][y-d-lock] file.
[npm-d-lock]: https://docs.npmjs.com/files/package-lock.json
[npm-d-ex]: https://docs.npmjs.com/misc/config#save-exact
[npm-d-rc]: https://docs.npmjs.com/files/npmrc
[y-d-rc]: https://yarnpkg.com/lang/en/docs/yarnrc
[y-d-c]: https://yarnpkg.com/lang/en/docs/cli/config
[y-d-lock]: https://yarnpkg.com/lang/en/docs/yarn-lock | True | yarn/npm configuration file - <p align="center"><img src="https://upload.wikimedia.org/wikipedia/commons/d/db/Npm-logo.svg?sanitize=true" width="20%" /></p>
Add the [`.yarnrc`][y-d-rc] and [`.npmrc`][npm-d-rc] configuration files to enforce project-wide standards for all core team members and contributors. This includes the usage of the [Yarn][y-d-c] and [npm][npm-d-ex] `exact` version resolution as well as disabling the creation of the npm [`package-lock.json`][npm-d-lock] file in order to prevent conflicts with the [`yarn.lock`][y-d-lock] file.
[npm-d-lock]: https://docs.npmjs.com/files/package-lock.json
[npm-d-ex]: https://docs.npmjs.com/misc/config#save-exact
[npm-d-rc]: https://docs.npmjs.com/files/npmrc
[y-d-rc]: https://yarnpkg.com/lang/en/docs/yarnrc
[y-d-c]: https://yarnpkg.com/lang/en/docs/cli/config
[y-d-lock]: https://yarnpkg.com/lang/en/docs/yarn-lock | main | yarn npm configuration file add the and configuration files to enforce project wide standards for all core team members and contributors this includes the usage of the and exact version resolution as well as disabling the creation of the npm file in order to prevent conflicts with the file | 1 |
381 | 3,413,337,417 | IssuesEvent | 2015-12-06 16:24:56 | spyder-ide/spyder | https://api.github.com/repos/spyder-ide/spyder | closed | Create conda.recipe folder at repo level | Enhancement Maintainability | Most project out there that use conda, have the recipe embedded directly in the repo in a `conda.recipe` folder.
I think is a good idea to have the same. | True | Create conda.recipe folder at repo level - Most project out there that use conda, have the recipe embedded directly in the repo in a `conda.recipe` folder.
I think is a good idea to have the same. | main | create conda recipe folder at repo level most project out there that use conda have the recipe embedded directly in the repo in a conda recipe folder i think is a good idea to have the same | 1 |
151,576 | 5,824,498,281 | IssuesEvent | 2017-05-07 13:37:19 | javaee/mvc-spec | https://api.github.com/repos/javaee/mvc-spec | closed | Explore ways to avoid hardcoding URIs in templates | Priority: Major Type: Task | In templates, links (most importantly a elements) and form actions require a URI. If this URI is provided as a string, even if prefixed by a getBaseUri() (or similar) call, this will be redundant to the declarative mapping to URIs on controller methods. JAX-RS provides the UrIBuilder API to address this from within resources, but there should at least be a convenient way to access this from templates, possibly from the MvcContext object. | 1.0 | Explore ways to avoid hardcoding URIs in templates - In templates, links (most importantly a elements) and form actions require a URI. If this URI is provided as a string, even if prefixed by a getBaseUri() (or similar) call, this will be redundant to the declarative mapping to URIs on controller methods. JAX-RS provides the UrIBuilder API to address this from within resources, but there should at least be a convenient way to access this from templates, possibly from the MvcContext object. | non_main | explore ways to avoid hardcoding uris in templates in templates links most importantly a elements and form actions require a uri if this uri is provided as a string even if prefixed by a getbaseuri or similar call this will be redundant to the declarative mapping to uris on controller methods jax rs provides the uribuilder api to address this from within resources but there should at least be a convenient way to access this from templates possibly from the mvccontext object | 0 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.