Unnamed: 0 int64 1 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 7 112 | repo_url stringlengths 36 141 | action stringclasses 3 values | title stringlengths 3 438 | labels stringlengths 4 308 | body stringlengths 7 254k | index stringclasses 7 values | text_combine stringlengths 96 254k | label stringclasses 2 values | text stringlengths 96 246k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
60,390 | 14,792,286,649 | IssuesEvent | 2021-01-12 14:34:42 | DaemonEngine/Daemon | https://api.github.com/repos/DaemonEngine/Daemon | closed | Various loud CMake warnings when compiling. | A-Build T-Enhancement | Compiling Unvanquished with a recent CMake version (3.19) produces a lot of warnings about behavior being deprecated soon. I'd like to take a look at this to make the build nicer on my machine. Assigning to myself as I'll make a PR for this shortly. | 1.0 | Various loud CMake warnings when compiling. - Compiling Unvanquished with a recent CMake version (3.19) produces a lot of warnings about behavior being deprecated soon. I'd like to take a look at this to make the build nicer on my machine. Assigning to myself as I'll make a PR for this shortly. | non_main | various loud cmake warnings when compiling compiling unvanquished with a recent cmake version produces a lot of warnings about behavior being deprecated soon i d like to take a look at this to make the build nicer on my machine assigning to myself as i ll make a pr for this shortly | 0 |
3,027 | 11,201,203,024 | IssuesEvent | 2020-01-04 01:19:53 | amyjko/faculty | https://api.github.com/repos/amyjko/faculty | closed | Move images in to sub-directories | maintainability | The image directory is getting unruly. Move blog post images into a dedicated directory, as well as other image types. | True | Move images in to sub-directories - The image directory is getting unruly. Move blog post images into a dedicated directory, as well as other image types. | main | move images in to sub directories the image directory is getting unruly move blog post images into a dedicated directory as well as other image types | 1 |
3,054 | 11,440,030,367 | IssuesEvent | 2020-02-05 08:49:02 | precice/precice | https://api.github.com/repos/precice/precice | opened | Interleaved Output of Assertions | good first issue maintainability | # Problem description
Hitting an assertion on all ranks results in chunk-wise interleaved output.
These chunks cannot be associated with the rank.
# Solution
Buffer the entire resulting assertion text and output it in one go.
# Relevant Information
https://github.com/precice/precice/blob/develop/src/utils/assertion.hpp
| True | Interleaved Output of Assertions - # Problem description
Hitting an assertion on all ranks results in chunk-wise interleaved output.
These chunks cannot be associated with the rank.
# Solution
Buffer the entire resulting assertion text and output it in one go.
# Relevant Information
https://github.com/precice/precice/blob/develop/src/utils/assertion.hpp
| main | interleaved output of assertions problem description hitting an assertion on all ranks results in chunk wise interleaved output these chunks cannot be associated with the rank solution buffer the entire resulting assertion text and output it in one go relevant information | 1 |
465,559 | 13,388,035,802 | IssuesEvent | 2020-09-02 16:48:13 | RobotLocomotion/drake | https://api.github.com/repos/RobotLocomotion/drake | closed | binder: version of Drake binary is not guaranteed to be the latest | component: jupyter priority: medium team: kitware | I posted in the issue linked below today, and opened up a new Binder session. I noticed that I got an old version of `drake`:
https://github.com/RobotLocomotion/drake/issues/11457#issuecomment-622985019
> Version info (from notebook):
> ```
> In [3]: !cat /opt/drake/share/doc/drake/VERSION.TXT
> Out [3]: 20200420074541 b02ff6e7b75d1a9a3ae7d4cf21fe76fa558cff74
> ```
FTR, my session URL was: (not that you'll be able to visit it?)
`https://notebooks.gesis.org/binder/jupyter/user/robotlocomotion-drake-2q29xfvv/tree/tutorials`
When posting this issue, I wanted to make sure I could do this from scratch, so I opened another new Binder session:
- Visit: https://mybinder.org/v2/gh/RobotLocomotion/drake/nightly-release?filepath=tutorials
- Get a newly provisioned session
- Execute `!cat /opt/drake/share/doc/drake/VERSION.TXT` in a notebook.
However, when I opened a new session, I got a different version:
```
In [1]: !cat /opt/drake/share/doc/drake/VERSION.TXT
Out [1]: 20200430074549 26c38c92d871567f333eb8ece2677bcfc4ab165f
```
FTR, this new session URL was:
`https://hub.gke.mybinder.org/user/robotlocomotion-drake-zp1zbpov/tree/tutorials`
I think you predicted something like this might happen due to our usage of a fixed branch name. My guess is it's a combination of optimization of server-side w/ JupyterHub's side, possibly coupled with user's local browser cache.
Note that the URLs come from different hosts entirely, `notebooks.gesis.org` vs. `hub.gke.mybinder.org`.
Is there a possible (easy) mitigation?
Perhaps advice to users, like Ctrl+Shift+R (fully refresh cache) when visiting the Binder provisioning page? https://mybinder.org/v2/gh/RobotLocomotion/drake/nightly-release?filepath=tutorials
EDIT: Mentioned in Drake Slack, `#python` channel:
https://drakedevelopers.slack.com/archives/C2CK4CWE7/p1588440496027800 | 1.0 | binder: version of Drake binary is not guaranteed to be the latest - I posted in the issue linked below today, and opened up a new Binder session. I noticed that I got an old version of `drake`:
https://github.com/RobotLocomotion/drake/issues/11457#issuecomment-622985019
> Version info (from notebook):
> ```
> In [3]: !cat /opt/drake/share/doc/drake/VERSION.TXT
> Out [3]: 20200420074541 b02ff6e7b75d1a9a3ae7d4cf21fe76fa558cff74
> ```
FTR, my session URL was: (not that you'll be able to visit it?)
`https://notebooks.gesis.org/binder/jupyter/user/robotlocomotion-drake-2q29xfvv/tree/tutorials`
When posting this issue, I wanted to make sure I could do this from scratch, so I opened another new Binder session:
- Visit: https://mybinder.org/v2/gh/RobotLocomotion/drake/nightly-release?filepath=tutorials
- Get a newly provisioned session
- Execute `!cat /opt/drake/share/doc/drake/VERSION.TXT` in a notebook.
However, when I opened a new session, I got a different version:
```
In [1]: !cat /opt/drake/share/doc/drake/VERSION.TXT
Out [1]: 20200430074549 26c38c92d871567f333eb8ece2677bcfc4ab165f
```
FTR, this new session URL was:
`https://hub.gke.mybinder.org/user/robotlocomotion-drake-zp1zbpov/tree/tutorials`
I think you predicted something like this might happen due to our usage of a fixed branch name. My guess is it's a combination of optimization of server-side w/ JupyterHub's side, possibly coupled with user's local browser cache.
Note that the URLs come from different hosts entirely, `notebooks.gesis.org` vs. `hub.gke.mybinder.org`.
Is there a possible (easy) mitigation?
Perhaps advice to users, like Ctrl+Shift+R (fully refresh cache) when visiting the Binder provisioning page? https://mybinder.org/v2/gh/RobotLocomotion/drake/nightly-release?filepath=tutorials
EDIT: Mentioned in Drake Slack, `#python` channel:
https://drakedevelopers.slack.com/archives/C2CK4CWE7/p1588440496027800 | non_main | binder version of drake binary is not guaranteed to be the latest i posted in the issue linked below today and opened up a new binder session i noticed that i got an old version of drake version info from notebook in cat opt drake share doc drake version txt out ftr my session url was not that you ll be able to visit it when posting this issue i wanted to make sure i could do this from scratch so i opened another new binder session visit get a newly provisioned session execute cat opt drake share doc drake version txt in a notebook however when i opened a new session i got a different version in cat opt drake share doc drake version txt out ftr this new session url was i think you predicted something like this might happen due to our usage of a fixed branch name my guess is it s a combination of optimization of server side w jupyterhub s side possibly coupled with user s local browser cache note that the urls come from different hosts entirely notebooks gesis org vs hub gke mybinder org is there a possible easy mitigation perhaps advice to users like ctrl shift r fully refresh cache when visiting the binder provisioning page edit mentioned in drake slack python channel | 0 |
1,139 | 4,998,879,085 | IssuesEvent | 2016-12-09 21:20:09 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | ec2: group_id doesn't seem to accept a list of security groups as documented | affects_2.1 aws bug_report cloud waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
ec2
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.1.1.0
config file =
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
##### SUMMARY
Ansible doesn't accept a list of security groups when launching an instance, while it's [documented](https://docs.ansible.com/ansible/ec2_module.html) that it does: "security group id (or list of ids) to use with the instance"
##### STEPS TO REPRODUCE
Execute the following task:
<!--- Paste example playbooks or commands between quotes below -->
```
- name: "Launch proxy instance: {{ owner }}_i_{{ env }}_dmz_2"
ec2:
region: "{{ region }}"
image: "{{ ami_id }}"
count_tag:
Name: "{{ owner }}_i_{{ env }}_dmz_2"
exact_count: 1
#wait: yes
instance_type: "t2.micro"
key_name: "{{ ssh_key_name}}"
# TODO
group_id:
- "{{ preprod_sg_ssh.group_id }}"
- "{{ preprod_sg_proxy.group_id }}"
vpc_subnet_id: "{{ preprod_subnet_dmz_2 }}"
zone: "{{ az2 }}"
instance_tags:
Name: "{{ owner }}_i_{{ env }}_dmz_2"
Env: "{{ owner }}_{{ env }}"
Tier: "{{ owner }}_{{ env }}_dmz"
register: preprod_i_dmz_2 # preprod_i_dmz_2.tagged_instances[0].id
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
Expected that the two SGs specified would be assigned to the instance.
##### ACTUAL RESULTS
None of the two SGs were assigned to the instance. The instance had the default SG assigned.
| True | ec2: group_id doesn't seem to accept a list of security groups as documented - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
ec2
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.1.1.0
config file =
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
##### SUMMARY
Ansible doesn't accept a list of security groups when launching an instance, while it's [documented](https://docs.ansible.com/ansible/ec2_module.html) that it does: "security group id (or list of ids) to use with the instance"
##### STEPS TO REPRODUCE
Execute the following task:
<!--- Paste example playbooks or commands between quotes below -->
```
- name: "Launch proxy instance: {{ owner }}_i_{{ env }}_dmz_2"
ec2:
region: "{{ region }}"
image: "{{ ami_id }}"
count_tag:
Name: "{{ owner }}_i_{{ env }}_dmz_2"
exact_count: 1
#wait: yes
instance_type: "t2.micro"
key_name: "{{ ssh_key_name}}"
# TODO
group_id:
- "{{ preprod_sg_ssh.group_id }}"
- "{{ preprod_sg_proxy.group_id }}"
vpc_subnet_id: "{{ preprod_subnet_dmz_2 }}"
zone: "{{ az2 }}"
instance_tags:
Name: "{{ owner }}_i_{{ env }}_dmz_2"
Env: "{{ owner }}_{{ env }}"
Tier: "{{ owner }}_{{ env }}_dmz"
register: preprod_i_dmz_2 # preprod_i_dmz_2.tagged_instances[0].id
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
Expected that the two SGs specified would be assigned to the instance.
##### ACTUAL RESULTS
None of the two SGs were assigned to the instance. The instance had the default SG assigned.
| main | group id doesn t seem to accept a list of security groups as documented issue type bug report component name ansible version ansible config file configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific summary ansible doesn t accept a list of security groups when launching an instance while it s that it does security group id or list of ids to use with the instance steps to reproduce execute the following task name launch proxy instance owner i env dmz region region image ami id count tag name owner i env dmz exact count wait yes instance type micro key name ssh key name todo group id preprod sg ssh group id preprod sg proxy group id vpc subnet id preprod subnet dmz zone instance tags name owner i env dmz env owner env tier owner env dmz register preprod i dmz preprod i dmz tagged instances id expected results expected that the two sgs specified would be assigned to the instance actual results none of the two sgs were assigned to the instance the instance had the default sg assigned | 1 |
9,964 | 3,985,093,696 | IssuesEvent | 2016-05-07 17:02:28 | joomla/joomla-cms | https://api.github.com/repos/joomla/joomla-cms | closed | Guest access vanishes guest articles from the articlemanager for everyone except superusers | No Code Attached Yet | #### Steps to reproduce the issue
Inside Admin select Usermanager - Options, set Guest User Group to Guest, save&close.
Select Content/Article Manager, select Categories, and create category test with access level Guest.
Select Articles and create a new article called test, with access level guest. Click on save: now you see it.
Click on save&close: now you don’t see it anymore, and forever it wil be absent from the article manager lists.
So: you made an article, and after save&close you will no more see it in the articlemanger.
On the frontend meanwhile: you will see the article when it is published. However: it will disappear when you login to try and edit it!
#### Expected result
Inside the article manager you can see all articles that you have edit and delete rights for.
So: you made an article and would like to edit or delete it: these rights you have.
#### Actual result
You have edited and delete access to an article but because the article disappears from view after logging in. which you need to do to gain access to the backend of the site, you can no more edit it or delete it.
The current guest setting is to radical: it is intended on the frontend of the site but it also governs inside the admin of the website.
There is a difference between the intended behavior of an article with guest access on the frontend versus the same article on the backend.
The guest account setting inside the Options inside the Usermanager are intended for the frontend only!.They however also rule the behavior inside the admin of the website. There this behavior is not intended.
Solution: to differentiate between those two 'states'.
#### System information (as much as possible)
Standard joomla site, 3.3.6.
#### Additional comments
This seems to me a bug. On the other hand it is completely logical.
When the backend will be more integrated inside the frontend, there are perhaps more situations where the backend-logic and the frontend logic clash.
For now the only solution is to use the superuser access: there is an overruling mechanism that is triggered by this access-level. This solution is not preferred in situations where editors are not to be given ’absolute control’ of the entire website.
| 1.0 | Guest access vanishes guest articles from the articlemanager for everyone except superusers - #### Steps to reproduce the issue
Inside Admin select Usermanager - Options, set Guest User Group to Guest, save&close.
Select Content/Article Manager, select Categories, and create category test with access level Guest.
Select Articles and create a new article called test, with access level guest. Click on save: now you see it.
Click on save&close: now you don’t see it anymore, and forever it wil be absent from the article manager lists.
So: you made an article, and after save&close you will no more see it in the articlemanger.
On the frontend meanwhile: you will see the article when it is published. However: it will disappear when you login to try and edit it!
#### Expected result
Inside the article manager you can see all articles that you have edit and delete rights for.
So: you made an article and would like to edit or delete it: these rights you have.
#### Actual result
You have edited and delete access to an article but because the article disappears from view after logging in. which you need to do to gain access to the backend of the site, you can no more edit it or delete it.
The current guest setting is to radical: it is intended on the frontend of the site but it also governs inside the admin of the website.
There is a difference between the intended behavior of an article with guest access on the frontend versus the same article on the backend.
The guest account setting inside the Options inside the Usermanager are intended for the frontend only!.They however also rule the behavior inside the admin of the website. There this behavior is not intended.
Solution: to differentiate between those two 'states'.
#### System information (as much as possible)
Standard joomla site, 3.3.6.
#### Additional comments
This seems to me a bug. On the other hand it is completely logical.
When the backend will be more integrated inside the frontend, there are perhaps more situations where the backend-logic and the frontend logic clash.
For now the only solution is to use the superuser access: there is an overruling mechanism that is triggered by this access-level. This solution is not preferred in situations where editors are not to be given ’absolute control’ of the entire website.
| non_main | guest access vanishes guest articles from the articlemanager for everyone except superusers steps to reproduce the issue inside admin select usermanager options set guest user group to guest save close select content article manager select categories and create category test with access level guest select articles and create a new article called test with access level guest click on save now you see it click on save close now you don’t see it anymore and forever it wil be absent from the article manager lists so you made an article and after save close you will no more see it in the articlemanger on the frontend meanwhile you will see the article when it is published however it will disappear when you login to try and edit it expected result inside the article manager you can see all articles that you have edit and delete rights for so you made an article and would like to edit or delete it these rights you have actual result you have edited and delete access to an article but because the article disappears from view after logging in which you need to do to gain access to the backend of the site you can no more edit it or delete it the current guest setting is to radical it is intended on the frontend of the site but it also governs inside the admin of the website there is a difference between the intended behavior of an article with guest access on the frontend versus the same article on the backend the guest account setting inside the options inside the usermanager are intended for the frontend only they however also rule the behavior inside the admin of the website there this behavior is not intended solution to differentiate between those two states system information as much as possible standard joomla site additional comments this seems to me a bug on the other hand it is completely logical when the backend will be more integrated inside the frontend there are perhaps more situations where the backend logic and the frontend logic clash for now the only solution is to use the superuser access there is an overruling mechanism that is triggered by this access level this solution is not preferred in situations where editors are not to be given ’absolute control’ of the entire website | 0 |
116,924 | 9,888,717,456 | IssuesEvent | 2019-06-25 12:15:51 | kubernetes/kubernetes | https://api.github.com/repos/kubernetes/kubernetes | closed | GCE-5000: apiserver - increased cpu usage | kind/failing-test sig/scalability | <!-- Please only use this template for submitting reports about failing tests in Kubernetes CI jobs -->
**Which jobs are failing**:
gce-scale-performance
**Which test(s) are failing**:
density test
**Since when has it been failing**:
Run 1117699728771911683. 2019-04-15 10:03 CEST
**Testgrid link**:
https://testgrid.k8s.io/sig-scalability-gce#gce-scale-performance
**Reason for failure**:
Increased cpu usage of apiserver
**Anything else we need to know**:
The test is not failing, however the cpu usage increased from 10 cores to 25 cores (90th percentile).
| 1.0 | GCE-5000: apiserver - increased cpu usage - <!-- Please only use this template for submitting reports about failing tests in Kubernetes CI jobs -->
**Which jobs are failing**:
gce-scale-performance
**Which test(s) are failing**:
density test
**Since when has it been failing**:
Run 1117699728771911683. 2019-04-15 10:03 CEST
**Testgrid link**:
https://testgrid.k8s.io/sig-scalability-gce#gce-scale-performance
**Reason for failure**:
Increased cpu usage of apiserver
**Anything else we need to know**:
The test is not failing, however the cpu usage increased from 10 cores to 25 cores (90th percentile).
| non_main | gce apiserver increased cpu usage which jobs are failing gce scale performance which test s are failing density test since when has it been failing run cest testgrid link reason for failure increased cpu usage of apiserver anything else we need to know the test is not failing however the cpu usage increased from cores to cores percentile | 0 |
12,759 | 15,115,739,616 | IssuesEvent | 2021-02-09 05:14:47 | qgis/QGIS | https://api.github.com/repos/qgis/QGIS | closed | New GTiff create option not offered for QGIS raster processing | Bug Processing | OSGeo4W 3.17.0 version '9fe1c17719' and also 3.16.3
Defined a new GTiff creation option set (via Options / GDAL). It is valid and works fine for e.g. exporting a raster layer from the layer tree.

However, selecting this creation option set in the Advanced Options of a QGIS raster processing algorithm (e.g. Clip Raster by Mask Layer, but also tried Warp) does not populate the parameter list, which stays empty.

Selecting one of the "out of the box" options sets, like JPEG or High Compression, does populate the list and execute properly.
I've tested and the options I'm specifying are consistent with the datatype of the raster being processed, and the options can be manually entered via the + button.
| 1.0 | New GTiff create option not offered for QGIS raster processing - OSGeo4W 3.17.0 version '9fe1c17719' and also 3.16.3
Defined a new GTiff creation option set (via Options / GDAL). It is valid and works fine for e.g. exporting a raster layer from the layer tree.

However, selecting this creation option set in the Advanced Options of a QGIS raster processing algorithm (e.g. Clip Raster by Mask Layer, but also tried Warp) does not populate the parameter list, which stays empty.

Selecting one of the "out of the box" options sets, like JPEG or High Compression, does populate the list and execute properly.
I've tested and the options I'm specifying are consistent with the datatype of the raster being processed, and the options can be manually entered via the + button.
| non_main | new gtiff create option not offered for qgis raster processing version and also defined a new gtiff creation option set via options gdal it is valid and works fine for e g exporting a raster layer from the layer tree however selecting this creation option set in the advanced options of a qgis raster processing algorithm e g clip raster by mask layer but also tried warp does not populate the parameter list which stays empty selecting one of the out of the box options sets like jpeg or high compression does populate the list and execute properly i ve tested and the options i m specifying are consistent with the datatype of the raster being processed and the options can be manually entered via the button | 0 |
929 | 4,642,761,981 | IssuesEvent | 2016-09-30 10:53:14 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | win_feature always returns "Failed to add feature" on Windows Server 2016 | affects_2.1 bug_report waiting_on_maintainer windows | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
* `win_feature`
##### ANSIBLE VERSION
```
ansible 2.1.1.0
config file = SNIPPED/ansible/ansible.cfg
configured module search path = ['library']
```
##### CONFIGURATION
* n/a
##### OS / ENVIRONMENT
* Host: Mac OS X El Capitan 10.11.6
* Target: Windows Server 2016 Datacenter (RTM Build 14393.rs1_release.160915-0644)
##### SUMMARY
`win_feature` module is unable to add features to Windows Server 2016 targets (including trivial use cases such as `Telnet-Client`)
##### STEPS TO REPRODUCE
```
ansible -m win_feature -a 'name=AD-Domain-Services' examplehost
```
(in this case, `examplehost` is a VM running on Virtualbox 5.0.26r108824)
##### EXPECTED RESULTS
```
127.0.0.1 | SUCCESS => {
"changed": true,
"exitcode": "0",
"failed": false,
"feature_result": [],
"invocation": {
"module_name": "win_feature"
},
"msg": "Happy Happy Joy Joy",
"restart_needed": false,
"success": true
}
```
##### ACTUAL RESULTS
```
Using SNIPPED/ansible/ansible.cfg as config file
Loaded callback minimal of type stdout, v2.0
<127.0.0.1> ESTABLISH WINRM CONNECTION FOR USER: vagrant on PORT 55986 TO 127.0.0.1
<127.0.0.1> EXEC Set-StrictMode -Version Latest
(New-Item -Type Directory -Path $env:temp -Name "ansible-tmp-1475181976.06-278942010948362").FullName | Write-Host -Separator '';
<127.0.0.1> PUT "/var/folders/15/b3hfwryj5570qt_r8b9q2jpw0000gn/T/tmpAgxz8u" TO "C:\Users\vagrant\AppData\Local\Temp\ansible-tmp-1475181976.06-278942010948362\win_feature.ps1"
<127.0.0.1> EXEC Set-StrictMode -Version Latest
Try
{
& 'C:\Users\vagrant\AppData\Local\Temp\ansible-tmp-1475181976.06-278942010948362\win_feature.ps1'
}
Catch
{
$_obj = @{ failed = $true }
If ($_.Exception.GetType)
{
$_obj.Add('msg', $_.Exception.Message)
}
Else
{
$_obj.Add('msg', $_.ToString())
}
If ($_.InvocationInfo.PositionMessage)
{
$_obj.Add('exception', $_.InvocationInfo.PositionMessage)
}
ElseIf ($_.ScriptStackTrace)
{
$_obj.Add('exception', $_.ScriptStackTrace)
}
Try
{
$_obj.Add('error_record', ($_ | ConvertTo-Json | ConvertFrom-Json))
}
Catch
{
}
Echo $_obj | ConvertTo-Json -Compress -Depth 99
Exit 1
}
Finally { Remove-Item "C:\Users\vagrant\AppData\Local\Temp\ansible-tmp-1475181976.06-278942010948362" -Force -Recurse -ErrorAction SilentlyContinue }
127.0.0.1 | FAILED! => {
"changed": false,
"exitcode": "Failed",
"failed": true,
"feature_result": [],
"invocation": {
"module_name": "win_feature"
},
"msg": "Failed to add feature",
"restart_needed": false,
"success": false
}
``` | True | win_feature always returns "Failed to add feature" on Windows Server 2016 - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
* `win_feature`
##### ANSIBLE VERSION
```
ansible 2.1.1.0
config file = SNIPPED/ansible/ansible.cfg
configured module search path = ['library']
```
##### CONFIGURATION
* n/a
##### OS / ENVIRONMENT
* Host: Mac OS X El Capitan 10.11.6
* Target: Windows Server 2016 Datacenter (RTM Build 14393.rs1_release.160915-0644)
##### SUMMARY
`win_feature` module is unable to add features to Windows Server 2016 targets (including trivial use cases such as `Telnet-Client`)
##### STEPS TO REPRODUCE
```
ansible -m win_feature -a 'name=AD-Domain-Services' examplehost
```
(in this case, `examplehost` is a VM running on Virtualbox 5.0.26r108824)
##### EXPECTED RESULTS
```
127.0.0.1 | SUCCESS => {
"changed": true,
"exitcode": "0",
"failed": false,
"feature_result": [],
"invocation": {
"module_name": "win_feature"
},
"msg": "Happy Happy Joy Joy",
"restart_needed": false,
"success": true
}
```
##### ACTUAL RESULTS
```
Using SNIPPED/ansible/ansible.cfg as config file
Loaded callback minimal of type stdout, v2.0
<127.0.0.1> ESTABLISH WINRM CONNECTION FOR USER: vagrant on PORT 55986 TO 127.0.0.1
<127.0.0.1> EXEC Set-StrictMode -Version Latest
(New-Item -Type Directory -Path $env:temp -Name "ansible-tmp-1475181976.06-278942010948362").FullName | Write-Host -Separator '';
<127.0.0.1> PUT "/var/folders/15/b3hfwryj5570qt_r8b9q2jpw0000gn/T/tmpAgxz8u" TO "C:\Users\vagrant\AppData\Local\Temp\ansible-tmp-1475181976.06-278942010948362\win_feature.ps1"
<127.0.0.1> EXEC Set-StrictMode -Version Latest
Try
{
& 'C:\Users\vagrant\AppData\Local\Temp\ansible-tmp-1475181976.06-278942010948362\win_feature.ps1'
}
Catch
{
$_obj = @{ failed = $true }
If ($_.Exception.GetType)
{
$_obj.Add('msg', $_.Exception.Message)
}
Else
{
$_obj.Add('msg', $_.ToString())
}
If ($_.InvocationInfo.PositionMessage)
{
$_obj.Add('exception', $_.InvocationInfo.PositionMessage)
}
ElseIf ($_.ScriptStackTrace)
{
$_obj.Add('exception', $_.ScriptStackTrace)
}
Try
{
$_obj.Add('error_record', ($_ | ConvertTo-Json | ConvertFrom-Json))
}
Catch
{
}
Echo $_obj | ConvertTo-Json -Compress -Depth 99
Exit 1
}
Finally { Remove-Item "C:\Users\vagrant\AppData\Local\Temp\ansible-tmp-1475181976.06-278942010948362" -Force -Recurse -ErrorAction SilentlyContinue }
127.0.0.1 | FAILED! => {
"changed": false,
"exitcode": "Failed",
"failed": true,
"feature_result": [],
"invocation": {
"module_name": "win_feature"
},
"msg": "Failed to add feature",
"restart_needed": false,
"success": false
}
``` | main | win feature always returns failed to add feature on windows server issue type bug report component name win feature ansible version ansible config file snipped ansible ansible cfg configured module search path configuration n a os environment host mac os x el capitan target windows server datacenter rtm build release summary win feature module is unable to add features to windows server targets including trivial use cases such as telnet client steps to reproduce ansible m win feature a name ad domain services examplehost in this case examplehost is a vm running on virtualbox expected results success changed true exitcode failed false feature result invocation module name win feature msg happy happy joy joy restart needed false success true actual results using snipped ansible ansible cfg as config file loaded callback minimal of type stdout establish winrm connection for user vagrant on port to exec set strictmode version latest new item type directory path env temp name ansible tmp fullname write host separator put var folders t to c users vagrant appdata local temp ansible tmp win feature exec set strictmode version latest try c users vagrant appdata local temp ansible tmp win feature catch obj failed true if exception gettype obj add msg exception message else obj add msg tostring if invocationinfo positionmessage obj add exception invocationinfo positionmessage elseif scriptstacktrace obj add exception scriptstacktrace try obj add error record convertto json convertfrom json catch echo obj convertto json compress depth exit finally remove item c users vagrant appdata local temp ansible tmp force recurse erroraction silentlycontinue failed changed false exitcode failed failed true feature result invocation module name win feature msg failed to add feature restart needed false success false | 1 |
137,557 | 30,713,289,139 | IssuesEvent | 2023-07-27 11:19:47 | SuperTux/supertux | https://api.github.com/repos/SuperTux/supertux | closed | Leafshot is hardcoded in the Kamikaze Snowball code!? | involves:functionality category:code status:needs-work | I just learned that Leafshot's code is within `kamikazesnowball.cpp` which makes it currently impossible to change their sprite. They should be either their own separate object as they also have a different speed value and are also freeze-able. | 1.0 | Leafshot is hardcoded in the Kamikaze Snowball code!? - I just learned that Leafshot's code is within `kamikazesnowball.cpp` which makes it currently impossible to change their sprite. They should be either their own separate object as they also have a different speed value and are also freeze-able. | non_main | leafshot is hardcoded in the kamikaze snowball code i just learned that leafshot s code is within kamikazesnowball cpp which makes it currently impossible to change their sprite they should be either their own separate object as they also have a different speed value and are also freeze able | 0 |
359,030 | 25,214,090,216 | IssuesEvent | 2022-11-14 07:40:47 | scylladb/scylla-monitoring | https://api.github.com/repos/scylladb/scylla-monitoring | opened | docs: sizing calculator | documentation | It would be usefull to represent the sizing info incded here:
https://monitoring.docs.scylladb.com/stable/install/monitoring_stack.html#calculating-prometheus-minimal-disk-space-requirement
In a small sizing calculator with the inputs:
- number of Scylla cores
- data retention in days
For reference, here is a similar calculator in docs
https://docs.scylladb.com/stable/cql/consistency-calculator.html | 1.0 | docs: sizing calculator - It would be usefull to represent the sizing info incded here:
https://monitoring.docs.scylladb.com/stable/install/monitoring_stack.html#calculating-prometheus-minimal-disk-space-requirement
In a small sizing calculator with the inputs:
- number of Scylla cores
- data retention in days
For reference, here is a similar calculator in docs
https://docs.scylladb.com/stable/cql/consistency-calculator.html | non_main | docs sizing calculator it would be usefull to represent the sizing info incded here in a small sizing calculator with the inputs number of scylla cores data retention in days for reference here is a similar calculator in docs | 0 |
65,377 | 7,875,008,573 | IssuesEvent | 2018-06-25 18:56:27 | Opentrons/opentrons | https://api.github.com/repos/Opentrons/opentrons | opened | PD: stop LassPass from trying to auto-fill the form | bug protocol designer | ## current behavior
When filling out a form in PD, when LastPass is installed in Chrome, LastPass will open up a buncha tabs as the user clicks into the form!
## expected behavior
Hopefully setting `autocomplete="off"` on our `<form>` elements will stop this, as long as the user as "Allow pages to disable autofill" checked
Via https://stackoverflow.com/a/28216951/556651 | 1.0 | PD: stop LassPass from trying to auto-fill the form - ## current behavior
When filling out a form in PD, when LastPass is installed in Chrome, LastPass will open up a buncha tabs as the user clicks into the form!
## expected behavior
Hopefully setting `autocomplete="off"` on our `<form>` elements will stop this, as long as the user as "Allow pages to disable autofill" checked
Via https://stackoverflow.com/a/28216951/556651 | non_main | pd stop lasspass from trying to auto fill the form current behavior when filling out a form in pd when lastpass is installed in chrome lastpass will open up a buncha tabs as the user clicks into the form expected behavior hopefully setting autocomplete off on our elements will stop this as long as the user as allow pages to disable autofill checked via | 0 |
67,793 | 28,048,486,038 | IssuesEvent | 2023-03-29 02:21:00 | MicrosoftDocs/live-share | https://api.github.com/repos/MicrosoftDocs/live-share | closed | [VS][C++] Error list not working | bug client: vs area: language services needs-repro | Error list on the guest in a Live Share session is not synced with host's error list. | 1.0 | [VS][C++] Error list not working - Error list on the guest in a Live Share session is not synced with host's error list. | non_main | error list not working error list on the guest in a live share session is not synced with host s error list | 0 |
3,719 | 15,375,755,236 | IssuesEvent | 2021-03-02 15:15:48 | zaproxy/zaproxy | https://api.github.com/repos/zaproxy/zaproxy | closed | Create AbstractHostFileScanRule | Maintainability add-on | Similar to `addOns/commonlib/src/main/java/org/zaproxy/addon/commonlib/AbstractAppFilePlugin.java` for use in HiddenFileScanRule, the ELMAH scan rule, and whatever future use cases. To ensure consistency and ease of maintenance. | True | Create AbstractHostFileScanRule - Similar to `addOns/commonlib/src/main/java/org/zaproxy/addon/commonlib/AbstractAppFilePlugin.java` for use in HiddenFileScanRule, the ELMAH scan rule, and whatever future use cases. To ensure consistency and ease of maintenance. | main | create abstracthostfilescanrule similar to addons commonlib src main java org zaproxy addon commonlib abstractappfileplugin java for use in hiddenfilescanrule the elmah scan rule and whatever future use cases to ensure consistency and ease of maintenance | 1 |
2,914 | 10,391,741,219 | IssuesEvent | 2019-09-11 08:12:08 | precice/precice | https://api.github.com/repos/precice/precice | opened | Generalize Mesh adding and filtering | maintainability | We currently have slight variations of the same code that handles adding one mesh to another.
1. `void Mesh::addMesh(Mesh const& diff);` adds the `diff` to the current mesh.
2. `void ReceivedPartition::filterMesh(mesh::Mesh &filteredMesh, const bool filterByBB)`
This adds the internal mesh to `filteredMesh` and filters vertices based on a predicate: tagged vertices, or vertices inside a bounding-box.
Both functions can be generalized to:
```cpp
// Generalized version filtering vertices based on a given unary predicate.
template<typename UnaryPredicate>
void Mesh::addMesh(Mesh const& other, UnaryPredicate p);
// Version that simply adds the Mesh
void Mesh::addMesh(Mesh const& other) {
addMesh(other, [](mesh::Vertex const &) { return true; });
}
// The new possible implementation.
void ReceivedPartition::filterMesh(mesh::Mesh &filteredMesh, const bool filterByBB) {
if (filterByBB) {
filteredMesh.addMesh(_mesh,
[this](mesh::Vertex const & v){ return this->isVertexinBB(v);});
} else {
filteredMesh.addMesh(_mesh,
[](mesh::Vertex const & v){ return v.isTagged();});
}
}
```
This makes the code DRY, easier to maintain and allows us to optimize a single function. | True | Generalize Mesh adding and filtering - We currently have slight variations of the same code that handles adding one mesh to another.
1. `void Mesh::addMesh(Mesh const& diff);` adds the `diff` to the current mesh.
2. `void ReceivedPartition::filterMesh(mesh::Mesh &filteredMesh, const bool filterByBB)`
This adds the internal mesh to `filteredMesh` and filters vertices based on a predicate: tagged vertices, or vertices inside a bounding-box.
Both functions can be generalized to:
```cpp
// Generalized version filtering vertices based on a given unary predicate.
template<typename UnaryPredicate>
void Mesh::addMesh(Mesh const& other, UnaryPredicate p);
// Version that simply adds the Mesh
void Mesh::addMesh(Mesh const& other) {
addMesh(other, [](mesh::Vertex const &) { return true; });
}
// The new possible implementation.
void ReceivedPartition::filterMesh(mesh::Mesh &filteredMesh, const bool filterByBB) {
if (filterByBB) {
filteredMesh.addMesh(_mesh,
[this](mesh::Vertex const & v){ return this->isVertexinBB(v);});
} else {
filteredMesh.addMesh(_mesh,
[](mesh::Vertex const & v){ return v.isTagged();});
}
}
```
This makes the code DRY, easier to maintain and allows us to optimize a single function. | main | generalize mesh adding and filtering we currently have slight variations of the same code that handles adding one mesh to another void mesh addmesh mesh const diff adds the diff to the current mesh void receivedpartition filtermesh mesh mesh filteredmesh const bool filterbybb this adds the internal mesh to filteredmesh and filters vertices based on a predicate tagged vertices or vertices inside a bounding box both functions can be generalized to cpp generalized version filtering vertices based on a given unary predicate template void mesh addmesh mesh const other unarypredicate p version that simply adds the mesh void mesh addmesh mesh const other addmesh other mesh vertex const return true the new possible implementation void receivedpartition filtermesh mesh mesh filteredmesh const bool filterbybb if filterbybb filteredmesh addmesh mesh mesh vertex const v return this isvertexinbb v else filteredmesh addmesh mesh mesh vertex const v return v istagged this makes the code dry easier to maintain and allows us to optimize a single function | 1 |
321,907 | 9,810,372,862 | IssuesEvent | 2019-06-12 20:19:20 | 2019-a-gr2-moviles/Museum-app-Cerezo-Murillo | https://api.github.com/repos/2019-a-gr2-moviles/Museum-app-Cerezo-Murillo | opened | Modificación de información personal de los usuarios | medium priority user story | # Descripción
Yo como usuario quiero ver y modificar mi información personal para mantenerla actualizada.
# Criterios de aceptación
- El usuario podrá ver sus datos personales: Nombre, Apellido y Correo electrónico.
- El usuario podrá modificar su correo electrónico y su contraseña. | 1.0 | Modificación de información personal de los usuarios - # Descripción
Yo como usuario quiero ver y modificar mi información personal para mantenerla actualizada.
# Criterios de aceptación
- El usuario podrá ver sus datos personales: Nombre, Apellido y Correo electrónico.
- El usuario podrá modificar su correo electrónico y su contraseña. | non_main | modificación de información personal de los usuarios descripción yo como usuario quiero ver y modificar mi información personal para mantenerla actualizada criterios de aceptación el usuario podrá ver sus datos personales nombre apellido y correo electrónico el usuario podrá modificar su correo electrónico y su contraseña | 0 |
5,316 | 26,831,755,815 | IssuesEvent | 2023-02-02 16:28:03 | mozilla/foundation.mozilla.org | https://api.github.com/repos/mozilla/foundation.mozilla.org | opened | Use consistent string formatting in JS | engineering maintain needs grooming | ## Description
Based on https://github.com/mozilla/foundation.mozilla.org/pull/10053#discussion_r1093784778
The code base uses a mix of backticks <code>`</code> and quotes `"` for string formatting in JS.
It would be nice to find a consistent standard and have the linting piont out when we are inconsistent.
Which one we go with should be decided by the team.
## Acceptance criteria
- [ ] The code base uses consistent string formatting.
- [ ] Running `inv lint-js` shows errors when we use inconsistent formatting. | True | Use consistent string formatting in JS - ## Description
Based on https://github.com/mozilla/foundation.mozilla.org/pull/10053#discussion_r1093784778
The code base uses a mix of backticks <code>`</code> and quotes `"` for string formatting in JS.
It would be nice to find a consistent standard and have the linting piont out when we are inconsistent.
Which one we go with should be decided by the team.
## Acceptance criteria
- [ ] The code base uses consistent string formatting.
- [ ] Running `inv lint-js` shows errors when we use inconsistent formatting. | main | use consistent string formatting in js description based on the code base uses a mix of backticks and quotes for string formatting in js it would be nice to find a consistent standard and have the linting piont out when we are inconsistent which one we go with should be decided by the team acceptance criteria the code base uses consistent string formatting running inv lint js shows errors when we use inconsistent formatting | 1 |
3,061 | 11,459,653,698 | IssuesEvent | 2020-02-07 07:53:14 | alacritty/alacritty | https://api.github.com/repos/alacritty/alacritty | closed | Performance of `cat` in GNU Screen worse than xterm’s on X11? | C - waiting on maintainer | > Which operating system does the issue occur on?
NixOS (Linux) 17.03.1844.83706dd49f (Gorilla)
> If on linux, are you using X11 or Wayland?
X11
---
If I set up _really_ fast keyboard repetition, e.g. `"-ardelay" "150" "-arinterval" "8"` in xserver’s args, and hold a letter in both xterm, and Alacritty, the latter visibly stutters, while xterm is very smooth.
Here’s my config, nothing fancy: https://github.com/michalrus/dotfiles/blob/237d1a7e94e22f6ea5c3e3c28d50452a4342e70e/dotfiles/michalrus/base/.config/alacritty/alacritty.yml | True | Performance of `cat` in GNU Screen worse than xterm’s on X11? - > Which operating system does the issue occur on?
NixOS (Linux) 17.03.1844.83706dd49f (Gorilla)
> If on linux, are you using X11 or Wayland?
X11
---
If I set up _really_ fast keyboard repetition, e.g. `"-ardelay" "150" "-arinterval" "8"` in xserver’s args, and hold a letter in both xterm, and Alacritty, the latter visibly stutters, while xterm is very smooth.
Here’s my config, nothing fancy: https://github.com/michalrus/dotfiles/blob/237d1a7e94e22f6ea5c3e3c28d50452a4342e70e/dotfiles/michalrus/base/.config/alacritty/alacritty.yml | main | performance of cat in gnu screen worse than xterm’s on which operating system does the issue occur on nixos linux gorilla if on linux are you using or wayland if i set up really fast keyboard repetition e g ardelay arinterval in xserver’s args and hold a letter in both xterm and alacritty the latter visibly stutters while xterm is very smooth here’s my config nothing fancy | 1 |
2,193 | 7,745,171,211 | IssuesEvent | 2018-05-29 17:28:04 | react-navigation/react-navigation | https://api.github.com/repos/react-navigation/react-navigation | closed | getScreenDetails do not works no more after migrating to v2 | needs response from maintainer | In v1 i was using this code
```
const AppStack = StackNavigator(
{
Home: HomeScreen,
Example: ExampleContentScreen
},
{
navigationOptions: {
header: (navigationOptions) => {
console.log (navigationOptions);
const { scene, getScreenDetails } = navigationOptions;
const screenDetails = getScreenDetails(scene);
const { options } = screenDetails;
return (
<CustomHeader
onLeftIconPress={options.onLeftIconPress}
leftIconName={options.leftIconName}
title={options.headerTitle}
/>
);
}
}
}
);
```
I changed `StackNavigator` into `createStackNavigator`.
Running this code i got an error about `getScreenDetails()` that doesn't exists.
what the migration path for this function? | True | getScreenDetails do not works no more after migrating to v2 - In v1 i was using this code
```
const AppStack = StackNavigator(
{
Home: HomeScreen,
Example: ExampleContentScreen
},
{
navigationOptions: {
header: (navigationOptions) => {
console.log (navigationOptions);
const { scene, getScreenDetails } = navigationOptions;
const screenDetails = getScreenDetails(scene);
const { options } = screenDetails;
return (
<CustomHeader
onLeftIconPress={options.onLeftIconPress}
leftIconName={options.leftIconName}
title={options.headerTitle}
/>
);
}
}
}
);
```
I changed `StackNavigator` into `createStackNavigator`.
Running this code i got an error about `getScreenDetails()` that doesn't exists.
what the migration path for this function? | main | getscreendetails do not works no more after migrating to in i was using this code const appstack stacknavigator home homescreen example examplecontentscreen navigationoptions header navigationoptions console log navigationoptions const scene getscreendetails navigationoptions const screendetails getscreendetails scene const options screendetails return customheader onlefticonpress options onlefticonpress lefticonname options lefticonname title options headertitle i changed stacknavigator into createstacknavigator running this code i got an error about getscreendetails that doesn t exists what the migration path for this function | 1 |
10,377 | 3,385,127,426 | IssuesEvent | 2015-11-27 09:43:06 | Automattic/wp-calypso | https://api.github.com/repos/Automattic/wp-calypso | closed | Components: Add VerticalNavItem to devdocs | Components Documentation [Type] Task | Originally reported by @scruffian.
We should add this new component to https://wpcalypso.wordpress.com/devdocs/design

@folletto noticed this is quite similar to `FoldableCard`. He also pointed that, to be generic enough, this requires support to more than just a title (i.e. My Site → Settings). | 1.0 | Components: Add VerticalNavItem to devdocs - Originally reported by @scruffian.
We should add this new component to https://wpcalypso.wordpress.com/devdocs/design

@folletto noticed this is quite similar to `FoldableCard`. He also pointed that, to be generic enough, this requires support to more than just a title (i.e. My Site → Settings). | non_main | components add verticalnavitem to devdocs originally reported by scruffian we should add this new component to folletto noticed this is quite similar to foldablecard he also pointed that to be generic enough this requires support to more than just a title i e my site → settings | 0 |
1,214 | 5,194,607,873 | IssuesEvent | 2017-01-23 04:58:40 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | Git ability to clean untracked and ignored files | affects_2.3 feature_idea waiting_on_maintainer | Git module should have an option to run `git clean -f` to remove untracked files. This is useful to say build a project from a pristine repository. Currently all untracked files remain in the directory.
I think there should be two options :
- `clean_untracked` - remove files and directories
- `clean_ignored` - remove ignored flies
If this sounds good, I can send a PR
| True | Git ability to clean untracked and ignored files - Git module should have an option to run `git clean -f` to remove untracked files. This is useful to say build a project from a pristine repository. Currently all untracked files remain in the directory.
I think there should be two options :
- `clean_untracked` - remove files and directories
- `clean_ignored` - remove ignored flies
If this sounds good, I can send a PR
| main | git ability to clean untracked and ignored files git module should have an option to run git clean f to remove untracked files this is useful to say build a project from a pristine repository currently all untracked files remain in the directory i think there should be two options clean untracked remove files and directories clean ignored remove ignored flies if this sounds good i can send a pr | 1 |
93,722 | 3,908,718,139 | IssuesEvent | 2016-04-19 16:45:56 | docker/docker | https://api.github.com/repos/docker/docker | closed | Running out of inodes on /run | kind/bug priority/P1 | After updating to docker 1.11 I seem to run out of inodes on /run.
Inspecting the `/run/docker/libcontainerd` dirs I see that my containers have the following numbers of of files:
```
/var/run/docker/libcontainerd/30b451cdf9a4b92c35e4b042715f0c560b535b0c5af4bc67acf7580baa1b2264
9334
/var/run/docker/libcontainerd/35fd9cdb1f80db3bb5fd16aadfe1b30ead0582906f992e4d4865a81798a15072
256507
/var/run/docker/libcontainerd/3a34cbef697c5b7a3865b4556d97c21427b599ff56ab6833bd02013f71aa7738
35129
/var/run/docker/libcontainerd/5e7e460fcf536f2cd75032a9a92e4843812fa4b72b8b5e13b089a90f1ef28e2a
9942
/var/run/docker/libcontainerd/5ebcfb17c26815533f4abd883fb59344554077018f04ce062a67bbdfe50f5c3d
9942
/var/run/docker/libcontainerd/776106ffe1802805469ff0078e02b71e7c9dd9e8c08e8d31951712c6b8438b49
1002
/var/run/docker/libcontainerd/91f43f1dd0252f30363337a4c073962194b9cc7fb1df632a92db4f2fa15ca551
55663
/var/run/docker/libcontainerd/b417ff5d733d5f2bc37cac57bb7af72b86bc680f44e7d5656c55d1067ce66c5d
46336
/var/run/docker/libcontainerd/bd07d50b3c3bdf7e9c8ddf6894e5a079af09f7c9d0d9e27f7d62065cdf181618
9942
/var/run/docker/libcontainerd/ce6bf9750bd4183f2b0d87cf67d11c4047e0473cf396867b70798cccd818e67f
10705
/var/run/docker/libcontainerd/d59d15e94c33556ae099232caed4d824c966e754a2ae63116f3a4249ef290507
7581
/var/run/docker/libcontainerd/docker-containerd.pid
1
/var/run/docker/libcontainerd/docker-containerd.sock
1
/var/run/docker/libcontainerd/e6f8f0e6de324d13389f167c68a31c345eb740b0b8b60b3995a170940e38d01f
26
/var/run/docker/libcontainerd/eb7baaf652b9247f9efbadebbdce9b6b3d88b1515ab71462b91b995187496373
35275
/var/run/docker/libcontainerd/event.ts
1
```
The container with the most files is one I use to 'exec' commands on for health checking within an overlay network.
**Restarting the container resolves the issue**
Listing the files within that culprit container's dir in `/var/run/docker/libcontainerd` I see piles of
```
...
123e29754fba075637b49a21362c5265e5c350b5aa516c187cdc498f8d365a01-stdin
123e29754fba075637b49a21362c5265e5c350b5aa516c187cdc498f8d365a01-stdout
...
```
I also updated kernel recently as well
Output of uname -a:
```
Linux aws-qa-node-1 4.2.0-35-generic #40~14.04.1-Ubuntu SMP Fri Mar 18 16:37:35 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
```
Trying to exec a command on a container:
```
sudo docker exec dev_consulcheck_blue_1 /bin/sh
mkfifo: /var/run/docker/libcontainerd/35fd9cdb1f80db3bb5fd16aadfe1b30ead0582906f992e4d4865a81798a15072/041ced8a2353434d55e034a37d7c096e4030bbf1878176331c69d1666af3990f-stdin no space left on device
```
Trying to run a container:
```
sudo docker run --rm -it busybox /bin/sh
Unable to find image 'busybox:latest' locally
latest: Pulling from library/busybox
385e281300cc: Pull complete
a3ed95caeb02: Pull complete
Digest: sha256:4a887a2326ec9e0fa90cce7b4764b0e627b5d6afcb81a3f73c85dc29cea00048
Status: Downloaded newer image for busybox:latest
docker: Error response from daemon: mkdir /var/run/docker/libcontainerd/557517143c7a394f2bdd0f2719274d596682b4eaa180e56c176ae2770e10e815/rootfs: no space left on device
```
**Output of `docker version`:**
```
Client:
Version: 1.11.0
API version: 1.23
Go version: go1.5.4
Git commit: 4dc5990
Built: Wed Apr 13 18:34:23 2016
OS/Arch: linux/amd64
Server:
Version: 1.11.0
API version: 1.23
Go version: go1.5.4
Git commit: 4dc5990
Built: Wed Apr 13 18:34:23 2016
OS/Arch: linux/amd64
```
**Output of `docker info`:**
```
Containers: 13
Running: 13
Paused: 0
Stopped: 0
Images: 69
Server Version: 1.11.0
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 397
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local rexray
Network: host bridge null overlay
Kernel Version: 4.2.0-35-generic
Operating System: Ubuntu 14.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 1.952 GiB
Name: aws-qa-node-1
ID: A5QZ:FCVL:NXOH:5V3R:SKZ4:V3EZ:55D2:EPLE:3QBM:TQXK:EXEL:MFSI
Docker Root Dir: /var/lib/docker
Debug mode (client): false
Debug mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
Labels:
nodeindex=1
role.service=true
role.infra=true
provider=generic
ec2.instance.type.t2.small=true
Cluster store: consul://10.0.2.95:8500
Cluster advertise: 10.0.4.203:2376
```
**Additional environment details (AWS, VirtualBox, physical, etc.):**
Running on aws.
| 1.0 | Running out of inodes on /run - After updating to docker 1.11 I seem to run out of inodes on /run.
Inspecting the `/run/docker/libcontainerd` dirs I see that my containers have the following numbers of of files:
```
/var/run/docker/libcontainerd/30b451cdf9a4b92c35e4b042715f0c560b535b0c5af4bc67acf7580baa1b2264
9334
/var/run/docker/libcontainerd/35fd9cdb1f80db3bb5fd16aadfe1b30ead0582906f992e4d4865a81798a15072
256507
/var/run/docker/libcontainerd/3a34cbef697c5b7a3865b4556d97c21427b599ff56ab6833bd02013f71aa7738
35129
/var/run/docker/libcontainerd/5e7e460fcf536f2cd75032a9a92e4843812fa4b72b8b5e13b089a90f1ef28e2a
9942
/var/run/docker/libcontainerd/5ebcfb17c26815533f4abd883fb59344554077018f04ce062a67bbdfe50f5c3d
9942
/var/run/docker/libcontainerd/776106ffe1802805469ff0078e02b71e7c9dd9e8c08e8d31951712c6b8438b49
1002
/var/run/docker/libcontainerd/91f43f1dd0252f30363337a4c073962194b9cc7fb1df632a92db4f2fa15ca551
55663
/var/run/docker/libcontainerd/b417ff5d733d5f2bc37cac57bb7af72b86bc680f44e7d5656c55d1067ce66c5d
46336
/var/run/docker/libcontainerd/bd07d50b3c3bdf7e9c8ddf6894e5a079af09f7c9d0d9e27f7d62065cdf181618
9942
/var/run/docker/libcontainerd/ce6bf9750bd4183f2b0d87cf67d11c4047e0473cf396867b70798cccd818e67f
10705
/var/run/docker/libcontainerd/d59d15e94c33556ae099232caed4d824c966e754a2ae63116f3a4249ef290507
7581
/var/run/docker/libcontainerd/docker-containerd.pid
1
/var/run/docker/libcontainerd/docker-containerd.sock
1
/var/run/docker/libcontainerd/e6f8f0e6de324d13389f167c68a31c345eb740b0b8b60b3995a170940e38d01f
26
/var/run/docker/libcontainerd/eb7baaf652b9247f9efbadebbdce9b6b3d88b1515ab71462b91b995187496373
35275
/var/run/docker/libcontainerd/event.ts
1
```
The container with the most files is one I use to 'exec' commands on for health checking within an overlay network.
**Restarting the container resolves the issue**
Listing the files within that culprit container's dir in `/var/run/docker/libcontainerd` I see piles of
```
...
123e29754fba075637b49a21362c5265e5c350b5aa516c187cdc498f8d365a01-stdin
123e29754fba075637b49a21362c5265e5c350b5aa516c187cdc498f8d365a01-stdout
...
```
I also updated kernel recently as well
Output of uname -a:
```
Linux aws-qa-node-1 4.2.0-35-generic #40~14.04.1-Ubuntu SMP Fri Mar 18 16:37:35 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
```
Trying to exec a command on a container:
```
sudo docker exec dev_consulcheck_blue_1 /bin/sh
mkfifo: /var/run/docker/libcontainerd/35fd9cdb1f80db3bb5fd16aadfe1b30ead0582906f992e4d4865a81798a15072/041ced8a2353434d55e034a37d7c096e4030bbf1878176331c69d1666af3990f-stdin no space left on device
```
Trying to run a container:
```
sudo docker run --rm -it busybox /bin/sh
Unable to find image 'busybox:latest' locally
latest: Pulling from library/busybox
385e281300cc: Pull complete
a3ed95caeb02: Pull complete
Digest: sha256:4a887a2326ec9e0fa90cce7b4764b0e627b5d6afcb81a3f73c85dc29cea00048
Status: Downloaded newer image for busybox:latest
docker: Error response from daemon: mkdir /var/run/docker/libcontainerd/557517143c7a394f2bdd0f2719274d596682b4eaa180e56c176ae2770e10e815/rootfs: no space left on device
```
**Output of `docker version`:**
```
Client:
Version: 1.11.0
API version: 1.23
Go version: go1.5.4
Git commit: 4dc5990
Built: Wed Apr 13 18:34:23 2016
OS/Arch: linux/amd64
Server:
Version: 1.11.0
API version: 1.23
Go version: go1.5.4
Git commit: 4dc5990
Built: Wed Apr 13 18:34:23 2016
OS/Arch: linux/amd64
```
**Output of `docker info`:**
```
Containers: 13
Running: 13
Paused: 0
Stopped: 0
Images: 69
Server Version: 1.11.0
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 397
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local rexray
Network: host bridge null overlay
Kernel Version: 4.2.0-35-generic
Operating System: Ubuntu 14.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 1.952 GiB
Name: aws-qa-node-1
ID: A5QZ:FCVL:NXOH:5V3R:SKZ4:V3EZ:55D2:EPLE:3QBM:TQXK:EXEL:MFSI
Docker Root Dir: /var/lib/docker
Debug mode (client): false
Debug mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
Labels:
nodeindex=1
role.service=true
role.infra=true
provider=generic
ec2.instance.type.t2.small=true
Cluster store: consul://10.0.2.95:8500
Cluster advertise: 10.0.4.203:2376
```
**Additional environment details (AWS, VirtualBox, physical, etc.):**
Running on aws.
| non_main | running out of inodes on run after updating to docker i seem to run out of inodes on run inspecting the run docker libcontainerd dirs i see that my containers have the following numbers of of files var run docker libcontainerd var run docker libcontainerd var run docker libcontainerd var run docker libcontainerd var run docker libcontainerd var run docker libcontainerd var run docker libcontainerd var run docker libcontainerd var run docker libcontainerd var run docker libcontainerd var run docker libcontainerd var run docker libcontainerd docker containerd pid var run docker libcontainerd docker containerd sock var run docker libcontainerd var run docker libcontainerd var run docker libcontainerd event ts the container with the most files is one i use to exec commands on for health checking within an overlay network restarting the container resolves the issue listing the files within that culprit container s dir in var run docker libcontainerd i see piles of stdin stdout i also updated kernel recently as well output of uname a linux aws qa node generic ubuntu smp fri mar utc gnu linux trying to exec a command on a container sudo docker exec dev consulcheck blue bin sh mkfifo var run docker libcontainerd stdin no space left on device trying to run a container sudo docker run rm it busybox bin sh unable to find image busybox latest locally latest pulling from library busybox pull complete pull complete digest status downloaded newer image for busybox latest docker error response from daemon mkdir var run docker libcontainerd rootfs no space left on device output of docker version client version api version go version git commit built wed apr os arch linux server version api version go version git commit built wed apr os arch linux output of docker info containers running paused stopped images server version storage driver aufs root dir var lib docker aufs backing filesystem extfs dirs supported true logging driver json file cgroup driver cgroupfs plugins volume local rexray network host bridge null overlay kernel version generic operating system ubuntu lts ostype linux architecture cpus total memory gib name aws qa node id fcvl nxoh eple tqxk exel mfsi docker root dir var lib docker debug mode client false debug mode server false registry warning no swap limit support labels nodeindex role service true role infra true provider generic instance type small true cluster store consul cluster advertise additional environment details aws virtualbox physical etc running on aws | 0 |
1,872 | 6,577,498,977 | IssuesEvent | 2017-09-12 01:20:17 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | ec2_vpc module erroneously recreates VPCs when passing loosely defined CIDR blocks | affects_2.1 aws bug_report cloud waiting_on_maintainer | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ec2_vpc module
##### ANSIBLE VERSION
```
ansible 2.1.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
##### OS / ENVIRONMENT
Host OS is Arch Linux, I'm building infrastructure in AWS using boto version 2.39.0, and aws-cli version 1.10.17.
##### SUMMARY
When creating VPCs, AWS will automatically convert your subnet CIDR blocks to it's strictest representation (10.20.30.0/16 will be converted to 10.20.0.0/16), however, when performing checks (beginning line 193 of ec2_vpc.py) to determine if the VPC needs to be modified, Ansible uses the representation provided by the user, which can differ from the representation returned by AWS, in this case, a new VPC will be erroneously created for each subsequent playbook run.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
Save the following playbook as ec2_vpc-test.yml and run it with ansible-playbook ec2_vpc-test.yml
```
---
- hosts: localhost
tasks:
- name: "Create VPC"
local_action:
module: ec2_vpc
state: present
cidr_block: "10.20.30.0/16"
resource_tags:
Name: 'ec2_vpc subnet test'
region: "eu-west-1"
- name: "Create VPC"
local_action:
module: ec2_vpc
state: present
cidr_block: "10.20.30.0/16"
resource_tags:
Name: 'ec2_vpc subnet test'
region: "eu-west-1"
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
I expect that only one VPC will be created regardless of how many times the playbook is run
##### ACTUAL RESULTS
Two new, identical VPCs are created every time this playbook is run, despite no playbook changes being made.
```
[dwood@dawood-arch ansible]$ ansible-playbook ec2_vpc-test.yml
[WARNING]: Host file not found: /etc/ansible/hosts
[WARNING]: provided hosts list is empty, only localhost is available
PLAY [localhost] ***************************************************************
TASK [Create VPC] **************************************************************
changed: [localhost -> localhost]
TASK [Create VPC] **************************************************************
changed: [localhost -> localhost]
PLAY RECAP *********************************************************************
localhost : ok=2 changed=2 unreachable=0 failed=0
[dwood@dawood-arch ansible]$
```
| True | ec2_vpc module erroneously recreates VPCs when passing loosely defined CIDR blocks - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ec2_vpc module
##### ANSIBLE VERSION
```
ansible 2.1.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
##### OS / ENVIRONMENT
Host OS is Arch Linux, I'm building infrastructure in AWS using boto version 2.39.0, and aws-cli version 1.10.17.
##### SUMMARY
When creating VPCs, AWS will automatically convert your subnet CIDR blocks to it's strictest representation (10.20.30.0/16 will be converted to 10.20.0.0/16), however, when performing checks (beginning line 193 of ec2_vpc.py) to determine if the VPC needs to be modified, Ansible uses the representation provided by the user, which can differ from the representation returned by AWS, in this case, a new VPC will be erroneously created for each subsequent playbook run.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
Save the following playbook as ec2_vpc-test.yml and run it with ansible-playbook ec2_vpc-test.yml
```
---
- hosts: localhost
tasks:
- name: "Create VPC"
local_action:
module: ec2_vpc
state: present
cidr_block: "10.20.30.0/16"
resource_tags:
Name: 'ec2_vpc subnet test'
region: "eu-west-1"
- name: "Create VPC"
local_action:
module: ec2_vpc
state: present
cidr_block: "10.20.30.0/16"
resource_tags:
Name: 'ec2_vpc subnet test'
region: "eu-west-1"
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
I expect that only one VPC will be created regardless of how many times the playbook is run
##### ACTUAL RESULTS
Two new, identical VPCs are created every time this playbook is run, despite no playbook changes being made.
```
[dwood@dawood-arch ansible]$ ansible-playbook ec2_vpc-test.yml
[WARNING]: Host file not found: /etc/ansible/hosts
[WARNING]: provided hosts list is empty, only localhost is available
PLAY [localhost] ***************************************************************
TASK [Create VPC] **************************************************************
changed: [localhost -> localhost]
TASK [Create VPC] **************************************************************
changed: [localhost -> localhost]
PLAY RECAP *********************************************************************
localhost : ok=2 changed=2 unreachable=0 failed=0
[dwood@dawood-arch ansible]$
```
| main | vpc module erroneously recreates vpcs when passing loosely defined cidr blocks issue type bug report component name vpc module ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration os environment host os is arch linux i m building infrastructure in aws using boto version and aws cli version summary when creating vpcs aws will automatically convert your subnet cidr blocks to it s strictest representation will be converted to however when performing checks beginning line of vpc py to determine if the vpc needs to be modified ansible uses the representation provided by the user which can differ from the representation returned by aws in this case a new vpc will be erroneously created for each subsequent playbook run steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used save the following playbook as vpc test yml and run it with ansible playbook vpc test yml hosts localhost tasks name create vpc local action module vpc state present cidr block resource tags name vpc subnet test region eu west name create vpc local action module vpc state present cidr block resource tags name vpc subnet test region eu west expected results i expect that only one vpc will be created regardless of how many times the playbook is run actual results two new identical vpcs are created every time this playbook is run despite no playbook changes being made ansible playbook vpc test yml host file not found etc ansible hosts provided hosts list is empty only localhost is available play task changed task changed play recap localhost ok changed unreachable failed | 1 |
200 | 2,832,153,909 | IssuesEvent | 2015-05-25 04:39:40 | tgstation/-tg-station | https://api.github.com/repos/tgstation/-tg-station | closed | Flag NOSHIELD is unused | Maintainability - Hinders improvements Not a bug | Defined in __DEFINES/flags.dm
The item flag NOSHIELD is meant to be used to allow weapons to bypass the riot shield, however while it is defined, it is not actually used anywhere in the code. | True | Flag NOSHIELD is unused - Defined in __DEFINES/flags.dm
The item flag NOSHIELD is meant to be used to allow weapons to bypass the riot shield, however while it is defined, it is not actually used anywhere in the code. | main | flag noshield is unused defined in defines flags dm the item flag noshield is meant to be used to allow weapons to bypass the riot shield however while it is defined it is not actually used anywhere in the code | 1 |
231 | 2,905,114,374 | IssuesEvent | 2015-06-18 21:42:32 | cattolyst/datafinisher | https://api.github.com/repos/cattolyst/datafinisher | opened | Identify common or cumbersome SQL code patterns and write functions to generate them | maintainability | Examples: select statements, join statements, date manipulation, various group concatenations. This ticket isn't about actually writing these functions, but rather collecting and prioritizing a list of candidate SQL patterns. | True | Identify common or cumbersome SQL code patterns and write functions to generate them - Examples: select statements, join statements, date manipulation, various group concatenations. This ticket isn't about actually writing these functions, but rather collecting and prioritizing a list of candidate SQL patterns. | main | identify common or cumbersome sql code patterns and write functions to generate them examples select statements join statements date manipulation various group concatenations this ticket isn t about actually writing these functions but rather collecting and prioritizing a list of candidate sql patterns | 1 |
242,100 | 20,196,931,275 | IssuesEvent | 2022-02-11 11:31:57 | getsentry/sentry-ruby | https://api.github.com/repos/getsentry/sentry-ruby | closed | Reorganize `sentry-rails`' test apps | testing sentry-rails | `sentry-rails` supports a wide range of Rails versions (from 5.0 to 7.0). This means that it's test setup is complex and [full of compatibility workarounds](https://github.com/getsentry/sentry-ruby/blob/master/sentry-rails/spec/support/test_rails_app/app.rb#L155-L197). We should start thinking a more maintainable way to build test apps (perhaps one app file for each Rails version). | 1.0 | Reorganize `sentry-rails`' test apps - `sentry-rails` supports a wide range of Rails versions (from 5.0 to 7.0). This means that it's test setup is complex and [full of compatibility workarounds](https://github.com/getsentry/sentry-ruby/blob/master/sentry-rails/spec/support/test_rails_app/app.rb#L155-L197). We should start thinking a more maintainable way to build test apps (perhaps one app file for each Rails version). | non_main | reorganize sentry rails test apps sentry rails supports a wide range of rails versions from to this means that it s test setup is complex and we should start thinking a more maintainable way to build test apps perhaps one app file for each rails version | 0 |
1,113 | 4,988,930,209 | IssuesEvent | 2016-12-08 10:06:21 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | template_module: does not fail anymore when source file is absent | affects_2.1 bug_report waiting_on_maintainer | ##### ISSUE TYPE
Bug Report
##### COMPONENT NAME
template module
##### ANSIBLE VERSION
2.1.0
##### SUMMARY
Issue Type: Bug Report
Ansible Version: ansible-playbook 2.1.0 (devel 45355cd566) last updated 2016/01/08 15:07:35 (GMT +200)
Environment: Ubuntu 15.04
Problem: I thing the expected behaviour is that Ansible fails on runtime when a source file is missing.
The old error looks like this:
```
fatal: [{{hostname}}] => input file not found at /{foobar}/roles/foobar/templates/etc/apt/foo/bar.j2 or /{foobar}/etc/apt/foo/bar.j2
```
Using the development version a non-existing source file is just ignored and marked as green during runtime - the destination file isn't touched at all.
| True | template_module: does not fail anymore when source file is absent - ##### ISSUE TYPE
Bug Report
##### COMPONENT NAME
template module
##### ANSIBLE VERSION
2.1.0
##### SUMMARY
Issue Type: Bug Report
Ansible Version: ansible-playbook 2.1.0 (devel 45355cd566) last updated 2016/01/08 15:07:35 (GMT +200)
Environment: Ubuntu 15.04
Problem: I thing the expected behaviour is that Ansible fails on runtime when a source file is missing.
The old error looks like this:
```
fatal: [{{hostname}}] => input file not found at /{foobar}/roles/foobar/templates/etc/apt/foo/bar.j2 or /{foobar}/etc/apt/foo/bar.j2
```
Using the development version a non-existing source file is just ignored and marked as green during runtime - the destination file isn't touched at all.
| main | template module does not fail anymore when source file is absent issue type bug report component name template module ansible version summary issue type bug report ansible version ansible playbook devel last updated gmt environment ubuntu problem i thing the expected behaviour is that ansible fails on runtime when a source file is missing the old error looks like this fatal input file not found at foobar roles foobar templates etc apt foo bar or foobar etc apt foo bar using the development version a non existing source file is just ignored and marked as green during runtime the destination file isn t touched at all | 1 |
2,454 | 8,639,878,902 | IssuesEvent | 2018-11-23 22:19:16 | F5OEO/rpitx | https://api.github.com/repos/F5OEO/rpitx | closed | Modulate pll instead clock divider leads to a pure signal | V1 related (not maintained) | Is it technically possible to implement this commit from a fork of PiFmRds to Rpitx ?
[](url)
https://github.com/SaucySoliton/PiFmRds/commit/6984fd64b7d919f7632850f6c66826880daeef00
The result of pll modulation is just amazing. Only harmonics emissions, no more spurious, heavy intermod, phase noise...
Bonus, carrier can go up to 650MHz, rpitx is limited to 250 before switching to harmonic mode.
With it, the RF output looks almost like a regular transmitter and is not scary anymore.
Thank you for your amazing work. | True | Modulate pll instead clock divider leads to a pure signal - Is it technically possible to implement this commit from a fork of PiFmRds to Rpitx ?
[](url)
https://github.com/SaucySoliton/PiFmRds/commit/6984fd64b7d919f7632850f6c66826880daeef00
The result of pll modulation is just amazing. Only harmonics emissions, no more spurious, heavy intermod, phase noise...
Bonus, carrier can go up to 650MHz, rpitx is limited to 250 before switching to harmonic mode.
With it, the RF output looks almost like a regular transmitter and is not scary anymore.
Thank you for your amazing work. | main | modulate pll instead clock divider leads to a pure signal is it technically possible to implement this commit from a fork of pifmrds to rpitx url the result of pll modulation is just amazing only harmonics emissions no more spurious heavy intermod phase noise bonus carrier can go up to rpitx is limited to before switching to harmonic mode with it the rf output looks almost like a regular transmitter and is not scary anymore thank you for your amazing work | 1 |
608 | 4,104,473,639 | IssuesEvent | 2016-06-05 11:55:05 | gama-platform/gama | https://api.github.com/repos/gama-platform/gama | closed | Error UI components of GAMA | Affects Maintainability OS OSX Version 1.7 beta | ### Steps to reproduce
1. Open standalone, click check for updates
2. Along with a few updates there is a core.UI.feature
3. It shows an error indicating editbox has conflict with two versions.
May be something to fix?
Cannot complete the install because of a conflicting dependency.
Software being installed: UI Components of GAMA 1.7.0.201606010729 (ummisco.gama.feature.core.ui.feature.group 1.7.0.201606010729)
Software currently installed: Dependencies of GAMA 1.7.0.201605240752 (ummisco.gama.feature.dependencies.feature.group 1.7.0.201605240752)
Only one of the following can be installed at once:
Editbox Plug-in for GAML 1.7.0.201606010729 (ummisco.gaml.editbox 1.7.0.201606010729)
Editbox Plug-in for GAML 1.7.0.201605240752 (ummisco.gaml.editbox 1.7.0.201605240752)
Cannot satisfy dependency:
From: UI Components of GAMA 1.7.0.201606010729 (ummisco.gama.feature.core.ui.feature.group 1.7.0.201606010729)
To: ummisco.gaml.editbox [1.7.0.201606010729]
Cannot satisfy dependency:
From: Dependencies of GAMA 1.7.0.201605240752 (ummisco.gama.feature.dependencies.feature.group 1.7.0.201605240752)
To: ummisco.gaml.editbox [1.7.0.201605240752]
| True | Error UI components of GAMA - ### Steps to reproduce
1. Open standalone, click check for updates
2. Along with a few updates there is a core.UI.feature
3. It shows an error indicating editbox has conflict with two versions.
May be something to fix?
Cannot complete the install because of a conflicting dependency.
Software being installed: UI Components of GAMA 1.7.0.201606010729 (ummisco.gama.feature.core.ui.feature.group 1.7.0.201606010729)
Software currently installed: Dependencies of GAMA 1.7.0.201605240752 (ummisco.gama.feature.dependencies.feature.group 1.7.0.201605240752)
Only one of the following can be installed at once:
Editbox Plug-in for GAML 1.7.0.201606010729 (ummisco.gaml.editbox 1.7.0.201606010729)
Editbox Plug-in for GAML 1.7.0.201605240752 (ummisco.gaml.editbox 1.7.0.201605240752)
Cannot satisfy dependency:
From: UI Components of GAMA 1.7.0.201606010729 (ummisco.gama.feature.core.ui.feature.group 1.7.0.201606010729)
To: ummisco.gaml.editbox [1.7.0.201606010729]
Cannot satisfy dependency:
From: Dependencies of GAMA 1.7.0.201605240752 (ummisco.gama.feature.dependencies.feature.group 1.7.0.201605240752)
To: ummisco.gaml.editbox [1.7.0.201605240752]
| main | error ui components of gama steps to reproduce open standalone click check for updates along with a few updates there is a core ui feature it shows an error indicating editbox has conflict with two versions may be something to fix cannot complete the install because of a conflicting dependency software being installed ui components of gama ummisco gama feature core ui feature group software currently installed dependencies of gama ummisco gama feature dependencies feature group only one of the following can be installed at once editbox plug in for gaml ummisco gaml editbox editbox plug in for gaml ummisco gaml editbox cannot satisfy dependency from ui components of gama ummisco gama feature core ui feature group to ummisco gaml editbox cannot satisfy dependency from dependencies of gama ummisco gama feature dependencies feature group to ummisco gaml editbox | 1 |
220,117 | 24,562,376,621 | IssuesEvent | 2022-10-12 21:40:06 | BrianMcDonaldWS/Ignite | https://api.github.com/repos/BrianMcDonaldWS/Ignite | closed | CVE-2021-33502 (High) detected in multiple libraries - autoclosed | security vulnerability | ## CVE-2021-33502 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>normalize-url-3.3.0.tgz</b>, <b>normalize-url-2.0.1.tgz</b>, <b>normalize-url-1.9.1.tgz</b></p></summary>
<p>
<details><summary><b>normalize-url-3.3.0.tgz</b></p></summary>
<p>Normalize a URL</p>
<p>Library home page: <a href="https://registry.npmjs.org/normalize-url/-/normalize-url-3.3.0.tgz">https://registry.npmjs.org/normalize-url/-/normalize-url-3.3.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/normalize-url/package.json</p>
<p>
Dependency Hierarchy:
- cssnano-4.1.10.tgz (Root Library)
- cssnano-preset-default-4.0.7.tgz
- postcss-normalize-url-4.0.1.tgz
- :x: **normalize-url-3.3.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>normalize-url-2.0.1.tgz</b></p></summary>
<p>Normalize a URL</p>
<p>Library home page: <a href="https://registry.npmjs.org/normalize-url/-/normalize-url-2.0.1.tgz">https://registry.npmjs.org/normalize-url/-/normalize-url-2.0.1.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/cacheable-request/node_modules/normalize-url/package.json</p>
<p>
Dependency Hierarchy:
- image-webpack-loader-4.6.0.tgz (Root Library)
- imagemin-gifsicle-6.0.1.tgz
- gifsicle-4.0.1.tgz
- bin-wrapper-4.1.0.tgz
- download-7.1.0.tgz
- got-8.3.2.tgz
- cacheable-request-2.1.4.tgz
- :x: **normalize-url-2.0.1.tgz** (Vulnerable Library)
</details>
<details><summary><b>normalize-url-1.9.1.tgz</b></p></summary>
<p>Normalize a URL</p>
<p>Library home page: <a href="https://registry.npmjs.org/normalize-url/-/normalize-url-1.9.1.tgz">https://registry.npmjs.org/normalize-url/-/normalize-url-1.9.1.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/humanize-url/node_modules/normalize-url/package.json</p>
<p>
Dependency Hierarchy:
- gh-pages-2.2.0.tgz (Root Library)
- filenamify-url-1.0.0.tgz
- humanize-url-1.0.1.tgz
- :x: **normalize-url-1.9.1.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/BrianMcDonaldWS/Ignite/commit/73ddc067d6e371d536ab7176304514cc656dd35e">73ddc067d6e371d536ab7176304514cc656dd35e</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The normalize-url package before 4.5.1, 5.x before 5.3.1, and 6.x before 6.0.1 for Node.js has a ReDoS (regular expression denial of service) issue because it has exponential performance for data: URLs.
<p>Publish Date: 2021-05-24
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-33502>CVE-2021-33502</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33502">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33502</a></p>
<p>Release Date: 2021-05-24</p>
<p>Fix Resolution (normalize-url): 4.5.1</p>
<p>Direct dependency fix Resolution (cssnano): 5.0.0</p><p>Fix Resolution (normalize-url): 4.5.1</p>
<p>Direct dependency fix Resolution (gh-pages): 3.2.1</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
| True | CVE-2021-33502 (High) detected in multiple libraries - autoclosed - ## CVE-2021-33502 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>normalize-url-3.3.0.tgz</b>, <b>normalize-url-2.0.1.tgz</b>, <b>normalize-url-1.9.1.tgz</b></p></summary>
<p>
<details><summary><b>normalize-url-3.3.0.tgz</b></p></summary>
<p>Normalize a URL</p>
<p>Library home page: <a href="https://registry.npmjs.org/normalize-url/-/normalize-url-3.3.0.tgz">https://registry.npmjs.org/normalize-url/-/normalize-url-3.3.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/normalize-url/package.json</p>
<p>
Dependency Hierarchy:
- cssnano-4.1.10.tgz (Root Library)
- cssnano-preset-default-4.0.7.tgz
- postcss-normalize-url-4.0.1.tgz
- :x: **normalize-url-3.3.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>normalize-url-2.0.1.tgz</b></p></summary>
<p>Normalize a URL</p>
<p>Library home page: <a href="https://registry.npmjs.org/normalize-url/-/normalize-url-2.0.1.tgz">https://registry.npmjs.org/normalize-url/-/normalize-url-2.0.1.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/cacheable-request/node_modules/normalize-url/package.json</p>
<p>
Dependency Hierarchy:
- image-webpack-loader-4.6.0.tgz (Root Library)
- imagemin-gifsicle-6.0.1.tgz
- gifsicle-4.0.1.tgz
- bin-wrapper-4.1.0.tgz
- download-7.1.0.tgz
- got-8.3.2.tgz
- cacheable-request-2.1.4.tgz
- :x: **normalize-url-2.0.1.tgz** (Vulnerable Library)
</details>
<details><summary><b>normalize-url-1.9.1.tgz</b></p></summary>
<p>Normalize a URL</p>
<p>Library home page: <a href="https://registry.npmjs.org/normalize-url/-/normalize-url-1.9.1.tgz">https://registry.npmjs.org/normalize-url/-/normalize-url-1.9.1.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/humanize-url/node_modules/normalize-url/package.json</p>
<p>
Dependency Hierarchy:
- gh-pages-2.2.0.tgz (Root Library)
- filenamify-url-1.0.0.tgz
- humanize-url-1.0.1.tgz
- :x: **normalize-url-1.9.1.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/BrianMcDonaldWS/Ignite/commit/73ddc067d6e371d536ab7176304514cc656dd35e">73ddc067d6e371d536ab7176304514cc656dd35e</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The normalize-url package before 4.5.1, 5.x before 5.3.1, and 6.x before 6.0.1 for Node.js has a ReDoS (regular expression denial of service) issue because it has exponential performance for data: URLs.
<p>Publish Date: 2021-05-24
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-33502>CVE-2021-33502</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33502">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33502</a></p>
<p>Release Date: 2021-05-24</p>
<p>Fix Resolution (normalize-url): 4.5.1</p>
<p>Direct dependency fix Resolution (cssnano): 5.0.0</p><p>Fix Resolution (normalize-url): 4.5.1</p>
<p>Direct dependency fix Resolution (gh-pages): 3.2.1</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
| non_main | cve high detected in multiple libraries autoclosed cve high severity vulnerability vulnerable libraries normalize url tgz normalize url tgz normalize url tgz normalize url tgz normalize a url library home page a href path to dependency file package json path to vulnerable library node modules normalize url package json dependency hierarchy cssnano tgz root library cssnano preset default tgz postcss normalize url tgz x normalize url tgz vulnerable library normalize url tgz normalize a url library home page a href path to dependency file package json path to vulnerable library node modules cacheable request node modules normalize url package json dependency hierarchy image webpack loader tgz root library imagemin gifsicle tgz gifsicle tgz bin wrapper tgz download tgz got tgz cacheable request tgz x normalize url tgz vulnerable library normalize url tgz normalize a url library home page a href path to dependency file package json path to vulnerable library node modules humanize url node modules normalize url package json dependency hierarchy gh pages tgz root library filenamify url tgz humanize url tgz x normalize url tgz vulnerable library found in head commit a href vulnerability details the normalize url package before x before and x before for node js has a redos regular expression denial of service issue because it has exponential performance for data urls publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution normalize url direct dependency fix resolution cssnano fix resolution normalize url direct dependency fix resolution gh pages check this box to open an automated fix pr | 0 |
5,498 | 27,431,751,099 | IssuesEvent | 2023-03-02 02:16:53 | Homebrew/homebrew-cask | https://api.github.com/repos/Homebrew/homebrew-cask | closed | Install Error Hancom Word | awaiting maintainer feedback | ### Verification
- [X] I understand that [if I ignore these instructions, my issue may be closed without review](https://github.com/Homebrew/homebrew-cask/blob/master/doc/faq/closing_issues_without_review.md).
- [X] I have retried my command with `--force`.
- [X] I ran `brew update-reset && brew update` and retried my command.
- [X] I ran `brew doctor`, fixed as many issues as possible and retried my command.
- [X] I have checked the instructions for [reporting bugs](https://github.com/Homebrew/homebrew-cask#reporting-bugs).
- [X] I made doubly sure this is not a [checksum does not match](https://docs.brew.sh/Common-Issues#cask---checksum-does-not-match) error.
### Description of issue
Downloading Hangul and computer fails.
You can download it normally from the official site.
The file name has not changed or the URL has not changed.
```
$HOME/Library/Caches/Homebrew/downloads
```
I deleted the file and tried again, but the error still remains.
### Command that failed
brew install --cask --verbose --debug hancom-word
### Output of command with `--verbose --debug`
```shell
leejongyoung@MacBook-Air ~ % brew install --cask --verbose --debug hancom-word
/opt/homebrew/Library/Homebrew/brew.rb (Cask::CaskLoader::FromAPILoader): loading hancom-word
==> Cask::Installer#install
==> Printing caveats
==> Cask::Installer#fetch
==> Downloading https://cdn.hancom.com/pds/hnc/DOWN/HancomDocs/HwpMac_HancomDocs.pkg
/usr/bin/env /opt/homebrew/Library/Homebrew/shims/shared/curl --disable --cookie /dev/null --globoff --show-error --user-agent Homebrew/4.0.1\ \(Macintosh\;\ arm64\ Mac\ OS\ X\ 13.2.1\)\ curl/7.86.0 --header Accept-Language:\ en --retry 3 --location --silent --head --request GET https://cdn.hancom.com/pds/hnc/DOWN/HancomDocs/HwpMac_HancomDocs.pkg
Already downloaded: /Users/leejongyoung/Library/Caches/Homebrew/downloads/3edba80c3369f86447bd6495b1d4f62fb5b653c54b668412154ed93d8374d11c--
==> Checking quarantine support
/usr/bin/env /usr/bin/xattr -h
/usr/bin/env /usr/bin/swift -target arm64-apple-macosx13 /opt/homebrew/Library/Homebrew/cask/utils/quarantine.swift
==> Quarantine is available.
==> Verifying Gatekeeper status of /Users/leejongyoung/Library/Caches/Homebrew/downloads/3edba80c3369f86447bd6495b1d4f62fb5b653c54b668412154ed93d8374d11c--
/usr/bin/env /usr/bin/xattr -p com.apple.quarantine /Users/leejongyoung/Library/Caches/Homebrew/downloads/3edba80c3369f86447bd6495b1d4f62fb5b653c54b668412154ed93d8374d11c--
==> /Users/leejongyoung/Library/Caches/Homebrew/downloads/3edba80c3369f86447bd6495b1d4f62fb5b653c54b668412154ed93d8374d11c-- is quarantined
Warning: No checksum defined for cask 'hancom-word', skipping verification.
/usr/bin/env hdiutil imageinfo -format /Users/leejongyoung/Library/Caches/Homebrew/downloads/3edba80c3369f86447bd6495b1d4f62fb5b653c54b668412154ed93d8374d11c--
==> Installing Cask hancom-word
==> Cask::Installer#stage
==> Extracting primary container
==> Using container class UnpackStrategy::Uncompressed for /Users/leejongyoung/Library/Caches/Homebrew/downloads/3edba80c3369f86447bd6495b1d4f62fb5b653c54b668412154ed93d8374d11c--
cp -p /Users/leejongyoung/Library/Caches/Homebrew/downloads/3edba80c3369f86447bd6495b1d4f62fb5b653c54b668412154ed93d8374d11c-- /opt/homebrew/Caskroom/hancom-word/12.30.0,4281
==> Purging files for version 12.30.0,4281 of Cask hancom-word
Error: Is a directory - read
/System/Library/Frameworks/Ruby.framework/Versions/2.6/usr/lib/ruby/2.6.0/fileutils.rb:1388:in `copy_stream'
/System/Library/Frameworks/Ruby.framework/Versions/2.6/usr/lib/ruby/2.6.0/fileutils.rb:1388:in `block (2 levels) in copy_file'
/System/Library/Frameworks/Ruby.framework/Versions/2.6/usr/lib/ruby/2.6.0/fileutils.rb:1387:in `open'
/System/Library/Frameworks/Ruby.framework/Versions/2.6/usr/lib/ruby/2.6.0/fileutils.rb:1387:in `block in copy_file'
/System/Library/Frameworks/Ruby.framework/Versions/2.6/usr/lib/ruby/2.6.0/fileutils.rb:1386:in `open'
/System/Library/Frameworks/Ruby.framework/Versions/2.6/usr/lib/ruby/2.6.0/fileutils.rb:1386:in `copy_file'
/System/Library/Frameworks/Ruby.framework/Versions/2.6/usr/lib/ruby/2.6.0/fileutils.rb:492:in `copy_file'
/System/Library/Frameworks/Ruby.framework/Versions/2.6/usr/lib/ruby/2.6.0/fileutils.rb:419:in `block in cp'
/System/Library/Frameworks/Ruby.framework/Versions/2.6/usr/lib/ruby/2.6.0/fileutils.rb:1558:in `block in fu_each_src_dest'
/System/Library/Frameworks/Ruby.framework/Versions/2.6/usr/lib/ruby/2.6.0/fileutils.rb:1572:in `fu_each_src_dest0'
/System/Library/Frameworks/Ruby.framework/Versions/2.6/usr/lib/ruby/2.6.0/fileutils.rb:1556:in `fu_each_src_dest'
/System/Library/Frameworks/Ruby.framework/Versions/2.6/usr/lib/ruby/2.6.0/fileutils.rb:418:in `cp'
/opt/homebrew/Library/Homebrew/unpack_strategy/uncompressed.rb:19:in `extract_to_dir'
/opt/homebrew/Library/Homebrew/unpack_strategy.rb:142:in `extract'
/opt/homebrew/Library/Homebrew/unpack_strategy/uncompressed.rb:12:in `extract_nestedly'
/opt/homebrew/Library/Homebrew/cask/installer.rb:220:in `extract_primary_container'
/opt/homebrew/Library/Homebrew/cask/installer.rb:79:in `stage'
/opt/homebrew/Library/Homebrew/cask/installer.rb:107:in `install'
/opt/homebrew/Library/Homebrew/cask/cmd/install.rb:110:in `block in install_casks'
/opt/homebrew/Library/Homebrew/cask/cmd/install.rb:109:in `each'
/opt/homebrew/Library/Homebrew/cask/cmd/install.rb:109:in `install_casks'
/opt/homebrew/Library/Homebrew/cmd/install.rb:185:in `install'
/opt/homebrew/Library/Homebrew/brew.rb:93:in `<main>'
leejongyoung@MacBook-Air ~ %
```
### Output of `brew doctor` and `brew config`
```shell
Your system is ready to brew.
leejongyoung@MacBook-Air ~ % brew config
HOMEBREW_VERSION: 4.0.1
ORIGIN: https://github.com/Homebrew/brew
HEAD: 17c872fb5275d87922a56416587cb439a5064354
Last commit: 4 days ago
Core tap JSON: 20 Feb 02:47 UTC
HOMEBREW_PREFIX: /opt/homebrew
HOMEBREW_CASK_OPTS: []
HOMEBREW_MAKE_JOBS: 8
Homebrew Ruby: 2.6.10 => /System/Library/Frameworks/Ruby.framework/Versions/2.6/usr/bin/ruby
CPU: octa-core 64-bit arm_firestorm_icestorm
Clang: 14.0.0 build 1400
Git: 2.37.1 => /Library/Developer/CommandLineTools/usr/bin/git
Curl: 7.86.0 => /usr/bin/curl
macOS: 13.2.1-arm64
CLT: 14.2.0.0.1.1668646533
Xcode: 14.2
Rosetta 2: false
```
### Output of `brew tap`
```shell
homebrew/cask
```
| True | Install Error Hancom Word - ### Verification
- [X] I understand that [if I ignore these instructions, my issue may be closed without review](https://github.com/Homebrew/homebrew-cask/blob/master/doc/faq/closing_issues_without_review.md).
- [X] I have retried my command with `--force`.
- [X] I ran `brew update-reset && brew update` and retried my command.
- [X] I ran `brew doctor`, fixed as many issues as possible and retried my command.
- [X] I have checked the instructions for [reporting bugs](https://github.com/Homebrew/homebrew-cask#reporting-bugs).
- [X] I made doubly sure this is not a [checksum does not match](https://docs.brew.sh/Common-Issues#cask---checksum-does-not-match) error.
### Description of issue
Downloading Hangul and computer fails.
You can download it normally from the official site.
The file name has not changed or the URL has not changed.
```
$HOME/Library/Caches/Homebrew/downloads
```
I deleted the file and tried again, but the error still remains.
### Command that failed
brew install --cask --verbose --debug hancom-word
### Output of command with `--verbose --debug`
```shell
leejongyoung@MacBook-Air ~ % brew install --cask --verbose --debug hancom-word
/opt/homebrew/Library/Homebrew/brew.rb (Cask::CaskLoader::FromAPILoader): loading hancom-word
==> Cask::Installer#install
==> Printing caveats
==> Cask::Installer#fetch
==> Downloading https://cdn.hancom.com/pds/hnc/DOWN/HancomDocs/HwpMac_HancomDocs.pkg
/usr/bin/env /opt/homebrew/Library/Homebrew/shims/shared/curl --disable --cookie /dev/null --globoff --show-error --user-agent Homebrew/4.0.1\ \(Macintosh\;\ arm64\ Mac\ OS\ X\ 13.2.1\)\ curl/7.86.0 --header Accept-Language:\ en --retry 3 --location --silent --head --request GET https://cdn.hancom.com/pds/hnc/DOWN/HancomDocs/HwpMac_HancomDocs.pkg
Already downloaded: /Users/leejongyoung/Library/Caches/Homebrew/downloads/3edba80c3369f86447bd6495b1d4f62fb5b653c54b668412154ed93d8374d11c--
==> Checking quarantine support
/usr/bin/env /usr/bin/xattr -h
/usr/bin/env /usr/bin/swift -target arm64-apple-macosx13 /opt/homebrew/Library/Homebrew/cask/utils/quarantine.swift
==> Quarantine is available.
==> Verifying Gatekeeper status of /Users/leejongyoung/Library/Caches/Homebrew/downloads/3edba80c3369f86447bd6495b1d4f62fb5b653c54b668412154ed93d8374d11c--
/usr/bin/env /usr/bin/xattr -p com.apple.quarantine /Users/leejongyoung/Library/Caches/Homebrew/downloads/3edba80c3369f86447bd6495b1d4f62fb5b653c54b668412154ed93d8374d11c--
==> /Users/leejongyoung/Library/Caches/Homebrew/downloads/3edba80c3369f86447bd6495b1d4f62fb5b653c54b668412154ed93d8374d11c-- is quarantined
Warning: No checksum defined for cask 'hancom-word', skipping verification.
/usr/bin/env hdiutil imageinfo -format /Users/leejongyoung/Library/Caches/Homebrew/downloads/3edba80c3369f86447bd6495b1d4f62fb5b653c54b668412154ed93d8374d11c--
==> Installing Cask hancom-word
==> Cask::Installer#stage
==> Extracting primary container
==> Using container class UnpackStrategy::Uncompressed for /Users/leejongyoung/Library/Caches/Homebrew/downloads/3edba80c3369f86447bd6495b1d4f62fb5b653c54b668412154ed93d8374d11c--
cp -p /Users/leejongyoung/Library/Caches/Homebrew/downloads/3edba80c3369f86447bd6495b1d4f62fb5b653c54b668412154ed93d8374d11c-- /opt/homebrew/Caskroom/hancom-word/12.30.0,4281
==> Purging files for version 12.30.0,4281 of Cask hancom-word
Error: Is a directory - read
/System/Library/Frameworks/Ruby.framework/Versions/2.6/usr/lib/ruby/2.6.0/fileutils.rb:1388:in `copy_stream'
/System/Library/Frameworks/Ruby.framework/Versions/2.6/usr/lib/ruby/2.6.0/fileutils.rb:1388:in `block (2 levels) in copy_file'
/System/Library/Frameworks/Ruby.framework/Versions/2.6/usr/lib/ruby/2.6.0/fileutils.rb:1387:in `open'
/System/Library/Frameworks/Ruby.framework/Versions/2.6/usr/lib/ruby/2.6.0/fileutils.rb:1387:in `block in copy_file'
/System/Library/Frameworks/Ruby.framework/Versions/2.6/usr/lib/ruby/2.6.0/fileutils.rb:1386:in `open'
/System/Library/Frameworks/Ruby.framework/Versions/2.6/usr/lib/ruby/2.6.0/fileutils.rb:1386:in `copy_file'
/System/Library/Frameworks/Ruby.framework/Versions/2.6/usr/lib/ruby/2.6.0/fileutils.rb:492:in `copy_file'
/System/Library/Frameworks/Ruby.framework/Versions/2.6/usr/lib/ruby/2.6.0/fileutils.rb:419:in `block in cp'
/System/Library/Frameworks/Ruby.framework/Versions/2.6/usr/lib/ruby/2.6.0/fileutils.rb:1558:in `block in fu_each_src_dest'
/System/Library/Frameworks/Ruby.framework/Versions/2.6/usr/lib/ruby/2.6.0/fileutils.rb:1572:in `fu_each_src_dest0'
/System/Library/Frameworks/Ruby.framework/Versions/2.6/usr/lib/ruby/2.6.0/fileutils.rb:1556:in `fu_each_src_dest'
/System/Library/Frameworks/Ruby.framework/Versions/2.6/usr/lib/ruby/2.6.0/fileutils.rb:418:in `cp'
/opt/homebrew/Library/Homebrew/unpack_strategy/uncompressed.rb:19:in `extract_to_dir'
/opt/homebrew/Library/Homebrew/unpack_strategy.rb:142:in `extract'
/opt/homebrew/Library/Homebrew/unpack_strategy/uncompressed.rb:12:in `extract_nestedly'
/opt/homebrew/Library/Homebrew/cask/installer.rb:220:in `extract_primary_container'
/opt/homebrew/Library/Homebrew/cask/installer.rb:79:in `stage'
/opt/homebrew/Library/Homebrew/cask/installer.rb:107:in `install'
/opt/homebrew/Library/Homebrew/cask/cmd/install.rb:110:in `block in install_casks'
/opt/homebrew/Library/Homebrew/cask/cmd/install.rb:109:in `each'
/opt/homebrew/Library/Homebrew/cask/cmd/install.rb:109:in `install_casks'
/opt/homebrew/Library/Homebrew/cmd/install.rb:185:in `install'
/opt/homebrew/Library/Homebrew/brew.rb:93:in `<main>'
leejongyoung@MacBook-Air ~ %
```
### Output of `brew doctor` and `brew config`
```shell
Your system is ready to brew.
leejongyoung@MacBook-Air ~ % brew config
HOMEBREW_VERSION: 4.0.1
ORIGIN: https://github.com/Homebrew/brew
HEAD: 17c872fb5275d87922a56416587cb439a5064354
Last commit: 4 days ago
Core tap JSON: 20 Feb 02:47 UTC
HOMEBREW_PREFIX: /opt/homebrew
HOMEBREW_CASK_OPTS: []
HOMEBREW_MAKE_JOBS: 8
Homebrew Ruby: 2.6.10 => /System/Library/Frameworks/Ruby.framework/Versions/2.6/usr/bin/ruby
CPU: octa-core 64-bit arm_firestorm_icestorm
Clang: 14.0.0 build 1400
Git: 2.37.1 => /Library/Developer/CommandLineTools/usr/bin/git
Curl: 7.86.0 => /usr/bin/curl
macOS: 13.2.1-arm64
CLT: 14.2.0.0.1.1668646533
Xcode: 14.2
Rosetta 2: false
```
### Output of `brew tap`
```shell
homebrew/cask
```
| main | install error hancom word verification i understand that i have retried my command with force i ran brew update reset brew update and retried my command i ran brew doctor fixed as many issues as possible and retried my command i have checked the instructions for i made doubly sure this is not a error description of issue downloading hangul and computer fails you can download it normally from the official site the file name has not changed or the url has not changed home library caches homebrew downloads i deleted the file and tried again but the error still remains command that failed brew install cask verbose debug hancom word output of command with verbose debug shell leejongyoung macbook air brew install cask verbose debug hancom word opt homebrew library homebrew brew rb cask caskloader fromapiloader loading hancom word cask installer install printing caveats cask installer fetch downloading usr bin env opt homebrew library homebrew shims shared curl disable cookie dev null globoff show error user agent homebrew macintosh mac os x curl header accept language en retry location silent head request get already downloaded users leejongyoung library caches homebrew downloads checking quarantine support usr bin env usr bin xattr h usr bin env usr bin swift target apple opt homebrew library homebrew cask utils quarantine swift quarantine is available verifying gatekeeper status of users leejongyoung library caches homebrew downloads usr bin env usr bin xattr p com apple quarantine users leejongyoung library caches homebrew downloads users leejongyoung library caches homebrew downloads is quarantined warning no checksum defined for cask hancom word skipping verification usr bin env hdiutil imageinfo format users leejongyoung library caches homebrew downloads installing cask hancom word cask installer stage extracting primary container using container class unpackstrategy uncompressed for users leejongyoung library caches homebrew downloads cp p users leejongyoung library caches homebrew downloads opt homebrew caskroom hancom word purging files for version of cask hancom word error is a directory read system library frameworks ruby framework versions usr lib ruby fileutils rb in copy stream system library frameworks ruby framework versions usr lib ruby fileutils rb in block levels in copy file system library frameworks ruby framework versions usr lib ruby fileutils rb in open system library frameworks ruby framework versions usr lib ruby fileutils rb in block in copy file system library frameworks ruby framework versions usr lib ruby fileutils rb in open system library frameworks ruby framework versions usr lib ruby fileutils rb in copy file system library frameworks ruby framework versions usr lib ruby fileutils rb in copy file system library frameworks ruby framework versions usr lib ruby fileutils rb in block in cp system library frameworks ruby framework versions usr lib ruby fileutils rb in block in fu each src dest system library frameworks ruby framework versions usr lib ruby fileutils rb in fu each src system library frameworks ruby framework versions usr lib ruby fileutils rb in fu each src dest system library frameworks ruby framework versions usr lib ruby fileutils rb in cp opt homebrew library homebrew unpack strategy uncompressed rb in extract to dir opt homebrew library homebrew unpack strategy rb in extract opt homebrew library homebrew unpack strategy uncompressed rb in extract nestedly opt homebrew library homebrew cask installer rb in extract primary container opt homebrew library homebrew cask installer rb in stage opt homebrew library homebrew cask installer rb in install opt homebrew library homebrew cask cmd install rb in block in install casks opt homebrew library homebrew cask cmd install rb in each opt homebrew library homebrew cask cmd install rb in install casks opt homebrew library homebrew cmd install rb in install opt homebrew library homebrew brew rb in leejongyoung macbook air output of brew doctor and brew config shell your system is ready to brew leejongyoung macbook air brew config homebrew version origin head last commit days ago core tap json feb utc homebrew prefix opt homebrew homebrew cask opts homebrew make jobs homebrew ruby system library frameworks ruby framework versions usr bin ruby cpu octa core bit arm firestorm icestorm clang build git library developer commandlinetools usr bin git curl usr bin curl macos clt xcode rosetta false output of brew tap shell homebrew cask | 1 |
5,790 | 30,661,299,048 | IssuesEvent | 2023-07-25 15:09:44 | ocbe-uio/trajpy | https://api.github.com/repos/ocbe-uio/trajpy | opened | Implement new parser with `singledispatch` and move type handling from `trajpy.__init__` | enhancement help wanted maintainability | Currently the class trajpy accepts either a csv file or a numpy array for initialising the object. However, we can improve this by implementing a parser with [`functools.singledispatch`](https://peps.python.org/pep-0443/).
Since trajpy's aims to be a general framework for trajectory analysis, it is critical to put more work on the parser for providing broad support for different file formats. [`singledispatch`](https://peps.python.org/pep-0443/) offers an elegant way for this implementation.
https://github.com/ocbe-uio/trajpy/blob/8381bedfc3f0d696072af1d66f08af497eb0cced/trajpy/trajpy.py#L27-L35 | True | Implement new parser with `singledispatch` and move type handling from `trajpy.__init__` - Currently the class trajpy accepts either a csv file or a numpy array for initialising the object. However, we can improve this by implementing a parser with [`functools.singledispatch`](https://peps.python.org/pep-0443/).
Since trajpy's aims to be a general framework for trajectory analysis, it is critical to put more work on the parser for providing broad support for different file formats. [`singledispatch`](https://peps.python.org/pep-0443/) offers an elegant way for this implementation.
https://github.com/ocbe-uio/trajpy/blob/8381bedfc3f0d696072af1d66f08af497eb0cced/trajpy/trajpy.py#L27-L35 | main | implement new parser with singledispatch and move type handling from trajpy init currently the class trajpy accepts either a csv file or a numpy array for initialising the object however we can improve this by implementing a parser with since trajpy s aims to be a general framework for trajectory analysis it is critical to put more work on the parser for providing broad support for different file formats offers an elegant way for this implementation | 1 |
35,940 | 12,394,505,861 | IssuesEvent | 2020-05-20 17:02:52 | rammatzkvosky/123456 | https://api.github.com/repos/rammatzkvosky/123456 | closed | CVE-2014-0472 (Medium) detected in django-1.4 | security vulnerability | ## CVE-2014-0472 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>django1.4</b></p></summary>
<p>
<p>The Web framework for perfectionists with deadlines.</p>
<p>Library home page: <a href=https://github.com/django/django.git>https://github.com/django/django.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/rammatzkvosky/123456/commit/a7e13152bd161876557b71b8112a67c84aee88df">a7e13152bd161876557b71b8112a67c84aee88df</a></p>
</p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Library Source Files (7)</summary>
<p></p>
<p> * The source files were matched to this source library based on a best effort match. Source libraries are selected from a list of probable public libraries.</p>
<p>
- /123456/django/validators.py
- /123456/django/paginator.py
- /123456/django/signing.py
- /123456/django/xheaders.py
- /123456/django/exceptions.py
- /123456/django/context_processors.py
- /123456/django/urlresolvers.py
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The django.core.urlresolvers.reverse function in Django before 1.4.11, 1.5.x before 1.5.6, 1.6.x before 1.6.3, and 1.7.x before 1.7 beta 2 allows remote attackers to import and execute arbitrary Python modules by leveraging a view that constructs URLs using user input and a "dotted Python path."
<p>Publish Date: 2014-04-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2014-0472>CVE-2014-0472</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.1</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2014-0472">https://nvd.nist.gov/vuln/detail/CVE-2014-0472</a></p>
<p>Release Date: 2014-04-23</p>
<p>Fix Resolution: 1.4.11,1.5.6,1.6.3,1.7 beta 2</p>
</p>
</details>
<p></p>
| True | CVE-2014-0472 (Medium) detected in django-1.4 - ## CVE-2014-0472 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>django1.4</b></p></summary>
<p>
<p>The Web framework for perfectionists with deadlines.</p>
<p>Library home page: <a href=https://github.com/django/django.git>https://github.com/django/django.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/rammatzkvosky/123456/commit/a7e13152bd161876557b71b8112a67c84aee88df">a7e13152bd161876557b71b8112a67c84aee88df</a></p>
</p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Library Source Files (7)</summary>
<p></p>
<p> * The source files were matched to this source library based on a best effort match. Source libraries are selected from a list of probable public libraries.</p>
<p>
- /123456/django/validators.py
- /123456/django/paginator.py
- /123456/django/signing.py
- /123456/django/xheaders.py
- /123456/django/exceptions.py
- /123456/django/context_processors.py
- /123456/django/urlresolvers.py
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The django.core.urlresolvers.reverse function in Django before 1.4.11, 1.5.x before 1.5.6, 1.6.x before 1.6.3, and 1.7.x before 1.7 beta 2 allows remote attackers to import and execute arbitrary Python modules by leveraging a view that constructs URLs using user input and a "dotted Python path."
<p>Publish Date: 2014-04-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2014-0472>CVE-2014-0472</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.1</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2014-0472">https://nvd.nist.gov/vuln/detail/CVE-2014-0472</a></p>
<p>Release Date: 2014-04-23</p>
<p>Fix Resolution: 1.4.11,1.5.6,1.6.3,1.7 beta 2</p>
</p>
</details>
<p></p>
| non_main | cve medium detected in django cve medium severity vulnerability vulnerable library the web framework for perfectionists with deadlines library home page a href found in head commit a href library source files the source files were matched to this source library based on a best effort match source libraries are selected from a list of probable public libraries django validators py django paginator py django signing py django xheaders py django exceptions py django context processors py django urlresolvers py vulnerability details the django core urlresolvers reverse function in django before x before x before and x before beta allows remote attackers to import and execute arbitrary python modules by leveraging a view that constructs urls using user input and a dotted python path publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution beta | 0 |
5,058 | 5,412,034,068 | IssuesEvent | 2017-03-01 13:31:27 | SatelliteQE/robottelo | https://api.github.com/repos/SatelliteQE/robottelo | closed | UI: Container creation fails with Internal Server Error ("unable to open database file") | High Infrastructure test-failure | RHEL6
these tests have it in setUpClass
- [ ] ui.test_adusergroup.ActiveDirectoryUserGroupTestCase.* (all 4 tests)
- [ ] ui.test_bookmark.BookmarkTestCase.* (all 11 tests)
- [ ] ui.test_docker.DockerRegistryTestCase.* (all 6 tests)
- [ ] ui.test_subscription.SubscriptionTestCase.* (all 1 test)
RHEL7
these tests have it in setUpClass
- [ ] ui.test_adusergroup.ActiveDirectoryUserGroupTestCase.* (all 4 tests)
- [ ] ui.test_bookmark.BookmarkTestCase.* (all 11 tests)
- [ ] ui.test_docker.DockerRegistryTestCase.* (all 6 tests)
- [ ] ui.test_subscription.SubscriptionTestCase.* (all 1 test)
others that failed on rhel7 have it in ```setUp``` and failed majority of tests (but not all)
- [ ] + over 100 ui tests
```
robottelo/test.py:256: in setUp
self._docker_browser.start()
robottelo/ui/browser.py:92: in start
self._create_container()
robottelo/ui/browser.py:183: in _create_container
ports=[4444],
../../shiningpanda/jobs/9c131512/virtualenvs/d41d8cd9/lib/python2.7/site-packages/docker/api/container.py:135: in create_container
return self.create_container_from_config(config, name)
../../shiningpanda/jobs/9c131512/virtualenvs/d41d8cd9/lib/python2.7/site-packages/docker/api/container.py:146: in create_container_from_config
return self._result(res, True)
../../shiningpanda/jobs/9c131512/virtualenvs/d41d8cd9/lib/python2.7/site-packages/docker/client.py:178: in _result
self._raise_for_status(response)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <docker.client.Client object at 0x7f31b9e2b290>
response = <Response [500]>, explanation = None
def _raise_for_status(self, response, explanation=None):
"""Raises stored :class:`APIError`, if one occurred."""
try:
response.raise_for_status()
except requests.exceptions.HTTPError as e:
if e.response.status_code == 404:
raise errors.NotFound(e, response, explanation=explanation)
> raise errors.APIError(e, response, explanation=explanation)
E APIError: 500 Server Error: Internal Server Error ("unable to open database file")
../../shiningpanda/jobs/9c131512/virtualenvs/d41d8cd9/lib/python2.7/site-packages/docker/client.py:174: APIError
``` | 1.0 | UI: Container creation fails with Internal Server Error ("unable to open database file") - RHEL6
these tests have it in setUpClass
- [ ] ui.test_adusergroup.ActiveDirectoryUserGroupTestCase.* (all 4 tests)
- [ ] ui.test_bookmark.BookmarkTestCase.* (all 11 tests)
- [ ] ui.test_docker.DockerRegistryTestCase.* (all 6 tests)
- [ ] ui.test_subscription.SubscriptionTestCase.* (all 1 test)
RHEL7
these tests have it in setUpClass
- [ ] ui.test_adusergroup.ActiveDirectoryUserGroupTestCase.* (all 4 tests)
- [ ] ui.test_bookmark.BookmarkTestCase.* (all 11 tests)
- [ ] ui.test_docker.DockerRegistryTestCase.* (all 6 tests)
- [ ] ui.test_subscription.SubscriptionTestCase.* (all 1 test)
others that failed on rhel7 have it in ```setUp``` and failed majority of tests (but not all)
- [ ] + over 100 ui tests
```
robottelo/test.py:256: in setUp
self._docker_browser.start()
robottelo/ui/browser.py:92: in start
self._create_container()
robottelo/ui/browser.py:183: in _create_container
ports=[4444],
../../shiningpanda/jobs/9c131512/virtualenvs/d41d8cd9/lib/python2.7/site-packages/docker/api/container.py:135: in create_container
return self.create_container_from_config(config, name)
../../shiningpanda/jobs/9c131512/virtualenvs/d41d8cd9/lib/python2.7/site-packages/docker/api/container.py:146: in create_container_from_config
return self._result(res, True)
../../shiningpanda/jobs/9c131512/virtualenvs/d41d8cd9/lib/python2.7/site-packages/docker/client.py:178: in _result
self._raise_for_status(response)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <docker.client.Client object at 0x7f31b9e2b290>
response = <Response [500]>, explanation = None
def _raise_for_status(self, response, explanation=None):
"""Raises stored :class:`APIError`, if one occurred."""
try:
response.raise_for_status()
except requests.exceptions.HTTPError as e:
if e.response.status_code == 404:
raise errors.NotFound(e, response, explanation=explanation)
> raise errors.APIError(e, response, explanation=explanation)
E APIError: 500 Server Error: Internal Server Error ("unable to open database file")
../../shiningpanda/jobs/9c131512/virtualenvs/d41d8cd9/lib/python2.7/site-packages/docker/client.py:174: APIError
``` | non_main | ui container creation fails with internal server error unable to open database file these tests have it in setupclass ui test adusergroup activedirectoryusergrouptestcase all tests ui test bookmark bookmarktestcase all tests ui test docker dockerregistrytestcase all tests ui test subscription subscriptiontestcase all test these tests have it in setupclass ui test adusergroup activedirectoryusergrouptestcase all tests ui test bookmark bookmarktestcase all tests ui test docker dockerregistrytestcase all tests ui test subscription subscriptiontestcase all test others that failed on have it in setup and failed majority of tests but not all over ui tests robottelo test py in setup self docker browser start robottelo ui browser py in start self create container robottelo ui browser py in create container ports shiningpanda jobs virtualenvs lib site packages docker api container py in create container return self create container from config config name shiningpanda jobs virtualenvs lib site packages docker api container py in create container from config return self result res true shiningpanda jobs virtualenvs lib site packages docker client py in result self raise for status response self response explanation none def raise for status self response explanation none raises stored class apierror if one occurred try response raise for status except requests exceptions httperror as e if e response status code raise errors notfound e response explanation explanation raise errors apierror e response explanation explanation e apierror server error internal server error unable to open database file shiningpanda jobs virtualenvs lib site packages docker client py apierror | 0 |
2,210 | 7,809,001,983 | IssuesEvent | 2018-06-11 22:12:34 | react-navigation/react-navigation | https://api.github.com/repos/react-navigation/react-navigation | closed | navigation.goBack() to previous page is impossible if that page was on another stackNavigator | needs response from maintainer related: goBack | Update: added snack and example
### Your Environment
| software | version
| ---------------- | -------
| react-navigation | 2.0.x
| react-native | expo 27
| node | 9.11.1
| npm or yarn | yarn 1.7
```
tab:
stack1:
screen1,
detailScreen1
(default route screen1)
stack2:
screen2,
detailScreen2
(default route screen2)
```
so, for eg, from Screen1 i call
`navigation.navigate('detailScreen2')`
using on detailScreen2
` headerLeft: <GoBack onPress={() => navigation.goBack(null)} />`
move me to Screen2 instead of screen1 .
In fact, goBack call stack2 (who call screen2 as default screen for the stack).
How to fix this? Because if i call `navigation.navigate('screen1')`, the screen stack will be still full, so when i will press on stack2 tab icon, the new screen will be detailScreen2 instead of screen2 (because of the old opened screen on stack2)
Update:
[snack](https://snack.expo.io/ryuU43bg7)
Easy example made from navigation playground.
On home Tab go to Profile page, after go to notification, then go back.
The back will move you to the true parent (settings screen) instead to the previous page (profile screen).
| True | navigation.goBack() to previous page is impossible if that page was on another stackNavigator - Update: added snack and example
### Your Environment
| software | version
| ---------------- | -------
| react-navigation | 2.0.x
| react-native | expo 27
| node | 9.11.1
| npm or yarn | yarn 1.7
```
tab:
stack1:
screen1,
detailScreen1
(default route screen1)
stack2:
screen2,
detailScreen2
(default route screen2)
```
so, for eg, from Screen1 i call
`navigation.navigate('detailScreen2')`
using on detailScreen2
` headerLeft: <GoBack onPress={() => navigation.goBack(null)} />`
move me to Screen2 instead of screen1 .
In fact, goBack call stack2 (who call screen2 as default screen for the stack).
How to fix this? Because if i call `navigation.navigate('screen1')`, the screen stack will be still full, so when i will press on stack2 tab icon, the new screen will be detailScreen2 instead of screen2 (because of the old opened screen on stack2)
Update:
[snack](https://snack.expo.io/ryuU43bg7)
Easy example made from navigation playground.
On home Tab go to Profile page, after go to notification, then go back.
The back will move you to the true parent (settings screen) instead to the previous page (profile screen).
| main | navigation goback to previous page is impossible if that page was on another stacknavigator update added snack and example your environment software version react navigation x react native expo node npm or yarn yarn tab default route default route so for eg from i call navigation navigate using on headerleft navigation goback null move me to instead of in fact goback call who call as default screen for the stack how to fix this because if i call navigation navigate the screen stack will be still full so when i will press on tab icon the new screen will be instead of because of the old opened screen on update easy example made from navigation playground on home tab go to profile page after go to notification then go back the back will move you to the true parent settings screen instead to the previous page profile screen | 1 |
3,734 | 5,938,415,977 | IssuesEvent | 2017-05-25 00:03:14 | BD2KGenomics/dcc-ops | https://api.github.com/repos/BD2KGenomics/dcc-ops | opened | swagger docs for redwood servers | service: redwood | Swagger docs for each of dcc-auth, dcc-metadata, and dcc-storage documenting the rest endpoints would be nice to have.
Some [bildung](http://www.baeldung.com/swagger-2-documentation-for-spring-rest-api) | 1.0 | swagger docs for redwood servers - Swagger docs for each of dcc-auth, dcc-metadata, and dcc-storage documenting the rest endpoints would be nice to have.
Some [bildung](http://www.baeldung.com/swagger-2-documentation-for-spring-rest-api) | non_main | swagger docs for redwood servers swagger docs for each of dcc auth dcc metadata and dcc storage documenting the rest endpoints would be nice to have some | 0 |
4,686 | 24,204,978,485 | IssuesEvent | 2022-09-25 04:38:24 | tgstation/tgstation | https://api.github.com/repos/tgstation/tgstation | opened | Spurious test failure: /mob/living/simple_animal/mouse was unable to be GC'd | Maintainability/Hinders improvements | ## Reproduction:
```
## REF SEARCH Beginning search for references to a /mob/living/simple_animal/mouse.
## REF SEARCH Finished searching globals
## REF SEARCH Finished searching native globals
## REF SEARCH Finished searching atoms
## REF SEARCH Found /mob/living/simple_animal/mouse [0x300011d] in list Datums -> /datum/spatial_grid_cell [0x21007b6d] -> hearing_contents (list).
## REF SEARCH Finished searching datums
## REF SEARCH Finished searching clients
## REF SEARCH Completed search for references to a /mob/living/simple_animal/mouse.
## TESTING: GC: -- [0x300011d] | /mob/living/simple_animal/mouse was unable to be GC'd --
Error: /mob/living/simple_animal/mouse hard deleted 1 times out of a total del count of 26
FAIL: /datum/unit_test/create_and_destroy 518.4s
REASON #1: /mob/living/simple_animal/mouse hard deleted 1 times out of a total del count of 26 at code/modules/unit_tests/create_and_destroy.dm:170
```
https://github.com/tgstation/tgstation/actions/runs/3120786625/jobs/5061636026
I'm assuming it's related to https://github.com/tgstation/tgstation/pull/70051 | True | Spurious test failure: /mob/living/simple_animal/mouse was unable to be GC'd - ## Reproduction:
```
## REF SEARCH Beginning search for references to a /mob/living/simple_animal/mouse.
## REF SEARCH Finished searching globals
## REF SEARCH Finished searching native globals
## REF SEARCH Finished searching atoms
## REF SEARCH Found /mob/living/simple_animal/mouse [0x300011d] in list Datums -> /datum/spatial_grid_cell [0x21007b6d] -> hearing_contents (list).
## REF SEARCH Finished searching datums
## REF SEARCH Finished searching clients
## REF SEARCH Completed search for references to a /mob/living/simple_animal/mouse.
## TESTING: GC: -- [0x300011d] | /mob/living/simple_animal/mouse was unable to be GC'd --
Error: /mob/living/simple_animal/mouse hard deleted 1 times out of a total del count of 26
FAIL: /datum/unit_test/create_and_destroy 518.4s
REASON #1: /mob/living/simple_animal/mouse hard deleted 1 times out of a total del count of 26 at code/modules/unit_tests/create_and_destroy.dm:170
```
https://github.com/tgstation/tgstation/actions/runs/3120786625/jobs/5061636026
I'm assuming it's related to https://github.com/tgstation/tgstation/pull/70051 | main | spurious test failure mob living simple animal mouse was unable to be gc d reproduction ref search beginning search for references to a mob living simple animal mouse ref search finished searching globals ref search finished searching native globals ref search finished searching atoms ref search found mob living simple animal mouse in list datums datum spatial grid cell hearing contents list ref search finished searching datums ref search finished searching clients ref search completed search for references to a mob living simple animal mouse testing gc mob living simple animal mouse was unable to be gc d error mob living simple animal mouse hard deleted times out of a total del count of fail datum unit test create and destroy reason mob living simple animal mouse hard deleted times out of a total del count of at code modules unit tests create and destroy dm i m assuming it s related to | 1 |
2,261 | 7,937,339,432 | IssuesEvent | 2018-07-09 12:35:16 | DynamoRIO/drmemory | https://api.github.com/repos/DynamoRIO/drmemory | opened | use clang-format for automated formatting | Maintainability Type-Feature | For DR we've gone to full clang-format: https://github.com/DynamoRIO/dynamorio/issues/2876
This issue covers doing the same for Dr. Memory. | True | use clang-format for automated formatting - For DR we've gone to full clang-format: https://github.com/DynamoRIO/dynamorio/issues/2876
This issue covers doing the same for Dr. Memory. | main | use clang format for automated formatting for dr we ve gone to full clang format this issue covers doing the same for dr memory | 1 |
447,071 | 31,621,744,136 | IssuesEvent | 2023-09-06 00:00:28 | unb-mds/2023-2-Squad09 | https://api.github.com/repos/unb-mds/2023-2-Squad09 | closed | Contato com OpenKnowledge | documentation Estudos | # Contato com OpenKnowledge
## Descrição do problema
Contato com os membros da OpenKnowledge para melhor entendimento do projeto Querido diário
## Como o problema pode ser resolvido
Neste contato estamos buscando saber: qual o tipo de tecnologia utilizada, quais cidades ainda não foram mapeadas e como podemos contribuir para o projeto
## Critérios de aceitação
- [ ] Arquivo com detalhes do que foi discutido (.md) | 1.0 | Contato com OpenKnowledge - # Contato com OpenKnowledge
## Descrição do problema
Contato com os membros da OpenKnowledge para melhor entendimento do projeto Querido diário
## Como o problema pode ser resolvido
Neste contato estamos buscando saber: qual o tipo de tecnologia utilizada, quais cidades ainda não foram mapeadas e como podemos contribuir para o projeto
## Critérios de aceitação
- [ ] Arquivo com detalhes do que foi discutido (.md) | non_main | contato com openknowledge contato com openknowledge descrição do problema contato com os membros da openknowledge para melhor entendimento do projeto querido diário como o problema pode ser resolvido neste contato estamos buscando saber qual o tipo de tecnologia utilizada quais cidades ainda não foram mapeadas e como podemos contribuir para o projeto critérios de aceitação arquivo com detalhes do que foi discutido md | 0 |
5,462 | 12,513,095,044 | IssuesEvent | 2020-06-03 00:51:33 | FusionAuth/fusionauth-issues | https://api.github.com/repos/FusionAuth/fusionauth-issues | closed | refresh token isnt returned after a user is registered. | architecture enhancement | ## (Put bug title here)
### Description
After we register a new user with fusionauth, only access_token is returned but not the refresh token, the login api provides the setting to return both the access and refresh token but the registration api doesnt have that setting to return refresh token.
### Steps to reproduce
Steps to reproduce the behavior:
1. Use user registration api to register a user
2. the response from the api is only the token, no refresh token returned
3. refresh token should we returned as well.
### Expected behavior
The api should return refresh token as well
### Platform
Using the latest fusionauth docker image
| 1.0 | refresh token isnt returned after a user is registered. - ## (Put bug title here)
### Description
After we register a new user with fusionauth, only access_token is returned but not the refresh token, the login api provides the setting to return both the access and refresh token but the registration api doesnt have that setting to return refresh token.
### Steps to reproduce
Steps to reproduce the behavior:
1. Use user registration api to register a user
2. the response from the api is only the token, no refresh token returned
3. refresh token should we returned as well.
### Expected behavior
The api should return refresh token as well
### Platform
Using the latest fusionauth docker image
| non_main | refresh token isnt returned after a user is registered put bug title here description after we register a new user with fusionauth only access token is returned but not the refresh token the login api provides the setting to return both the access and refresh token but the registration api doesnt have that setting to return refresh token steps to reproduce steps to reproduce the behavior use user registration api to register a user the response from the api is only the token no refresh token returned refresh token should we returned as well expected behavior the api should return refresh token as well platform using the latest fusionauth docker image | 0 |
2,746 | 9,789,277,974 | IssuesEvent | 2019-06-10 09:24:00 | diofant/diofant | https://api.github.com/repos/diofant/diofant | closed | sha-sums on pypi and release page must match | bug maintainability | For latest release
```
sk@note:~/tmp $ sha256sum on-pypi/*
8ccabf6bf643c298dd2c3730751de70d1b459a522fded09c33b1b1a4e2c93b03 on-pypi/Diofant-0.10.0-py3-none-any.whl
1ced513e42458042c02062eafca97ec1e5b847bfb78260b2cf1d33f16b054f62 on-pypi/Diofant-0.10.0.tar.gz
sk@note:~/tmp $ sha256sum on-gh/*
e364fda8dfcf4be9b33c0977dfe7be757a81c1411b479601ca7e027d047ac5d9 on-gh/Diofant-0.10.0-py3-none-any.whl
088a3897ff5704024ac3d7c0536e0718699e2795252f5b7f4bc37e18c7cb3405 on-gh/Diofant-0.10.0.tar.gz
```
We must fix deployment phase on Travis-CI to upload archives with identical sums.
| True | sha-sums on pypi and release page must match - For latest release
```
sk@note:~/tmp $ sha256sum on-pypi/*
8ccabf6bf643c298dd2c3730751de70d1b459a522fded09c33b1b1a4e2c93b03 on-pypi/Diofant-0.10.0-py3-none-any.whl
1ced513e42458042c02062eafca97ec1e5b847bfb78260b2cf1d33f16b054f62 on-pypi/Diofant-0.10.0.tar.gz
sk@note:~/tmp $ sha256sum on-gh/*
e364fda8dfcf4be9b33c0977dfe7be757a81c1411b479601ca7e027d047ac5d9 on-gh/Diofant-0.10.0-py3-none-any.whl
088a3897ff5704024ac3d7c0536e0718699e2795252f5b7f4bc37e18c7cb3405 on-gh/Diofant-0.10.0.tar.gz
```
We must fix deployment phase on Travis-CI to upload archives with identical sums.
| main | sha sums on pypi and release page must match for latest release sk note tmp on pypi on pypi diofant none any whl on pypi diofant tar gz sk note tmp on gh on gh diofant none any whl on gh diofant tar gz we must fix deployment phase on travis ci to upload archives with identical sums | 1 |
4,870 | 25,020,350,766 | IssuesEvent | 2022-11-03 23:32:07 | aws/serverless-application-model | https://api.github.com/repos/aws/serverless-application-model | closed | Tracing field of Serverless::Function does not support intrinsics | type/bug area/resource/function area/intrinsics maintainer/need-followup | <!--
Before reporting a new issue, make sure we don't have any duplicates already open or closed by
searching the issues list. If there is a duplicate, re-open or add a comment to the
existing issue instead of creating a new one. If you are reporting a bug,
make sure to include relevant information asked below to help with debugging.
## GENERAL HELP QUESTIONS ##
Github Issues is for bug reports and feature requests. If you have general support
questions, the following locations are a good place:
- Post a question in StackOverflow with "aws-sam" tag
-->
**Description:**
Tracing needs to support "Fn::If" and other intrinsic functions
<!-- Briefly describe the problem you are facing -->
**Steps to reproduce the issue:**
1. Create a template with "Tracing" set to an !If function
2. Deploy
**Observed result:**
Condition is ignored
**Expected result:**
Condition to work | True | Tracing field of Serverless::Function does not support intrinsics - <!--
Before reporting a new issue, make sure we don't have any duplicates already open or closed by
searching the issues list. If there is a duplicate, re-open or add a comment to the
existing issue instead of creating a new one. If you are reporting a bug,
make sure to include relevant information asked below to help with debugging.
## GENERAL HELP QUESTIONS ##
Github Issues is for bug reports and feature requests. If you have general support
questions, the following locations are a good place:
- Post a question in StackOverflow with "aws-sam" tag
-->
**Description:**
Tracing needs to support "Fn::If" and other intrinsic functions
<!-- Briefly describe the problem you are facing -->
**Steps to reproduce the issue:**
1. Create a template with "Tracing" set to an !If function
2. Deploy
**Observed result:**
Condition is ignored
**Expected result:**
Condition to work | main | tracing field of serverless function does not support intrinsics before reporting a new issue make sure we don t have any duplicates already open or closed by searching the issues list if there is a duplicate re open or add a comment to the existing issue instead of creating a new one if you are reporting a bug make sure to include relevant information asked below to help with debugging general help questions github issues is for bug reports and feature requests if you have general support questions the following locations are a good place post a question in stackoverflow with aws sam tag description tracing needs to support fn if and other intrinsic functions steps to reproduce the issue create a template with tracing set to an if function deploy observed result condition is ignored expected result condition to work | 1 |
80,987 | 7,762,781,648 | IssuesEvent | 2018-06-01 14:33:21 | lintol/lintol-frontend | https://api.github.com/repos/lintol/lintol-frontend | closed | Reports - filter by profile | 0.11 ready for test | The filter by profile dropdown list doesn't seem to have all the profile options. There should be 5 but only 2 show.

| 1.0 | Reports - filter by profile - The filter by profile dropdown list doesn't seem to have all the profile options. There should be 5 but only 2 show.

| non_main | reports filter by profile the filter by profile dropdown list doesn t seem to have all the profile options there should be but only show | 0 |
2,322 | 8,307,473,037 | IssuesEvent | 2018-09-23 09:44:17 | lansuite/lansuite | https://api.github.com/repos/lansuite/lansuite | closed | Check required, if account was already registered with a given email adress | bug pending-maintainer-response | I just had a case where a user created three accounts with the same username and email address.
It seems that uniqueness of neither username nor email address is currently enforced.
But this should be the case for at least one of them. Which one is up for discussion, as both have their advantages and disadvantages
## Expected Behaviour
Entered user data should be checked against existing users and uniqueness of either username or email address should enforced. Both on application and database level.
## Current Behaviour
Users are able to create additional accounts with the same username and email address.
Those get a new userid assigned. It has not been tested what account is picked for login when the correct credentials are entered
## Possible Solution
Entered user data should be checked against existing users.
Uniqueness of either username or email address should be enforced.
Preferably both on application and database level.
## Steps to Reproduce (for bugs)
Create a new user, use same data as for one of the existing ones
## Context
This is needed to reduce the amount of dead accounts in the system and make it easier for both guests and administrators to find the right user account for a given username / email address
## Your Environment
<!--- Include as many relevant details about the environment you experienced the problem in -->
* Version used: GIT:maluz/master
* Operating System and version: Win10
* Enabled features: | True | Check required, if account was already registered with a given email adress - I just had a case where a user created three accounts with the same username and email address.
It seems that uniqueness of neither username nor email address is currently enforced.
But this should be the case for at least one of them. Which one is up for discussion, as both have their advantages and disadvantages
## Expected Behaviour
Entered user data should be checked against existing users and uniqueness of either username or email address should enforced. Both on application and database level.
## Current Behaviour
Users are able to create additional accounts with the same username and email address.
Those get a new userid assigned. It has not been tested what account is picked for login when the correct credentials are entered
## Possible Solution
Entered user data should be checked against existing users.
Uniqueness of either username or email address should be enforced.
Preferably both on application and database level.
## Steps to Reproduce (for bugs)
Create a new user, use same data as for one of the existing ones
## Context
This is needed to reduce the amount of dead accounts in the system and make it easier for both guests and administrators to find the right user account for a given username / email address
## Your Environment
<!--- Include as many relevant details about the environment you experienced the problem in -->
* Version used: GIT:maluz/master
* Operating System and version: Win10
* Enabled features: | main | check required if account was already registered with a given email adress i just had a case where a user created three accounts with the same username and email address it seems that uniqueness of neither username nor email address is currently enforced but this should be the case for at least one of them which one is up for discussion as both have their advantages and disadvantages expected behaviour entered user data should be checked against existing users and uniqueness of either username or email address should enforced both on application and database level current behaviour users are able to create additional accounts with the same username and email address those get a new userid assigned it has not been tested what account is picked for login when the correct credentials are entered possible solution entered user data should be checked against existing users uniqueness of either username or email address should be enforced preferably both on application and database level steps to reproduce for bugs create a new user use same data as for one of the existing ones context this is needed to reduce the amount of dead accounts in the system and make it easier for both guests and administrators to find the right user account for a given username email address your environment version used git maluz master operating system and version enabled features | 1 |
809,628 | 30,202,465,300 | IssuesEvent | 2023-07-05 07:08:05 | googleapis/google-cloud-go | https://api.github.com/repos/googleapis/google-cloud-go | reopened | storage: TestRetryConformance failed | type: bug api: storage priority: p1 flakybot: issue flakybot: flaky | Note: #7968 was also for this test, but it was closed more than 10 days ago. So, I didn't mark it flaky.
----
commit: 5cdf4e2668ef73b6b38c3d8c89073863e7f3dc77
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/ce31ef57-1946-443c-a18d-42c926218e96), [Sponge](http://sponge2/ce31ef57-1946-443c-a18d-42c926218e96)
status: failed
<details><summary>Test output</summary><br><pre> retry_conformance_test.go:753: roundtrip error (may be expected): write tcp 127.0.0.1:40802->127.0.0.1:9000: write: broken pipe
request: POST /upload/storage/v1/b/bucket-20230629-27053015565025-0060/o?alt=json&ifGenerationMatch=0&name=new-object.txt&prettyPrint=false&projection=full&uploadType=multipart HTTP/1.1
Host: localhost:9000
Content-Type: multipart/related; boundary=1739ef15cdc290d3e85cbd78ec609dd43fa37280611573acabf8cc11bdde
User-Agent: google-api-go-client/0.5
X-Goog-Api-Client: gccl-invocation-id/e6a40c82-0049-4c15-9026-ce9f0c87e26c gccl-attempt-count/1 gl-go/1.20.4 gccl/1.31.0
X-Goog-Gcs-Idempotency-Token: e6a40c82-0049-4c15-9026-ce9f0c87e26c
X-Retry-Test-Id: 491f3b670e3746beb3dc3072b7c4d104
retry_conformance_test.go:539: want success, got Writer.Close: Post "http://localhost:9000/upload/storage/v1/b/bucket-20230629-27053015565025-0060/o?alt=json&ifGenerationMatch=0&name=new-object.txt&prettyPrint=false&projection=full&uploadType=multipart": write tcp 127.0.0.1:40802->127.0.0.1:9000: write: broken pipe
retry_conformance_test.go:718: test not completed; unused instructions: map[storage.objects.insert:[return-reset-connection]]</pre></details> | 1.0 | storage: TestRetryConformance failed - Note: #7968 was also for this test, but it was closed more than 10 days ago. So, I didn't mark it flaky.
----
commit: 5cdf4e2668ef73b6b38c3d8c89073863e7f3dc77
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/ce31ef57-1946-443c-a18d-42c926218e96), [Sponge](http://sponge2/ce31ef57-1946-443c-a18d-42c926218e96)
status: failed
<details><summary>Test output</summary><br><pre> retry_conformance_test.go:753: roundtrip error (may be expected): write tcp 127.0.0.1:40802->127.0.0.1:9000: write: broken pipe
request: POST /upload/storage/v1/b/bucket-20230629-27053015565025-0060/o?alt=json&ifGenerationMatch=0&name=new-object.txt&prettyPrint=false&projection=full&uploadType=multipart HTTP/1.1
Host: localhost:9000
Content-Type: multipart/related; boundary=1739ef15cdc290d3e85cbd78ec609dd43fa37280611573acabf8cc11bdde
User-Agent: google-api-go-client/0.5
X-Goog-Api-Client: gccl-invocation-id/e6a40c82-0049-4c15-9026-ce9f0c87e26c gccl-attempt-count/1 gl-go/1.20.4 gccl/1.31.0
X-Goog-Gcs-Idempotency-Token: e6a40c82-0049-4c15-9026-ce9f0c87e26c
X-Retry-Test-Id: 491f3b670e3746beb3dc3072b7c4d104
retry_conformance_test.go:539: want success, got Writer.Close: Post "http://localhost:9000/upload/storage/v1/b/bucket-20230629-27053015565025-0060/o?alt=json&ifGenerationMatch=0&name=new-object.txt&prettyPrint=false&projection=full&uploadType=multipart": write tcp 127.0.0.1:40802->127.0.0.1:9000: write: broken pipe
retry_conformance_test.go:718: test not completed; unused instructions: map[storage.objects.insert:[return-reset-connection]]</pre></details> | non_main | storage testretryconformance failed note was also for this test but it was closed more than days ago so i didn t mark it flaky commit buildurl status failed test output retry conformance test go roundtrip error may be expected write tcp write broken pipe request post upload storage b bucket o alt json ifgenerationmatch name new object txt prettyprint false projection full uploadtype multipart http host localhost content type multipart related boundary user agent google api go client x goog api client gccl invocation id gccl attempt count gl go gccl x goog gcs idempotency token x retry test id retry conformance test go want success got writer close post write tcp write broken pipe retry conformance test go test not completed unused instructions map | 0 |
5,804 | 30,750,364,420 | IssuesEvent | 2023-07-28 18:38:00 | cosmos/ibc-rs | https://api.github.com/repos/cosmos/ibc-rs | closed | [ICS24] Enhancements and fixes for `ChainId` handling and validation | A: bug A: breaking O: maintainability | <!-- < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < ☺
v ✰ Thanks for opening an issue! ✰
v Before smashing the submit button please review the template.
v Please also ensure that this is not a duplicate issue :)
☺ > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -->
### Problem Statement
Upon reviewing our existing design around `ChainId` the following issues/suggestions were brought up, some of which were raised thanks to @mina86 in #698, #721, and #725.
- Utilize existing [identifier validator functions](https://github.com/cosmos/ibc-rs/blob/ee16dc2da38808e3c68c9bdf9fef42f69b3b627d/crates/ibc/src/core/ics24_host/identifier/validate.rs#L1-L79) for both character and length checks, ofc with some customization, given that:
- **(bug)**`ChainId`s like ` -1` and `comsos hub-1` (with whitespace) are considered valid!
- **(bug)** The [ChainId's length check](https://github.com/cosmos/ibc-rs/blob/fa22b0c3087e737c4041cf4afb6ead51ae19e977/crates/ibc/src/clients/ics07_tendermint/client_state.rs#L150-L156) for the new Tendermint client creation is incorrect. It shouldn't allow lengths greater than `MaxChainIdLen - 20` (considering `u64::MAX` length), as it can lead to an overflow.
- Redundant `is_epoch_format` call by `from_string` in `ChainId` creating
- Use `revision_number` instead of `version` to get naming consistent with other places (Looking at `Height` and `HeightTimeout`) and taking [`ibc-go`](https://github.com/cosmos/ibc-go/blob/39adb01a33009ecd95e4ad22379640717832585b/modules/core/02-client/types/height.go#L153-L184) as the reference implementation.
- Unclear why invalid identifiers create a default `ChainId` with `version = 0`. This should be documented or completely disallow creating `ChainId` with strings not in format. This way conversions from a string can fail, and the error of `FromStr` can't be infallible.
- Remove [default implementation](https://github.com/cosmos/ibc-rs/blob/98099399137b544e66d50af45ecfc8c8e508d0b2/crates/ibc/src/core/ics24_host/identifier.rs#L164-L168) for `ChainId`. We can’t assume anything about that.
### Highlight
Taking [ICS-24](https://github.com/cosmos/ibc/tree/main/spec/core/ics-024-host-requirements#paths-identifiers-separators) as our reference for identifier validation (As it is so for other identifiers as well), basically, we should not enforce stricter checks on the identifier's format, and leave it up to users if needed. Even though we may allow creating odd `ChainId` like `chainA-a-1` or `--chainA-1`,
### Some related issues in `ibc-go`
- https://github.com/cosmos/cosmos-sdk/pull/7280
- https://github.com/cosmos/ibc-go/issues/686
| True | [ICS24] Enhancements and fixes for `ChainId` handling and validation - <!-- < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < ☺
v ✰ Thanks for opening an issue! ✰
v Before smashing the submit button please review the template.
v Please also ensure that this is not a duplicate issue :)
☺ > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -->
### Problem Statement
Upon reviewing our existing design around `ChainId` the following issues/suggestions were brought up, some of which were raised thanks to @mina86 in #698, #721, and #725.
- Utilize existing [identifier validator functions](https://github.com/cosmos/ibc-rs/blob/ee16dc2da38808e3c68c9bdf9fef42f69b3b627d/crates/ibc/src/core/ics24_host/identifier/validate.rs#L1-L79) for both character and length checks, ofc with some customization, given that:
- **(bug)**`ChainId`s like ` -1` and `comsos hub-1` (with whitespace) are considered valid!
- **(bug)** The [ChainId's length check](https://github.com/cosmos/ibc-rs/blob/fa22b0c3087e737c4041cf4afb6ead51ae19e977/crates/ibc/src/clients/ics07_tendermint/client_state.rs#L150-L156) for the new Tendermint client creation is incorrect. It shouldn't allow lengths greater than `MaxChainIdLen - 20` (considering `u64::MAX` length), as it can lead to an overflow.
- Redundant `is_epoch_format` call by `from_string` in `ChainId` creating
- Use `revision_number` instead of `version` to get naming consistent with other places (Looking at `Height` and `HeightTimeout`) and taking [`ibc-go`](https://github.com/cosmos/ibc-go/blob/39adb01a33009ecd95e4ad22379640717832585b/modules/core/02-client/types/height.go#L153-L184) as the reference implementation.
- Unclear why invalid identifiers create a default `ChainId` with `version = 0`. This should be documented or completely disallow creating `ChainId` with strings not in format. This way conversions from a string can fail, and the error of `FromStr` can't be infallible.
- Remove [default implementation](https://github.com/cosmos/ibc-rs/blob/98099399137b544e66d50af45ecfc8c8e508d0b2/crates/ibc/src/core/ics24_host/identifier.rs#L164-L168) for `ChainId`. We can’t assume anything about that.
### Highlight
Taking [ICS-24](https://github.com/cosmos/ibc/tree/main/spec/core/ics-024-host-requirements#paths-identifiers-separators) as our reference for identifier validation (As it is so for other identifiers as well), basically, we should not enforce stricter checks on the identifier's format, and leave it up to users if needed. Even though we may allow creating odd `ChainId` like `chainA-a-1` or `--chainA-1`,
### Some related issues in `ibc-go`
- https://github.com/cosmos/cosmos-sdk/pull/7280
- https://github.com/cosmos/ibc-go/issues/686
| main | enhancements and fixes for chainid handling and validation ☺ v ✰ thanks for opening an issue ✰ v before smashing the submit button please review the template v please also ensure that this is not a duplicate issue ☺ problem statement upon reviewing our existing design around chainid the following issues suggestions were brought up some of which were raised thanks to in and utilize existing for both character and length checks ofc with some customization given that bug chainid s like and comsos hub with whitespace are considered valid bug the for the new tendermint client creation is incorrect it shouldn t allow lengths greater than maxchainidlen considering max length as it can lead to an overflow redundant is epoch format call by from string in chainid creating use revision number instead of version to get naming consistent with other places looking at height and heighttimeout and taking as the reference implementation unclear why invalid identifiers create a default chainid with version this should be documented or completely disallow creating chainid with strings not in format this way conversions from a string can fail and the error of fromstr can t be infallible remove for chainid we can’t assume anything about that highlight taking as our reference for identifier validation as it is so for other identifiers as well basically we should not enforce stricter checks on the identifier s format and leave it up to users if needed even though we may allow creating odd chainid like chaina a or chaina some related issues in ibc go | 1 |
783,328 | 27,526,537,334 | IssuesEvent | 2023-03-06 18:28:52 | inverse-inc/packetfence | https://api.github.com/repos/inverse-inc/packetfence | closed | PF 12.1 ZEN: Generate new cert on reboot | Type: Bug Priority: High | Issue on the rc.local where it re-generate the cert upon reboot. | 1.0 | PF 12.1 ZEN: Generate new cert on reboot - Issue on the rc.local where it re-generate the cert upon reboot. | non_main | pf zen generate new cert on reboot issue on the rc local where it re generate the cert upon reboot | 0 |
124 | 3,398,042,355 | IssuesEvent | 2015-12-02 00:49:13 | gillesdegottex/dfasma | https://api.github.com/repos/gillesdegottex/dfasma | closed | qmake5 fails on Centos6 | Portability/Distribution | Hello Gilles,
on our centos 6 work stations I get the following error when running qmake-qt5
503> qmake-qt5 -makefile -cache cache.tmp "INCLUDEPATH+=/u/formes/share/include" "LIBS+=-L/u/formes/share/lib/x86_64-Linux-rh65" dfasma.pro
Project MESSAGE: CONFIG=lex yacc debug exceptions depend_includepath testcase_targets import_plugins import_qpa_plugin qt warn_on release link_prl incremental shared qpa no_mocdepend release qt_no_framework linux unix posix gcc fft_fftw3 file_audio_libsndfile file_sdif precision_double
Project MESSAGE: Git: Version: v1.3.4
Project MESSAGE: Git: Branch: HEAD
Project MESSAGE: PREFIX=/usr/local
Project MESSAGE: PREFIXSHORTCUT=/usr/local
Project MESSAGE: For Linux
Project MESSAGE: Using GCC compiler
Project MESSAGE: For 64bits
Project MESSAGE: With double precision
Project MESSAGE: Audio file reader: libsndfile
Project MESSAGE: FFT Implementation: FFTW3
Project MESSAGE: Files: SDIF support: YES
Project ERROR: Unknown module(s) in QT: multimedia
we have libQTMultimedia available for Qt5 - in fact I had compîled dfasma 1.2 on the very same machines a few weeks ago. Any idea what could go wrong here?
I have checked out tag 1.3.4
Thanks
Axel | True | qmake5 fails on Centos6 - Hello Gilles,
on our centos 6 work stations I get the following error when running qmake-qt5
503> qmake-qt5 -makefile -cache cache.tmp "INCLUDEPATH+=/u/formes/share/include" "LIBS+=-L/u/formes/share/lib/x86_64-Linux-rh65" dfasma.pro
Project MESSAGE: CONFIG=lex yacc debug exceptions depend_includepath testcase_targets import_plugins import_qpa_plugin qt warn_on release link_prl incremental shared qpa no_mocdepend release qt_no_framework linux unix posix gcc fft_fftw3 file_audio_libsndfile file_sdif precision_double
Project MESSAGE: Git: Version: v1.3.4
Project MESSAGE: Git: Branch: HEAD
Project MESSAGE: PREFIX=/usr/local
Project MESSAGE: PREFIXSHORTCUT=/usr/local
Project MESSAGE: For Linux
Project MESSAGE: Using GCC compiler
Project MESSAGE: For 64bits
Project MESSAGE: With double precision
Project MESSAGE: Audio file reader: libsndfile
Project MESSAGE: FFT Implementation: FFTW3
Project MESSAGE: Files: SDIF support: YES
Project ERROR: Unknown module(s) in QT: multimedia
we have libQTMultimedia available for Qt5 - in fact I had compîled dfasma 1.2 on the very same machines a few weeks ago. Any idea what could go wrong here?
I have checked out tag 1.3.4
Thanks
Axel | non_main | fails on hello gilles on our centos work stations i get the following error when running qmake qmake makefile cache cache tmp includepath u formes share include libs l u formes share lib linux dfasma pro project message config lex yacc debug exceptions depend includepath testcase targets import plugins import qpa plugin qt warn on release link prl incremental shared qpa no mocdepend release qt no framework linux unix posix gcc fft file audio libsndfile file sdif precision double project message git version project message git branch head project message prefix usr local project message prefixshortcut usr local project message for linux project message using gcc compiler project message for project message with double precision project message audio file reader libsndfile project message fft implementation project message files sdif support yes project error unknown module s in qt multimedia we have libqtmultimedia available for in fact i had compîled dfasma on the very same machines a few weeks ago any idea what could go wrong here i have checked out tag thanks axel | 0 |
476,618 | 13,747,793,025 | IssuesEvent | 2020-10-06 08:09:40 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | opened | [0.9.1 develop-76] 'Submit' button in password window doesn't respond to 'Enter' | Category: UI Priority: Low | Now you can close it by ESC, but enter doesn't work. | 1.0 | [0.9.1 develop-76] 'Submit' button in password window doesn't respond to 'Enter' - Now you can close it by ESC, but enter doesn't work. | non_main | submit button in password window doesn t respond to enter now you can close it by esc but enter doesn t work | 0 |
211,425 | 16,240,362,489 | IssuesEvent | 2021-05-07 08:47:47 | mantidproject/mantid | https://api.github.com/repos/mantidproject/mantid | closed | Manual Testing Workbench Group 1 | ISIS Team: Core Manual Tests |
You have been assigned manual testing. The hope is to catch as many problems with the code before release, so it would be great if you can take some time to give a serious test to your assigned area. Thank you!!
The general guide to manual testing:
* The tests must be performed on the installer versions of the final release candidate. Not on local compiled code.
* Serious errors involving loss of functionality, crashes etc. should be raised
as issues with the current release as a milestone and an email sent to the project manager immediately.
* Minor and cosmetic issues should be raised as issues against the forthcoming
releases.
* First try things that should work, then try to break Mantid, e.g. entering invalid values, unexpected characters etc.
* Don't spend more than a few hours on the testing as fatigue will kick in.
* If you find errors in the documentation, please correct them.
* Comment against this ticket the OS environment you are testing against.
* Close the this issue once you are done.
### Specific Notes:
http://developer.mantidproject.org/Testing/Core/Core.html | 1.0 | Manual Testing Workbench Group 1 -
You have been assigned manual testing. The hope is to catch as many problems with the code before release, so it would be great if you can take some time to give a serious test to your assigned area. Thank you!!
The general guide to manual testing:
* The tests must be performed on the installer versions of the final release candidate. Not on local compiled code.
* Serious errors involving loss of functionality, crashes etc. should be raised
as issues with the current release as a milestone and an email sent to the project manager immediately.
* Minor and cosmetic issues should be raised as issues against the forthcoming
releases.
* First try things that should work, then try to break Mantid, e.g. entering invalid values, unexpected characters etc.
* Don't spend more than a few hours on the testing as fatigue will kick in.
* If you find errors in the documentation, please correct them.
* Comment against this ticket the OS environment you are testing against.
* Close the this issue once you are done.
### Specific Notes:
http://developer.mantidproject.org/Testing/Core/Core.html | non_main | manual testing workbench group you have been assigned manual testing the hope is to catch as many problems with the code before release so it would be great if you can take some time to give a serious test to your assigned area thank you the general guide to manual testing the tests must be performed on the installer versions of the final release candidate not on local compiled code serious errors involving loss of functionality crashes etc should be raised as issues with the current release as a milestone and an email sent to the project manager immediately minor and cosmetic issues should be raised as issues against the forthcoming releases first try things that should work then try to break mantid e g entering invalid values unexpected characters etc don t spend more than a few hours on the testing as fatigue will kick in if you find errors in the documentation please correct them comment against this ticket the os environment you are testing against close the this issue once you are done specific notes | 0 |
178,430 | 13,779,453,493 | IssuesEvent | 2020-10-08 13:47:04 | dotnet/roslyn | https://api.github.com/repos/dotnet/roslyn | opened | Visual Studio version 16.8 Preview 3.0 does not restore SDK projects | Area-IDE Disabled Test IDE-Project Integration-Test Regression Urgency-Now | **Version Used**: 16.8 Preview 3.0
**Steps to Reproduce**:
Run integration tests linked to this bug in CI.
**Expected Behavior**:
Tests pass.
**Actual Behavior**:
Tests fail due to failure to restore SDK references. | 2.0 | Visual Studio version 16.8 Preview 3.0 does not restore SDK projects - **Version Used**: 16.8 Preview 3.0
**Steps to Reproduce**:
Run integration tests linked to this bug in CI.
**Expected Behavior**:
Tests pass.
**Actual Behavior**:
Tests fail due to failure to restore SDK references. | non_main | visual studio version preview does not restore sdk projects version used preview steps to reproduce run integration tests linked to this bug in ci expected behavior tests pass actual behavior tests fail due to failure to restore sdk references | 0 |
1,796 | 6,575,902,631 | IssuesEvent | 2017-09-11 17:46:20 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | ec2_ami_find fails due missing 'creationDate' when run against a Helion Eucalyptus EC2 cloud | affects_2.1 aws bug_report cloud waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
ec2_ami_find
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.1.1.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
in ./ansible.cfg:
```
[defaults]
jinja2_extensions = jinja2.ext.with_
```
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
Ubuntu 16.04
##### SUMMARY
<!--- Explain the problem briefly -->
The `ec2_ami_find` module fails when run against a Helion Eucalyptus EC2 cloud with error on missing 'creationDate'
```
Traceback (most recent call last):
File "/tmp/ansible_2KmiX9/ansible_module_ec2_ami_find.py", line 423, in <module>
main()
File "/tmp/ansible_2KmiX9/ansible_module_ec2_ami_find.py", line 377, in main
'creationDate': image.creationDate,
AttributeError: 'Image' object has no attribute 'creationDate'
```
Reason is that the Helion Eucalyptus does not report a "creationDate" for images. I verified this by observing the HTTP response from Eucalyptus. Also in the command `euca-describe-images` there is no mentioning of a creation Date, see [latest docs](http://docs.hpcloud.com/eucalyptus/4.3.0/#euca2ools-guide/euca-describe-images.html).
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
Just run `ec2_ami_find` against a Eucalyptus EC2 endpoint.
<!--- Paste example playbooks or commands between quotes below -->
```
- hosts: all
connection: local
gather_facts: false
tasks:
- name: Find images
ec2_ami_find:
ec2_url: '{{ ec2_url }}' # use Eucalyptus endpoint
aws_access_key: '{{ aws_access_key }}'
aws_secret_key: '{{ aws_secret_key }}'
name: some-name*
register: images
- debug:
msg: '{{ images }}'
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
Successful run of `ec2_ami_find`, printed list of images
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
Traceback of failure.
<!--- Paste verbatim command output between quotes below -->
```
Traceback (most recent call last):
File "/tmp/ansible_2KmiX9/ansible_module_ec2_ami_find.py", line 423, in <module>
main()
File "/tmp/ansible_2KmiX9/ansible_module_ec2_ami_find.py", line 377, in main
'creationDate': image.creationDate,
AttributeError: 'Image' object has no attribute 'creationDate'
```
| True | ec2_ami_find fails due missing 'creationDate' when run against a Helion Eucalyptus EC2 cloud - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
ec2_ami_find
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.1.1.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
in ./ansible.cfg:
```
[defaults]
jinja2_extensions = jinja2.ext.with_
```
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
Ubuntu 16.04
##### SUMMARY
<!--- Explain the problem briefly -->
The `ec2_ami_find` module fails when run against a Helion Eucalyptus EC2 cloud with error on missing 'creationDate'
```
Traceback (most recent call last):
File "/tmp/ansible_2KmiX9/ansible_module_ec2_ami_find.py", line 423, in <module>
main()
File "/tmp/ansible_2KmiX9/ansible_module_ec2_ami_find.py", line 377, in main
'creationDate': image.creationDate,
AttributeError: 'Image' object has no attribute 'creationDate'
```
Reason is that the Helion Eucalyptus does not report a "creationDate" for images. I verified this by observing the HTTP response from Eucalyptus. Also in the command `euca-describe-images` there is no mentioning of a creation Date, see [latest docs](http://docs.hpcloud.com/eucalyptus/4.3.0/#euca2ools-guide/euca-describe-images.html).
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
Just run `ec2_ami_find` against a Eucalyptus EC2 endpoint.
<!--- Paste example playbooks or commands between quotes below -->
```
- hosts: all
connection: local
gather_facts: false
tasks:
- name: Find images
ec2_ami_find:
ec2_url: '{{ ec2_url }}' # use Eucalyptus endpoint
aws_access_key: '{{ aws_access_key }}'
aws_secret_key: '{{ aws_secret_key }}'
name: some-name*
register: images
- debug:
msg: '{{ images }}'
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
Successful run of `ec2_ami_find`, printed list of images
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
Traceback of failure.
<!--- Paste verbatim command output between quotes below -->
```
Traceback (most recent call last):
File "/tmp/ansible_2KmiX9/ansible_module_ec2_ami_find.py", line 423, in <module>
main()
File "/tmp/ansible_2KmiX9/ansible_module_ec2_ami_find.py", line 377, in main
'creationDate': image.creationDate,
AttributeError: 'Image' object has no attribute 'creationDate'
```
| main | ami find fails due missing creationdate when run against a helion eucalyptus cloud issue type bug report component name ami find ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables in ansible cfg extensions ext with os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific ubuntu summary the ami find module fails when run against a helion eucalyptus cloud with error on missing creationdate traceback most recent call last file tmp ansible ansible module ami find py line in main file tmp ansible ansible module ami find py line in main creationdate image creationdate attributeerror image object has no attribute creationdate reason is that the helion eucalyptus does not report a creationdate for images i verified this by observing the http response from eucalyptus also in the command euca describe images there is no mentioning of a creation date see steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used just run ami find against a eucalyptus endpoint hosts all connection local gather facts false tasks name find images ami find url url use eucalyptus endpoint aws access key aws access key aws secret key aws secret key name some name register images debug msg images expected results successful run of ami find printed list of images actual results traceback of failure traceback most recent call last file tmp ansible ansible module ami find py line in main file tmp ansible ansible module ami find py line in main creationdate image creationdate attributeerror image object has no attribute creationdate | 1 |
5,836 | 30,926,967,167 | IssuesEvent | 2023-08-06 15:51:43 | Windham-High-School/CubeServer | https://api.github.com/repos/Windham-High-School/CubeServer | closed | Replace Team.customlink with something more Flask-y | enhancement maintainability | This code is bad:
```python
@property
def custom_link(self) -> str: # TODO: Make better
return f"http://whsproject.club/team/success?team_secret={self.secret}&team_name={quote_plus(self.name)}"
```
... because it assumes terrible things about the url format being permanent | True | Replace Team.customlink with something more Flask-y - This code is bad:
```python
@property
def custom_link(self) -> str: # TODO: Make better
return f"http://whsproject.club/team/success?team_secret={self.secret}&team_name={quote_plus(self.name)}"
```
... because it assumes terrible things about the url format being permanent | main | replace team customlink with something more flask y this code is bad python property def custom link self str todo make better return f because it assumes terrible things about the url format being permanent | 1 |
788,818 | 27,768,355,115 | IssuesEvent | 2023-03-16 12:59:42 | npm-officialk/depensort | https://api.github.com/repos/npm-officialk/depensort | closed | Feature: Add cli options for user to customize sorting with key or value | issue: feature ⬆️ priority: normal 🟡 status: backlog 📚 Stale | ### 📄 Summary of the suggestion/request
Using this CLI option the user can customize the sorting to consider keys or values in the package file
### 📚 Please describe the suggestion/request in detail
Here's what we could do,
add a new CLI option to enable the user to customize the sorting using keys or values length which lets the user also sort as per the version of the dependency,
which helps us achieve a better user experience as the user will have more control over the sorting order
### 🧪 Proposed solution
- [ ] add CLI option `--use | -u` that accepts the values `"keys"/"values"`
- [ ] add a getVersion function which gets the version number from the version string
- [ ] add a parameter `use` to the sorter function that will determine how the object will be sorted
### 🤔 Alternatives considered/tried
_No response_
### ⚒️ Will you be helping to create the feature?
None
### 😊 Any other notes you want to add
_No response_ | 1.0 | Feature: Add cli options for user to customize sorting with key or value - ### 📄 Summary of the suggestion/request
Using this CLI option the user can customize the sorting to consider keys or values in the package file
### 📚 Please describe the suggestion/request in detail
Here's what we could do,
add a new CLI option to enable the user to customize the sorting using keys or values length which lets the user also sort as per the version of the dependency,
which helps us achieve a better user experience as the user will have more control over the sorting order
### 🧪 Proposed solution
- [ ] add CLI option `--use | -u` that accepts the values `"keys"/"values"`
- [ ] add a getVersion function which gets the version number from the version string
- [ ] add a parameter `use` to the sorter function that will determine how the object will be sorted
### 🤔 Alternatives considered/tried
_No response_
### ⚒️ Will you be helping to create the feature?
None
### 😊 Any other notes you want to add
_No response_ | non_main | feature add cli options for user to customize sorting with key or value 📄 summary of the suggestion request using this cli option the user can customize the sorting to consider keys or values in the package file 📚 please describe the suggestion request in detail here s what we could do add a new cli option to enable the user to customize the sorting using keys or values length which lets the user also sort as per the version of the dependency which helps us achieve a better user experience as the user will have more control over the sorting order 🧪 proposed solution add cli option use u that accepts the values keys values add a getversion function which gets the version number from the version string add a parameter use to the sorter function that will determine how the object will be sorted 🤔 alternatives considered tried no response ⚒️ will you be helping to create the feature none 😊 any other notes you want to add no response | 0 |
168,149 | 26,606,049,928 | IssuesEvent | 2023-01-23 19:25:55 | Agoric/agoric-sdk | https://api.github.com/repos/Agoric/agoric-sdk | closed | refine bankManager / VBANK to provide purses on demand | enhancement wallet performance needs-design | ## What is the Problem Being Solved?
#5397 adds a permissionless (though expensive) way to call `E(bankManager).addAsset()`. Currently, that results in a new purse in every user's wallet.
The purses should only be created on demand.
## Description of the Design
_TBD_. @michaelfig and I discussed a few options... nothing detailed yet.
## Security Considerations
_?_
## Test Plan
_TBD_
| 1.0 | refine bankManager / VBANK to provide purses on demand - ## What is the Problem Being Solved?
#5397 adds a permissionless (though expensive) way to call `E(bankManager).addAsset()`. Currently, that results in a new purse in every user's wallet.
The purses should only be created on demand.
## Description of the Design
_TBD_. @michaelfig and I discussed a few options... nothing detailed yet.
## Security Considerations
_?_
## Test Plan
_TBD_
| non_main | refine bankmanager vbank to provide purses on demand what is the problem being solved adds a permissionless though expensive way to call e bankmanager addasset currently that results in a new purse in every user s wallet the purses should only be created on demand description of the design tbd michaelfig and i discussed a few options nothing detailed yet security considerations test plan tbd | 0 |
4,420 | 5,051,955,193 | IssuesEvent | 2016-12-20 23:46:37 | dotnet/roslyn | https://api.github.com/repos/dotnet/roslyn | closed | Insertion tool no longer producing "NO CHANGES" messages if there are no changes | Area-Infrastructure | This support was added in dotnet/roslyn-internal@46b00bfe2fd33daa0c2c9f9faab94dfa4ddbc747 to do "NO CHANGES" instead of an error if there are no changes to insert. It looks like that was recently broken, or isn't working in reasonable scenarios.
| 1.0 | Insertion tool no longer producing "NO CHANGES" messages if there are no changes - This support was added in dotnet/roslyn-internal@46b00bfe2fd33daa0c2c9f9faab94dfa4ddbc747 to do "NO CHANGES" instead of an error if there are no changes to insert. It looks like that was recently broken, or isn't working in reasonable scenarios.
| non_main | insertion tool no longer producing no changes messages if there are no changes this support was added in dotnet roslyn internal to do no changes instead of an error if there are no changes to insert it looks like that was recently broken or isn t working in reasonable scenarios | 0 |
1,153 | 5,029,405,418 | IssuesEvent | 2016-12-15 21:05:59 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | files module copy: error copying files into fuse filesystem | affects_2.2 bug_report easyfix waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
copy
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.2.0.0
config file = [...]/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
```
[privilege_escalation]
become=true
[defaults]
host_key_checking=False
```
##### OS / ENVIRONMENT
Host OS:
* macOS Sierra 10.12.1
* Python 2.7.12
Managed OS:
* Debian Jessie with Proxmox 4.3 installed
* Python 2.7.9
##### SUMMARY
I want to copy the firewall configuration file of Proxmox to the managed server.
But instead of copying, I get an error.
I was able to isolate the issue to copying the file into a FUSE filesystem.
##### STEPS TO REPRODUCE
Module invokation:
```
copy:
src: 'cluster.fw'
dest: '/etc/pve/firewall/cluster.fw'
```
Unfortunately my *nix expertise is not sufficient to give you any hints of how to configure a fuse filesystem for yourself, but after installing proxmox, the folder /etc/pve is actually a mount point to a fuse filesystem.
But I think this problem occurs with every fuse filesystem?
##### EXPECTED RESULTS
The file should be copied without an error or at least the error message should be more descriptive.
##### ACTUAL RESULTS
The error message is as follows (I inserted some line breaks into "module_stdout" for better readability):
```
fatal: [my_host]: FAILED! => {
"changed": false,
"checksum": "783551ae407f9d3396749507d505c2c22f8fc09f",
"failed": true,
"invocation": {
"module_args": {
"dest": "/etc/pve/firewall/cluster.fw",
"src": "cluster.fw"
},
"module_name": "copy"
},
"module_stderr": "",
"module_stdout": "Traceback (most recent call last):\r\n
File \"/tmp/ansible_7ulKiR/ansible_module_copy.py\", line 364, in <module>\r\n
main()\r\n File \"/tmp/ansible_7ulKiR/ansible_module_copy.py\", line 343, in main\r\n
module.atomic_move(b_mysrc, dest, unsafe_writes=module.params['unsafe_writes'])\r\n
File \"/tmp/ansible_7ulKiR/ansible_modlib.zip/ansible/module_utils/basic.py\", line 2003, in
atomic_move\r\nNameError: global name 'exception' is not defined\r\n",
"msg": "MODULE FAILURE"
}
```
If I copy the file to another folder (e.g. /etc or /tmp) the error doesn't occur.
Interestingly enough, if I copy the file to /tmp and then invoke the copy module as follows, the error occurs neither:
```
copy:
src: '/tmp/cluster.fw'
remote_src: yes
dest: '/etc/pve/firewall/cluster.fw'
```
| True | files module copy: error copying files into fuse filesystem - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
copy
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.2.0.0
config file = [...]/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
```
[privilege_escalation]
become=true
[defaults]
host_key_checking=False
```
##### OS / ENVIRONMENT
Host OS:
* macOS Sierra 10.12.1
* Python 2.7.12
Managed OS:
* Debian Jessie with Proxmox 4.3 installed
* Python 2.7.9
##### SUMMARY
I want to copy the firewall configuration file of Proxmox to the managed server.
But instead of copying, I get an error.
I was able to isolate the issue to copying the file into a FUSE filesystem.
##### STEPS TO REPRODUCE
Module invokation:
```
copy:
src: 'cluster.fw'
dest: '/etc/pve/firewall/cluster.fw'
```
Unfortunately my *nix expertise is not sufficient to give you any hints of how to configure a fuse filesystem for yourself, but after installing proxmox, the folder /etc/pve is actually a mount point to a fuse filesystem.
But I think this problem occurs with every fuse filesystem?
##### EXPECTED RESULTS
The file should be copied without an error or at least the error message should be more descriptive.
##### ACTUAL RESULTS
The error message is as follows (I inserted some line breaks into "module_stdout" for better readability):
```
fatal: [my_host]: FAILED! => {
"changed": false,
"checksum": "783551ae407f9d3396749507d505c2c22f8fc09f",
"failed": true,
"invocation": {
"module_args": {
"dest": "/etc/pve/firewall/cluster.fw",
"src": "cluster.fw"
},
"module_name": "copy"
},
"module_stderr": "",
"module_stdout": "Traceback (most recent call last):\r\n
File \"/tmp/ansible_7ulKiR/ansible_module_copy.py\", line 364, in <module>\r\n
main()\r\n File \"/tmp/ansible_7ulKiR/ansible_module_copy.py\", line 343, in main\r\n
module.atomic_move(b_mysrc, dest, unsafe_writes=module.params['unsafe_writes'])\r\n
File \"/tmp/ansible_7ulKiR/ansible_modlib.zip/ansible/module_utils/basic.py\", line 2003, in
atomic_move\r\nNameError: global name 'exception' is not defined\r\n",
"msg": "MODULE FAILURE"
}
```
If I copy the file to another folder (e.g. /etc or /tmp) the error doesn't occur.
Interestingly enough, if I copy the file to /tmp and then invoke the copy module as follows, the error occurs neither:
```
copy:
src: '/tmp/cluster.fw'
remote_src: yes
dest: '/etc/pve/firewall/cluster.fw'
```
| main | files module copy error copying files into fuse filesystem issue type bug report component name copy ansible version ansible config file ansible cfg configured module search path default w o overrides configuration become true host key checking false os environment host os macos sierra python managed os debian jessie with proxmox installed python summary i want to copy the firewall configuration file of proxmox to the managed server but instead of copying i get an error i was able to isolate the issue to copying the file into a fuse filesystem steps to reproduce module invokation copy src cluster fw dest etc pve firewall cluster fw unfortunately my nix expertise is not sufficient to give you any hints of how to configure a fuse filesystem for yourself but after installing proxmox the folder etc pve is actually a mount point to a fuse filesystem but i think this problem occurs with every fuse filesystem expected results the file should be copied without an error or at least the error message should be more descriptive actual results the error message is as follows i inserted some line breaks into module stdout for better readability fatal failed changed false checksum failed true invocation module args dest etc pve firewall cluster fw src cluster fw module name copy module stderr module stdout traceback most recent call last r n file tmp ansible ansible module copy py line in r n main r n file tmp ansible ansible module copy py line in main r n module atomic move b mysrc dest unsafe writes module params r n file tmp ansible ansible modlib zip ansible module utils basic py line in atomic move r nnameerror global name exception is not defined r n msg module failure if i copy the file to another folder e g etc or tmp the error doesn t occur interestingly enough if i copy the file to tmp and then invoke the copy module as follows the error occurs neither copy src tmp cluster fw remote src yes dest etc pve firewall cluster fw | 1 |
2,139 | 7,359,619,973 | IssuesEvent | 2018-03-10 09:06:41 | jramell/Choice | https://api.github.com/repos/jramell/Choice | opened | Create Sound Manager | maintainability performance | Currently, most sounds are managed only by the scripts that play them. Also, in cases like the DoorPanel, each individual object contains its own AudioSource component. This makes it easier to add and remove objects from the scene, but has a performance cost. It could be resolved by using a Sound Manager that contained the frequently used sounds. | True | Create Sound Manager - Currently, most sounds are managed only by the scripts that play them. Also, in cases like the DoorPanel, each individual object contains its own AudioSource component. This makes it easier to add and remove objects from the scene, but has a performance cost. It could be resolved by using a Sound Manager that contained the frequently used sounds. | main | create sound manager currently most sounds are managed only by the scripts that play them also in cases like the doorpanel each individual object contains its own audiosource component this makes it easier to add and remove objects from the scene but has a performance cost it could be resolved by using a sound manager that contained the frequently used sounds | 1 |
194,562 | 22,262,026,255 | IssuesEvent | 2022-06-10 02:00:16 | Trinadh465/device_renesas_kernel_AOSP10_r33_CVE-2021-33034 | https://api.github.com/repos/Trinadh465/device_renesas_kernel_AOSP10_r33_CVE-2021-33034 | reopened | WS-2021-0279 (Medium) detected in linuxlinux-4.19.239 | security vulnerability | ## WS-2021-0279 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.239</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Trinadh465/device_renesas_kernel_AOSP10_r33_CVE-2021-33034/commit/19525e8c58fe9ba0d7cb0f7a1a87d31d30380de6">19525e8c58fe9ba0d7cb0f7a1a87d31d30380de6</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/fs/btrfs/tree-log.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Linux Kernel in versions v2.6.29-rc1 to v5.12.9 there is an error handling in fixup_inode_link_counts which could lead to memory leak.
<p>Publish Date: 2021-06-25
<p>URL: <a href=https://github.com/gregkh/linux/commit/4cd303735bdfacd115ee20a6f3235b0084924174>WS-2021-0279</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.2</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://osv.dev/vulnerability/UVI-2021-1000819">https://osv.dev/vulnerability/UVI-2021-1000819</a></p>
<p>Release Date: 2021-06-25</p>
<p>Fix Resolution: v5.12.10</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | WS-2021-0279 (Medium) detected in linuxlinux-4.19.239 - ## WS-2021-0279 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.239</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Trinadh465/device_renesas_kernel_AOSP10_r33_CVE-2021-33034/commit/19525e8c58fe9ba0d7cb0f7a1a87d31d30380de6">19525e8c58fe9ba0d7cb0f7a1a87d31d30380de6</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/fs/btrfs/tree-log.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Linux Kernel in versions v2.6.29-rc1 to v5.12.9 there is an error handling in fixup_inode_link_counts which could lead to memory leak.
<p>Publish Date: 2021-06-25
<p>URL: <a href=https://github.com/gregkh/linux/commit/4cd303735bdfacd115ee20a6f3235b0084924174>WS-2021-0279</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.2</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://osv.dev/vulnerability/UVI-2021-1000819">https://osv.dev/vulnerability/UVI-2021-1000819</a></p>
<p>Release Date: 2021-06-25</p>
<p>Fix Resolution: v5.12.10</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | ws medium detected in linuxlinux ws medium severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href found in base branch master vulnerable source files fs btrfs tree log c vulnerability details linux kernel in versions to there is an error handling in fixup inode link counts which could lead to memory leak publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
5,151 | 26,251,534,749 | IssuesEvent | 2023-01-05 19:47:31 | centerofci/mathesar | https://api.github.com/repos/centerofci/mathesar | opened | Users & permissions meta issue | type: enhancement work: backend work: frontend status: draft restricted: maintainers | This is a meta issue tracking Users & Permissions work.
## Draft issue - more details to come. | True | Users & permissions meta issue - This is a meta issue tracking Users & Permissions work.
## Draft issue - more details to come. | main | users permissions meta issue this is a meta issue tracking users permissions work draft issue more details to come | 1 |
24,646 | 4,103,799,397 | IssuesEvent | 2016-06-04 22:58:50 | pyca/cryptography | https://api.github.com/repos/pyca/cryptography | closed | We need a 32-bit builder | testing | All our testing is done against 64-bit, but 32-bit does still exist. Let's get a builder in place (and add it to our `-Werror` jenkins job) for this.
refs #1134 | 1.0 | We need a 32-bit builder - All our testing is done against 64-bit, but 32-bit does still exist. Let's get a builder in place (and add it to our `-Werror` jenkins job) for this.
refs #1134 | non_main | we need a bit builder all our testing is done against bit but bit does still exist let s get a builder in place and add it to our werror jenkins job for this refs | 0 |
857 | 4,525,194,418 | IssuesEvent | 2016-09-07 03:17:35 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | Unable to use vmware_datacenter module | bug_report cloud vmware waiting_on_maintainer | Issue type: Bug report
Ansible version: 2.1.0 devel 6bf2f45
Ansible configuration: Default
Environment: Kubuntu 15.10 (4.2.0-25-generic)
Summary: I'm trying to use vmware_datacenter module to create a datacenter inside of the VMware. In fact, datacenter is created, but task finishes with an error:
```
An exception occurred during task execution. The full traceback is:
Traceback (most recent call last):
File "/home/kamil/.ansible/tmp/ansible-tmp-1453922153.83-136038268793171/vmware_datacenter", line 2308, in <module>
main()
File "/home/kamil/.ansible/tmp/ansible-tmp-1453922153.83-136038268793171/vmware_datacenter", line 171, in main
datacenter_states[desired_state][current_state](module)
File "/home/kamil/.ansible/tmp/ansible-tmp-1453922153.83-136038268793171/vmware_datacenter", line 141, in state_exit_unchanged
module.exit_json(changed=False)
File "/home/kamil/.ansible/tmp/ansible-tmp-1453922153.83-136038268793171/vmware_datacenter", line 1749, in exit_json
kwargs = remove_values(kwargs, self.no_log_values)
File "/home/kamil/.ansible/tmp/ansible-tmp-1453922153.83-136038268793171/vmware_datacenter", line 591, in remove_values
return dict((k, remove_values(v, no_log_strings)) for k, v in value.items())
File "/home/kamil/.ansible/tmp/ansible-tmp-1453922153.83-136038268793171/vmware_datacenter", line 591, in <genexpr>
return dict((k, remove_values(v, no_log_strings)) for k, v in value.items())
File "/home/kamil/.ansible/tmp/ansible-tmp-1453922153.83-136038268793171/vmware_datacenter", line 591, in remove_values
return dict((k, remove_values(v, no_log_strings)) for k, v in value.items())
File "/home/kamil/.ansible/tmp/ansible-tmp-1453922153.83-136038268793171/vmware_datacenter", line 591, in <genexpr>
return dict((k, remove_values(v, no_log_strings)) for k, v in value.items())
File "/home/kamil/.ansible/tmp/ansible-tmp-1453922153.83-136038268793171/vmware_datacenter", line 591, in remove_values
return dict((k, remove_values(v, no_log_strings)) for k, v in value.items())
File "/home/kamil/.ansible/tmp/ansible-tmp-1453922153.83-136038268793171/vmware_datacenter", line 591, in <genexpr>
return dict((k, remove_values(v, no_log_strings)) for k, v in value.items())
File "/home/kamil/.ansible/tmp/ansible-tmp-1453922153.83-136038268793171/vmware_datacenter", line 600, in remove_values
raise TypeError('Value of unknown type: %s, %s' % (type(value), value))
TypeError: Value of unknown type: <class 'pyVmomi.VmomiSupport.vim.Datacenter'>, 'vim.Datacenter:datacenter-22'
```
Steps to reproduce:
Example of the role:
```yaml
---
- name: Create datacenter
local_action: vmware_datacenter hostname="vcenter_hostname" username="vcenter_user" password="vcenter_password" datacenter_name="datacenter_name" state=present
```
Expected result: Task should finish without errors.
Actual results: Datacenter is created, but task finishes with an error.
Thanks. | True | Unable to use vmware_datacenter module - Issue type: Bug report
Ansible version: 2.1.0 devel 6bf2f45
Ansible configuration: Default
Environment: Kubuntu 15.10 (4.2.0-25-generic)
Summary: I'm trying to use vmware_datacenter module to create a datacenter inside of the VMware. In fact, datacenter is created, but task finishes with an error:
```
An exception occurred during task execution. The full traceback is:
Traceback (most recent call last):
File "/home/kamil/.ansible/tmp/ansible-tmp-1453922153.83-136038268793171/vmware_datacenter", line 2308, in <module>
main()
File "/home/kamil/.ansible/tmp/ansible-tmp-1453922153.83-136038268793171/vmware_datacenter", line 171, in main
datacenter_states[desired_state][current_state](module)
File "/home/kamil/.ansible/tmp/ansible-tmp-1453922153.83-136038268793171/vmware_datacenter", line 141, in state_exit_unchanged
module.exit_json(changed=False)
File "/home/kamil/.ansible/tmp/ansible-tmp-1453922153.83-136038268793171/vmware_datacenter", line 1749, in exit_json
kwargs = remove_values(kwargs, self.no_log_values)
File "/home/kamil/.ansible/tmp/ansible-tmp-1453922153.83-136038268793171/vmware_datacenter", line 591, in remove_values
return dict((k, remove_values(v, no_log_strings)) for k, v in value.items())
File "/home/kamil/.ansible/tmp/ansible-tmp-1453922153.83-136038268793171/vmware_datacenter", line 591, in <genexpr>
return dict((k, remove_values(v, no_log_strings)) for k, v in value.items())
File "/home/kamil/.ansible/tmp/ansible-tmp-1453922153.83-136038268793171/vmware_datacenter", line 591, in remove_values
return dict((k, remove_values(v, no_log_strings)) for k, v in value.items())
File "/home/kamil/.ansible/tmp/ansible-tmp-1453922153.83-136038268793171/vmware_datacenter", line 591, in <genexpr>
return dict((k, remove_values(v, no_log_strings)) for k, v in value.items())
File "/home/kamil/.ansible/tmp/ansible-tmp-1453922153.83-136038268793171/vmware_datacenter", line 591, in remove_values
return dict((k, remove_values(v, no_log_strings)) for k, v in value.items())
File "/home/kamil/.ansible/tmp/ansible-tmp-1453922153.83-136038268793171/vmware_datacenter", line 591, in <genexpr>
return dict((k, remove_values(v, no_log_strings)) for k, v in value.items())
File "/home/kamil/.ansible/tmp/ansible-tmp-1453922153.83-136038268793171/vmware_datacenter", line 600, in remove_values
raise TypeError('Value of unknown type: %s, %s' % (type(value), value))
TypeError: Value of unknown type: <class 'pyVmomi.VmomiSupport.vim.Datacenter'>, 'vim.Datacenter:datacenter-22'
```
Steps to reproduce:
Example of the role:
```yaml
---
- name: Create datacenter
local_action: vmware_datacenter hostname="vcenter_hostname" username="vcenter_user" password="vcenter_password" datacenter_name="datacenter_name" state=present
```
Expected result: Task should finish without errors.
Actual results: Datacenter is created, but task finishes with an error.
Thanks. | main | unable to use vmware datacenter module issue type bug report ansible version devel ansible configuration default environment kubuntu generic summary i m trying to use vmware datacenter module to create a datacenter inside of the vmware in fact datacenter is created but task finishes with an error an exception occurred during task execution the full traceback is traceback most recent call last file home kamil ansible tmp ansible tmp vmware datacenter line in main file home kamil ansible tmp ansible tmp vmware datacenter line in main datacenter states module file home kamil ansible tmp ansible tmp vmware datacenter line in state exit unchanged module exit json changed false file home kamil ansible tmp ansible tmp vmware datacenter line in exit json kwargs remove values kwargs self no log values file home kamil ansible tmp ansible tmp vmware datacenter line in remove values return dict k remove values v no log strings for k v in value items file home kamil ansible tmp ansible tmp vmware datacenter line in return dict k remove values v no log strings for k v in value items file home kamil ansible tmp ansible tmp vmware datacenter line in remove values return dict k remove values v no log strings for k v in value items file home kamil ansible tmp ansible tmp vmware datacenter line in return dict k remove values v no log strings for k v in value items file home kamil ansible tmp ansible tmp vmware datacenter line in remove values return dict k remove values v no log strings for k v in value items file home kamil ansible tmp ansible tmp vmware datacenter line in return dict k remove values v no log strings for k v in value items file home kamil ansible tmp ansible tmp vmware datacenter line in remove values raise typeerror value of unknown type s s type value value typeerror value of unknown type vim datacenter datacenter steps to reproduce example of the role yaml name create datacenter local action vmware datacenter hostname vcenter hostname username vcenter user password vcenter password datacenter name datacenter name state present expected result task should finish without errors actual results datacenter is created but task finishes with an error thanks | 1 |
170,978 | 20,905,374,459 | IssuesEvent | 2022-03-24 01:10:40 | turkdevops/playwright | https://api.github.com/repos/turkdevops/playwright | closed | CVE-2018-1000180 (High) detected in bcprov-jdk15on-1.56.jar - autoclosed | security vulnerability | ## CVE-2018-1000180 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bcprov-jdk15on-1.56.jar</b></p></summary>
<p>The Bouncy Castle Crypto package is a Java implementation of cryptographic algorithms. This jar contains JCE provider and lightweight API for the Bouncy Castle Cryptography APIs for JDK 1.5 to JDK 1.8.</p>
<p>Library home page: <a href="http://www.bouncycastle.org/java.html">http://www.bouncycastle.org/java.html</a></p>
<p>Path to dependency file: /packages/playwright-core/src/server/android/driver/app/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.bouncycastle/bcprov-jdk15on/1.56/a153c6f9744a3e9dd6feab5e210e1c9861362ec7/bcprov-jdk15on-1.56.jar</p>
<p>
Dependency Hierarchy:
- lint-gradle-27.1.0.jar (Root Library)
- builder-4.1.0.jar
- :x: **bcprov-jdk15on-1.56.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/turkdevops/playwright/commit/3fa99509b24c58adcc2e4747e5d2e201eb736502">3fa99509b24c58adcc2e4747e5d2e201eb736502</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Bouncy Castle BC 1.54 - 1.59, BC-FJA 1.0.0, BC-FJA 1.0.1 and earlier have a flaw in the Low-level interface to RSA key pair generator, specifically RSA Key Pairs generated in low-level API with added certainty may have less M-R tests than expected. This appears to be fixed in versions BC 1.60 beta 4 and later, BC-FJA 1.0.2 and later.
<p>Publish Date: 2018-06-05
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-1000180>CVE-2018-1000180</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-1000180">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-1000180</a></p>
<p>Release Date: 2018-06-05</p>
<p>Fix Resolution: org.bouncycastle:bc-fips:1.0.2;org.bouncycastle:bcprov-jdk15on:1.60;org.bouncycastle:bcprov-jdk14:1.60;org.bouncycastle:bcprov-ext-jdk14:1.60;org.bouncycastle:bcprov-ext-jdk15on:1.60;org.bouncycastle:bcprov-debug-jdk14:1.60;org.bouncycastle:bcprov-debug-jdk15on:1.60</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2018-1000180 (High) detected in bcprov-jdk15on-1.56.jar - autoclosed - ## CVE-2018-1000180 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bcprov-jdk15on-1.56.jar</b></p></summary>
<p>The Bouncy Castle Crypto package is a Java implementation of cryptographic algorithms. This jar contains JCE provider and lightweight API for the Bouncy Castle Cryptography APIs for JDK 1.5 to JDK 1.8.</p>
<p>Library home page: <a href="http://www.bouncycastle.org/java.html">http://www.bouncycastle.org/java.html</a></p>
<p>Path to dependency file: /packages/playwright-core/src/server/android/driver/app/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.bouncycastle/bcprov-jdk15on/1.56/a153c6f9744a3e9dd6feab5e210e1c9861362ec7/bcprov-jdk15on-1.56.jar</p>
<p>
Dependency Hierarchy:
- lint-gradle-27.1.0.jar (Root Library)
- builder-4.1.0.jar
- :x: **bcprov-jdk15on-1.56.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/turkdevops/playwright/commit/3fa99509b24c58adcc2e4747e5d2e201eb736502">3fa99509b24c58adcc2e4747e5d2e201eb736502</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Bouncy Castle BC 1.54 - 1.59, BC-FJA 1.0.0, BC-FJA 1.0.1 and earlier have a flaw in the Low-level interface to RSA key pair generator, specifically RSA Key Pairs generated in low-level API with added certainty may have less M-R tests than expected. This appears to be fixed in versions BC 1.60 beta 4 and later, BC-FJA 1.0.2 and later.
<p>Publish Date: 2018-06-05
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-1000180>CVE-2018-1000180</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-1000180">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-1000180</a></p>
<p>Release Date: 2018-06-05</p>
<p>Fix Resolution: org.bouncycastle:bc-fips:1.0.2;org.bouncycastle:bcprov-jdk15on:1.60;org.bouncycastle:bcprov-jdk14:1.60;org.bouncycastle:bcprov-ext-jdk14:1.60;org.bouncycastle:bcprov-ext-jdk15on:1.60;org.bouncycastle:bcprov-debug-jdk14:1.60;org.bouncycastle:bcprov-debug-jdk15on:1.60</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve high detected in bcprov jar autoclosed cve high severity vulnerability vulnerable library bcprov jar the bouncy castle crypto package is a java implementation of cryptographic algorithms this jar contains jce provider and lightweight api for the bouncy castle cryptography apis for jdk to jdk library home page a href path to dependency file packages playwright core src server android driver app build gradle path to vulnerable library home wss scanner gradle caches modules files org bouncycastle bcprov bcprov jar dependency hierarchy lint gradle jar root library builder jar x bcprov jar vulnerable library found in head commit a href found in base branch master vulnerability details bouncy castle bc bc fja bc fja and earlier have a flaw in the low level interface to rsa key pair generator specifically rsa key pairs generated in low level api with added certainty may have less m r tests than expected this appears to be fixed in versions bc beta and later bc fja and later publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org bouncycastle bc fips org bouncycastle bcprov org bouncycastle bcprov org bouncycastle bcprov ext org bouncycastle bcprov ext org bouncycastle bcprov debug org bouncycastle bcprov debug step up your open source security game with whitesource | 0 |
3,088 | 11,739,054,628 | IssuesEvent | 2020-03-11 17:03:26 | alacritty/alacritty | https://api.github.com/repos/alacritty/alacritty | closed | Request: `alacritty -- <command>...` to replace `--command` | C - waiting on maintainer | I think `--command`/`-e` is unusual when compared to the other flags as it must be the last. This unusual-ness is also inconsistent amongst different terminals.
GNOME Terminal also uses `--` and deprecates `-e` flag.
| True | Request: `alacritty -- <command>...` to replace `--command` - I think `--command`/`-e` is unusual when compared to the other flags as it must be the last. This unusual-ness is also inconsistent amongst different terminals.
GNOME Terminal also uses `--` and deprecates `-e` flag.
| main | request alacritty to replace command i think command e is unusual when compared to the other flags as it must be the last this unusual ness is also inconsistent amongst different terminals gnome terminal also uses and deprecates e flag | 1 |
431,759 | 30,249,793,061 | IssuesEvent | 2023-07-06 19:31:51 | ash-sxn/GSoC-2023-docker-based-quickstart | https://api.github.com/repos/ash-sxn/GSoC-2023-docker-based-quickstart | opened | Forget about Pipeline, move to MultiBranch Pipeline | documentation enhancement feature | To be honest, a simple pipeline project is very limiting, as we would have to create one pipeline project per branch.
Let's move all the tutorials to MultiBranch pipelines. | 1.0 | Forget about Pipeline, move to MultiBranch Pipeline - To be honest, a simple pipeline project is very limiting, as we would have to create one pipeline project per branch.
Let's move all the tutorials to MultiBranch pipelines. | non_main | forget about pipeline move to multibranch pipeline to be honest a simple pipeline project is very limiting as we would have to create one pipeline project per branch let s move all the tutorials to multibranch pipelines | 0 |
528 | 3,925,713,600 | IssuesEvent | 2016-04-22 20:05:19 | heiglandreas/authLdap | https://api.github.com/repos/heiglandreas/authLdap | closed | Stop Password-Change Email on password-Update via LDAP | bug maintainer reply expected | Currently a user will get an Email after login that the password has changed when password-caching is enabled. The preffered behaviour would be to simply not send a password-change-Email when a user-password is changed via LDAP.
THis has been reported on https://wordpress.org/support/topic/authldap-authentication-triggers-email-password-change-notification | True | Stop Password-Change Email on password-Update via LDAP - Currently a user will get an Email after login that the password has changed when password-caching is enabled. The preffered behaviour would be to simply not send a password-change-Email when a user-password is changed via LDAP.
THis has been reported on https://wordpress.org/support/topic/authldap-authentication-triggers-email-password-change-notification | main | stop password change email on password update via ldap currently a user will get an email after login that the password has changed when password caching is enabled the preffered behaviour would be to simply not send a password change email when a user password is changed via ldap this has been reported on | 1 |
847 | 4,506,610,199 | IssuesEvent | 2016-09-02 05:08:09 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | zypper module: notify crashes because of changed dict | bug_report waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
zypper
notify
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.2.0 (devel acd69bcc77) last updated 2016/08/31 16:06:22 (GMT +200)
lib/ansible/modules/core: (detached HEAD 5310bab12f) last updated 2016/08/31 16:06:30 (GMT +200)
lib/ansible/modules/extras: (detached HEAD 2ef4a34eee) last updated 2016/08/31 16:06:30 (GMT +200)
config file =
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
SLES 12
##### SUMMARY
<!--- Explain the problem briefly -->
The package module calls zypper with a list of packages/programs to install. The operation completes without errors. In my opinion, notify executes now and raises an error because package or zypper returned "changed" = {} instead of "changed" = "false".
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
The following will raise an exeption even if everything is fine and unchanged.
<!--- Paste example playbooks or commands between quotes below -->
```
#example code - apache2-utils is installed when apache is installed so both is fine
#from role/myrole/tasks/main.yml
- name: install apache and apache modules
package: name={{ item }} state=latest
with_items:
- "apache2"
- "apache2-utils"
notify:
- "restart apache"
#frome /role/myrole/handlers/main.yml
- name: restart apache
service: name=apache2 state=restarted
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
During the first run, when the packages aren't present, I expected that the handler is called. Instead => exception
During the second run, when the packages are present, I expectet that the handler isn't called. Instead => exception
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
Running zypper
Using module file /home/****/ansible/lib/ansible/modules/extras/packaging/os/zypper.py
<node> ESTABLISH SSH CONNECTION FOR USER: None
<node> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/****/.ansible/cp/ansible-ssh-%h-%p-%r node '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1472731543.66-70288465940279 `" && echo ansible-tmp-1472731543.66-70288465940279="` echo $HOME/.ansible/tmp/ansible-tmp-1472731543.66-70288465940279 `" ) && sleep 0'"'"''
<node> PUT /tmp/tmpd7stK7 TO /home/****/.ansible/tmp/ansible-tmp-1472731543.66-70288465940279/zypper.py
<node> SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/****/.ansible/cp/ansible-ssh-%h-%p-%r '[node]'
<node> ESTABLISH SSH CONNECTION FOR USER: None
<node> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/****/.ansible/cp/ansible-ssh-%h-%p-%r node '/bin/sh -c '"'"'chmod u+x /home/****/.ansible/tmp/ansible-tmp-1472731543.66-70288465940279/ /home/****/.ansible/tmp/ansible-tmp-1472731543.66-70288465940279/zypper.py && sleep 0'"'"''
<node> ESTABLISH SSH CONNECTION FOR USER: None
<node> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/****/.ansible/cp/ansible-ssh-%h-%p-%r -tt node '/bin/sh -c '"'"'sudo -H -S -p "[sudo via ansible, key=MYSECRET] password: " -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-MYSECRET; /usr/bin/python /home/****/.ansible/tmp/ansible-tmp-1472731543.66-70288465940279/zypper.py; rm -rf "/home/****/.ansible/tmp/ansible-tmp-1472731543.66-70288465940279/" > /dev/null 2>&1'"'"'"'"'"'"'"'"' && sleep 0'"'"''
ok: [node] => (item=[u'apache2', u'apache2-utils']) => {
"changed": {},
"cmd": [
"/usr/bin/zypper",
"--quiet",
"--non-interactive",
"--xmlout",
"install",
"--type",
"package",
"--auto-agree-with-licenses",
"--no-recommends",
"--",
"apache2-utils",
"apache2"
],
"invocation": {
"module_args": {
"disable_gpg_check": false,
"disable_recommends": true,
"force": false,
"name": [
"apache2",
"apache2-utils"
],
"oldpackage": false,
"state": "latest",
"type": "package",
"update_cache": false
}
},
"item": [
"apache2",
"apache2-utils"
],
"name": [
"apache2",
"apache2-utils"
],
"rc": 0,
"state": "latest",
"update_cache": false
}
ERROR! Unexpected Exception: unsupported operand type(s) for |=: 'bool' and 'dict'
the full traceback was:
Traceback (most recent call last):
File "/home/****/ansible/bin/ansible-playbook", line 97, in <module>
exit_code = cli.run()
File "/home/****/ansible/lib/ansible/cli/playbook.py", line 154, in run
results = pbex.run()
File "/home/****/ansible/lib/ansible/executor/playbook_executor.py", line 147, in run
result = self._tqm.run(play=play)
File "/home/****/ansible/lib/ansible/executor/task_queue_manager.py", line 281, in run
play_return = strategy.run(iterator, play_context)
File "/home/****/ansible/lib/ansible/plugins/strategy/linear.py", line 269, in run
results += self._wait_on_pending_results(iterator)
File "/home/****/ansible/lib/ansible/plugins/strategy/__init__.py", line 514, in _wait_on_pending_results
results = self._process_pending_results(iterator)
File "/home/****/ansible/lib/ansible/plugins/strategy/__init__.py", line 370, in _process_pending_results
if task_result.is_changed():
File "/home/****/ansible/lib/ansible/executor/task_result.py", line 40, in is_changed
return self._check_key('changed')
File "/home/****/ansible/lib/ansible/executor/task_result.py", line 69, in _check_key
flag |= res.get(key, False)
TypeError: unsupported operand type(s) for |=: 'bool' and 'dict'
AND WITHOUT -vvvv
TASK [apache-proxy : install apache and apache modules] ************************
ok: [sazvl0021.saz.bosch-si.com] => (item=[u'apache2', u'apache2-utils'])
ERROR! Unexpected Exception: unsupported operand type(s) for |=: 'bool' and 'dict'
```
| True | zypper module: notify crashes because of changed dict - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
zypper
notify
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.2.0 (devel acd69bcc77) last updated 2016/08/31 16:06:22 (GMT +200)
lib/ansible/modules/core: (detached HEAD 5310bab12f) last updated 2016/08/31 16:06:30 (GMT +200)
lib/ansible/modules/extras: (detached HEAD 2ef4a34eee) last updated 2016/08/31 16:06:30 (GMT +200)
config file =
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
SLES 12
##### SUMMARY
<!--- Explain the problem briefly -->
The package module calls zypper with a list of packages/programs to install. The operation completes without errors. In my opinion, notify executes now and raises an error because package or zypper returned "changed" = {} instead of "changed" = "false".
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
The following will raise an exeption even if everything is fine and unchanged.
<!--- Paste example playbooks or commands between quotes below -->
```
#example code - apache2-utils is installed when apache is installed so both is fine
#from role/myrole/tasks/main.yml
- name: install apache and apache modules
package: name={{ item }} state=latest
with_items:
- "apache2"
- "apache2-utils"
notify:
- "restart apache"
#frome /role/myrole/handlers/main.yml
- name: restart apache
service: name=apache2 state=restarted
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
During the first run, when the packages aren't present, I expected that the handler is called. Instead => exception
During the second run, when the packages are present, I expectet that the handler isn't called. Instead => exception
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
Running zypper
Using module file /home/****/ansible/lib/ansible/modules/extras/packaging/os/zypper.py
<node> ESTABLISH SSH CONNECTION FOR USER: None
<node> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/****/.ansible/cp/ansible-ssh-%h-%p-%r node '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1472731543.66-70288465940279 `" && echo ansible-tmp-1472731543.66-70288465940279="` echo $HOME/.ansible/tmp/ansible-tmp-1472731543.66-70288465940279 `" ) && sleep 0'"'"''
<node> PUT /tmp/tmpd7stK7 TO /home/****/.ansible/tmp/ansible-tmp-1472731543.66-70288465940279/zypper.py
<node> SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/****/.ansible/cp/ansible-ssh-%h-%p-%r '[node]'
<node> ESTABLISH SSH CONNECTION FOR USER: None
<node> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/****/.ansible/cp/ansible-ssh-%h-%p-%r node '/bin/sh -c '"'"'chmod u+x /home/****/.ansible/tmp/ansible-tmp-1472731543.66-70288465940279/ /home/****/.ansible/tmp/ansible-tmp-1472731543.66-70288465940279/zypper.py && sleep 0'"'"''
<node> ESTABLISH SSH CONNECTION FOR USER: None
<node> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/****/.ansible/cp/ansible-ssh-%h-%p-%r -tt node '/bin/sh -c '"'"'sudo -H -S -p "[sudo via ansible, key=MYSECRET] password: " -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-MYSECRET; /usr/bin/python /home/****/.ansible/tmp/ansible-tmp-1472731543.66-70288465940279/zypper.py; rm -rf "/home/****/.ansible/tmp/ansible-tmp-1472731543.66-70288465940279/" > /dev/null 2>&1'"'"'"'"'"'"'"'"' && sleep 0'"'"''
ok: [node] => (item=[u'apache2', u'apache2-utils']) => {
"changed": {},
"cmd": [
"/usr/bin/zypper",
"--quiet",
"--non-interactive",
"--xmlout",
"install",
"--type",
"package",
"--auto-agree-with-licenses",
"--no-recommends",
"--",
"apache2-utils",
"apache2"
],
"invocation": {
"module_args": {
"disable_gpg_check": false,
"disable_recommends": true,
"force": false,
"name": [
"apache2",
"apache2-utils"
],
"oldpackage": false,
"state": "latest",
"type": "package",
"update_cache": false
}
},
"item": [
"apache2",
"apache2-utils"
],
"name": [
"apache2",
"apache2-utils"
],
"rc": 0,
"state": "latest",
"update_cache": false
}
ERROR! Unexpected Exception: unsupported operand type(s) for |=: 'bool' and 'dict'
the full traceback was:
Traceback (most recent call last):
File "/home/****/ansible/bin/ansible-playbook", line 97, in <module>
exit_code = cli.run()
File "/home/****/ansible/lib/ansible/cli/playbook.py", line 154, in run
results = pbex.run()
File "/home/****/ansible/lib/ansible/executor/playbook_executor.py", line 147, in run
result = self._tqm.run(play=play)
File "/home/****/ansible/lib/ansible/executor/task_queue_manager.py", line 281, in run
play_return = strategy.run(iterator, play_context)
File "/home/****/ansible/lib/ansible/plugins/strategy/linear.py", line 269, in run
results += self._wait_on_pending_results(iterator)
File "/home/****/ansible/lib/ansible/plugins/strategy/__init__.py", line 514, in _wait_on_pending_results
results = self._process_pending_results(iterator)
File "/home/****/ansible/lib/ansible/plugins/strategy/__init__.py", line 370, in _process_pending_results
if task_result.is_changed():
File "/home/****/ansible/lib/ansible/executor/task_result.py", line 40, in is_changed
return self._check_key('changed')
File "/home/****/ansible/lib/ansible/executor/task_result.py", line 69, in _check_key
flag |= res.get(key, False)
TypeError: unsupported operand type(s) for |=: 'bool' and 'dict'
AND WITHOUT -vvvv
TASK [apache-proxy : install apache and apache modules] ************************
ok: [sazvl0021.saz.bosch-si.com] => (item=[u'apache2', u'apache2-utils'])
ERROR! Unexpected Exception: unsupported operand type(s) for |=: 'bool' and 'dict'
```
| main | zypper module notify crashes because of changed dict issue type bug report component name zypper notify ansible version ansible devel last updated gmt lib ansible modules core detached head last updated gmt lib ansible modules extras detached head last updated gmt config file configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific sles summary the package module calls zypper with a list of packages programs to install the operation completes without errors in my opinion notify executes now and raises an error because package or zypper returned changed instead of changed false steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used the following will raise an exeption even if everything is fine and unchanged example code utils is installed when apache is installed so both is fine from role myrole tasks main yml name install apache and apache modules package name item state latest with items utils notify restart apache frome role myrole handlers main yml name restart apache service name state restarted expected results during the first run when the packages aren t present i expected that the handler is called instead exception during the second run when the packages are present i expectet that the handler isn t called instead exception actual results running zypper using module file home ansible lib ansible modules extras packaging os zypper py establish ssh connection for user none ssh exec ssh vvv c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o connecttimeout o controlpath home ansible cp ansible ssh h p r node bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to home ansible tmp ansible tmp zypper py ssh exec sftp b vvv c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o connecttimeout o controlpath home ansible cp ansible ssh h p r establish ssh connection for user none ssh exec ssh vvv c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o connecttimeout o controlpath home ansible cp ansible ssh h p r node bin sh c chmod u x home ansible tmp ansible tmp home ansible tmp ansible tmp zypper py sleep establish ssh connection for user none ssh exec ssh vvv c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o connecttimeout o controlpath home ansible cp ansible ssh h p r tt node bin sh c sudo h s p password u root bin sh c echo become success mysecret usr bin python home ansible tmp ansible tmp zypper py rm rf home ansible tmp ansible tmp dev null sleep ok item changed cmd usr bin zypper quiet non interactive xmlout install type package auto agree with licenses no recommends utils invocation module args disable gpg check false disable recommends true force false name utils oldpackage false state latest type package update cache false item utils name utils rc state latest update cache false error unexpected exception unsupported operand type s for bool and dict the full traceback was traceback most recent call last file home ansible bin ansible playbook line in exit code cli run file home ansible lib ansible cli playbook py line in run results pbex run file home ansible lib ansible executor playbook executor py line in run result self tqm run play play file home ansible lib ansible executor task queue manager py line in run play return strategy run iterator play context file home ansible lib ansible plugins strategy linear py line in run results self wait on pending results iterator file home ansible lib ansible plugins strategy init py line in wait on pending results results self process pending results iterator file home ansible lib ansible plugins strategy init py line in process pending results if task result is changed file home ansible lib ansible executor task result py line in is changed return self check key changed file home ansible lib ansible executor task result py line in check key flag res get key false typeerror unsupported operand type s for bool and dict and without vvvv task ok item error unexpected exception unsupported operand type s for bool and dict | 1 |
2,037 | 6,848,858,417 | IssuesEvent | 2017-11-13 19:58:37 | backdrop-ops/contrib | https://api.github.com/repos/backdrop-ops/contrib | closed | Port of DP 6 Wordpress Import Module | Maintainer application Port status | I nedded it, so I did.
I've ported the Wordpress Import Module from
https://www.drupal.org/project/wordpress_import
Issue under:
https://www.drupal.org/node/2920635
It is already working for me - I've ported my Wordpress site successfully to Backdrop, but the module needs review. So it might be a very good Idea to join the Backdrop Contributed Project Group - so this is my official request.
You can find the ongoing work here:
https://github.com/bjoern-st/wordpress_import
Because of the Importer being able to import multilingual WP-Sites (by now only sites translated with Stella Press are supported) and to create menu entries for pages, I added a little submodule to display the language for each menu entry on the menu entry overview site .. maybe you find it useful.
Cheers Bjoern
| True | Port of DP 6 Wordpress Import Module - I nedded it, so I did.
I've ported the Wordpress Import Module from
https://www.drupal.org/project/wordpress_import
Issue under:
https://www.drupal.org/node/2920635
It is already working for me - I've ported my Wordpress site successfully to Backdrop, but the module needs review. So it might be a very good Idea to join the Backdrop Contributed Project Group - so this is my official request.
You can find the ongoing work here:
https://github.com/bjoern-st/wordpress_import
Because of the Importer being able to import multilingual WP-Sites (by now only sites translated with Stella Press are supported) and to create menu entries for pages, I added a little submodule to display the language for each menu entry on the menu entry overview site .. maybe you find it useful.
Cheers Bjoern
| main | port of dp wordpress import module i nedded it so i did i ve ported the wordpress import module from issue under it is already working for me i ve ported my wordpress site successfully to backdrop but the module needs review so it might be a very good idea to join the backdrop contributed project group so this is my official request you can find the ongoing work here because of the importer being able to import multilingual wp sites by now only sites translated with stella press are supported and to create menu entries for pages i added a little submodule to display the language for each menu entry on the menu entry overview site maybe you find it useful cheers bjoern | 1 |
740,521 | 25,755,675,383 | IssuesEvent | 2022-12-08 16:14:09 | georchestra/mapstore2-cadastrapp | https://api.github.com/repos/georchestra/mapstore2-cadastrapp | closed | positionning conflict between cadastrapp pane and omnibar/burger menu | Priority: Medium Need community discussion | This is a followup to #73 which only fixed **one** conflict between cadastrapp pane and identify tool popup.
Users of cadastrapp are (once the cadastrapp pane is open) being confused by not being able to enter an address in the omnibar, or they complain they cant print the map or access the annotation tool pane or being able to add more layers via layer catalog... (eg all the tools available in the burger menu).
Sure, one can always close cadastrapp pane to reaccess all those features, but then one manually needs to readd the cadastre layer to be able to draw on it or print an area of interest.
Sure, i'll try to add the cadastre layer to the default map shipped in the context, but then i guess it might/will conflict with the same layer being loaded by cadastrapp when opening the pane.
So, all in all, i'd like to have a discussion on how to avoid all those positioning conflicts of tools hiding each other and making other tools unaccessible.. cant the burger menu and the omnibar position be relative to the cadastrapp pane, like this is the case for the other tools available in the bottom right toolbar ?
@jusabatier @MaelREBOUX @catmorales ? | 1.0 | positionning conflict between cadastrapp pane and omnibar/burger menu - This is a followup to #73 which only fixed **one** conflict between cadastrapp pane and identify tool popup.
Users of cadastrapp are (once the cadastrapp pane is open) being confused by not being able to enter an address in the omnibar, or they complain they cant print the map or access the annotation tool pane or being able to add more layers via layer catalog... (eg all the tools available in the burger menu).
Sure, one can always close cadastrapp pane to reaccess all those features, but then one manually needs to readd the cadastre layer to be able to draw on it or print an area of interest.
Sure, i'll try to add the cadastre layer to the default map shipped in the context, but then i guess it might/will conflict with the same layer being loaded by cadastrapp when opening the pane.
So, all in all, i'd like to have a discussion on how to avoid all those positioning conflicts of tools hiding each other and making other tools unaccessible.. cant the burger menu and the omnibar position be relative to the cadastrapp pane, like this is the case for the other tools available in the bottom right toolbar ?
@jusabatier @MaelREBOUX @catmorales ? | non_main | positionning conflict between cadastrapp pane and omnibar burger menu this is a followup to which only fixed one conflict between cadastrapp pane and identify tool popup users of cadastrapp are once the cadastrapp pane is open being confused by not being able to enter an address in the omnibar or they complain they cant print the map or access the annotation tool pane or being able to add more layers via layer catalog eg all the tools available in the burger menu sure one can always close cadastrapp pane to reaccess all those features but then one manually needs to readd the cadastre layer to be able to draw on it or print an area of interest sure i ll try to add the cadastre layer to the default map shipped in the context but then i guess it might will conflict with the same layer being loaded by cadastrapp when opening the pane so all in all i d like to have a discussion on how to avoid all those positioning conflicts of tools hiding each other and making other tools unaccessible cant the burger menu and the omnibar position be relative to the cadastrapp pane like this is the case for the other tools available in the bottom right toolbar jusabatier maelreboux catmorales | 0 |
5,682 | 5,114,773,238 | IssuesEvent | 2017-01-06 19:35:58 | tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow | closed | could not destroy cudnn handle: CUDNN_STATUS_BAD_PARAM Check failed: stream->parent()-GetConvolveAlgorithms(&algorithms) ``` | type:bug/performance |
I'm trying to use tensorflow for this project: https://github.com/ibab/tensorflow-wavenet
I've gotten to the point where when I import tensorflow, I get the messages that all the CUDA libraries are successfully opened locally.
I can run the following python code from https://www.tensorflow.org/get_started/os_setup#run_tensorflow_from_the_command_line and it works fine.
> import tensorflow as tf
> hello = tf.constant('Hello, TensorFlow!')
> sess = tf.Session()
> print(sess.run(hello))
Hello, TensorFlow!
> a = tf.constant(10)
> b = tf.constant(32)
> print(sess.run(a + b))
42
>
However when I run the wavenet project, I get the following error messages and then python crashes.
```
c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\common_runtime\gpu\gpu_device.cc:885] Found device 0 with properties:
name: GeForce GTX 950
major: 5 minor: 2 memoryClockRate (GHz) 1.19
pciBusID 0000:01:00.0
Total memory: 2.00GiB
Free memory: 1.65GiB
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\common_runtime\gpu\gpu_device.cc:906] DMA: 0
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\common_runtime\gpu\gpu_device.cc:916] 0: Y
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\common_runtime\gpu\gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 950, pci bus id: 0000:01:00.0)
WARNING:tensorflow:From train.py:249 in main.: initialize_all_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.
Instructions for updating:
Use `tf.global_variables_initializer` instead.
Trying to restore saved checkpoints from ./logdir\train\2017-01-02T16-17-15 ... No checkpoint found.
E c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\cuda\cuda_dnn.cc:385] could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR
E c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\cuda\cuda_dnn.cc:352] could not destroy cudnn handle: CUDNN_STATUS_BAD_PARAM
F c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\kernels\conv_ops.cc:532] Check failed: stream->parent()-GetConvolveAlgorithms(&algorithms)
```
### What related GitHub issues or StackOverflow threads have you found by searching the web for your problem?
https://github.com/tensorflow/tensorflow/issues/4251
### Environment info
Operating System:
Windows
Installed version of CUDA and cuDNN:
cuDNN v5.1 (August 10, 2016), for CUDA 8.0
If installed from binary pip package, provide:
1. A link to the pip package you installed:
2. The output from `python -c "import tensorflow; print(tensorflow.__version__)"`.
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\dso_loader.cc:128] successfully opened CUDA library cublas64_80.dll locally
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\dso_loader.cc:128] successfully opened CUDA library cudnn64_5.dll locally
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\dso_loader.cc:128] successfully opened CUDA library cufft64_80.dll locally
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\dso_loader.cc:128] successfully opened CUDA library nvcuda.dll locally
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\dso_loader.cc:128] successfully opened CUDA library curand64_80.dll locally
Traceback (most recent call last):
File "<string>", line 1, in <module>
NameError: name 'tensor' is not defined
If installed from source, provide
1. The commit hash (`git rev-parse HEAD`)
2. The output of `bazel version`
### What other attempted solutions have you tried?
I have tried reinstalling Cuda, different versions of cudnn. Looked at different issues with same error messages but nothing seemed to help.
| True | could not destroy cudnn handle: CUDNN_STATUS_BAD_PARAM Check failed: stream->parent()-GetConvolveAlgorithms(&algorithms) ``` -
I'm trying to use tensorflow for this project: https://github.com/ibab/tensorflow-wavenet
I've gotten to the point where when I import tensorflow, I get the messages that all the CUDA libraries are successfully opened locally.
I can run the following python code from https://www.tensorflow.org/get_started/os_setup#run_tensorflow_from_the_command_line and it works fine.
> import tensorflow as tf
> hello = tf.constant('Hello, TensorFlow!')
> sess = tf.Session()
> print(sess.run(hello))
Hello, TensorFlow!
> a = tf.constant(10)
> b = tf.constant(32)
> print(sess.run(a + b))
42
>
However when I run the wavenet project, I get the following error messages and then python crashes.
```
c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\common_runtime\gpu\gpu_device.cc:885] Found device 0 with properties:
name: GeForce GTX 950
major: 5 minor: 2 memoryClockRate (GHz) 1.19
pciBusID 0000:01:00.0
Total memory: 2.00GiB
Free memory: 1.65GiB
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\common_runtime\gpu\gpu_device.cc:906] DMA: 0
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\common_runtime\gpu\gpu_device.cc:916] 0: Y
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\common_runtime\gpu\gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 950, pci bus id: 0000:01:00.0)
WARNING:tensorflow:From train.py:249 in main.: initialize_all_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.
Instructions for updating:
Use `tf.global_variables_initializer` instead.
Trying to restore saved checkpoints from ./logdir\train\2017-01-02T16-17-15 ... No checkpoint found.
E c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\cuda\cuda_dnn.cc:385] could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR
E c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\cuda\cuda_dnn.cc:352] could not destroy cudnn handle: CUDNN_STATUS_BAD_PARAM
F c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\kernels\conv_ops.cc:532] Check failed: stream->parent()-GetConvolveAlgorithms(&algorithms)
```
### What related GitHub issues or StackOverflow threads have you found by searching the web for your problem?
https://github.com/tensorflow/tensorflow/issues/4251
### Environment info
Operating System:
Windows
Installed version of CUDA and cuDNN:
cuDNN v5.1 (August 10, 2016), for CUDA 8.0
If installed from binary pip package, provide:
1. A link to the pip package you installed:
2. The output from `python -c "import tensorflow; print(tensorflow.__version__)"`.
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\dso_loader.cc:128] successfully opened CUDA library cublas64_80.dll locally
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\dso_loader.cc:128] successfully opened CUDA library cudnn64_5.dll locally
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\dso_loader.cc:128] successfully opened CUDA library cufft64_80.dll locally
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\dso_loader.cc:128] successfully opened CUDA library nvcuda.dll locally
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\dso_loader.cc:128] successfully opened CUDA library curand64_80.dll locally
Traceback (most recent call last):
File "<string>", line 1, in <module>
NameError: name 'tensor' is not defined
If installed from source, provide
1. The commit hash (`git rev-parse HEAD`)
2. The output of `bazel version`
### What other attempted solutions have you tried?
I have tried reinstalling Cuda, different versions of cudnn. Looked at different issues with same error messages but nothing seemed to help.
| non_main | could not destroy cudnn handle cudnn status bad param check failed stream parent getconvolvealgorithms algorithms i m trying to use tensorflow for this project i ve gotten to the point where when i import tensorflow i get the messages that all the cuda libraries are successfully opened locally i can run the following python code from and it works fine import tensorflow as tf hello tf constant hello tensorflow sess tf session print sess run hello hello tensorflow a tf constant b tf constant print sess run a b however when i run the wavenet project i get the following error messages and then python crashes c tf jenkins home workspace release win device gpu os windows tensorflow core common runtime gpu gpu device cc found device with properties name geforce gtx major minor memoryclockrate ghz pcibusid total memory free memory i c tf jenkins home workspace release win device gpu os windows tensorflow core common runtime gpu gpu device cc dma i c tf jenkins home workspace release win device gpu os windows tensorflow core common runtime gpu gpu device cc y i c tf jenkins home workspace release win device gpu os windows tensorflow core common runtime gpu gpu device cc creating tensorflow device gpu device name geforce gtx pci bus id warning tensorflow from train py in main initialize all variables from tensorflow python ops variables is deprecated and will be removed after instructions for updating use tf global variables initializer instead trying to restore saved checkpoints from logdir train no checkpoint found e c tf jenkins home workspace release win device gpu os windows tensorflow stream executor cuda cuda dnn cc could not create cudnn handle cudnn status internal error e c tf jenkins home workspace release win device gpu os windows tensorflow stream executor cuda cuda dnn cc could not destroy cudnn handle cudnn status bad param f c tf jenkins home workspace release win device gpu os windows tensorflow core kernels conv ops cc check failed stream parent getconvolvealgorithms algorithms what related github issues or stackoverflow threads have you found by searching the web for your problem environment info operating system windows installed version of cuda and cudnn cudnn august for cuda if installed from binary pip package provide a link to the pip package you installed the output from python c import tensorflow print tensorflow version i c tf jenkins home workspace release win device gpu os windows tensorflow stream executor dso loader cc successfully opened cuda library dll locally i c tf jenkins home workspace release win device gpu os windows tensorflow stream executor dso loader cc successfully opened cuda library dll locally i c tf jenkins home workspace release win device gpu os windows tensorflow stream executor dso loader cc successfully opened cuda library dll locally i c tf jenkins home workspace release win device gpu os windows tensorflow stream executor dso loader cc successfully opened cuda library nvcuda dll locally i c tf jenkins home workspace release win device gpu os windows tensorflow stream executor dso loader cc successfully opened cuda library dll locally traceback most recent call last file line in nameerror name tensor is not defined if installed from source provide the commit hash git rev parse head the output of bazel version what other attempted solutions have you tried i have tried reinstalling cuda different versions of cudnn looked at different issues with same error messages but nothing seemed to help | 0 |
632,711 | 20,205,134,614 | IssuesEvent | 2022-02-11 19:24:14 | googleapis/gapic-generator-python | https://api.github.com/repos/googleapis/gapic-generator-python | opened | Refactor unit test templates | type: feature request priority: p3 | Currently, unit tests in templates code are all concentrated in a single file `test_%service.py.j2`. The file has 3K lines and should be refactored.
One idea is to refactor across two dimensions: transport and sync/async (cc @software-dov). It's a big effort, so perhaps a smaller task is to first identify the common boilerplate code in the existing test file and lift it to a separate file.
| 1.0 | Refactor unit test templates - Currently, unit tests in templates code are all concentrated in a single file `test_%service.py.j2`. The file has 3K lines and should be refactored.
One idea is to refactor across two dimensions: transport and sync/async (cc @software-dov). It's a big effort, so perhaps a smaller task is to first identify the common boilerplate code in the existing test file and lift it to a separate file.
| non_main | refactor unit test templates currently unit tests in templates code are all concentrated in a single file test service py the file has lines and should be refactored one idea is to refactor across two dimensions transport and sync async cc software dov it s a big effort so perhaps a smaller task is to first identify the common boilerplate code in the existing test file and lift it to a separate file | 0 |
4,149 | 19,758,264,013 | IssuesEvent | 2022-01-16 00:57:15 | TabbycatDebate/tabbycat | https://api.github.com/repos/TabbycatDebate/tabbycat | closed | Investigate cache stampede mitigation | performance awaiting maintainer | Also known as the dog-piling / thundering herd / cache hammering problem. While this is typically a problem on very large sites it does seem to be the root of why Tabbycat can slow down — this typically comes when a couple of slow-loading pages are shown to the public and hit simultaneously. This seemingly causes requests to timeout as each request is hitting a cold cache.
#662 could go a way to avoiding this as the queries needed to construct the cache would presumably be cached.
The other option would be to warm the cache asynchronously, so multiple requests all wait for a single thread to calculate the view rather than each attempting to do so independently. A project such as [django-cacheback](https://github.com/codeinthehole/django-cacheback) implements this mechanism. It would require Celery and worker dynos to manage the queue although that's on the road map for shifting some tasks to workers already. Would need to also check that it plays nice with daphne / channels. | True | Investigate cache stampede mitigation - Also known as the dog-piling / thundering herd / cache hammering problem. While this is typically a problem on very large sites it does seem to be the root of why Tabbycat can slow down — this typically comes when a couple of slow-loading pages are shown to the public and hit simultaneously. This seemingly causes requests to timeout as each request is hitting a cold cache.
#662 could go a way to avoiding this as the queries needed to construct the cache would presumably be cached.
The other option would be to warm the cache asynchronously, so multiple requests all wait for a single thread to calculate the view rather than each attempting to do so independently. A project such as [django-cacheback](https://github.com/codeinthehole/django-cacheback) implements this mechanism. It would require Celery and worker dynos to manage the queue although that's on the road map for shifting some tasks to workers already. Would need to also check that it plays nice with daphne / channels. | main | investigate cache stampede mitigation also known as the dog piling thundering herd cache hammering problem while this is typically a problem on very large sites it does seem to be the root of why tabbycat can slow down — this typically comes when a couple of slow loading pages are shown to the public and hit simultaneously this seemingly causes requests to timeout as each request is hitting a cold cache could go a way to avoiding this as the queries needed to construct the cache would presumably be cached the other option would be to warm the cache asynchronously so multiple requests all wait for a single thread to calculate the view rather than each attempting to do so independently a project such as implements this mechanism it would require celery and worker dynos to manage the queue although that s on the road map for shifting some tasks to workers already would need to also check that it plays nice with daphne channels | 1 |
1,485 | 6,419,173,777 | IssuesEvent | 2017-08-08 20:37:14 | duckduckgo/zeroclickinfo-goodies | https://api.github.com/repos/duckduckgo/zeroclickinfo-goodies | closed | Conversions: Mb to MB is picked up as Megabyte to Bits | Bug Category: Highest Impact Tasks Maintainer Input Requested Topic: Conversions | Link with images for reference. https://twitter.com/Tzeejay/status/888073621083758592
Just so that you understand what I was trying to achieve, I was checking on network speeds and bits to bytes conversion in the head are always kind of impossible for me...
It would be great to see this fixed 👍🏻
------
IA Page: http://duck.co/ia/view/conversions
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @pjhampton | True | Conversions: Mb to MB is picked up as Megabyte to Bits - Link with images for reference. https://twitter.com/Tzeejay/status/888073621083758592
Just so that you understand what I was trying to achieve, I was checking on network speeds and bits to bytes conversion in the head are always kind of impossible for me...
It would be great to see this fixed 👍🏻
------
IA Page: http://duck.co/ia/view/conversions
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @pjhampton | main | conversions mb to mb is picked up as megabyte to bits link with images for reference just so that you understand what i was trying to achieve i was checking on network speeds and bits to bytes conversion in the head are always kind of impossible for me it would be great to see this fixed 👍🏻 ia page pjhampton | 1 |
4,078 | 19,268,945,605 | IssuesEvent | 2021-12-10 01:27:56 | aws/aws-sam-cli | https://api.github.com/repos/aws/aws-sam-cli | closed | aws sam local start-api regression on "provided" runtime missing header "lambda-runtime-invoked-function-arn" | type/bug area/local/start-api maintainer/need-response | ### Description:
Using AWS sam 1.3.2, runtime provided, it is possible to run a rustlang AWS api using sam local with runtime: provided. Under newer versions, the local versions of the deployment fail: (edit: the deployment itself succeeds, but the running lambda is not usable)
```
main' panicked at 'no entry found for key "lambda-runtime-invoked-function-arn"', /home/richja/.cargo/registry/src/github.com-1ecc6299db9ec823/lambda_runtime-0.3.0/src/types.rs2021-03-15 09:24:50,041 TRACE [want] signal found waiting giver, notifying
```
this seems to be triggered in : lambda-runtime-0.3.0, src/types.rs
```
invoked_function_arn: headers["lambda-runtime-invoked-function-arn"]
.to_str()
.expect("Missing arn; this is a bug")
.to_owned(),
```
### Steps to reproduce:
Make a rust lambda, using sam 1.3.2 - it will work.
Make a rust lambda, using sam 1.19 - it will work when deployed into AWS, but not locally, failing with above panic.
### Observed result:
panic in lambda_runtime
```
15: 0x7f78e1fb6139 - <lambda_runtime::types::Context as core::convert::TryFrom<http::header::map::HeaderMap>>::try_from::h0ca51065315b7aa3
16: 0x7f78e1f38cb8 - lambda_runtime::run::{{closure}}::h444da055591230ba
17: 0x7f78e1f44786 - <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll::h88d3f3d579db64e0
18: 0x7f78e1f3b47b - std::thread::local::LocalKey<T>::with::h09bc194ffe5b62c8
19: 0x7f78e1f9ec8d - tokio::park::thread::CachedParkThread::block_on::h230b34efeab4dda5
20: 0x7f78e1f8e494 - tokio::runtime::thread_pool::ThreadPool::block_on::h4aed425d9cb7bbeb
21: 0x7f78e1f5f5b8 - tokio::runtime::Runtime::block_on::h00f620b9f5b57519
```
this happens always, with any input
example template:
```
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Resources:
HelloRustFunction:
Type: AWS::Serverless::Function
Metadata:
BuildMethod: makefile
Properties:
FunctionName: HelloRust
Handler: bootstrap.is.real.handler
CodeUri: .
Runtime: provided
Timeout: 300
MemorySize: 2048
Environment:
Variables:
RUST_BACKTRACE: full
Events:
RootEvent:
Type: Api
Properties:
Path: /
Method: any
GatewayEvent:
Type: Api
Properties:
Path: /{proxy+}
Method: any
```
| True | aws sam local start-api regression on "provided" runtime missing header "lambda-runtime-invoked-function-arn" - ### Description:
Using AWS sam 1.3.2, runtime provided, it is possible to run a rustlang AWS api using sam local with runtime: provided. Under newer versions, the local versions of the deployment fail: (edit: the deployment itself succeeds, but the running lambda is not usable)
```
main' panicked at 'no entry found for key "lambda-runtime-invoked-function-arn"', /home/richja/.cargo/registry/src/github.com-1ecc6299db9ec823/lambda_runtime-0.3.0/src/types.rs2021-03-15 09:24:50,041 TRACE [want] signal found waiting giver, notifying
```
this seems to be triggered in : lambda-runtime-0.3.0, src/types.rs
```
invoked_function_arn: headers["lambda-runtime-invoked-function-arn"]
.to_str()
.expect("Missing arn; this is a bug")
.to_owned(),
```
### Steps to reproduce:
Make a rust lambda, using sam 1.3.2 - it will work.
Make a rust lambda, using sam 1.19 - it will work when deployed into AWS, but not locally, failing with above panic.
### Observed result:
panic in lambda_runtime
```
15: 0x7f78e1fb6139 - <lambda_runtime::types::Context as core::convert::TryFrom<http::header::map::HeaderMap>>::try_from::h0ca51065315b7aa3
16: 0x7f78e1f38cb8 - lambda_runtime::run::{{closure}}::h444da055591230ba
17: 0x7f78e1f44786 - <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll::h88d3f3d579db64e0
18: 0x7f78e1f3b47b - std::thread::local::LocalKey<T>::with::h09bc194ffe5b62c8
19: 0x7f78e1f9ec8d - tokio::park::thread::CachedParkThread::block_on::h230b34efeab4dda5
20: 0x7f78e1f8e494 - tokio::runtime::thread_pool::ThreadPool::block_on::h4aed425d9cb7bbeb
21: 0x7f78e1f5f5b8 - tokio::runtime::Runtime::block_on::h00f620b9f5b57519
```
this happens always, with any input
example template:
```
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Resources:
HelloRustFunction:
Type: AWS::Serverless::Function
Metadata:
BuildMethod: makefile
Properties:
FunctionName: HelloRust
Handler: bootstrap.is.real.handler
CodeUri: .
Runtime: provided
Timeout: 300
MemorySize: 2048
Environment:
Variables:
RUST_BACKTRACE: full
Events:
RootEvent:
Type: Api
Properties:
Path: /
Method: any
GatewayEvent:
Type: Api
Properties:
Path: /{proxy+}
Method: any
```
| main | aws sam local start api regression on provided runtime missing header lambda runtime invoked function arn description using aws sam runtime provided it is possible to run a rustlang aws api using sam local with runtime provided under newer versions the local versions of the deployment fail edit the deployment itself succeeds but the running lambda is not usable main panicked at no entry found for key lambda runtime invoked function arn home richja cargo registry src github com lambda runtime src types trace signal found waiting giver notifying this seems to be triggered in lambda runtime src types rs invoked function arn headers to str expect missing arn this is a bug to owned steps to reproduce make a rust lambda using sam it will work make a rust lambda using sam it will work when deployed into aws but not locally failing with above panic observed result panic in lambda runtime try from lambda runtime run closure as core future future future poll std thread local localkey with tokio park thread cachedparkthread block on tokio runtime thread pool threadpool block on tokio runtime runtime block on this happens always with any input example template awstemplateformatversion transform aws serverless resources hellorustfunction type aws serverless function metadata buildmethod makefile properties functionname hellorust handler bootstrap is real handler codeuri runtime provided timeout memorysize environment variables rust backtrace full events rootevent type api properties path method any gatewayevent type api properties path proxy method any | 1 |
265,384 | 23,164,160,958 | IssuesEvent | 2022-07-29 21:40:29 | pytorch/pytorch | https://api.github.com/repos/pytorch/pytorch | opened | DISABLED test_profiler_experimental_tree (__main__.TestProfilerTree) | module: flaky-tests skipped module: unknown | Platforms: linux, rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_profiler_experimental_tree&suite=TestProfilerTree&file=test_profiler_tree.py) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/7584372078).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 5 red and 5 green. | 1.0 | DISABLED test_profiler_experimental_tree (__main__.TestProfilerTree) - Platforms: linux, rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_profiler_experimental_tree&suite=TestProfilerTree&file=test_profiler_tree.py) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/7584372078).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 5 red and 5 green. | non_main | disabled test profiler experimental tree main testprofilertree platforms linux rocm this test was disabled because it is failing in ci see and the most recent trunk over the past hours it has been determined flaky in workflow s with red and green | 0 |
3,737 | 15,688,011,371 | IssuesEvent | 2021-03-25 14:15:57 | DynamoRIO/dynamorio | https://api.github.com/repos/DynamoRIO/dynamorio | opened | Improve isolation of IR from arch code | Maintainability | Xref #1409 which is another refactoring issue for better isolation into libraries
For #1684 we moved the IR files from core/arch to core/ir (PR #4321):
Moves all of the IR-related files (instruction generation, encoding,
decoding, disassembly, instructions, operands, instruction lists) from
core/arch to core/ir, mirroring the arch-specific subdirectories under
core/ir. This is a code cleanup step toward properly isolating the
drdecode library, as well as moving us toward the ability to build for
a separate target architecture from the host architecture and
eventually perhaps building in multiple target architectures in the
same binary for decoding and IR manipulation.
However, that was not a perfect split, as there is still code in ir/ that relates to
managed execution (mangling, etc.), and we still do not have clean library
boundaries: we're still pulling individual files from across the boundaries and
compiling them directly. This issue covers better isolation into true libraries. | True | Improve isolation of IR from arch code - Xref #1409 which is another refactoring issue for better isolation into libraries
For #1684 we moved the IR files from core/arch to core/ir (PR #4321):
Moves all of the IR-related files (instruction generation, encoding,
decoding, disassembly, instructions, operands, instruction lists) from
core/arch to core/ir, mirroring the arch-specific subdirectories under
core/ir. This is a code cleanup step toward properly isolating the
drdecode library, as well as moving us toward the ability to build for
a separate target architecture from the host architecture and
eventually perhaps building in multiple target architectures in the
same binary for decoding and IR manipulation.
However, that was not a perfect split, as there is still code in ir/ that relates to
managed execution (mangling, etc.), and we still do not have clean library
boundaries: we're still pulling individual files from across the boundaries and
compiling them directly. This issue covers better isolation into true libraries. | main | improve isolation of ir from arch code xref which is another refactoring issue for better isolation into libraries for we moved the ir files from core arch to core ir pr moves all of the ir related files instruction generation encoding decoding disassembly instructions operands instruction lists from core arch to core ir mirroring the arch specific subdirectories under core ir this is a code cleanup step toward properly isolating the drdecode library as well as moving us toward the ability to build for a separate target architecture from the host architecture and eventually perhaps building in multiple target architectures in the same binary for decoding and ir manipulation however that was not a perfect split as there is still code in ir that relates to managed execution mangling etc and we still do not have clean library boundaries we re still pulling individual files from across the boundaries and compiling them directly this issue covers better isolation into true libraries | 1 |
3,333 | 12,945,932,568 | IssuesEvent | 2020-07-18 17:03:32 | OpenRefine/OpenRefine | https://api.github.com/repos/OpenRefine/OpenRefine | opened | Update Vicino for N-gram clusterer bug fix | bug maintainability to be reviewed | We've got a bug fix for the SIMILE Vicino N-gram clusterer sitting at https://github.com/OpenRefine/simile-vicino/commit/e9e9eda18bf905f5a0ee6c04cc6a1b48d621b8c0 which never got published. We should publish a new version with the bug fix and update OpenRefine to use it.
We could use this opportunity to clean up the OpenRefine dependencies a little by:
- moving the `secondstring` and `arithcode` dependencies from OpenRefine to https://github.com/OpenRefine/simile-vicino
- switching to the official `secondstring` dependency from the original author instead of publishing our own
- switching vicino bzip2 dependency to use Apache commons-compress instead of ant-tools, as we've done for OpenRefine
- updating to arithcode-1.2. This is a very minor release, but it includes tests, which is a plus.
| True | Update Vicino for N-gram clusterer bug fix - We've got a bug fix for the SIMILE Vicino N-gram clusterer sitting at https://github.com/OpenRefine/simile-vicino/commit/e9e9eda18bf905f5a0ee6c04cc6a1b48d621b8c0 which never got published. We should publish a new version with the bug fix and update OpenRefine to use it.
We could use this opportunity to clean up the OpenRefine dependencies a little by:
- moving the `secondstring` and `arithcode` dependencies from OpenRefine to https://github.com/OpenRefine/simile-vicino
- switching to the official `secondstring` dependency from the original author instead of publishing our own
- switching vicino bzip2 dependency to use Apache commons-compress instead of ant-tools, as we've done for OpenRefine
- updating to arithcode-1.2. This is a very minor release, but it includes tests, which is a plus.
| main | update vicino for n gram clusterer bug fix we ve got a bug fix for the simile vicino n gram clusterer sitting at which never got published we should publish a new version with the bug fix and update openrefine to use it we could use this opportunity to clean up the openrefine dependencies a little by moving the secondstring and arithcode dependencies from openrefine to switching to the official secondstring dependency from the original author instead of publishing our own switching vicino dependency to use apache commons compress instead of ant tools as we ve done for openrefine updating to arithcode this is a very minor release but it includes tests which is a plus | 1 |
1,939 | 6,620,292,609 | IssuesEvent | 2017-09-21 15:05:45 | duckduckgo/zeroclickinfo-goodies | https://api.github.com/repos/duckduckgo/zeroclickinfo-goodies | closed | BRT: The server doesn't work anymore | Maintainer Approved Status: Tolerated | The server http://as777.brt.it is unreachable. Is there an other way to get the tracking information?
---
IA Page: http://duck.co/ia/view/brt
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @afelicioni
| True | BRT: The server doesn't work anymore - The server http://as777.brt.it is unreachable. Is there an other way to get the tracking information?
---
IA Page: http://duck.co/ia/view/brt
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @afelicioni
| main | brt the server doesn t work anymore the server is unreachable is there an other way to get the tracking information ia page afelicioni | 1 |
5,661 | 29,184,575,106 | IssuesEvent | 2023-05-19 14:27:34 | toolbx-images/images | https://api.github.com/repos/toolbx-images/images | opened | Withdraw distribution: Ubuntu | new-image-request maintainers-wanted | ### Distribution name and versions requested
Ubuntu versions 16.04, 18.04, 20.04, 22.04 and 22.10.
### Where are the official container images from the distribution published?
See: https://github.com/containers/toolbox/pull/483
The sources for the Ubuntu images are now being maintained at [github.com/containers/toolbox](https://github.com/containers/toolbox/tree/main/images/ubuntu), and are slowly [starting](https://github.com/containers/toolbox/pull/1291) to [diverge](https://github.com/containers/toolbox/pull/1292) from the sources in this repository. Therefore, it might be time to withdraw the old sources from here.
Note that the Ubuntu images are still being consumed from `quay.io/toolbx-images`. We might be changing it to `quay.io/toolbx` in the near future, but until that happens the built images shouldn't be withdrawn from the registry.
### Will you be interested in maintaining this image?
n/a | True | Withdraw distribution: Ubuntu - ### Distribution name and versions requested
Ubuntu versions 16.04, 18.04, 20.04, 22.04 and 22.10.
### Where are the official container images from the distribution published?
See: https://github.com/containers/toolbox/pull/483
The sources for the Ubuntu images are now being maintained at [github.com/containers/toolbox](https://github.com/containers/toolbox/tree/main/images/ubuntu), and are slowly [starting](https://github.com/containers/toolbox/pull/1291) to [diverge](https://github.com/containers/toolbox/pull/1292) from the sources in this repository. Therefore, it might be time to withdraw the old sources from here.
Note that the Ubuntu images are still being consumed from `quay.io/toolbx-images`. We might be changing it to `quay.io/toolbx` in the near future, but until that happens the built images shouldn't be withdrawn from the registry.
### Will you be interested in maintaining this image?
n/a | main | withdraw distribution ubuntu distribution name and versions requested ubuntu versions and where are the official container images from the distribution published see the sources for the ubuntu images are now being maintained at and are slowly to from the sources in this repository therefore it might be time to withdraw the old sources from here note that the ubuntu images are still being consumed from quay io toolbx images we might be changing it to quay io toolbx in the near future but until that happens the built images shouldn t be withdrawn from the registry will you be interested in maintaining this image n a | 1 |
23,289 | 10,867,734,742 | IssuesEvent | 2019-11-15 01:00:47 | LevyForchh/cruise-control | https://api.github.com/repos/LevyForchh/cruise-control | opened | CVE-2018-17196 (High) detected in kafka-clients-0.11.0.2.jar | security vulnerability | ## CVE-2018-17196 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>kafka-clients-0.11.0.2.jar</b></p></summary>
<p>null</p>
<p>Library home page: <a href="http://kafka.apache.org">http://kafka.apache.org</a></p>
<p>Path to dependency file: /cruise-control/build.gradle</p>
<p>Path to vulnerable library: /tmp/git/cruise-control/build.gradle,/cruise-control,/tmp/git/cruise-control,/cruise-control/build.gradle</p>
<p>
Dependency Hierarchy:
- :x: **kafka-clients-0.11.0.2.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Apache Kafka versions between 0.11.0.0 and 2.1.0, it is possible to manually craft a Produce request which bypasses transaction/idempotent ACL validation. Only authenticated clients with Write permission on the respective topics are able to exploit this vulnerability. Users should upgrade to 2.1.1 or later where this vulnerability has been fixed.
<p>Publish Date: 2019-07-11
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-17196>CVE-2018-17196</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: N/A
- Attack Complexity: N/A
- Privileges Required: N/A
- User Interaction: N/A
- Scope: N/A
- Impact Metrics:
- Confidentiality Impact: N/A
- Integrity Impact: N/A
- Availability Impact: N/A
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-17196">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-17196</a></p>
<p>Release Date: 2019-07-11</p>
<p>Fix Resolution: org.apache.kafka:kafka-clients:2.1.1</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix MR
<!-- REMEDIATE-OPEN-PR-END -->
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.apache.kafka","packageName":"kafka-clients","packageVersion":"0.11.0.2","isTransitiveDependency":false,"dependencyTree":"org.apache.kafka:kafka-clients:0.11.0.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.apache.kafka:kafka-clients:2.1.1"}],"vulnerabilityIdentifier":"CVE-2018-17196","vulnerabilityDetails":"In Apache Kafka versions between 0.11.0.0 and 2.1.0, it is possible to manually craft a Produce request which bypasses transaction/idempotent ACL validation. Only authenticated clients with Write permission on the respective topics are able to exploit this vulnerability. Users should upgrade to 2.1.1 or later where this vulnerability has been fixed.","vulnerabilityUrl":"https://cve.mitre.org/cgi-bin/cvename.cgi?name\u003dCVE-2018-17196","cvss3Severity":"high","cvss3Score":"8.8","cvss3Metrics":{"A":"N/A","AC":"N/A","PR":"N/A","S":"N/A","C":"N/A","UI":"N/A","AV":"N/A","I":"N/A"},"extraData":{}}</REMEDIATE> --> | True | CVE-2018-17196 (High) detected in kafka-clients-0.11.0.2.jar - ## CVE-2018-17196 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>kafka-clients-0.11.0.2.jar</b></p></summary>
<p>null</p>
<p>Library home page: <a href="http://kafka.apache.org">http://kafka.apache.org</a></p>
<p>Path to dependency file: /cruise-control/build.gradle</p>
<p>Path to vulnerable library: /tmp/git/cruise-control/build.gradle,/cruise-control,/tmp/git/cruise-control,/cruise-control/build.gradle</p>
<p>
Dependency Hierarchy:
- :x: **kafka-clients-0.11.0.2.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Apache Kafka versions between 0.11.0.0 and 2.1.0, it is possible to manually craft a Produce request which bypasses transaction/idempotent ACL validation. Only authenticated clients with Write permission on the respective topics are able to exploit this vulnerability. Users should upgrade to 2.1.1 or later where this vulnerability has been fixed.
<p>Publish Date: 2019-07-11
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-17196>CVE-2018-17196</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: N/A
- Attack Complexity: N/A
- Privileges Required: N/A
- User Interaction: N/A
- Scope: N/A
- Impact Metrics:
- Confidentiality Impact: N/A
- Integrity Impact: N/A
- Availability Impact: N/A
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-17196">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-17196</a></p>
<p>Release Date: 2019-07-11</p>
<p>Fix Resolution: org.apache.kafka:kafka-clients:2.1.1</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix MR
<!-- REMEDIATE-OPEN-PR-END -->
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.apache.kafka","packageName":"kafka-clients","packageVersion":"0.11.0.2","isTransitiveDependency":false,"dependencyTree":"org.apache.kafka:kafka-clients:0.11.0.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.apache.kafka:kafka-clients:2.1.1"}],"vulnerabilityIdentifier":"CVE-2018-17196","vulnerabilityDetails":"In Apache Kafka versions between 0.11.0.0 and 2.1.0, it is possible to manually craft a Produce request which bypasses transaction/idempotent ACL validation. Only authenticated clients with Write permission on the respective topics are able to exploit this vulnerability. Users should upgrade to 2.1.1 or later where this vulnerability has been fixed.","vulnerabilityUrl":"https://cve.mitre.org/cgi-bin/cvename.cgi?name\u003dCVE-2018-17196","cvss3Severity":"high","cvss3Score":"8.8","cvss3Metrics":{"A":"N/A","AC":"N/A","PR":"N/A","S":"N/A","C":"N/A","UI":"N/A","AV":"N/A","I":"N/A"},"extraData":{}}</REMEDIATE> --> | non_main | cve high detected in kafka clients jar cve high severity vulnerability vulnerable library kafka clients jar null library home page a href path to dependency file cruise control build gradle path to vulnerable library tmp git cruise control build gradle cruise control tmp git cruise control cruise control build gradle dependency hierarchy x kafka clients jar vulnerable library vulnerability details in apache kafka versions between and it is possible to manually craft a produce request which bypasses transaction idempotent acl validation only authenticated clients with write permission on the respective topics are able to exploit this vulnerability users should upgrade to or later where this vulnerability has been fixed publish date url a href cvss score details base score metrics exploitability metrics attack vector n a attack complexity n a privileges required n a user interaction n a scope n a impact metrics confidentiality impact n a integrity impact n a availability impact n a for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache kafka kafka clients check this box to open an automated fix mr isopenpronvulnerability false ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails in apache kafka versions between and it is possible to manually craft a produce request which bypasses transaction idempotent acl validation only authenticated clients with write permission on the respective topics are able to exploit this vulnerability users should upgrade to or later where this vulnerability has been fixed vulnerabilityurl | 0 |
22,781 | 11,732,868,525 | IssuesEvent | 2020-03-11 05:17:20 | postmanlabs/postman-app-support | https://api.github.com/repos/postmanlabs/postman-app-support | closed | Postman crashes while running GET in collection runner (large response) | bug collection-runner performance product/runtime | **Describe the bug**
The application crashes when running a GET request that has large response (34.42 MB, 518039 lines) in collection runner (there is no other request in the collection).
**To Reproduce**
Steps to reproduce the behavior:
1. In Collection runner, call a request that is expected to receive a large response
* environment is set
* 1 iteration
* 0 dealy
* no data file
* do not log response
**Expected behavior**
The app does not crash, the response is received
**Screenshots**
* Runner settings: https://share.getcloudapp.com/P8uWNnkb
* Postman stopped working: https://share.getcloudapp.com/geu0O6bk
**App information (please complete the following information):**
- App Type: Native App
- Postman v7.6.0
- OS: Windows 10
**Additional info**
- It didn't crash when I was using the older version (7.3.6)
- The response is received without any issue if the request is sent within the request builder (outside of the runner)
- If I send the request via Collection runner expecting a much smaller response, it does not crash | True | Postman crashes while running GET in collection runner (large response) - **Describe the bug**
The application crashes when running a GET request that has large response (34.42 MB, 518039 lines) in collection runner (there is no other request in the collection).
**To Reproduce**
Steps to reproduce the behavior:
1. In Collection runner, call a request that is expected to receive a large response
* environment is set
* 1 iteration
* 0 dealy
* no data file
* do not log response
**Expected behavior**
The app does not crash, the response is received
**Screenshots**
* Runner settings: https://share.getcloudapp.com/P8uWNnkb
* Postman stopped working: https://share.getcloudapp.com/geu0O6bk
**App information (please complete the following information):**
- App Type: Native App
- Postman v7.6.0
- OS: Windows 10
**Additional info**
- It didn't crash when I was using the older version (7.3.6)
- The response is received without any issue if the request is sent within the request builder (outside of the runner)
- If I send the request via Collection runner expecting a much smaller response, it does not crash | non_main | postman crashes while running get in collection runner large response describe the bug the application crashes when running a get request that has large response mb lines in collection runner there is no other request in the collection to reproduce steps to reproduce the behavior in collection runner call a request that is expected to receive a large response environment is set iteration dealy no data file do not log response expected behavior the app does not crash the response is received screenshots runner settings postman stopped working app information please complete the following information app type native app postman os windows additional info it didn t crash when i was using the older version the response is received without any issue if the request is sent within the request builder outside of the runner if i send the request via collection runner expecting a much smaller response it does not crash | 0 |
2,891 | 10,319,638,965 | IssuesEvent | 2019-08-30 18:05:40 | backdrop-ops/contrib | https://api.github.com/repos/backdrop-ops/contrib | closed | Become a contributor | Maintainer application | I have high hopes for using Backdrop (and convincing others to do the same). Thought I could assist with documentation first and move on to code later. Thanks for the consideration. | True | Become a contributor - I have high hopes for using Backdrop (and convincing others to do the same). Thought I could assist with documentation first and move on to code later. Thanks for the consideration. | main | become a contributor i have high hopes for using backdrop and convincing others to do the same thought i could assist with documentation first and move on to code later thanks for the consideration | 1 |
1,179 | 5,096,337,648 | IssuesEvent | 2017-01-03 17:52:41 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | user module on darwin: add support for hidden user | affects_2.0 feature_idea waiting_on_maintainer | ##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
user
##### ANSIBLE VERSION
```
$ ansible --version
ansible 2.0.2.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
Orchestrator: Ubuntu trusty
Target: Macos 10.11
##### SUMMARY
user module doesn't support hidden status for user
##### STEPS TO REPRODUCE
from
```- user: name={{ adduser_user_name }} password="{{ adduser_password }}" comment="{{ adduser_user_comments }}" shell=/bin/bash groups=admin append=yes
```
no parameter for hidden.
Corresponding command line documentation
https://support.apple.com/en-ca/HT203998
sudo dscl . create /Users/hiddenuser IsHidden 1
##### EXPECTED RESULTS
you should be able to specify hidden settings, apply it and respect idempotency
##### ACTUAL RESULTS
unsupported
```
| True | user module on darwin: add support for hidden user - ##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
user
##### ANSIBLE VERSION
```
$ ansible --version
ansible 2.0.2.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
Orchestrator: Ubuntu trusty
Target: Macos 10.11
##### SUMMARY
user module doesn't support hidden status for user
##### STEPS TO REPRODUCE
from
```- user: name={{ adduser_user_name }} password="{{ adduser_password }}" comment="{{ adduser_user_comments }}" shell=/bin/bash groups=admin append=yes
```
no parameter for hidden.
Corresponding command line documentation
https://support.apple.com/en-ca/HT203998
sudo dscl . create /Users/hiddenuser IsHidden 1
##### EXPECTED RESULTS
you should be able to specify hidden settings, apply it and respect idempotency
##### ACTUAL RESULTS
unsupported
```
| main | user module on darwin add support for hidden user issue type feature idea component name user ansible version ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration n a os environment orchestrator ubuntu trusty target macos summary user module doesn t support hidden status for user steps to reproduce from user name adduser user name password adduser password comment adduser user comments shell bin bash groups admin append yes no parameter for hidden corresponding command line documentation sudo dscl create users hiddenuser ishidden expected results you should be able to specify hidden settings apply it and respect idempotency actual results unsupported | 1 |
258,643 | 8,178,600,109 | IssuesEvent | 2018-08-28 14:14:28 | Theophilix/event-table-edit | https://api.github.com/repos/Theophilix/event-table-edit | closed | Frontend: Deleting column is still clickable | bug high priority | Even if in backend the use of the deleting column is not allowed for public users, it is still active and clickable:


We should have the same "greyed out if not allowed by ACL" - function also for the deleting column.
| 1.0 | Frontend: Deleting column is still clickable - Even if in backend the use of the deleting column is not allowed for public users, it is still active and clickable:


We should have the same "greyed out if not allowed by ACL" - function also for the deleting column.
| non_main | frontend deleting column is still clickable even if in backend the use of the deleting column is not allowed for public users it is still active and clickable we should have the same greyed out if not allowed by acl function also for the deleting column | 0 |
70,862 | 13,541,309,579 | IssuesEvent | 2020-09-16 15:43:04 | vektorprogrammet/vektorprogrammet | https://api.github.com/repos/vektorprogrammet/vektorprogrammet | closed | Symfony Insight: Your project must not use PHP super globals | Code Quality | Another critical suggestion that must be fixed in order for Symfony Insight to pass on our PRs

| 1.0 | Symfony Insight: Your project must not use PHP super globals - Another critical suggestion that must be fixed in order for Symfony Insight to pass on our PRs

| non_main | symfony insight your project must not use php super globals another critical suggestion that must be fixed in order for symfony insight to pass on our prs | 0 |
4,763 | 24,534,108,036 | IssuesEvent | 2022-10-11 19:03:53 | centerofci/mathesar | https://api.github.com/repos/centerofci/mathesar | opened | AssertionError when attempting to reflect our transformed library data | type: bug work: backend status: ready restricted: maintainers | ## Reproduce
1. Use our old Library data from cycle 2, supplied in [schema.zip](https://github.com/centerofci/mathesar/files/9759004/schema.zip).
1. Load `books_sim.tsv` and `patrons_sim.tsv` through the UI.
1. Execute `transform.sql` via psql.
1. Manually initiate a reflection via
`docker exec -it mathesar_service ./manage.py shell`
```py
from mathesar.state import reset_reflection
reset_reflection()
```
1. Observe an error `'Column' object has no attribute 'primary_key'`
1. Attempt to load the web interface and observe the following error:
<details>
<summary>Traceback</summary>
```
Environment:
Request Method: GET
Request URL: http://localhost:8000/mathesar_tables/1/
Django Version: 3.1.14
Python Version: 3.9.9
Installed Applications:
['django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'rest_framework',
'django_filters',
'django_property_filter',
'mathesar']
Installed Middleware:
['django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware']
Traceback (most recent call last):
File "/code/mathesar/models/base.py", line 675, in __getattribute__
return super().__getattribute__(name)
During handling of the above exception ('Column' object has no attribute 'primary_key'), another exception occurred:
File "/usr/local/lib/python3.9/site-packages/django/core/handlers/exception.py", line 47, in inner
response = get_response(request)
File "/usr/local/lib/python3.9/site-packages/django/core/handlers/base.py", line 181, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/usr/local/lib/python3.9/site-packages/django/contrib/auth/decorators.py", line 21, in _wrapped_view
return view_func(request, *args, **kwargs)
File "/code/mathesar/views.py", line 120, in schema_home
'common_data': get_common_data(request, database, schema)
File "/code/mathesar/views.py", line 68, in get_common_data
'schemas': get_schema_list(request, database),
File "/code/mathesar/views.py", line 15, in get_schema_list
Schema.objects.filter(database=database),
File "/usr/local/lib/python3.9/site-packages/django/db/models/manager.py", line 85, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/code/mathesar/models/base.py", line 70, in get_queryset
make_sure_initial_reflection_happened()
File "/code/mathesar/state/base.py", line 8, in make_sure_initial_reflection_happened
reset_reflection()
File "/code/mathesar/state/base.py", line 27, in reset_reflection
_trigger_django_model_reflection()
File "/code/mathesar/state/base.py", line 31, in _trigger_django_model_reflection
reflect_db_objects(metadata=get_cached_metadata())
File "/code/mathesar/state/django.py", line 44, in reflect_db_objects
reflect_columns_from_tables(tables, metadata=metadata)
File "/code/mathesar/state/django.py", line 123, in reflect_columns_from_tables
models._compute_preview_template(table)
File "/code/mathesar/models/base.py", line 859, in _compute_preview_template
if column.primary_key:
File "/code/mathesar/models/base.py", line 681, in __getattribute__
return getattr(self._sa_column, name)
File "/code/mathesar/models/base.py", line 675, in __getattribute__
return super().__getattribute__(name)
File "/code/mathesar/models/base.py", line 696, in _sa_column
return self.table.sa_columns[self.name]
File "/code/mathesar/models/base.py", line 675, in __getattribute__
return super().__getattribute__(name)
File "/code/mathesar/state/cached_property.py", line 62, in __get__
new_value = self.original_get_fn(instance)
File "/code/mathesar/models/base.py", line 718, in name
assert type(name) is str
Exception Type: AssertionError at /mathesar_tables/1/
Exception Value:
```
</details>
I'm putting this in "Post-Release Improvements" because it seems unlikely that a user would hit it.
| True | AssertionError when attempting to reflect our transformed library data - ## Reproduce
1. Use our old Library data from cycle 2, supplied in [schema.zip](https://github.com/centerofci/mathesar/files/9759004/schema.zip).
1. Load `books_sim.tsv` and `patrons_sim.tsv` through the UI.
1. Execute `transform.sql` via psql.
1. Manually initiate a reflection via
`docker exec -it mathesar_service ./manage.py shell`
```py
from mathesar.state import reset_reflection
reset_reflection()
```
1. Observe an error `'Column' object has no attribute 'primary_key'`
1. Attempt to load the web interface and observe the following error:
<details>
<summary>Traceback</summary>
```
Environment:
Request Method: GET
Request URL: http://localhost:8000/mathesar_tables/1/
Django Version: 3.1.14
Python Version: 3.9.9
Installed Applications:
['django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'rest_framework',
'django_filters',
'django_property_filter',
'mathesar']
Installed Middleware:
['django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware']
Traceback (most recent call last):
File "/code/mathesar/models/base.py", line 675, in __getattribute__
return super().__getattribute__(name)
During handling of the above exception ('Column' object has no attribute 'primary_key'), another exception occurred:
File "/usr/local/lib/python3.9/site-packages/django/core/handlers/exception.py", line 47, in inner
response = get_response(request)
File "/usr/local/lib/python3.9/site-packages/django/core/handlers/base.py", line 181, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/usr/local/lib/python3.9/site-packages/django/contrib/auth/decorators.py", line 21, in _wrapped_view
return view_func(request, *args, **kwargs)
File "/code/mathesar/views.py", line 120, in schema_home
'common_data': get_common_data(request, database, schema)
File "/code/mathesar/views.py", line 68, in get_common_data
'schemas': get_schema_list(request, database),
File "/code/mathesar/views.py", line 15, in get_schema_list
Schema.objects.filter(database=database),
File "/usr/local/lib/python3.9/site-packages/django/db/models/manager.py", line 85, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/code/mathesar/models/base.py", line 70, in get_queryset
make_sure_initial_reflection_happened()
File "/code/mathesar/state/base.py", line 8, in make_sure_initial_reflection_happened
reset_reflection()
File "/code/mathesar/state/base.py", line 27, in reset_reflection
_trigger_django_model_reflection()
File "/code/mathesar/state/base.py", line 31, in _trigger_django_model_reflection
reflect_db_objects(metadata=get_cached_metadata())
File "/code/mathesar/state/django.py", line 44, in reflect_db_objects
reflect_columns_from_tables(tables, metadata=metadata)
File "/code/mathesar/state/django.py", line 123, in reflect_columns_from_tables
models._compute_preview_template(table)
File "/code/mathesar/models/base.py", line 859, in _compute_preview_template
if column.primary_key:
File "/code/mathesar/models/base.py", line 681, in __getattribute__
return getattr(self._sa_column, name)
File "/code/mathesar/models/base.py", line 675, in __getattribute__
return super().__getattribute__(name)
File "/code/mathesar/models/base.py", line 696, in _sa_column
return self.table.sa_columns[self.name]
File "/code/mathesar/models/base.py", line 675, in __getattribute__
return super().__getattribute__(name)
File "/code/mathesar/state/cached_property.py", line 62, in __get__
new_value = self.original_get_fn(instance)
File "/code/mathesar/models/base.py", line 718, in name
assert type(name) is str
Exception Type: AssertionError at /mathesar_tables/1/
Exception Value:
```
</details>
I'm putting this in "Post-Release Improvements" because it seems unlikely that a user would hit it.
| main | assertionerror when attempting to reflect our transformed library data reproduce use our old library data from cycle supplied in load books sim tsv and patrons sim tsv through the ui execute transform sql via psql manually initiate a reflection via docker exec it mathesar service manage py shell py from mathesar state import reset reflection reset reflection observe an error column object has no attribute primary key attempt to load the web interface and observe the following error traceback environment request method get request url django version python version installed applications django contrib admin django contrib auth django contrib contenttypes django contrib sessions django contrib messages django contrib staticfiles rest framework django filters django property filter mathesar installed middleware django middleware security securitymiddleware django contrib sessions middleware sessionmiddleware django middleware common commonmiddleware django middleware csrf csrfviewmiddleware django contrib auth middleware authenticationmiddleware django contrib messages middleware messagemiddleware django middleware clickjacking xframeoptionsmiddleware traceback most recent call last file code mathesar models base py line in getattribute return super getattribute name during handling of the above exception column object has no attribute primary key another exception occurred file usr local lib site packages django core handlers exception py line in inner response get response request file usr local lib site packages django core handlers base py line in get response response wrapped callback request callback args callback kwargs file usr local lib site packages django contrib auth decorators py line in wrapped view return view func request args kwargs file code mathesar views py line in schema home common data get common data request database schema file code mathesar views py line in get common data schemas get schema list request database file code mathesar views py line in get schema list schema objects filter database database file usr local lib site packages django db models manager py line in manager method return getattr self get queryset name args kwargs file code mathesar models base py line in get queryset make sure initial reflection happened file code mathesar state base py line in make sure initial reflection happened reset reflection file code mathesar state base py line in reset reflection trigger django model reflection file code mathesar state base py line in trigger django model reflection reflect db objects metadata get cached metadata file code mathesar state django py line in reflect db objects reflect columns from tables tables metadata metadata file code mathesar state django py line in reflect columns from tables models compute preview template table file code mathesar models base py line in compute preview template if column primary key file code mathesar models base py line in getattribute return getattr self sa column name file code mathesar models base py line in getattribute return super getattribute name file code mathesar models base py line in sa column return self table sa columns file code mathesar models base py line in getattribute return super getattribute name file code mathesar state cached property py line in get new value self original get fn instance file code mathesar models base py line in name assert type name is str exception type assertionerror at mathesar tables exception value i m putting this in post release improvements because it seems unlikely that a user would hit it | 1 |
616,807 | 19,321,244,049 | IssuesEvent | 2021-12-14 06:00:03 | codyseibert/fire-stone-hosting | https://api.github.com/repos/codyseibert/fire-stone-hosting | reopened | [Bug]: upsert call in createNodePersistence.ts method causes postgres to shutdown | bug high priority | ### Describe the bug
When I start the agent, it seems to shutdown the postgres database. All the agent does when it first starts is invoke the api and send some requests about it's memory and ip. If I hit the APIs /nodes endpoint directly using thunderclient, I don't have any issues, but for some reason when the agent hits the endpoint, it seems like postgres (which is from docker compose) shutsdown.
```
2021-12-14 05:31:38.942 UTC [1] LOG: received fast shutdown request
2021-12-14 05:31:38.949 UTC [1] LOG: aborting any active transactions
2021-12-14 05:31:38.952 UTC [1] LOG: background worker "logical replication launcher" (PID 27) exited with exit code 1
2021-12-14 05:31:38.952 UTC [29] FATAL: terminating connection due to administrator command
2021-12-14 05:31:38.952 UTC [28] FATAL: terminating connection due to administrator command
2021-12-14 05:31:38.957 UTC [22] LOG: shutting down
2021-12-14 05:31:39.008 UTC [1] LOG: database system is shut down
```
### Steps to reproduce the behavior
1. start the api
2. start the agent
3. the agent will hit the api to register itself
4. after the upsert call, the postgres database will shutdown
### Expected behavior
postgres should not shutdown
### Actual behavior
postgres is shutting down
| 1.0 | [Bug]: upsert call in createNodePersistence.ts method causes postgres to shutdown - ### Describe the bug
When I start the agent, it seems to shutdown the postgres database. All the agent does when it first starts is invoke the api and send some requests about it's memory and ip. If I hit the APIs /nodes endpoint directly using thunderclient, I don't have any issues, but for some reason when the agent hits the endpoint, it seems like postgres (which is from docker compose) shutsdown.
```
2021-12-14 05:31:38.942 UTC [1] LOG: received fast shutdown request
2021-12-14 05:31:38.949 UTC [1] LOG: aborting any active transactions
2021-12-14 05:31:38.952 UTC [1] LOG: background worker "logical replication launcher" (PID 27) exited with exit code 1
2021-12-14 05:31:38.952 UTC [29] FATAL: terminating connection due to administrator command
2021-12-14 05:31:38.952 UTC [28] FATAL: terminating connection due to administrator command
2021-12-14 05:31:38.957 UTC [22] LOG: shutting down
2021-12-14 05:31:39.008 UTC [1] LOG: database system is shut down
```
### Steps to reproduce the behavior
1. start the api
2. start the agent
3. the agent will hit the api to register itself
4. after the upsert call, the postgres database will shutdown
### Expected behavior
postgres should not shutdown
### Actual behavior
postgres is shutting down
| non_main | upsert call in createnodepersistence ts method causes postgres to shutdown describe the bug when i start the agent it seems to shutdown the postgres database all the agent does when it first starts is invoke the api and send some requests about it s memory and ip if i hit the apis nodes endpoint directly using thunderclient i don t have any issues but for some reason when the agent hits the endpoint it seems like postgres which is from docker compose shutsdown utc log received fast shutdown request utc log aborting any active transactions utc log background worker logical replication launcher pid exited with exit code utc fatal terminating connection due to administrator command utc fatal terminating connection due to administrator command utc log shutting down utc log database system is shut down steps to reproduce the behavior start the api start the agent the agent will hit the api to register itself after the upsert call the postgres database will shutdown expected behavior postgres should not shutdown actual behavior postgres is shutting down | 0 |
1,040 | 4,845,426,833 | IssuesEvent | 2016-11-10 08:10:37 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | Haproxy module fails - itertools.imap object has no attribute __getitem__ | affects_2.2 bug_report networking waiting_on_maintainer | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
haproxy module
##### ANSIBLE VERSION
```
ansible 2.2.0.0
```
##### CONFIGURATION
##### OS / ENVIRONMENT
Ubuntu 14.04.1 LTS
##### SUMMARY
Can't change backend status using haproxy module with ansible 2.2.0.0. It fails with unexpected error.
##### STEPS TO REPRODUCE
playbook :
```yaml
- name: "disable backend server"
haproxy: "state=disabled host={{ haproxy_server_name }} socket={{ hostvars[item]['haproxy_socket_path'] }} backend={{ haproxy_backend_name }} wait=yes"
delegate_to: "{{ item }}"
with_items: "{{ groups['proxy'] }}"
become_user: root
```
inventory:
```yaml
haproxy-01
haproxy-02
[proxy]
haproxy-01
haproxy-02
```
group_vars:
```yaml
haproxy_socket_path: /run/haproxy.sock
```
host_vars:
``` yaml
haproxy_server_name: app01
```
vars:
``` yaml
haproxy_backend_name: myapp
```
##### EXPECTED RESULTS
```
TASK [disable backend server] ***************************
changed: [appserver01 -> 172.X.X.X] => (item=haproxy-01)
```
##### ACTUAL RESULTS
```
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: TypeError: 'itertools.imap' object has no attribute '__getitem__'
failed: [appserver01 -> 172.X.X.X] (item=haproxy-01) => {"failed": true, "item": "haproxy-01", "module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_eJMmvQ/ansible_module_haproxy.py\", line 350, in <module>\n main()\n File \"/tmp/ansible_eJMmvQ/ansible_module_haproxy.py\", line 345, in main\n ansible_haproxy.act()\n File \"/tmp/ansible_eJMmvQ/ansible_module_haproxy.py\", line 306, in act\n self.disabled(self.host, self.backend, self.shutdown_sessions)\n File \"/tmp/ansible_eJMmvQ/ansible_module_haproxy.py\", line 291, in disabled\n self.execute_for_backends(cmd, backend, host, 'MAINT')\n File \"/tmp/ansible_eJMmvQ/ansible_module_haproxy.py\", line 237, in execute_for_backends\n self.wait_until_status(backend, svname, wait_for_status)\n File \"/tmp/ansible_eJMmvQ/ansible_module_haproxy.py\", line 262, in wait_until_status\n if state[0]['status'] == status:\nTypeError: 'itertools.imap' object has no attribute '__getitem__'\n", "module_stdout": "", "msg": "MODULE FAILURE"}
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: TypeError: 'itertools.imap' object has no attribute '__getitem__'
failed: [appserver01 -> 172.X.X.Y] (item=haproxy-02) => {"failed": true, "item": "haproxy-02", "module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_X3ixtX/ansible_module_haproxy.py\", line 350, in <module>\n main()\n File \"/tmp/ansible_X3ixtX/ansible_module_haproxy.py\", line 345, in main\n ansible_haproxy.act()\n File \"/tmp/ansible_X3ixtX/ansible_module_haproxy.py\", line 306, in act\n self.disabled(self.host, self.backend, self.shutdown_sessions)\n File \"/tmp/ansible_X3ixtX/ansible_module_haproxy.py\", line 291, in disabled\n self.execute_for_backends(cmd, backend, host, 'MAINT')\n File \"/tmp/ansible_X3ixtX/ansible_module_haproxy.py\", line 237, in execute_for_backends\n self.wait_until_status(backend, svname, wait_for_status)\n File \"/tmp/ansible_X3ixtX/ansible_module_haproxy.py\", line 262, in wait_until_status\n if state[0]['status'] == status:\nTypeError: 'itertools.imap' object has no attribute '__getitem__'\n", "module_stdout": "", "msg": "MODULE FAILURE"}
```
This relates to https://github.com/ansible/ansible-modules-extras/issues/3364. | True | Haproxy module fails - itertools.imap object has no attribute __getitem__ - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
haproxy module
##### ANSIBLE VERSION
```
ansible 2.2.0.0
```
##### CONFIGURATION
##### OS / ENVIRONMENT
Ubuntu 14.04.1 LTS
##### SUMMARY
Can't change backend status using haproxy module with ansible 2.2.0.0. It fails with unexpected error.
##### STEPS TO REPRODUCE
playbook :
```yaml
- name: "disable backend server"
haproxy: "state=disabled host={{ haproxy_server_name }} socket={{ hostvars[item]['haproxy_socket_path'] }} backend={{ haproxy_backend_name }} wait=yes"
delegate_to: "{{ item }}"
with_items: "{{ groups['proxy'] }}"
become_user: root
```
inventory:
```yaml
haproxy-01
haproxy-02
[proxy]
haproxy-01
haproxy-02
```
group_vars:
```yaml
haproxy_socket_path: /run/haproxy.sock
```
host_vars:
``` yaml
haproxy_server_name: app01
```
vars:
``` yaml
haproxy_backend_name: myapp
```
##### EXPECTED RESULTS
```
TASK [disable backend server] ***************************
changed: [appserver01 -> 172.X.X.X] => (item=haproxy-01)
```
##### ACTUAL RESULTS
```
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: TypeError: 'itertools.imap' object has no attribute '__getitem__'
failed: [appserver01 -> 172.X.X.X] (item=haproxy-01) => {"failed": true, "item": "haproxy-01", "module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_eJMmvQ/ansible_module_haproxy.py\", line 350, in <module>\n main()\n File \"/tmp/ansible_eJMmvQ/ansible_module_haproxy.py\", line 345, in main\n ansible_haproxy.act()\n File \"/tmp/ansible_eJMmvQ/ansible_module_haproxy.py\", line 306, in act\n self.disabled(self.host, self.backend, self.shutdown_sessions)\n File \"/tmp/ansible_eJMmvQ/ansible_module_haproxy.py\", line 291, in disabled\n self.execute_for_backends(cmd, backend, host, 'MAINT')\n File \"/tmp/ansible_eJMmvQ/ansible_module_haproxy.py\", line 237, in execute_for_backends\n self.wait_until_status(backend, svname, wait_for_status)\n File \"/tmp/ansible_eJMmvQ/ansible_module_haproxy.py\", line 262, in wait_until_status\n if state[0]['status'] == status:\nTypeError: 'itertools.imap' object has no attribute '__getitem__'\n", "module_stdout": "", "msg": "MODULE FAILURE"}
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: TypeError: 'itertools.imap' object has no attribute '__getitem__'
failed: [appserver01 -> 172.X.X.Y] (item=haproxy-02) => {"failed": true, "item": "haproxy-02", "module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_X3ixtX/ansible_module_haproxy.py\", line 350, in <module>\n main()\n File \"/tmp/ansible_X3ixtX/ansible_module_haproxy.py\", line 345, in main\n ansible_haproxy.act()\n File \"/tmp/ansible_X3ixtX/ansible_module_haproxy.py\", line 306, in act\n self.disabled(self.host, self.backend, self.shutdown_sessions)\n File \"/tmp/ansible_X3ixtX/ansible_module_haproxy.py\", line 291, in disabled\n self.execute_for_backends(cmd, backend, host, 'MAINT')\n File \"/tmp/ansible_X3ixtX/ansible_module_haproxy.py\", line 237, in execute_for_backends\n self.wait_until_status(backend, svname, wait_for_status)\n File \"/tmp/ansible_X3ixtX/ansible_module_haproxy.py\", line 262, in wait_until_status\n if state[0]['status'] == status:\nTypeError: 'itertools.imap' object has no attribute '__getitem__'\n", "module_stdout": "", "msg": "MODULE FAILURE"}
```
This relates to https://github.com/ansible/ansible-modules-extras/issues/3364. | main | haproxy module fails itertools imap object has no attribute getitem issue type bug report component name haproxy module ansible version ansible configuration os environment ubuntu lts summary can t change backend status using haproxy module with ansible it fails with unexpected error steps to reproduce playbook yaml name disable backend server haproxy state disabled host haproxy server name socket hostvars backend haproxy backend name wait yes delegate to item with items groups become user root inventory yaml haproxy haproxy haproxy haproxy group vars yaml haproxy socket path run haproxy sock host vars yaml haproxy server name vars yaml haproxy backend name myapp expected results task changed item haproxy actual results an exception occurred during task execution to see the full traceback use vvv the error was typeerror itertools imap object has no attribute getitem failed item haproxy failed true item haproxy module stderr traceback most recent call last n file tmp ansible ejmmvq ansible module haproxy py line in n main n file tmp ansible ejmmvq ansible module haproxy py line in main n ansible haproxy act n file tmp ansible ejmmvq ansible module haproxy py line in act n self disabled self host self backend self shutdown sessions n file tmp ansible ejmmvq ansible module haproxy py line in disabled n self execute for backends cmd backend host maint n file tmp ansible ejmmvq ansible module haproxy py line in execute for backends n self wait until status backend svname wait for status n file tmp ansible ejmmvq ansible module haproxy py line in wait until status n if state status ntypeerror itertools imap object has no attribute getitem n module stdout msg module failure an exception occurred during task execution to see the full traceback use vvv the error was typeerror itertools imap object has no attribute getitem failed item haproxy failed true item haproxy module stderr traceback most recent call last n file tmp ansible ansible module haproxy py line in n main n file tmp ansible ansible module haproxy py line in main n ansible haproxy act n file tmp ansible ansible module haproxy py line in act n self disabled self host self backend self shutdown sessions n file tmp ansible ansible module haproxy py line in disabled n self execute for backends cmd backend host maint n file tmp ansible ansible module haproxy py line in execute for backends n self wait until status backend svname wait for status n file tmp ansible ansible module haproxy py line in wait until status n if state status ntypeerror itertools imap object has no attribute getitem n module stdout msg module failure this relates to | 1 |
3,485 | 13,543,287,381 | IssuesEvent | 2020-09-16 18:43:04 | tgstation/tgstation-server | https://api.github.com/repos/tgstation/tgstation-server | opened | Integration Regression Test: Head and Tail includes | Area: Compiler Good First Issue Hacktoberfest Maintainability Issue | See #1122
Adjust one of the DMAPI test environments so that it will fail without some sort of head AND tail include. | True | Integration Regression Test: Head and Tail includes - See #1122
Adjust one of the DMAPI test environments so that it will fail without some sort of head AND tail include. | main | integration regression test head and tail includes see adjust one of the dmapi test environments so that it will fail without some sort of head and tail include | 1 |
503 | 3,835,896,273 | IssuesEvent | 2016-04-01 15:55:06 | canadainc/sunnah10 | https://api.github.com/repos/canadainc/sunnah10 | opened | Implement RSS feed generator | Component-Logic Component-UI enhancement Maintainability Usability | Allow generating JSON files that can be used in the various BB10 apps for importing. | True | Implement RSS feed generator - Allow generating JSON files that can be used in the various BB10 apps for importing. | main | implement rss feed generator allow generating json files that can be used in the various apps for importing | 1 |
3,307 | 12,810,620,534 | IssuesEvent | 2020-07-03 19:20:02 | OpenRefine/OpenRefine | https://api.github.com/repos/OpenRefine/OpenRefine | closed | Spurious test failures in ColumnAdditionByFetchingURLsOperationTests | bug maintainability to be reviewed | **Describe the bug**
Sometimes the following test fails:
```
[ERROR] Tests run: 704, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 75.502 s <<< FAILURE! - in TestSuite
[ERROR] testHttpHeaders(com.google.refine.operations.column.ColumnAdditionByFetchingURLsOperationTests) Time elapsed: 1.751 s <<< FAILURE!
java.lang.AssertionError: expected [false] but found [true]
at com.google.refine.operations.column.ColumnAdditionByFetchingURLsOperationTests.runAndWait(ColumnAdditionByFetchingURLsOperationTests.java:133)
at com.google.refine.operations.column.ColumnAdditionByFetchingURLsOperationTests.testHttpHeaders(ColumnAdditionByFetchingURLsOperationTests.java:266)
```
**To Reproduce**
Run the tests many times? I find it curious that it does not happen on Travis more often, perhaps my machine is just slow.
**Desktop<!-- (please complete the following information)-->:**
- OS: Linux
- JRE or JDK Version: OpenJDK 11
**OpenRefine <!--(please complete the following information)-->:**
- master branch | True | Spurious test failures in ColumnAdditionByFetchingURLsOperationTests - **Describe the bug**
Sometimes the following test fails:
```
[ERROR] Tests run: 704, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 75.502 s <<< FAILURE! - in TestSuite
[ERROR] testHttpHeaders(com.google.refine.operations.column.ColumnAdditionByFetchingURLsOperationTests) Time elapsed: 1.751 s <<< FAILURE!
java.lang.AssertionError: expected [false] but found [true]
at com.google.refine.operations.column.ColumnAdditionByFetchingURLsOperationTests.runAndWait(ColumnAdditionByFetchingURLsOperationTests.java:133)
at com.google.refine.operations.column.ColumnAdditionByFetchingURLsOperationTests.testHttpHeaders(ColumnAdditionByFetchingURLsOperationTests.java:266)
```
**To Reproduce**
Run the tests many times? I find it curious that it does not happen on Travis more often, perhaps my machine is just slow.
**Desktop<!-- (please complete the following information)-->:**
- OS: Linux
- JRE or JDK Version: OpenJDK 11
**OpenRefine <!--(please complete the following information)-->:**
- master branch | main | spurious test failures in columnadditionbyfetchingurlsoperationtests describe the bug sometimes the following test fails tests run failures errors skipped time elapsed s failure in testsuite testhttpheaders com google refine operations column columnadditionbyfetchingurlsoperationtests time elapsed s failure java lang assertionerror expected but found at com google refine operations column columnadditionbyfetchingurlsoperationtests runandwait columnadditionbyfetchingurlsoperationtests java at com google refine operations column columnadditionbyfetchingurlsoperationtests testhttpheaders columnadditionbyfetchingurlsoperationtests java to reproduce run the tests many times i find it curious that it does not happen on travis more often perhaps my machine is just slow desktop os linux jre or jdk version openjdk openrefine master branch | 1 |
1,622 | 6,572,646,371 | IssuesEvent | 2017-09-11 04:02:38 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | user module should allow setting of primary group id | affects_2.0 bug_report waiting_on_maintainer | ##### Issue Type:
- Bug Report
##### Component Name:
user module
##### Ansible Version:
```
ansible 2.0.1.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### Environment:
Most relevant to Unix targets.
##### Summary:
A common pattern (e.g. it's the default behaviour in Debian) is to create user and group with the same name and id - e.g. user mcv21 group mcv21 have the same UID. The user module is unable to do this - you can specify user id and primary group, but not the id of the primary group
##### Steps To Reproduce:
An example usage:
```
- name: create a user
user: name=foo uid=1021 group=foo gid=1021
```
| True | user module should allow setting of primary group id - ##### Issue Type:
- Bug Report
##### Component Name:
user module
##### Ansible Version:
```
ansible 2.0.1.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### Environment:
Most relevant to Unix targets.
##### Summary:
A common pattern (e.g. it's the default behaviour in Debian) is to create user and group with the same name and id - e.g. user mcv21 group mcv21 have the same UID. The user module is unable to do this - you can specify user id and primary group, but not the id of the primary group
##### Steps To Reproduce:
An example usage:
```
- name: create a user
user: name=foo uid=1021 group=foo gid=1021
```
| main | user module should allow setting of primary group id issue type bug report component name user module ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides environment most relevant to unix targets summary a common pattern e g it s the default behaviour in debian is to create user and group with the same name and id e g user group have the same uid the user module is unable to do this you can specify user id and primary group but not the id of the primary group steps to reproduce an example usage name create a user user name foo uid group foo gid | 1 |
3,250 | 12,394,781,494 | IssuesEvent | 2020-05-20 17:30:11 | DaveOri/snowScatt | https://api.github.com/repos/DaveOri/snowScatt | opened | index of snowLibrary | enhancement maintainability | Currently, the snow particle properties are loaded into a dictionary of dictionaries in snowProperties._readProperties.py
We need to find a better way. It is already crowded, chances to make a mistake are high (happened already for a typo in model A and B of rimed particles) and the source file is only gonna be more polluted
Ideas:
1) separate .py file for the library (dict of dicts)
2) .csv index file (contributors will have to load table file and edit the index)
3) A periodic gathering of data into a binary hdf5 or netCDF file (separate code execution from the library, loose easy version control, better to upload the binaries on a different location) | True | index of snowLibrary - Currently, the snow particle properties are loaded into a dictionary of dictionaries in snowProperties._readProperties.py
We need to find a better way. It is already crowded, chances to make a mistake are high (happened already for a typo in model A and B of rimed particles) and the source file is only gonna be more polluted
Ideas:
1) separate .py file for the library (dict of dicts)
2) .csv index file (contributors will have to load table file and edit the index)
3) A periodic gathering of data into a binary hdf5 or netCDF file (separate code execution from the library, loose easy version control, better to upload the binaries on a different location) | main | index of snowlibrary currently the snow particle properties are loaded into a dictionary of dictionaries in snowproperties readproperties py we need to find a better way it is already crowded chances to make a mistake are high happened already for a typo in model a and b of rimed particles and the source file is only gonna be more polluted ideas separate py file for the library dict of dicts csv index file contributors will have to load table file and edit the index a periodic gathering of data into a binary or netcdf file separate code execution from the library loose easy version control better to upload the binaries on a different location | 1 |
9,702 | 6,973,973,274 | IssuesEvent | 2017-12-11 22:29:43 | grpc/grpc | https://api.github.com/repos/grpc/grpc | closed | Performance benchmarking : allow multi-phase warmup | area/performance/benchmarking | A language like Java needs a multi-phase warmup with a quiet period in between to enable the JIT to work. This will require changes to both the driver and client.
| True | Performance benchmarking : allow multi-phase warmup - A language like Java needs a multi-phase warmup with a quiet period in between to enable the JIT to work. This will require changes to both the driver and client.
| non_main | performance benchmarking allow multi phase warmup a language like java needs a multi phase warmup with a quiet period in between to enable the jit to work this will require changes to both the driver and client | 0 |
1,575 | 6,572,335,970 | IssuesEvent | 2017-09-11 01:29:46 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | S3 Maven repos with a . in the bucket name generates invalid urls. | affects_2.3 aws bug_report cloud waiting_on_maintainer | ##### ISSUE TYPE
Bug Report
##### COMPONENT NAME
s3_bucket module
##### ANSIBLE VERSION
N/A
##### SUMMARY
I have a valid bucket with a maven repository:
$ aws s3 ls s3://maven.tigerteamconsulting.local/
PRE release/
PRE snapshot/
And when I use the following task:
- name: install application
maven_artifact:
group_id: com.tigerteamconsulting
artifact_id: someapp
extension: war
repository_url: s3://maven.tigerteamconsulting.local/
I see the following:
fatal: [timesheet.test.tigerteamconsulting.local]: FAILED! => {"changed": false, "failed": true, "msg": "Failed to download maven-metadata.xml because of HTTP Error 403: Forbiddenfor URL https://maven.s3.amazonaws.com/com/tigerteamconsulting/timesheet/maven-metadata.xml?AWSAccessKeyId=REDACTED"}
I think the correct url should be https://maven.tigerteamconsulting.local.s3.amazonaws.com/
| True | S3 Maven repos with a . in the bucket name generates invalid urls. - ##### ISSUE TYPE
Bug Report
##### COMPONENT NAME
s3_bucket module
##### ANSIBLE VERSION
N/A
##### SUMMARY
I have a valid bucket with a maven repository:
$ aws s3 ls s3://maven.tigerteamconsulting.local/
PRE release/
PRE snapshot/
And when I use the following task:
- name: install application
maven_artifact:
group_id: com.tigerteamconsulting
artifact_id: someapp
extension: war
repository_url: s3://maven.tigerteamconsulting.local/
I see the following:
fatal: [timesheet.test.tigerteamconsulting.local]: FAILED! => {"changed": false, "failed": true, "msg": "Failed to download maven-metadata.xml because of HTTP Error 403: Forbiddenfor URL https://maven.s3.amazonaws.com/com/tigerteamconsulting/timesheet/maven-metadata.xml?AWSAccessKeyId=REDACTED"}
I think the correct url should be https://maven.tigerteamconsulting.local.s3.amazonaws.com/
| main | maven repos with a in the bucket name generates invalid urls issue type bug report component name bucket module ansible version n a summary i have a valid bucket with a maven repository aws ls maven tigerteamconsulting local pre release pre snapshot and when i use the following task name install application maven artifact group id com tigerteamconsulting artifact id someapp extension war repository url maven tigerteamconsulting local i see the following fatal failed changed false failed true msg failed to download maven metadata xml because of http error forbiddenfor url i think the correct url should be | 1 |
131,957 | 18,447,958,380 | IssuesEvent | 2021-10-15 06:32:01 | woocommerce/woocommerce-gutenberg-products-block | https://api.github.com/repos/woocommerce/woocommerce-gutenberg-products-block | opened | Decide on a design for the template block placeholders in the editor | needs design category: fse 🔹 block-type: templates | ## Context
For the [WooCommerce Store Editing Templates v1](https://github.com/woocommerce/woocommerce-gutenberg-products-block/issues/4926), we're converting a set of WooCommerce PHP templates to a block each. Initially, the goal is not to create fully-featured blocks but instead create one basic block per PHP template that renders a placeholder in the editor and the template on the Frontend.
## Goal
The goal of this particular issue is to decide on the placeholder design and copy for the editor.
## Ideas
We created some basic prototypes to demonstrate the design challenge and offer some initial ideas on potential solutions.
### Option 1: simple text-based placeholder
The first option would be to display a simple text-based placeholder that provides the merchant with an explanation that a) they are seeing a placeholder they can move and add blocks around and b) that the placeholder will be replaced by the related WooCommerce template on the Frontend.
This simple option has a couple of key advantages:
- Its compact size makes it easy for a merchant to move around and edit the page with other blocks. For example, if they'd like to move the template block into a column layout, it's easier to move a compact block than a prominent wireframe placeholder like below.
- It's very straightforward to implement. Given that these placeholders are a temporary stopgap solution, it is at least up for debate on how much work should go into creating them.
Note: we just used the `Notice` component inside the `Placeholder` component for the prototype here. As per [Block Editor Handbook](https://developer.wordpress.org/block-editor/reference-guides/components/notice/) it sounds like the `Notice` should only be used for top-level notices. We'd have to use something else here.
<img width="500" alt="Screenshot 2021-10-14 at 16 34 16" src="https://user-images.githubusercontent.com/1562646/137440146-d24c2807-c2cc-49b5-99b4-ea4ded1ce4e1.png">
### Option 2: text-based replica placeholder
The second option is inspired by the recent effort of [unifying Site Editing block placeholders](https://github.com/WordPress/gutenberg/issues/35501). In situations in which the blocks have no data to reference, the blocks are now showing basic descriptions in place of where the relevant elements would be e.g. "Product title" where the title of the product would show.
The main downsides of this option are:
- Given that the placeholder mimics the actual template that will be rendered on the Frontend, the placeholder can get very large which makes it difficult for a merchant to move it around in the editor. The user experience arguably isn't great.
- Implementing this option means we'll have to basically replicate each entire template in the editor. Given that these placeholders are a temporary stopgap solution, it is up for debate if this is worth the time commitment required.
<img width="500" alt="Screenshot 2021-10-14 at 17 20 00" src="https://user-images.githubusercontent.com/1562646/137441560-f43d1b24-4fe0-4afd-894f-7495fd0470ad.png">
### Option 2: wireframe replica placeholder
The third option is inspired by the navigation block empty state placeholder. In situations in which the block has no data, it's showing a basic wireframe representation of the content.
The main downsides of this option are:
- Given that the placeholder mimics the actual template that will be rendered on the Frontend, the placeholder can get very large which makes it difficult for a merchant to move it around in the editor. The user experience arguably isn't great.
- While this option is a little less involved than option 2, implementing this option still means we'll have to basically replicate each entire template in the editor. Given that these placeholders are a temporary stopgap solution, it is up for debate if this is worth the time commitment required.
<img width="921" alt="Screenshot 2021-10-14 at 17 52 47" src="https://user-images.githubusercontent.com/1562646/137442283-975c8a26-fa4e-4204-a7e2-9c0e78173a25.png">
| 1.0 | Decide on a design for the template block placeholders in the editor - ## Context
For the [WooCommerce Store Editing Templates v1](https://github.com/woocommerce/woocommerce-gutenberg-products-block/issues/4926), we're converting a set of WooCommerce PHP templates to a block each. Initially, the goal is not to create fully-featured blocks but instead create one basic block per PHP template that renders a placeholder in the editor and the template on the Frontend.
## Goal
The goal of this particular issue is to decide on the placeholder design and copy for the editor.
## Ideas
We created some basic prototypes to demonstrate the design challenge and offer some initial ideas on potential solutions.
### Option 1: simple text-based placeholder
The first option would be to display a simple text-based placeholder that provides the merchant with an explanation that a) they are seeing a placeholder they can move and add blocks around and b) that the placeholder will be replaced by the related WooCommerce template on the Frontend.
This simple option has a couple of key advantages:
- Its compact size makes it easy for a merchant to move around and edit the page with other blocks. For example, if they'd like to move the template block into a column layout, it's easier to move a compact block than a prominent wireframe placeholder like below.
- It's very straightforward to implement. Given that these placeholders are a temporary stopgap solution, it is at least up for debate on how much work should go into creating them.
Note: we just used the `Notice` component inside the `Placeholder` component for the prototype here. As per [Block Editor Handbook](https://developer.wordpress.org/block-editor/reference-guides/components/notice/) it sounds like the `Notice` should only be used for top-level notices. We'd have to use something else here.
<img width="500" alt="Screenshot 2021-10-14 at 16 34 16" src="https://user-images.githubusercontent.com/1562646/137440146-d24c2807-c2cc-49b5-99b4-ea4ded1ce4e1.png">
### Option 2: text-based replica placeholder
The second option is inspired by the recent effort of [unifying Site Editing block placeholders](https://github.com/WordPress/gutenberg/issues/35501). In situations in which the blocks have no data to reference, the blocks are now showing basic descriptions in place of where the relevant elements would be e.g. "Product title" where the title of the product would show.
The main downsides of this option are:
- Given that the placeholder mimics the actual template that will be rendered on the Frontend, the placeholder can get very large which makes it difficult for a merchant to move it around in the editor. The user experience arguably isn't great.
- Implementing this option means we'll have to basically replicate each entire template in the editor. Given that these placeholders are a temporary stopgap solution, it is up for debate if this is worth the time commitment required.
<img width="500" alt="Screenshot 2021-10-14 at 17 20 00" src="https://user-images.githubusercontent.com/1562646/137441560-f43d1b24-4fe0-4afd-894f-7495fd0470ad.png">
### Option 2: wireframe replica placeholder
The third option is inspired by the navigation block empty state placeholder. In situations in which the block has no data, it's showing a basic wireframe representation of the content.
The main downsides of this option are:
- Given that the placeholder mimics the actual template that will be rendered on the Frontend, the placeholder can get very large which makes it difficult for a merchant to move it around in the editor. The user experience arguably isn't great.
- While this option is a little less involved than option 2, implementing this option still means we'll have to basically replicate each entire template in the editor. Given that these placeholders are a temporary stopgap solution, it is up for debate if this is worth the time commitment required.
<img width="921" alt="Screenshot 2021-10-14 at 17 52 47" src="https://user-images.githubusercontent.com/1562646/137442283-975c8a26-fa4e-4204-a7e2-9c0e78173a25.png">
| non_main | decide on a design for the template block placeholders in the editor context for the we re converting a set of woocommerce php templates to a block each initially the goal is not to create fully featured blocks but instead create one basic block per php template that renders a placeholder in the editor and the template on the frontend goal the goal of this particular issue is to decide on the placeholder design and copy for the editor ideas we created some basic prototypes to demonstrate the design challenge and offer some initial ideas on potential solutions option simple text based placeholder the first option would be to display a simple text based placeholder that provides the merchant with an explanation that a they are seeing a placeholder they can move and add blocks around and b that the placeholder will be replaced by the related woocommerce template on the frontend this simple option has a couple of key advantages its compact size makes it easy for a merchant to move around and edit the page with other blocks for example if they d like to move the template block into a column layout it s easier to move a compact block than a prominent wireframe placeholder like below it s very straightforward to implement given that these placeholders are a temporary stopgap solution it is at least up for debate on how much work should go into creating them note we just used the notice component inside the placeholder component for the prototype here as per it sounds like the notice should only be used for top level notices we d have to use something else here img width alt screenshot at src option text based replica placeholder the second option is inspired by the recent effort of in situations in which the blocks have no data to reference the blocks are now showing basic descriptions in place of where the relevant elements would be e g product title where the title of the product would show the main downsides of this option are given that the placeholder mimics the actual template that will be rendered on the frontend the placeholder can get very large which makes it difficult for a merchant to move it around in the editor the user experience arguably isn t great implementing this option means we ll have to basically replicate each entire template in the editor given that these placeholders are a temporary stopgap solution it is up for debate if this is worth the time commitment required img width alt screenshot at src option wireframe replica placeholder the third option is inspired by the navigation block empty state placeholder in situations in which the block has no data it s showing a basic wireframe representation of the content the main downsides of this option are given that the placeholder mimics the actual template that will be rendered on the frontend the placeholder can get very large which makes it difficult for a merchant to move it around in the editor the user experience arguably isn t great while this option is a little less involved than option implementing this option still means we ll have to basically replicate each entire template in the editor given that these placeholders are a temporary stopgap solution it is up for debate if this is worth the time commitment required img width alt screenshot at src | 0 |
3,980 | 18,298,728,229 | IssuesEvent | 2021-10-05 23:30:47 | carbon-design-system/carbon | https://api.github.com/repos/carbon-design-system/carbon | closed | [Bug]: The bx--slider-text-input input field accepts invalid inputs including certain types of script injections | type: bug 🐛 status: needs triage 🕵️♀️ status: waiting for maintainer response 💬 | ### Package
carbon-components
### Browser
Firefox ESR
### Package version
10.17
### Description
The `bx--slider-text-input` input field stores/caches invalid inputs including certain types of script injections
### CodeSandbox example
https://codesandbox.io/s/frosty-hamilton-so9n8
### Steps to reproduce
* In the attached code sandbox enter an alphanumeric string that looks like scientific notation but isn't, such as "6y7", into the input field
* Hit ENTER on your keyboard
🪲 **BUG - The invalid input is stored / cached**🪲

* In the attached code sandbox field paste the following injection that contains an esoteric attack vector into the input field
`<META HTTP-EQUIV="Set-Cookie" Content="USERID=<SCRIPT>alert('XSS')</SCRIPT>">`
* Hit ENTER on your keyboard
🪲 **BUG - The invalid inputs are stored / cached**🪲

### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/carbon-design-system/carbon/blob/f555616971a03fd454c0f4daea184adf41fff05b/.github/CODE_OF_CONDUCT.md)
- [X] I checked the [current issues](https://github.com/carbon-design-system/carbon/issues) for duplicate problems | True | [Bug]: The bx--slider-text-input input field accepts invalid inputs including certain types of script injections - ### Package
carbon-components
### Browser
Firefox ESR
### Package version
10.17
### Description
The `bx--slider-text-input` input field stores/caches invalid inputs including certain types of script injections
### CodeSandbox example
https://codesandbox.io/s/frosty-hamilton-so9n8
### Steps to reproduce
* In the attached code sandbox enter an alphanumeric string that looks like scientific notation but isn't, such as "6y7", into the input field
* Hit ENTER on your keyboard
🪲 **BUG - The invalid input is stored / cached**🪲

* In the attached code sandbox field paste the following injection that contains an esoteric attack vector into the input field
`<META HTTP-EQUIV="Set-Cookie" Content="USERID=<SCRIPT>alert('XSS')</SCRIPT>">`
* Hit ENTER on your keyboard
🪲 **BUG - The invalid inputs are stored / cached**🪲

### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/carbon-design-system/carbon/blob/f555616971a03fd454c0f4daea184adf41fff05b/.github/CODE_OF_CONDUCT.md)
- [X] I checked the [current issues](https://github.com/carbon-design-system/carbon/issues) for duplicate problems | main | the bx slider text input input field accepts invalid inputs including certain types of script injections package carbon components browser firefox esr package version description the bx slider text input input field stores caches invalid inputs including certain types of script injections codesandbox example steps to reproduce in the attached code sandbox enter an alphanumeric string that looks like scientific notation but isn t such as into the input field hit enter on your keyboard 🪲 bug the invalid input is stored cached 🪲 in the attached code sandbox field paste the following injection that contains an esoteric attack vector into the input field hit enter on your keyboard 🪲 bug the invalid inputs are stored cached 🪲 code of conduct i agree to follow this project s i checked the for duplicate problems | 1 |
1,887 | 6,577,527,227 | IssuesEvent | 2017-09-12 01:32:08 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | git module tries to outsmart git and ssh and fails | affects_2.0 bug_report waiting_on_maintainer | ##### Issue Type:
- Bug Report
##### Plugin Name:
git
##### Ansible Version:
```
ansible 2.0.1.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### Environment:
from ArchLinux too Debian Jessie
##### Summary:
When trying to clone a repo using an ssh "alias" everything fails.
##### Steps To Reproduce:
<!-- For bugs, please show exactly how to reproduce the problem.
For new features, show how the feature would be used. -->
playbook:
```
- git:
repo: "git@A-github.internal:myrepo.git"
dest: ~/myrepo
version: "master"
```
~/.ssh/config:
```
Host A-github.internal
IdentityFile ~/.ssh/deploykey_A
HostName github.internal
```
hostkey for github.internal is present
`git clone "git@A-github.internal:myrepo.git"` works
<!-- You can also paste gist.github.com links for larger files. -->
##### Expected Results:
the git repo gets cloned.
##### Actual Results:
error:
```
A-github.internal has an unknown hostkey. Set accept_hostkey to True or manually add the hostkey prior to running the git module
```
##### Attempted workaround:
If i set accept_hostkey to true, then it fails trying to resolve A-github.internal
(same if i try to add a hostkey with that name using "known_hosts")
| True | git module tries to outsmart git and ssh and fails - ##### Issue Type:
- Bug Report
##### Plugin Name:
git
##### Ansible Version:
```
ansible 2.0.1.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### Environment:
from ArchLinux too Debian Jessie
##### Summary:
When trying to clone a repo using an ssh "alias" everything fails.
##### Steps To Reproduce:
<!-- For bugs, please show exactly how to reproduce the problem.
For new features, show how the feature would be used. -->
playbook:
```
- git:
repo: "git@A-github.internal:myrepo.git"
dest: ~/myrepo
version: "master"
```
~/.ssh/config:
```
Host A-github.internal
IdentityFile ~/.ssh/deploykey_A
HostName github.internal
```
hostkey for github.internal is present
`git clone "git@A-github.internal:myrepo.git"` works
<!-- You can also paste gist.github.com links for larger files. -->
##### Expected Results:
the git repo gets cloned.
##### Actual Results:
error:
```
A-github.internal has an unknown hostkey. Set accept_hostkey to True or manually add the hostkey prior to running the git module
```
##### Attempted workaround:
If i set accept_hostkey to true, then it fails trying to resolve A-github.internal
(same if i try to add a hostkey with that name using "known_hosts")
| main | git module tries to outsmart git and ssh and fails issue type bug report plugin name git ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides environment from archlinux too debian jessie summary when trying to clone a repo using an ssh alias everything fails steps to reproduce for bugs please show exactly how to reproduce the problem for new features show how the feature would be used playbook git repo git a github internal myrepo git dest myrepo version master ssh config host a github internal identityfile ssh deploykey a hostname github internal hostkey for github internal is present git clone git a github internal myrepo git works expected results the git repo gets cloned actual results error a github internal has an unknown hostkey set accept hostkey to true or manually add the hostkey prior to running the git module attempted workaround if i set accept hostkey to true then it fails trying to resolve a github internal same if i try to add a hostkey with that name using known hosts | 1 |
415,199 | 28,022,479,237 | IssuesEvent | 2023-03-28 06:50:31 | oracle-samples/macaron | https://api.github.com/repos/oracle-samples/macaron | opened | Update THIRD_PARTY_LICENSES.txt for SQLAlchemy, CUE and Souffle | documentation dependencies task | This ticket documents the task to update the THIRD_PARTY_LICENSES.txt file with the new third party dependencies for the open source v0.1.0 release. | 1.0 | Update THIRD_PARTY_LICENSES.txt for SQLAlchemy, CUE and Souffle - This ticket documents the task to update the THIRD_PARTY_LICENSES.txt file with the new third party dependencies for the open source v0.1.0 release. | non_main | update third party licenses txt for sqlalchemy cue and souffle this ticket documents the task to update the third party licenses txt file with the new third party dependencies for the open source release | 0 |
178,357 | 14,667,838,080 | IssuesEvent | 2020-12-29 19:39:54 | Fruity-Loops/ALTA | https://api.github.com/repos/Fruity-Loops/ALTA | closed | Release Refactoring Report | Developer Story documentation | Write up the refactoring report for the release report.
List the commits containing refactoring information. The commit links should point exactly to the lines where the refactoring took place. You should also provide a briefexplanation about the reason(s) you performed each refactoring.The complete list of refactoring types is available at https://refactoring.com/catalog/ | 1.0 | Release Refactoring Report - Write up the refactoring report for the release report.
List the commits containing refactoring information. The commit links should point exactly to the lines where the refactoring took place. You should also provide a briefexplanation about the reason(s) you performed each refactoring.The complete list of refactoring types is available at https://refactoring.com/catalog/ | non_main | release refactoring report write up the refactoring report for the release report list the commits containing refactoring information the commit links should point exactly to the lines where the refactoring took place you should also provide a briefexplanation about the reason s you performed each refactoring the complete list of refactoring types is available at | 0 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.