Unnamed: 0 int64 1 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 7 112 | repo_url stringlengths 36 141 | action stringclasses 3 values | title stringlengths 3 438 | labels stringlengths 4 308 | body stringlengths 7 254k | index stringclasses 7 values | text_combine stringlengths 96 254k | label stringclasses 2 values | text stringlengths 96 246k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1,003 | 4,771,184,968 | IssuesEvent | 2016-10-26 17:15:39 | spyder-ide/spyder | https://api.github.com/repos/spyder-ide/spyder | reopened | Add ciocheck to test mode and update Pull Request guidelines | OS-All Type-Maintainability | https://github.com/ContinuumIO/ciocheck
Package available at: https://anaconda.org/continuumcrew/ciocheck
`conda install ciocheck -c conda-forge -c continuumcrew`
Moving to pypi and conda-forge soon. | True | Add ciocheck to test mode and update Pull Request guidelines - https://github.com/ContinuumIO/ciocheck
Package available at: https://anaconda.org/continuumcrew/ciocheck
`conda install ciocheck -c conda-forge -c continuumcrew`
Moving to pypi and conda-forge soon. | main | add ciocheck to test mode and update pull request guidelines package available at conda install ciocheck c conda forge c continuumcrew moving to pypi and conda forge soon | 1 |
1,560 | 6,572,254,684 | IssuesEvent | 2017-09-11 00:39:54 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | please make lineinfile support edit in-place | affects_2.3 feature_idea waiting_on_maintainer | ##### ISSUE TYPE
Feature Idea
##### COMPONENT NAME
lineinfile module
##### ANSIBLE VERSION
N/A
##### SUMMARY
Docker makes /etc/resolv.conf a mount point, so that we can't do "mv some_tmp /etc/resolv.conf", "lineinfile" fails to update /etc/resolv.conf.
Currently I use module "shell" to bypass this issue, but this is long and ugly, and I don't like that Ansible always reports something changed; please consider adding an "in-place" option to "lineinfile", thanks!
| True | please make lineinfile support edit in-place - ##### ISSUE TYPE
Feature Idea
##### COMPONENT NAME
lineinfile module
##### ANSIBLE VERSION
N/A
##### SUMMARY
Docker makes /etc/resolv.conf a mount point, so that we can't do "mv some_tmp /etc/resolv.conf", "lineinfile" fails to update /etc/resolv.conf.
Currently I use module "shell" to bypass this issue, but this is long and ugly, and I don't like that Ansible always reports something changed; please consider adding an "in-place" option to "lineinfile", thanks!
| main | please make lineinfile support edit in place issue type feature idea component name lineinfile module ansible version n a summary docker makes etc resolv conf a mount point so that we can t do mv some tmp etc resolv conf lineinfile fails to update etc resolv conf currently i use module shell to bypass this issue but this is long and ugly and i don t like ansible always reports something changed please consider adding an in place option to lineinfile thanks | 1 |
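A note on the lineinfile record above: later Ansible releases added an `unsafe_writes` option to the file-modifying modules, which writes the target in place instead of using the atomic tmp-file-and-rename that fails on a bind-mounted `/etc/resolv.conf`. A minimal sketch, assuming a reasonably recent Ansible where `lineinfile` supports `unsafe_writes` (the nameserver value is only an example):

```yaml
- name: Add a nameserver to a bind-mounted resolv.conf
  lineinfile:
    path: /etc/resolv.conf
    line: "nameserver 8.8.8.8"
    unsafe_writes: yes   # edit in place; skip the atomic rename that breaks on mount points
```

This trades crash-safety for the ability to edit files that cannot be replaced by rename, which is exactly the Docker mount-point case described in the issue.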
92,270 | 10,739,745,478 | IssuesEvent | 2019-10-29 16:53:50 | nus-cs2103-AY1920S1/alpha-dev-response | https://api.github.com/repos/nus-cs2103-AY1920S1/alpha-dev-response | opened | Wakanda Forever | severity.Medium team.3 tutorial.CS2103T-F14 type.DocumentationBug | # Wakanda Falls
### Thanos and Minions Strike
Thanos and his troops have conquered Wakanda. The avengers are in a state of despair and have their backs against the wall. Can they stage a monumental comeback, or will Thanos accomplish his goal and retire in his garden home?
<hr><sub>[original: nus-cs2103-AY1920S1/alpha-interim#1314]<br/>
</sub> | 1.0 | Wakanda Forever - # Wakanda Falls
### Thanos and Minions Strike
Thanos and his troops have conquered Wakanda. The avengers are in a state of despair and have their backs against the wall. Can they stage a monumental comeback, or will Thanos accomplish his goal and retire in his garden home?
<hr><sub>[original: nus-cs2103-AY1920S1/alpha-interim#1314]<br/>
</sub> | non_main | wakanda forever wakanda falls thanos and minions strike thanos and his troops have conquered wakanda the avengers are in a state of despair and have their backs against the wall can they stage a monumental comeback or will thanos accomplish his goal and retire in his garden home | 0 |
289,813 | 21,793,203,028 | IssuesEvent | 2022-05-15 08:16:12 | scikit-rf/scikit-rf | https://api.github.com/repos/scikit-rf/scikit-rf | closed | Erroneous Demo in "Measuring a Mutiport Device with a 2-Port Network Analyzer" | Bug Documentation Calibration | In the tutorial *Measuring a Mutiport Device with a 2-Port Network Analyzer*, the section *[Test For Accuracy](https://scikit-rf.readthedocs.io/en/latest/examples/metrology/Measuring%20a%20Mutiport%20Device%20with%20a%202-Port%20Network%20Analyzer.html#Test-For-Accuracy)* appears erroneous, it has probably been broken by a regression.
Two years ago, this section reads:
> Making sure our composite network is the same as our DUT
>
> [10]: composite == dut
> [10]: True
>
> Nice!. How close ?
>
> [11]: sum((composite - dut).s_mag)
> [11]: 9.917536367984054e-13
But it now reads:
> Making sure our composite network is the same as our DUT
>
> [10]: composite == dut
> [10]: False
>
> Nice!. How close ?
>
> [11]: sum((composite - dut).s_mag)
> [11]: 880.7934663060666 | 1.0 | Erroneous Demo in "Measuring a Mutiport Device with a 2-Port Network Analyzer" - In the tutorial *Measuring a Mutiport Device with a 2-Port Network Analyzer*, the section *[Test For Accuracy](https://scikit-rf.readthedocs.io/en/latest/examples/metrology/Measuring%20a%20Mutiport%20Device%20with%20a%202-Port%20Network%20Analyzer.html#Test-For-Accuracy)* appears erroneous, it has probably been broken by a regression.
Two years ago, this section reads:
> Making sure our composite network is the same as our DUT
>
> [10]: composite == dut
> [10]: True
>
> Nice!. How close ?
>
> [11]: sum((composite - dut).s_mag)
> [11]: 9.917536367984054e-13
But it now reads:
> Making sure our composite network is the same as our DUT
>
> [10]: composite == dut
> [10]: False
>
> Nice!. How close ?
>
> [11]: sum((composite - dut).s_mag)
> [11]: 880.7934663060666 | non_main | erroneous demo in measuring a mutiport device with a port network analyzer in the tutorial measuring a mutiport device with a port network analyzer the section appears erroneous it has probably been broken by a regression two years ago this section reads making sure our composite network is the same as our dut composite dut true nice how close sum composite dut s mag but it now reads making sure our composite network is the same as our dut composite dut false nice how close sum composite dut s mag | 0 |
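The accuracy check quoted in the record above can be illustrated without the original notebook data. The sketch below uses plain Python complex values standing in for the scikit-rf S-parameter arrays (the shapes and values are invented for illustration); it computes the same quantity as `sum((composite - dut).s_mag)`, the summed magnitude of the element-wise S-parameter difference, which should be near machine precision when the composite reconstruction is accurate:

```python
# Hypothetical S-parameters: a flat list of complex entries standing in for
# the DUT and the reconstructed composite (3 frequency points x 2x2 ports).
s_dut = [0.5 + 0.5j] * 12
s_composite = [s + 1e-13 for s in s_dut]  # a nearly perfect reconstruction

# Summed magnitude of the element-wise difference -- the analogue of
# sum((composite - dut).s_mag) in the tutorial.
error = sum(abs(a - b) for a, b in zip(s_composite, s_dut))
print(error)
```

A residual on the order of 1e-13, as in the two-year-old output, indicates the de-embedding worked; a residual of hundreds, as in the current docs, means the composite and DUT genuinely disagree.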
352,361 | 10,540,897,890 | IssuesEvent | 2019-10-02 09:28:42 | UniversityOfHelsinkiCS/fuksilaiterekisteri | https://api.github.com/repos/UniversityOfHelsinkiCS/fuksilaiterekisteri | closed | season over -update | enhancement high priority management | - [x] stop new regs
- [x] update ineligibility-page
- [x] send email lists of ready and wants to Pekka
| 1.0 | season over -update - - [x] stop new regs
- [x] update ineligibility-page
- [x] send email lists of ready and wants to Pekka
| non_main | season over update stop new regs update ineligibility page send email lists of ready and wants to pekka | 0 |
422 | 3,508,966,034 | IssuesEvent | 2016-01-08 20:20:39 | antigenomics/vdjdb-db | https://api.github.com/repos/antigenomics/vdjdb-db | closed | DB maintainance | maintainance | * Consistency checks via Travis
* Add CDR3 fixing functionality from vdjdb
* Merge chunks into final table (say, with pandas) | True | DB maintainance - * Consistency checks via Travis
* Add CDR3 fixing functionality from vdjdb
* Merge chunks into final table (say, with pandas) | main | db maintainance consistency checks via travis add fixing functional from vdjdb merge chunks into final table say with pandas | 1 |
514 | 3,875,242,333 | IssuesEvent | 2016-04-11 23:53:19 | spyder-ide/spyder | https://api.github.com/repos/spyder-ide/spyder | closed | Migrate to qtpy: Remove internal Qt shim used by Spyder. | Enhancement Maintainability | Remove internal Qt shim used by Spyder and replace by qtpy package. | True | Migrate to qtpy: Remove internal Qt shim used by Spyder. - Remove internal Qt shim used by Spyder and replace by qtpy package. | main | migrate to qtpy remove internal qt shim used by spyder remove internal qt shim used by spyder and replace by qtpy package | 1 |
569,795 | 17,016,187,099 | IssuesEvent | 2021-07-02 12:24:00 | tomhughes/trac-tickets | https://api.github.com/repos/tomhughes/trac-tickets | opened | Add new tunnel values to roads preset | Component: potlatch2 Priority: minor Type: enhancement | **[Submitted to the original trac issue database at 11.56am, Sunday, 2nd December 2012]**
The predefined road presets should be updated, so that the new tunnel values can be selected. See approved proposal: https://wiki.openstreetmap.org/wiki/Proposed_features/building_passage | 1.0 | Add new tunnel values to roads preset - **[Submitted to the original trac issue database at 11.56am, Sunday, 2nd December 2012]**
The predefined road presets should be updated, so that the new tunnel values can be selected. See approved proposal: https://wiki.openstreetmap.org/wiki/Proposed_features/building_passage | non_main | add new tunnel values to roads preset the predefined road presets should be updated so that the new tunnel values can be selected see approved proposal | 0 |
80,204 | 23,139,881,519 | IssuesEvent | 2022-07-28 17:23:43 | foundry-rs/foundry | https://api.github.com/repos/foundry-rs/foundry | closed | Forge caching issues | T-bug C-forge T-meta Cmd-forge-build | ### Component
Forge
### Have you ensured that all of these are up to date?
- [X] Foundry
- [X] Foundryup
### What version of Foundry are you on?
forge 0.2.0 (05e72b6 2022-05-19T00:03:30.647862736Z)
### What command(s) is the bug in?
forge build
### Operating System
Linux
### Describe the bug
This is a consolidation of bugs that are suspected to be cache related:
- https://github.com/foundry-rs/foundry/issues/1629
- https://github.com/foundry-rs/foundry/issues/1344
## Issue 1
- [ ] Check if solved
**Bug**: Sometimes `forge test` does not run any test files.
**Workaround**: You can run `forge clean` before `forge test`
## Issue 2
- [ ] Check if solved
**Bug**: Sometimes a file is changed and that edit is not picked up by `forge test`, which will then not recompile that contract. There is no reliable way to reproduce this currently. | 1.0 | Forge caching issues - ### Component
Forge
### Have you ensured that all of these are up to date?
- [X] Foundry
- [X] Foundryup
### What version of Foundry are you on?
forge 0.2.0 (05e72b6 2022-05-19T00:03:30.647862736Z)
### What command(s) is the bug in?
forge build
### Operating System
Linux
### Describe the bug
This is a consolidation of bugs that are suspected to be cache related:
- https://github.com/foundry-rs/foundry/issues/1629
- https://github.com/foundry-rs/foundry/issues/1344
## Issue 1
- [ ] Check if solved
**Bug**: Sometimes `forge test` does not run any test files.
**Workaround**: You can run `forge clean` before `forge test`
## Issue 2
- [ ] Check if solved
**Bug**: Sometimes a file is changed and that edit is not picked up by `forge test`, which will then not recompile that contract. There is no reliable way to reproduce this currently. | non_main | forge caching issues component forge have you ensured that all of these are up to date foundry foundryup what version of foundry are you on forge what command s is the bug in forge build operating system linux describe the bug this is a consolidation of bugs that are suspected to be cache related issue check if solved bug sometimes forge test does not run any test files workaround you can run forge clean before forge test issue check if solved bug sometimes a file is changed and that edit is not picked up by forge test which will then not recompile that contract there is no reliable way to reproduce this currently | 0 |
3,995 | 18,516,865,223 | IssuesEvent | 2021-10-20 11:06:57 | plaguesec/plaguesec-os | https://api.github.com/repos/plaguesec/plaguesec-os | closed | calendar versioning approach | maintainers only p3 todo tweak | Great guide for versioning a penetration testing operating system.
For this project we use [Calendar Versioning](https://calver.org/).
This is our format (which is the same as Ubuntu but we have our own version).
``YY.0M.MICRO-MODFIER.Patch`` or ``21.10.1-alpha.1``
Cheers! 🚀🎉 | True | calendar versioning approach - Great guide for versioning a penetration testing operating system.
For this project we use [Calendar Versioning](https://calver.org/).
This is our format (which is the same as Ubuntu but we have our own version).
``YY.0M.MICRO-MODFIER.Patch`` or ``21.10.1-alpha.1``
Cheers! 🚀🎉 | main | calendar versioning approach great guide for versioning a penetration testing operating system we use for this project is this is our format which is the same as ubuntu but we have our own version yy micro modfier patch or alpha cheers 🚀🎉 | 1 |
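The version scheme quoted in the record above can be made concrete with a small parser. This is a hypothetical sketch for the stated ``YY.0M.MICRO-MODIFIER.PATCH`` layout; the regex and group names are illustrative, not part of any CalVer standard:

```python
import re

# Hypothetical parser for the YY.0M.MICRO-MODIFIER.PATCH scheme quoted above;
# group names are illustrative and not defined by CalVer itself.
CALVER = re.compile(
    r"^(?P<year>\d{2})\.(?P<month>\d{2})\.(?P<micro>\d+)"
    r"(?:-(?P<modifier>[a-z]+)(?:\.(?P<patch>\d+))?)?$"
)

m = CALVER.match("21.10.1-alpha.1")
print(m.group("year"), m.group("month"), m.group("micro"), m.group("modifier"))
```

For ``21.10.1-alpha.1`` this yields year `21`, month `10`, micro `1`, modifier `alpha` and patch `1`; a plain release like ``21.10.1`` matches with the modifier and patch groups empty.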
65,996 | 3,249,426,301 | IssuesEvent | 2015-10-18 05:22:58 | littlefoot32/gitiles | https://api.github.com/repos/littlefoot32/gitiles | closed | API for +log views | auto-migrated Priority-Medium Type-Enhancement | ```
-Support JSON/TEXT format for +log
-Support additional arguments similar to git log, e.g. filtering by author or
date.
```
Original issue reported on code.google.com by `dborowitz@google.com` on 11 Apr 2013 at 7:29 | 1.0 | API for +log views - ```
-Support JSON/TEXT format for +log
-Support additional arguments similar to git log, e.g. filtering by author or
date.
```
Original issue reported on code.google.com by `dborowitz@google.com` on 11 Apr 2013 at 7:29 | non_main | api for log views support json text format for log support additional arguments similar to git log e g filtering by author or date original issue reported on code google com by dborowitz google com on apr at | 0 |
3,306 | 12,806,593,279 | IssuesEvent | 2020-07-03 09:44:35 | geolexica/geolexica-server | https://api.github.com/repos/geolexica/geolexica-server | closed | Disable Liquid processing for some machine-readable pages | maintainability | Jekyll 4 allows disabling liquid processing via a front matter variable: https://github.com/jekyll/jekyll/pull/6824. We should use it instead of `{% raw %}` wherever it's appropriate, e.g. in `Jekyll::Geolexica::ConceptPage` which is meant to produce some non-Liquid content. | True | Disable Liquid processing for some machine-readable pages - Jekyll 4 allows disabling liquid processing via a front matter variable: https://github.com/jekyll/jekyll/pull/6824. We should use it instead of `{% raw %}` wherever it's appropriate, e.g. in `Jekyll::Geolexica::ConceptPage` which is meant to produce some non-Liquid content. | main | disable liquid processing for some machine readable pages jekyll allows disabling liquid processing via a front matter variable we should use it instead of raw wherever it s appropriate e g in jekyll geolexica conceptpage which is meant to produce some non liquid content | 1 |
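The Jekyll 4 mechanism referenced in the record above (jekyll/jekyll#6824) is a per-page front matter flag. A sketch of how a generated page could opt out of Liquid, assuming the flag name `render_with_liquid` introduced by that pull request:

```yaml
---
# Jekyll 4: skip Liquid processing for this page entirely,
# replacing scattered {% raw %} blocks in machine-readable output.
render_with_liquid: false
---
```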
1,697 | 6,574,278,501 | IssuesEvent | 2017-09-11 12:16:18 | ansible/ansible | https://api.github.com/repos/ansible/ansible | closed | docker_service got an unexpected keyword argument | affects_2.3 bug_report cloud docker module needs_maintainer support:core | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
* ansible
* ansible-playbook
* docker_service
* ansible-container
##### ANSIBLE VERSION
```
ansible 2.3.1.0
config file =
configured module search path = Default w/o overrides
python version = 2.7.12 (default, Sep 1 2016, 22:14:00) [GCC 4.8.3 20140911 (Red Hat 4.8.3-9)]
```
##### CONFIGURATION
```
```
##### OS / ENVIRONMENT
```
AWS Linux 64 bit
Python 2.7.12
pip 9.0.1 from /usr/local/lib/python2.7/site-packages (python 2.7)
ansible==2.3.1.0
-e git+https://github.com/ansible/ansible-container@0d54b0efba8a6bc205c0619226397201381e8e20#egg=ansible_container
aws-cfn-bootstrap==1.4
awscli==1.11.83
Babel==0.9.4
backports.ssl-match-hostname==3.5.0.1
boto==2.42.0
botocore==1.5.46
cached-property==1.3.0
certifi==2017.4.17
chardet==3.0.4
cloud-init==0.7.6
colorama==0.3.9
configobj==4.7.2
dictdiffer==0.6.1
docker==2.0.1
docker-compose==1.14.0
docker-pycreds==0.2.1
dockerpty==0.4.1
docopt==0.6.2
docutils==0.11
ecdsa==0.11
enum34==1.1.6
functools32==3.2.3.post2
futures==3.0.3
httplib2==0.10.3
idna==2.5
iniparse==0.3.1
ipaddress==1.0.18
Jinja2==2.9.6
jmespath==0.9.2
jsonpatch==1.2
jsonpointer==1.0
jsonschema==2.6.0
kitchen==1.1.1
kubernetes==1.0.2
lockfile==0.8
MarkupSafe==1.0
oauth2client==4.1.2
openshift==0.0.1
paramiko==1.15.1
PIL==1.1.6
ply==3.4
pyasn1==0.1.7
pyasn1-modules==0.0.9
pycrypto==2.6.1
pycurl==7.19.0
pygpgme==0.3
pyliblzma==0.5.3
pystache==0.5.3
python-daemon==1.5.2
python-dateutil==2.1
python-string-utils==0.6.0
pyxattr==0.5.0
PyYAML==3.12
requests==2.11.1
rsa==3.4.1
ruamel.ordereddict==0.4.9
ruamel.yaml==0.15.18
simplejson==3.6.5
six==1.10.0
structlog==17.2.0
texttable==0.8.8
urlgrabber==3.10
urllib3==1.21.1
virtualenv==12.0.7
websocket-client==0.44.0
yum-metadata-parser==1.1.4
```
##### SUMMARY
Playbook, generated by ansible-container, fails to run.
There were already some basic error in the generated playbook which i fixed.
The images now download but then there is this error message
```
TASK [docker_service] **********************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "module_stderr": "", "module_stdout": "", "msg": "Error starting project __init__() got an unexpected keyword argument 'cpu_count'"}
to retry, use: --limit @/home/ec2-user/orson/orson.retry
```
##### STEPS TO REPRODUCE
Run this command on the playbook. You can either put other generic images in place of mine or you can contact me and I will give you temporary access to our private registry.
```
ansible-playbook --step -t start orson.yml
```
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- gather_facts: false
tasks:
- docker_login: username=mchassy password=XXXXXXX email=mchassy@orsontestdata.com
tags:
- start
- docker_service:
definition:
services: &id001
tomee:
image: orsontestdata/orson-tomee:1.2.0-ansible
working_dir: /opt
entrypoint:
- /entrypoint.sh
user: orson
command:
- /usr/bin/dumb-init
- /opt/orson/tomee/bin/catalina.sh
- run
ports:
- 8080:8080
mongo:
image: orsontestdata/orson-mongo:1.2.0-ansible
working_dir: /opt
entrypoint:
- /entrypoint.sh
user: orson
volumes:
- mongo_data:/opt/orson/mongo/data
command:
- /usr/bin/dumb-init
- /opt/orson/mongo/bin/mongod --config /opt/orson/mongo/conf/mongod.conf
expose:
- 27017
version: '2'
volumes:
mongo_data: &id002 {}
state: present
project_name: orson
tags:
- start
```
##### EXPECTED RESULTS
images pulled and services started
##### ACTUAL RESULTS
images are pulled but services don't start
```
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "module_stderr": "", "module_stdout": "", "msg": "Error starting project __init__() got an unexpected keyword argument 'cpu_count'"}
to retry, use: --limit @/home/ec2-user/orson/orson.retry
```
| True | docker_service got an unexpected keyword argument - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
* ansible
* ansible-playbook
* docker_service
* ansible-container
##### ANSIBLE VERSION
```
ansible 2.3.1.0
config file =
configured module search path = Default w/o overrides
python version = 2.7.12 (default, Sep 1 2016, 22:14:00) [GCC 4.8.3 20140911 (Red Hat 4.8.3-9)]
```
##### CONFIGURATION
```
```
##### OS / ENVIRONMENT
```
AWS Linux 64 bit
Python 2.7.12
pip 9.0.1 from /usr/local/lib/python2.7/site-packages (python 2.7)
ansible==2.3.1.0
-e git+https://github.com/ansible/ansible-container@0d54b0efba8a6bc205c0619226397201381e8e20#egg=ansible_container
aws-cfn-bootstrap==1.4
awscli==1.11.83
Babel==0.9.4
backports.ssl-match-hostname==3.5.0.1
boto==2.42.0
botocore==1.5.46
cached-property==1.3.0
certifi==2017.4.17
chardet==3.0.4
cloud-init==0.7.6
colorama==0.3.9
configobj==4.7.2
dictdiffer==0.6.1
docker==2.0.1
docker-compose==1.14.0
docker-pycreds==0.2.1
dockerpty==0.4.1
docopt==0.6.2
docutils==0.11
ecdsa==0.11
enum34==1.1.6
functools32==3.2.3.post2
futures==3.0.3
httplib2==0.10.3
idna==2.5
iniparse==0.3.1
ipaddress==1.0.18
Jinja2==2.9.6
jmespath==0.9.2
jsonpatch==1.2
jsonpointer==1.0
jsonschema==2.6.0
kitchen==1.1.1
kubernetes==1.0.2
lockfile==0.8
MarkupSafe==1.0
oauth2client==4.1.2
openshift==0.0.1
paramiko==1.15.1
PIL==1.1.6
ply==3.4
pyasn1==0.1.7
pyasn1-modules==0.0.9
pycrypto==2.6.1
pycurl==7.19.0
pygpgme==0.3
pyliblzma==0.5.3
pystache==0.5.3
python-daemon==1.5.2
python-dateutil==2.1
python-string-utils==0.6.0
pyxattr==0.5.0
PyYAML==3.12
requests==2.11.1
rsa==3.4.1
ruamel.ordereddict==0.4.9
ruamel.yaml==0.15.18
simplejson==3.6.5
six==1.10.0
structlog==17.2.0
texttable==0.8.8
urlgrabber==3.10
urllib3==1.21.1
virtualenv==12.0.7
websocket-client==0.44.0
yum-metadata-parser==1.1.4
```
##### SUMMARY
Playbook, generated by ansible-container, fails to run.
There were already some basic error in the generated playbook which i fixed.
The images now download but then there is this error message
```
TASK [docker_service] **********************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "module_stderr": "", "module_stdout": "", "msg": "Error starting project __init__() got an unexpected keyword argument 'cpu_count'"}
to retry, use: --limit @/home/ec2-user/orson/orson.retry
```
##### STEPS TO REPRODUCE
Run this command on the playbook. You can either put other generic images in place of mine or you can contact me and I will give you temporary access to our private registry.
```
ansible-playbook --step -t start orson.yml
```
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- gather_facts: false
tasks:
- docker_login: username=mchassy password=XXXXXXX email=mchassy@orsontestdata.com
tags:
- start
- docker_service:
definition:
services: &id001
tomee:
image: orsontestdata/orson-tomee:1.2.0-ansible
working_dir: /opt
entrypoint:
- /entrypoint.sh
user: orson
command:
- /usr/bin/dumb-init
- /opt/orson/tomee/bin/catalina.sh
- run
ports:
- 8080:8080
mongo:
image: orsontestdata/orson-mongo:1.2.0-ansible
working_dir: /opt
entrypoint:
- /entrypoint.sh
user: orson
volumes:
- mongo_data:/opt/orson/mongo/data
command:
- /usr/bin/dumb-init
- /opt/orson/mongo/bin/mongod --config /opt/orson/mongo/conf/mongod.conf
expose:
- 27017
version: '2'
volumes:
mongo_data: &id002 {}
state: present
project_name: orson
tags:
- start
```
##### EXPECTED RESULTS
images pulled and services started
##### ACTUAL RESULTS
images are pulled but services don't start
```
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "module_stderr": "", "module_stdout": "", "msg": "Error starting project __init__() got an unexpected keyword argument 'cpu_count'"}
to retry, use: --limit @/home/ec2-user/orson/orson.retry
```
| main | docker service got an unexpected keyword argument issue type bug report component name ansible ansible playbook docker service ansible container ansible version ansible config file configured module search path default w o overrides python version default sep configuration os environment aws linux bit python pip from usr local lib site packages python ansible e git aws cfn bootstrap awscli babel backports ssl match hostname boto botocore cached property certifi chardet cloud init colorama configobj dictdiffer docker docker compose docker pycreds dockerpty docopt docutils ecdsa futures idna iniparse ipaddress jmespath jsonpatch jsonpointer jsonschema kitchen kubernetes lockfile markupsafe openshift paramiko pil ply modules pycrypto pycurl pygpgme pyliblzma pystache python daemon python dateutil python string utils pyxattr pyyaml requests rsa ruamel ordereddict ruamel yaml simplejson six structlog texttable urlgrabber virtualenv websocket client yum metadata parser summary playbook generated by ansible container fails to run there were already some basic error in the generated playbook which i fixed the images now download but then there is this error message task fatal failed changed false failed true module stderr module stdout msg error starting project init got an unexpected keyword argument cpu count to retry use limit home user orson orson retry steps to reproduce run this command on the playbook you can either put other generic images in place of mine or you can contact me and i will give you temporary access to our private registry ansible playbook step t start orson yml yaml gather facts false tasks docker login username mchassy password xxxxxxx email mchassy orsontestdata com tags start docker service definition services tomee image orsontestdata orson tomee ansible working dir opt entrypoint entrypoint sh user orson command usr bin dumb init opt orson tomee bin catalina sh run ports mongo image orsontestdata orson mongo ansible working dir opt 
entrypoint entrypoint sh user orson volumes mongo data opt orson mongo data command usr bin dumb init opt orson mongo bin mongod config opt orson mongo conf mongod conf expose version volumes mongo data state present project name orson tags start expected results images pulled and services started actual results images are pulled but services don t start fatal failed changed false failed true module stderr module stdout msg error starting project init got an unexpected keyword argument cpu count to retry use limit home user orson orson retry | 1 |
113,409 | 24,413,794,953 | IssuesEvent | 2022-10-05 14:24:12 | ClickHouse/ClickHouse | https://api.github.com/repos/ClickHouse/ClickHouse | opened | Parameterised queries only work with `SELECT` queries | unfinished code | **Describe the unexpected behaviour**
When trying to use parameters for queries other than `SELECT`s, I get database exceptions for unexpected strings. Apologies if this is intended behaviour; I tried searching the issues but couldn't find anything open or closed.
**How to reproduce**
* ClickHouse 22.8.5.29
* Both HTTP and native interfaces
* Queries to run that lead to unexpected result:
```
POST http://localhost:8123/?param_uname=test&param_password=qwerty
CREATE USER {uname:Identifier} IDENITIFIED WITH plaintext_password BY {password:String}
```
**Expected behavior**
I would expect the substitutions to work in the same way `Identifier` or other data type placeholders do for `SELECT` queries.
**Error message and/or stacktrace**
Example:
```
Code: 62. DB::Exception: Syntax error: failed at position X ('{'): {uname:Identifier}. Expected one of: IF NOT EXISTS, OR REPLACE, UserNamesWithHost, identifier, string literal. (SYNTAX_ERROR)
Code: 62. DB::Exception: Syntax error: failed at position X ('{'): {password:String}. Expected one of: string literal, end of query. (SYNTAX_ERROR)
```
**Additional context**
I also found the same behaviour when creating roles and row policies, I haven't tried other statements yet, I just know it works fine for `SELECT`s.
| 1.0 | Parameterised queries only work with `SELECT` queries - **Describe the unexpected behaviour**
When trying to use parameters for queries other than `SELECT`s, I get database exceptions for unexpected strings. Apologies if this is intended behaviour; I tried searching the issues but couldn't find anything open or closed.
**How to reproduce**
* ClickHouse 22.8.5.29
* Both HTTP and native interfaces
* Queries to run that lead to unexpected result:
```
POST http://localhost:8123/?param_uname=test&param_password=qwerty
CREATE USER {uname:Identifier} IDENITIFIED WITH plaintext_password BY {password:String}
```
**Expected behavior**
I would expect the substitutions to work in the same way `Identifier` or other data type placeholders do for `SELECT` queries.
**Error message and/or stacktrace**
Example:
```
Code: 62. DB::Exception: Syntax error: failed at position X ('{'): {uname:Identifier}. Expected one of: IF NOT EXISTS, OR REPLACE, UserNamesWithHost, identifier, string literal. (SYNTAX_ERROR)
Code: 62. DB::Exception: Syntax error: failed at position X ('{'): {password:String}. Expected one of: string literal, end of query. (SYNTAX_ERROR)
```
**Additional context**
I also found the same behaviour when creating roles and row policies, I haven't tried other statements yet, I just know it works fine for `SELECT`s.
| non_main | parameterised queries only work with select queries describe the unexpected behaviour when trying to use parameterised for queries other than select s i get database exceptions for unexpected strings apologies if this is intended behaviour i tried searching the issues but couldn t find anything open or closed how to reproduce clickhouse both http and native interfaces queries to run that lead to unexpected result post create user uname identifier idenitified with plaintext password by password string expected behavior i would expect the substitutions to work in the same way identifier or other data type placeholders do for select queries error message and or stacktrace example code db exception syntax error failed at position x uname identifier expected one of if not exists or replace usernameswithhost identifier string literal syntax error code db exception syntax error failed at position x password string expected one of string literal end of query syntax error additional context i also found the same behaviour when creating roles and row policies i haven t tried other statements yet i just know it works fine for select s | 0 |
1,411 | 2,753,689,192 | IssuesEvent | 2015-04-25 00:08:11 | deis/deis | https://api.github.com/repos/deis/deis | closed | Buildpack exception not canceling build | builder | I have an exception occurring when pushing an app using heroku-buildpack-ruby. The buildpack process throws an error, and yet the build goes through and deploys the app.
This is in a newly provisioned 1.4.0 cluster. | 1.0 | Buildpack exception not canceling build - I have an exception occurring when pushing an app using heroku-buildpack-ruby. The buildpack process throws an error, and yet the build goes through and deploys the app.
This is in a newly provisioned 1.4.0 cluster. | non_main | buildpack exception not canceling build i have an exception occurring when pushing an app using heroku buildpack ruby the buildpack process throws an error and yet the build goes through and deploys the app this is in a newly provisioned cluster | 0 |
11,181 | 7,460,420,362 | IssuesEvent | 2018-03-30 19:36:40 | smith-chem-wisc/MetaMorpheus | https://api.github.com/repos/smith-chem-wisc/MetaMorpheus | closed | Search stalled in new MM version and lengthy searches | Performance | Hi,
I've downloaded the new version and ran a search. The search stalled in "picking MS2 spectra". When I shut the run down, I got an "index out of range" error. The search was done on a single file running a search and G-PTM-D pipeline.
The second question is about slow searches. Before downloading the new version, I've put a quant set to search, running an initial search with wide ppm values and a small DB, then calibration, and then a full search and G-PTM-D.
Calibration took forever - 3 files took more than 8 hours. The software is installed on a 26-core VM on a Windows Enterprise 2008 R2 version. Could that be influencing the search speed?
Thanks!
David. | True | Search stalled in new MM version and lengthy searches - Hi,
I've downloaded the new version and ran a search. The search stalled in "picking MS2 spectra". When I shut the run down, I got an "index out of range" error. The search was done on a single file running a search and G-PTM-D pipeline.
The second question is about slow searches. Before downloading the new version, I've put a quant set to search, running an initial search with wide ppm values and a small DB, then calibration, and then a full search and G-PTM-D.
Calibration took forever - 3 files took more than 8 hours. The software is installed on a 26-core VM on a Windows Enterprise 2008 R2 version. Could that be influencing the search speed?
Thanks!
David. | non_main | search stalled in new mm version and lengthy searches hi i ve downloaded the new version and ran a search search stalled in picking spectra when i shut the run down i got index out of range error the search was done on a single file running a search and g ptm d pipeline the second question is of slow searches before downloading the new version i ve put a quant set to search running an initial search with wide ppm values and small db calibration and then full search and g ptm d calibration took forever files took more than hours the software is installed on a core vm on a windows enterprise version could that be influencing the search speed thanks david | 0 |
154,372 | 13,546,865,761 | IssuesEvent | 2020-09-17 02:30:04 | openmobilityfoundation/mobility-data-specification | https://api.github.com/repos/openmobilityfoundation/mobility-data-specification | closed | Create translation tables/example scripts for reconciled state machine | Agency Provider State Machine documentation | After #506 is merged, we'll have a reconciled state machine between Agency and Provider. A follow-up task is to create additional documentation/guidance around how to translate event data from pre-1.0.0 into the newly reconciled 1.0.0 set of states.
The guidance itself should live in the [governance repo](https://github.com/openmobilityfoundation/governance) and we should link to it from the Vehicle State docs that #506 will introduce. | 1.0 | Create translation tables/example scripts for reconciled state machine - After #506 is merged, we'll have a reconciled state machine between Agency and Provider. A follow-up task is to create additional documentation/guidance around how to translate event data from pre-1.0.0 into the newly reconciled 1.0.0 set of states.
The guidance itself should live in the [governance repo](https://github.com/openmobilityfoundation/governance) and we should link to it from the Vehicle State docs that #506 will introduce. | non_main | create translation tables example scripts for reconciled state machine after is merged we ll have a reconciled state machine between agency and provider a follow up task is to create additional documentation guidance around how to translate event data from pre into the newly reconciled set of states the guidance itself should live in the and we should link to it from the vehicle state docs that will introduce | 0 |
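A follow-up like this usually boils down to a lookup table from pre-1.0.0 event/state names to the reconciled 1.0.0 ones. As a hedged sketch of the shape such a translation table could take (the state names below are hypothetical placeholders, not the actual MDS vocabulary):

```python
# Hypothetical pre-1.0.0 -> reconciled 1.0.0 state mapping (placeholder names,
# not the real MDS vocabulary). Unmapped states come back as None so callers
# can flag them for manual review.
LEGACY_TO_RECONCILED = {
    "available": "available",
    "unavailable": "non_operational",
    "trip": "on_trip",
    "removed": "removed",
}

def translate_state(legacy_state):
    """Return the reconciled state for a legacy state, or None if unmapped."""
    return LEGACY_TO_RECONCILED.get(legacy_state)
```

An example script in the governance repo could apply such a mapping over an exported event log and report any states that come back as None.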
575,876 | 17,064,639,238 | IssuesEvent | 2021-07-07 05:10:21 | nerdguyahmad/randomstuff.py | https://api.github.com/repos/nerdguyahmad/randomstuff.py | opened | Plans not working with Joke and images endpoint | Priority: MEDIUM Type: Bug | Plans seem to not work and return an HTTPError for Bad Request when used on the jokes and images endpoints. | 1.0 | Plans not working with Joke and images endpoint - Plans seem to not work and return an HTTPError for Bad Request when used on the jokes and images endpoints. | non_main | plans not working with joke and images endpoint plans seem to not work and return httperror for bad request when using on jokes and images endpoint | 0
31,422 | 14,961,388,018 | IssuesEvent | 2021-01-27 07:40:31 | FenPhoenix/AngelLoader | https://api.github.com/repos/FenPhoenix/AngelLoader | opened | Improve 7-Zip scan performance to the extent possible | performance | More and more people are releasing FMs in .7z format, so even though they will never have acceptable performance IMO, I should at least try to minimize the performance hit to the extent I can.
Things we can do:
- **Smarter .7z extract in the scanner:** Currently, the scanner simply extracts the whole .7z archive to a temp folder and thereafter treats it like an extracted FM. This works, but is *extremely* slow. We unfortunately can't just extract only the files we need, because we won't know exactly what files we'll need until we read other files (missflag.str etc.). But what we can do is just extract every file we *may possibly* need, which will at least be some amount less than the total file count.
- **Caching of readmes during scans:** Even though readme caching is reasonably performant in most cases, we can make it instantaneous by telling the scanner to simply copy the readmes from the temp extract folder into the cache folder as soon as it has them. That way only one extract is needed. See T2X.7z, where it takes a million years to scan and then immediately afterward takes another million years to cache the readme. | True | Improve 7-Zip scan performance to the extent possible - More and more people are releasing FMs in .7z format, so even though they will never have acceptable performance IMO, I should at least try to minimize the performance hit to the extent I can.
Things we can do:
- **Smarter .7z extract in the scanner:** Currently, the scanner simply extracts the whole .7z archive to a temp folder and thereafter treats it like an extracted FM. This works, but is *extremely* slow. We unfortunately can't just extract only the files we need, because we won't know exactly what files we'll need until we read other files (missflag.str etc.). But what we can do is just extract every file we *may possibly* need, which will at least be some amount less than the total file count.
- **Caching of readmes during scans:** Even though readme caching is reasonably performant in most cases, we can make it instantaneous by telling the scanner to simply copy the readmes from the temp extract folder into the cache folder as soon as it has them. That way only one extract is needed. See T2X.7z, where it takes a million years to scan and then immediately afterward takes another million years to cache the readme. | non_main | improve zip scan performance to the extent possible more and more people are releasing fms in format so even though they will never have acceptable performance imo i should at least try to minimize the performance hit to the extent i can things we can do smarter extract in the scanner currently the scanner simply extracts the whole archive to a temp folder and thereafter treats it like an extracted fm this works but is extremely slow we unfortunately can t just extract only the files we need because we won t know exactly what files we ll need until we read other files missflag str etc but what we can do is just extract every file we may possibly need which will at least be some amount less than the total file count caching of readmes during scans even though readme caching is reasonably performant in most cases we can make it instantaneous by telling the scanner to simply copy the readmes from the temp extract folder into the cache folder as soon as it has them that way only one extract is need see where it takes a million years to scan and then immediately afterward takes another million years to cache the readme | 0
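The "extract every file we may possibly need" idea above is independent of the archive format. AngelLoader itself is a C# project working on .7z archives, but as a rough illustration, here is the same selective-extraction strategy sketched over Python's stdlib `zipfile`; the suffix filter is a made-up stand-in for the real missflag.str-driven logic:

```python
import io
import zipfile

# Suffixes we *may possibly* need during a scan (a made-up filter standing in
# for the real "read missflag.str, then decide" logic described above).
NEEDED_SUFFIXES = (".mis", ".str", ".txt")

def selective_extract(archive_bytes, dest):
    """Extract only possibly-needed members into `dest` (a dict standing in
    for a temp folder), instead of extracting the whole archive."""
    extracted = []
    with zipfile.ZipFile(io.BytesIO(archive_bytes)) as zf:
        for name in zf.namelist():
            if name.lower().endswith(NEEDED_SUFFIXES):
                dest[name] = zf.read(name)
                extracted.append(name)
    return extracted
```

A real implementation would also copy any extracted readmes straight into the cache folder at this point, so the archive only needs to be opened once.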
368,836 | 10,885,206,986 | IssuesEvent | 2019-11-18 09:56:31 | kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines | closed | System performance degrades with large numbers of runs | area/backend area/pipelines kind/bug priority/p0 | **What happened:**
On systems with lots of runs, listing experiments becomes unbearably slow.
**What did you expect to happen:**
The performance remains constant
**What steps did you take:**
I enabled slow query logging and identified the culprit query. Change #1836 attempted to address the issue, but it doesn't fix it.
1) The query that's generated by gorm is inherently inefficient. No index can make that query faster as it's currently written. In particular, the left join that does a select * from resource_references will need to materialize a temp table. That temp table will have no index for the next part of the join.
Here's an explain statement from our cluster that contains ~70k resource_references:
```
mysql> explain extended SELECT subq.*, CONCAT("[",GROUP_CONCAT(r.Payload SEPARATOR ","),"]") AS refs FROM (SELECT rd.*, CONCAT("[",GROUP_CONCAT(m.Payload SEPARATOR ","),"]") AS metrics FROM (SELECT UUID, DisplayName, Name, StorageState, Namespace, Description, CreatedAtInSec, ScheduledAtInSec, FinishedAtInSec, Conditions, PipelineId, PipelineSpecManifest, WorkflowSpecManifest, Parameters, pipelineRuntimeManifest, WorkflowRuntimeManifest FROM run_details) AS rd LEFT JOIN run_metrics AS m ON rd.UUID=m.RunUUID GROUP BY rd.UUID) AS subq LEFT JOIN (select * from resource_references where ResourceType='Run') AS r ON subq.UUID=r.ResourceUUID WHERE UUID = '5e9c12b0-d27d-11e9-86a0-42010a8000e5' GROUP BY subq.UUID LIMIT 1;
+----+-------------+---------------------+------+---------------+-------------+---------+-------+-------+----------+----------------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+---------------------+------+---------------+-------------+---------+-------+-------+----------+----------------------------------------------------+
| 1 | PRIMARY | <derived2> | ref | <auto_key0> | <auto_key0> | 257 | const | 10 | 100.00 | Using index condition |
| 1 | PRIMARY | <derived4> | ref | <auto_key0> | <auto_key0> | 257 | const | 10 | 100.00 | Using where |
| 4 | DERIVED | resource_references | ALL | NULL | NULL | NULL | NULL | 63229 | 100.00 | Using where |
| 2 | DERIVED | <derived3> | ALL | NULL | NULL | NULL | NULL | 19546 | 100.00 | Using temporary; Using filesort |
| 2 | DERIVED | m | ALL | PRIMARY | NULL | NULL | NULL | 1 | 100.00 | Using where; Using join buffer (Block Nested Loop) |
| 3 | DERIVED | run_details | ALL | NULL | NULL | NULL | NULL | 19546 | 100.00 | NULL |
+----+-------------+---------------------+------+---------------+-------------+---------+-------+-------+----------+----------------------------------------------------+
6 rows in set, 1 warning (6.90 sec)
```
**Anything else you would like to add:**
I'm not familiar enough with gorm to know how to force it to generate a more efficient query.
| 1.0 | System performance degrades with large numbers of runs - **What happened:**
On systems with lots of runs, listing experiments becomes unbearably slow.
**What did you expect to happen:**
The performance remains constant
**What steps did you take:**
I enabled slow query logging and identified the culprit query. Change #1836 attempted to address the issue, but it doesn't fix it.
1) The query that's generated by gorm is inherently inefficient. No index can make that query faster as it's currently written. In particular, the left join that does a select * from resource_references will need to materialize a temp table. That temp table will have no index for the next part of the join.
Here's an explain statement from our cluster that contains ~70k resource_references:
```
mysql> explain extended SELECT subq.*, CONCAT("[",GROUP_CONCAT(r.Payload SEPARATOR ","),"]") AS refs FROM (SELECT rd.*, CONCAT("[",GROUP_CONCAT(m.Payload SEPARATOR ","),"]") AS metrics FROM (SELECT UUID, DisplayName, Name, StorageState, Namespace, Description, CreatedAtInSec, ScheduledAtInSec, FinishedAtInSec, Conditions, PipelineId, PipelineSpecManifest, WorkflowSpecManifest, Parameters, pipelineRuntimeManifest, WorkflowRuntimeManifest FROM run_details) AS rd LEFT JOIN run_metrics AS m ON rd.UUID=m.RunUUID GROUP BY rd.UUID) AS subq LEFT JOIN (select * from resource_references where ResourceType='Run') AS r ON subq.UUID=r.ResourceUUID WHERE UUID = '5e9c12b0-d27d-11e9-86a0-42010a8000e5' GROUP BY subq.UUID LIMIT 1;
+----+-------------+---------------------+------+---------------+-------------+---------+-------+-------+----------+----------------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+---------------------+------+---------------+-------------+---------+-------+-------+----------+----------------------------------------------------+
| 1 | PRIMARY | <derived2> | ref | <auto_key0> | <auto_key0> | 257 | const | 10 | 100.00 | Using index condition |
| 1 | PRIMARY | <derived4> | ref | <auto_key0> | <auto_key0> | 257 | const | 10 | 100.00 | Using where |
| 4 | DERIVED | resource_references | ALL | NULL | NULL | NULL | NULL | 63229 | 100.00 | Using where |
| 2 | DERIVED | <derived3> | ALL | NULL | NULL | NULL | NULL | 19546 | 100.00 | Using temporary; Using filesort |
| 2 | DERIVED | m | ALL | PRIMARY | NULL | NULL | NULL | 1 | 100.00 | Using where; Using join buffer (Block Nested Loop) |
| 3 | DERIVED | run_details | ALL | NULL | NULL | NULL | NULL | 19546 | 100.00 | NULL |
+----+-------------+---------------------+------+---------------+-------------+---------+-------+-------+----------+----------------------------------------------------+
6 rows in set, 1 warning (6.90 sec)
```
**Anything else you would like to add:**
I'm not familiar enough with gorm to know how to force it to generate a more efficient query.
| non_main | system performance degrades with large numbers of runs what happened on systems with lots of runs listing experiments becomes unbearably slow what did you expect to happen the performance remains constant what steps did you take i enabled slow query logging and identified the culprit query change attempted to address the issue but it doesn t the query that s generated by gorm is inherently inefficient no index can make that query faster as it s currently written in particular the left join that does a select from resource references will need to materialize a temp table that temp table will have no index for the next part of the join here s an explain statement from our cluster that contains resource references mysql explain extended select subq concat as refs from select rd concat as metrics from select uuid displayname name storagestate namespace description createdatinsec scheduledatinsec finishedatinsec conditions pipelineid pipelinespecmanifest workflowspecmanifest parameters pipelineruntimemanifest workflowruntimemanifest from run details as rd left join run metrics as m on rd uuid m runuuid group by rd uuid as subq left join select from resource references where resourcetype run as r on subq uuid r resourceuuid where uuid group by subq uuid limit id select type table type possible keys key key len ref rows filtered extra primary ref const using index condition primary ref const using where derived resource references all null null null null using where derived all null null null null using temporary using filesort derived m all primary null null null using where using join buffer block nested loop derived run details all null null null null null rows in set warning sec anything else you would like to add i m not familiar enough with gorm to know how to force it to generate a more efficient query | 0 |
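The core problem described above, that the uuid filter is applied only after the derived tables are materialized, is a missing predicate pushdown. The following toy sketch (stdlib `sqlite3`, simplified stand-in tables, not the actual Kubeflow schema or a gorm fix) shows the two query shapes returning the same rows, with the second one restricting the base table before the join so an index on `resource_uuid` can help:

```python
import sqlite3

# Tiny stand-ins for run_details / resource_references (not the real schema).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE run_details (uuid TEXT PRIMARY KEY, name TEXT);
    CREATE TABLE resource_references (resource_uuid TEXT, resource_type TEXT, payload TEXT);
    CREATE INDEX idx_refs ON resource_references (resource_uuid);
""")
conn.execute("INSERT INTO run_details VALUES ('u1', 'run-one')")
conn.execute("INSERT INTO run_details VALUES ('u2', 'run-two')")
conn.execute("INSERT INTO resource_references VALUES ('u1', 'Run', 'p1')")

# Shape of the slow query: materialize a derived table of *all* references,
# then filter on uuid at the very end.
slow = """
    SELECT rd.uuid, r.payload
    FROM run_details rd
    LEFT JOIN (SELECT * FROM resource_references WHERE resource_type = 'Run') r
        ON rd.uuid = r.resource_uuid
    WHERE rd.uuid = 'u1'
"""

# Pushed-down version: restrict the base table first, so the join only ever
# touches the rows of interest and can use the index on resource_uuid.
fast = """
    SELECT rd.uuid, r.payload
    FROM (SELECT * FROM run_details WHERE uuid = 'u1') rd
    LEFT JOIN resource_references r
        ON rd.uuid = r.resource_uuid AND r.resource_type = 'Run'
"""

def rows(sql):
    return sorted(conn.execute(sql).fetchall())
```

Whether gorm can be coaxed into the second shape is a separate question, but the two forms are row-for-row equivalent for a single-uuid lookup.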
5,399 | 27,115,670,019 | IssuesEvent | 2023-02-15 18:22:00 | VA-Explorer/va_explorer | https://api.github.com/repos/VA-Explorer/va_explorer | closed | Make granularity work for more regions | Type: Maintainance Status: Inactive | **What is the expected state?**
The expected state would be to have the ability to filter down to additional granularity beyond province and district and for the code to be more generalized.
**What is the actual state?**
As it stands, things are hard-coded to "province" and "district" only.
**Relevant context**
- **```va_explorer/va_analytics/dash_apps/va_dashboard.py```**
See references to hard-coded values/comparisons of ```granularity```
| True | Make granularity work for more regions - **What is the expected state?**
The expected state would be to have the ability to filter down to additional granularity beyond province and district and for the code to be more generalized.
**What is the actual state?**
As it stands, things are hard-coded to "province" and "district" only.
**Relevant context**
- **```va_explorer/va_analytics/dash_apps/va_dashboard.py```**
See references to hard-coded values/comparisons of ```granularity```
| main | make granularity work for more regions what is the expected state the expected state would be to have the ability to filter down to additional granularity beyond province and district and for the code to be more generalized what is the actual state as it stands things are hard coded to province and district only relevant context va explorer va analytics dash apps va dashboard py see references to hard coded values comparisons of granularity | 1 |
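Generalizing beyond hard-coded "province" and "district" usually means treating the geographic levels as data rather than literals. A minimal sketch of the idea (the level names and their ordering are hypothetical, not taken from the VA Explorer codebase):

```python
# Hypothetical ordered hierarchy; a generalized dashboard would load this from
# configuration or the location tree instead of hard-coding the two names.
GEO_LEVELS = ["province", "district", "ward", "facility"]

def next_granularity(current):
    """Return the next-finer level after `current`, or None at the finest
    level. Raises ValueError if `current` is not a known level."""
    i = GEO_LEVELS.index(current)
    return GEO_LEVELS[i + 1] if i + 1 < len(GEO_LEVELS) else None
```

With the levels expressed as a list, the dashboard's drill-down comparisons become lookups against the list instead of string literals scattered through the code.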
24,401 | 12,290,680,504 | IssuesEvent | 2020-05-10 05:39:25 | conda-forge/status | https://api.github.com/repos/conda-forge/status | closed | Travis-CI ppc64le builds sometimes fail with no space left on device | degraded performance | One node on Travis-CI workers is the cause of this and restarting the build usually works.
Travis-CI has been informed. | True | Travis-CI ppc64le builds sometimes fail with no space left on device - One node on Travis-CI workers is the cause of this and restarting the build usually works.
Travis-CI has been informed. | non_main | travis ci builds sometimes fail with no space left on device one node on travis ci workers is the cause of this and restarting the build usually work travis ci has been informed | 0 |
1,109 | 2,532,226,039 | IssuesEvent | 2015-01-23 14:43:46 | AAndharia/ZIMS-School-Mgmt | https://api.github.com/repos/AAndharia/ZIMS-School-Mgmt | closed | Student Fee Receipt > Receipt number issue | bug Tested & Verified | http://screencast.com/t/MdoRXhGB
1. There is already one receipt number "201501/00003" in the database ... but the new receipt is showing the same receipt number...
2. I am not sure how you are assigning receipt numbers... but if there is a 0001 receipt number then the next should be 0002... it jumped directly to 0003... Please check the logic
1. There is already one receipt number "201501/00003" in the database ... but the new receipt is showing the same receipt number...
2. I am not sure how you are assigning receipt numbers... but if there is a 0001 receipt number then the next should be 0002... it jumped directly to 0003... Please check the logic | non_main | student fee receipt receipt number issue there is already one receipt number in database but in new receipt it is showing same receipt number i am not sure how you are assigning receipt number but if there is receipt number then next should be it directly came please check the logic | 0
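The numbering logic the reporter expects (after 0001 comes 0002) is a zero-padded per-period counter. A minimal sketch of one way to implement it; the 'YYYYMM/NNNNN' format is inferred from the "201501/00003" example in the report, not taken from the application's code:

```python
def next_receipt_number(last):
    """Given the last issued receipt number like '201501/00003', return the
    next one in sequence. The 'YYYYMM/NNNNN' format is inferred from the
    report's example, not from the application's code."""
    period, seq = last.split("/")
    return "%s/%05d" % (period, int(seq) + 1)
```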
5,209 | 26,464,332,116 | IssuesEvent | 2023-01-16 21:17:43 | bazelbuild/intellij | https://api.github.com/repos/bazelbuild/intellij | closed | Flag --incompatible_disable_starlark_host_transitions will break IntelliJ UE Plugin Google in Bazel 7.0 | type: bug product: IntelliJ topic: bazel awaiting-maintainer | Incompatible flag `--incompatible_disable_starlark_host_transitions` will be enabled by default in the next major release (Bazel 7.0), thus breaking IntelliJ UE Plugin Google. Please migrate to fix this and unblock the flip of this flag.
The flag is documented here: [bazelbuild/bazel#17032](https://github.com/bazelbuild/bazel/issues/17032).
Please check the following CI builds for build and test results:
- [Ubuntu 18.04 OpenJDK 11](https://buildkite.com/bazel/bazelisk-plus-incompatible-flags/builds/1365#0185154b-1f8a-4362-8723-a4a3623eea43)
- [Ubuntu 18.04 OpenJDK 11](https://buildkite.com/bazel/bazelisk-plus-incompatible-flags/builds/1365#0185154b-1f86-4245-92df-9232ddf91098)
- [Ubuntu 18.04 OpenJDK 11](https://buildkite.com/bazel/bazelisk-plus-incompatible-flags/builds/1365#0185154b-1f95-4b12-a5b0-5b835e6d7624)
- [Ubuntu 18.04 OpenJDK 11](https://buildkite.com/bazel/bazelisk-plus-incompatible-flags/builds/1365#0185154b-1f91-4ff0-9e9e-12f7b44155d6)
- [Ubuntu 18.04 OpenJDK 11](https://buildkite.com/bazel/bazelisk-plus-incompatible-flags/builds/1365#0185154b-1f98-44a8-9f91-f363f6c96d5e)
Never heard of incompatible flags before? We have [documentation](https://docs.bazel.build/versions/master/backward-compatibility.html) that explains everything.
If you have any questions, please file an issue in https://github.com/bazelbuild/continuous-integration. | True | Flag --incompatible_disable_starlark_host_transitions will break IntelliJ UE Plugin Google in Bazel 7.0 - Incompatible flag `--incompatible_disable_starlark_host_transitions` will be enabled by default in the next major release (Bazel 7.0), thus breaking IntelliJ UE Plugin Google. Please migrate to fix this and unblock the flip of this flag.
The flag is documented here: [bazelbuild/bazel#17032](https://github.com/bazelbuild/bazel/issues/17032).
Please check the following CI builds for build and test results:
- [Ubuntu 18.04 OpenJDK 11](https://buildkite.com/bazel/bazelisk-plus-incompatible-flags/builds/1365#0185154b-1f8a-4362-8723-a4a3623eea43)
- [Ubuntu 18.04 OpenJDK 11](https://buildkite.com/bazel/bazelisk-plus-incompatible-flags/builds/1365#0185154b-1f86-4245-92df-9232ddf91098)
- [Ubuntu 18.04 OpenJDK 11](https://buildkite.com/bazel/bazelisk-plus-incompatible-flags/builds/1365#0185154b-1f95-4b12-a5b0-5b835e6d7624)
- [Ubuntu 18.04 OpenJDK 11](https://buildkite.com/bazel/bazelisk-plus-incompatible-flags/builds/1365#0185154b-1f91-4ff0-9e9e-12f7b44155d6)
- [Ubuntu 18.04 OpenJDK 11](https://buildkite.com/bazel/bazelisk-plus-incompatible-flags/builds/1365#0185154b-1f98-44a8-9f91-f363f6c96d5e)
Never heard of incompatible flags before? We have [documentation](https://docs.bazel.build/versions/master/backward-compatibility.html) that explains everything.
If you have any questions, please file an issue in https://github.com/bazelbuild/continuous-integration. | main | flag incompatible disable starlark host transitions will break intellij ue plugin google in bazel incompatible flag incompatible disable starlark host transitions will be enabled by default in the next major release bazel thus breaking intellij ue plugin google please migrate to fix this and unblock the flip of this flag the flag is documented here please check the following ci builds for build and test results never heard of incompatible flags before we have that explains everything if you have any questions please file an issue in | 1 |
715,028 | 24,584,256,159 | IssuesEvent | 2022-10-13 18:13:01 | ScicraftLearn/MineLabs | https://api.github.com/repos/ScicraftLearn/MineLabs | closed | Portal block does not work when first creating a world | bug low priority Subatomic dimension outdated | When creating a new world, the first time a portal block is used, it does not work. After reloading the world, this issue is fixed. | 1.0 | Portal block does not work when first creating a world - When creating a new world, the first time a portal block is used, it does not work. After reloading the world, this issue is fixed. | non_main | portal block does not work when first creating a world when creating a new world the first time a portal block is used it does not work when reloading the world this issue is fixed | 0
1,284 | 5,429,658,091 | IssuesEvent | 2017-03-03 19:02:12 | duckduckgo/zeroclickinfo-goodies | https://api.github.com/repos/duckduckgo/zeroclickinfo-goodies | closed | Ruby on Rails 4 Cheat Sheet: | Maintainer Input Requested Suggestion | Maybe we should update this to Rails 5 now? Alternatively we can change the trigger to `Rails 4 cheatsheet` instead of `Rails cheatsheet`
---
IA Page: http://duck.co/ia/view/rails_cheat_sheet
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @Sayanc93
| True | Ruby on Rails 4 Cheat Sheet: - Maybe we should update this to Rails 5 now? Alternatively we can change the trigger to `Rails 4 cheatsheet` instead of `Rails cheatsheet`
---
IA Page: http://duck.co/ia/view/rails_cheat_sheet
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @Sayanc93
| main | ruby on rails cheat sheet maybe we should update this to rails now alternatively we can change the trigger to rails cheatsheet instead of rails cheatsheet ia page | 1 |
109,204 | 23,738,546,208 | IssuesEvent | 2022-08-31 10:15:50 | Onelinerhub/onelinerhub | https://api.github.com/repos/Onelinerhub/onelinerhub | opened | Short solution needed: "How to overlay (merge) two images" (php-gd) | help wanted good first issue code php-gd | Please help us write the most modern and shortest code solution for this issue:
**How to overlay (merge) two images** (technology: [php-gd](https://onelinerhub.com/php-gd))
### Fast way
Just write the code solution in the comments.
### Preferred way
1. Create a [pull request](https://github.com/Onelinerhub/onelinerhub/blob/main/how-to-contribute.md) with a new code file inside the [inbox folder](https://github.com/Onelinerhub/onelinerhub/tree/main/inbox).
2. Don't forget to [use comments](https://github.com/Onelinerhub/onelinerhub/blob/main/how-to-contribute.md#code-file-md-format) to explain the solution.
3. Link to this issue in the comments of the pull request. | 1.0 | Short solution needed: "How to overlay (merge) two images" (php-gd) - Please help us write the most modern and shortest code solution for this issue:
**How to overlay (merge) two images** (technology: [php-gd](https://onelinerhub.com/php-gd))
### Fast way
Just write the code solution in the comments.
### Preferred way
1. Create a [pull request](https://github.com/Onelinerhub/onelinerhub/blob/main/how-to-contribute.md) with a new code file inside the [inbox folder](https://github.com/Onelinerhub/onelinerhub/tree/main/inbox).
2. Don't forget to [use comments](https://github.com/Onelinerhub/onelinerhub/blob/main/how-to-contribute.md#code-file-md-format) to explain the solution.
3. Link to this issue in the comments of the pull request. | non_main | short solution needed how to overlay merge two images php gd please help us write most modern and shortest code solution for this issue how to overlay merge two images technology fast way just write the code solution in the comments prefered way create with a new code file inside don t forget to explain solution link to this issue in comments of pull request | 0
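The requested one-liner would normally use php-gd's `imagecopymerge`. As a language-neutral illustration of what a pct-opacity overlay computes per pixel (plain Python, not php-gd itself):

```python
def blend_pixel(top, bottom, pct):
    """Blend two RGB pixels the way a pct-opacity overlay does, per channel:
    result = top * pct/100 + bottom * (1 - pct/100)."""
    a = pct / 100.0
    return tuple(round(t * a + b * (1 - a)) for t, b in zip(top, bottom))

def overlay(top_img, bottom_img, pct):
    """Overlay two equal-sized images given as nested lists of RGB tuples."""
    return [
        [blend_pixel(t, b, pct) for t, b in zip(trow, brow)]
        for trow, brow in zip(top_img, bottom_img)
    ]
```

At pct=100 the top image replaces the bottom one; at pct=0 the bottom shows through unchanged, which matches the semantics of an opacity parameter.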
62,493 | 8,615,548,080 | IssuesEvent | 2018-11-19 20:54:33 | Azure/azure-iot-sdk-csharp | https://api.github.com/repos/Azure/azure-iot-sdk-csharp | closed | Message.DeliveryCount property description in reference documentation seems incorrect | area-documentation bug | Hi there! I'm the docs author at the [Azure .NET SDK docs site](https://docs.microsoft.com/dotnet/azure), and a customer submitted the following issue with documentation generated from the `///` comments in your API. Please be sure to @ them in your response. Thanks!
@ali-nazem commented on [Thu Jun 07 2018](https://github.com/Azure/azure-docs-sdk-dotnet/issues/668)
"Time when the message was received by the server" seems to be an irrelevant description for a property called "DeliveryCount".
It looks like this description was copy pasted from the description of "EnqueuedTimeUtc" property, hence still unclear what DeliveryCount property provides.
Best regards,
Ali
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: b9e589d5-b37d-c203-794e-edb04529240a
* Version Independent ID: a9d44046-5cbb-5ace-835d-26c797dd9709
* Content: [Message.DeliveryCount Property (Microsoft.Azure.Devices.Client)](https://docs.microsoft.com/en-us/dotnet/api/microsoft.azure.devices.client.message.deliverycount?view=azure-dotnet#Microsoft_Azure_Devices_Client_Message_DeliveryCount)
* Content Source: [xml/Microsoft.Azure.Devices.Client/Message.xml](https://github.com/Azure/azure-docs-sdk-dotnet/blob/master/xml/Microsoft.Azure.Devices.Client/Message.xml)
* Service: **iot-hub**
* GitHub Login: @erickson-doug
* Microsoft Alias: **douge**
| 1.0 | Message.DeliveryCount property description in reference documentation seems incorrect - Hi there! I'm the docs author at the [Azure .NET SDK docs site](https://docs.microsoft.com/dotnet/azure), and a customer submitted the following issue with documentation generated from the `///` comments in your API. Please be sure to @ them in your response. Thanks!
@ali-nazem commented on [Thu Jun 07 2018](https://github.com/Azure/azure-docs-sdk-dotnet/issues/668)
"Time when the message was received by the server" seems to be an irrelevant description for a property called "DeliveryCount".
It looks like this description was copy pasted from the description of "EnqueuedTimeUtc" property, hence still unclear what DeliveryCount property provides.
Best regards,
Ali
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: b9e589d5-b37d-c203-794e-edb04529240a
* Version Independent ID: a9d44046-5cbb-5ace-835d-26c797dd9709
* Content: [Message.DeliveryCount Property (Microsoft.Azure.Devices.Client)](https://docs.microsoft.com/en-us/dotnet/api/microsoft.azure.devices.client.message.deliverycount?view=azure-dotnet#Microsoft_Azure_Devices_Client_Message_DeliveryCount)
* Content Source: [xml/Microsoft.Azure.Devices.Client/Message.xml](https://github.com/Azure/azure-docs-sdk-dotnet/blob/master/xml/Microsoft.Azure.Devices.Client/Message.xml)
* Service: **iot-hub**
* GitHub Login: @erickson-doug
* Microsoft Alias: **douge**
| non_main | message deliverycount property description in reference documentation seems incorrect hi there i m the docs author at the and a customer submitted the following issue with documentation generated from the comments in your api please be sure to them in your response thanks ali nazem commented on time when the message was received by the server seems to be an irrelevant description for a property called deliverycount it looks like this description was copy pasted from the description of enqueuedtimeutc property hence still unclear what deliverycount property provides best regards ali document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service iot hub github login erickson doug microsoft alias douge | 0 |
3,465 | 13,282,885,567 | IssuesEvent | 2020-08-24 01:12:41 | amyjko/faculty | https://api.github.com/repos/amyjko/faculty | closed | Separate data from presentation | maintainability | It'd be much more convenient to edit data files without having to rebuild the js. Find a clean way to load them separately from source. | True | Separate data from presentation - It'd be much more convenient to edit data files without having to rebuild the js. Find a clean way to load them separately from source. | main | separate data from presentation it d be much more convenient to edit data files without having to rebuild the js find a clean way to load them separately from source | 1 |
5,115 | 26,045,632,933 | IssuesEvent | 2022-12-22 14:10:45 | centerofci/mathesar | https://api.github.com/repos/centerofci/mathesar | closed | Summarization suggestion aggregates columns instead of grouping, when the base column is a unique key column | type: bug work: backend status: ready restricted: maintainers | ## Description
* Select a table with a unique key column as the base table in Data Explorer, e.g. Patrons
* Add the unique key column, along with a few other columns, e.g. Email, first name, last name
* Summarize by the unique key column. Expect the other columns to be grouped; instead, notice that they are aggregated as a list.
Related comment/issue: https://github.com/centerofci/mathesar/issues/2145#issuecomment-1362358413
> _Summarize based on patron email (there should be LESS than 180 rows at this point, and only "Book Title" should show up as a List)_
> For this point, we'd expect patron first name and last name to be part of the grouped columns, instead they are getting aggregated as list. Summarization suggestion aggregates them instead of grouping.
> Currently, the list has a single value but it's still an array column so it appears inside a pill, which looks like this:

> What we'd expect is this:

| True | Summarization suggestion aggregates columns instead of grouping, when the base column is a unique key column - ## Description
* Select a table with a unique key column as the base table in Data Explorer Eg., Patrons
* Add the unique key column, along with a few other columns. Eg., Email, first name, last name
* Summarize by the unique key column. Expect the other columns to be grouped; instead, notice that they are aggregated as a list.
Related comment/issue: https://github.com/centerofci/mathesar/issues/2145#issuecomment-1362358413
> _Summarize based on patron email (there should be LESS than 180 rows at this point, and only "Book Title" should show up as a List)_
> For this point, we'd expect patron first name and last name to be part of the grouped columns, instead they are getting aggregated as list. Summarization suggestion aggregates them instead of grouping.
> Currently, the list has a single value but it's still an array column so it appears inside a pill, which looks like this:

> What we'd expect is this:

| main | summarization suggestion aggregates columns instead of grouping when the base column is a unique key column description select a table with a unique key column as the base table in data explorer eg patrons add the unique key column along with a few other columns eg email first name last name summarize by the unique key column expect the other columns to be grouped instead notice that they are aggregated as a list related comment issue summarize based on patron email there should be less than rows at this point and only book title should show up as a list for this point we d expect patron first name and last name to be part of the grouped columns instead they are getting aggregated as list summarization suggestion aggregates them instead of grouping currently the list has a single value but it s still an array column so it appears inside a pill which looks like this what we d expect is this | 1 |
3,566 | 14,272,421,936 | IssuesEvent | 2020-11-21 16:57:32 | MarcusWolschon/osmeditor4android | https://api.github.com/repos/MarcusWolschon/osmeditor4android | opened | Re-factor string literals in de.blau.android.osm.* | Maintainability Task | THere are quite a few duplicate string literals in the OsmXml and OsmParser that should be turned in to constants. | True | Re-factor string literals in de.blau.android.osm.* - THere are quite a few duplicate string literals in the OsmXml and OsmParser that should be turned in to constants. | main | re factor string literals in de blau android osm there are quite a few duplicate string literals in the osmxml and osmparser that should be turned in to constants | 1 |
193,338 | 22,216,135,113 | IssuesEvent | 2022-06-08 01:59:34 | AlexRogalskiy/github-action-open-jscharts | https://api.github.com/repos/AlexRogalskiy/github-action-open-jscharts | closed | CVE-2021-44907 (High) detected in qs-6.5.2.tgz - autoclosed | security vulnerability | ## CVE-2021-44907 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>qs-6.5.2.tgz</b></p></summary>
<p>A querystring parser that supports nesting and arrays, with a depth limit</p>
<p>Library home page: <a href="https://registry.npmjs.org/qs/-/qs-6.5.2.tgz">https://registry.npmjs.org/qs/-/qs-6.5.2.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/npm/node_modules/qs/package.json,/node_modules/request/node_modules/qs/package.json</p>
<p>
Dependency Hierarchy:
- jest-27.0.0-next.2.tgz (Root Library)
- jest-cli-27.0.0-next.2.tgz
- jest-config-27.0.0-next.2.tgz
- jest-environment-jsdom-27.0.0-next.1.tgz
- jsdom-16.4.0.tgz
- request-2.88.2.tgz
- :x: **qs-6.5.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/github-action-open-jscharts/commit/bd85a148d84f20fa7abd580c8fce60cb02f3ba92">bd85a148d84f20fa7abd580c8fce60cb02f3ba92</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A Denial of Service vulnerability exists in qs up to 6.8.0 due to insufficient sanitization of property in the qs.parse function. The merge() function allows the assignment of properties on an array in the query. For any property being assigned, a value in the array is converted to an object containing these properties. Essentially, this means that the property whose expected type is Array always has to be checked with Array.isArray() by the user. This may not be obvious to the user and can cause unexpected behavior.
<p>Publish Date: 2022-03-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-44907>CVE-2021-44907</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-44907">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-44907</a></p>
<p>Release Date: 2022-03-17</p>
<p>Fix Resolution: qs - 6.8.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-44907 (High) detected in qs-6.5.2.tgz - autoclosed - ## CVE-2021-44907 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>qs-6.5.2.tgz</b></p></summary>
<p>A querystring parser that supports nesting and arrays, with a depth limit</p>
<p>Library home page: <a href="https://registry.npmjs.org/qs/-/qs-6.5.2.tgz">https://registry.npmjs.org/qs/-/qs-6.5.2.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/npm/node_modules/qs/package.json,/node_modules/request/node_modules/qs/package.json</p>
<p>
Dependency Hierarchy:
- jest-27.0.0-next.2.tgz (Root Library)
- jest-cli-27.0.0-next.2.tgz
- jest-config-27.0.0-next.2.tgz
- jest-environment-jsdom-27.0.0-next.1.tgz
- jsdom-16.4.0.tgz
- request-2.88.2.tgz
- :x: **qs-6.5.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/github-action-open-jscharts/commit/bd85a148d84f20fa7abd580c8fce60cb02f3ba92">bd85a148d84f20fa7abd580c8fce60cb02f3ba92</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A Denial of Service vulnerability exists in qs up to 6.8.0 due to insufficient sanitization of property in the qs.parse function. The merge() function allows the assignment of properties on an array in the query. For any property being assigned, a value in the array is converted to an object containing these properties. Essentially, this means that the property whose expected type is Array always has to be checked with Array.isArray() by the user. This may not be obvious to the user and can cause unexpected behavior.
<p>Publish Date: 2022-03-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-44907>CVE-2021-44907</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-44907">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-44907</a></p>
<p>Release Date: 2022-03-17</p>
<p>Fix Resolution: qs - 6.8.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve high detected in qs tgz autoclosed cve high severity vulnerability vulnerable library qs tgz a querystring parser that supports nesting and arrays with a depth limit library home page a href path to dependency file package json path to vulnerable library node modules npm node modules qs package json node modules request node modules qs package json dependency hierarchy jest next tgz root library jest cli next tgz jest config next tgz jest environment jsdom next tgz jsdom tgz request tgz x qs tgz vulnerable library found in head commit a href vulnerability details a denial of service vulnerability exists in qs up to due to insufficient sanitization of property in the gs parse function the merge function allows the assignment of properties on an array in the query for any property being assigned a value in the array is converted to an object containing these properties essentially this means that the property whose expected type is array always has to be checked with array isarray by the user this may not be obvious to the user and can cause unexpected behavior publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution qs step up your open source security game with whitesource | 0 |
3,194 | 12,227,801,423 | IssuesEvent | 2020-05-03 16:43:23 | gfleetwood/asteres | https://api.github.com/repos/gfleetwood/asteres | opened | ericmjl/data-testing-tutorial (36561859) | Jupyter Notebook data science maintain | https://github.com/ericmjl/data-testing-tutorial
A short tutorial for data scientists on how to write tests for code + data. | True | ericmjl/data-testing-tutorial (36561859) - https://github.com/ericmjl/data-testing-tutorial
A short tutorial for data scientists on how to write tests for code + data. | main | ericmjl data testing tutorial a short tutorial for data scientists on how to write tests for code data | 1 |
52,891 | 27,817,441,858 | IssuesEvent | 2023-03-18 21:07:55 | defold/defold | https://api.github.com/repos/defold/defold | closed | The editor is laggy on my system. | bug editor linux performance | **Describe the bug (REQUIRED)**
A clear and concise description of what the bug is.
The entire application runs very slow. It uses 100% cpu despite only scrolling down the game.project file.
**To Reproduce (REQUIRED)**
Steps to reproduce the behavior:
1. Open Application
2. See that it runs very slow.
**Expected behavior (REQUIRED)**
A clear and concise description of what you expected to happen.
1. Open Application
2. See that it runs smoothly.
**Defold version (REQUIRED):**
- Version 1.2.187
**Platforms (REQUIRED):**
- Platforms: Linux (Kernal: 5.14.2-1-MANJARO)
- OS: Manjaro
- Device: Computer
- CPU: AMD Ryzen 9 5950x
- GPU: AMD Radeon RX 6900 XT (Running bleeding edge mesa-git drivers)
- MEMORY: 32GB
**Screenshots (OPTIONAL):**
If applicable, add screenshots to help explain your problem.

| True | The editor is laggy on my system. - **Describe the bug (REQUIRED)**
A clear and concise description of what the bug is.
The entire application runs very slow. It uses 100% cpu despite only scrolling down the game.project file.
**To Reproduce (REQUIRED)**
Steps to reproduce the behavior:
1. Open Application
2. See that it runs very slow.
**Expected behavior (REQUIRED)**
A clear and concise description of what you expected to happen.
1. Open Application
2. See that it runs smoothly.
**Defold version (REQUIRED):**
- Version 1.2.187
**Platforms (REQUIRED):**
- Platforms: Linux (Kernel: 5.14.2-1-MANJARO)
- OS: Manjaro
- Device: Computer
- CPU: AMD Ryzen 9 5950x
- GPU: AMD Radeon RX 6900 XT (Running bleeding edge mesa-git drivers)
- MEMORY: 32GB
**Screenshots (OPTIONAL):**
If applicable, add screenshots to help explain your problem.

| non_main | the editor is laggy on my system describe the bug required a clear and concise description of what the bug is the entire application runs very slow it uses cpu despite only scrolling down the game project file to reproduce required steps to reproduce the behavior open application see that it runs very slow expected behavior required a clear and concise description of what you expected to happen open application see that it runs smoothly defold version required version platforms required platforms linux kernal manjaro os manjaro device computer cpu amd ryzen gpu amd radeon rx xt running bleeding edge mesa git drivers memory screenshots optional if applicable add screenshots to help explain your problem | 0 |
3,747 | 15,771,508,241 | IssuesEvent | 2021-03-31 20:38:22 | tgstation/tgstation | https://api.github.com/repos/tgstation/tgstation | closed | Welding tools override tool_check_callback | Maintainability/Hinders improvements Not a bug | https://github.com/tgstation/tgstation/blob/master/code/game/objects/items.dm#L767
https://github.com/tgstation/tgstation/blob/master/code/game/objects/items/tools/weldingtool.dm#L240
As an aside, why does nearly everything call tool_start_check and use_tool with amount=0? | True | Welding tools override tool_check_callback - https://github.com/tgstation/tgstation/blob/master/code/game/objects/items.dm#L767
https://github.com/tgstation/tgstation/blob/master/code/game/objects/items/tools/weldingtool.dm#L240
As an aside, why does nearly everything call tool_start_check and use_tool with amount=0? | main | welding tools override tool check callback as an aside why does nearly everything call tool start check and use tool with amount | 1 |
87,545 | 17,296,375,449 | IssuesEvent | 2021-07-25 20:18:52 | tefra/xsdata | https://api.github.com/repos/tefra/xsdata | closed | Redesign sequential fields | codegen proposal | ```xml
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
<xs:element name="first">
<xs:annotation>
<xs:documentation>
Sequence is not repeating, fields are declared in the same order: do nothing
<a />
<b />
<b />
<c />
</xs:documentation>
</xs:annotation>
<xs:complexType>
<xs:sequence>
<xs:element name="a" type="xs:string" />
<xs:element name="b" type="xs:string" maxOccurs="2" />
<xs:element name="c" type="xs:string" />
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="second">
<xs:annotation>
<xs:documentation>
Repeatable sequence of non list elements: current sequential
<a />
<b />
<c />
<a />
<b />
<c />
</xs:documentation>
</xs:annotation>
<xs:complexType>
<xs:sequence maxOccurs="2">
<xs:element name="a" type="xs:string" />
<xs:element name="b" type="xs:string" />
<xs:element name="c" type="xs:string" />
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="third">
<xs:annotation>
<xs:documentation>
Repeatable sequence of both list and non list elements: we rely on compound fields to get it working
<a />
<a />
<b />
<c />
<c />
<a />
<a />
<b />
<c />
<c />
</xs:documentation>
</xs:annotation>
<xs:complexType>
<xs:sequence maxOccurs="2">
<xs:element name="a" type="xs:string" maxOccurs="2" />
<xs:element name="b" type="xs:string" />
<xs:element name="c" type="xs:string" maxOccurs="2" />
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="fourth">
<xs:annotation>
<xs:documentation>
Whatever....Why...
<a />
<b />
<c />
<c />
<a />
<b />
<b />
<a />
</xs:documentation>
</xs:annotation>
<xs:complexType>
<xs:sequence>
<xs:element name="a" type="xs:string" />
<xs:element name="b" type="xs:string" />
<xs:element name="c" type="xs:string" maxOccurs="2" />
<xs:sequence maxOccurs="2">
<xs:element name="a" type="xs:string" />
<xs:element name="b" type="xs:string" minOccurs="0" maxOccurs="2" />
</xs:sequence>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:schema>
```
To get the 3-4 cases working xsdata relies on effective choices feature #433 and compound fields.
Currently the sequential metadata property is boolean and the xs:sequence min/max occurs is merged into the element's normal min/max occurs. Things get more complicated when extending types and the class analyzer tries to merge inherited restrictions.
I want to try to decouple sequence min/max occurs from the child elements min/max occurs during class analyzer, and use the sanitizer in the end to set the fields real min/max occurs values and the sequential **break** number.
The fourth case is all over the place and I am not sure it can even be done with dataclasses without the compound fields. Maybe turn the sequential property into a list of break points?
I was kind of excited until I reached writing down case no4, worth the effort?? I am not so sure but it still bothers me the fact that the sequence and elements min/max occurs are tight coupled. | 1.0 | Redesign sequential fields - ```xml
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
<xs:element name="first">
<xs:annotation>
<xs:documentation>
Sequence is not repeating, fields are declared in the same order: do nothing
<a />
<b />
<b />
<c />
</xs:documentation>
</xs:annotation>
<xs:complexType>
<xs:sequence>
<xs:element name="a" type="xs:string" />
<xs:element name="b" type="xs:string" maxOccurs="2" />
<xs:element name="c" type="xs:string" />
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="second">
<xs:annotation>
<xs:documentation>
Repeatable sequence of non list elements: current sequential
<a />
<b />
<c />
<a />
<b />
<c />
</xs:documentation>
</xs:annotation>
<xs:complexType>
<xs:sequence maxOccurs="2">
<xs:element name="a" type="xs:string" />
<xs:element name="b" type="xs:string" />
<xs:element name="c" type="xs:string" />
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="third">
<xs:annotation>
<xs:documentation>
Repeatable sequence of both list and non list elements: we rely on compound fields to get it working
<a />
<a />
<b />
<c />
<c />
<a />
<a />
<b />
<c />
<c />
</xs:documentation>
</xs:annotation>
<xs:complexType>
<xs:sequence maxOccurs="2">
<xs:element name="a" type="xs:string" maxOccurs="2" />
<xs:element name="b" type="xs:string" />
<xs:element name="c" type="xs:string" maxOccurs="2" />
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="fourth">
<xs:annotation>
<xs:documentation>
Whatever....Why...
<a />
<b />
<c />
<c />
<a />
<b />
<b />
<a />
</xs:documentation>
</xs:annotation>
<xs:complexType>
<xs:sequence>
<xs:element name="a" type="xs:string" />
<xs:element name="b" type="xs:string" />
<xs:element name="c" type="xs:string" maxOccurs="2" />
<xs:sequence maxOccurs="2">
<xs:element name="a" type="xs:string" />
<xs:element name="b" type="xs:string" minOccurs="0" maxOccurs="2" />
</xs:sequence>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:schema>
```
To get the 3-4 cases working xsdata relies on effective choices feature #433 and compound fields.
Currently the sequential metadata property is boolean and the xs:sequence min/max occurs is merged into the element's normal min/max occurs. Things get more complicated when extending types and the class analyzer tries to merge inherited restrictions.
I want to try to decouple sequence min/max occurs from the child elements min/max occurs during class analyzer, and use the sanitizer in the end to set the fields real min/max occurs values and the sequential **break** number.
The fourth case is all over the place and I am not sure it can even be done with dataclasses without the compound fields. Maybe turn the sequential property into a list of break points?
I was kind of excited until I reached writing down case no4, worth the effort?? I am not so sure but it still bothers me the fact that the sequence and elements min/max occurs are tight coupled. | non_main | redesign sequential fields xml xs schema xmlns xs sequence is not repeating fields are declared in the same order do nothing repeatable sequence of non list elements current sequential repeatable sequence of both list and non list elements we rely on compound fields to get it working whatever why to get the cases working xsdata relies on effective choices feature and compound fields currently the sequential metadata property is boolean and the xs sequence min max occurs is merged into the element s normal min max occurs things get more complicated when extending types and the class analyzer tries to merge inherited restrictions i want to try to decouple sequence min max occurs from the child elements min max occurs during class analyzer and use the sanitizer in the end to set the fields real min max occurs values and the sequential break number the fourth case is all over the place and i am not sure it can even be done with dataclasses without the compound fields maybe turn the sequential property into a list of break points i was kind of excited until i reached writing down case worth the effort i am not so sure but it still bothers me the fact that the sequence and elements min max occurs are tight coupled | 0 |
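For the second case above (a repeatable sequence of non-list elements), generated bindings typically turn each child into a list field flagged as sequential so the serializer can interleave values a/b/c, a/b/c. The dataclass below is a hand-written sketch in xsdata's style — illustrative only, not real generated output — together with a toy interleaver showing what the flag is for.

```python
from dataclasses import dataclass, field, fields
from typing import List


@dataclass
class Second:
    """Sketch of bindings for the repeatable <xs:sequence maxOccurs="2">:
    each child becomes a list, and `sequential` tells the serializer to
    emit the lists interleaved (a, b, c, a, b, c) instead of grouped."""
    a: List[str] = field(default_factory=list,
                         metadata={"type": "Element", "sequential": True})
    b: List[str] = field(default_factory=list,
                         metadata={"type": "Element", "sequential": True})
    c: List[str] = field(default_factory=list,
                         metadata={"type": "Element", "sequential": True})


def interleave(obj):
    """Illustrative rendering order for sequential fields: walk the
    parallel lists index by index, yielding (name, value) pairs."""
    seq_fields = [f for f in fields(obj) if f.metadata.get("sequential")]
    cols = [(f.name, getattr(obj, f.name)) for f in seq_fields]
    order = []
    longest = max((len(values) for _, values in cols), default=0)
    for i in range(longest):
        for name, values in cols:
            if i < len(values):
                order.append((name, values[i]))
    return order
```

Decoupling the sequence's own min/max occurs from each child's, as the issue proposes, would amount to attaching the repetition bound to this group of fields rather than folding it into every child's occurrence counts.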
276,341 | 30,449,420,559 | IssuesEvent | 2023-07-16 05:00:22 | KOSASIH/Intersocial | https://api.github.com/repos/KOSASIH/Intersocial | opened | h2-2.1.214.jar: 1 vulnerabilities (highest severity is: 7.8) | Mend: dependency security vulnerability | <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>h2-2.1.214.jar</b></p></summary>
<p>H2 Database Engine</p>
<p>Library home page: <a href="https://h2database.com">https://h2database.com</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/h2database/h2/2.1.214/h2-2.1.214.jar</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/KOSASIH/Intersocial/commit/be2e84e93cb27ba11fa27ebc684da911e1ff09d4">be2e84e93cb27ba11fa27ebc684da911e1ff09d4</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (h2 version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2022-45868](https://www.mend.io/vulnerability-database/CVE-2022-45868) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> High | 7.8 | h2-2.1.214.jar | Direct | com.h2database:h2:2.2.220 | ❌ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> CVE-2022-45868</summary>
### Vulnerable Library - <b>h2-2.1.214.jar</b></p>
<p>H2 Database Engine</p>
<p>Library home page: <a href="https://h2database.com">https://h2database.com</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/h2database/h2/2.1.214/h2-2.1.214.jar</p>
<p>
Dependency Hierarchy:
- :x: **h2-2.1.214.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/KOSASIH/Intersocial/commit/be2e84e93cb27ba11fa27ebc684da911e1ff09d4">be2e84e93cb27ba11fa27ebc684da911e1ff09d4</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
The web-based admin console in H2 Database Engine through 2.1.214 can be started via the CLI with the argument -webAdminPassword, which allows the user to specify the password in cleartext for the web admin console. Consequently, a local user (or an attacker that has obtained local access through some means) would be able to discover the password by listing processes and their arguments. NOTE: the vendor states "This is not a vulnerability of H2 Console ... Passwords should never be passed on the command line and every qualified DBA or system administrator is expected to know that."
<p>Publish Date: 2022-11-23
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-45868>CVE-2022-45868</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.8</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-22wj-vf5f-wrvj">https://github.com/advisories/GHSA-22wj-vf5f-wrvj</a></p>
<p>Release Date: 2022-11-23</p>
<p>Fix Resolution: com.h2database:h2:2.2.220</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details> | True | h2-2.1.214.jar: 1 vulnerabilities (highest severity is: 7.8) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>h2-2.1.214.jar</b></p></summary>
<p>H2 Database Engine</p>
<p>Library home page: <a href="https://h2database.com">https://h2database.com</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/h2database/h2/2.1.214/h2-2.1.214.jar</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/KOSASIH/Intersocial/commit/be2e84e93cb27ba11fa27ebc684da911e1ff09d4">be2e84e93cb27ba11fa27ebc684da911e1ff09d4</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (h2 version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2022-45868](https://www.mend.io/vulnerability-database/CVE-2022-45868) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> High | 7.8 | h2-2.1.214.jar | Direct | com.h2database:h2:2.2.220 | ❌ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> CVE-2022-45868</summary>
### Vulnerable Library - <b>h2-2.1.214.jar</b></p>
<p>H2 Database Engine</p>
<p>Library home page: <a href="https://h2database.com">https://h2database.com</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/h2database/h2/2.1.214/h2-2.1.214.jar</p>
<p>
Dependency Hierarchy:
- :x: **h2-2.1.214.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/KOSASIH/Intersocial/commit/be2e84e93cb27ba11fa27ebc684da911e1ff09d4">be2e84e93cb27ba11fa27ebc684da911e1ff09d4</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
The web-based admin console in H2 Database Engine through 2.1.214 can be started via the CLI with the argument -webAdminPassword, which allows the user to specify the password in cleartext for the web admin console. Consequently, a local user (or an attacker that has obtained local access through some means) would be able to discover the password by listing processes and their arguments. NOTE: the vendor states "This is not a vulnerability of H2 Console ... Passwords should never be passed on the command line and every qualified DBA or system administrator is expected to know that."
<p>Publish Date: 2022-11-23
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-45868>CVE-2022-45868</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.8</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-22wj-vf5f-wrvj">https://github.com/advisories/GHSA-22wj-vf5f-wrvj</a></p>
<p>Release Date: 2022-11-23</p>
<p>Fix Resolution: com.h2database:h2:2.2.220</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details> | non_main | jar vulnerabilities highest severity is vulnerable library jar database engine library home page a href path to dependency file pom xml path to vulnerable library home wss scanner repository com jar found in head commit a href vulnerabilities cve severity cvss dependency type fixed in version remediation available high jar direct com details cve vulnerable library jar database engine library home page a href path to dependency file pom xml path to vulnerable library home wss scanner repository com jar dependency hierarchy x jar vulnerable library found in head commit a href found in base branch main vulnerability details the web based admin console in database engine through can be started via the cli with the argument webadminpassword which allows the user to specify the password in cleartext for the web admin console consequently a local user or an attacker that has obtained local access through some means would be able to discover the password by listing processes and their arguments note the vendor states this is not a vulnerability of console passwords should never be passed on the command line and every qualified dba or system administrator is expected to know that publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com step up your open source security game with mend | 0 |
289,848 | 8,877,256,863 | IssuesEvent | 2019-01-12 22:40:33 | techforbetter/myPickle | https://api.github.com/repos/techforbetter/myPickle | opened | Create log out functionality | priority-3 | Logged in user can select to log out
- [ ] removes token from localstorage
- [ ] redirects user to homepage | 1.0 | Create log out functionality - Logged in user can select to log out
- [ ] removes token from localstorage
- [ ] redirects user to homepage | non_main | create log out functionality logged in user can select to log out removes token from localstorage redirects user to homepage | 0 |
1,011 | 4,792,786,552 | IssuesEvent | 2016-10-31 16:24:26 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | dnf module doesn't work with dnf-2.0 API | affects_2.2 bug_report waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
packaging/os/dnf.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
devel, 2.2, 2.1
Current RPM versions:
* ansible-2.1.2.0-1.fc24.noarch
* python2-dnf-2.0.0-0.rc1.4.fc26.noarch (on the managed host)
```
##### OS / ENVIRONMENT
Fedora Rawhide
##### SUMMARY
Originally reported here: https://bugzilla.redhat.com/show_bug.cgi?id=1387621
When trying to use the 'dnf' module in a task, we're getting tracebacks instead of a successful install. Diagnosed in the bugzilla report as usage of functions in DNF-1.x that have not been kept in the DNF-2.0 API. Since the module will run against both dnf-1.x and dnf-2.x hosts, the fix for this will need to be able to do the right thing for both.
##### STEPS TO REPRODUCE
On a fedora rawhide host:
```
ansible localhost -m dnf -a 'state=present name=python3-q '
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
Successful install of the package
##### ACTUAL RESULTS
```
fatal: [IP_ADDRESS_REMOVED]: FAILED! => {"changed": false, "failed": true, "module_stderr": "", "module_stdout": "No handlers could be found for logger \"dnf\"\r\nTraceback (most recent call last):
File \"/tmp/ansible_LDnNIF/ansible_module_dnf.py\", line 355, in <module>
main()
File \"/tmp/ansible_LDnNIF/ansible_module_dnf.py\", line 349, in main
ensure(module, base, params['state'], params['name'])
File \"/tmp/ansible_LDnNIF/ansible_module_dnf.py\", line 240, in ensure
pkg_specs, group_specs, filenames = cli.commands.parse_spec_group_file(
AttributeError: 'module' object has no attribute 'parse_spec_group_file'\r\n", "msg": "MODULE FAILURE"}
```
| True | dnf module doesn't work with dnf-2.0 API - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
packaging/os/dnf.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
devel, 2.2, 2.1
Current RPM versions:
* ansible-2.1.2.0-1.fc24.noarch
* python2-dnf-2.0.0-0.rc1.4.fc26.noarch (on the managed host)
```
##### OS / ENVIRONMENT
Fedora Rawhide
##### SUMMARY
Originally reported here: https://bugzilla.redhat.com/show_bug.cgi?id=1387621
When trying to use the 'dnf' module in a task, we're getting tracebacks instead of a successful install. Diagnosed in the bugzilla report as usage of functions in DNF-1.x that have not been kept in the DNF-2.0 API. Since the module will run against both dnf-1.x and dnf-2.x hosts, the fix for this will need to be able to do the right thing for both.
##### STEPS TO REPRODUCE
On a fedora rawhide host:
```
ansible localhost -m dnf -a 'state=present name=python3-q '
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
Successful install of the package
##### ACTUAL RESULTS
```
fatal: [IP_ADDRESS_REMOVED]: FAILED! => {"changed": false, "failed": true, "module_stderr": "", "module_stdout": "No handlers could be found for logger \"dnf\"\r\nTraceback (most recent call last):
File \"/tmp/ansible_LDnNIF/ansible_module_dnf.py\", line 355, in <module>
main()
File \"/tmp/ansible_LDnNIF/ansible_module_dnf.py\", line 349, in main
ensure(module, base, params['state'], params['name'])
File \"/tmp/ansible_LDnNIF/ansible_module_dnf.py\", line 240, in ensure
pkg_specs, group_specs, filenames = cli.commands.parse_spec_group_file(
AttributeError: 'module' object has no attribute 'parse_spec_group_file'\r\n", "msg": "MODULE FAILURE"}
```
| main | dnf module doesn t work with dnf api issue type bug report component name packaging os dnf py ansible version devel current rpm versions ansible noarch dnf noarch on the managed host os environment fedora rawhide summary originally reported here when trying to use the dnf module in a task we re getting tracebacks instead of successful install diagnosed in the bugzilla report as being usage of functions in dnf x that have not been kept in the dnf api since the module will run against both dnf x and dnf x hosts the fix for this will need to be able to do the right thing for both steps to reproduce on a fedora rawhide host ansible localhost m dnf a state present name q expected results successful install of the package actual results fatal failed changed false failed true module stderr module stdout no handlers could be found for logger dnf r ntraceback most recent call last file tmp ansible ldnnif ansible module dnf py line in main file tmp ansible ldnnif ansible module dnf py line in main ensure module base params params file tmp ansible ldnnif ansible module dnf py line in ensure pkg specs group specs filenames cli commands parse spec group file attributeerror module object has no attribute parse spec group file r n msg module failure | 1 |
126,055 | 17,868,608,626 | IssuesEvent | 2021-09-06 12:41:32 | Jellyfrog/librenms | https://api.github.com/repos/Jellyfrog/librenms | opened | CVE-2021-23424 (High) detected in ansi-html-0.0.7.tgz | security vulnerability | ## CVE-2021-23424 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ansi-html-0.0.7.tgz</b></p></summary>
<p>An elegant lib that converts the chalked (ANSI) text to HTML.</p>
<p>Library home page: <a href="https://registry.npmjs.org/ansi-html/-/ansi-html-0.0.7.tgz">https://registry.npmjs.org/ansi-html/-/ansi-html-0.0.7.tgz</a></p>
<p>Path to dependency file: librenms/package.json</p>
<p>Path to vulnerable library: librenms/node_modules/ansi-html/package.json</p>
<p>
Dependency Hierarchy:
- laravel-mix-6.0.19.tgz (Root Library)
- webpack-dev-server-4.0.0-beta.2.tgz
- :x: **ansi-html-0.0.7.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Jellyfrog/librenms/commit/ca5eeedb8f1d5925eee53f8b965f4978bceec84f">ca5eeedb8f1d5925eee53f8b965f4978bceec84f</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects all versions of package ansi-html. If an attacker provides a malicious string, it will get stuck processing the input for an extremely long time.
<p>Publish Date: 2021-08-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23424>CVE-2021-23424</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-23424 (High) detected in ansi-html-0.0.7.tgz - ## CVE-2021-23424 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ansi-html-0.0.7.tgz</b></p></summary>
<p>An elegant lib that converts the chalked (ANSI) text to HTML.</p>
<p>Library home page: <a href="https://registry.npmjs.org/ansi-html/-/ansi-html-0.0.7.tgz">https://registry.npmjs.org/ansi-html/-/ansi-html-0.0.7.tgz</a></p>
<p>Path to dependency file: librenms/package.json</p>
<p>Path to vulnerable library: librenms/node_modules/ansi-html/package.json</p>
<p>
Dependency Hierarchy:
- laravel-mix-6.0.19.tgz (Root Library)
- webpack-dev-server-4.0.0-beta.2.tgz
- :x: **ansi-html-0.0.7.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Jellyfrog/librenms/commit/ca5eeedb8f1d5925eee53f8b965f4978bceec84f">ca5eeedb8f1d5925eee53f8b965f4978bceec84f</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects all versions of package ansi-html. If an attacker provides a malicious string, it will get stuck processing the input for an extremely long time.
<p>Publish Date: 2021-08-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23424>CVE-2021-23424</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve high detected in ansi html tgz cve high severity vulnerability vulnerable library ansi html tgz an elegant lib that converts the chalked ansi text to html library home page a href path to dependency file librenms package json path to vulnerable library librenms node modules ansi html package json dependency hierarchy laravel mix tgz root library webpack dev server beta tgz x ansi html tgz vulnerable library found in head commit a href found in base branch master vulnerability details this affects all versions of package ansi html if an attacker provides a malicious string it will get stuck processing the input for an extremely long time publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href step up your open source security game with whitesource | 0 |
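CVE-2021-23424 above is an unbounded-processing (DoS) issue: a malicious string keeps ansi-html busy for an extremely long time. A sketch of the general mitigation pattern — capping input size before an expensive conversion — follows; `render` is a hypothetical stand-in, not the library's actual code or fix:

```python
# Mitigation-pattern sketch for unbounded-input DoS issues like the
# ansi-html one above: reject oversized input before handing it to an
# expensive converter. `render` is a hypothetical stand-in converter.

MAX_INPUT_LEN = 64 * 1024  # cap chosen purely for illustration

def render(text):
    # stand-in for a costly ANSI-to-HTML conversion
    return text.replace("\x1b[31m", "<span>").replace("\x1b[0m", "</span>")

def safe_render(text):
    """Refuse input larger than the cap instead of processing it."""
    if len(text) > MAX_INPUT_LEN:
        raise ValueError("input too large for ANSI-to-HTML conversion")
    return render(text)
```

A length guard does not fix the underlying algorithmic behavior, but it bounds the worst case until an upgraded dependency is available.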
3,484 | 13,494,613,409 | IssuesEvent | 2020-09-11 21:50:46 | tgstation/tgstation | https://api.github.com/repos/tgstation/tgstation | closed | Update sandbox panel | Maintainability/Hinders improvements Not a bug | Sandbox panel is too old and need updating. Here is my suggestions about it
Need buttons to spawn BZ and Freon canisters
Change suit in button "Gear up (Space Travel Gear)" to normal EVA or other suit
Add button to quick spawn RPD (Rapid Pipe Dispenser)
Need buttons to quick spawn other minerals, e.g. 50 Plasma sheets, 50 Diamond sheets, 50 Uranium sheets...
Change button from "Spawn toolbox" to "Spawn full belt", because the toolbox is useless
Need button for spawn mesons | True | Update sandbox panel - Sandbox panel is too old and need updating. Here is my suggestions about it
Need buttons to spawn BZ and Freon canisters
Change suit in button "Gear up (Space Travel Gear)" to normal EVA or other suit
Add button to quick spawn RPD (Rapid Pipe Dispenser)
Need buttons to quick spawn other minerals, e.g. 50 Plasma sheets, 50 Diamond sheets, 50 Uranium sheets...
Change button from "Spawn toolbox" to "Spawn full belt", because the toolbox is useless
Need button for spawn mesons | main | update sandbox panel sandbox panel is too old and need updating here is my suggestions about it need buttons to spawn bz and freon canisters change suit in button gear up space travel gear to normal eva or other suit add button to quick spawn rpd rapid pipe dispenser need buttons for quick spawn other minerals e g plasma sheets diamond sheets uranium sheets change button from spawn toolbox to spawn full belt because the toolbox is useless need button for spawn mesons | 1 |
80,318 | 23,169,516,579 | IssuesEvent | 2022-07-30 13:37:55 | php/php-src | https://api.github.com/repos/php/php-src | closed | Obnoxious error in build/gen_stub.php "PHPDoc param type is unnecessary" | Feature Category: Build System Status: Verified | ### Description
`build/gen_stub.php` bails out if it encounters doc comments like this:
```
* @param int $size The maximum number of Workers this Pool can create
* @param string $class The class for new Workers
* @param array $ctor An array of arguments to be passed to new Workers
```
complaining that the @param annotations are unnecessary. It's my opinion that bailing out here is entirely unnecessary and annoying. A warning would be perfectly sufficient.
In addition, it makes it inconvenient to write doc comments in stubs. | 1.0 | Obnoxious error in build/gen_stub.php "PHPDoc param type is unnecessary" - ### Description
`build/gen_stub.php` bails out if it encounters doc comments like this:
```
* @param int $size The maximum number of Workers this Pool can create
* @param string $class The class for new Workers
* @param array $ctor An array of arguments to be passed to new Workers
```
complaining that the @param annotations are unnecessary. It's my opinion that bailing out here is entirely unnecessary and annoying. A warning would be perfectly sufficient.
In addition, it makes it inconvenient to write doc comments in stubs. | non_main | obnoxious error in build gen stub php phpdoc param type is unnecessary description build gen stub php bails out if it encounters doc comments like this param int size the maximum number of workers this pool can create param string class the class for new workers param array ctor an array of arguments to be passed to new workers complaining that the param annotations are unnecessary it s my opinion that bailing out here is entirely unnecessary and annoying a warning would be perfectly sufficient in addition it makes it inconvenient to write doc comments in stubs | 0 |
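The gen_stub.php report above argues that a warning would be sufficient where the tool currently bails out. A small Python sketch of that warn-versus-fail severity switch follows; the @param redundancy check here is purely illustrative, not gen_stub.php's real logic:

```python
import re

# Sketch of the warn-instead-of-fail behavior the issue asks for: scan
# doc-comment lines, report redundant-looking @param annotations, but
# keep going unless strict mode is requested. The redundancy test is
# illustrative only.

PARAM = re.compile(r"@param\s+(\w+)\s+\$(\w+)\s+(.*)")

def check_docblock(lines, strict=False):
    warnings = []
    for line in lines:
        m = PARAM.search(line)
        if m:
            msg = (f"PHPDoc param type '{m.group(1)}' for "
                   f"${m.group(2)} may be unnecessary")
            if strict:
                raise ValueError(msg)
            warnings.append(msg)
    return warnings
```

Collecting messages instead of raising lets a build keep doc comments like the `$size`/`$class`/`$ctor` block quoted above while still surfacing the diagnostic.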
4,549 | 23,702,302,830 | IssuesEvent | 2022-08-29 20:15:30 | aws/aws-sam-cli | https://api.github.com/repos/aws/aws-sam-cli | closed | Apache Kafka event source for local development | type/feature maintainer/need-followup | <!-- Make sure we don't have an existing Issue that reports the bug you are seeing (both open and closed). -->
### Describe your idea/feature/enhancement
I wish SAM CLI would add a new kafka event source to trigger a lambda function locally. The idea is to set a breakpoint in the FunctionHandler() which should be hit if I create a new message in kafka which runs in a local docker container on windows. Also it would be great if you would extend the payload generator at https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-cli-command-reference-sam-local-generate-event.html
### Proposal
Add a new event source type to the existing once described at https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-property-function-eventsource.html
### Additional Details
There is already the event source type "MSK" described at https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-property-function-msk.html which only seems to work with AWS and not within a local environment | True | Apache Kafka event source for local development - <!-- Make sure we don't have an existing Issue that reports the bug you are seeing (both open and closed). -->
### Describe your idea/feature/enhancement
I wish SAM CLI would add a new kafka event source to trigger a lambda function locally. The idea is to set a breakpoint in the FunctionHandler() which should be hit if I create a new message in kafka which runs in a local docker container on windows. Also it would be great if you would extend the payload generator at https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-cli-command-reference-sam-local-generate-event.html
### Proposal
Add a new event source type to the existing once described at https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-property-function-eventsource.html
### Additional Details
There is already the event source type "MSK" described at https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-property-function-msk.html which only seems to work with AWS and not within a local environment | main | apache kafka event source for local development describe your idea feature enhancement i wish sam cli would add a new kafka event source to trigger a lambda function locally the idea is to set a breakpoint in the functionhandler which should be hit if i create a new message in kafka which runs in a local docker container on windows also it would be great if you would extend the payload generator at proposal add a new event source type to the existing once described at additional details there is already the event source type msk described at which only seems to work with aws and not within a local environment | 1 |
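The feature request above asks for a local Kafka event source and an extended payload generator. As a stopgap, a handler can be exercised with a hand-built test event; the sketch below only approximates the documented `aws:kafka` record shape, so the exact keys should be treated as assumptions rather than a spec:

```python
import base64

# Sketch of a hand-rolled Kafka test event for invoking a Lambda
# handler locally. The field layout approximates the "aws:kafka"
# event shape (base64-encoded record values grouped per
# topic-partition); treat the exact keys as assumptions.

def make_kafka_event(topic, partition, offset, payload: bytes):
    record = {
        "topic": topic,
        "partition": partition,
        "offset": offset,
        "timestampType": "CREATE_TIME",
        "value": base64.b64encode(payload).decode("ascii"),
    }
    return {
        "eventSource": "aws:kafka",
        "records": {f"{topic}-{partition}": [record]},
    }

def decode_values(event):
    """What a handler typically does first: base64-decode each value."""
    out = []
    for records in event["records"].values():
        for r in records:
            out.append(base64.b64decode(r["value"]))
    return out
```

Feeding such a dictionary straight into the function handler (or saving it as JSON for a local invoke) lets a breakpoint in the handler be hit without a real broker.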
30,606 | 6,192,417,563 | IssuesEvent | 2017-07-05 01:38:53 | cakephp/cakephp | https://api.github.com/repos/cakephp/cakephp | closed | translate behaviour doesn't seem to patch translations on edit | behaviors Defect i18n On hold | This is a (multiple allowed):
* [x] bug
* [ ] enhancement
* [ ] feature-discussion (RFC)
* CakePHP Version: 3.4.7
* Platform and Target: ubuntu 16.04
### What you did
I created form fields as follows:
`_translations.en_GB.name`
etc..
according to the docs here:
https://book.cakephp.org/3.0/en/orm/behaviors/translate.html#saving-multiple-translations
### What happened
when I use
```php
$entity = $this->ModelTable->find('translations')->where(['id' => 1])->first();
$this->ModelTable->patchEntity($entity, $this->request->getData(), ['translations'=>true]);
```
translated fields are not patched even though the data from the form seems to be correct.
### What you expected to happen
I would expect translated fields to be patched as well
| 1.0 | translate behaviour doesn't seem to patch translations on edit - This is a (multiple allowed):
* [x] bug
* [ ] enhancement
* [ ] feature-discussion (RFC)
* CakePHP Version: 3.4.7
* Platform and Target: ubuntu 16.04
### What you did
I created form fields as follows:
`_translations.en_GB.name`
etc..
according to the docs here:
https://book.cakephp.org/3.0/en/orm/behaviors/translate.html#saving-multiple-translations
### What happened
when I use
```php
$entity = $this->ModelTable->find('translations')->where(['id' => 1])->first();
$this->ModelTable->patchEntity($entity, $this->request->getData(), ['translations'=>true]);
```
translated fields are not patched even though the data from the form seems to be correct.
### What you expected to happen
I would expect translated fields to be patched as well
| non_main | translate behaviour doesn t seem to patch translations on edit this is a multiple allowed bug enhancement feature discussion rfc cakephp version platform and target ubuntu what you did i created form fields using as follows translations en gb name etc according to the docs here what happened when i use php entity this modeltable find translations where first this modeltable patchentity entity this request getdata translated fields are not patched even though the data from the form seems to be correct what you expected to happen i would expect translated fields to be patched as well | 0 |
2,265 | 7,963,232,871 | IssuesEvent | 2018-07-13 16:48:57 | Subsurface-divelog/subsurface | https://api.github.com/repos/Subsurface-divelog/subsurface | closed | Can't configure an OSTC 2 TR with Subsurface 4.8 | dive-computer needs-maintainer-feedback | <!-- Lines like this one are comments and will not be shown in the final output. -->
<!-- If you are a collaborator, please add labels and assign other collaborators for a review. -->
### Describe the issue:
<!-- Replace [ ] with [x] to select options. -->
- [X] Bug
- [ ] Change request
- [ ] New feature request
- [ ] Discussion request
### Issue long description:
I can't configure an OSTC 2 TR with Subsurface 4.8 on Windows.
When I try to connect to the divecomputer over Bluetooth to change settings, I get the error message "Unsupported operation"
### Operating system:
<!-- What OS are you running, including OS version and the language that you are running in -->
<!-- What device are you using? -->
<!-- Only answer this question if you have tried: Does the same happen on another OS? -->
Windows 10 x64 build 1803
OSTC 2 TR Firmware 2.97 SP1
### Subsurface version:
<!-- What version of Subsurface are you running? -->
<!-- Does the same happen on another Subsurface version? -->
<!-- Are you using official release, test build, or compiled yourself? -->
<!-- Provide Git hash if your are building Subsurface yourself. -->
The error is in Subsurface 4.8; 4.7.8 is working correctly
### Steps to reproduce:
<!-- Provide reproduction steps separated with new lines - 1), 2), 3)... -->
Try to connect to the divecomputer over Bluetooth to configure it.
### Current behavior:
Error message "Unsupported operation"
### Expected behavior:
Should connect without error message
### Additional information:
<!-- If a simple dive log file can reproduce the issue, please attach that to the report. -->
<!-- With dive computer download issues consider adding Subsurface log file and Subsurface dumpfile to the report. -->

I get the error with Subsurface 4.8, with 4.7.8 it works well. Here are the libdivecomputer logs from both versions
[subsurface_4.7.8.log](https://github.com/Subsurface-divelog/subsurface/files/2172887/subsurface_4.7.8.log)
[subsurface_4.8.log](https://github.com/Subsurface-divelog/subsurface/files/2172888/subsurface_4.8.log)
### Mentions:
<!-- Mention users that you want to review your issue with @<user-name>. Leave empty if not sure. -->
| True | Can't configure an OSTC 2 TR with Subsurface 4.8 - <!-- Lines like this one are comments and will not be shown in the final output. -->
<!-- If you are a collaborator, please add labels and assign other collaborators for a review. -->
### Describe the issue:
<!-- Replace [ ] with [x] to select options. -->
- [X] Bug
- [ ] Change request
- [ ] New feature request
- [ ] Discussion request
### Issue long description:
I can't configure an OSTC 2 TR with Subsurface 4.8 on Windows.
When I try to connect to the divecomputer over Bluetooth to change settings, I get the error message "Unsupported operation"
### Operating system:
<!-- What OS are you running, including OS version and the language that you are running in -->
<!-- What device are you using? -->
<!-- Only answer this question if you have tried: Does the same happen on another OS? -->
Windows 10 x64 build 1803
OSTC 2 TR Firmware 2.97 SP1
### Subsurface version:
<!-- What version of Subsurface are you running? -->
<!-- Does the same happen on another Subsurface version? -->
<!-- Are you using official release, test build, or compiled yourself? -->
<!-- Provide Git hash if your are building Subsurface yourself. -->
The error is in Subsurface 4.8; 4.7.8 is working correctly
### Steps to reproduce:
<!-- Provide reproduction steps separated with new lines - 1), 2), 3)... -->
Try to connect to the divecomputer over Bluetooth to configure it.
### Current behavior:
Error message "Unsupported operation"
### Expected behavior:
Should connect without error message
### Additional information:
<!-- If a simple dive log file can reproduce the issue, please attach that to the report. -->
<!-- With dive computer download issues consider adding Subsurface log file and Subsurface dumpfile to the report. -->

I get the error with Subsurface 4.8, with 4.7.8 it works well. Here are the libdivecomputer logs from both versions
[subsurface_4.7.8.log](https://github.com/Subsurface-divelog/subsurface/files/2172887/subsurface_4.7.8.log)
[subsurface_4.8.log](https://github.com/Subsurface-divelog/subsurface/files/2172888/subsurface_4.8.log)
### Mentions:
<!-- Mention users that you want to review your issue with @<user-name>. Leave empty if not sure. -->
| main | can´t configure a ostc tr with subsurface describe the issue bug change request new feature request discussion request issue long description i can´t configure a ostc tr with subsurface on windows when i try to connect to the divecomputer over bluetooth to change settings i get the error message unsupported operation operating system windows build ostc tr firmware subsurface version the error is in subsurface is working correctly steps to reproduce try to connect to the divecomputer over bluetooth to configure it current behavior error message unsupported operation expected behavior should connect without error message additional information i get the error with subsurface with it works well here are the libdivecomputer logs from both versions mentions leave empty if not sure | 1 |
413,151 | 27,946,134,721 | IssuesEvent | 2023-03-24 03:21:11 | xamarin/xamarin-macios | https://api.github.com/repos/xamarin/xamarin-macios | closed | [templates] Add comments to help developers | enhancement good first issue documentation | It can be difficult to get everything correctly aligned for publishing, so we might want to add comments to templates to show developers what they need to do depending on what they want to do.
A list of problems for Mac Catalyst are described here: https://github.com/dotnet/maui/issues/12293
* [ ] App rejected because: `The product archive package's signature is invalid. Ensure that it is signed with your "3rd Party Mac Developer Installer" certificate.` A commented section in csproj explaining what they need to do might help here.
* [ ] App rejected because: `The product archive is invalid. The Info.plist must contain a LSApplicationCategoryType key, whose value is the UTI for a valid category. For more details, see "Submitting your Mac apps to the App Store".` A commented section in the Info.plist might be helpful.
* [ ] App rejected because: `Invalid bundle. The bundle supports arm64 but not Intel-based Mac computers. Your build must include the x86_64 architecture to support Intel-based Mac computers.` A commented section in csproj explaining that if only x64 or arm64+x64 (but not just arm64) is valid for publishing would help.
* [ ] App rejected because: `Invalid Bundle. The key UIDeviceFamily in the app's Info.plist file contains one or more unsupported values '1'.` We should probably validate this at build time.
* [ ] App rejected because: `App sandbox not enabled. The following executables must include the "com.apple.security.app-sandbox" entitlement with a Boolean value of true in the entitlements property list [...]`. A commented section in the Entitlements.plist might be helpful. Alternatively we could default to include this entitlement by default for release builds (but verify that we don't end up with builds that won't execute locally).
* [ ] "I had to add <key>ITSAppUsesNonExemptEncryption</key><false/> to Info.plist." A commented section in the Info.plist might be helpful.
| 1.0 | [templates] Add comments to help developers - It can be difficult to get everything correctly aligned for publishing, so we might want to add comments to templates to show developers what they need to do depending on what they want to do.
A list of problems for Mac Catalyst are described here: https://github.com/dotnet/maui/issues/12293
* [ ] App rejected because: `The product archive package's signature is invalid. Ensure that it is signed with your "3rd Party Mac Developer Installer" certificate.` A commented section in csproj explaining what they need to do might help here.
* [ ] App rejected because: `The product archive is invalid. The Info.plist must contain a LSApplicationCategoryType key, whose value is the UTI for a valid category. For more details, see "Submitting your Mac apps to the App Store".` A commented section in the Info.plist might be helpful.
* [ ] App rejected because: `Invalid bundle. The bundle supports arm64 but not Intel-based Mac computers. Your build must include the x86_64 architecture to support Intel-based Mac computers.` A commented section in csproj explaining that if only x64 or arm64+x64 (but not just arm64) is valid for publishing would help.
* [ ] App rejected because: `Invalid Bundle. The key UIDeviceFamily in the app's Info.plist file contains one or more unsupported values '1'.` We should probably validate this at build time.
* [ ] App rejected because: `App sandbox not enabled. The following executables must include the "com.apple.security.app-sandbox" entitlement with a Boolean value of true in the entitlements property list [...]`. A commented section in the Entitlements.plist might be helpful. Alternatively we could default to include this entitlement by default for release builds (but verify that we don't end up with builds that won't execute locally).
* [ ] "I had to add <key>ITSAppUsesNonExemptEncryption</key><false/> to Info.plist." A commented section in the Info.plist might be helpful.
| non_main | add comments to help developers it can be difficult to get everything correctly aligned for publishing so we might want to add comments to templates to show developers what they need to do depending on what they want to do a list of problems for mac catalyst are described here app rejected because the product archive package s signature is invalid ensure that it is signed with your party mac developer installer certificate a commented section in csproj explaining what they need to do might help here app rejected because the product archive is invalid the info plist must contain a lsapplicationcategorytype key whose value is the uti for a valid category for more details see submitting your mac apps to the app store a commented section in the info plist might be helpful app rejected because invalid bundle the bundle supports but not intel based mac computers your build must include the architecture to support intel based mac computers a commented section in csproj explaining that if only or but not just is valid for publishing would help app rejected because invalid bundle the key uidevicefamily in the app s info plist file contains one or more unsupported values we should probably validate this at build time app rejected because app sandbox not enabled the following executables must include the com apple security app sandbox entitlement with a boolean value of true in the entitlements property list a commented section in the entitlements plist might be helpful alternatively we could default to include this entitlement by default for release builds but verify that we don t end up with builds that won t execute locally i had to add itsappusesnonexemptencryption to info plist a commented section in the info plist might be helpful | 0 |
5,676 | 29,659,918,088 | IssuesEvent | 2023-06-10 02:56:56 | tgstation/tgstation | https://api.github.com/repos/tgstation/tgstation | opened | Apply() proc is not in the code folder and is not checked by CI | Maintainability/Hinders improvements | ``tools/ScrollAnimationAssembler/apply.dm`` is a file with code errors that would normally get caught by grep, but it is not because grep only checks files in the ``code/`` folder (in this case, it's ``var/`` being in the args).

| True | Apply() proc is not in the code folder and is not checked by CI - ``tools/ScrollAnimationAssembler/apply.dm`` is a file with code errors that would normally get caught by grep, but it is not because grep only checks files in the ``code/`` folder (in this case, it's ``var/`` being in the args).

| main | apply proc is not in the code folder and is not checked by ci tools scrollanimationassembler apply dm is a file with code errors that would normally get caught by grep but it is not because grep only checks files in the code folder in this case it s var being in the args | 1 |
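The report above says the CI grep only checks the ``code/`` folder, so errors in ``tools/`` slip through. Below is a Python sketch of a repo-wide walk that lints every `.dm` file regardless of folder; the regex is a rough illustration, not the project's real grep rule:

```python
import os
import re

# Sketch of a repo-wide lint pass: walk every .dm file (not just
# those under code/) and flag `var/` used inside proc argument
# lists. The pattern below is only an approximation of such a check.

ARG_VAR = re.compile(r"/proc/\w+\([^)]*\bvar/")

def find_violations(root):
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if not name.endswith(".dm"):
                continue
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8", errors="replace") as fh:
                for lineno, line in enumerate(fh, 1):
                    if ARG_VAR.search(line):
                        hits.append((path, lineno))
    return hits
```

Walking from the repository root instead of hard-coding ``code/`` is the whole fix the issue implies: files like ``tools/ScrollAnimationAssembler/apply.dm`` then get the same checks as everything else.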
57,757 | 14,213,832,055 | IssuesEvent | 2020-11-17 03:33:19 | ErezDasa/RB2 | https://api.github.com/repos/ErezDasa/RB2 | opened | CVE-2018-1000873 (Medium) detected in jackson-datatype-jsr310-2.9.7.jar | security vulnerability | ## CVE-2018-1000873 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-datatype-jsr310-2.9.7.jar</b></p></summary>
<p>Add-on module to support JSR-310 (Java 8 Date & Time API) data types.</p>
<p>Library home page: <a href="https://github.com/FasterXML/jackson-modules-java8/">https://github.com/FasterXML/jackson-modules-java8/</a></p>
<p>Path to dependency file: RB2/resource_bundle_github/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/datatype/jackson-datatype-jsr310/2.9.7/jackson-datatype-jsr310-2.9.7.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.0.6.RELEASE.jar (Root Library)
- spring-boot-starter-json-2.0.6.RELEASE.jar
- :x: **jackson-datatype-jsr310-2.9.7.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ErezDasa/RB2/commit/ff0c9bd18136c8a1b451e76083652aadc9a74633">ff0c9bd18136c8a1b451e76083652aadc9a74633</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Fasterxml Jackson version Before 2.9.8 contains a CWE-20: Improper Input Validation vulnerability in Jackson-Modules-Java8 that can result in Causes a denial-of-service (DoS). This attack appear to be exploitable via The victim deserializes malicious input, specifically very large values in the nanoseconds field of a time value. This vulnerability appears to have been fixed in 2.9.8.
<p>Publish Date: 2018-12-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-1000873>CVE-2018-1000873</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-1000873">https://nvd.nist.gov/vuln/detail/CVE-2018-1000873</a></p>
<p>Release Date: 2018-12-20</p>
<p>Fix Resolution: 2.9.8</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2018-1000873 (Medium) detected in jackson-datatype-jsr310-2.9.7.jar - ## CVE-2018-1000873 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-datatype-jsr310-2.9.7.jar</b></p></summary>
<p>Add-on module to support JSR-310 (Java 8 Date & Time API) data types.</p>
<p>Library home page: <a href="https://github.com/FasterXML/jackson-modules-java8/">https://github.com/FasterXML/jackson-modules-java8/</a></p>
<p>Path to dependency file: RB2/resource_bundle_github/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/datatype/jackson-datatype-jsr310/2.9.7/jackson-datatype-jsr310-2.9.7.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.0.6.RELEASE.jar (Root Library)
- spring-boot-starter-json-2.0.6.RELEASE.jar
- :x: **jackson-datatype-jsr310-2.9.7.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ErezDasa/RB2/commit/ff0c9bd18136c8a1b451e76083652aadc9a74633">ff0c9bd18136c8a1b451e76083652aadc9a74633</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Fasterxml Jackson version Before 2.9.8 contains a CWE-20: Improper Input Validation vulnerability in Jackson-Modules-Java8 that can result in Causes a denial-of-service (DoS). This attack appear to be exploitable via The victim deserializes malicious input, specifically very large values in the nanoseconds field of a time value. This vulnerability appears to have been fixed in 2.9.8.
<p>Publish Date: 2018-12-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-1000873>CVE-2018-1000873</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-1000873">https://nvd.nist.gov/vuln/detail/CVE-2018-1000873</a></p>
<p>Release Date: 2018-12-20</p>
<p>Fix Resolution: 2.9.8</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve medium detected in jackson datatype jar cve medium severity vulnerability vulnerable library jackson datatype jar add on module to support jsr java date time api data types library home page a href path to dependency file resource bundle github pom xml path to vulnerable library home wss scanner repository com fasterxml jackson datatype jackson datatype jackson datatype jar dependency hierarchy spring boot starter web release jar root library spring boot starter json release jar x jackson datatype jar vulnerable library found in head commit a href vulnerability details fasterxml jackson version before contains a cwe improper input validation vulnerability in jackson modules that can result in causes a denial of service dos this attack appear to be exploitable via the victim deserializes malicious input specifically very large values in the nanoseconds field of a time value this vulnerability appears to have been fixed in publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
652,263 | 21,526,952,301 | IssuesEvent | 2022-04-28 19:29:05 | o3de/o3de | https://api.github.com/repos/o3de/o3de | closed | Material Editor Viewport settings have a typo | kind/bug needs-triage sig/ui-ux priority/minor | **Describe the bug**
Material Editor Viewport settings have a typo in the sentence “Maxmum Exposure” under the Lightning Settings → Exposure section. Typo is present on both Windows and Linux.
**Steps to reproduce**
Steps to reproduce the behavior:
1. Launch the Material Editor.
2. Enable Viewport Settings via the View menu.
3. Observe the typo in the word “Maximum”.
**Expected behavior**
The parameter name is "Maximum Exposure".
**Actual behavior**
The parameter name is "Maxmum Exposure".
**Screenshots/Video**

**Found in Branch**
Development (611598c)
**Desktop/Device:**
- Device: PC
- OS: Windows
- Version 10
- CPU AMD Ryzen 3600
- GPU Nvidia GeForce RTX 2060 SUPER
- Memory 16GB | 1.0 | Material Editor Viewport settings have a typo - **Describe the bug**
Material Editor Viewport settings have a typo in the sentence “Maxmum Exposure” under the Lightning Settings → Exposure section. Typo is present on both Windows and Linux.
**Steps to reproduce**
Steps to reproduce the behavior:
1. Launch the Material Editor.
2. Enable Viewport Settings via the View menu.
3. Observe the typo in the word “Maximum”.
**Expected behavior**
The parameter name is "Maximum Exposure".
**Actual behavior**
The parameter name is "Maxmum Exposure".
**Screenshots/Video**

**Found in Branch**
Development (611598c)
**Desktop/Device:**
- Device: PC
- OS: Windows
- Version 10
- CPU AMD Ryzen 3600
- GPU Nvidia GeForce RTX 2060 SUPER
- Memory 16GB | non_main | material editor viewport settings have a typo describe the bug material editor viewport settings have a typo in the sentence “maxmum exposure” under the lightning settings → exposure section typo is present on both windows and linux steps to reproduce steps to reproduce the behavior launch the material editor enable viewport settings via the view menu observe the typo in the word “maximum” expected behavior the parameter name is maximum exposure actual behavior the parameter name is maxmum exposure screenshots video found in branch development desktop device device pc os windows version cpu amd ryzen gpu nvidia geforce rtx super memory | 0 |
128,369 | 12,371,126,558 | IssuesEvent | 2020-05-18 18:00:34 | dotnet/tye | https://api.github.com/repos/dotnet/tye | closed | Document ContainerBaseImage and ContainerBaseTag | documentation | Now that https://github.com/dotnet/tye/pull/447/ is merged, we should add documentation for that in the next tye release. | 1.0 | Document ContainerBaseImage and ContainerBaseTag - Now that https://github.com/dotnet/tye/pull/447/ is merged, we should add documentation for that in the next tye release. | non_main | document containerbaseimage and containerbasetag now that is merged we should add documentation for that in the next tye release | 0 |
1,217 | 5,197,285,372 | IssuesEvent | 2017-01-23 15:16:33 | Particular/PBot | https://api.github.com/repos/Particular/PBot | closed | Copy all comments into a single comments when moving an issue | Impact: M Size: S Tag: Maintainer Prio Type: Feature Withdrawn: Won't Fix | The copying of all comments when moving an issue [was reduced](https://github.com/Particular/PBot/commit/237fa0f22f878ddbcc01fd5f2e3d5a6a1a9f22ba) to only the first comment (often referred to as the "description") and the next comment after that.
This continues to surprise people and it makes it awkward to track the history of the issue.
IIRC this was done to reduce the number of email notifications.
I propose that we restore the copying of all comments, but we copy all of them into a single comment on the new issue, in an appropriate format.
| True | Copy all comments into a single comments when moving an issue - The copying of all comments when moving an issue [was reduced](https://github.com/Particular/PBot/commit/237fa0f22f878ddbcc01fd5f2e3d5a6a1a9f22ba) to only the first comment (often referred to as the "description") and the next comment after that.
This continues to surprise people and it makes it awkward to track the history of the issue.
IIRC this was done to reduce the number of email notifications.
I propose that we restore the copying of all comments, but we copy all of them into a single comment on the new issue, in an appropriate format.
| main | copy all comments into a single comments when moving an issue the copying of all comments when moving an issue to only the first comment often referred to as the description and the next comment after that this continues to surprise people and it makes it awkward to track the history of the issue iirc this was done to reduce the number of email notifications i propose that we restore the copying of all comments but we copy all of them into a single comment on the new issue in an appropriate format | 1 |
145,914 | 5,583,941,895 | IssuesEvent | 2017-03-29 02:36:47 | groundwired/salesforce-food-bank | https://api.github.com/repos/groundwired/salesforce-food-bank | opened | duplicated and unduplicated is tracking incorrectly | bug high priority | Jennifer Muzia
to Laura, me
Hi Laura and Evan,
I just wanted to give some feedback that the duplicated and unduplicated is tracking incorrectly. I logged my first visit and it was marked as duplicated. The second visit logged as unduplicated. This should have been reversed. I tried it with adding a couple of clients and that happened each time. Your first visit is unduplicated, and your repeat visit is duplicated.
| 1.0 | duplicated and unduplicated is tracking incorrectly - Jennifer Muzia
to Laura, me
Hi Laura and Evan,
I just wanted to give some feedback that the duplicated and unduplicated is tracking incorrectly. I logged my first visit and it was marked as duplicated. The second visit logged as unduplicated. This should have been reversed. I tried it with adding a couple of clients and that happened each time. Your first visit is unduplicated, and your repeat visit is duplicated.
| non_main | duplicated and unduplicated is tracking incorrectly jennifer muzia to laura me hi laura and evan i just wanted to give some feedback that the duplicated and unduplicated is tracking incorrectly i logged my first visit and it was marked as duplicated the second visit logged as unduplicated this should have been reversed i tried it with adding a couple of clients and that happened each time your first visit is unduplicated and your repeat visit is duplicated | 0
720,959 | 24,812,646,337 | IssuesEvent | 2022-10-25 10:39:30 | poja/RL | https://api.github.com/repos/poja/RL | opened | Fix uci test on remote | priority-low | The test doesn't find the tensorflow library (.so), even though the other tests do find it.
I tried to replicate the Github Actions environment using the `act` program, but had even more problems with the tensorflow installation (I think it's because my host is Mac) | 1.0 | Fix uci test on remote - The test doesn't find the tensorflow library (.so), even though the other tests do find it.
I tried to replicate the Github Actions environment using the `act` program, but had even more problems with the tensorflow installation (I think it's because my host is Mac) | non_main | fix uci test on remote the test doesn t find the tensorflow library so even though the other tests do find it i tried to replicate the github actions environment using the act program but had even more problems with the tensorflow installation i think it s because my host is mac | 0 |
1,440 | 6,256,852,176 | IssuesEvent | 2017-07-14 11:29:52 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | Syntax inconsistency in File module | affects_2.0 bug_report waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
File module
##### ANSIBLE VERSION
```
ansible 2.0.1.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
Nothing.
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
```
Linux redbox 4.4.5-1-ARCH #1 SMP PREEMPT Thu Mar 10 07:38:19 CET 2016 x86_64 GNU/Linux (Arch Linux)
```
##### SUMMARY
<!--- Explain the problem briefly -->
Using `mode=` and `mode:` syntax should have the same result. But `mode: 2775` gives a really weird outcome compared `mode=2775`'s result, which is correct.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
```
- name: Create sites directory
file:
path: /opt/sites
state: directory
group: coopdevs
owner: ubuntu
mode: 2775
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
The following is the expected result, obtained using `mode=` syntax:
```
$ ls -l /opt/
total 4
drwxrwsr-x 3 ubuntu coopdevs 4096 Apr 8 14:58 sites
```
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with high verbosity (-vvvv) -->
This is the result using `mode:` syntax:
```
$ ls -l /opt/
total 4
d-ws-w-rwt 3 ubuntu coopdevs 4096 Apr 8 14:58 sites
```
##### HINTS
1. It seems related to the first digit `2`, in the documentation it only talks about `0`. And actually `0775` works fine.
2. It seems that wrapping the digits with quotes it actually works, like `mode: '2775'`.
| True | Syntax inconsistency in File module - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
File module
##### ANSIBLE VERSION
```
ansible 2.0.1.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
Nothing.
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
```
Linux redbox 4.4.5-1-ARCH #1 SMP PREEMPT Thu Mar 10 07:38:19 CET 2016 x86_64 GNU/Linux (Arch Linux)
```
##### SUMMARY
<!--- Explain the problem briefly -->
Using `mode=` and `mode:` syntax should have the same result. But `mode: 2775` gives a really weird outcome compared `mode=2775`'s result, which is correct.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
```
- name: Create sites directory
file:
path: /opt/sites
state: directory
group: coopdevs
owner: ubuntu
mode: 2775
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
The following is the expected result, obtained using `mode=` syntax:
```
$ ls -l /opt/
total 4
drwxrwsr-x 3 ubuntu coopdevs 4096 Apr 8 14:58 sites
```
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with high verbosity (-vvvv) -->
This is the result using `mode:` syntax:
```
$ ls -l /opt/
total 4
d-ws-w-rwt 3 ubuntu coopdevs 4096 Apr 8 14:58 sites
```
##### HINTS
1. It seems related to the first digit `2`, in the documentation it only talks about `0`. And actually `0775` works fine.
2. It seems that wrapping the digits with quotes it actually works, like `mode: '2775'`.
| main | syntax inconsistency in file module issue type bug report component name file module ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables nothing os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific linux redbox arch smp preempt thu mar cet gnu linux arch linux summary using mode and mode syntax should have the same result but mode gives a really weird outcome compared mode s result which is correct steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used name create sites directory file path opt sites state directory group coopdevs owner ubuntu mode expected results the following is the expected result obtained using mode syntax ls l opt total drwxrwsr x ubuntu coopdevs apr sites actual results this is the result using mode syntax ls l opt total d ws w rwt ubuntu coopdevs apr sites hints it seems related to the first digit in the documentation it only talks about and actually works fine it seems that wrapping the digits with quotes it actually works like mode | 1 |
20,679 | 6,912,186,191 | IssuesEvent | 2017-11-28 11:02:20 | reactor/reactor-core | https://api.github.com/repos/reactor/reactor-core | closed | Upgrade to Kotlin-Gradle-Plugin 1.1.60 when available | chores/build enhancement kotlin on-hold | Follow-up to #887:
- upgrade to the 1.1.60 version when available
- verify it fixes the duplicate entries in sources jar (` unzip reactor-core-3.1.2.BUILD-SNAPSHOT-sources.jar -d ./temp`)
- verify it doesn't cause any other issue
- remove the "IGNORE_DUPLICATES" workaround from the `build.gradle`
- do the same for `reactor-addons` | 1.0 | Upgrade to Kotlin-Gradle-Plugin 1.1.60 when available - Follow-up to #887:
- upgrade to the 1.1.60 version when available
- verify it fixes the duplicate entries in sources jar (` unzip reactor-core-3.1.2.BUILD-SNAPSHOT-sources.jar -d ./temp`)
- verify it doesn't cause any other issue
- remove the "IGNORE_DUPLICATES" workaround from the `build.gradle`
- do the same for `reactor-addons` | non_main | upgrade to kotlin gradle plugin when available follow up to upgrade to the version when available verify it fixes the duplicate entries in sources jar unzip reactor core build snapshot sources jar d temp verify it doesn t cause any other issue remove the ignore duplicates workaround from the build gradle do the same for reactor addons | 0 |
165,377 | 20,574,543,242 | IssuesEvent | 2022-03-04 02:08:25 | arohablue/skill-india-backend | https://api.github.com/repos/arohablue/skill-india-backend | closed | CVE-2019-17531 (High) detected in jackson-databind-2.9.8.jar - autoclosed | security vulnerability | ## CVE-2019-17531 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.8.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /skill-india-backend/pom.xml</p>
<p>Path to vulnerable library: /root/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.8/jackson-databind-2.9.8.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.2.RELEASE.jar (Root Library)
- spring-boot-starter-json-2.1.2.RELEASE.jar
- :x: **jackson-databind-2.9.8.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.0.0 through 2.9.10. When Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint and the service has the apache-log4j-extra (version 1.2.x) jar in the classpath, and an attacker can provide a JNDI service to access, it is possible to make the service execute a malicious payload.
<p>Publish Date: 2019-10-12
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-17531>CVE-2019-17531</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17531">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17531</a></p>
<p>Release Date: 2019-10-12</p>
<p>Fix Resolution: 2.10</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2019-17531 (High) detected in jackson-databind-2.9.8.jar - autoclosed - ## CVE-2019-17531 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.8.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /skill-india-backend/pom.xml</p>
<p>Path to vulnerable library: /root/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.8/jackson-databind-2.9.8.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.2.RELEASE.jar (Root Library)
- spring-boot-starter-json-2.1.2.RELEASE.jar
- :x: **jackson-databind-2.9.8.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.0.0 through 2.9.10. When Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint and the service has the apache-log4j-extra (version 1.2.x) jar in the classpath, and an attacker can provide a JNDI service to access, it is possible to make the service execute a malicious payload.
<p>Publish Date: 2019-10-12
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-17531>CVE-2019-17531</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17531">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17531</a></p>
<p>Release Date: 2019-10-12</p>
<p>Fix Resolution: 2.10</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve high detected in jackson databind jar autoclosed cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file skill india backend pom xml path to vulnerable library root repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy spring boot starter web release jar root library spring boot starter json release jar x jackson databind jar vulnerable library vulnerability details a polymorphic typing issue was discovered in fasterxml jackson databind through when default typing is enabled either globally or for a specific property for an externally exposed json endpoint and the service has the apache extra version x jar in the classpath and an attacker can provide a jndi service to access it is possible to make the service execute a malicious payload publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
24,026 | 2,665,517,131 | IssuesEvent | 2015-03-20 21:03:35 | iFixit/iFixitAndroid | https://api.github.com/repos/iFixit/iFixitAndroid | closed | Input validation: Use TextView.setError() for errors | low priority r-All someday | Any input validation should use [`TextView.setError()`](http://developer.android.com/reference/android/widget/TextView.html#setError%28java.lang.CharSequence%29) to display errors. The only place I can think of that doesn't currently do this is login and register. This should simplify the error displaying code and make it look much better too. | 1.0 | Input validation: Use TextView.setError() for errors - Any input validation should use [`TextView.setError()`](http://developer.android.com/reference/android/widget/TextView.html#setError%28java.lang.CharSequence%29) to display errors. The only place I can think of that doesn't currently do this is login and register. This should simplify the error displaying code and make it look much better too. | non_main | input validation use textview seterror for errors any input validation should use to display errors the only place i can think of that doesn t currently do this is login and register this should simplify the error displaying code and make it look much better too | 0 |
4,756 | 24,511,777,062 | IssuesEvent | 2022-10-10 22:30:39 | ocsf/ocsf-schema | https://api.github.com/repos/ocsf/ocsf-schema | closed | Consider defining a JSON schema | question maintainers | As far as I checked, none of the JSON files currently refers to a JSON schema. I would like to suggest to define JSON schemas to make sure that changes in these files are valid according to the rules.
I'm happy to submit a PR but I might need some help to fully understand some design decisions. | True | Consider defining a JSON schema - As far as I checked, none of the JSON files currently refers to a JSON schema. I would like to suggest to define JSON schemas to make sure that changes in these files are valid according to the rules.
I'm happy to submit a PR but I might need some help to fully understand some design decisions. | main | consider defining a json schema as far as i checked none of the json files currently refers to a json schema i would like to suggest to define json schemas to make sure that changes in these files are valid according to the rules i m happy to submit a pr but i might need some help to fully understand some design decisions | 1 |
5,686 | 29,924,486,339 | IssuesEvent | 2023-06-22 03:26:28 | spicetify/spicetify-themes | https://api.github.com/repos/spicetify/spicetify-themes | closed | [BurntSienna] Top Icon Spacing Messed Up | ☠️ unmaintained | **Describe the bug**
Top icon spacing looks off and has gaps. And some of the icons look squished. Large spacing in friends section compared to before.
**To Reproduce**
Steps to reproduce the behavior:
1. Install BurntSienna Theme
**Expected behavior**
Buttons and the search bar aren't supposed to look cut off and squished, and should have adequate spacing. Applies to the friends' section as well. Also, there is odd spacing between Your Library and the back and forward buttons, which also look off-center and out of place.
**Screenshots**
<img width="1279" alt="Screenshot 2023-03-16 172224" src="https://user-images.githubusercontent.com/69532417/225781217-5da0a215-3043-46c2-90d0-724c25ca1b71.png">
**Specifics (please complete the following information):**
Windows 11
Spotify for Windows
Version 1.2.7.1277.g2b3ce637
Spicetify v2.16.2
Burnt Sienna
| True | [BurntSienna] Top Icon Spacing Messed Up - **Describe the bug**
Top icon spacing looks off and has gaps. And some of the icons look squished. Large spacing in friends section compared to before.
**To Reproduce**
Steps to reproduce the behavior:
1. Install BurntSienna Theme
**Expected behavior**
Buttons and the search bar aren't supposed to look cut off and squished, and should have adequate spacing. Applies to the friends' section as well. Also, there is odd spacing between Your Library and the back and forward buttons, which also look off-center and out of place.
**Screenshots**
<img width="1279" alt="Screenshot 2023-03-16 172224" src="https://user-images.githubusercontent.com/69532417/225781217-5da0a215-3043-46c2-90d0-724c25ca1b71.png">
**Specifics (please complete the following information):**
Windows 11
Spotify for Windows
Version 1.2.7.1277.g2b3ce637
Spicetify v2.16.2
Burnt Sienna
| main | top icon spacing messed up describe the bug top icon spacing looks off and has gaps and some of the icons look squished large spacing in friends section compared to before to reproduce steps to reproduce the behavior install burntsienna theme expected behavior buttons and search bar aren t supposed to look cut off and squished and have adequate spacing applies to friends section as well also odd spacing between your library and the back and forward buttons that also look off center and out of place screenshots img width alt screenshot src specifics please complete the following information windows spotify for windows version spicetify burnt sienna | 1 |
138,948 | 5,353,227,631 | IssuesEvent | 2017-02-20 04:21:26 | openaddresses/openaddresses | https://api.github.com/repos/openaddresses/openaddresses | closed | Columbia Shuswap Regional District, BC, CA | data-priority-3 data-sources ready for PR | 
*Must agree to license terms before download* - can someone review before we green light this for OA?
http://www.csrd.bc.ca/services/maps > Cadastral and Property Information
| 1.0 | Columbia Shuswap Regional District, BC, CA - 
*Must agree to license terms before download* - can someone review before we green light this for OA?
http://www.csrd.bc.ca/services/maps > Cadastral and Property Information
| non_main | columbia shuswap regional district bc ca must agree to license terms before download can someone review before we green light this for oa cadastral and property information | 0 |
539 | 3,955,070,133 | IssuesEvent | 2016-04-29 19:25:13 | duckduckgo/zeroclickinfo-spice | https://api.github.com/repos/duckduckgo/zeroclickinfo-spice | opened | Currency: Too few decimals for XBT | Maintainer Input Requested | When calculating any currency into Bitcoins, there are too few decimals (at least for small amounts).
If I enter 0.20 EUR in XBT the answer is 0.00 XBT.
------
IA Page: http://duck.co/ia/view/currency
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @MrChrisW | True | Currency: Too few decimals for XBT - When calculating any currency into Bitcoins, there are too few decimals (at least for small amounts).
If I enter 0.20 EUR in XBT the answer is 0.00 XBT.
------
IA Page: http://duck.co/ia/view/currency
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @MrChrisW | main | currency too few decimals for xbt when calculating any currency into bitcoins there are too few decimals at least for small amounts if i enter eur in xbt the answer is xbt ia page mrchrisw | 1 |
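The rounding effect reported in this record is easy to sketch. The exchange rate below is made up purely for illustration (not a real quote); the point is that any sub-cent XBT amount displays as 0.00 once rounded to two decimal places:

```python
# Hypothetical exchange rate, for illustration only -- not a real quote.
RATE_EUR_PER_XBT = 50_000

def convert(amount_eur, decimals):
    """Convert EUR to XBT, rounding to `decimals` places for display."""
    return round(amount_eur / RATE_EUR_PER_XBT, decimals)

print(convert(0.20, 2))  # 0.0 -- the "0.00 XBT" the reporter sees
print(convert(0.20, 8))  # ~0.000004, visible once enough decimals are shown
```

Rendering small conversions with more decimal places (or significant digits) would avoid the misleading 0.00.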
5,744 | 30,385,923,203 | IssuesEvent | 2023-07-13 00:40:59 | MozillaFoundation/foundation.mozilla.org | https://api.github.com/repos/MozillaFoundation/foundation.mozilla.org | closed | "optimize:css:copy" task raises errors | bug engineering maintain | ### Describe the bug
"optimize:css:copy" task raises errors
### To Reproduce
Steps to reproduce the behavior:
1. run `docker-compose up`
2. observe error
### Expected behavior
No errors
### Screenshots
<img width="1326" alt="image" src="https://user-images.githubusercontent.com/2896608/215546456-04c9b7ba-278b-4827-852c-7d514b12d54b.png">
| True | "optimize:css:copy" task raises errors - ### Describe the bug
"optimize:css:copy" task raises errors
### To Reproduce
Steps to reproduce the behavior:
1. run `docker-compose up`
2. observe error
### Expected behavior
No errors
### Screenshots
<img width="1326" alt="image" src="https://user-images.githubusercontent.com/2896608/215546456-04c9b7ba-278b-4827-852c-7d514b12d54b.png">
| main | optimize css copy task raises errors describe the bug optimize css copy task raises errors to reproduce steps to reproduce the behavior run docker compose up observe error expected behavior no errors screenshots img width alt image src | 1 |
1,755 | 6,574,971,624 | IssuesEvent | 2017-09-11 14:39:16 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | docker_container - missing cap_drop | affects_2.1 cloud docker feature_idea waiting_on_maintainer | ##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
docker_container module
##### ANSIBLE VERSION
ansible 2.1.2.0
##### SUMMARY
docker_container module does not support removing capabilities (i.e. docker --cap-drop option).
cap_drop and cap_add were added to the docker module in ansible 2.0 but that module is marked as deprecated now.
| True | docker_container - missing cap_drop - ##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
docker_container module
##### ANSIBLE VERSION
ansible 2.1.2.0
##### SUMMARY
docker_container module does not support removing capabilities (i.e. docker --cap-drop option).
cap_drop and cap_add were added to the docker module in ansible 2.0 but that module is marked as deprecated now.
| main | docker container missing cap drop issue type feature idea component name docker container module ansible version ansible summary docker container module does not support removing capabilities i e docker cap drop option cap drop and cap add were added to the docker module in ansible but that module is marked as deprecated now | 1 |
4,314 | 21,717,293,922 | IssuesEvent | 2022-05-10 19:16:36 | roadrunner-server/roadrunner | https://api.github.com/repos/roadrunner-server/roadrunner | closed | [💡FEATURE REQUEST]: Native plugins support (so, dll) | C-enhancement W-waiting-on-maintainer | At the moment we use plugins `roadrunner/plugins` folder as golang code. This means that is impossible to load a plugin in runtime without recompile the whole RR code with a new plugin. Golang at the moment supports native plugins via `plugin` package but only on Linux and macOS [Link](https://golang.org/pkg/plugin/).
And according to the discussion in the issue below:
> It is really indisputable that plugins are a half baked feature. They have significant problems even on platforms where they (mostly) work. It was perhaps a mistake to add them at all. I think I approved the change to add plugins myself; I'm sorry for the trouble they are causing.
Tracking issue https://github.com/golang/go/issues/19282 | True | [💡FEATURE REQUEST]: Native plugins support (so, dll) - At the moment we use plugins `roadrunner/plugins` folder as golang code. This means that is impossible to load a plugin in runtime without recompile the whole RR code with a new plugin. Golang at the moment supports native plugins via `plugin` package but only on Linux and macOS [Link](https://golang.org/pkg/plugin/).
And according to the discussion in the issue below:
> It is really indisputable that plugins are a half baked feature. They have significant problems even on platforms where they (mostly) work. It was perhaps a mistake to add them at all. I think I approved the change to add plugins myself; I'm sorry for the trouble they are causing.
Tracking issue https://github.com/golang/go/issues/19282 | main | native plugins support so dll at the moment we use plugins roadrunner plugins folder as golang code this means that is impossible to load a plugin in runtime without recompile the whole rr code with a new plugin golang at the moment supports native plugins via plugin package but only on linux and macos and according to the discussion in the issue below it is really indisputable that plugins are a half baked feature they have significant problems even on platforms where they mostly work it was perhaps a mistake to add them at all i think i approved the change to add plugins myself i m sorry for the trouble they are causing tracking issue | 1 |
3,536 | 13,916,076,005 | IssuesEvent | 2020-10-21 02:22:58 | MDAnalysis/mdanalysis | https://api.github.com/repos/MDAnalysis/mdanalysis | opened | MAINT: `ap.c` gcc optimization portability | arch:ARMv8 maintainability | For the source file located at path `analysis/encore/clustering/src/ap.c`, the gcc compile optimization level can affect test results in the `test_encore.py` suite on the Linux ARM64 platform.
At `-O1`, where we are currently pinned, we get a pass on the suite.
At `-O2` and `-O3` we get an incorrect result:
```
_________________________________ TestEncoreClustering.test_clustering_three_ensembles_two_identical _________________________________
self = <MDAnalysisTests.analysis.test_encore.TestEncoreClustering object at 0x7f5a74b750>, ens1 = <Universe with 3341 atoms>
ens2 = <Universe with 3341 atoms>
def test_clustering_three_ensembles_two_identical(self, ens1, ens2):
cluster_collection = encore.cluster([ens1, ens2, ens1])
expected_value = 40
> assert len(cluster_collection) == expected_value, "Unexpected result:" \
" {0}".format(cluster_collection)
E AssertionError: Unexpected result: <ClusterCollection with 42 clusters>
E assert 42 == 40
E + where 42 = len(<ClusterCollection with 42 clusters>)
MDAnalysisTests/analysis/test_encore.py:448: AssertionError
```
I carefully checked the values of C-level constants like `RAND_MAX` and the sizes of types used in the source file via `sizeof()` and found no differences whatsoever between ARM64 and AMD64.
Perhaps this set of observations will be helpful to the low-level gurus for determination of changes needed to preserve performance while improving portability. | True | MAINT: `ap.c` gcc optimization portability - For the source file located at path `analysis/encore/clustering/src/ap.c`, the gcc compile optimization level can affect test results in the `test_encore.py` suite on the Linux ARM64 platform.
At `-O1`, where we are currently pinned, we get a pass on the suite.
At `-O2` and `-O3` we get an incorrect result:
```
_________________________________ TestEncoreClustering.test_clustering_three_ensembles_two_identical _________________________________
self = <MDAnalysisTests.analysis.test_encore.TestEncoreClustering object at 0x7f5a74b750>, ens1 = <Universe with 3341 atoms>
ens2 = <Universe with 3341 atoms>
def test_clustering_three_ensembles_two_identical(self, ens1, ens2):
cluster_collection = encore.cluster([ens1, ens2, ens1])
expected_value = 40
> assert len(cluster_collection) == expected_value, "Unexpected result:" \
" {0}".format(cluster_collection)
E AssertionError: Unexpected result: <ClusterCollection with 42 clusters>
E assert 42 == 40
E + where 42 = len(<ClusterCollection with 42 clusters>)
MDAnalysisTests/analysis/test_encore.py:448: AssertionError
```
I carefully checked the values of C-level constants like `RAND_MAX` and the sizes of types used in the source file via `sizeof()` and found no differences whatsoever between ARM64 and AMD64.
Perhaps this set of observations will be helpful to the low-level gurus for determination of changes needed to preserve performance while improving portability. | main | maint ap c gcc optimization portability for the source file located at path analysis encore clustering src ap c the gcc compile optimization level can affect test results in the test encore py suite on the linux platform at where we are currently pinned we get a pass on the suite at and we get an incorrect result testencoreclustering test clustering three ensembles two identical self def test clustering three ensembles two identical self cluster collection encore cluster expected value assert len cluster collection expected value unexpected result format cluster collection e assertionerror unexpected result e assert e where len mdanalysistests analysis test encore py assertionerror i carefully checked the values of c level constants like rand max and the sizes of types used in the source file via sizeof and found no differences whatsoever between and perhaps this set of observations will be helpful to the low level gurus for determination of changes needed to preserve performance while improving portability | 1 |
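One common mechanism behind optimization-level-dependent numeric results is that higher `-O` levels let the compiler regroup floating-point arithmetic (e.g. while vectorizing), and floating-point addition is not associative. Whether this is what actually happens inside `ap.c` is an assumption; the sketch below only demonstrates the general effect:

```python
# Floating-point addition is not associative: regrouping the same four
# terms changes the result, because 1.0 is below the rounding precision
# of 1e16 and gets lost unless the large terms cancel first.
a, b, c, d = 1e16, 1.0, -1e16, 1.0

left_to_right = ((a + b) + c) + d  # the 1.0 added to 1e16 is rounded away
regrouped = (a + c) + (b + d)      # cancel the large terms first

print(left_to_right)  # 1.0
print(regrouped)      # 2.0
```

If this is the mechanism, the differing cluster counts at `-O2`/`-O3` would reflect rounding-order sensitivity in the affinity-propagation sums rather than a type or constant mismatch, which is consistent with the `sizeof()`/`RAND_MAX` checks coming back clean.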
4,881 | 25,044,061,446 | IssuesEvent | 2022-11-05 02:45:12 | walbourn/directx-vs-templates | https://api.github.com/repos/walbourn/directx-vs-templates | closed | Move setting game object for user pointer into WM_CREATE | maintainence | The existing templates for Win32 use ``SetWindowsLongPtr`` after creating the window and showing it.
The official DirectX-Graphics-Samples projects use ``WM_CREATE`` and the ``lpParam`` on CreateWindow instead.
```
HWND hwnd = CreateWindowExW(0, ..., g_game.get());
```
```
switch (message)
{
case WM_CREATE:
if (lParam)
{
auto pCreateStruct = reinterpret_cast<LPCREATESTRUCTW>(lParam);
SetWindowLongPtr(hWnd, GWLP_USERDATA, reinterpret_cast<LONG_PTR>(pCreateStruct->lpCreateParams));
}
return 0;
```
Note this works fine, but needs one tweak to DeviceResources:
```
bool DeviceResources::WindowSizeChanged(int width, int height)
{
if (!m_window)
return false;
...
``` | True | Move setting game object for user pointer into WM_CREATE - The existing templates for Win32 use ``SetWindowsLongPtr`` after creating the window and showing it.
The official DirectX-Graphics-Samples projects use ``WM_CREATE`` and the ``lpParam`` on CreateWindow instead.
```
HWND hwnd = CreateWindowExW(0, ..., g_game.get());
```
```
switch (message)
{
case WM_CREATE:
if (lParam)
{
auto pCreateStruct = reinterpret_cast<LPCREATESTRUCTW>(lParam);
SetWindowLongPtr(hWnd, GWLP_USERDATA, reinterpret_cast<LONG_PTR>(pCreateStruct->lpCreateParams));
}
return 0;
```
Note this works fine, but needs one tweak to DeviceResources:
```
bool DeviceResources::WindowSizeChanged(int width, int height)
{
if (!m_window)
return false;
...
``` | main | move setting game object for user pointer into wm create the existing templates for use setwindowslongptr after creating the window and showing it the official directx graphics samples projects use wm create and the lpparam on createwindow instead hwnd hwnd createwindowexw g game get switch message case wm create if lparam auto pcreatestruct reinterpret cast lparam setwindowlongptr hwnd gwlp userdata reinterpret cast pcreatestruct lpcreateparams return note this works fine but needs one tweak to deviceresources bool deviceresources windowsizechanged int width int height if m window return false | 1 |
169 | 2,731,310,444 | IssuesEvent | 2015-04-16 19:36:33 | krico/jas | https://api.github.com/repos/krico/jas | closed | Integrate maven and github | maintainer | We should add [GitHub maven plugins](https://github.com/github/maven-plugins) to our project to better integrate maven with github. | True | Integrate maven and github - We should add [GitHub maven plugins](https://github.com/github/maven-plugins) to our project to better integrate maven with github. | main | integrate maven and github we should add to our project to better integrate maven with github | 1 |
136,217 | 12,703,615,490 | IssuesEvent | 2020-06-22 22:48:11 | slurpcode/slurp | https://api.github.com/repos/slurpcode/slurp | opened | 📃 Can we use YARD ?? 📃 | documentation 📚 enhancement ⭐ ruby 💎 | Would be nice to have another website and learn about the YARD Rubygem.
Official -> `YARD also runs a live documentation server for the community, hosting docs for all RubyGems and Github projects. The project is at RubyDoc.info (formerly rdoc.info)`
https://yardoc.org/index.html
https://rubydoc.info/ | 1.0 | 📃 Can we use YARD ?? 📃 - Would be nice to have another website and learn about the YARD Rubygem.
Official -> `YARD also runs a live documentation server for the community, hosting docs for all RubyGems and Github projects. The project is at RubyDoc.info (formerly rdoc.info)`
https://yardoc.org/index.html
https://rubydoc.info/ | non_main | 📃 can we use yard 📃 would be nice to have another website and learn about the yard rubygem official yard also runs a live documentation server for the community hosting docs for all rubygems and github projects the project is at rubydoc info formerly rdoc info | 0 |
5,105 | 26,026,958,713 | IssuesEvent | 2022-12-21 17:09:53 | centerofci/mathesar | https://api.github.com/repos/centerofci/mathesar | closed | Aggregated results contains duplicates when joining multiple tables | type: bug work: backend status: ready restricted: maintainers | * Create an exploration with 'Patrons' as the base table
* Add 'Email' from 'Patrons' table
* Add 'Title' from 'Publications' table
* Add 'Due Date' from 'Checkouts' table
* Summarize by 'Patrons_Email'
**Expect the result to contain unique values in the Publications_Title aggregated list. Notice that it contains duplicates.**
| True | Aggregated results contains duplicates when joining multiple tables - * Create an exploration with 'Patrons' as the base table
* Add 'Email' from 'Patrons' table
* Add 'Title' from 'Publications' table
* Add 'Due Date' from 'Checkouts' table
* Summarize by 'Patrons_Email'
**Expect the result to contain unique values in the Publications_Title aggregated list. Notice that it contains duplicates.**
| main | aggregated results contains duplicates when joining multiple tables create an exploration with patrons as the base table add email from patrons table add title from publications table add due date from checkouts table summarize by patrons email expect the result to contain unique values in the publications title aggregated list notice that it contains duplicates | 1 |
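The duplicated aggregates are the classic join-then-aggregate effect: each matching 'Checkouts' row re-contributes its publication title to the list. The rows below are made up (this is not Mathesar's actual query engine), but they show the pattern and the DISTINCT-style fix:

```python
# Made-up joined rows: (patron email, publication title, checkout due date).
rows = [
    ("a@example.com", "Dune", "2023-01-01"),
    ("a@example.com", "Dune", "2023-02-01"),  # same title, second checkout
    ("a@example.com", "Emma", "2023-03-01"),
]

# Group titles by email; every joined row appends, so titles repeat.
agg = {}
for email, title, _due in rows:
    agg.setdefault(email, []).append(title)

print(agg["a@example.com"])               # ['Dune', 'Dune', 'Emma'] -- duplicates
print(sorted(set(agg["a@example.com"])))  # ['Dune', 'Emma'] -- deduplicated
```

In SQL terms the fix corresponds to aggregating with `array_agg(DISTINCT ...)` rather than `array_agg(...)`.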
4,831 | 24,910,993,344 | IssuesEvent | 2022-10-29 21:23:33 | deislabs/spiderlightning | https://api.github.com/repos/deislabs/spiderlightning | closed | Calling kv::set will freeze the HTTP handler | 🐛 bug 🚧 maintainer issue | **Description of the bug**
That is, using kv capability inside of a http handler will render it frozen.
**To Reproduce**
```rust
#[register_handler]
fn handle_bar(request: Request) -> Result<Response, Error> {
assert_eq!(request.method, Method::Put);
if let Some(body) = request.body {
let kv = crate::Kv::open("my-container").unwrap();
kv.set("key", &body).unwrap();
}
Ok(Response {
headers: Some(request.headers),
body: None,
status: 200,
})
}
```
**Additional context**
| True | Calling kv::set will freeze the HTTP handler - **Description of the bug**
That is, using kv capability inside of a http handler will render it frozen.
**To Reproduce**
```rust
#[register_handler]
fn handle_bar(request: Request) -> Result<Response, Error> {
assert_eq!(request.method, Method::Put);
if let Some(body) = request.body {
let kv = crate::Kv::open("my-container").unwrap();
kv.set("key", &body).unwrap();
}
Ok(Response {
headers: Some(request.headers),
body: None,
status: 200,
})
}
```
**Additional context**
| main | calling kv set will freeze the http handler description of the bug that is using kv capability inside of a http handler will render it frozen to reproduce rust fn handle bar request request result assert eq request method method put if let some body request body let kv crate kv open my container unwrap kv set key body unwrap ok response headers some request headers body none status additional context | 1 |
73,386 | 32,043,793,981 | IssuesEvent | 2023-09-22 22:08:53 | BCDevOps/developer-experience | https://api.github.com/repos/BCDevOps/developer-experience | closed | OCP 4.13 Test full node rebuild | *team/ DXC* *team/ ops and shared services* | After upgrading CLAB perform a node rebuild to ensure everything is working as expected.
Instructions are at https://github.com/bcgov-c/rhcos-ignition-builder
Definition of done:
- [x] One CLAB node is successfully rebuilt | 1.0 | OCP 4.13 Test full node rebuild - After upgrading CLAB perform a node rebuild to ensure everything is working as expected.
Instructions are at https://github.com/bcgov-c/rhcos-ignition-builder
Definition of done:
- [x] One CLAB node is successfully rebuilt | non_main | ocp test full node rebuild after upgrading clab perform a node rebuild to ensure everything is working as expected instructions are at definition of done one clab node is successfully rebuilt | 0 |
1,985 | 6,694,208,752 | IssuesEvent | 2017-10-10 00:20:05 | duckduckgo/zeroclickinfo-spice | https://api.github.com/repos/duckduckgo/zeroclickinfo-spice | closed | Maps: Language | Internal Maintainer Input Requested Status: Tolerated Suggestion | When I turn the region button on (ex. France) the language of the maps is still in English while mapbox has a French, German, Spanish, Russian and Chinese version available. I believe these links will come in handy https://www.mapbox.com/help/change-language/ https://www.mapbox.com/mapbox-gl-js/example/language-switch/
---
IA Page: http://duck.co/ia/view/maps_maps
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @nilnilnil
| True | Maps: Language - When I turn the region button on (ex. France) the language of the maps is still in English while mapbox has a French, German, Spanish, Russian and Chinese version available. I believe these links will come in handy https://www.mapbox.com/help/change-language/ https://www.mapbox.com/mapbox-gl-js/example/language-switch/
---
IA Page: http://duck.co/ia/view/maps_maps
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @nilnilnil
| main | maps language when i turn the region button on ex france the language of the maps is still in english while mapbox has a french german spanish russian and chinese version available i believe these links will come in handy ia page nilnilnil | 1 |
5,761 | 30,533,634,969 | IssuesEvent | 2023-07-19 15:45:22 | jupyter-naas/awesome-notebooks | https://api.github.com/repos/jupyter-naas/awesome-notebooks | opened | HubSpot - List communications from contact | templates maintainer | This notebook reads a page of Communications from HubSpot's CRM API. It is useful for organizations to quickly access and analyze communications from contacts.
| True | HubSpot - List communications from contact - This notebook reads a page of Communications from HubSpot's CRM API. It is useful for organizations to quickly access and analyze communications from contacts.
| main | hubspot list communications from contact this notebook reads a page of communications from hubspot s crm api it is useful for organizations to quickly access and analyze communications from contacts | 1
214,193 | 7,267,726,355 | IssuesEvent | 2018-02-20 06:59:31 | STEP-tw/battleship-phoenix | https://api.github.com/repos/STEP-tw/battleship-phoenix | opened | player repositions his ship before announcing ready. | Low Priority medium | As a _player_
I want to _reposition my ship_
So that I have _the freedom to change its position as I wish_
**Additional Details**
1. player has already positioned his ships.
2.player has not announced ready.
3.ocean grid will be deactivated when player is ready.
**Acceptance Criteria**
- [ ] Criteria 1
- Given _ships placed on ocean grid_
- When _I reposition it_
- Then _it should be replaced in the new position_
| 1.0 | player repositions his ship before announcing ready. - As a _player_
I want to _reposition my ship_
So that I have _the freedom to change its position as I wish_
**Additional Details**
1. player has already positioned his ships.
2.player has not announced ready.
3.ocean grid will be deactivated when player is ready.
**Acceptance Criteria**
- [ ] Criteria 1
- Given _ships placed on ocean grid_
- When _I reposition it_
- Then _it should be replaced in the new position_
| non_main | player repositions his ship before announcing ready as a player i want to reposition my ship so that i have the freedom to change its position as i wish additional details player has already positioned his ships player has not announced ready ocean grid will be deactivated when player is ready acceptance criteria criteria given ships placed on ocean grid when i reposition it then it should be replaced in the new position | 0 |
4,667 | 24,126,740,536 | IssuesEvent | 2022-09-21 01:40:20 | centerofci/mathesar | https://api.github.com/repos/centerofci/mathesar | opened | Hide grouping separators by default in Number columns | type: enhancement work: frontend status: ready restricted: maintainers | ## Current behavior
- We default to displaying grouping separators (e.g. `1,234`) in _all_ numbers.
- Users can turn off grouping separators by customizing their display options.
- This "on-by-default" behavior, feels a bit strange for columns like primary keys and years, where users typically don't see grouping separators.
- During import, Mathesar uses the `numeric` type by default -- even if all values within the import data can be cast to integers. @mathemancer explained that we made this decision early on in order to be flexible, and I think it's worth sticking with it. But it means that years don't get imported as integers by default, making it hard to take the approach we previously laid out in #1527.
- The back end does not set the grouping separator display option during import. It only gets set via the front end if the user manually adjust it.
## Desired behavior
- When the user has not configured any display options:
- Columns with a UI type of "Number" will be rendered without grouping separators (regardless of their Postgres type). This means that the user will need to manually turn them on sometimes.
- Columns with a UI type of "Money" will be rendered _with_ grouping separators. The user can turn them off if needed.
| True | Hide grouping separators by default in Number columns - ## Current behavior
- We default to displaying grouping separators (e.g. `1,234`) in _all_ numbers.
- Users can turn off grouping separators by customizing their display options.
- This "on-by-default" behavior, feels a bit strange for columns like primary keys and years, where users typically don't see grouping separators.
- During import, Mathesar uses the `numeric` type by default -- even if all values within the import data can be cast to integers. @mathemancer explained that we made this decision early on in order to be flexible, and I think it's worth sticking with it. But it means that years don't get imported as integers by default, making it hard to take the approach we previously laid out in #1527.
- The back end does not set the grouping separator display option during import. It only gets set via the front end if the user manually adjust it.
## Desired behavior
- When the user has not configured any display options:
- Columns with a UI type of "Number" will be rendered without grouping separators (regardless of their Postgres type). This means that the user will need to manually turn them on sometimes.
- Columns with a UI type of "Money" will be rendered _with_ grouping separators. The user can turn them off if needed.
| main | hide grouping separators by default in number columns current behavior we default to displaying grouping separators e g in all numbers users can turn off grouping separators by customizing their display options this on by default behavior feels a bit strange for columns like primary keys and years where users typically don t see grouping separators during import mathesar uses the numeric type by default even if all values within the import data can be cast to integers mathemancer explained that we made this decision early on in order to be flexible and i think it s worth sticking with it but it means that years don t get imported as integers by default making it hard to take the approach we previously laid out in the back end does not set the grouping separator display option during import it only gets set via the front end if the user manually adjust it desired behavior when the user has not configured any display options columns with a ui type of number will be rendered without grouping separators regardless of their postgres type this means that the user will need to manually turn them on sometimes columns with a ui type of money will be rendered with grouping separators the user can turn them off if needed | 1 |
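For reference, the two display modes discussed in this record map directly onto Python's `,` format option; the values below are hypothetical, purely to show what "grouping separators on/off" means for IDs and years versus money:

```python
pk, year, price = 10234, 1997, 1299999.5

# Proposed default for plain Number columns: no grouping separators.
print(f"{pk} {year}")    # 10234 1997

# Proposed default for Money columns: grouping separators on.
print(f"{price:,.2f}")   # 1,299,999.50
```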
2,519 | 8,655,460,259 | IssuesEvent | 2018-11-27 16:00:34 | codestation/qcma | https://api.github.com/repos/codestation/qcma | closed | qcma hangs after restoring a backup | unmaintained | Every time I restore a system backup, qcma hangs and I have to kill process then start it up again.
Tested with Ubuntu 16.04 and a PS TV. | True | qcma hangs after restoring a backup - Every time I restore a system backup, qcma hangs and I have to kill process then start it up again.
Tested with Ubuntu 16.04 and a PS TV. | main | qcma hangs after restoring a backup every time i restore a system backup qcma hangs and i have to kill process then start it up again tested with ubuntu and a ps tv | 1 |
351,862 | 32,031,741,486 | IssuesEvent | 2023-09-22 12:49:24 | IntellectualSites/FastAsyncWorldEdit | https://api.github.com/repos/IntellectualSites/FastAsyncWorldEdit | opened | Fix how transparency (alpha) works in #saturate | Requires Testing | ### Server Implementation
Paper
### Server Version
1.20.1
### Describe the bug
Due to a player request, @TheMeinerLP and I investigated how #saturate works and found out that the alpha (which is the transparency) always has to be 255 in order to work.
### To Reproduce
1. Do //re #saturate[0][255][0][1] and set the 1 to a number that is not 255 and below
2. See after a minute that it won't work
### Expected behaviour
It should work and the transparency should be applied -> glass might be set then or something more transparent
### Screenshots / Videos
_No response_
### Error log (if applicable)
None
### Fawe Debugpaste
https://athion.net/ISPaster/paste/view/4a7ba67d0e8347b9ae6d553dd08ff988
### Fawe Version
Is not important here, see info below (we had several tests running at this version and latest 2.7.2)
### Checklist
- [X] I have included a Fawe debugpaste.
- [X] I am using the newest build from https://ci.athion.net/job/FastAsyncWorldEdit/ and the issue still persists.
### Anything else?
This issue is since 2021 in the code at this place:
```java
public BlockType getNearestBlock(int color) {
    long min = Long.MAX_VALUE;
    int closest = 0;
    int red1 = (color >> 16) & 0xFF;
    int green1 = (color >> 8) & 0xFF;
    int blue1 = (color) & 0xFF;
    int alpha = (color >> 24) & 0xFF;
    for (int i = 0; i < validColors.length; i++) {
        int other = validColors[i];
        if (((other >> 24) & 0xFF) == alpha) {
            long distance = colorDistance(red1, green1, blue1, other);
            if (distance < min) {
                min = distance;
                closest = validBlockIds[i];
            }
        }
    }
    if (min == Long.MAX_VALUE) {
        return null;
    }
    return BlockTypesCache.values[closest];
}
```
| 1.0 | Fix how transparency (alpha) works in #saturate - ### Server Implementation
Paper
### Server Version
1.20.1
### Describe the bug
Due to a player request, @TheMeinerLP and I investigated how #saturate works and found out that the alpha (which is the transparency) always has to be 255 in order to work.
### To Reproduce
1. Do //re #saturate[0][255][0][1] and set the 1 to a number that is not 255 and below
2. See after a minute that it won't work
### Expected behaviour
It should work and the transparency should be applied -> glass might be set then or something more transparent
### Screenshots / Videos
_No response_
### Error log (if applicable)
None
### Fawe Debugpaste
https://athion.net/ISPaster/paste/view/4a7ba67d0e8347b9ae6d553dd08ff988
### Fawe Version
Is not important here, see info below (we had several tests running at this version and latest 2.7.2)
### Checklist
- [X] I have included a Fawe debugpaste.
- [X] I am using the newest build from https://ci.athion.net/job/FastAsyncWorldEdit/ and the issue still persists.
### Anything else?
This issue is since 2021 in the code at this place:
`public BlockType getNearestBlock(int color) {
long min = Long.MAX_VALUE;
int closest = 0;
int red1 = (color >> 16) & 0xFF;
int green1 = (color >> 8) & 0xFF;
int blue1 = (color) & 0xFF;
int alpha = (color >> 24) & 0xFF;
for (int i = 0; i < validColors.length; i++) {
int other = validColors[i];
if (((other >> 24) & 0xFF) == alpha) {
long distance = colorDistance(red1, green1, blue1, other);
if (distance < min) {
min = distance;
closest = validBlockIds[i];
}
}
}
if (min == Long.MAX_VALUE) {
return null;
}
return BlockTypesCache.values[closest];
}` | non_main | fix how transparency alpha works in saturate server implementation paper server version describe the bug due to a player request themeinerlp and i investigated how saturate works and found out that the alpha which is the tranparency always has to be in order to work to reproduce do re saturate and set the to a number that is not and below see after a minute that it won t work expected behaviour it should work and the transparency should be applied glass might be set then or something more transparent screenshots videos no response error log if applicable none fawe debugpaste fawe version is not important here see info below we had several tests running at this version and latest checklist i have included a fawe debugpaste i am using the newest build from and the issue still persists anything else this issue is since in the code at this place public blocktype getnearestblock int color long min long max value int closest int color int color int color int alpha color for int i i validcolors length i int other validcolors if other alpha long distance colordistance other if distance min min distance closest validblockids if min long max value return null return blocktypescache values | 0 |
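The getNearestBlock snippet above filters candidates on an exact alpha match, which is why a #saturate colour with any alpha other than 255 finds nothing in an all-opaque palette. Here is a minimal Python sketch of that channel unpacking and filtering; the palette values are invented for illustration, and a plain squared RGB distance stands in for FAWE's colorDistance:

```python
def unpack_argb(color):
    """Split a packed 32-bit ARGB int into (alpha, red, green, blue),
    mirroring the bit shifts in the Java snippet above."""
    alpha = (color >> 24) & 0xFF
    red = (color >> 16) & 0xFF
    green = (color >> 8) & 0xFF
    blue = color & 0xFF
    return alpha, red, green, blue


def nearest_color(color, palette):
    """Return the palette entry closest to `color`, but, like the Java
    code, only among entries whose alpha matches exactly."""
    alpha, r1, g1, b1 = unpack_argb(color)
    best, best_dist = None, None
    for other in palette:
        if ((other >> 24) & 0xFF) != alpha:
            continue  # exact-alpha filter: non-255 requests skip every opaque entry
        _, r2, g2, b2 = unpack_argb(other)
        dist = (r1 - r2) ** 2 + (g1 - g2) ** 2 + (b1 - b2) ** 2  # illustrative distance
        if best_dist is None or dist < best_dist:
            best, best_dist = other, dist
    return best


# A toy all-opaque palette (alpha 0xFF); these are NOT FAWE's real block colours.
palette = [0xFF00FF00, 0xFFFF0000, 0xFF0000FF]
opaque_green = 0xFF00CC00       # alpha 255: resolves to a palette entry
translucent_green = 0x8000CC00  # alpha 128: no entry passes the filter
```

With this palette, an opaque request resolves normally, while the same colour at alpha 128 returns no match at all, which mirrors the reported behaviour.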
405,907 | 27,540,433,466 | IssuesEvent | 2023-03-07 08:10:34 | mikro-orm/mikro-orm | https://api.github.com/repos/mikro-orm/mikro-orm | closed | docs: missing negation in method documentation Collection.remove | documentation | 
Should be "...does not necessarily ..." | 1.0 | docs: missing negation in method documentation Collection.remove - 
Should be "...does not necessarily ..." | non_main | docs missing negation in method documentation collection remove should be does not necessarily | 0 |
218,916 | 7,332,792,771 | IssuesEvent | 2018-03-05 17:19:09 | NCEAS/metacat | https://api.github.com/repos/NCEAS/metacat | closed | Needs to work for DATA objects as well | Component: Bugzilla-Id Priority: Normal Status: Closed Tracker: Bug | ---
Author Name: **ben leinfelder** (ben leinfelder)
Original Redmine Issue: 6315, https://projects.ecoinformatics.org/ecoinfo/issues/6315
Original Date: 2013-12-18
Original Assignee: Chris Jones
---
Found that calls need to work on data objects where the CN does not actually store the object.
See DataONE tracker: https://redmine.dataone.org/issues/4204
| 1.0 | Needs to work for DATA objects as well - ---
Author Name: **ben leinfelder** (ben leinfelder)
Original Redmine Issue: 6315, https://projects.ecoinformatics.org/ecoinfo/issues/6315
Original Date: 2013-12-18
Original Assignee: Chris Jones
---
Found that calls need to work on data objects where the CN does not actually store the object.
See DataONE tracker: https://redmine.dataone.org/issues/4204
| non_main | needs to work for data objects as well author name ben leinfelder ben leinfelder original redmine issue original date original assignee chris jones found that calls need to work on data objects where the cn does not actually store the object see dataone tracker | 0 |
1,987 | 6,694,249,190 | IssuesEvent | 2017-10-10 00:37:38 | duckduckgo/zeroclickinfo-spice | https://api.github.com/repos/duckduckgo/zeroclickinfo-spice | closed | Ski Resorts: overtriggering | Low-Hanging Fruit Maintainer Input Requested Relevancy Triggering | A search for "[o'hare airport map](https://duckduckgo.com/?q=o%27hare+airport+map&ia=skiresort)" triggers the Ski Resort Åre in Sweden.
---
IA Page: http://duck.co/ia/view/ski_resorts
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @felixpalmer
| True | Ski Resorts: overtriggering - A search for "[o'hare airport map](https://duckduckgo.com/?q=o%27hare+airport+map&ia=skiresort)" triggers the Ski Resort Åre in Sweden.
---
IA Page: http://duck.co/ia/view/ski_resorts
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @felixpalmer
| main | ski resorts overtriggering a search for triggers the ski resort åre in sweden ia page felixpalmer | 1 |
1,070 | 4,889,458,959 | IssuesEvent | 2016-11-18 10:17:23 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | Timezone: global name 'tzfile' is not defined | affects_2.2 bug_report in progress P2 waiting_on_maintainer | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
timezone
##### ANSIBLE VERSION
```
ansible 2.2.0.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
No changes.
##### OS / ENVIRONMENT
Host: Arch
Client: CentOS 6.8
##### SUMMARY
As of https://github.com/ansible/ansible-modules-extras/commit/e29584ad70d2f4e0a8ef02781d7bc7e977ff0e69 timezone is broken again. If I go back to https://github.com/ansible/ansible-modules-extras/commit/df35d324d62e6034ab86db0fb4a56d3ca122d4b2 it works fine again.
##### STEPS TO REPRODUCE
Run playbook with a timezone defined.
##### EXPECTED RESULTS
Timezone set properly, no errors.
##### ACTUAL RESULTS
Receive the fatal error below.
```
fatal: [vagrant-noc-server]: FAILED! => {"changed": false, "failed": true, "module_stderr": "Shared connection to 192.168.50.3 closed.\r\n", "module_stdout": "Traceback (most recent call last):\r\n File \"/tmp/ansible__7aiLU/ansible_module_timezone.py\", line 462, in <module>\r\n main()\r\n File \"/tmp/ansible__7aiLU/ansible_module_timezone.py\", line 435, in main\r\n tz = Timezone(module)\r\n File \"/tmp/ansible__7aiLU/ansible_module_timezone.py\", line 306, in __init__\r\n self.update_timezone += ' %s /etc/localtime' % tzfile\r\nNameError: global name 'tzfile' is not defined\r\n", "msg": "MODULE FAILURE"}
``` | True | Timezone: global name 'tzfile' is not defined - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
timezone
##### ANSIBLE VERSION
```
ansible 2.2.0.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
No changes.
##### OS / ENVIRONMENT
Host: Arch
Client: CentOS 6.8
##### SUMMARY
As of https://github.com/ansible/ansible-modules-extras/commit/e29584ad70d2f4e0a8ef02781d7bc7e977ff0e69 timezone is broken again. If I go back to https://github.com/ansible/ansible-modules-extras/commit/df35d324d62e6034ab86db0fb4a56d3ca122d4b2 it works fine again.
##### STEPS TO REPRODUCE
Run playbook with a timezone defined.
##### EXPECTED RESULTS
Timezone set properly, no errors.
##### ACTUAL RESULTS
Receive the fatal error below.
```
fatal: [vagrant-noc-server]: FAILED! => {"changed": false, "failed": true, "module_stderr": "Shared connection to 192.168.50.3 closed.\r\n", "module_stdout": "Traceback (most recent call last):\r\n File \"/tmp/ansible__7aiLU/ansible_module_timezone.py\", line 462, in <module>\r\n main()\r\n File \"/tmp/ansible__7aiLU/ansible_module_timezone.py\", line 435, in main\r\n tz = Timezone(module)\r\n File \"/tmp/ansible__7aiLU/ansible_module_timezone.py\", line 306, in __init__\r\n self.update_timezone += ' %s /etc/localtime' % tzfile\r\nNameError: global name 'tzfile' is not defined\r\n", "msg": "MODULE FAILURE"}
``` | main | timezone global name tzfile is not defined issue type bug report component name timezone ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration no changes os environment host arch client centos summary as of timezone is broken again if i go back to it works fine again steps to reproduce run playbook with a timezone defined expected results timezone set properly no errors actual results receive the fatal error below fatal failed changed false failed true module stderr shared connection to closed r n module stdout traceback most recent call last r n file tmp ansible ansible module timezone py line in r n main r n file tmp ansible ansible module timezone py line in main r n tz timezone module r n file tmp ansible ansible module timezone py line in init r n self update timezone s etc localtime tzfile r nnameerror global name tzfile is not defined r n msg module failure | 1 |
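The traceback in the timezone report reduces to a familiar pattern: a name (`tzfile`) is used before anything has assigned it. Below is a minimal, self-contained Python sketch of that failure and the usual fix; the function names, distributions, and paths are illustrative, not Ansible's actual module code:

```python
# Minimal reproduction of the failure pattern in the traceback above.
# All names here are illustrative placeholders.

def build_update_cmd_buggy():
    # `tzfile` is never assigned in this scope or globally, so calling
    # this raises: NameError: name 'tzfile' is not defined
    return 'cp %s /etc/localtime' % tzfile


def build_update_cmd_fixed(distribution, default='/usr/share/zoneinfo/UTC'):
    # Fix: make sure the name is bound on every path before it is used.
    tzfile = default
    if distribution == 'Gentoo':  # hypothetical special case
        tzfile = '/etc/timezone'
    return 'cp %s /etc/localtime' % tzfile
```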
440,266 | 12,696,908,560 | IssuesEvent | 2020-06-22 10:52:14 | input-output-hk/cardano-ledger-specs | https://api.github.com/repos/input-output-hk/cardano-ledger-specs | closed | Handle Shelley Bootstrap addresses separately | priority high shelley mainnet | The bootstrap (ie Byron) addresses in Shelley will need to use separate hashing algorithms and slightly different encryption serialization. We will probably need to add new parameters to the `Crypto` type family. | 1.0 | Handle Shelley Bootstrap addresses separately - The bootstrap (ie Byron) addresses in Shelley will need to use separate hashing algorithms and slightly different encryption serialization. We will probably need to add new parameters to the `Crypto` type family. | non_main | handle shelley bootstrap addresses separately the bootstrap ie byron addresses in shelley will need to use separate hashing algorithms and slightly different encryption serialization we will probably need to add new parameters to the crypto type family | 0 |
20,628 | 2,622,855,104 | IssuesEvent | 2015-03-04 08:07:20 | max99x/pagemon-chrome-ext | https://api.github.com/repos/max99x/pagemon-chrome-ext | closed | Monitoring specific elements | auto-migrated Priority-Medium | ```
What steps will reproduce the problem? Please include a URL.
Picking specific elements on a page to monitor doesn't appear to be working for
me. I am a Steam (gaming network) user and I would like to monitor specific
elements on the Steam Community Market page, however, although I've selected
elements that contain the specific elements - or even highlighted the exact
elements I want to monitor, other features of the page constantly cause my
alert to go off when they change.
To be more exact - I'm trying to monitor only the cheapest "earbuds" on the SCM
found here -
http://steamcommunity.com/market/listings/440/Earbuds?filter=earbud they would
be the very top price (the cheapest) of the earbuds for sale. Since picking it
isn't working, is there a way to use the html more precisely? I have limited
knowledge of html and do not know anything else. Inspect element and copy the
element's code that you want? Well, I tried that and it stopped monitoring altogether! (see screenshot: http://i.imgur.com/57NI3NU.png?1)
I would like to know how to best monitor these specific elements such as an
item's price only, however, I'm not sure how to use the regex option and the
picker seems to include other elements on the page. Also, is there a way to
monitor multiple completely separate entities on a page? It seems that right
now you can only monitor items that are connected via div tags or whatever.
I'm using the newest version of Chrome on Windows 7 64bit OS.
```
Original issue reported on code.google.com by `adamjess...@gmail.com` on 3 Dec 2014 at 7:05 | 1.0 | Monitoring specific elements - ```
What steps will reproduce the problem? Please include a URL.
Picking specific elements on a page to monitor doesn't appear to be working for
me. I am a Steam (gaming network) user and I would like to monitor specific
elements on the Steam Community Market page, however, although I've selected
elements that contain the specific elements - or even highlighted the exact
elements I want to monitor, other features of the page constantly cause my
alert to go off when they change.
To be more exact - I'm trying to monitor only the cheapest "earbuds" on the SCM
found here -
http://steamcommunity.com/market/listings/440/Earbuds?filter=earbud they would
be the very top price (the cheapest) of the earbuds for sale. Since picking it
isn't working, is there a way to use the html more precisely? I have limited
knowledge of html and do not know anything else. Inspect element and copy the
element's code that you want? Well, I tried that and it stopped monitoring altogether! (see screenshot: http://i.imgur.com/57NI3NU.png?1)
I would like to know how to best monitor these specific elements such as an
item's price only, however, I'm not sure how to use the regex option and the
picker seems to include other elements on the page. Also, is there a way to
monitor multiple completely separate entities on a page? It seems that right
now you can only monitor items that are connected via div tags or whatever.
I'm using the newest version of Chrome on Windows 7 64bit OS.
```
Original issue reported on code.google.com by `adamjess...@gmail.com` on 3 Dec 2014 at 7:05 | non_main | monitoring specific elements what steps will reproduce the problem please include a url picking specific elements on a page to monitor doesn t appear to be working for me i am a steam gaming network user and i would like to monitor specific elements on the steam community market page however although i ve selected elements that contain the specific elements or even highlighted the exact elements i want to monitor other features of the page constantly cause my alert to go off when they change to be more exact i m trying to monitor only the cheapest earbuds on the scm found here they would be the very top price the cheapest of the earbuds for sale since picking it isn t working is there a way to use the html more precisely i have limited knowledge of html and do not know anything else inspect element and copy the element s code that you want well i tried that and it stopped monitoring all together see screenshot i would like to know how to best monitor these specific elements such as an item s price only however i m not sure how to use the regex option and the picker seems to include other elements on the page also is there a way to monitor multiple completely separate entities on a page it seems that right now you can only monitor items that are connected via div tags or whatever i m using the newest version of chrome on windows os original issue reported on code google com by adamjess gmail com on dec at | 0 |
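For requests like the one above, where only the first (cheapest) listed price matters rather than the whole page, the "regex option" the reporter mentions can target just that fragment. A hedged sketch against made-up listing markup; the real Steam Market HTML differs, so the pattern would need adjusting:

```python
import re

# Hypothetical, simplified listing markup. This is NOT the real Steam page;
# class names and structure are invented for illustration.
page = """
<div class="market_listing_row">
  <span class="market_listing_price">$128.40</span>
</div>
<div class="market_listing_row">
  <span class="market_listing_price">$131.99</span>
</div>
"""


def cheapest_price(html):
    """Return the first listed price as a float, assuming listings are
    sorted cheapest-first as on the page the reporter describes."""
    match = re.search(r'market_listing_price">\s*\$([0-9.]+)', html)
    return float(match.group(1)) if match else None
```

Monitoring only this extracted value, instead of the whole element tree, would ignore the unrelated page changes that keep triggering the alert.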
1,328 | 5,697,637,125 | IssuesEvent | 2017-04-16 23:45:32 | caskroom/homebrew-cask | https://api.github.com/repos/caskroom/homebrew-cask | closed | Dependent casks and versioning | awaiting maintainer feedback | While it is unusual, there are a few casks that are dependent on others in a fundamental way. I'm told Flash and a few others fall into this category, but my own experience is with Unity, so I'll use it for my examples.
When Cask installs Unity, it grabs only the barest version, because the "approved" way of installing Unity is to use their Download Assistant, which lets you choose options for what to install. Each of these options grabs a different .pkg file from the web for installation; it roughly makes sense since the support packages can be multiple gigabytes and it's silly to waste bandwidth and space on them if you don't use them.
Cask's approach to this is to make each separate .pkg into a separate cask, which also makes sense. The problem is when the subsidiary packages get out of sync with the main package. We currently have a way of indicating that unity-ios-support-for-editor `depends_on` unity, but that's where it ends. So when a new version of Unity drops, each cask has to be separately updated with the new version number. Since there's rightly no ownership of casks, there's no way to say "if you update unity, also update its dependent casks."
It could be that this is deemed to be not widespread enough to be worth addressing. If it something worth fixing, I'm happy to prepare a PR, but don't want to do the work if it may not land.
So question 1 is: is this something the HBC team feels is worth fixing?
Question 2 is about the best way to solve it. I see two possible solutions, with various levels of problems:
A. Having a way for casks that depend on another cask to get information about it, in this case the version number. That way the version could simply be updated in the unity cask and all dependent casks would get it automatically.
B. We add options to casks so that when installing a base cask, a user could specify to *also* install some dependent packages which, as part of the same cask, could use the same versioning data.
A is probably the simplest to implement, but may break some important rules about isolation of information in casks.
B could be thorny both to implement and support, but would give the most flexibility going forward, so further dependents (like Unity introducing a new build target) could be added easily to the base cask instead of having to make a new cask.
Thanks for your time, and my apologies if this is very off the mark. Love the work y'all do.
| True | Dependent casks and versioning - While it is unusual, there are a few casks that are dependent on others in a fundamental way. I'm told Flash and a few others fall into this category, but my own experience is with Unity, so I'll use it for my examples.
When Cask installs Unity, it grabs only the barest version, because the "approved" way of installing Unity is to use their Download Assistant, which lets you choose options for what to install. Each of these options grabs a different .pkg file from the web for installation; it roughly makes sense since the support packages can be multiple gigabytes and it's silly to waste bandwidth and space on them if you don't use them.
Cask's approach to this is to make each separate .pkg into a separate cask, which also makes sense. The problem is when the subsidiary packages get out of sync with the main package. We currently have a way of indicating that unity-ios-support-for-editor `depends_on` unity, but that's where it ends. So when a new version of Unity drops, each cask has to be separately updated with the new version number. Since there's rightly no ownership of casks, there's no way to say "if you update unity, also update its dependent casks."
It could be that this is deemed to be not widespread enough to be worth addressing. If it something worth fixing, I'm happy to prepare a PR, but don't want to do the work if it may not land.
So question 1 is: is this something the HBC team feels is worth fixing?
Question 2 is about the best way to solve it. I see two possible solutions, with various levels of problems:
A. Having a way for casks that depend on another cask to get information about it, in this case the version number. That way the version could simply be updated in the unity cask and all dependent casks would get it automatically.
B. We add options to casks so that when installing a base cask, a user could specify to *also* install some dependent packages which, as part of the same cask, could use the same versioning data.
A is probably the simplest to implement, but may break some important rules about isolation of information in casks.
B could be thorny both to implement and support, but would give the most flexibility going forward, so further dependents (like Unity introducing a new build target) could be added easily to the base cask instead of having to make a new cask.
Thanks for your time, and my apologies if this is very off the mark. Love the work y'all do.
| main | dependent casks and versioning while it is unusual there are a few casks that are dependent on others in a fundamental way i m told flash and a few others fall into this category but my own experience is with unity so i ll use it for my examples when cask installs unity it grabs only the barest version because the approved way of installing unity is to use their download assistant which lets you choose options for what to install each of these options grabs a different pkg file from the web for installation it roughly makes sense since the support packages can be multiple gigabytes and it s silly to waste bandwidth and space on them if you don t use them cask s approach to this is to make each separate pkg into a separate cask which also makes sense the problem is when the subsidiary packages get out of sync with the main package we currently have a way of indicating that unity ios support for editor depends on unity but that s where it ends so when a new version of unity drops each cask has to be separately updated with the new version number since there s rightly no ownership of casks there s no way to say if you update unity also update its dependent casks it could be that this is deemed to be not widespread enough to be worth addressing if it something worth fixing i m happy to prepare a pr but don t want to do the work if it may not land so question is is this something the hbc team feels is worth fixing question is about the best way to solve it i see two possible solutions with various levels of problems a having a way for casks that depend on another cask to get information about it in this case the version number that way the version could simply be updated in the unity cask and all dependent casks would get it automatically b we add options to casks so that when installing a base cask a user could specify to also install some dependent packages which as part of the same cask could use the same versioning data a is probably the simplest to 
implement but may break some important rules about isolation of information in casks b could be thorny both to implement and support but would give the most flexibility going forward so further dependents like unity introducing a new build target could be added easily to the base cask instead of having to make a new cask thanks for your time and my apologies if this is very off the mark love the work y all do | 1 |
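Option A in the cask discussion above, letting a dependent cask read its parent's version, can be sketched abstractly. This toy model only illustrates the idea; it is not Homebrew-Cask's DSL or any proposed API, and the version numbers are invented:

```python
# Toy registry: a dependent entry carries no version of its own and
# inherits it from the cask it depends on.
casks = {
    "unity": {"version": "2018.2.0"},
    "unity-ios-support-for-editor": {"depends_on": "unity"},
    "unity-android-support-for-editor": {"depends_on": "unity"},
}


def resolved_version(name):
    """Walk depends_on links until an entry with an explicit version is
    found, so bumping the base cask updates every dependent at once."""
    cask = casks[name]
    if "version" in cask:
        return cask["version"]
    return resolved_version(cask["depends_on"])
```

Bumping the base entry's version immediately changes what every dependent resolves to, which is the maintenance win the author describes.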
791,352 | 27,860,677,889 | IssuesEvent | 2023-03-21 05:51:37 | Saga-sanga/mizo-apologia | https://api.github.com/repos/Saga-sanga/mizo-apologia | opened | Search icon in navbar gets squished in screens below 700px | bug medium_priority optimization | Fix the padding of nav elements to have more space. | 1.0 | Search icon in navbar gets squished in screens below 700px - Fix the padding of nav elements to have more space. | non_main | search icon in navbar gets squished in screens below fix the padding of nav elements to have more space | 0 |
3,611 | 14,598,356,970 | IssuesEvent | 2020-12-21 00:28:54 | tgstation/tgstation | https://api.github.com/repos/tgstation/tgstation | closed | OH GOD WHERE DOES THE NOSE GO WHEN YOU PUT UP THE HOOD | Maintainability/Hinders improvements Oversight | Reporting client version: 512
AAAAAAAA


| True | OH GOD WHERE DOES THE NOSE GO WHEN YOU PUT UP THE HOOD - Reporting client version: 512
AAAAAAAA


| main | oh god where does the nose go when you put up the hood reporting client version aaaaaaaa | 1 |
4,355 | 22,034,174,108 | IssuesEvent | 2022-05-28 09:52:15 | bromite/bromite | https://api.github.com/repos/bromite/bromite | closed | Web Feeds (RSS) | enhancement enhancement-without-maintainer | ### Preliminary checklist
- [X] I have read the [README](https://github.com/bromite/bromite/blob/master/README.md)
- [X] I have read the [FAQs](https://github.com/bromite/bromite/blob/master/FAQ.md).
- [X] I have searched [existing issues](https://github.com/bromite/bromite/issues) for my feature request. This is a new issue (NOT a duplicate) and is not related to another issue.
### Is your feature request related to privacy?
No
### Is there a patch available for this feature somewhere?
No.
### Describe the solution you would like
Would it be possible to implement the web feed feature that is available in Chrome? By enabling the Web Feed Enabled flag, the user is able to add websites that they wish to follow. These are displayed on the home screen in a news feed layout. Thanks.
### Describe alternatives you have considered
None. | True | Web Feeds (RSS) - ### Preliminary checklist
- [X] I have read the [README](https://github.com/bromite/bromite/blob/master/README.md)
- [X] I have read the [FAQs](https://github.com/bromite/bromite/blob/master/FAQ.md).
- [X] I have searched [existing issues](https://github.com/bromite/bromite/issues) for my feature request. This is a new issue (NOT a duplicate) and is not related to another issue.
### Is your feature request related to privacy?
No
### Is there a patch available for this feature somewhere?
No.
### Describe the solution you would like
Would it be possible to implement the web feed feature that is available in Chrome? By enabling the Web Feed Enabled flag, the user is able to add websites that they wish to follow. These are displayed on the home screen in a news feed layout. Thanks.
### Describe alternatives you have considered
None. | main | web feeds rss preliminary checklist i have read the i have read the i have searched for my feature request this is a new issue not a duplicate and is not related to another issue is your feature request related to privacy no is there a patch available for this feature somewhere no describe the solution you would like please would it be possible to implement the web feed feature that is available in chrome by enabling the web feed enabled flag the user is able to add websites that they wish to follow these are displayed on the home screen in a news feed layout thanks describe alternatives you have considered none | 1 |
3,917 | 17,574,805,847 | IssuesEvent | 2021-08-15 11:55:51 | Jerit3787/status | https://api.github.com/repos/Jerit3787/status | opened | Unexpected Maintainance | maintainance | <!--
start: 2021-02-24T13:00:00.220Z
end: 2021-02-24T14:00:00.220Z
expectedDown: "Main Website", "API", "Projects Website", "Links Website", "Project Stutastic", "Stutastic Planner", "Channel Buku Teks Website"
-->
Hi, this website is still under maintenance, so the data shown is not accurate for now. We'll post any updates here! | True | Unexpected Maintainance - <!--
start: 2021-02-24T13:00:00.220Z
end: 2021-02-24T14:00:00.220Z
expectedDown: "Main Website", "API", "Projects Website", "Links Website", "Project Stutastic", "Stutastic Planner", "Channel Buku Teks Website"
-->
Hi, this website is still under maintenance, so the data shown is not accurate for now. We'll post any updates here! | main | unexpected maintainance start end expecteddown main website api projects website links website project stutastic stutastic planner channel buku teks website hi this website is still under maintainance so data shown are not accurate as for now any updates we ll update here | 1
330,783 | 24,277,382,388 | IssuesEvent | 2022-09-28 14:45:04 | DMIT-2018/dmit-2018-sep-2022-a02-workbook-Michael-Armenta | https://api.github.com/repos/DMIT-2018/dmit-2018-sep-2022-a02-workbook-Michael-Armenta | opened | General Planning Implementation Tasks of Managing Play List in Chinook | documentation | This task list area will be completed once the implementation plan has been outlined. This area is where one creates the task list that is associated with the milestone. The tasks that are outlined in this area, are the tasks that are counted for the milestone.
- Task A
- [ ] task a.1
- [ ] task a.2
- [ ] task a.3
- Task B
- [ ] task b.1
- Task C
- [ ] task c.1
- [ ] task c.2 | 1.0 | General Planning Implementation Tasks of Managing Play List in Chinook - This task list area will be completed once the implementation plan has been outlined. This area is where one creates the task list that is associated with the milestone. The tasks that are outlined in this area, are the tasks that are counted for the milestone.
- Task A
- [ ] task a.1
- [ ] task a.2
- [ ] task a.3
- Task B
- [ ] task b.1
- Task C
- [ ] task c.1
- [ ] task c.2 | non_main | general planning implementation tasks of managing play list in chinook this task list area will be completed once the implementation plan has been outlined this area is where one creates the task list that is associated with the milestone the tasks that are outlined in this area are the tasks that are counted for the milestone task a task a task a task a task b task b task c task c task c | 0 |
793,649 | 28,006,155,708 | IssuesEvent | 2023-03-27 15:22:44 | rowan04/CW3-Maze-Game | https://api.github.com/repos/rowan04/CW3-Maze-Game | closed | Add wall-breaker with random spawn | base high priority | Add wall breaker that spawns randomly in different spawn points across the maze.
Will be able to break one wall piece, allowing the user to break into the exit.
Maybe add a feature where, if the player accidentally uses their wall breaker on something other than the exit, another one will spawn, so that the player can't get stuck.
Will be able to break one wall piece, allowing the user to break into the exit.
Maybe add a feature where, if the player accidentally uses their wall breaker on something other than the exit, another one will spawn, so that the player can't get stuck.
11,422 | 7,202,164,943 | IssuesEvent | 2018-02-06 02:18:49 | BeeDesignLLC/GlutenProject.com | https://api.github.com/repos/BeeDesignLLC/GlutenProject.com | closed | Clean up data | usability | - [x] Consolidate duplicate brands
- [x] Fix uppercase brand names
- [x] Change algolia index to have brand object (name, id)
| True | Clean up data - - [x] Consolidate duplicate brands
- [x] Fix uppercase brand names
- [x] Change algolia index to have brand object (name, id)
| non_main | clean up data consolidate duplicate brands fix uppercase brand names change algolia index to have brand object name id | 0 |
141,132 | 11,395,174,261 | IssuesEvent | 2020-01-30 10:51:05 | akeneo/pim-community-dev | https://api.github.com/repos/akeneo/pim-community-dev | closed | [CE 3.0] edit_product_with_localized_attributes.feature:59 | CI tests | Circle CI build : https://circleci.com/gh/akeneo/pim-enterprise-dev/4313#tests/containers/12
PR : https://github.com/akeneo/pim-enterprise-dev/pull/5736
We encountered an unstable Behat scenario when running an EE build.
vendor/akeneo/pim-community-dev/tests/legacy/features/pim/enrichment/product/pef/edit_product_with_localized_attributes.feature:59 - Edit a product with localized attributes
Then the field Date should contain "05/28/2015": Spin : timeout of 50 excedeed, with message : Expected product field "Date" to contain "05/28/2015", but got "150.8675". (Context\Spin\TimeoutException) | 1.0 | [CE 3.0] edit_product_with_localized_attributes.feature:59 - Circle CI build : https://circleci.com/gh/akeneo/pim-enterprise-dev/4313#tests/containers/12
PR : https://github.com/akeneo/pim-enterprise-dev/pull/5736
We encountered an unstable Behat scenario when running an EE build.
vendor/akeneo/pim-community-dev/tests/legacy/features/pim/enrichment/product/pef/edit_product_with_localized_attributes.feature:59 - Edit a product with localized attributes
Then the field Date should contain "05/28/2015": Spin : timeout of 50 excedeed, with message : Expected product field "Date" to contain "05/28/2015", but got "150.8675". (Context\Spin\TimeoutException) | non_main | edit product with localized attributes feature circle ci build pr we encountered an unstable behat when running a ee build vendor akeneo pim community dev tests legacy features pim enrichment product pef edit product with localized attributes feature edit a product with localized attributes then the field date should contain spin timeout of excedeed with message expected product field date to contain but got context spin timeoutexception | 0 |
3,229 | 12,368,706,182 | IssuesEvent | 2020-05-18 14:13:28 | Kashdeya/Tiny-Progressions | https://api.github.com/repos/Kashdeya/Tiny-Progressions | closed | Enchantment Table clear out Inventory | Version not Maintainted | Hello all
I Crafting the Bag Pouch and i want it to Soulbound Entchantment on this Bag.
After this attempt, i show in the Bag and i lost the Bag Invetory directly.
## Steps to Reproduce (for bugs)
1. Creating the Bag Pouch
2. Fill up with Items
3. All Items to see
3. Want it to Enchantment Soulbound
4. See it does not work
5. Look in the Bag and lost all Items in this Bag Pouch
6. Same without Lapis.
## Logs
[latest.log](https://github.com/Sunekaer/stoneBlock/files/2452647/latest.log)
## Client Information
* Modpack Version: Stoneblock 1.0.17
* Mod Version: tinyprogressions-1.12.2.-.3.3.3.1
* Server/LAN/Single Player: SP
* Optifine Installed: YES OptiFine_1.12.2_HD_U_E2
* Shaders Enabled: NO
Greetings | True | Enchantment Table clear out Inventory - Hello all
I Crafting the Bag Pouch and i want it to Soulbound Entchantment on this Bag.
After this attempt, i show in the Bag and i lost the Bag Invetory directly.
## Steps to Reproduce (for bugs)
1. Creating the Bag Pouch
2. Fill up with Items
3. All Items to see
3. Want it to Enchantment Soulbound
4. See it does not work
5. Look in the Bag and lost all Items in this Bag Pouch
6. Same without Lapis.
## Logs
[latest.log](https://github.com/Sunekaer/stoneBlock/files/2452647/latest.log)
## Client Information
* Modpack Version: Stoneblock 1.0.17
* Mod Version: tinyprogressions-1.12.2.-.3.3.3.1
* Server/LAN/Single Player: SP
* Optifine Installed: YES OptiFine_1.12.2_HD_U_E2
* Shaders Enabled: NO
Greetings | main | enchantment table clear out inventory hello all i crafting the bag pouch and i want it to soulbound entchantment on this bag after this attempt i show in the bag and i lost the bag invetory directly steps to reproduce for bugs creating the bag pouch fill up with items all items to see want it to enchantment soulbound see it does not work look in the bag and lost all items in this bag pouch same without lapis logs client information modpack version stoneblock mod version tinyprogressions server lan single player sp optifine installed yes optifine hd u shaders enabled no greetings | 1 |
730,446 | 25,173,146,193 | IssuesEvent | 2022-11-11 06:25:15 | saudalnasser/strifelux | https://api.github.com/repos/saudalnasser/strifelux | opened | feat: modals | type: feature priority: medium | ## Problem
need an easy way to create and handle modals in an organized way and with minimal effort.
## Solution(s)
provide an easy way of:
- handling modals
- organizing modals
| 1.0 | feat: modals - ## Problem
need an easy way to create and handle modals in an organized way and with minimal effort.
## Solution(s)
provide an easy way of:
- handling modals
- organizing modals
| non_main | feat modals problem need an easy way to create and handle modals in an organized way and with minimal effort solution s provide an easy way of handling modals organizing modals | 0 |
2,356 | 8,409,619,484 | IssuesEvent | 2018-10-12 07:59:59 | Kristinita/Erics-Green-Room | https://api.github.com/repos/Kristinita/Erics-Green-Room | closed | [Feature request] Транскрипционные метаданные | enhancement need-maintainer nervotrepka similar-implemented | ### 1. Запрос
Неплохо было бы, если б появились метаданные, упрощающие запись ответов — транскрипций с языков, отличных от русского.
+ `*-ou-` — для о/у,
+ `*-zs-` — для з/с,
+ `*-uv-` — для у/в.
Другой случай — [**разночтение транскрипции э/е**](https://github.com/Kristinita/Erics-Green-Room/issues/22) — встречается настолько часто, что необходим функционал на более высоком уровне, чем индивидуальный для каждого вопроса.
### 2. Примеры
1. `Обладатель Золотого мяча 2016*Криштиану Роналду*-ou-` — будут засчитываться ответы как `Криштиану Роналду` так и `Криштиано Роналдо`;
1. `Во «Властелине Колец» ЕГО играл Шон Астин*Сэмвайс Гэмджи*-zs-*-uv-` — будут засчитываться ответы `Сэмвайс Гэмджи`, `Сэмуайз Гемджи`, `Сэмуайс Гемджи`, `Сэмвайс Гэмджи`.
### 3. Аргументация
Экономия времени на рутинную работу для авторов пакетов. Намного быстрее 1 раз нажать шорткат, чем писать десятки букв вариантов. Как автор около 50 пакетов для Альфа-хаба, говорю, что необходимость в этих метаданных возникает часто.
Любое изменение, сокращающее время на рутинную работу — хорошее изменение.
Спасибо. | True | [Feature request] Транскрипционные метаданные - ### 1. Запрос
Неплохо было бы, если б появились метаданные, упрощающие запись ответов — транскрипций с языков, отличных от русского.
+ `*-ou-` — для о/у,
+ `*-zs-` — для з/с,
+ `*-uv-` — для у/в.
Другой случай — [**разночтение транскрипции э/е**](https://github.com/Kristinita/Erics-Green-Room/issues/22) — встречается настолько часто, что необходим функционал на более высоком уровне, чем индивидуальный для каждого вопроса.
### 2. Примеры
1. `Обладатель Золотого мяча 2016*Криштиану Роналду*-ou-` — будут засчитываться ответы как `Криштиану Роналду` так и `Криштиано Роналдо`;
1. `Во «Властелине Колец» ЕГО играл Шон Астин*Сэмвайс Гэмджи*-zs-*-uv-` — будут засчитываться ответы `Сэмвайс Гэмджи`, `Сэмуайз Гемджи`, `Сэмуайс Гемджи`, `Сэмвайс Гэмджи`.
### 3. Аргументация
Экономия времени на рутинную работу для авторов пакетов. Намного быстрее 1 раз нажать шорткат, чем писать десятки букв вариантов. Как автор около 50 пакетов для Альфа-хаба, говорю, что необходимость в этих метаданных возникает часто.
Любое изменение, сокращающее время на рутинную работу — хорошее изменение.
Спасибо. | main | транскрипционные метаданные запрос неплохо было бы если б появились метаданные упрощающие запись ответов — транскрипций с языков отличных от русского ou — для о у zs — для з с uv — для у в другой случай — — встречается настолько часто что необходим функционал на более высоком уровне чем индивидуальный для каждого вопроса примеры обладатель золотого мяча криштиану роналду ou — будут засчитываться ответы как криштиану роналду так и криштиано роналдо во «властелине колец» его играл шон астин сэмвайс гэмджи zs uv — будут засчитываться ответы сэмвайс гэмджи сэмуайз гемджи сэмуайс гемджи сэмвайс гэмджи аргументация экономия времени на рутинную работу для авторов пакетов намного быстрее раз нажать шорткат чем писать десятки букв вариантов как автор около пакетов для альфа хаба говорю что необходимость в этих метаданных возникает часто любое изменение сокращающее время на рутинную работу — хорошее изменение спасибо | 1 |
97,634 | 4,003,464,999 | IssuesEvent | 2016-05-12 00:30:18 | medic/medic-webapp | https://api.github.com/repos/medic/medic-webapp | closed | Data migration script performance | 1 - Scheduled Performance Priority | After doing a data migration for LG the production server became slow and the CPU usage of beam.smp jumped to 100% for a long time (ie: days). [medic-projects issue](https://github.com/medic/medic-projects/issues/317).
Investigate and fix. | 1.0 | Data migration script performance - After doing a data migration for LG the production server became slow and the CPU usage of beam.smp jumped to 100% for a long time (ie: days). [medic-projects issue](https://github.com/medic/medic-projects/issues/317).
Investigate and fix. | non_main | data migration script performance after doing a data migration for lg the production server became slow and the cpu usage of beam smp jumped to for a long time ie days investigate and fix | 0 |
917 | 4,621,818,704 | IssuesEvent | 2016-09-27 03:49:28 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | docker_container.py fails on second run if devices are configured | affects_2.2 bug_report cloud docker waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
docker_container
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.2.0
config file =
configured module search path = Default w/o overrides
```
##### CONFIGURATION
Nothing special
##### OS / ENVIRONMENT
ubuntu
##### SUMMARY
When you have devices configured in your playbook, e.g.
devices:
- /dev/net/tun
- /dev/kvm
Then a second run through of a playbook will fail. Details below.
##### STEPS TO REPRODUCE
Create a container with devices configured. Then rerun the same playbook.
##### EXPECTED RESULTS
Nothing should change, let alone fail.
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
An exception occurred during task execution. The full traceback is:
Traceback (most recent call last):
File "/tmp/ansible_cxlQ9b/ansible_module_docker_container.py", line 1963, in <module>
main()
File "/tmp/ansible_cxlQ9b/ansible_module_docker_container.py", line 1956, in main
cm = ContainerManager(client)
File "/tmp/ansible_cxlQ9b/ansible_module_docker_container.py", line 1609, in __init__
self.present(state)
File "/tmp/ansible_cxlQ9b/ansible_module_docker_container.py", line 1634, in present
different, differences = container.has_different_configuration(image)
File "/tmp/ansible_cxlQ9b/ansible_module_docker_container.py", line 1253, in has_different_configuration
set_b = set(value)
TypeError: unhashable type: 'dict'
failed: [localhost] (item={u'pa': None, u'container_name': u'pcontainer', u'container_version': u'latest', u'name': u'pa'}) => {
"failed": true,
"invocation": {
"module_name": "docker_container"
},
"item": {
"container_name": "pcontainer",
"container_version": "latest",
"name": "pa",
"pa": null
},
"module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_cxlQ9b/ansible_module_docker_container.py\", line 1963, in <module>\n main()\n File \"/tmp/ansible_cxlQ9b/ansible_module_docker_container.py\", line 1956, in main\n cm = ContainerManager(client)\n File \"/tmp/ansible_cxlQ9b/ansible_module_docker_container.py\", line 1609, in __init__\n self.present(state)\n File \"/tmp/ansible_cxlQ9b/ansible_module_docker_container.py\", line 1634, in present\n different, differences = container.has_different_configuration(image)\n File \"/tmp/ansible_cxlQ9b/ansible_module_docker_container.py\", line 1253, in has_different_configuration\n set_b = set(value)\nTypeError: unhashable type: 'dict'\n",
"module_stdout": "",
"msg": "MODULE FAILURE"
}
```
Note that the reason is because the docker inspect version of devices is different from the way devices are configured. Devices are configured with a list of strings, each string like "/dev/on/host:/dev/in/container:rwm" where rwm represents permissions. Docker inspect returns something like this:
```
"Devices": [
{
"CgroupPermissions": "rwm",
"PathInContainer": "/dev/net/tun",
"PathOnHost": "/dev/net/tun"
},
{
"CgroupPermissions": "rwm",
"PathInContainer": "/dev/kvm",
"PathOnHost": "/dev/kvm"
}
],
```
I have managed to make it work with this patch to lib/ansible/modules/core/cloud/docker/docker_container.py
```
1203c1203
< devices=host_config.get('Devices'),
---
> devices=[':'.join([ t[p] for p in ['PathOnHost','PathInContainer' if t['PathInContainer'] != t['PathOnHost'] else None ,'CgroupPermissions' if t['CgroupPermissions'] != 'rwm' else None] if p is not None]) for t in host_config.get('Devices')] if host_config.get('Devices') else None,
```
It looks a bit convoluted, but I wanted to keep it in the same inline one liner theme of the surrounding code. Here's the explanation:
Starting at the "for p in []", the list p iterates over is a specified list of strings, in the correct order, that corresponds to the keys that are in the inspect hash. However, the string is replaced with None if it shouldn't be displayed within the configuration string. E.g. if it's default permissions (rwm), or if the host and container have the same path, then the configuration string would not show these values, so replace them with None.
Then a list of the values from the device config hash, corresponding to the listed key is generated, but a None key is ignored with "if p is not None". So at this stage, we would have something like ["/dev/on/host","/dev/in/container","rw" ], or ["/dev/on/host",None,"rw"] and then that list is joined together with a ":" to form the correct string. This is repeated for every device hash, setting t to that hash and the result is compiled into a list of configuration strings, just like what would be set in the .yml file.
Finally, it just returns None if there is no device config.
| True | docker_container.py fails on second run if devices are configured - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
docker_container
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.2.0
config file =
configured module search path = Default w/o overrides
```
##### CONFIGURATION
Nothing special
##### OS / ENVIRONMENT
ubuntu
##### SUMMARY
When you have devices configured in your playbook, e.g.
devices:
- /dev/net/tun
- /dev/kvm
Then a second run through of a playbook will fail. Details below.
##### STEPS TO REPRODUCE
Create a container with devices configured. Then rerun the same playbook.
##### EXPECTED RESULTS
Nothing should change, let alone fail.
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
An exception occurred during task execution. The full traceback is:
Traceback (most recent call last):
File "/tmp/ansible_cxlQ9b/ansible_module_docker_container.py", line 1963, in <module>
main()
File "/tmp/ansible_cxlQ9b/ansible_module_docker_container.py", line 1956, in main
cm = ContainerManager(client)
File "/tmp/ansible_cxlQ9b/ansible_module_docker_container.py", line 1609, in __init__
self.present(state)
File "/tmp/ansible_cxlQ9b/ansible_module_docker_container.py", line 1634, in present
different, differences = container.has_different_configuration(image)
File "/tmp/ansible_cxlQ9b/ansible_module_docker_container.py", line 1253, in has_different_configuration
set_b = set(value)
TypeError: unhashable type: 'dict'
failed: [localhost] (item={u'pa': None, u'container_name': u'pcontainer', u'container_version': u'latest', u'name': u'pa'}) => {
"failed": true,
"invocation": {
"module_name": "docker_container"
},
"item": {
"container_name": "pcontainer",
"container_version": "latest",
"name": "pa",
"pa": null
},
"module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_cxlQ9b/ansible_module_docker_container.py\", line 1963, in <module>\n main()\n File \"/tmp/ansible_cxlQ9b/ansible_module_docker_container.py\", line 1956, in main\n cm = ContainerManager(client)\n File \"/tmp/ansible_cxlQ9b/ansible_module_docker_container.py\", line 1609, in __init__\n self.present(state)\n File \"/tmp/ansible_cxlQ9b/ansible_module_docker_container.py\", line 1634, in present\n different, differences = container.has_different_configuration(image)\n File \"/tmp/ansible_cxlQ9b/ansible_module_docker_container.py\", line 1253, in has_different_configuration\n set_b = set(value)\nTypeError: unhashable type: 'dict'\n",
"module_stdout": "",
"msg": "MODULE FAILURE"
}
```
Note that the reason is because the docker inspect version of devices is different from the way devices are configured. Devices are configured with a list of strings, each string like "/dev/on/host:/dev/in/container:rwm" where rwm represents permissions. Docker inspect returns something like this:
```
"Devices": [
{
"CgroupPermissions": "rwm",
"PathInContainer": "/dev/net/tun",
"PathOnHost": "/dev/net/tun"
},
{
"CgroupPermissions": "rwm",
"PathInContainer": "/dev/kvm",
"PathOnHost": "/dev/kvm"
}
],
```
I have managed to make it work with this patch to lib/ansible/modules/core/cloud/docker/docker_container.py
```
1203c1203
< devices=host_config.get('Devices'),
---
> devices=[':'.join([ t[p] for p in ['PathOnHost','PathInContainer' if t['PathInContainer'] != t['PathOnHost'] else None ,'CgroupPermissions' if t['CgroupPermissions'] != 'rwm' else None] if p is not None]) for t in host_config.get('Devices')] if host_config.get('Devices') else None,
```
It looks a bit convoluted, but I wanted to keep it in the same inline one liner theme of the surrounding code. Here's the explanation:
Starting at the "for p in []", the list p iterates over is a specified list of strings, in the correct order, that corresponds to the keys that are in the inspect hash. However, the string is replaced with None if it shouldn't be displayed within the configuration string. E.g. if it's default permissions (rwm), or if the host and container have the same path, then the configuration string would not show these values, so replace them with None.
Then a list of the values from the device config hash, corresponding to the listed key is generated, but a None key is ignored with "if p is not None". So at this stage, we would have something like ["/dev/on/host","/dev/in/container","rw" ], or ["/dev/on/host",None,"rw"] and then that list is joined together with a ":" to form the correct string. This is repeated for every device hash, setting t to that hash and the result is compiled into a list of configuration strings, just like what would be set in the .yml file.
Finally, it just returns None if there is no device config.
| main | docker container py fails on second run if devices are configured issue type bug report component name docker container ansible version ansible config file configured module search path default w o overrides configuration nothing special os environment ubuntu summary when you have devices configured in your playbook e g devices dev net tun dev kvm then a second run through of a playbook will fail details below steps to reproduce create a container with devices configured then rerun the same playbook expected results nothing should change let alone fail actual results an exception occurred during task execution the full traceback is traceback most recent call last file tmp ansible ansible module docker container py line in main file tmp ansible ansible module docker container py line in main cm containermanager client file tmp ansible ansible module docker container py line in init self present state file tmp ansible ansible module docker container py line in present different differences container has different configuration image file tmp ansible ansible module docker container py line in has different configuration set b set value typeerror unhashable type dict failed item u pa none u container name u pcontainer u container version u latest u name u pa failed true invocation module name docker container item container name pcontainer container version latest name pa pa null module stderr traceback most recent call last n file tmp ansible ansible module docker container py line in n main n file tmp ansible ansible module docker container py line in main n cm containermanager client n file tmp ansible ansible module docker container py line in init n self present state n file tmp ansible ansible module docker container py line in present n different differences container has different configuration image n file tmp ansible ansible module docker container py line in has different configuration n set b set value ntypeerror unhashable type dict n module 
stdout msg module failure note that the reason is because the docker inspect version of devices is different from the way devices are configured devices are configured with a list of strings each string like dev on host dev in container rwm where rwm represents permissions docker inspect returns something like this devices cgrouppermissions rwm pathincontainer dev net tun pathonhost dev net tun cgrouppermissions rwm pathincontainer dev kvm pathonhost dev kvm i have managed to make it work with this patch to lib ansible modules core cloud docker docker container py devices host config get devices devices for p in t else none cgrouppermissions if t rwm else none if p is not none for t in host config get devices if host config get devices else none it looks a bit convoluted but i wanted to keep it in the same inline one liner theme of the surrounding code here s the explanation starting at the for p in the list p iterates over is a specified list of strings in the correct order that corresponds to the keys that are in the inspect hash however the string is replaced with none if it shouldn t be displayed within the configuration string e g if it s default permissions rwm or if the host and container have the same path then the configuration string would not show these values so replace them with none then a list of the values from the device config hash corresponding to the listed key is generated but a none key is ignored with if p is not none so at this stage we would have something like or and then that list is joined together with a to form the correct string this is repeated for every device hash setting t to that hash and the result is compiled into a list of configuration strings just like what would be set in the yml file finally it just returns none if there is no device config | 1 |
32,867 | 6,953,398,093 | IssuesEvent | 2017-12-06 20:53:00 | Dzhuneyt/jquery-tubular | https://api.github.com/repos/Dzhuneyt/jquery-tubular | closed | Video Quality | auto-migrated Priority-Medium Type-Defect | ```
What steps will reproduce the problem?
1. How can I set the quality of the youtube video?
What is the expected output? What do you see instead?
I have been reading up on the Youtube API but still not sure how to change the
quality of the video.
```
Original issue reported on code.google.com by `harvey.s...@gmail.com` on 23 May 2013 at 12:23
| 1.0 | Video Quality - ```
What steps will reproduce the problem?
1. How can I set the quality of the youtube video?
What is the expected output? What do you see instead?
I have been reading up on the Youtube API but still not sure how to change the
quality of the video.
```
Original issue reported on code.google.com by `harvey.s...@gmail.com` on 23 May 2013 at 12:23
| non_main | video quality what steps will reproduce the problem how can i set the quality of the youtube video what is the expected output what do you see instead i have been reading up on the youtube api but still not sure how to change the quality of the video original issue reported on code google com by harvey s gmail com on may at | 0 |
87,203 | 15,757,366,031 | IssuesEvent | 2021-03-31 05:10:05 | roncli/roncli.com | https://api.github.com/repos/roncli/roncli.com | closed | WS-2019-0332 (Medium) detected in handlebars-4.0.14.tgz | security vulnerability | ## WS-2019-0332 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.0.14.tgz</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.0.14.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.0.14.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/roncli.com/roncli.com/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/roncli.com/roncli.com/node_modules/grunt-contrib-handlebars/node_modules/handlebars/package.json</p>
<p>
Dependency Hierarchy:
- grunt-contrib-handlebars-1.0.0.tgz (Root Library)
- :x: **handlebars-4.0.14.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/roncli/roncli.com/commit/bc2dcdf9bf97337ddc80f1b419f7e9666c3367b0">bc2dcdf9bf97337ddc80f1b419f7e9666c3367b0</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Arbitrary Code Execution vulnerability found in handlebars before 4.5.3. Lookup helper fails to validate templates. Attack may submit templates that execute arbitrary JavaScript in the system.It is due to an incomplete fix for a WS-2019-0331.
<p>Publish Date: 2019-12-05
<p>URL: <a href=https://github.com/wycats/handlebars.js/commit/198887808780bbef9dba67a8af68ece091d5baa7>WS-2019-0332</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1324">https://www.npmjs.com/advisories/1324</a></p>
<p>Release Date: 2019-12-05</p>
<p>Fix Resolution: handlebars - 4.5.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | WS-2019-0332 (Medium) detected in handlebars-4.0.14.tgz - ## WS-2019-0332 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.0.14.tgz</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.0.14.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.0.14.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/roncli.com/roncli.com/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/roncli.com/roncli.com/node_modules/grunt-contrib-handlebars/node_modules/handlebars/package.json</p>
<p>
Dependency Hierarchy:
- grunt-contrib-handlebars-1.0.0.tgz (Root Library)
- :x: **handlebars-4.0.14.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/roncli/roncli.com/commit/bc2dcdf9bf97337ddc80f1b419f7e9666c3367b0">bc2dcdf9bf97337ddc80f1b419f7e9666c3367b0</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Arbitrary Code Execution vulnerability found in handlebars before 4.5.3. Lookup helper fails to validate templates. Attack may submit templates that execute arbitrary JavaScript in the system.It is due to an incomplete fix for a WS-2019-0331.
<p>Publish Date: 2019-12-05
<p>URL: <a href=https://github.com/wycats/handlebars.js/commit/198887808780bbef9dba67a8af68ece091d5baa7>WS-2019-0332</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1324">https://www.npmjs.com/advisories/1324</a></p>
<p>Release Date: 2019-12-05</p>
<p>Fix Resolution: handlebars - 4.5.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | ws medium detected in handlebars tgz ws medium severity vulnerability vulnerable library handlebars tgz handlebars provides the power necessary to let you build semantic templates effectively with no frustration library home page a href path to dependency file tmp ws scm roncli com roncli com package json path to vulnerable library tmp ws scm roncli com roncli com node modules grunt contrib handlebars node modules handlebars package json dependency hierarchy grunt contrib handlebars tgz root library x handlebars tgz vulnerable library found in head commit a href vulnerability details arbitrary code execution vulnerability found in handlebars before lookup helper fails to validate templates attack may submit templates that execute arbitrary javascript in the system it is due to an incomplete fix for a ws publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution handlebars step up your open source security game with whitesource | 0 |
531,026 | 15,439,315,973 | IssuesEvent | 2021-03-07 23:47:33 | open-aquarium/open-aquarium | https://api.github.com/repos/open-aquarium/open-aquarium | opened | NTP sync | priority: low status: awaiting triage user story | # Description
As a **developer**, I want **to sync the clock with NTP** so that **I can have the correct time**.
# Acceptance Criteria
* Clock automatically syncing with NTP server
# Definition of Ready
1. Triage done
# Definition of Done
**For code**
1. PR merged
1. Acceptance criteria met
1. Product Owner accepts the User Story
# Other relevant information
- [Google: Arduino NTP sync](https://www.google.com/search?safe=off&sxsrf=ALeKk03juVdGRbhInvqEsjfrKxS9Lm75pg%3A1615160816090&source=hp&ei=8GVFYLjyAoOP0Aa4sJioCw&iflsig=AINFCbYAAAAAYEV0AOc2I6_33zX_bOw8JEZJqSbCItEG&q=arduino+ntp+sync&oq=arduino+ntp+sync&gs_lcp=Cgdnd3Mtd2l6EAMyBQgAEMsBMgYIABAWEB4yBggAEBYQHjIGCAAQFhAeOgQIIxAnOgQIABBDOgcIABCxAxBDOgUIABCxAzoHCCMQsQIQJzoHCAAQsQMQCjoGCCMQJxATOgIIAFCPBli_Q2DmRGgBcAB4AIABlAKIAY8SkgEGMC4xNi4xmAEAoAEBqgEHZ3dzLXdpeg&sclient=gws-wiz&ved=0ahUKEwi4qsj1rp_vAhWDB9QKHTgYBrUQ4dUDCAY&uact=5) | 1.0 | NTP sync - # Description
As a **developer**, I want **to sync the clock with NTP** so that **I can have the correct time**.
# Acceptance Criteria
* Clock automatically syncing with NTP server
# Definition of Ready
1. Triage done
# Definition of Done
**For code**
1. PR merged
1. Acceptance criteria met
1. Product Owner accepts the User Story
# Other relevant information
- [Google: Arduino NTP sync](https://www.google.com/search?safe=off&sxsrf=ALeKk03juVdGRbhInvqEsjfrKxS9Lm75pg%3A1615160816090&source=hp&ei=8GVFYLjyAoOP0Aa4sJioCw&iflsig=AINFCbYAAAAAYEV0AOc2I6_33zX_bOw8JEZJqSbCItEG&q=arduino+ntp+sync&oq=arduino+ntp+sync&gs_lcp=Cgdnd3Mtd2l6EAMyBQgAEMsBMgYIABAWEB4yBggAEBYQHjIGCAAQFhAeOgQIIxAnOgQIABBDOgcIABCxAxBDOgUIABCxAzoHCCMQsQIQJzoHCAAQsQMQCjoGCCMQJxATOgIIAFCPBli_Q2DmRGgBcAB4AIABlAKIAY8SkgEGMC4xNi4xmAEAoAEBqgEHZ3dzLXdpeg&sclient=gws-wiz&ved=0ahUKEwi4qsj1rp_vAhWDB9QKHTgYBrUQ4dUDCAY&uact=5) | non_main | ntp sync description as a developer i want to sync the clock with ntp so that i can have the correct time acceptance criteria clock automatically syncing with ntp server definition of ready triage done definition of done for code pr merged acceptance criteria met product owner accepts the user story other relevant information | 0 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.