Unnamed: 0 int64 1 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 7 112 | repo_url stringlengths 36 141 | action stringclasses 3 values | title stringlengths 3 438 | labels stringlengths 4 308 | body stringlengths 7 254k | index stringclasses 7 values | text_combine stringlengths 96 254k | label stringclasses 2 values | text stringlengths 96 246k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2,508 | 8,655,459,785 | IssuesEvent | 2018-11-27 16:00:30 | codestation/qcma | https://api.github.com/repos/codestation/qcma | closed | QCMA crashes after unplugging vita | unmaintained | Pretty much every time i unplug the vita after transferring something it crashes QCMA | True | QCMA crashes after unplugging vita - Pretty much every time i unplug the vita after transferring something it crashes QCMA | main | qcma crashes after unplugging vita pretty much every time i unplug the vita after transferring something it crashes qcma | 1 |
149,991 | 19,597,828,284 | IssuesEvent | 2022-01-05 20:12:15 | edgexfoundry/go-mod-secrets | https://api.github.com/repos/edgexfoundry/go-mod-secrets | closed | Add "make lint" target and add to "make test" target | security_audit | Should enable golangci-lint with default linters + gosec.
See https://github.com/edgexfoundry/edgex-go/issues/3565 | True | Add "make lint" target and add to "make test" target - Should enable golangci-lint with default linters + gosec.
See https://github.com/edgexfoundry/edgex-go/issues/3565 | non_main | add make lint target and add to make test target should enable golangci lint with default linters gosec see | 0 |
424,631 | 29,146,758,293 | IssuesEvent | 2023-05-18 04:11:25 | nattadasu/ryuuRyuusei | https://api.github.com/repos/nattadasu/ryuuRyuusei | opened | (PY-D0002) Missing class docstring | documentation | ## Description
Class docstring is missing. If you want to ignore this, you can configure this in the `.deepsource.toml` file. Please refer to [docs](https://deepsource.io/docs/analyzer/python/#meta) for available options.
## Occurrences
There are 76 occurrences of this issue in the repository.
See all occurrences on DeepSource → [app.deepsource.com/gh/nattadasu/ryuuRyuusei/issue/PY-D0002/occurrences/](https://app.deepsource.com/gh/nattadasu/ryuuRyuusei/issue/PY-D0002/occurrences/)
| 1.0 | (PY-D0002) Missing class docstring - ## Description
Class docstring is missing. If you want to ignore this, you can configure this in the `.deepsource.toml` file. Please refer to [docs](https://deepsource.io/docs/analyzer/python/#meta) for available options.
## Occurrences
There are 76 occurrences of this issue in the repository.
See all occurrences on DeepSource → [app.deepsource.com/gh/nattadasu/ryuuRyuusei/issue/PY-D0002/occurrences/](https://app.deepsource.com/gh/nattadasu/ryuuRyuusei/issue/PY-D0002/occurrences/)
| non_main | py missing class docstring description class docstring is missing if you want to ignore this you can configure this in the deepsource toml file please refer to for available options occurrences there are occurrences of this issue in the repository see all occurrences on deepsource rarr | 0 |
2,135 | 7,334,336,466 | IssuesEvent | 2018-03-05 22:25:17 | jramell/Choice | https://api.github.com/repos/jramell/Choice | opened | Change SystemManager's Architecture | maintainability | The SystemManager doesn't allow for unconfigured transition between systems. Should implement a stack of system states, and when a new system is registered, a new entry is added on the stack. When the system unregisters, its state is popped from the stack and applied to the system as a whole. | True | Change SystemManager's Architecture - The SystemManager doesn't allow for unconfigured transition between systems. Should implement a stack of system states, and when a new system is registered, a new entry is added on the stack. When the system unregisters, its state is popped from the stack and applied to the system as a whole. | main | change systemmanager s architecture the systemmanager doesn t allow for unconfigured transition between systems should implement a stack of system states and when a new system is registered a new entry is added on the stack when the system unregisters its state is popped from the stack and applied to the system as a whole | 1 |
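The row above proposes a stack-of-states design for the SystemManager. A minimal Python sketch of that idea (all names are hypothetical and not from the jramell/Choice repository; it only illustrates the push-on-register / pop-on-unregister behaviour the issue describes):

```python
class SystemManager:
    """Hypothetical sketch: registering a system pushes the current state
    onto a stack; unregistering pops the saved state and reapplies it."""

    def __init__(self, initial_state):
        self.state = initial_state
        self._stack = []

    def register(self, new_state):
        self._stack.append(self.state)  # save current state for later restore
        self.state = new_state

    def unregister(self):
        self.state = self._stack.pop()  # restore the previous state

mgr = SystemManager("idle")
mgr.register("dialogue")
mgr.register("cutscene")
mgr.unregister()
print(mgr.state)  # dialogue
```

With a stack, systems can be layered and removed in any nesting order without pre-configuring each transition, which is the "unconfigured transition" the issue asks for.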
1,426 | 6,196,339,331 | IssuesEvent | 2017-07-05 14:33:48 | ocaml/opam-repository | https://api.github.com/repos/ocaml/opam-repository | closed | Fix zenon.{0.7.1,0.8.0} | needs maintainer action no maintainer | The packages are misbehaving which leads to obscure errors in [other parts](https://github.com/ocaml/opam-repository/issues/9690#issuecomment-312921611) of the system (which shouldn't but that's another question).
A first fix would be to make them install in their own prefix.
I could not understand what in the world led to the `zenon` directory [become a symlink](https://github.com/ocaml/opam-repository/issues/9690#issuecomment-312921164).
Unfortunately zenon has no maintainer.
| True | Fix zenon.{0.7.1,0.8.0} - The packages are misbehaving which leads to obscure errors in [other parts](https://github.com/ocaml/opam-repository/issues/9690#issuecomment-312921611) of the system (which shouldn't but that's another question).
A first fix would be to make them install in their own prefix.
I could not understand what in the world led to the `zenon` directory [become a symlink](https://github.com/ocaml/opam-repository/issues/9690#issuecomment-312921164).
Unfortunately zenon has no maintainer.
| main | fix zenon the packages are misbehaving which leads to obscure errors in of the system which shouldn t but that s another question a first fix would be to make them install in their own prefix i could not understand what in the world lead to the zenon directory unfortunately zenon has no maintainer | 1 |
1,880 | 6,577,510,511 | IssuesEvent | 2017-09-12 01:25:11 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | allow iam group assignment without wiping out all other groups | affects_2.0 aws cloud feature_idea waiting_on_maintainer | ##### Issue Type:
- Feature Idea
##### Component Name:
iam module
##### Ansible Version:
```
2.0.1.0
```
##### Ansible Configuration:
NA
##### Environment:
NA
##### Summary:
It's currently impossible to assign an IAM group to a user without wiping out all other groups. The only way to assign groups is in an iam module task:
```
- iam: iam_type=user name=user_name state=present groups="{{ iam_groups }}"
```
However, this always wipes existing groups, losing data for existing users if you don't already know all of the groups assigned to the user. Other modules with group assignments use another parameter to allow appending to lists (e.g. mysql_user has "append_privs" and ec2_group has "purge_rules")
##### Steps To Reproduce:
```
vars:
groups1:
- groups1_example
groups2:
- groups2_example
tasks:
- iam: iam_type=user name=user_name state=present groups="{{ groups1 }}"
- iam: iam_type=user name=user_name state=present groups="{{ groups2 }}"
```
user will only have groups2_example group assigned.
##### Expected Results:
user would belong to both groups1_example and groups2_example
##### Actual Results:
user will only have groups2_example group assigned.
| True | allow iam group assignment without wiping out all other groups - ##### Issue Type:
- Feature Idea
##### Component Name:
iam module
##### Ansible Version:
```
2.0.1.0
```
##### Ansible Configuration:
NA
##### Environment:
NA
##### Summary:
It's currently impossible to assign an IAM group to a user without wiping out all other groups. The only way to assign groups is in an iam module task:
```
- iam: iam_type=user name=user_name state=present groups="{{ iam_groups }}"
```
However, this always wipes existing groups, losing data for existing users if you don't already know all of the groups assigned to the user. Other modules with group assignments use another parameter to allow appending to lists (e.g. mysql_user has "append_privs" and ec2_group has "purge_rules")
##### Steps To Reproduce:
```
vars:
groups1:
- groups1_example
groups2:
- groups2_example
tasks:
- iam: iam_type=user name=user_name state=present groups="{{ groups1 }}"
- iam: iam_type=user name=user_name state=present groups="{{ groups2 }}"
```
user will only have groups2_example group assigned.
##### Expected Results:
user would belong to both groups1_example and groups2_example
##### Actual Results:
user will only have groups2_example group assigned.
| main | allow iam group assignment without wiping out all other groups issue type feature idea component name iam module ansible version ansible configuration na environment na summary it s currently impossible to assign an iam group to a user without wiping out all other groups the only way to assign groups is in an iam module task iam iam type user name user name state present groups iam groups however this always wipes existing groups losing data for existing users if you don t already know all of the groups assigned to the user other modules with group assignments use another parameter to allow appending to lists e g mysql user has append privs and group has purge rules steps to reproduce vars example example tasks iam iam type user name user name state present groups iam iam type user name user name state present groups user will only have example group assigned expected results user would belong to both example and example actual results user will only have example group assigned | 1 |
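The Ansible row above asks for append semantics alongside the iam module's current replace-everything behaviour. A small Python sketch of the requested difference (`merge_groups` is a hypothetical helper, not part of the actual module; the module would still need to fetch the user's existing groups first):

```python
def merge_groups(existing, desired, purge=False):
    """Return the group list to apply to a user.

    purge=False appends the desired groups to the existing ones (the
    behaviour the issue requests, like mysql_user's append_privs);
    purge=True reproduces the module's current replace semantics.
    """
    if purge:
        return sorted(set(desired))
    return sorted(set(existing) | set(desired))

# The issue's two-task scenario, with append semantics:
after_task1 = merge_groups([], ["groups1_example"])
after_task2 = merge_groups(after_task1, ["groups2_example"])
print(after_task2)  # ['groups1_example', 'groups2_example']
```

With `purge=False` the second task no longer wipes `groups1_example`, matching the issue's expected result.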
66,298 | 7,986,318,334 | IssuesEvent | 2018-07-19 01:23:49 | ProcessMaker/bpm | https://api.github.com/repos/ProcessMaker/bpm | closed | Delete of shapes via crown | P2 designer | When I click on the trash can icon in the crown I'd like to have a confirmation (javascript confirmation) that gives me options to confirm deletion or cancel. Then have the subsequent object be removed from the canvas. | 1.0 | Delete of shapes via crown - When I click on the trash can icon in the crown I'd like to have a confirmation (javascript confirmation) that gives me options to confirm deletion or cancel. Then have the subsequent object be removed from the canvas. | non_main | delete of shapes via crown when i click on the trash can icon in the crown i d like to have a confirmation javascript confirmation that gives me options to confirm deletion or cancel then have the subsequent object be removed from the canvas | 0 |
5,793 | 30,693,785,838 | IssuesEvent | 2023-07-26 16:58:13 | PyCQA/flake8-bugbear | https://api.github.com/repos/PyCQA/flake8-bugbear | closed | Stop using `python setup.py bdist_wheel/sdist` | bug help wanted terrible_maintainer | Lets move to pypa/build in the upload to PyPI action.
```
python setup.py bdist_wheel
/opt/hostedtoolcache/Python/3.11.3/x64/lib/python3.11/site-packages/setuptools/config/pyprojecttoml.py:66: _BetaConfiguration: Support for `[tool.setuptools]` in `pyproject.toml` is still *beta*.
config = read_configuration(filepath, True, ignore_option_errors, dist)
running bdist_wheel
running build
running build_py
creating build
creating build/lib
copying bugbear.py -> build/lib
/opt/hostedtoolcache/Python/3.11.3/x64/lib/python3.11/site-packages/setuptools/_distutils/cmd.py:66: SetuptoolsDeprecationWarning: setup.py install is deprecated.
!!
********************************************************************************
Please avoid running ``setup.py`` directly.
Instead, use pypa/build, pypa/installer, pypa/build or
other standards-based tools.
installing to build/bdist.linux-x86_64/wheel
running install
See https://blog.ganssle.io/articles/[20](https://github.com/PyCQA/flake8-bugbear/actions/runs/5179489298/jobs/9332438714#step:5:21)[21](https://github.com/PyCQA/flake8-bugbear/actions/runs/5179489298/jobs/9332438714#step:5:22)/10/setup-py-deprecated.html for details.
running install_lib
********************************************************************************
!!
``` | True | Stop using `python setup.py bdist_wheel/sdist` - Lets move to pypa/build in the upload to PyPI action.
```
python setup.py bdist_wheel
/opt/hostedtoolcache/Python/3.11.3/x64/lib/python3.11/site-packages/setuptools/config/pyprojecttoml.py:66: _BetaConfiguration: Support for `[tool.setuptools]` in `pyproject.toml` is still *beta*.
config = read_configuration(filepath, True, ignore_option_errors, dist)
running bdist_wheel
running build
running build_py
creating build
creating build/lib
copying bugbear.py -> build/lib
/opt/hostedtoolcache/Python/3.11.3/x64/lib/python3.11/site-packages/setuptools/_distutils/cmd.py:66: SetuptoolsDeprecationWarning: setup.py install is deprecated.
!!
********************************************************************************
Please avoid running ``setup.py`` directly.
Instead, use pypa/build, pypa/installer, pypa/build or
other standards-based tools.
installing to build/bdist.linux-x86_64/wheel
running install
See https://blog.ganssle.io/articles/[20](https://github.com/PyCQA/flake8-bugbear/actions/runs/5179489298/jobs/9332438714#step:5:21)[21](https://github.com/PyCQA/flake8-bugbear/actions/runs/5179489298/jobs/9332438714#step:5:22)/10/setup-py-deprecated.html for details.
running install_lib
********************************************************************************
!!
``` | main | stop using python setup py bdist wheel sdist lets move to pypa build in the upload to pypi action python setup py bdist wheel opt hostedtoolcache python lib site packages setuptools config pyprojecttoml py betaconfiguration support for in pyproject toml is still beta config read configuration filepath true ignore option errors dist running bdist wheel running build running build py creating build creating build lib copying bugbear py build lib opt hostedtoolcache python lib site packages setuptools distutils cmd py setuptoolsdeprecationwarning setup py install is deprecated please avoid running setup py directly instead use pypa build pypa installer pypa build or other standards based tools installing to build bdist linux wheel running install see for details running install lib | 1 |
1,270 | 5,375,435,627 | IssuesEvent | 2017-02-23 04:47:22 | wojno/movie_manager | https://api.github.com/repos/wojno/movie_manager | opened | As an authenticated user, I want to view a movie record in my collection | Maintain Collection | As an `authenticated user`, I want to `view` a movie record in my `collection` so that I can see all `details` about that `movie` | True | As an authenticated user, I want to view a movie record in my collection - As an `authenticated user`, I want to `view` a movie record in my `collection` so that I can see all `details` about that `movie` | main | as an authenticated user i want to view a movie record in my collection as an authenticated user i want to view a movie record in my collection so that i can see all details about that movie | 1 |
29,957 | 8,445,333,514 | IssuesEvent | 2018-10-18 21:09:23 | hashicorp/packer | https://api.github.com/repos/hashicorp/packer | closed | KVM/QEMU Network "has no peer" and WinRM/SSH fails due to no network present | bug builder/qemu | I have a Packer KVM build that I have not run for some time locally since it usually runs on the CI server. When I tried to run it locally I initially hit an issue due to the deprecation of `-usbevice tablet` in QEMU. After fixing this I noticed WinRM would never connect and the Windows VM had no network. The output from `PACKER_LOG=1` shows a warning from QEMU: `qemu-system-x86_64: warning: netdev user.0 has no peer` which according to the QEMU docs will result in no functional network.
Packer version: `1.3.1`
Host Platform:
```
Distributor ID: Ubuntu
Description: Ubuntu 18.04.1 LTS
Release: 18.04
Codename: bionic
```
QEMU Version:
```
QEMU emulator version 2.11.1(Debian 1:2.11+dfsg-1ubuntu7.5)
Copyright (c) 2003-2017 Fabrice Bellard and the QEMU Project developers
```
QEMU Args in json:
```json
"qemuargs": [
["-boot", "c"],
["-m", "4096M"],
["-smp", "2"],
["-usb"],
["-device", "usb-tablet"],
["-rtc", "base=localtime"],
]
```
Packer Debug log snippet:
```
==> kvm: Overriding defaults Qemu arguments with QemuArgs...
2018/10/05 12:44:48 packer: 2018/10/05 12:44:48 Executing /usr/bin/qemu-system-x86_64: []string{"-drive", "format=qcow2,file=./windows-2012r2-db/windows-2012r2-db-standard,if=virtio,cache=writeback,discard=ignore", "-boot", "c", "-m", "4096M", "-rtc", "base=localtime", "-fda", "/tmp/packer682066627", "-name", "windows-2012r2-db-standard", "-machine", "type=pc,accel=kvm", "-netdev", "user,id=user.0,hostfwd=tcp::3882-:5985", "-vnc", "127.0.0.1:11", "-smp", "2", "-usb", "-device", "usb-tablet"}
2018/10/05 12:44:48 packer: 2018/10/05 12:44:48 Started Qemu. Pid: 24112
2018/10/05 12:44:48 packer: 2018/10/05 12:44:48 Qemu stderr: WARNING: Image format was not specified for '/tmp/packer682066627' and probing guessed raw.
2018/10/05 12:44:48 packer: 2018/10/05 12:44:48 Qemu stderr: Automatically detecting the format is dangerous for raw images, write operations on block 0 will be restricted.
2018/10/05 12:44:48 packer: 2018/10/05 12:44:48 Qemu stderr: Specify the 'raw' format explicitly to remove the restrictions.
2018/10/05 12:44:48 packer: 2018/10/05 12:44:48 Qemu stderr: qemu-system-x86_64: warning: host doesn't support requested feature: CPUID.80000001H:ECX.svm [bit 2]
2018/10/05 12:44:48 packer: 2018/10/05 12:44:48 Qemu stderr: qemu-system-x86_64: warning: host doesn't support requested feature: CPUID.80000001H:ECX.svm [bit 2]
2018/10/05 12:44:48 packer: 2018/10/05 12:44:48 Qemu stderr: qemu-system-x86_64: warning: netdev user.0 has no peer
```
### Solution
Through some trial and error I managed to get a working set of `qemuargs`.
```json
"qemuargs": [
["-boot", "c"],
["-m", "4096M"],
["-smp", "2"],
["-usb"],
["-device", "usb-tablet"],
["-device", "virtio-net,netdev=user.0"],
["-rtc", "base=localtime"],
]
```
I suspect changes in QEMU have changed the required set of command instructions when setting up the network. Note the addition of `["-device", "virtio-net,netdev=user.0"]`. I tried setting `net_device` as per the packer docs but this had no effect, the above configuration was required.
I'm not sure if this constitutes a bug in Packer? Perhaps changes are required to support the new QEMU features/behaviour.
Having to manually add the `-device virtio-net` to every QEMU build does not seem like a valid long term solution. If the `netdev` ID is ever changed this will break again.
| 1.0 | KVM/QEMU Network "has no peer" and WinRM/SSH fails due to no network present - I have a Packer KVM build that I have not run for some time locally since it usually runs on the CI server. When I tried to run it locally I initially hit an issue due to the deprecation of `-usbevice tablet` in QEMU. After fixing this I noticed WinRM would never connect and the Windows VM had no network. The output from `PACKER_LOG=1` shows a warning from QEMU: `qemu-system-x86_64: warning: netdev user.0 has no peer` which according to the QEMU docs will result in no functional network.
Packer version: `1.3.1`
Host Platform:
```
Distributor ID: Ubuntu
Description: Ubuntu 18.04.1 LTS
Release: 18.04
Codename: bionic
```
QEMU Version:
```
QEMU emulator version 2.11.1(Debian 1:2.11+dfsg-1ubuntu7.5)
Copyright (c) 2003-2017 Fabrice Bellard and the QEMU Project developers
```
QEMU Args in json:
```json
"qemuargs": [
["-boot", "c"],
["-m", "4096M"],
["-smp", "2"],
["-usb"],
["-device", "usb-tablet"],
["-rtc", "base=localtime"],
]
```
Packer Debug log snippet:
```
==> kvm: Overriding defaults Qemu arguments with QemuArgs...
2018/10/05 12:44:48 packer: 2018/10/05 12:44:48 Executing /usr/bin/qemu-system-x86_64: []string{"-drive", "format=qcow2,file=./windows-2012r2-db/windows-2012r2-db-standard,if=virtio,cache=writeback,discard=ignore", "-boot", "c", "-m", "4096M", "-rtc", "base=localtime", "-fda", "/tmp/packer682066627", "-name", "windows-2012r2-db-standard", "-machine", "type=pc,accel=kvm", "-netdev", "user,id=user.0,hostfwd=tcp::3882-:5985", "-vnc", "127.0.0.1:11", "-smp", "2", "-usb", "-device", "usb-tablet"}
2018/10/05 12:44:48 packer: 2018/10/05 12:44:48 Started Qemu. Pid: 24112
2018/10/05 12:44:48 packer: 2018/10/05 12:44:48 Qemu stderr: WARNING: Image format was not specified for '/tmp/packer682066627' and probing guessed raw.
2018/10/05 12:44:48 packer: 2018/10/05 12:44:48 Qemu stderr: Automatically detecting the format is dangerous for raw images, write operations on block 0 will be restricted.
2018/10/05 12:44:48 packer: 2018/10/05 12:44:48 Qemu stderr: Specify the 'raw' format explicitly to remove the restrictions.
2018/10/05 12:44:48 packer: 2018/10/05 12:44:48 Qemu stderr: qemu-system-x86_64: warning: host doesn't support requested feature: CPUID.80000001H:ECX.svm [bit 2]
2018/10/05 12:44:48 packer: 2018/10/05 12:44:48 Qemu stderr: qemu-system-x86_64: warning: host doesn't support requested feature: CPUID.80000001H:ECX.svm [bit 2]
2018/10/05 12:44:48 packer: 2018/10/05 12:44:48 Qemu stderr: qemu-system-x86_64: warning: netdev user.0 has no peer
```
### Solution
Through some trial and error I managed to get a working set of `qemuargs`.
```json
"qemuargs": [
["-boot", "c"],
["-m", "4096M"],
["-smp", "2"],
["-usb"],
["-device", "usb-tablet"],
["-device", "virtio-net,netdev=user.0"],
["-rtc", "base=localtime"],
]
```
I suspect changes in QEMU have changed the required set of command instructions when setting up the network. Note the addition of `["-device", "virtio-net,netdev=user.0"]`. I tried setting `net_device` as per the packer docs but this had no effect, the above configuration was required.
I'm not sure if this constitutes a bug in Packer? Perhaps changes are required to support the new QEMU features/behaviour.
Having to manually add the `-device virtio-net` to every QEMU build does not seem like a valid long term solution. If the `netdev` ID is ever changed this will break again.
| non_main | kvm qemu network has no peer and winrm ssh fails due to no network present i have a packer kvm build that i have not run for some time locally since it usually runs on the ci server when i tried to run it locally i initially hit an issue due to the deprecation of usbevice tablet in qemu after fixing this i noticed winrm would never connect and the windows vm had no network the output from packer log shows a warning from qemu qemu system warning netdev user has no peer which according to the qemu docs will result in no functional network packer version host platform distributor id ubuntu description ubuntu lts release codename bionic qemu version qemu emulator version debian dfsg copyright c fabrice bellard and the qemu project developers qemu args in json json qemuargs packer debug log snippet kvm overriding defaults qemu arguments with qemuargs packer executing usr bin qemu system string drive format file windows db windows db standard if virtio cache writeback discard ignore boot c m rtc base localtime fda tmp name windows db standard machine type pc accel kvm netdev user id user hostfwd tcp vnc smp usb device usb tablet packer started qemu pid packer qemu stderr warning image format was not specified for tmp and probing guessed raw packer qemu stderr automatically detecting the format is dangerous for raw images write operations on block will be restricted packer qemu stderr specify the raw format explicitly to remove the restrictions packer qemu stderr qemu system warning host doesn t support requested feature cpuid ecx svm packer qemu stderr qemu system warning host doesn t support requested feature cpuid ecx svm packer qemu stderr qemu system warning netdev user has no peer solution through some trial and error i managed to get a working set of qemuargs json qemuargs i suspect changes in qemu have changed the required set of command instructions when setting up the network note the addition of i tried setting net device as per the packer docs but 
this had no effect the above configuration was required i m not sure if this constitutes a bug in packer perhaps changes are required to support the new qemu features behaviour having to manually add the device virtio net to every qemu build does not seem like a valid long term solution if the netdev id is ever changed this will break again | 0 |
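The QEMU warning in the row above fires because a `-netdev` backend is declared without any `-device ...,netdev=<id>` frontend pointing at it. A hypothetical Python lint over the assembled argv makes the mismatch, and the reporter's `virtio-net` fix, concrete:

```python
def netdev_ids(args):
    """ids declared by -netdev entries, e.g. 'user,id=user.0,...' -> {'user.0'}."""
    return {part[3:]
            for flag, value in zip(args, args[1:]) if flag == "-netdev"
            for part in value.split(",") if part.startswith("id=")}

def peered_ids(args):
    """netdev ids that some -device entry references via netdev=<id>."""
    return {part[7:]
            for flag, value in zip(args, args[1:]) if flag == "-device"
            for part in value.split(",") if part.startswith("netdev=")}

# The failing argv (abridged) vs. the reporter's working one:
broken = ["-netdev", "user,id=user.0,hostfwd=tcp::3882-:5985",
          "-device", "usb-tablet"]
fixed = broken + ["-device", "virtio-net,netdev=user.0"]

print(netdev_ids(broken) - peered_ids(broken))  # {'user.0'}  -> "has no peer"
print(netdev_ids(fixed) - peered_ids(fixed))    # set()
```

An unpeered id is exactly the condition QEMU warns about, which is why adding `-device virtio-net,netdev=user.0` restores the guest network.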
148,282 | 13,232,969,729 | IssuesEvent | 2020-08-18 14:08:37 | ue-sho/camet | https://api.github.com/repos/ue-sho/camet | closed | Revision of the functional specification | documentation | # CASL
## Compile feature
- [ ] Types of compile errors
~~Description of output to an object file, etc. (output format)~~
## Link feature
- [ ] Describe how linking works, etc.
# COMET
~~Describe loading of object files, etc.~~
- [ ] Description of coloring the buses, etc.
 | 1.0 | Revision of the functional specification - # CASL
## Compile feature
- [ ] Types of compile errors
~~Description of output to an object file, etc. (output format)~~
## Link feature
- [ ] Describe how linking works, etc.
# COMET
~~Describe loading of object files, etc.~~
- [ ] Description of coloring the buses, etc.
 | non_main | revision of the functional specification casl compile feature types of compile errors description of output to an object file output format link feature describe how linking works comet describe loading of object files description of coloring the buses | 0 |
306,615 | 9,397,194,514 | IssuesEvent | 2019-04-08 09:10:05 | wso2/product-is | https://api.github.com/repos/wso2/product-is | closed | Spaces appended on the certificate content in IAM management console service providers' edit functionality | Complexity/Low Component/Auth Framework Priority/High Severity/Critical Type/Bug | To recreate the issue
1. Create an application in the store.
2. Log in to IAM management console.
3. Under service provider click the edit button of any service providers
4. You can see extra spaces appended at the end of the certificate content in Application certificate
5. Remove the spaces and update
6. Redo the above flows and you can see the spaces appended again
If the certificate is updated (by removing the spaces) in step 4 successfully the request flow works fine but if you click the edit button again the spaces are appended, which means every time the page loads on the edit function the spaces are appended at the end of the certificate content.
<img width="1674" alt="screen shot 2019-01-29 at 9 28 13 am" src="https://user-images.githubusercontent.com/18033158/51884526-981f1900-23ad-11e9-9d22-5b1b3d1d3bb9.png">
| 1.0 | Spaces appended on the certificate content in IAM management console service providers' edit functionality - To recreate the issue
1. Create an application in the store.
2. Log in to IAM management console.
3. Under service provider click the edit button of any service providers
4. You can see extra spaces appended at the end of the certificate content in Application certificate
5. Remove the spaces and update
6. Redo the above flows and you can see the spaces appended again
If the certificate is updated (by removing the spaces) in step 4 successfully the request flow works fine but if you click the edit button again the spaces are appended, which means every time the page loads on the edit function the spaces are appended at the end of the certificate content.
<img width="1674" alt="screen shot 2019-01-29 at 9 28 13 am" src="https://user-images.githubusercontent.com/18033158/51884526-981f1900-23ad-11e9-9d22-5b1b3d1d3bb9.png">
| non_main | spaces appended on the certificate content in iam management console service providers edit functionality to recreate the issue create an application in the store log in to iam management console under service provider click the edit button of any service providers you can see extra spaces appended at the end of the certificate content in application certificate remove the spaces and update redo the above flows and you can see the spaces appended again if the certificate is updated by removing the spaces in step successfully the request flow works fine but if you click the edit button again the spaces are appended which means every time the page loads on the edit function the spaces are appended at the end of the certificate content img width alt screen shot at am src | 0 |
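One plausible fix pattern for the row above is to normalize the certificate content on every load/save so trailing spaces cannot accumulate across edit cycles. A hypothetical sketch (not the actual WSO2 IAM code):

```python
def normalize_cert(content: str) -> str:
    """Hypothetical fix sketch: strip per-line trailing whitespace and
    surrounding whitespace, so that load -> save round-trips are
    idempotent and no spaces are appended on each page load."""
    return "\n".join(line.rstrip() for line in content.strip().splitlines())

padded = "hypothetical-cert-body   \n   "  # content with appended spaces
once = normalize_cert(padded)
assert normalize_cert(once) == once        # idempotent: no growth on re-edit
```

Normalizing on both read and write means step 6 of the reproduction (spaces reappearing after an update) can no longer occur.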
3,901 | 17,360,245,191 | IssuesEvent | 2021-07-29 19:30:27 | ansible/ansible | https://api.github.com/repos/ansible/ansible | closed | get_url doesn't work when running Ansible against Docker container | affects_2.12 bug cloud deprecated docker module needs_maintainer needs_triage support:community support:core | ### Summary
When Im trying to run playbook with get_url task against Docker container it fails with following error even in `-vvvvvv` verbose mode:
```
fatal: [ubuntu2004]: FAILED! => {
"changed": false,
"module_stderr": "",
"module_stdout": "",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 0
}
```
If I will run my playbook with `ANSIBLE_KEEP_REMOTE_FILES=1` and execute the AnsiballZ_get_url.py script it succeeds.
`$ docker exec -i ubuntu2004 /bin/sh -c "/bin/sh -c '/usr/bin/python3 /root/.ansible/tmp/ansible-tmp-<some_nrs>/AnsiballZ_get_url.py && sleep 0'"`
Also same goes for facts gathering sometimes. It gives me error:
```
fatal: [ubuntu2004]: FAILED! => {
"ansible_facts": {},
"changed": false,
"failed_modules": {
"ansible.legacy.setup": {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python3"
},
"failed": true,
"module_stderr": "",
"module_stdout": "",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 0
}
},
"msg": "The following modules failed to execute: ansible.legacy.setup\n"
}
```
But if I execute the command directly, it works:
`$ docker exec -i ubuntu2004 /bin/sh -c "/bin/sh -c '/usr/bin/python3 /root/.ansible/tmp/ansible-tmp-<some_nrs>/AnsiballZ_setup.py && sleep 0'"`
I've tried Ubuntu 20.04 and CentOS 8 images from here: https://github.com/ansible/distro-test-containers
### Issue Type
Bug Report
### Component Name
get_url, docker, ubuntu
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.3]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/user1/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/user1/.local/lib/python3.6/site-packages/ansible
ansible collection location = /home/user1/.ansible/collections:/usr/share/ansible/collections
executable location = /home/user1/.local/bin/ansible
python version = 3.6.8 (default, Aug 13 2020, 07:46:32) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
jinja version = 3.0.1
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
(N/A)
```
### OS / Environment
Running Ansible on RHEL 7.9 against Ubuntu 20.04 Docker (version 1.13.1) container.
### Steps to Reproduce
Im just running a role with `get_url` task
### Expected Results
I expected that the file will be downloaded.
### Actual Results
```console
fatal: [ubuntu2004]: FAILED! => {
"changed": false,
"module_stderr": "",
"module_stdout": "",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 0
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | True | get_url doesn't work when running Ansible against Docker container - ### Summary
When Im trying to run playbook with get_url task against Docker container it fails with following error even in `-vvvvvv` verbose mode:
```
fatal: [ubuntu2004]: FAILED! => {
"changed": false,
"module_stderr": "",
"module_stdout": "",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 0
}
```
If I will run my playbook with `ANSIBLE_KEEP_REMOTE_FILES=1` and execute the AnsiballZ_get_url.py script it succeeds.
`$ docker exec -i ubuntu2004 /bin/sh -c "/bin/sh -c '/usr/bin/python3 /root/.ansible/tmp/ansible-tmp-<some_nrs>/AnsiballZ_get_url.py && sleep 0'"`
Also same goes for facts gathering sometimes. It gives me error:
```
fatal: [ubuntu2004]: FAILED! => {
"ansible_facts": {},
"changed": false,
"failed_modules": {
"ansible.legacy.setup": {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python3"
},
"failed": true,
"module_stderr": "",
"module_stdout": "",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 0
}
},
"msg": "The following modules failed to execute: ansible.legacy.setup\n"
}
```
But if I execute the command directly, it works:
`$ docker exec -i ubuntu2004 /bin/sh -c "/bin/sh -c '/usr/bin/python3 /root/.ansible/tmp/ansible-tmp-<some_nrs>/AnsiballZ_setup.py && sleep 0'"`
I've tried Ubuntu 20.04 and CentOS 8 images from here: https://github.com/ansible/distro-test-containers
### Issue Type
Bug Report
### Component Name
get_url, docker, ubuntu
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.3]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/user1/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/user1/.local/lib/python3.6/site-packages/ansible
ansible collection location = /home/user1/.ansible/collections:/usr/share/ansible/collections
executable location = /home/user1/.local/bin/ansible
python version = 3.6.8 (default, Aug 13 2020, 07:46:32) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
jinja version = 3.0.1
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
(N/A)
```
### OS / Environment
Running Ansible on RHEL 7.9 against Ubuntu 20.04 Docker (version 1.13.1) container.
### Steps to Reproduce
Im just running a role with `get_url` task
### Expected Results
I expected that the file will be downloaded.
### Actual Results
```console
fatal: [ubuntu2004]: FAILED! => {
"changed": false,
"module_stderr": "",
"module_stdout": "",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 0
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | main | get url doesn t work when running ansible against docker container summary when im trying to run playbook with get url task against docker container it fails with following error even in vvvvvv verbose mode fatal failed changed false module stderr module stdout msg module failure nsee stdout stderr for the exact error rc if i will run my playbook with ansible keep remote files and execute the ansiballz get url py script it succeeds docker exec i bin sh c bin sh c usr bin root ansible tmp ansible tmp ansiballz get url py sleep also same goes for facts gathering sometimes it gives me error fatal failed ansible facts changed false failed modules ansible legacy setup ansible facts discovered interpreter python usr bin failed true module stderr module stdout msg module failure nsee stdout stderr for the exact error rc msg the following modules failed to execute ansible legacy setup n but if i execute the command directly it works docker exec i bin sh c bin sh c usr bin root ansible tmp ansible tmp ansiballz setup py sleep i ve tried ubuntu and centos images from here issue type bug report component name get url docker ubuntu ansible version console ansible version ansible config file etc ansible ansible cfg configured module search path ansible python module location home local lib site packages ansible ansible collection location home ansible collections usr share ansible collections executable location home local bin ansible python version default aug jinja version libyaml true configuration console ansible config dump only changed n a os environment running ansible on rhel against ubuntu docker version container steps to reproduce im just running a role with get url task expected results i expected that the file will be downloaded actual results console fatal failed changed false module stderr module stdout msg module failure nsee stdout stderr for the exact error rc code of conduct i agree to follow the 
ansible code of conduct | 1 |
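The opaque failure quoted in this record (empty `module_stderr`/`module_stdout` with a `MODULE FAILURE` message and `rc: 0`) can be detected programmatically when post-processing Ansible results. Below is a minimal sketch; the helper name and any dict keys beyond those visible in the error output above are assumptions, not part of Ansible's API:

```python
def classify_module_result(result):
    """Classify an Ansible task result dict like the ones quoted above.

    Returns "ok", "failed", or "opaque-failure" (the MODULE FAILURE case
    where both module_stdout and module_stderr came back empty).
    """
    failed = result.get("failed", False) or "MODULE FAILURE" in result.get("msg", "")
    if not failed:
        return "ok"
    # The reported symptom: the wrapper exits 0 yet produces no output,
    # so Ansible has nothing to parse and can only report MODULE FAILURE.
    if not result.get("module_stdout", "") and not result.get("module_stderr", ""):
        return "opaque-failure"
    return "failed"
```

An "opaque-failure" is exactly the case where re-running the kept `AnsiballZ_*.py` payload by hand (as the reporter did with `ANSIBLE_KEEP_REMOTE_FILES=1`) is the only way to see the real error.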
3,083 | 11,708,806,744 | IssuesEvent | 2020-03-08 15:21:59 | OpenRefine/OpenRefine | https://api.github.com/repos/OpenRefine/OpenRefine | closed | Publish dependencies on Maven Central | maintainability maven | Some of our dependencies are not available on Maven Central - they are old unmaintained libraries or our own forks of them. We currently ship their jars in the repository itself, but this is a problem since it requires installing these in the user's own local Maven repository first. This impedes workflows such as importing OpenRefine in Eclipse with m2e.
We should publish all our dependencies under our own `org.openrefine` groupId. This will make #2254 easier. | True | Publish dependencies on Maven Central - Some of our dependencies are not available on Maven Central - they are old unmaintained libraries or our own forks of them. We currently ship their jars in the repository itself, but this is a problem since it requires installing these in the user's own local Maven repository first. This impedes workflows such as importing OpenRefine in Eclipse with m2e.
We should publish all our dependencies under our own `org.openrefine` groupId. This will make #2254 easier. | main | publish dependencies on maven central some of our dependencies are not available on maven central they are old unmaintained libraries or our own forks of them we currently ship their jars in the repository itself but this is a problem since it requires installing these in the user s own local maven repository first this impedes workflows such as importing openrefine in eclipse with we should publish all our dependencies under our own org openrefine groupid this will make easier | 1 |
3,254 | 12,402,316,304 | IssuesEvent | 2020-05-21 11:43:29 | ocaml/opam-repository | https://api.github.com/repos/ocaml/opam-repository | closed | Problem with mlpost 0.8.2 Package | Stale needs maintainer action | There is a problem with the mlpost 0.8.2 package provided by OPAM, which causes errors when trying to install melt.
This has been observed with the 4.04.0 and 4.0.4.2 compilers.
I believe the problem is that the source provided by OPAM is out of date:
http://mlpost.lri.fr/download/mlpost-0.8.2.tar.gz
Pinning OPAM to the following source seems to work fine:
https://github.com/backtracking/mlpost/releases/tag/0.8.2 | True | Problem with mlpost 0.8.2 Package - There is a problem with the mlpost 0.8.2 package provided by OPAM, which causes errors when trying to install melt.
This has been observed with the 4.04.0 and 4.0.4.2 compilers.
I believe the problem is that the source provided by OPAM is out of date:
http://mlpost.lri.fr/download/mlpost-0.8.2.tar.gz
Pinning OPAM to the following source seems to work fine:
https://github.com/backtracking/mlpost/releases/tag/0.8.2 | main | problem with mlpost package there is a problem with the mlpost package provided by opam which causes errors when trying to install melt this has been observed with the and compilers i believe the problem is that the source provided by opam is out of date pinning opam to the following source seems to work fine | 1 |
81,777 | 7,802,950,778 | IssuesEvent | 2018-06-10 18:10:38 | Students-of-the-city-of-Kostroma/Student-timetable | https://api.github.com/repos/Students-of-the-city-of-Kostroma/Student-timetable | closed | Develop functional testing scenarios for Story 4 | Functional test Script | Develop functional testing scenarios for Story #4 | 1.0 | Develop functional testing scenarios for Story 4 - Develop functional testing scenarios for Story #4 | non_main | develop functional testing scenarios for story develop functional testing scenarios for story | 0 |
2,449 | 8,639,868,976 | IssuesEvent | 2018-11-23 22:13:57 | F5OEO/rpitx | https://api.github.com/repos/F5OEO/rpitx | closed | How to tx no modulation signal? | V1 related (not maintained) | Hi. I trying tuning IF in my receiver and need HF generator tuning on 10.7 mhz. Signal must be nomod. How I can use ur program for my goal?
In additional. I found some problem with transmit on rpi 3. After starting a tx, signal have distortions and after few moments signal is disappear. | True | How to tx no modulation signal? - Hi. I trying tuning IF in my receiver and need HF generator tuning on 10.7 mhz. Signal must be nomod. How I can use ur program for my goal?
In additional. I found some problem with transmit on rpi 3. After starting a tx, signal have distortions and after few moments signal is disappear. | main | how to tx no modulation signal hi i trying tuning if in my receiver and need hf generator tuning on mhz signal must be nomod how i can use ur program for my goal in additional i found some problem with transmit on rpi after starting a tx signal have distortions and after few moments signal is disappear | 1 |
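An unmodulated carrier is, in complex-baseband terms, just a constant IQ sample stream. The sketch below only illustrates what "no modulation" means; it is not rpitx's own file format or API, and the function name and float32 layout here are assumptions:

```python
import struct

def constant_carrier_iq(n_samples, amplitude=1.0):
    """Build n_samples of a constant complex-baseband signal (I=amplitude,
    Q=0), i.e. an unmodulated carrier at the tuner's centre frequency,
    packed as interleaved little-endian float32 I/Q pairs."""
    flat = []
    for _ in range(n_samples):
        flat.extend((amplitude, 0.0))
    return struct.pack("<%df" % len(flat), *flat)
```

Feeding such a constant stream to any IQ transmitter produces a bare tone at the centre frequency, which is what an IF alignment at 10.7 MHz needs.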
190,727 | 15,255,299,888 | IssuesEvent | 2021-02-20 15:33:57 | hengband/hengband | https://api.github.com/repos/hengband/hengband | closed | Command-line option to output the auto-generated spoilers | documentation enhancement | Currently, the spoiler files that can be output in-game by enabling the debug/cheat options and pressing the `"` key after the debug command `^a` should also be producible by passing a command-line option, along the lines of `hengband --output-spoiler`.
With this feature, things like automatically uploading the latest spoiler files somewhere would also become feasible.
For a start, write a function that outputs a specified spoiler file under a specified file name, then have each platform parse the command-line option and call it.
| 1.0 | Command-line option to output the auto-generated spoilers - Currently, the spoiler files that can be output in-game by enabling the debug/cheat options and pressing the `"` key after the debug command `^a` should also be producible by passing a command-line option, along the lines of `hengband --output-spoiler`.
With this feature, things like automatically uploading the latest spoiler files somewhere would also become feasible.
For a start, write a function that outputs a specified spoiler file under a specified file name, then have each platform parse the command-line option and call it.
| non_main | command line option to output the auto generated spoilers currently the spoiler files that can be output in game by enabling the debug cheat options and pressing the key after the debug command a should also be producible by passing a command line option along the lines of hengband output spoiler with this feature things like automatically uploading the latest spoiler files somewhere would also become feasible for a start write a function that outputs a specified spoiler file under a specified file name then have each platform parse the command line option and call it | 0 |
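The issue above proposes one function that writes a given spoiler file under a given file name, called from per-platform command-line parsing. A hypothetical illustration of that dispatch shape (Python for brevity; Hengband itself is C, and every name here is an assumption):

```python
# Registry mapping spoiler file names to the functions that generate them.
SPOILER_WRITERS = {}

def spoiler(filename):
    """Register a writer that produces one auto-generated spoiler file."""
    def register(fn):
        SPOILER_WRITERS[filename] = fn
        return fn
    return register

@spoiler("obj-desc.txt")
def write_object_spoiler():
    return "== object spoiler =="

def output_all_spoilers():
    """What a platform's --output-spoiler option handler could call."""
    return {name: fn() for name, fn in SPOILER_WRITERS.items()}
```

Keeping generation separate from option parsing is what makes the "upload the latest spoilers automatically" follow-up straightforward.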
41,738 | 10,583,959,405 | IssuesEvent | 2019-10-08 14:37:46 | zfsonlinux/zfs | https://api.github.com/repos/zfsonlinux/zfs | closed | After a week of running array, issuing zpool scrub causes system hang | Type: Defect | <!--
Thank you for reporting an issue.
*IMPORTANT* - Please search our issue tracker *before* making a new issue.
If you cannot find a similar issue, then create a new issue.
https://github.com/zfsonlinux/zfs/issues
*IMPORTANT* - This issue tracker is for *bugs* and *issues* only.
Please search the wiki and the mailing list archives before asking
questions on the mailing list.
https://github.com/zfsonlinux/zfs/wiki/Mailing-Lists
Please fill in as much of the template as possible.
-->
### System information
<!-- add version after "|" character -->
Dell R620 | Sandy Bridge
--- | ---
Distribution Name | Gentoo
Distribution Version | Rolling
Linux Kernel | 4.15.16
Architecture | x86_64
ZFS Version | 0.7.9-r0-gentoo
SPL Version | 0.7.9-r0-gentoo
<!--
Commands to find ZFS/SPL versions:
modinfo zfs | grep -iw version
modinfo spl | grep -iw version
-->
### Describe the problem you're observing
My array, which is made up of 10x5TB disks attached to an LSI 9300 SAS controller running RAIDz2, and is currently about 18.2TB full out of ~36TB of usable storage, is seeing a system hang after zpool scrub runs.
However, I can run zpool scrub on my pool after a fresh reboot, and the scrub runs to completion with no issues (and finds no problems). But if I have the scrub run out of cron once a week, as it has been running for about 2 years now, it will cause the system to become unresponsive. If I run the scrub manually after about a week of running the system, the same behavior occurs.
My system is a Dell R620 running Gentoo. It has 72GB of ECC RAM. I have checked SMART data on all the disks, and run other health checks against the RAM, and nothing has indicated a hardware issue. This started occurring after upgrading ZFS to 0.7.x at some point. I honestly don't know where the cutoff happened, since I wrote off the the strange crashes as anomalies until I noticed the pattern.
### Describe how to reproduce the problem
My system just has to run for about a week doing its normal workloads (a couple of VMs, serving data to my Plex server, etc.), and then kick off a zpool scrub on the pool. This can be done via cron or interactively. Either way, same issue.
### Include any warning/errors/backtraces from the system logs
I'm in the process of getting a serial console hooked up to capture this. Because my cron job is currently scheduled to run at 2am on Sunday's, I forget to get this configured until it's too late. I'm hoping someone has also seen this (Google'ing around did show some similar issues, but nothing specific).
<!--
*IMPORTANT* - Please mark logs and text output from terminal commands
or else Github will not display them correctly.
An example is provided below.
Example:
```
this is an example how log text should be marked (wrap it with ```)
```
-->
| 1.0 | After a week of running array, issuing zpool scrub causes system hang - <!--
Thank you for reporting an issue.
*IMPORTANT* - Please search our issue tracker *before* making a new issue.
If you cannot find a similar issue, then create a new issue.
https://github.com/zfsonlinux/zfs/issues
*IMPORTANT* - This issue tracker is for *bugs* and *issues* only.
Please search the wiki and the mailing list archives before asking
questions on the mailing list.
https://github.com/zfsonlinux/zfs/wiki/Mailing-Lists
Please fill in as much of the template as possible.
-->
### System information
<!-- add version after "|" character -->
Dell R620 | Sandy Bridge
--- | ---
Distribution Name | Gentoo
Distribution Version | Rolling
Linux Kernel | 4.15.16
Architecture | x86_64
ZFS Version | 0.7.9-r0-gentoo
SPL Version | 0.7.9-r0-gentoo
<!--
Commands to find ZFS/SPL versions:
modinfo zfs | grep -iw version
modinfo spl | grep -iw version
-->
### Describe the problem you're observing
My array, which is made up of 10x5TB disks attached to an LSI 9300 SAS controller running RAIDz2, and is currently about 18.2TB full out of ~36TB of usable storage, is seeing a system hang after zpool scrub runs.
However, I can run zpool scrub on my pool after a fresh reboot, and the scrub runs to completion with no issues (and finds no problems). But if I have the scrub run out of cron once a week, as it has been running for about 2 years now, it will cause the system to become unresponsive. If I run the scrub manually after about a week of running the system, the same behavior occurs.
My system is a Dell R620 running Gentoo. It has 72GB of ECC RAM. I have checked SMART data on all the disks, and run other health checks against the RAM, and nothing has indicated a hardware issue. This started occurring after upgrading ZFS to 0.7.x at some point. I honestly don't know where the cutoff happened, since I wrote off the the strange crashes as anomalies until I noticed the pattern.
### Describe how to reproduce the problem
My system just has to run for about a week doing its normal workloads (a couple of VMs, serving data to my Plex server, etc.), and then kick off a zpool scrub on the pool. This can be done via cron or interactively. Either way, same issue.
### Include any warning/errors/backtraces from the system logs
I'm in the process of getting a serial console hooked up to capture this. Because my cron job is currently scheduled to run at 2am on Sunday's, I forget to get this configured until it's too late. I'm hoping someone has also seen this (Google'ing around did show some similar issues, but nothing specific).
<!--
*IMPORTANT* - Please mark logs and text output from terminal commands
or else Github will not display them correctly.
An example is provided below.
Example:
```
this is an example how log text should be marked (wrap it with ```)
```
-->
| non_main | after a week of running array issuing zpool scrub causes system hang thank you for reporting an issue important please search our issue tracker before making a new issue if you cannot find a similar issue then create a new issue important this issue tracker is for bugs and issues only please search the wiki and the mailing list archives before asking questions on the mailing list please fill in as much of the template as possible system information dell sandy bridge distribution name gentoo distribution version rolling linux kernel architecture zfs version gentoo spl version gentoo commands to find zfs spl versions modinfo zfs grep iw version modinfo spl grep iw version describe the problem you re observing my array which is made up of disks attached to an lsi sas controller running and is currently about full out of of usable storage is seeing a system hang after zpool scrub runs however i can run zpool scrub on my pool after a fresh reboot and the scrub runs to completion with no issues and finds no problems but if i have the scrub run out of cron once a week as it has been running for about years now it will cause the system to become unresponsive if i run the scrub manually after about a week of running the system the same behavior occurs my system is a dell running gentoo it has of ecc ram i have checked smart data on all the disks and run other health checks against the ram and nothing has indicated a hardware issue this started occurring after upgrading zfs to x at some point i honestly don t know where the cutoff happened since i wrote off the the strange crashes as anomalies until i noticed the pattern describe how to reproduce the problem my system just has to run for about a week doing its normal workloads a couple of vms serving data to my plex server etc and then kick off a zpool scrub on the pool this can be done via cron or interactively either way same issue include any warning errors backtraces from the system logs i m in the process 
of getting a serial console hooked up to capture this because my cron job is currently scheduled to run at on sunday s i forget to get this configured until it s too late i m hoping someone has also seen this google ing around did show some similar issues but nothing specific important please mark logs and text output from terminal commands or else github will not display them correctly an example is provided below example this is an example how log text should be marked wrap it with | 0 |
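The weekly trigger described in this report (cron kicking off the scrub at 2am on Sundays) can be reproduced deterministically when correlating hangs with scrub times. A small sketch follows; only the schedule comes from the report, the function and its defaults are illustrative:

```python
from datetime import datetime, timedelta

def next_weekly_scrub(now, weekday=6, hour=2):
    """Next occurrence of the reported schedule (2am on Sundays).

    `weekday` follows datetime.weekday(): Monday=0 ... Sunday=6.
    """
    candidate = now.replace(hour=hour, minute=0, second=0, microsecond=0)
    candidate += timedelta(days=(weekday - now.weekday()) % 7)
    if candidate <= now:  # this week's slot has already passed
        candidate += timedelta(days=7)
    return candidate

# From a Wednesday afternoon the next run lands on the coming Sunday 02:00:
assert next_weekly_scrub(datetime(2019, 10, 9, 15, 30)) == datetime(2019, 10, 13, 2, 0)
```

Knowing the exact next trigger makes it easy to have the serial console capturing before the scrub starts, instead of remembering too late.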
118,803 | 10,011,935,082 | IssuesEvent | 2019-07-15 11:59:37 | GetTerminus/terminus-ui | https://api.github.com/repos/GetTerminus/terminus-ui | opened | Chore commits not tied to a release do not get added to the changelog | Goal: Library Stabilization Needs: exploration Target: latest Type: bug | #### 1. What is the expected behavior?
All commits should be added to the changelog.
#### 2. What is the current behavior?
Chore commits seem to not be added if not commited with code that triggers a release.
#### 3. What are the steps to reproduce?
Providing a reproduction is the *best* way to share your issue.
a) Create a `chore` commit and push it to master
b) Notice no changelog changes are published (which is correct).
c) Create another `chore` commit and then a `fix` commit.
d) Releasing the code, you will see both commits from step 'c' but not the commit from step 'a'.
##### Example:
These commits:

Created this changelog (notice missing commits for the toggle chore and log in chore):

#### 4. Which versions of this library, Angular, TypeScript, & browsers are affected?
- UI Library: latest
| 1.0 | Chore commits not tied to a release do not get added to the changelog - #### 1. What is the expected behavior?
All commits should be added to the changelog.
#### 2. What is the current behavior?
Chore commits seem to not be added if not commited with code that triggers a release.
#### 3. What are the steps to reproduce?
Providing a reproduction is the *best* way to share your issue.
a) Create a `chore` commit and push it to master
b) Notice no changelog changes are published (which is correct).
c) Create another `chore` commit and then a `fix` commit.
d) Releasing the code, you will see both commits from step 'c' but not the commit from step 'a'.
##### Example:
These commits:

Created this changelog (notice missing commits for the toggle chore and log in chore):

#### 4. Which versions of this library, Angular, TypeScript, & browsers are affected?
- UI Library: latest
| non_main | chore commits not tied to a release do not get added to the changelog what is the expected behavior all commits should be added to the changelog what is the current behavior chore commits seem to not be added if not commited with code that triggers a release what are the steps to reproduce providing a reproduction is the best way to share your issue a create a chore commit and push it to master b notice no changelog changes are published which is correct c create another chore commit and then a fix commit d releasing the code you will see both commits from step c but not the commit from step a example these commits created this changelog notice missing commits for the toggle chore and log in chore which versions of this library angular typescript browsers are affected ui library latest | 0 |
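The behaviour described in this record, where `chore` commits neither trigger a release nor survive into the next release's changelog, can be mimicked with a small conventional-commit filter. This is an illustrative sketch, not semantic-release's actual implementation, and the release-triggering type set is an assumption:

```python
RELEASE_TYPES = {"feat", "fix", "perf"}  # assumed release-triggering types

def commit_type(message):
    """Extract the conventional-commit type from e.g. 'fix(select): emit change'."""
    return message.split(":", 1)[0].split("(", 1)[0].strip()

def plan_release(commits):
    """Return (triggers_release, changelog_entries) for a batch of commits.

    Mirrors the reported behaviour: a batch containing only chore commits
    publishes nothing, so those chores never reach any changelog.
    """
    triggers = any(commit_type(m) in RELEASE_TYPES for m in commits)
    return triggers, (commits if triggers else [])
```

Run against the commits in the screenshots, the lone `chore` push yields no release, and the later releasing batch carries only its own commits, reproducing the gap step 'a' describes.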
177 | 2,772,038,465 | IssuesEvent | 2015-05-02 08:18:57 | spyder-ide/spyder | https://api.github.com/repos/spyder-ide/spyder | opened | Move helper widgets to `helperwidgets.py` | Maintainability Miscelleneous Usability | Refactor the code in charge of the [Check for updates PR](#2321), to move the CheckMessageBox to `helperwidgets.py` file.
Refactor code from the [arraybuilder ](https://bitbucket.org/spyder-ide/spyderlib/pull-request/98/add-array-matrix-helper-in-editor-and) to move the ToolTip to `helperwidgets.py` file. | True | Move helper widgets to `helperwidgets.py` - Refactor the code in charge of the [Check for updates PR](#2321), to move the CheckMessageBox to `helperwidgets.py` file.
Refactor code from the [arraybuilder ](https://bitbucket.org/spyder-ide/spyderlib/pull-request/98/add-array-matrix-helper-in-editor-and) to move the ToolTip to `helperwidgets.py` file. | main | move helper widgets to helperwidgets py refactor the code in charge of the to move the checkmessagebox to helperwidgets py file refactor code from the to move the tooltip to helperwidgets py file | 1 |
305,794 | 26,413,219,814 | IssuesEvent | 2023-01-13 13:59:54 | JQAstudent/TEODOR | https://api.github.com/repos/JQAstudent/TEODOR | opened | teodor.bg - When in Greek a user can enter delivery details in Cyrillic alphabet and a Bulgarian phone number | NEGATIVE TEST CASE | <html>
<body>
<!--StartFragment-->
Bug ID | BR TDR BSKT - 008
-- | --
Name | When in Greek a user can enter delivery details in Cyrillic alphabet and a Bulgarian phone number
Priority |
Severity | 3/4
Description |
Steps to reproduce |
Step 1 | Navigate to https://teodor.bg/
Step 2 | Change the language to Greek
Step 3 | Log in with valid credentials (email: ez_jqa@mail.bg, password: @ideSeg@)
Step 4 | Add any products to the basket
Step 5 | Click on "ΠΡΟΧΩΡΗΣΤΕ ΣΤΗΝ ΑΓΟΡΑ" button
Step 6 | Fill in the required fields in Cyrillic and incomplete Bulgarian phone number (003598)
Step 7 | Click on "ΕΠΟΜΕΝΟ" button
|
Expected result: | The page with the delivery details is loaded in Greek. The required fields are marked and instructions are available for specific fields. There is a verification for the fields with a specific length and format of the number such as phone number. When the fields are filled in in a Cyrillic alphabet, or when shorter phone number is entered or a phone number with different country dialing code, a warning message is displayed and the user is not able to proceed with the order.
|
Actual result: | The page with the delivery details is loaded in Greek. The required fields are marked and instructions are available for a specific fields. There isn't a verification for the fields with a specific length of the number such as phone number and financial identification code. When the fields are filled in in a Cyrillic alphabet, or when shorter number or non-Greek country dialing code is entered not any warning message is displayed and the user is able to proceed with the order with incorrect details entered.
Attachment | https://drive.google.com/file/d/1qKbbob3IZAhvZIMXnkpYJqee3gz_pOkw/view?usp=sharing
Status | Opened
Component | Shopping basket
Version/Build number (Found in) | 2022
Environment | Windows 7, Yandex 22.7.1.806 (64-bit)
Comments |
Date Created | 8/7/2022
Author | Emil Zahariev
Audit Log |
<!--EndFragment-->
</body>
</html>Bug ID BR TDR BSKT - 008
Name When in Greek a user can enter delivery details in Cyrillic alphabet and a Bulgarian phone number
Priority
Severity 3/4
Description
Steps to reproduce
Step 1 Navigate to https://teodor.bg/
Step 2 Change the language to Greek
Step 3 Log in with valid credentials (email: [ez_jqa@mail.bg](mailto:ez_jqa@mail.bg), password: @ideSeg@)
Step 4 Add any products to the basket
Step 5 Click on "ΠΡΟΧΩΡΗΣΤΕ ΣΤΗΝ ΑΓΟΡΑ" button
Step 6 Fill in the required fields in Cyrillic and incomplete Bulgarian phone number (003598)
Step 7 Click on "ΕΠΟΜΕΝΟ" button
Expected result: The page with the delivery details is loaded in Greek. The required fields are marked and instructions are available for specific fields. There is a verification for the fields with a specific length and format of the number such as phone number. When the fields are filled in in a Cyrillic alphabet, or when shorter phone number is entered or a phone number with different country dialing code, a warning message is displayed and the user is not able to proceed with the order.
Actual result: The page with the delivery details is loaded in Greek. The required fields are marked and instructions are available for a specific fields. There isn't a verification for the fields with a specific length of the number such as phone number and financial identification code. When the fields are filled in in a Cyrillic alphabet, or when shorter number or non-Greek country dialing code is entered not any warning message is displayed and the user is able to proceed with the order with incorrect details entered.
Attachment https://drive.google.com/file/d/1qKbbob3IZAhvZIMXnkpYJqee3gz_pOkw/view?usp=sharing
Status Opened
Component Shopping basket
Version/Build number (Found in) 2022
Environment Windows 7, Yandex 22.7.1.806 (64-bit)
Comments
Date Created 8/7/2022
Author --
Audit Log | 1.0 | teodor.bg - When in Greek a user can enter delivery details in Cyrillic alphabet and a Bulgarian phone number - <html>
<body>
<!--StartFragment-->
Bug ID | BR TDR BSKT - 008
-- | --
Name | When in Greek a user can enter delivery details in Cyrillic alphabet and a Bulgarian phone number
Priority |
Severity | 3/4
Description |
Steps to reproduce |
Step 1 | Navigate to https://teodor.bg/
Step 2 | Change the language to Greek
Step 3 | Log in with valid credentials (email: ez_jqa@mail.bg, password: @ideSeg@)
Step 4 | Add any products to the basket
Step 5 | Click on "ΠΡΟΧΩΡΗΣΤΕ ΣΤΗΝ ΑΓΟΡΑ" button
Step 6 | Fill in the required fields in Cyrillic and incomplete Bulgarian phone number (003598)
Step 7 | Click on "ΕΠΟΜΕΝΟ" button
|
Expected result: | The page with the delivery details is loaded in Greek. The required fields are marked and instructions are available for specific fields. There is a verification for the fields with a specific length and format of the number such as phone number. When the fields are filled in in a Cyrillic alphabet, or when shorter phone number is entered or a phone number with different country dialing code, a warning message is displayed and the user is not able to proceed with the order.
|
Actual result: | The page with the delivery details is loaded in Greek. The required fields are marked and instructions are available for a specific fields. There isn't a verification for the fields with a specific length of the number such as phone number and financial identification code. When the fields are filled in in a Cyrillic alphabet, or when shorter number or non-Greek country dialing code is entered not any warning message is displayed and the user is able to proceed with the order with incorrect details entered.
Attachment | https://drive.google.com/file/d/1qKbbob3IZAhvZIMXnkpYJqee3gz_pOkw/view?usp=sharing
Status | Opened
Component | Shopping basket
Version/Build number (Found in) | 2022
Environment | Windows 7, Yandex 22.7.1.806 (64-bit)
Comments |
Date Created | 8/7/2022
Author | Emil Zahariev
Audit Log |
<!--EndFragment-->
</body>
</html>Bug ID BR TDR BSKT - 008
Name When in Greek a user can enter delivery details in Cyrillic alphabet and a Bulgarian phone number
Priority
Severity 3/4
Description
Steps to reproduce
Step 1 Navigate to https://teodor.bg/
Step 2 Change the language to Greek
Step 3 Log in with valid credentials (email: [ez_jqa@mail.bg](mailto:ez_jqa@mail.bg), password: @ideSeg@)
Step 4 Add any products to the basket
Step 5 Click on "ΠΡΟΧΩΡΗΣΤΕ ΣΤΗΝ ΑΓΟΡΑ" button
Step 6 Fill in the required fields in Cyrillic and incomplete Bulgarian phone number (003598)
Step 7 Click on "ΕΠΟΜΕΝΟ" button
Expected result: The page with the delivery details is loaded in Greek. The required fields are marked and instructions are available for specific fields. There is a verification for the fields with a specific length and format of the number such as phone number. When the fields are filled in in a Cyrillic alphabet, or when shorter phone number is entered or a phone number with different country dialing code, a warning message is displayed and the user is not able to proceed with the order.
Actual result: The page with the delivery details is loaded in Greek. The required fields are marked and instructions are available for a specific fields. There isn't a verification for the fields with a specific length of the number such as phone number and financial identification code. When the fields are filled in in a Cyrillic alphabet, or when shorter number or non-Greek country dialing code is entered not any warning message is displayed and the user is able to proceed with the order with incorrect details entered.
Attachment https://drive.google.com/file/d/1qKbbob3IZAhvZIMXnkpYJqee3gz_pOkw/view?usp=sharing
Status Opened
Component Shopping basket
Version/Build number (Found in) 2022
Environment Windows 7, Yandex 22.7.1.806 (64-bit)
Comments
Date Created 8/7/2022
Author --
Audit Log | non_main | teodor bg when in greek a user can enter delivery details in cyrillic alphabet and a bulgarian phone number bug id br tdr bskt name when in greek a user can enter delivery details in cyrillic alphabet and a bulgarian phone number priority severity description steps to reproduce step navigate to step change the language to greek step log in with valid credentials email ez jqa mail bg password ideseg step add any products to the basket step click on προχωρηστε στην αγορα button step fill in the required fields in cyrillic and incomplete bulgarian phone number step click on επομενο button expected result the page with the delivery details is loaded in greek the required fields are marked and instructions are available for specific fields there is a verification for the fields with a specific length and format of the number such as phone number when the fields are filled in in a cyrillic alphabet or when shorter phone number is entered or a phone number with different country dialing code a warning message is displayed and the user is not able to proceed with the order actual result the page with the delivery details is loaded in greek the required fields are marked and instructions are available for a specific fields there isn t a verification for the fields with a specific length of the number such as phone number and financial identification code when the fields are filled in in a cyrillic alphabet or when shorter number or non greek country dialing code is entered not any warning message is displayed and the user is able to proceed with the order with incorrect details entered attachment status opened component shopping basket version build number found in environment windows yandex bit comments date created author emil zahariev audit log bug id br tdr bskt name when in greek a user can enter delivery details in cyrillic alphabet and a bulgarian phone number priority severity description steps to reproduce step navigate to step change the 
language to greek step log in with valid credentials email mailto ez jqa mail bg password ideseg step add any products to the basket step click on προχωρηστε στην αγορα button step fill in the required fields in cyrillic and incomplete bulgarian phone number step click on επομενο button expected result the page with the delivery details is loaded in greek the required fields are marked and instructions are available for specific fields there is a verification for the fields with a specific length and format of the number such as phone number when the fields are filled in in a cyrillic alphabet or when shorter phone number is entered or a phone number with different country dialing code a warning message is displayed and the user is not able to proceed with the order actual result the page with the delivery details is loaded in greek the required fields are marked and instructions are available for a specific fields there isn t a verification for the fields with a specific length of the number such as phone number and financial identification code when the fields are filled in in a cyrillic alphabet or when shorter number or non greek country dialing code is entered not any warning message is displayed and the user is able to proceed with the order with incorrect details entered attachment status opened component shopping basket version build number found in environment windows yandex bit comments date created author audit log | 0 |
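The validation missing from the actual result (length and country-dialing-code checks on the phone field) can be sketched as follows. The dialing codes, expected lengths, and function name are illustrative assumptions, not teodor.bg's real rules:

```python
import re

# Assumed rules: a Greek checkout expects a Greek number (+30 / 0030)
# with 10 national digits, so the incomplete Bulgarian "003598" must fail.
DIAL_CODES = {"el": "30", "bg": "359"}
NATIONAL_DIGITS = {"el": 10, "bg": 9}

def valid_phone(raw, locale):
    digits = re.sub(r"\D", "", raw)
    code = DIAL_CODES[locale]
    if digits.startswith("00" + code):
        national = digits[len("00" + code):]
    elif raw.strip().startswith("+" + code):
        national = digits[len(code):]
    else:
        return False  # wrong or missing country dialing code
    return len(national) == NATIONAL_DIGITS[locale]
```

With a check like this wired to the "ΕΠΟΜΕΝΟ" button, the number entered in step 6 would raise a warning instead of letting the order proceed.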
38,596 | 19,405,705,239 | IssuesEvent | 2021-12-19 23:48:27 | ghost-fvtt/fxmaster | https://api.github.com/repos/ghost-fvtt/fxmaster | closed | Investigate performance issues | bug to be confirmed performance | ### Expected Behavior
performance is good and in particular, performance of the core weather effects should be the same as performance of the FXMaster weather effects.
### Current Behavior
Performance suffers massively in large scenes (10890x14000) and it seems to be worse for effects from FXMaster than for core weather effects.
### Steps to Reproduce
1. create a large scene (e.g. 10890x14000)
2. add a FXMaster weather effect to that scene
3. observe bad performance
### Context
Feedback from discord tests of v2.0.0-rc1
### Version
v1.2.1, v2.0.0-rc1
### Foundry VTT Version
V9.235
### Operating System
unknown
### Browser / App
Native Electron App, Chrome, Firefox
### Game System
-
### Relevant Modules
- | True | Investigate performance issues - ### Expected Behavior
performance is good and in particular, performance of the core weather effects should be the same as performance of the FXMaster weather effects.
### Current Behavior
Performance suffers massively in large scenes (10890x14000) and it seems to be worse for effects from FXMaster than for core weather effects.
### Steps to Reproduce
1. create a large scene (e.g. 10890x14000)
2. add a FXMaster weather effect to that scene
3. observe bad performance
### Context
Feedback from discord tests of v2.0.0-rc1
### Version
v1.2.1, v2.0.0-rc1
### Foundry VTT Version
V9.235
### Operating System
unknown
### Browser / App
Native Electron App, Chrome, Firefox
### Game System
-
### Relevant Modules
- | non_main | investigate performance issues expected behavior performance is good and in particular performance of the core weather effects should be the same as performance of the fxmaster weather effects current behavior performance suffers massively in large scenes and it seems to be worse for effects from fxmaster than for core weather effects steps to reproduce create a large scene e g add a fxmaster weather effect to that scene observer bad performance context feedback from discord tests of version foundry vtt version operating system unknown browser app native electron app chrome firefox game system relevant modules | 0 |
1,817 | 6,577,318,442 | IssuesEvent | 2017-09-12 00:04:27 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | Ansible 2.1.x cloud/docker incompatible with docker-py 1.1.0 | affects_2.1 bug_report cloud docker waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
_docker module
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.1.0.0
config file = /users/pikachuexe/projects/spacious/spacious-rails/ansible.cfg
configured module search path = ['./ansible/library']
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
```
[defaults]
roles_path = ./ansible/roles
hostfile = ./ansible/inventories/localhost
filter_plugins = ./ansible/filter_plugins
library = ./ansible/library
error_on_undefined_vars = True
display_skipped_hosts = False
```
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
"N/A"
##### SUMMARY
<!--- Explain the problem briefly -->
Ansible 2.1.x cloud/docker incompatible with docker-py 1.1.0
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
I try to update from Ansible 1.8.x
And worked around some issues with `2.0.x` (at least still works with some issue unfixed)
https://docs.ansible.com/ansible/docker_module.html
Declares it works with `docker-py` >= `0.3.0`
So I run my usual deploy playbook which fails at the task below
<!--- Paste example playbooks or commands between quotes below -->
```
---
- name: pull docker image by creating a tmp container
sudo: true
# This is step that fails
# No label specified
action:
module: docker
image: "{{ docker_image_name }}:{{ docker_image_tag }}"
state: "present"
pull: "missing"
# By using name, the existing container will be stopped automatically
name: "{{ docker_container_prefix }}.tmp.pull"
command: "bash"
detach: yes
username: "{{ docker_api_username | mandatory }}"
password: "{{ docker_api_password | mandatory }}"
email: "{{ docker_api_email | mandatory }}"
register: pull_docker_image_result
until: pull_docker_image_result|success
retries: 5
delay: 3
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
Starts container without issue
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
Failed due to docker-py version too old (locked at `1.1.0` for ansible `1.8.x`)
And `labels` keyword is passed without checking the version
<!--- Paste verbatim command output between quotes below -->
```
fatal: [app_server_02]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_name": "docker"}, "module_stderr": "OpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011\ndebug1: Reading configuration data /users/pikachuexe/.ssh/config\r\ndebug1: /users/pikachuexe/.ssh/config line 1: Applying options for *\r\ndebug1: /users/pikachuexe/.ssh/config line 17: Applying options for 119.81.*.*\r\ndebug1: Reading configuration data /etc/ssh_config\r\ndebug1: /etc/ssh_config line 20: Applying options for *\r\ndebug1: /etc/ssh_config line 102: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 87565\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\nShared connection to 119.81.xxx.xxx closed.\r\n", "module_stdout": "Traceback (most recent call last):\r\n File \"/tmp/ansible_q5yEWt/ansible_module_docker.py\", line 1972, in <module>\r\n main()\r\n File \"/tmp/ansible_q5yEWt/ansible_module_docker.py\", line 1938, in main\r\n present(manager, containers, count, name)\r\n File \"/tmp/ansible_q5yEWt/ansible_module_docker.py\", line 1742, in present\r\n created = manager.create_containers(delta)\r\n File \"/tmp/ansible_q5yEWt/ansible_module_docker.py\", line 1660, in create_containers\r\n containers = do_create(count, params)\r\n File \"/tmp/ansible_q5yEWt/ansible_module_docker.py\", line 1653, in do_create\r\n result = self.client.create_container(**params)\r\nTypeError: create_container() got an unexpected keyword argument 'labels'\r\n", "msg": "MODULE FAILURE", "parsed": false}
```
| True | Ansible 2.1.x cloud/docker incompatible with docker-py 1.1.0 - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
_docker module
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.1.0.0
config file = /users/pikachuexe/projects/spacious/spacious-rails/ansible.cfg
configured module search path = ['./ansible/library']
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
```
[defaults]
roles_path = ./ansible/roles
hostfile = ./ansible/inventories/localhost
filter_plugins = ./ansible/filter_plugins
library = ./ansible/library
error_on_undefined_vars = True
display_skipped_hosts = False
```
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
"N/A"
##### SUMMARY
<!--- Explain the problem briefly -->
Ansible 2.1.x cloud/docker incompatible with docker-py 1.1.0
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
I try to update from Ansible 1.8.x
And worked around some issues with `2.0.x` (at least still works with some issue unfixed)
https://docs.ansible.com/ansible/docker_module.html
Declares it works with `docker-py` >= `0.3.0`
So I run my usual deploy playbook which fails at the task below
<!--- Paste example playbooks or commands between quotes below -->
```
---
- name: pull docker image by creating a tmp container
sudo: true
# This is step that fails
# No label specified
action:
module: docker
image: "{{ docker_image_name }}:{{ docker_image_tag }}"
state: "present"
pull: "missing"
# By using name, the existing container will be stopped automatically
name: "{{ docker_container_prefix }}.tmp.pull"
command: "bash"
detach: yes
username: "{{ docker_api_username | mandatory }}"
password: "{{ docker_api_password | mandatory }}"
email: "{{ docker_api_email | mandatory }}"
register: pull_docker_image_result
until: pull_docker_image_result|success
retries: 5
delay: 3
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
Starts container without issue
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
Failed due to docker-py version too old (locked at `1.1.0` for ansible `1.8.x`)
And `labels` keyword is passed without checking the version
<!--- Paste verbatim command output between quotes below -->
```
fatal: [app_server_02]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_name": "docker"}, "module_stderr": "OpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011\ndebug1: Reading configuration data /users/pikachuexe/.ssh/config\r\ndebug1: /users/pikachuexe/.ssh/config line 1: Applying options for *\r\ndebug1: /users/pikachuexe/.ssh/config line 17: Applying options for 119.81.*.*\r\ndebug1: Reading configuration data /etc/ssh_config\r\ndebug1: /etc/ssh_config line 20: Applying options for *\r\ndebug1: /etc/ssh_config line 102: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 87565\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\nShared connection to 119.81.xxx.xxx closed.\r\n", "module_stdout": "Traceback (most recent call last):\r\n File \"/tmp/ansible_q5yEWt/ansible_module_docker.py\", line 1972, in <module>\r\n main()\r\n File \"/tmp/ansible_q5yEWt/ansible_module_docker.py\", line 1938, in main\r\n present(manager, containers, count, name)\r\n File \"/tmp/ansible_q5yEWt/ansible_module_docker.py\", line 1742, in present\r\n created = manager.create_containers(delta)\r\n File \"/tmp/ansible_q5yEWt/ansible_module_docker.py\", line 1660, in create_containers\r\n containers = do_create(count, params)\r\n File \"/tmp/ansible_q5yEWt/ansible_module_docker.py\", line 1653, in do_create\r\n result = self.client.create_container(**params)\r\nTypeError: create_container() got an unexpected keyword argument 'labels'\r\n", "msg": "MODULE FAILURE", "parsed": false}
```
| main | ansible x cloud docker incompatible with docker py issue type bug report component name docker module ansible version ansible config file users pikachuexe projects spacious spacious rails ansible cfg configured module search path configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables roles path ansible roles hostfile ansible inventories localhost filter plugins ansible filter plugins library ansible library error on undefined vars true display skipped hosts false os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific n a summary ansible x cloud docker incompatible with docker py steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used i try to update from ansible x and worked around some issues with x at least still works with some issue unfixed declares it works with docker py so i run my usual deploy playbook which fails at the task below name pull docker image by creating a tmp container sudo true this is step that fails no label specified action module docker image docker image name docker image tag state present pull missing by using name the existing container will be stopped automatically name docker container prefix tmp pull command bash detach yes username docker api username mandatory password docker api password mandatory email docker api email mandatory register pull docker image result until pull docker image result success retries delay expected results starts container without issue actual results failed due to docker py version too old locked at for ansible x and labels keyword is passed without checking the version fatal failed changed false failed true invocation module name docker module stderr openssh osslshim dec reading configuration data users pikachuexe ssh config r users pikachuexe ssh config line applying options 
for r users pikachuexe ssh config line applying options for r reading configuration data etc ssh config r etc ssh config line applying options for r etc ssh config line applying options for r auto mux trying existing master r fd setting o nonblock r mux client hello exchange master version r mux client forwards request forwardings local remote r mux client request session entering r mux client request alive entering r mux client request alive done pid r mux client request session session request sent r mux client request session master session id r mux client read packet read header failed broken pipe r received exit status from master r nshared connection to xxx xxx closed r n module stdout traceback most recent call last r n file tmp ansible ansible module docker py line in r n main r n file tmp ansible ansible module docker py line in main r n present manager containers count name r n file tmp ansible ansible module docker py line in present r n created manager create containers delta r n file tmp ansible ansible module docker py line in create containers r n containers do create count params r n file tmp ansible ansible module docker py line in do create r n result self client create container params r ntypeerror create container got an unexpected keyword argument labels r n msg module failure parsed false | 1 |
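The traceback in the record above boils down to a keyword argument (`labels`) being passed to a `docker-py` client too old to accept it, raising `TypeError: create_container() got an unexpected keyword argument 'labels'`. A hedged sketch of the version-gating pattern that avoids that class of failure — the function names and the `1.2.0` threshold here are assumptions for illustration, not the actual Ansible fix:

```python
def parse_version(version):
    """Turn a dotted version string like '1.1.0' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))


def build_create_params(image, command, labels=None, client_version="1.1.0"):
    """Build kwargs for a hypothetical create_container() call, only
    including 'labels' when the client is new enough to accept it.

    Passing 'labels' to an older client raises TypeError, as in the
    issue traceback; gating on the client version sidesteps that.
    """
    params = {"image": image, "command": command}
    # The 1.2.0 threshold is illustrative; a real module would check the
    # installed docker-py version it actually supports.
    if labels and parse_version(client_version) >= (1, 2, 0):
        params["labels"] = labels
    return params
```

With this shape, the same playbook works on both old and new clients; the optional field is simply dropped where unsupported.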
17,341 | 23,923,198,165 | IssuesEvent | 2022-09-09 19:09:33 | OpenKneeboard/OpenKneeboard | https://api.github.com/repos/OpenKneeboard/OpenKneeboard | closed | Publish 3 empty frames after hiding on OpenXR | compatibility | WMR has a bug (fixed in v112) where depth data from layers that no longer exist/are hidden still affect rendering; this has been described as a 'wave', 'shimmer', or 'fuzz'.
Pushing 3 texture of the same size, fully alpha, should fix this; 3 is based on the number of back buffers in the WMR implementation. | True | Publish 3 empty frames after hiding on OpenXR - WMR has a bug (fixed in v112) where depth data from layers that no longer exist/are hidden still affect rendering; this has been described as a 'wave', 'shimmer', or 'fuzz'.
Pushing 3 texture of the same size, fully alpha, should fix this; 3 is based on the number of back buffers in the WMR implementation. | non_main | publish empty frames after hiding on openxr wmr has a bug fixed in where depth data from layers that no longer exist are hidden still affect rendering this has been described as a wave shimmer or fuzz pushing texture of the same size fully alpha should fix this is based on the number of back buffers in the wmr implementation | 0 |
1,701 | 6,574,387,311 | IssuesEvent | 2017-09-11 12:42:17 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | dellos9_command confirm prompt timeout | affects_2.2 bug_report networking waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
dellos9_command
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.2.0.0
config file = /home/emarq/Solutions.Network.Automation/MAS/Ansible/dell/force10/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
Linux rr1masdansible 4.4.0-45-generic #66-Ubuntu SMP Wed Oct 19 14:12:37 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
##### SUMMARY
<!--- Explain the problem briefly -->
copy config to the startup-config on the local flash breaks because of yes/no prompt.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes below -->
```
- name: issue delete and copy old config
dellos9_command:
provider: "{{ cli }}"
commands:
- "delete flash://{{ inventory_hostname }}.conf no-confirm"
- "delete flash://startup-conifg no-confirm"
- "copy tftp://10.10.240.253:/{{ inventory_hostname }}.conf flash://{{ inventory_hostname }}.conf"
- "copy flash://{{ inventory_hostname }}.conf startup-config"
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
copy command should have completed and provided a default yes for the os prompt.
rr1-n22-r14-4048hl-5-1a#$sh://rr1-n22-r14-4048hl-5-1a.conf startup-config
File with same name already exist.
Proceed to copy the file [confirm yes/no]:
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
fatal: [rr1-n22-r14-4048hl-5-1a]: FAILED! => {
"changed": false,
"failed": true,
"invocation": {
"module_args": {
"auth_pass": null,
"authorize": false,
"commands": [
"delete flash://rr1-n22-r14-4048hl-5-1a.conf no-confirm",
"delete flash://startup-conifg no-confirm",
"copy tftp://10.10.240.253:/rr1-n22-r14-4048hl-5-1a.conf flash://rr1-n22-r14-4048hl-5-1a.conf",
"copy flash://rr1-n22-r14-4048hl-5-1a.conf startup-config"
],
"host": "10.10.234.161",
"interval": 1,
"password": null,
"port": null,
"provider": {
"host": "10.10.234.161",
"ssh_keyfile": "/srv/tftpboot/masd-rsa.pub",
"transport": "cli",
"username": "admin"
},
"retries": 10,
"ssh_keyfile": "/srv/tftpboot/masd-rsa.pub",
"timeout": 10,
"transport": "cli",
"username": "admin",
"wait_for": null
},
"module_name": "dellos9_command"
},
"msg": "timeout trying to send command: copy flash://rr1-n22-r14-4048hl-5-1a.conf startup-config\r"
}
```
| True | dellos9_command confirm prompt timeout - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
dellos9_command
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.2.0.0
config file = /home/emarq/Solutions.Network.Automation/MAS/Ansible/dell/force10/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
Linux rr1masdansible 4.4.0-45-generic #66-Ubuntu SMP Wed Oct 19 14:12:37 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
##### SUMMARY
<!--- Explain the problem briefly -->
copy config to the startup-config on the local flash breaks because of yes/no prompt.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes below -->
```
- name: issue delete and copy old config
dellos9_command:
provider: "{{ cli }}"
commands:
- "delete flash://{{ inventory_hostname }}.conf no-confirm"
- "delete flash://startup-conifg no-confirm"
- "copy tftp://10.10.240.253:/{{ inventory_hostname }}.conf flash://{{ inventory_hostname }}.conf"
- "copy flash://{{ inventory_hostname }}.conf startup-config"
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
copy command should have completed and provided a default yes for the os prompt.
rr1-n22-r14-4048hl-5-1a#$sh://rr1-n22-r14-4048hl-5-1a.conf startup-config
File with same name already exist.
Proceed to copy the file [confirm yes/no]:
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
fatal: [rr1-n22-r14-4048hl-5-1a]: FAILED! => {
"changed": false,
"failed": true,
"invocation": {
"module_args": {
"auth_pass": null,
"authorize": false,
"commands": [
"delete flash://rr1-n22-r14-4048hl-5-1a.conf no-confirm",
"delete flash://startup-conifg no-confirm",
"copy tftp://10.10.240.253:/rr1-n22-r14-4048hl-5-1a.conf flash://rr1-n22-r14-4048hl-5-1a.conf",
"copy flash://rr1-n22-r14-4048hl-5-1a.conf startup-config"
],
"host": "10.10.234.161",
"interval": 1,
"password": null,
"port": null,
"provider": {
"host": "10.10.234.161",
"ssh_keyfile": "/srv/tftpboot/masd-rsa.pub",
"transport": "cli",
"username": "admin"
},
"retries": 10,
"ssh_keyfile": "/srv/tftpboot/masd-rsa.pub",
"timeout": 10,
"transport": "cli",
"username": "admin",
"wait_for": null
},
"module_name": "dellos9_command"
},
"msg": "timeout trying to send command: copy flash://rr1-n22-r14-4048hl-5-1a.conf startup-config\r"
}
```
| main | command confirm prompt timeout issue type bug report component name command ansible version ansible config file home emarq solutions network automation mas ansible dell ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific linux generic ubuntu smp wed oct utc gnu linux summary copy config to the startup config on the local flash breaks because of yes no prompt steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used name issue delete and copy old config command provider cli commands delete flash inventory hostname conf no confirm delete flash startup conifg no confirm copy tftp inventory hostname conf flash inventory hostname conf copy flash inventory hostname conf startup config expected results copy command should have completed and provided a default yes for the os prompt sh conf startup config file with same name already exist proceed to copy the file actual results fatal failed changed false failed true invocation module args auth pass null authorize false commands delete flash conf no confirm delete flash startup conifg no confirm copy tftp conf flash conf copy flash conf startup config host interval password null port null provider host ssh keyfile srv tftpboot masd rsa pub transport cli username admin retries ssh keyfile srv tftpboot masd rsa pub timeout transport cli username admin wait for null module name command msg timeout trying to send command copy flash conf startup config r | 1 |
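The failure in the record above is a `copy` command stalling on an interactive `[confirm yes/no]` prompt that the CLI transport never answers, so the module times out. A minimal sketch of prompt-aware output handling — the prompt patterns and function are hypothetical, not the real dellos9 module internals:

```python
import re

# Prompts a network CLI may emit mid-command, paired with the reply to
# send. These patterns are illustrative; real modules ship their own.
PROMPT_ANSWERS = [
    (re.compile(r"\[confirm yes/no\]:\s*$"), "yes"),
    (re.compile(r"\[confirm\]\s*$"), "y"),
]


def answer_for(output):
    """Return the canned reply for a trailing prompt, or None if the
    output does not end in a known prompt (i.e. keep waiting)."""
    for pattern, reply in PROMPT_ANSWERS:
        if pattern.search(output):
            return reply
    return None
```

A command loop would call `answer_for()` on each chunk of device output and write the reply back instead of waiting until the timeout fires.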
20,854 | 3,851,258,590 | IssuesEvent | 2016-04-06 00:46:47 | jQueryGeo/geo | https://api.github.com/repos/jQueryGeo/geo | reopened | draw & measure modes do not send click events | 1 - Low Test With Latest | drawLineString, drawPolygon, measureLength, & measureArea do not send click events while drawing. This differs from the documented behavior.
http://docs.jquerygeo.com/geomap/mode.html
| 2.0 | draw & measure modes do not send click events - drawLineString, drawPolygon, measureLength, & measureArea do not send click events while drawing. This differs from the documented behavior.
http://docs.jquerygeo.com/geomap/mode.html
| non_main | draw measure modes do not send click events drawlinestring drawpolygon measurelength measurearea do not send click events while drawing this differs from the documented behavior | 0 |
65,322 | 8,798,048,531 | IssuesEvent | 2018-12-24 03:49:27 | joncampbell123/dosbox-x | https://api.github.com/repos/joncampbell123/dosbox-x | closed | PC-98 INT DCh tracing and reverse engineering | documentation | **Is your feature request related to a problem? Please describe.**
INT DCh is undocumented and information is not available.
To implement it properly it is necessary to trace and reverse engineer the code.
Notes here are applicable to an MS-DOS 5.00 bootdisk. | 1.0 | PC-98 INT DCh tracing and reverse engineering - **Is your feature request related to a problem? Please describe.**
INT DCh is undocumented and information is not available.
To implement it properly it is necessary to trace and reverse engineer the code.
Notes here are applicable to an MS-DOS 5.00 bootdisk. | non_main | pc int dch tracing and reverse engineering is your feature request related to a problem please describe int dch is undocumented and information is not available to implement it properly it is necessary to trace and reverse engineer the code notes here are applicable to an ms dos bootdisk | 0 |
824 | 4,445,397,771 | IssuesEvent | 2016-08-20 01:57:23 | caskroom/homebrew-cask | https://api.github.com/repos/caskroom/homebrew-cask | closed | Bug report: "Error: Unknown command: cask" after #23852 | awaiting maintainer feedback bug | This issue seems to happen only after #23852 was merged. I did a `git reset --hard` to reset `/usr/local/Library/Taps/caskroom/homebrew-cask` to a commit just before the merge commit and everything worked well. However, if I reset it to the merge commit, it starts to have this error.
#### Description of issue
`brew cask` returns
```
Error: Unknown command: cask
```
The issue persists after trying all the commands in the "pre-bug-report":
```
$ brew update; brew cleanup; brew cask cleanup
Already up-to-date.
Removing: (...)
==> This operation has freed approximately 969.5M of disk space.
==> Removing cached downloads
(...)
==> This operation has freed approximately 401.9M of disk space.
$ brew uninstall --force brew-cask; brew update
Already up-to-date.
$ brew untap phinze/cask; brew untap caskroom/cask; brew update
Error: No available tap phinze/cask.
Untapping caskroom/cask... (3,781 files, 74.8M)
Untapped 0 formulae
Already up-to-date.
$ brew cask
==> Tapping caskroom/cask
Cloning into '/usr/local/Library/Taps/caskroom/homebrew-cask'...
remote: Counting objects: 3379, done.
remote: Compressing objects: 100% (3361/3361), done.
remote: Total 3379 (delta 37), reused 439 (delta 12), pack-reused 0
Receiving objects: 100% (3379/3379), 1.14 MiB | 812.00 KiB/s, done.
Resolving deltas: 100% (37/37), done.
Checking connectivity... done.
Tapped 0 formulae (3,386 files, 3.5M)
Error: Unknown command: cask
```
<details><summary>Output of `brew cask <command> --verbose`</summary>
```
Error: Unknown command: cask
```
</details>
<details><summary>Output of `brew doctor`</summary>
```
Your system is ready to brew.
```
</details>
<details><summary>Output of `brew cask doctor`</summary>
```
Error: Unknown command: cask
```
</details> | True | Bug report: "Error: Unknown command: cask" after #23852 - This issue seems to happen only after #23852 was merged. I did a `git reset --hard` to reset `/usr/local/Library/Taps/caskroom/homebrew-cask` to a commit just before the merge commit and everything worked well. However, if I reset it to the merge commit, it starts to have this error.
#### Description of issue
`brew cask` returns
```
Error: Unknown command: cask
```
The issue persists after trying all the commands in the "pre-bug-report":
```
$ brew update; brew cleanup; brew cask cleanup
Already up-to-date.
Removing: (...)
==> This operation has freed approximately 969.5M of disk space.
==> Removing cached downloads
(...)
==> This operation has freed approximately 401.9M of disk space.
$ brew uninstall --force brew-cask; brew update
Already up-to-date.
$ brew untap phinze/cask; brew untap caskroom/cask; brew update
Error: No available tap phinze/cask.
Untapping caskroom/cask... (3,781 files, 74.8M)
Untapped 0 formulae
Already up-to-date.
$ brew cask
==> Tapping caskroom/cask
Cloning into '/usr/local/Library/Taps/caskroom/homebrew-cask'...
remote: Counting objects: 3379, done.
remote: Compressing objects: 100% (3361/3361), done.
remote: Total 3379 (delta 37), reused 439 (delta 12), pack-reused 0
Receiving objects: 100% (3379/3379), 1.14 MiB | 812.00 KiB/s, done.
Resolving deltas: 100% (37/37), done.
Checking connectivity... done.
Tapped 0 formulae (3,386 files, 3.5M)
Error: Unknown command: cask
```
<details><summary>Output of `brew cask <command> --verbose`</summary>
```
Error: Unknown command: cask
```
</details>
<details><summary>Output of `brew doctor`</summary>
```
Your system is ready to brew.
```
</details>
<details><summary>Output of `brew cask doctor`</summary>
```
Error: Unknown command: cask
```
</details> | main | bug report error unknown command cask after this issue seems to happen only after was merged i did a git reset hard to reset usr local library taps caskroom homebrew cask to a commit just before the merge commit and everything worked well however if i reset it to the merge commit it starts to have this error description of issue brew cask returns error unknown command cask the issue persists after trying all the commands in the pre bug report brew update brew cleanup brew cask cleanup already up to date removing this operation has freed approximately of disk space removing cached downloads this operation has freed approximately of disk space brew uninstall force brew cask brew update already up to date brew untap phinze cask brew untap caskroom cask brew update error no available tap phinze cask untapping caskroom cask files untapped formulae already up to date brew cask tapping caskroom cask cloning into usr local library taps caskroom homebrew cask remote counting objects done remote compressing objects done remote total delta reused delta pack reused receiving objects mib kib s done resolving deltas done checking connectivity done tapped formulae files error unknown command cask output of brew cask verbose error unknown command cask output of brew doctor your system is ready to brew output of brew cask doctor error unknown command cask | 1 |
3,057 | 11,452,598,789 | IssuesEvent | 2020-02-06 13:59:01 | pace/bricks | https://api.github.com/repos/pace/bricks | closed | Make health checks in parallel | S::Ready T::Maintainance | ### Problem
Currently, health checks are executed in serial order.
### Solution
Use a waiting group or similar with a max time per check (as of now 5 sec) to reduce the overall latency of the health checks to 10 sec. | True | Make health checks in parallel - ### Problem
Currently, health checks are executed in serial order.
### Solution
Use a waiting group or similar with a max time per check (as of now 5 sec) to reduce the overall latency of the health checks to 10 sec. | main | make health checks in parallel problem currently health checks are executed in serial order solution use a waiting group or similar with a max time per check as of now sec to reduce the overall latency of the health checks to sec | 1 |
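The waiting-group-plus-timeout approach described in this row can be sketched as follows (Python here for illustration; pace/bricks itself is a Go project, and all names below are invented for the sketch). Because every check starts at the same time, one shared deadline bounds the overall latency to roughly the per-check timeout instead of the sum of all serial check durations.

```python
import concurrent.futures
import time

def run_health_checks(checks, per_check_timeout=5.0):
    """Run all health checks concurrently instead of in serial order.

    `checks` maps a check name to a zero-argument callable that returns a
    status string. A single shared deadline of `per_check_timeout` seconds
    bounds the overall latency, since all checks run in parallel.
    """
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=max(1, len(checks)))
    futures = {name: pool.submit(fn) for name, fn in checks.items()}
    deadline = time.monotonic() + per_check_timeout
    results = {}
    for name, future in futures.items():
        remaining = max(0.0, deadline - time.monotonic())
        try:
            results[name] = future.result(timeout=remaining)
        except concurrent.futures.TimeoutError:
            results[name] = "timeout"
        except Exception as exc:  # one failing check must not hide the others
            results[name] = "error: %s" % exc
    pool.shutdown(wait=False)  # don't block on checks that overran the deadline
    return results
```

A check that hangs past the deadline is simply reported as `"timeout"` while the fast checks still return their real status.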
2,326 | 8,318,437,341 | IssuesEvent | 2018-09-25 14:41:05 | gama-platform/gama | https://api.github.com/repos/gama-platform/gama | opened | Developer version crashes immediately with macOS 10.14 (Mojave) | Affects Maintainability Affects Stability Affects Usability Concerns Development OS macOS Priority High Version Git | **Describe the bug**
In the developer (i.e. Eclipse) version of GAMA, try to launch GAMA from the runtime product. A fatal error is immediately emitted and GAMA stops. This does not seem limited to GAMA (see context).
**Expected behavior**
A clear and concise description of what you expected to happen.
**Logs**
In the logs, the problematic frame seems to be:
```
Stack: [0x00007ffeee9bc000,0x00007ffeef1bc000], sp=0x00007ffeef1b7688, free space=8173k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
C [libobjc.A.dylib+0x7a29] objc_msgSend+0x29
C [libswt-pi-cocoa-4763.jnilib+0x125b5] Java_org_eclipse_swt_internal_cocoa_OS_objc_1msgSend__JJJ+0x35
j org.eclipse.swt.internal.cocoa.OS.objc_msgSend(JJJ)J+0
j org.eclipse.swt.internal.cocoa.NSApplication.setDelegate(Lorg/eclipse/swt/internal/cocoa/id;)V+19
j org.eclipse.swt.widgets.Display.init()V+100
j org.eclipse.swt.graphics.Device.<init>(Lorg/eclipse/swt/graphics/DeviceData;)V+168
j org.eclipse.swt.widgets.Display.<init>(Lorg/eclipse/swt/graphics/DeviceData;)V+2
j org.eclipse.swt.widgets.Display.<init>()V+2
j org.eclipse.swt.widgets.Display.getDefault()Lorg/eclipse/swt/widgets/Display;+15
j msi.gama.application.Application.createProcessor()V+0
```
**Desktop (please complete the following information):**
- OS: macOS Mojave (10.14)
- PC Model: MacBook Pro 2016
- GAMA version: git
- Java version: 1.8
**Additional context**
Similar Eclipse bug already reported here: https://bugs.eclipse.org/bugs/show_bug.cgi?id=538377 | True | Developer version crashes immediately with macOS 10.14 (Mojave) - **Describe the bug**
In the developer (i.e. Eclipse) version of GAMA, try to launch GAMA from the runtime product. A fatal error is immediately emitted and GAMA stops. This does not seem limited to GAMA (see context).
**Expected behavior**
A clear and concise description of what you expected to happen.
**Logs**
In the logs, the problematic frame seems to be:
```
Stack: [0x00007ffeee9bc000,0x00007ffeef1bc000], sp=0x00007ffeef1b7688, free space=8173k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
C [libobjc.A.dylib+0x7a29] objc_msgSend+0x29
C [libswt-pi-cocoa-4763.jnilib+0x125b5] Java_org_eclipse_swt_internal_cocoa_OS_objc_1msgSend__JJJ+0x35
j org.eclipse.swt.internal.cocoa.OS.objc_msgSend(JJJ)J+0
j org.eclipse.swt.internal.cocoa.NSApplication.setDelegate(Lorg/eclipse/swt/internal/cocoa/id;)V+19
j org.eclipse.swt.widgets.Display.init()V+100
j org.eclipse.swt.graphics.Device.<init>(Lorg/eclipse/swt/graphics/DeviceData;)V+168
j org.eclipse.swt.widgets.Display.<init>(Lorg/eclipse/swt/graphics/DeviceData;)V+2
j org.eclipse.swt.widgets.Display.<init>()V+2
j org.eclipse.swt.widgets.Display.getDefault()Lorg/eclipse/swt/widgets/Display;+15
j msi.gama.application.Application.createProcessor()V+0
```
**Desktop (please complete the following information):**
- OS: macOS Mojave (10.14)
- PC Model: MacBook Pro 2016
- GAMA version: git
- Java version: 1.8
**Additional context**
Similar Eclipse bug already reported here: https://bugs.eclipse.org/bugs/show_bug.cgi?id=538377 | main | developer version crashes immediately with macos mojave describe the bug in the developer i e eclipse version of gama try to launch gama from the runtime product a fatal error is immediately emitted and gama stops this does not seem limited to gama see context expected behavior a clear and concise description of what you expected to happen logs in the logs the problematic frame seems to be stack sp free space native frames j compiled java code j interpreted vv vm code c native code c objc msgsend c java org eclipse swt internal cocoa os objc jjj j org eclipse swt internal cocoa os objc msgsend jjj j j org eclipse swt internal cocoa nsapplication setdelegate lorg eclipse swt internal cocoa id v j org eclipse swt widgets display init v j org eclipse swt graphics device lorg eclipse swt graphics devicedata v j org eclipse swt widgets display lorg eclipse swt graphics devicedata v j org eclipse swt widgets display v j org eclipse swt widgets display getdefault lorg eclipse swt widgets display j msi gama application application createprocessor v desktop please complete the following information os macos mojave pc model macbook pro gama version git java version additional context similar eclipse bug already reported here | 1 |
1,266 | 5,368,424,891 | IssuesEvent | 2017-02-22 08:44:57 | tgstation/tgstation | https://api.github.com/repos/tgstation/tgstation | closed | calls to add overlay needs to be batched | Maintainability/Hinders improvements Not a bug | add_overlay supports taking a list, the way it adds entries to the overlays list causes byond to only regenerate the appearance once even when adding a list.
Most of the overhead of add_overlay is from regenerating the appearance.
So the obvious answer is that things that call add_overlay need to batch their calls so all overlays get added at once.
Most actually do, purely because they only add 1 overlay.
But shit like mobs and other things that add/remove overlays a lot do not.
(It doesn't help that @RemieRichards's current system for human overlays makes batching them up kinda hard.) (not trying to call you out rr, just pointing this out in case you can think up a solution since you know your system better than I do.)
I thought about doing a subsystem and batching all overlay calls in a given tick, but removing becomes an issue because of how byond treats overlays. The only way we could do that is if all overlay operations only added/removed /appearance/s, because -='ing from an overlay list is treated specially compared to a normal list if what you are removing is not an appearance.
@tgstation/commit-access
@Cyberboss (because you've done some performance prs)
If at the least, maintainers could require this on new content, we can start to hedge off this issue. | True | calls to add overlay needs to be batched - add_overlay supports taking a list, the way it adds entries to the overlays list causes byond to only regenerate the appearance once even when adding a list.
Most of the overhead of add_overlay is from regenerating the appearance.
So the obvious answer is that things that call add_overlay need to batch their calls so all overlays get added at once.
Most actually do, purely because they only add 1 overlay.
But shit like mobs and other things that add/remove overlays a lot do not.
(It doesn't help that @RemieRichards's current system for human overlays makes batching them up kinda hard.) (not trying to call you out rr, just pointing this out in case you can think up a solution since you know your system better than I do.)
I thought about doing a subsystem and batching all overlay calls in a given tick, but removing becomes an issue because of how byond treats overlays. The only way we could do that is if all overlay operations only added/removed /appearance/s, because -='ing from an overlay list is treated specially compared to a normal list if what you are removing is not an appearance.
@tgstation/commit-access
@Cyberboss (because you've done some performance prs)
If at the least, maintainers could require this on new content, we can start to hedge off this issue. | main | calls to add overlay needs to be batched add overlay supports taking a list the way it adds entries to the overlays list causes byond to only regenerate the appearance once even when adding a list most of the overhead of add overlay is from regenerating the appearance so the obvious answer is that things that call add overlay need to batch it s calls so all overlays get added at once most actually do purely because they only add overlay but shit like mobs and other things that add remove overlays alot do not it doesn t help that remierichards s current system for human overlays makes batching them up kinda hard not trying to call you out rr just pointing this out in case you can think up a solution since you know your system better then i do i thought about doing a subsystem and batching all overlay calls in a given tick but removing becomes an issue because of how byond treats overlays the only way we could do that is if all overlay operations only added removed appearance s because ing from an overlay list is treated specially compared to a normal list if what you are removing is not an appearance tgstation commit access cyberboss because you ve done some performance prs if at the least maintainers could require this on new content we can start to hedge off this issue | 1 |
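The cost model argued in this row — every overlay mutation triggers an expensive appearance regeneration, so mutations should be batched — can be illustrated generically. This is a Python sketch only; BYOND/DM's real overlay semantics are more involved, and `Sprite` is an invented stand-in.

```python
class Sprite:
    """Toy stand-in for an atom whose appearance is costly to rebuild."""

    def __init__(self):
        self.overlays = []
        self.regenerations = 0  # counts how often the expensive rebuild ran

    def _regenerate_appearance(self):
        # stands in for the costly work done whenever `overlays` changes
        self.regenerations += 1

    def add_overlay(self, overlay):
        """Accept a single overlay or a list of overlays; either way the
        appearance is regenerated exactly once, so callers should batch."""
        items = overlay if isinstance(overlay, list) else [overlay]
        self.overlays.extend(items)
        self._regenerate_appearance()
```

Adding three overlays one at a time pays the rebuild three times; passing them as one list pays it once, which is the whole point of batching the calls.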
161,325 | 12,540,354,363 | IssuesEvent | 2020-06-05 10:12:59 | tgstation/tgstation | https://api.github.com/repos/tgstation/tgstation | closed | Roundstart portable air tank bomb bug | Test Merge Bug | <!-- Write **BELOW** The Headers and **ABOVE** The comments else it may not be viewable -->
## Round ID:[138785](https://scrubby.melonmesa.com/round/138785)
## Testmerges:
Fixes turf atmos getting deleted. by nemvar
[REVIEW] Adds Medical Wounds: Bamboo Bones and the Skin of Your Teeth by Ryll-Ryll
## Reproduction:
I did the same thing yesterday and the below did not result in an explosion with my death.
Grab a large red o2 tank from the emergency locker, run to a portable air pump, set it to in, turn it on, and max the value on it. Drag it around to suck up a bunch of air; by this time it should have at least 3000 pressure. Turn it off when you are past 3000 and put an air tank in it. You now have a suicide bomb: turn it on to create a large explosion of at least 5x5 empty tiles. At about 2500 it will result in 30,000 in the air tank.
| 1.0 | Roundstart portable air tank bomb bug - <!-- Write **BELOW** The Headers and **ABOVE** The comments else it may not be viewable -->
## Round ID:[138785](https://scrubby.melonmesa.com/round/138785)
## Testmerges:
Fixes turf atmos getting deleted. by nemvar
[REVIEW] Adds Medical Wounds: Bamboo Bones and the Skin of Your Teeth by Ryll-Ryll
## Reproduction:
I did the same thing yesterday and the below did not result in an explosion with my death.
Grab a large red o2 tank from the emergency locker, run to a portable air pump, set it to in, turn it on, and max the value on it. Drag it around to suck up a bunch of air; by this time it should have at least 3000 pressure. Turn it off when you are past 3000 and put an air tank in it. You now have a suicide bomb: turn it on to create a large explosion of at least 5x5 empty tiles. At about 2500 it will result in 30,000 in the air tank.
| non_main | roundstart portable air tank bomb bug round id include the round id if you discovered this issue from playing tgstation hosted servers it can be found in the status panel or retrieved from the round id let s us look up valuable information and logs for the round the bug happened testmerges fixes turf atmos getting deleted by nemvar adds medical wounds bamboo bones and the skin of your teeth by ryll ryll reproduction i did the same thing yesterday and the below did not result in an explosion with my death grab a large red tank from the emergency locker run to a portable air pump set to in turn it on max the value on it drag it around to suck up a bunch of air by this time it should have at least pressure turn it off when you are past put an air tank in it you now have a suicide bomb turn on and create a large explosion at least empty tiles at about it will result in in the air tank | 0 |
2,986 | 3,052,049,107 | IssuesEvent | 2015-08-12 12:45:32 | angular/angular | https://api.github.com/repos/angular/angular | closed | Watch broken in test.unit.dart | comp: build/dev-productivity comp: build/pipeline P1: urgent | Steps to reproduce:
* run `test.unit.dart`
* wait for tests to finish
* touch one of the files (ex. `compiler/integration_spec.ts`)
Problem:
> [10:46:33] Starting '!build/tree.dart'...
[10:46:35] '!build/tree.dart' errored after 1.83 s
[10:46:35] TypeError: [TSToDartTranspiler]: Cannot read property '0' of undefined
at FacadeConverter.getFileAndName (/source/facade_converter.ts:147:33)
at FacadeConverter.visitTypeName (/source/facade_converter.ts:122:30)
at ModuleTranspiler.visitNode (/source/module.ts:71:17)
at Transpiler.visit (/source/main.ts:254:31)
at ModuleTranspiler.TranspilerBase.visit (/source/base.ts:20:39)
at ModuleTranspiler.TranspilerBase.visitList (/source/base.ts:35:12)
at ModuleTranspiler.visitNode (/source/module.ts:57:14)
at Transpiler.visit (/source/main.ts:254:31)
at ModuleTranspiler.TranspilerBase.visit (/source/base.ts:20:39)
at ModuleTranspiler.visitNode (/source/module.ts:43:16)
This error makes running tests in Dart a pretty miserable experience, so I'm short of flagging it as P0....
//cc: @mprobst @IgorMinar | 2.0 | Watch broken in test.unit.dart - Steps to reproduce:
* run `test.unit.dart`
* wait for tests to finish
* touch one of the files (ex. `compiler/integration_spec.ts`)
Problem:
> [10:46:33] Starting '!build/tree.dart'...
[10:46:35] '!build/tree.dart' errored after 1.83 s
[10:46:35] TypeError: [TSToDartTranspiler]: Cannot read property '0' of undefined
at FacadeConverter.getFileAndName (/source/facade_converter.ts:147:33)
at FacadeConverter.visitTypeName (/source/facade_converter.ts:122:30)
at ModuleTranspiler.visitNode (/source/module.ts:71:17)
at Transpiler.visit (/source/main.ts:254:31)
at ModuleTranspiler.TranspilerBase.visit (/source/base.ts:20:39)
at ModuleTranspiler.TranspilerBase.visitList (/source/base.ts:35:12)
at ModuleTranspiler.visitNode (/source/module.ts:57:14)
at Transpiler.visit (/source/main.ts:254:31)
at ModuleTranspiler.TranspilerBase.visit (/source/base.ts:20:39)
at ModuleTranspiler.visitNode (/source/module.ts:43:16)
This error makes running tests in Dart a pretty miserable experience, so I'm short of flagging it as P0....
//cc: @mprobst @IgorMinar | non_main | watch broken in test unit dart steps to reproduce run test unit dart wait for tests to finish touch one of the files ex compiler integration spec ts problem starting build tree dart build tree dart errored after s typeerror cannot read property of undefined at facadeconverter getfileandname source facade converter ts at facadeconverter visittypename source facade converter ts at moduletranspiler visitnode source module ts at transpiler visit source main ts at moduletranspiler transpilerbase visit source base ts at moduletranspiler transpilerbase visitlist source base ts at moduletranspiler visitnode source module ts at transpiler visit source main ts at moduletranspiler transpilerbase visit source base ts at moduletranspiler visitnode source module ts this error makes running tests in dart pretty miserable experience so i m short of flagging is as cc mprobst igorminar | 0 |
4,487 | 23,375,624,143 | IssuesEvent | 2022-08-11 02:28:06 | restqa/restqa | https://api.github.com/repos/restqa/restqa | closed | [Dashboard] Search file by feature name | enhancement wontfix pair with maintainer | Hello 👋,
### 👀 Background
While using the RestQA Dashboard the team might have a lot of features to work with.
### ✌️ What is the actual behavior?
On the editor the selection of the file can be done only through feature filename.

### 🕵️♀️ How to reproduce the current behavior?
1. Initiate a RestQA project `restqa init`
2. Run the dashboard `restqa dashboard`
3. Access to the dashboard go to editor tab
4. Select one or few files
### 🤞 What is the expected behavior?
It would be great if we could :
- have the choice to show the files by file name or by feature title
- have a search input that allows the user to pre-filter the list of files
Cheers.
| True | [Dashboard] Search file by feature name - Hello 👋,
### 👀 Background
While using the RestQA Dashboard the team might have a lot of features to work with.
### ✌️ What is the actual behavior?
In the editor, files can be selected only by their feature filename.

### 🕵️♀️ How to reproduce the current behavior?
1. Initiate a RestQA project `restqa init`
2. Run the dashboard `restqa dashboard`
3. Access to the dashboard go to editor tab
4. Select one or few files
### 🤞 What is the expected behavior?
It would be great if we could :
- have the choice to show the files by file name or by feature title
- have a search input that allows the user to pre-filter the list of files
Cheers.
| main | search file by feature name hello 👋 👀 background while using the restqa dashboard the team might have a lot of feature to work with ✌️ what is the actual behavior on the editor the selection of the file can be done only through feature filename 🕵️♀️ how to reproduce the current behavior initiate a restqa project restqa init run the dashboard restqa dashboard access to the dashboard go to editor tab select one or few files 🤞 what is the expected behavior it would be great if we could have the choice to show the files through file name or feature title having a search input that allow the user to pre filter the list of files cheers | 1 |
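The behavior requested in this row — searching/listing files by their Gherkin feature title rather than only by filename — can be sketched like this. This is an illustrative Python sketch only; RestQA itself is a Node.js project, and both helper names below are invented.

```python
def feature_title(source):
    """Return the text after the first `Feature:` keyword, or None."""
    for line in source.splitlines():
        stripped = line.strip()
        if stripped.startswith("Feature:"):
            return stripped[len("Feature:"):].strip()
    return None

def search_features(files, query):
    """files maps filename -> Gherkin source; keep entries whose filename
    or feature title contains `query`, case-insensitively."""
    q = query.lower()
    return {
        name: feature_title(src)
        for name, src in files.items()
        if q in name.lower() or q in (feature_title(src) or "").lower()
    }
```

With such an index the UI could offer both display modes (filename or title) and drive the pre-filter search box from the same function.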
204,853 | 23,291,773,765 | IssuesEvent | 2022-08-06 01:02:30 | AlexRogalskiy/roadmap | https://api.github.com/repos/AlexRogalskiy/roadmap | opened | CVE-2022-0084 (Medium) detected in xnio-api-3.8.7.Final.jar | security vulnerability | ## CVE-2022-0084 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xnio-api-3.8.7.Final.jar</b></p></summary>
<p>The API JAR of the XNIO project</p>
<p>Library home page: <a href="http://www.jboss.org/xnio">http://www.jboss.org/xnio</a></p>
<p>Path to dependency file: /modules/roadmap-router-service/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/jboss/xnio/xnio-api/3.8.7.Final/xnio-api-3.8.7.Final.jar,/home/wss-scanner/.m2/repository/org/jboss/xnio/xnio-api/3.8.7.Final/xnio-api-3.8.7.Final.jar,/home/wss-scanner/.m2/repository/org/jboss/xnio/xnio-api/3.8.7.Final/xnio-api-3.8.7.Final.jar</p>
<p>
Dependency Hierarchy:
- undertow-core-2.3.0.Alpha1.jar (Root Library)
- :x: **xnio-api-3.8.7.Final.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/roadmap/commit/fff78511c51c60993515e1063ec6b51834154cd1">fff78511c51c60993515e1063ec6b51834154cd1</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A flaw was found in XNIO, specifically in the notifyReadClosed method. The issue revealed this method was logging a message to another expected end. This flaw allows an attacker to send flawed requests to a server, possibly causing log contention-related performance concerns or an unwanted disk fill-up.
<p>Publish Date: 2022-01-03
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0084>CVE-2022-0084</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2022-0084 (Medium) detected in xnio-api-3.8.7.Final.jar - ## CVE-2022-0084 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xnio-api-3.8.7.Final.jar</b></p></summary>
<p>The API JAR of the XNIO project</p>
<p>Library home page: <a href="http://www.jboss.org/xnio">http://www.jboss.org/xnio</a></p>
<p>Path to dependency file: /modules/roadmap-router-service/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/jboss/xnio/xnio-api/3.8.7.Final/xnio-api-3.8.7.Final.jar,/home/wss-scanner/.m2/repository/org/jboss/xnio/xnio-api/3.8.7.Final/xnio-api-3.8.7.Final.jar,/home/wss-scanner/.m2/repository/org/jboss/xnio/xnio-api/3.8.7.Final/xnio-api-3.8.7.Final.jar</p>
<p>
Dependency Hierarchy:
- undertow-core-2.3.0.Alpha1.jar (Root Library)
- :x: **xnio-api-3.8.7.Final.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/roadmap/commit/fff78511c51c60993515e1063ec6b51834154cd1">fff78511c51c60993515e1063ec6b51834154cd1</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A flaw was found in XNIO, specifically in the notifyReadClosed method. The issue revealed this method was logging a message to another expected end. This flaw allows an attacker to send flawed requests to a server, possibly causing log contention-related performance concerns or an unwanted disk fill-up.
<p>Publish Date: 2022-01-03
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0084>CVE-2022-0084</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve medium detected in xnio api final jar cve medium severity vulnerability vulnerable library xnio api final jar the api jar of the xnio project library home page a href path to dependency file modules roadmap router service pom xml path to vulnerable library home wss scanner repository org jboss xnio xnio api final xnio api final jar home wss scanner repository org jboss xnio xnio api final xnio api final jar home wss scanner repository org jboss xnio xnio api final xnio api final jar dependency hierarchy undertow core jar root library x xnio api final jar vulnerable library found in head commit a href found in base branch master vulnerability details a flaw was found in xnio specifically in the notifyreadclosed method the issue revealed this method was logging a message to another expected end this flaw allows an attacker to send flawed requests to a server possibly causing log contention related performance concerns or an unwanted disk fill up publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href step up your open source security game with mend | 0 |
162 | 2,700,634,325 | IssuesEvent | 2015-04-04 11:32:34 | krico/jas | https://api.github.com/repos/krico/jas | opened | Get rid of date on generated code | backend low prio maintainer | The slim3 generated code has this annoying `@javax.annotation.Generated` that has the date, this causes conflicts if two devs generate in paralel.
```java
package com.jasify.schedule.appengine.meta.payment;
//@javax.annotation.Generated(value = { "slim3-gen", "@VERSION@" }, date = "2015-03-22 13:08:07")
/** */
public final class PaymentMeta extends org.slim3.datastore.ModelMeta<com.jasify.schedule.appengine.model.payment.Payment> {
...
}
``` | True | Get rid of date on generated code - The slim3 generated code has this annoying `@javax.annotation.Generated` that has the date, this causes conflicts if two devs generate in paralel.
```java
package com.jasify.schedule.appengine.meta.payment;
//@javax.annotation.Generated(value = { "slim3-gen", "@VERSION@" }, date = "2015-03-22 13:08:07")
/** */
public final class PaymentMeta extends org.slim3.datastore.ModelMeta<com.jasify.schedule.appengine.model.payment.Payment> {
...
}
``` | main | get rid of date on generated code the generated code has this annoying javax annotation generated that has the date this causes conflicts if two devs generate in paralel java package com jasify schedule appengine meta payment javax annotation generated value gen version date public final class paymentmeta extends org datastore modelmeta | 1 |
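Until the generator stops emitting the timestamp, one hedged workaround is a post-generation pass that strips the volatile `date` attribute from the annotation. The sketch below is illustrative only — the regex and the workflow are not part of slim3.

```python
import re

# matches `, date = "2015-03-22 13:08:07"` inside the @Generated annotation
GENERATED_DATE = re.compile(r',\s*date\s*=\s*"[^"]*"')

def strip_generated_date(java_source):
    """Drop the date attribute from @javax.annotation.Generated so that
    regenerating the same model on two machines yields identical files."""
    return GENERATED_DATE.sub("", java_source)
```

Running this over the generated sources before committing them removes the only part of the annotation that differs between developers.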
3,312 | 12,827,461,152 | IssuesEvent | 2020-07-06 18:31:56 | OpenRefine/OpenRefine | https://api.github.com/repos/OpenRefine/OpenRefine | closed | Stop testing for Oracle JDK in CI | enhancement maintainability | The differences between Oracle JDK and OpenJDK are minor starting from version 11. It is probably wasteful to be running both of them in Travis.
More background on the differences: https://www.baeldung.com/oracle-jdk-vs-openjdk
Over the past few years I do not recall encountering any bug in OpenRefine which would only be present with one of OracleJDK and OpenJDK. Running many builds at the same time increases the chances that we run into spurious build failures and can increase the build time if the builds are not run concurrently. So I propose we remove oraclejdk11 from Travis. | True | Stop testing for Oracle JDK in CI - The differences between Oracle JDK and OpenJDK are minor starting from version 11. It is probably wasteful to be running both of them in Travis.
More background on the differences: https://www.baeldung.com/oracle-jdk-vs-openjdk
Over the past few years I do not recall encountering any bug in OpenRefine which would only be present with one of OracleJDK and OpenJDK. Running many builds at the same time increases the chances that we run into spurious build failures and can increase the build time if the builds are not run concurrently. So I propose we remove oraclejdk11 from Travis. | main | stop testing for oracle jdk in ci the differences between oracle jdk and openjdk are minor starting from version it is probably wasteful to be running both of them in travis more background on the differences over the past few years i do not recall encountering any bug in openrefine which would only be present with one of oraclejdk and openjdk running many builds at the same time increases the chances that we run into spurious build failures and can increase the build time if the builds are not run concurrently so i propose we remove from travis | 1 |
4,236 | 20,984,588,622 | IssuesEvent | 2022-03-29 00:45:48 | aws/aws-sam-cli | https://api.github.com/repos/aws/aws-sam-cli | closed | generate-event: Add Application Load Balancer | maintainer/need-followup | ### Describe your idea/feature/enhancement
Application load balancer payload is missing from `sam local generate-event`
### Proposal
Add Application load balancer payload event to generate-event
### Additional Details
- [Example Application Load Balancer request event](https://docs.aws.amazon.com/lambda/latest/dg/services-alb.html)
```json5
{
"requestContext": {
"elb": {
"targetGroupArn": "arn:aws:elasticloadbalancing:us-east-2:123456789012:targetgroup/lambda-279XGJDqGZ5rsrHC2Fjr/49e9d65c45c6791a"
}
},
"httpMethod": "GET",
"path": "/lambda",
"queryStringParameters": {
"query": "1234ABCD"
},
"headers": {
"accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8",
"accept-encoding": "gzip",
"accept-language": "en-US,en;q=0.9",
"connection": "keep-alive",
"host": "lambda-alb-123578498.us-east-2.elb.amazonaws.com",
"upgrade-insecure-requests": "1",
"user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36",
"x-amzn-trace-id": "Root=1-5c536348-3d683b8b04734faae651f476",
"x-forwarded-for": "72.12.164.125",
"x-forwarded-port": "80",
"x-forwarded-proto": "http",
"x-imforwards": "20"
},
"body": "",
"isBase64Encoded": false
}
```
- [alb-lambda-target-request-headers-only.json](https://github.com/aws/aws-lambda-go/blob/main/events/testdata/alb-lambda-target-request-headers-only.json)
```json5
{
"requestContext": {
"elb": {
"targetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/lambda-target/abcdefg"
}
},
"httpMethod": "GET",
"path": "/",
"queryStringParameters": {
"key": "hello"
},
"headers": {
"accept": "*/*",
"connection": "keep-alive",
"host": "lambda-test-alb-1334523864.us-east-1.elb.amazonaws.com",
"user-agent": "curl/7.54.0",
"x-amzn-trace-id": "Root=1-5c34e93e-4dea0086f9763ac0667b115a",
"x-forwarded-for": "25.12.198.67",
"x-forwarded-port": "80",
"x-forwarded-proto": "http",
"x-imforwards": "20",
"x-myheader": "123"
},
"isBase64Encoded": false
}
```
- [alb-lambda-target-request-multivalue-headers.json](https://github.com/aws/aws-lambda-go/blob/main/events/testdata/alb-lambda-target-request-multivalue-headers.json)
```json5
{
"requestContext": {
"elb": {
"targetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/lambda-target/abcdefgh"
}
},
"httpMethod": "GET",
"path": "/",
"multiValueQueryStringParameters": {
"key": [
"hello"
]
},
"multiValueHeaders": {
"accept": [
"*/*"
],
"connection": [
"keep-alive"
],
"host": [
"lambda-test-alb-1234567.us-east-1.elb.amazonaws.com"
],
"user-agent": [
"curl/7.54.0"
],
"x-amzn-trace-id": [
"Root=1-5c34e7d4-00ca239424b68028d4c56d68"
],
"x-forwarded-for": [
"72.21.198.67"
],
"x-forwarded-port": [
"80"
],
"x-forwarded-proto": [
"http"
],
"x-imforwards": [
"20"
],
"x-myheader": [
"123"
]
},
"body": "Some text",
"isBase64Encoded": false
}
```
| True | generate-event: Add Application Load Balancer - ### Describe your idea/feature/enhancement
Application load balancer payload is missing from `sam local generate-event`
### Proposal
Add Application load balancer payload event to generate-event
### Additional Details
- [Example Application Load Balancer request event](https://docs.aws.amazon.com/lambda/latest/dg/services-alb.html)
```json5
{
"requestContext": {
"elb": {
"targetGroupArn": "arn:aws:elasticloadbalancing:us-east-2:123456789012:targetgroup/lambda-279XGJDqGZ5rsrHC2Fjr/49e9d65c45c6791a"
}
},
"httpMethod": "GET",
"path": "/lambda",
"queryStringParameters": {
"query": "1234ABCD"
},
"headers": {
"accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8",
"accept-encoding": "gzip",
"accept-language": "en-US,en;q=0.9",
"connection": "keep-alive",
"host": "lambda-alb-123578498.us-east-2.elb.amazonaws.com",
"upgrade-insecure-requests": "1",
"user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36",
"x-amzn-trace-id": "Root=1-5c536348-3d683b8b04734faae651f476",
"x-forwarded-for": "72.12.164.125",
"x-forwarded-port": "80",
"x-forwarded-proto": "http",
"x-imforwards": "20"
},
"body": "",
"isBase64Encoded": false
}
```
- [alb-lambda-target-request-headers-only.json](https://github.com/aws/aws-lambda-go/blob/main/events/testdata/alb-lambda-target-request-headers-only.json)
```json5
{
"requestContext": {
"elb": {
"targetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/lambda-target/abcdefg"
}
},
"httpMethod": "GET",
"path": "/",
"queryStringParameters": {
"key": "hello"
},
"headers": {
"accept": "*/*",
"connection": "keep-alive",
"host": "lambda-test-alb-1334523864.us-east-1.elb.amazonaws.com",
"user-agent": "curl/7.54.0",
"x-amzn-trace-id": "Root=1-5c34e93e-4dea0086f9763ac0667b115a",
"x-forwarded-for": "25.12.198.67",
"x-forwarded-port": "80",
"x-forwarded-proto": "http",
"x-imforwards": "20",
"x-myheader": "123"
},
"isBase64Encoded": false
}
```
- [alb-lambda-target-request-multivalue-headers.json](https://github.com/aws/aws-lambda-go/blob/main/events/testdata/alb-lambda-target-request-multivalue-headers.json)
```json5
{
"requestContext": {
"elb": {
"targetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/lambda-target/abcdefgh"
}
},
"httpMethod": "GET",
"path": "/",
"multiValueQueryStringParameters": {
"key": [
"hello"
]
},
"multiValueHeaders": {
"accept": [
"*/*"
],
"connection": [
"keep-alive"
],
"host": [
"lambda-test-alb-1234567.us-east-1.elb.amazonaws.com"
],
"user-agent": [
"curl/7.54.0"
],
"x-amzn-trace-id": [
"Root=1-5c34e7d4-00ca239424b68028d4c56d68"
],
"x-forwarded-for": [
"72.21.198.67"
],
"x-forwarded-port": [
"80"
],
"x-forwarded-proto": [
"http"
],
"x-imforwards": [
"20"
],
"x-myheader": [
"123"
]
},
"body": "Some text",
"isBase64Encoded": false
}
```
| main | generate event add application load balancer describe your idea feature enhancement application load balancer payload is missing from sam local generate event proposal add application load balancer payload event to generate event additional details requestcontext elb targetgrouparn arn aws elasticloadbalancing us east targetgroup lambda httpmethod get path lambda querystringparameters query headers accept text html application xhtml xml application xml q image webp image apng q accept encoding gzip accept language en us en q connection keep alive host lambda alb us east elb amazonaws com upgrade insecure requests user agent mozilla windows nt applewebkit khtml like gecko chrome safari x amzn trace id root x forwarded for x forwarded port x forwarded proto http x imforwards body false requestcontext elb targetgrouparn arn aws elasticloadbalancing us east targetgroup lambda target abcdefg httpmethod get path querystringparameters key hello headers accept connection keep alive host lambda test alb us east elb amazonaws com user agent curl x amzn trace id root x forwarded for x forwarded port x forwarded proto http x imforwards x myheader false requestcontext elb targetgrouparn arn aws elasticloadbalancing us east targetgroup lambda target abcdefgh httpmethod get path multivaluequerystringparameters key hello multivalueheaders accept connection keep alive host lambda test alb us east elb amazonaws com user agent curl x amzn trace id root x forwarded for x forwarded port x forwarded proto http x imforwards x myheader body some text false | 1 |
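Until `generate-event` grows an ALB subcommand, an equivalent payload can be assembled by hand. The sketch below is a hypothetical Python helper, not part of SAM CLI; the target group ARN and default headers are placeholders. It mirrors the field names of the sample payloads shown above:

```python
def make_alb_event(path="/", method="GET", query=None, headers=None, body=""):
    """Build a minimal ALB -> Lambda request event for local testing.

    Field names mirror the sample Application Load Balancer payload
    documented by AWS; the target group ARN below is a placeholder.
    """
    return {
        "requestContext": {
            "elb": {
                "targetGroupArn": (
                    "arn:aws:elasticloadbalancing:us-east-1:"
                    "123456789012:targetgroup/example/0123456789abcdef"
                )
            }
        },
        "httpMethod": method,
        "path": path,
        "queryStringParameters": query or {},
        "headers": headers or {"x-forwarded-proto": "http", "x-forwarded-port": "80"},
        "body": body,
        "isBase64Encoded": False,
    }

# Reproduce the shape of the first sample payload above.
event = make_alb_event(path="/lambda", query={"query": "1234ABCD"})
print(event["httpMethod"], event["path"])  # GET /lambda
```

The resulting dict can be dumped to JSON and fed to `sam local invoke` the same way a generated event file would be.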
131,026 | 12,468,111,737 | IssuesEvent | 2020-05-28 18:15:58 | akirillo/begin_2.0-jamstack | https://api.github.com/repos/akirillo/begin_2.0-jamstack | closed | Add JSDoc Comments | documentation | **Is your feature request related to a problem? Please describe.**
Proper documentation for this website is both crucial, and lacking. We need to keep this code maintainable, so that it can be improved upon by others in the future.
**Describe the solution you'd like**
Code documentation using [JSDoc](https://jsdoc.app/).
**Describe alternatives you've considered**
If there are better documentation tools, I'm all ears. I know [gitbook](gitbook.com) is very popular, but JSDoc is nice in that it can generate a documentation webpage based off of your code comments. | 1.0 | Add JSDoc Comments - **Is your feature request related to a problem? Please describe.**
Proper documentation for this website is both crucial, and lacking. We need to keep this code maintainable, so that it can be improved upon by others in the future.
**Describe the solution you'd like**
Code documentation using [JSDoc](https://jsdoc.app/).
**Describe alternatives you've considered**
If there are better documentation tools, I'm all ears. I know [gitbook](gitbook.com) is very popular, but JSDoc is nice in that it can generate a documentation webpage based off of your code comments. | non_main | add jsdoc comments is your feature request related to a problem please describe proper documentation for this website is both crucial and lacking we need to keep this code maintainable so that it can be improved upon by others in the future describe the solution you d like code documentation using describe alternatives you ve considered if there are better documentation tools i m all ears i know gitbook com is very popular but jsdoc is nice in that it can generate a documentation webpage based off of your code comments | 0 |
212,289 | 23,880,895,940 | IssuesEvent | 2022-09-08 01:06:17 | venkateshreddypala/aircraft | https://api.github.com/repos/venkateshreddypala/aircraft | opened | CVE-2022-38751 (Medium) detected in snakeyaml-1.19.jar | security vulnerability | ## CVE-2022-38751 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>snakeyaml-1.19.jar</b></p></summary>
<p>YAML 1.1 parser and emitter for Java</p>
<p>Library home page: <a href="http://www.snakeyaml.org">http://www.snakeyaml.org</a></p>
<p>Path to dependency file: /aircraft/pom.xml</p>
<p>Path to vulnerable library: /root/.m2/repository/org/yaml/snakeyaml/1.19/snakeyaml-1.19.jar</p>
<p>
Dependency Hierarchy:
- spring-cloud-config-server-2.0.0.M9.jar (Root Library)
- :x: **snakeyaml-1.19.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Using snakeYAML to parse untrusted YAML files may be vulnerable to Denial of Service attacks (DOS). If the parser is running on user supplied input, an attacker may supply content that causes the parser to crash by stackoverflow.
<p>Publish Date: 2022-09-05
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-38751>CVE-2022-38751</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2022-38751 (Medium) detected in snakeyaml-1.19.jar - ## CVE-2022-38751 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>snakeyaml-1.19.jar</b></p></summary>
<p>YAML 1.1 parser and emitter for Java</p>
<p>Library home page: <a href="http://www.snakeyaml.org">http://www.snakeyaml.org</a></p>
<p>Path to dependency file: /aircraft/pom.xml</p>
<p>Path to vulnerable library: /root/.m2/repository/org/yaml/snakeyaml/1.19/snakeyaml-1.19.jar</p>
<p>
Dependency Hierarchy:
- spring-cloud-config-server-2.0.0.M9.jar (Root Library)
- :x: **snakeyaml-1.19.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Using snakeYAML to parse untrusted YAML files may be vulnerable to Denial of Service attacks (DOS). If the parser is running on user supplied input, an attacker may supply content that causes the parser to crash by stackoverflow.
<p>Publish Date: 2022-09-05
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-38751>CVE-2022-38751</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve medium detected in snakeyaml jar cve medium severity vulnerability vulnerable library snakeyaml jar yaml parser and emitter for java library home page a href path to dependency file aircraft pom xml path to vulnerable library root repository org yaml snakeyaml snakeyaml jar dependency hierarchy spring cloud config server jar root library x snakeyaml jar vulnerable library vulnerability details using snakeyaml to parse untrusted yaml files may be vulnerable to denial of service attacks dos if the parser is running on user supplied input an attacker may supply content that causes the parser to crash by stackoverflow publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href step up your open source security game with mend | 0 |
1,079 | 4,898,860,380 | IssuesEvent | 2016-11-21 08:06:18 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | ovirt_vms: No module named enum | affects_2.3 bug_report cloud waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
ovirt_vms
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
2.3.0
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
Linux Mint 18
##### SUMMARY
<!--- Explain the problem briefly -->
`ImportError: No module named enum` while trying to use the module; enum and enum34 are installed via pip
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
Clone ansible-modules-extras
run:
```
- name: Setup ovirt
tags:
- ovirt_deploy
ovirt_vms:
auth:
username: "{{ ovirt_user }}"
password: "{{ ovirt_pass }}"
url: "{{ ovirt_control_url }}"
insecure: true
state: present
name: "{{ item.login }}"
template: unprovisioned
cluster: "{{ item.cluster_zone }}"
with_items: "{{ users_to_add }}"
```
<!--- Paste example playbooks or commands between quotes below -->
```
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
Runs without errors
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with high verbosity (-vvvv) -->
"module_stderr": "Shared connection to
10.150.0.250 closed.\r\n", "module_stdout": "Traceback (most recent call last):\r\n File \"/tmp/ansible_DZX974/ansible_module_ovirt_vms.py\", lin
e 28, in <module>\r\n from ansible.module_utils.ovirt import *\r\n File \"/tmp/ansible_DZX974/ansible_modlib.zip/ansible/module_utils/ovirt.py
\", line 27, in <module>\r\nImportError: No module named enum\r\n", "msg": "MODULE FAILURE"}
<!--- Paste verbatim command output between quotes below -->
```
```
| True | ovirt_vms: No module named enum - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
ovirt_vms
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
2.3.0
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
Linux Mint 18
##### SUMMARY
<!--- Explain the problem briefly -->
`ImportError: No module named enum` while trying to use the module; enum and enum34 are installed via pip
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
Clone ansible-modules-extras
run:
```
- name: Setup ovirt
tags:
- ovirt_deploy
ovirt_vms:
auth:
username: "{{ ovirt_user }}"
password: "{{ ovirt_pass }}"
url: "{{ ovirt_control_url }}"
insecure: true
state: present
name: "{{ item.login }}"
template: unprovisioned
cluster: "{{ item.cluster_zone }}"
with_items: "{{ users_to_add }}"
```
<!--- Paste example playbooks or commands between quotes below -->
```
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
Runs without errors
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with high verbosity (-vvvv) -->
"module_stderr": "Shared connection to
10.150.0.250 closed.\r\n", "module_stdout": "Traceback (most recent call last):\r\n File \"/tmp/ansible_DZX974/ansible_module_ovirt_vms.py\", lin
e 28, in <module>\r\n from ansible.module_utils.ovirt import *\r\n File \"/tmp/ansible_DZX974/ansible_modlib.zip/ansible/module_utils/ovirt.py
\", line 27, in <module>\r\nImportError: No module named enum\r\n", "msg": "MODULE FAILURE"}
<!--- Paste verbatim command output between quotes below -->
```
```
| main | ovirt vms no module named enum issue type bug report component name ovirt vms ansible version configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific linux mint summary importerror no module named enum while trying to use module enum and are installed via pip steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used clone ansible modules extras run name setup ovirt tags ovirt deploy ovirt vms auth username ovirt user password ovirt pass url ovirt control url insecure true state present name item login template unprovisioned cluster item cluster zone with items users to add expected results runs without errors actual results module stderr shared connection to closed r n module stdout traceback most recent call last r n file tmp ansible ansible module ovirt vms py lin e in r n from ansible module utils ovirt import r n file tmp ansible ansible modlib zip ansible module utils ovirt py line in r nimporterror no module named enum r n msg module failure | 1 |
5,855 | 31,280,559,686 | IssuesEvent | 2023-08-22 09:19:05 | google/wasefire | https://api.github.com/repos/google/wasefire | opened | Convert as much scripts as possible to xtask | needs:implementation crate:xtask for:usability for:maintainability | The shell scripts start to get more/too complex. We should migrate them to xtask as modules (or even libraries). Ideally, the only remaining script would be `setup.sh` to install enough things for xtask to run (probably rustup and build-essential). | True | Convert as much scripts as possible to xtask - The shell scripts start to get more/too complex. We should migrate them to xtask as modules (or even libraries). Ideally, the only remaining script would be `setup.sh` to install enough things for xtask to run (probably rustup and build-essential). | main | convert as much scripts as possible to xtask the shell scripts start to get more too complex we should migrate them to xtask as modules or even libraries ideally the only remaining script would be setup sh to install enough things for xtask to run probably rustup and build essential | 1 |
436,850 | 12,554,037,014 | IssuesEvent | 2020-06-07 00:24:22 | eclipse-ee4j/glassfish | https://api.github.com/repos/eclipse-ee4j/glassfish | closed | Parameterized interface confuses Roles annotation processor | Component: security ERR: Assignee Priority: Major Stale Type: Improvement | Serveral of my session beans' remote interfaces extend a common parameterized
interface, such that
SessionBean->RemoteInteface->ParameterizedInterface.
While this is quite useful, it totally confuses the annotation processor
looking at the roles annotation in the session bean – often leaving some
methods unsecured while others seem to work as expected. The unsecured methods
were a bit of a shock, since I had left a defensive RolesAllowed annotation on
the class a whole which had no effect whatsoever.
My understanding is that this issue is not addressed by the existing specs, but
the current behavior came as a shock to me, and certain aspects of it are quite
dangerous.
Ideally I would like to see the annotation processor understand that the methods
in the parameterized interface are nothing more than the same methods it is
seeing in the RemoteInterface/SessionBean.
Anyways, I have so far found three ways of getting around this (none of which
are very satisfying):
1\. Remove the parameterized interface from the inheritance hierarchy. I tested
this and it works, but it's not really an option at this point as there is too
much client code that relies on its existence.
2\. De-parameterize the parameterized interface. Again, I've tested this and it
works, but it eliminates the compile-time checks from which we are supposed to
benefit with a strongly-typed language. It also makes the code less readable -
more plain Objects, casts, etc.
3\. Write out the method permissions by hand in the ejb-jar.xml. This works, and
although its painful and probably more error-prone than annotations, it is my
current course of action.
There is a fuller discusion of this at:
[http://forums.java.net/jive/thread.jspa?threadID=40129&tstart=15](http://forums.java.net/jive/thread.jspa?threadID=40129&tstart=15)
Thank you,
Ross
#### Environment
Operating System: All
Platform: All
#### Affected Versions
[9.1peur1] | 1.0 | Parameterized interface confuses Roles annotation processor - Several of my session beans' remote interfaces extend a common parameterized
interface, such that
SessionBean->RemoteInteface->ParameterizedInterface.
While this is quite useful, it totally confuses the annotation processor
looking at the roles annotation in the session bean – often leaving some
methods unsecured while others seem to work as expected. The unsecured methods
were a bit of a shock, since I had left a defensive RolesAllowed annotation on
the class a whole which had no effect whatsoever.
My understanding is that this issue is not addressed by the existing specs, but
the current behavior came as a shock to me, and certain aspects of it are quite
dangerous.
Ideally I would like to see the annotation processor understand that the methods
in the parameterized interface are nothing more than the same methods it is
seeing in the RemoteInterface/SessionBean.
Anyways, I have so far found three ways of getting around this (none of which
are very satisfying):
1\. Remove the parameterized interface from the inheritance hierarchy. I tested
this and it works, but it's not really an option at this point as there is too
much client code that relies on its existence.
2\. De-parameterize the parameterized interface. Again, I've tested this and it
works, but it eliminates the compile-time checks from which we are supposed to
benefit with a strongly-typed language. It also makes the code less readable -
more plain Objects, casts, etc.
3\. Write out the method permissions by hand in the ejb-jar.xml. This works, and
although its painful and probably more error-prone than annotations, it is my
current course of action.
There is a fuller discussion of this at:
[http://forums.java.net/jive/thread.jspa?threadID=40129&tstart=15](http://forums.java.net/jive/thread.jspa?threadID=40129&tstart=15)
Thank you,
Ross
#### Environment
Operating System: All
Platform: All
#### Affected Versions
[9.1peur1] | non_main | parameterized interface confuses roles annotation processor serveral of my session beans remote interfaces extend a common parameterized interface such that sessionbean remoteinteface parameterizedinterface while this is quite usefule it totally confuses the annotation processor looking at the roles annotation in the session bean – often leaving some methods unsecured while others seem to work as expected the unsecured methods were a bit of a shock since i had left a defensive rolesallowed annotation on the class a whole which had no effect whatsoever my understanding is that this issue is not addressed by the existing specs but the current behavior came a shock to me and certain apects of are quite dangerous ideally i would like to see the annotation processor understand that the methods in the parameterized interface are nothing more than the same methods it is seeing in the remoteinterface sessionbean anyways i have so far found three ways of getting around this none of which are very satisfying remove the parameterized interface from the inheritance hierarchy i tested this and it works but it s not really a option at this point as there is too much client code that relies on its existence de parameterize the parameterized interface again i ve tested this and it works but it eliminates the compile time checks from which we are supposed to benefit with a strongly typed language it also makes the code less readable more plain objects casts etc write out the method permissions by hand in the ejb jar xml this works and although its painful and probably more error prone than annotations it is my current course of action there is a fuller discusion of this at thank you ross environment operating system all platform all affected versions | 0 |
667,070 | 22,408,220,414 | IssuesEvent | 2022-06-18 09:54:31 | chaotic-aur/packages | https://api.github.com/repos/chaotic-aur/packages | closed | [Request] dot-bin | request:new-pkg waiting:upstream-fix priority:low | ## 👶 For requesting new packages
- Link to the package(s) in AUR: [dot-bin](https://aur.archlinux.org/packages/dot-bin)
- Utility this package has for you:
it's a browser based on firefox that claims to be privacy-oriented, ad-blocking, and easily customisable.
- Do you consider this package(s) to be useful for **every** chaotic user?:
- [ ] YES
- [ ] No, but yes for a great amount.
- [X] No, but yes for a few.
- [ ] No, it's useful only for me.
- Do you consider this package(s) to be useful for feature testing/preview (e.g: mesa-aco, wine-wayland)?:
- [ ] YES
- [X] NO
- Are you sure we don't have this package already (test with `pacman -Ss <pkgname>`)?:
- [X] YES
- Have you tested if this package builds in a clean chroot?:
- [ ] YES
- [X] NO
- Does the package's license allows us to redistribute it?:
- [X] YES
- [ ] No clue.
- [ ] No, but the author doesn't really care, it's just for bureaucracy.
- Have you searched the [issues](https://github.com/chaotic-aur/packages/issues) to ensure this request is new (not duplicated)?:
- [X] YES
- Have you read the [README](https://github.com/chaotic-aur/packages#banished-and-rejected-packages) to ensure this package is not banned?:
- [X] YES | 1.0 | [Request] dot-bin - ## 👶 For requesting new packages
- Link to the package(s) in AUR: [dot-bin](https://aur.archlinux.org/packages/dot-bin)
- Utility this package has for you:
it's a browser based on firefox that claims to be privacy-oriented, ad-blocking, and easily customisable.
- Do you consider this package(s) to be useful for **every** chaotic user?:
- [ ] YES
- [ ] No, but yes for a great amount.
- [X] No, but yes for a few.
- [ ] No, it's useful only for me.
- Do you consider this package(s) to be useful for feature testing/preview (e.g: mesa-aco, wine-wayland)?:
- [ ] YES
- [X] NO
- Are you sure we don't have this package already (test with `pacman -Ss <pkgname>`)?:
- [X] YES
- Have you tested if this package builds in a clean chroot?:
- [ ] YES
- [X] NO
- Does the package's license allows us to redistribute it?:
- [X] YES
- [ ] No clue.
- [ ] No, but the author doesn't really care, it's just for bureaucracy.
- Have you searched the [issues](https://github.com/chaotic-aur/packages/issues) to ensure this request is new (not duplicated)?:
- [X] YES
- Have you read the [README](https://github.com/chaotic-aur/packages#banished-and-rejected-packages) to ensure this package is not banned?:
- [X] YES | non_main | dot bin 👶 for requesting new packages link to the package s in aur utility this package has for you it s a browser based on firefox that claims to be privacy oriented ad blocking and easily customisable do you consider this package s to be useful for every chaotic user yes no but yes for a great amount no but yes for a few no it s useful only for me do you consider this package s to be useful for feature testing preview e g mesa aco wine wayland yes no are you sure we don t have this package already test with pacman ss yes have you tested if this package builds in a clean chroot yes no does the package s license allows us to redistribute it yes no clue no but the author doesn t really care it s just for bureaucracy have you searched the to ensure this request is new not duplicated yes have you read the to ensure this package is not banned yes | 0 |
1,761 | 6,574,999,395 | IssuesEvent | 2017-09-11 14:44:14 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | Enable configuration of VM metric account | affects_2.3 azure cloud feature_idea waiting_on_maintainer | ISSUE TYPE
Feature Idea
COMPONENT NAME
http://docs.ansible.com/ansible/azure_module.html
ANSIBLE VERSION
N/A
SUMMARY
The ask is to be able to set the metrics storage account via the `azure_rm_virtualmachine` task.
| True | Enable configuration of VM metric account - ISSUE TYPE
Feature Idea
COMPONENT NAME
http://docs.ansible.com/ansible/azure_module.html
ANSIBLE VERSION
N/A
SUMMARY
The ask is to be able to set the metrics storage account via the `azure_rm_virtualmachine` task.
| main | enable configuration of vm metric account issue type feature idea component name ansible version n a summary the ask is to be able to enable setting metrics storage account via the azure rm virtualmachine task | 1 |
5,484 | 27,390,499,637 | IssuesEvent | 2023-02-28 15:58:58 | VA-Explorer/va_explorer | https://api.github.com/repos/VA-Explorer/va_explorer | closed | Switch Field Worker permissions from ownership based to location based | Type: Maintainance Language: Python | **What is the expected state?**
As a Field Worker I expect to see VA data gathered by my fellow Field Workers in the same facility.
**What is the actual state?**
Currently I only see the VAs I have gathered myself.
**Relevant context**
This was a user request from admins on the ground that are fielding questions from Field Workers that are confused by "missing data" at their facility. Not being able to see VAs gathered by their colleagues even though they work at the same place. Previously this ownership model was used because we believed Field Workers would only be interested in correcting errors for VAs they collected that might have errored after VA processing. Implementing this change brings VA Explorer inline with user expectations on the ground.
In some ways, this is good because current processes to link fieldworkers to their VAs specifically via commands like `link_fieldworkers_to_vas` was prone to error/bugs on the ground due to edge cases such as name change. This simplifies that to return permissions to location-based restrictions. So at least:
1. Remove code related to ownership based permissions in places like:
- `/va_explorer/users/management/commands/link_fieldworkers_to_vas.py`
- `/va_explorer/users/forms.py:107`
- `/va_explorer/users/tests/test_fieldworker_linking.py`
- `/va_explorer/users/utils/field_worker_linking.py`
2. Clean up relevant documentation
- `/docs/training/admin_guides.md:76`
- `/docs/training/admin_guides.md:243`
- `/docs/training/user_guides.md:180`
- `/docs/training/general/common_actions.md:22`
- `/docs/training/general/roles.md:37`
- `/docs/usage/features.md:217`
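As a rough illustration of the requested change (the field names and data shapes here are hypothetical, not VA Explorer's actual models), moving from ownership-based to location-based visibility amounts to filtering VAs on the worker's facility instead of on who collected them:

```python
def visible_vas(user, all_vas):
    """New location-based rule: a Field Worker sees every VA from their facility."""
    return [va for va in all_vas if va["location"] == user["location"]]

def owned_vas(user, all_vas):
    """Old ownership-based rule: only the VAs the worker collected themselves."""
    return [va for va in all_vas if va["collector"] == user["name"]]
```

Under the new rule, two workers at the same facility see the same set of records, which matches the expectation described above.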
... | True | main | 1
156,475 | 13,649,362,781 | IssuesEvent | 2020-09-26 14:05:57 | triaxtec/openapi-python-client | https://api.github.com/repos/triaxtec/openapi-python-client | closed | Add a note about supported OpenAPI versions | documentation enhancement | **Describe the bug**
The configuration I'm generating a client for declares `parameters` with `in: body`; with openapi-python-client version 0.5.4, however, the generation fails:
```
ERROR parsing POST /alerts within alert. Endpoint will not be generated.
Parameter must be declared in path or query
Parameter(name='alerts', param_in='body', description='The alerts to create', required=True, deprecated=False, allowEmptyValue=False, style=None, explode=False, allowReserved=False, param_schema=Reference(ref='#/definitions/postableAlerts'), example=None, examples=None, content=None)
```
**To Reproduce**
Steps to reproduce the behavior:
`openapi-python-client generate --url https://raw.githubusercontent.com/prometheus/alertmanager/master/api/v2/openapi.yaml`
I'm not familiar with OpenAPI specs and versions, e.g. if this is relevant? https://swagger.io/docs/specification/2-0/describing-request-body/
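For context on the version question raised above (a hedged aside, not something the original report resolves): `in: body` parameters are a Swagger 2.0 construct that OpenAPI 3.x replaced with `requestBody`. One quick, informal way to tell which flavor a parsed spec document uses is to inspect its top-level version key:

```python
def spec_version(doc):
    """Return a human-readable spec version for a parsed OpenAPI/Swagger document."""
    if "openapi" in doc:      # OpenAPI 3.x documents carry a top-level 'openapi' key
        return "OpenAPI " + str(doc["openapi"])
    if "swagger" in doc:      # Swagger 2.0 documents carry a top-level 'swagger' key
        return "Swagger " + str(doc["swagger"])
    return "unknown"
```

The Alertmanager spec linked above declares `swagger: "2.0"`, which is why its body parameters trip up a generator expecting OpenAPI 3.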
| 1.0 | non_main | 0
137,229 | 11,102,988,765 | IssuesEvent | 2019-12-17 02:06:50 | uruha/next-with-ts | https://api.github.com/repos/uruha/next-with-ts | opened | testing: account saga task | testing | ## Todo
- [ ] runRequestGetAccount integration testing
- [ ] handleRequestGetAccount unit testing | 1.0 | non_main | 0
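The unit-testing idea in the todo above, driving a saga step by step and asserting each yielded effect, can be sketched language-agnostically with a generator (the effect shapes and action names here are invented for illustration, not the project's actual code):

```python
def fetch_account_saga():
    """A saga as a generator: it yields effect descriptions instead of doing I/O."""
    account = yield ("call", "getAccount")          # ask the runtime to call the API
    yield ("put", {"type": "GET_ACCOUNT_SUCCESS", "payload": account})

# Unit test: step the generator manually and assert each yielded effect.
saga = fetch_account_saga()
assert next(saga) == ("call", "getAccount")
effect = saga.send({"id": 1})                       # feed a fake API result back in
assert effect == ("put", {"type": "GET_ACCOUNT_SUCCESS", "payload": {"id": 1}})
```

Because the saga never performs real I/O, the unit test needs no network mocking, only a fake value passed to `send`.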
1,418 | 6,186,095,626 | IssuesEvent | 2017-07-04 00:16:26 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | module cannot remove certain limits pam_limits.py | affects_2.3 feature_idea waiting_on_maintainer | ##### ISSUE TYPE
Feature Idea
##### COMPONENT NAME
pam_limits module
##### ANSIBLE VERSION
N/A
##### SUMMARY
Is it possible to get an ensure: absent component added to this module?
Here I have an example where I want to add limits underneath limits.d as my default but do not want the limit present in limits.conf.
Ideally I want to create a task that will check to see if the value limit_type and limit_item exists in default limits.conf and if so remove it.
Example Run Required:
pam_limits: domain=user1 limit_type=hard limit_item=nofile backup=yes ensure=absent
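The requested `absent` behavior boils down to deleting any matching `<domain> <type> <item> <value>` line from the limits file while leaving everything else (comments, other entries) untouched. A rough, hypothetical sketch of that matching logic, not the module's real implementation:

```python
import re

def remove_limit(lines, domain, limit_type, limit_item):
    """Keep every line except pam_limits entries matching (domain, type, item)."""
    entry = re.compile(r"^\s*(\S+)\s+(\S+)\s+(\S+)\s+(\S+)")
    kept = []
    for line in lines:
        m = entry.match(line)
        if m and (m.group(1), m.group(2), m.group(3)) == (domain, limit_type, limit_item):
            continue  # this is the entry an 'absent' state would drop
        kept.append(line)
    return kept
```

Comments and non-matching entries fall through the regex check and are preserved as-is.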
| True | main | 1
4,670 | 24,148,313,977 | IssuesEvent | 2022-09-21 21:00:45 | centerofci/mathesar | https://api.github.com/repos/centerofci/mathesar | closed | Error when selecting a base table for a new exploration makes it impossible for me to create new explorations | type: bug work: backend work: frontend restricted: maintainers status: started | I got this error from `http://localhost:8000/api/db/v0/queries/` after creating a new exploration and selecting a base table. Most of the time, I don't get this error, but it happened once when I created a new exploration immediately after saving a previous one.
<details>
<summary><h3>Error Response does not conform to the api spec. Please handle the exception properly</h3></summary>
```
Environment:
Request Method: POST
Request URL: http://localhost:8000/api/db/v0/queries/
Django Version: 3.1.14
Python Version: 3.9.8
Installed Applications:
['django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'rest_framework',
'django_filters',
'django_property_filter',
'mathesar']
Installed Middleware:
['django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware']
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 506, in dispatch
response = handler(request, *args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/rest_framework/mixins.py", line 18, in create
serializer.is_valid(raise_exception=True)
File "/usr/local/lib/python3.9/site-packages/rest_framework/serializers.py", line 228, in is_valid
raise ValidationError(self.errors)
During handling of the above exception ({'name': [ErrorDetail(string='ui query with this name already exists.', code='unique')]}), another exception occurred:
File "/usr/local/lib/python3.9/site-packages/django/core/handlers/exception.py", line 47, in inner
response = get_response(request)
File "/usr/local/lib/python3.9/site-packages/django/core/handlers/base.py", line 181, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/usr/local/lib/python3.9/site-packages/django/views/decorators/csrf.py", line 54, in wrapped_view
return view_func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/rest_framework/viewsets.py", line 125, in view
return self.dispatch(request, *args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 509, in dispatch
response = self.handle_exception(exc)
File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 466, in handle_exception
response = exception_handler(exc, context)
File "/code/mathesar/exception_handlers.py", line 71, in mathesar_exception_handler
raise Exception("Error Response does not conform to the api spec. Please handle the exception properly")
Exception Type: Exception at /api/db/v0/queries/
Exception Value: Error Response does not conform to the api spec. Please handle the exception properly
```
</details>
Now Mathesar is in a state where any time I try to create a new exploration and set a base table, I get this error. I'm unable to create new explorations. I'll try re-installing Mathesar.
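The traceback above shows a plain DRF uniqueness `ValidationError` (`{'name': [...]}`) reaching a custom exception handler that only accepts errors already in the API-spec shape, so what should be a 400-class response becomes a crash. Hypothetically (the error-code value and output shape here are invented, not Mathesar's actual spec), normalizing such a detail dict would look like:

```python
def to_api_errors(detail):
    """Flatten a DRF-style {'field': [messages]} detail into a list of API error objects."""
    errors = []
    for field, messages in detail.items():
        for message in messages:
            errors.append({"code": 4002, "field": field, "message": str(message)})
    return errors
```

With a normalization step like this in front of the handler, the duplicate-name case would surface as a structured validation error instead of an unhandled exception.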
| True | main | 1
38,324 | 12,535,453,679 | IssuesEvent | 2020-06-04 21:24:13 | istio/istio | https://api.github.com/repos/istio/istio | closed | 503 response when port name is prefixed with https- (mTLS enabled) | area/networking area/security | **Bug description**
I have EKS cluster with Istio on it. mTLS is enabled. Traffic flow:
AWS ALB -> Istio IngressGateway -> VirtualService -> Service -> Pod
Configuration of particular components:
```
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: istio-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - '*'
    port:
      name: https-port
      number: 443
      protocol: HTTPS
    tls:
      credentialName: platform-cert
      minProtocolVersion: TLSV1_2
      mode: SIMPLE
```
On the ingress gateway we terminate SSL traffic coming from the ALB. Then we have configured mTLS for the IngressGateway:
```
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: istio-ingress-to-svc
  namespace: istio-system
spec:
  peers:
  - mtls: {}
  targets:
  - name: istio-ingressgateway
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: istio-mutual
  namespace: istio-system
spec:
  host: '*.svc.cluster.local'
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
```
Ingress, VirtualService and Service definitions:
```
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: pub-alb
  namespace: istio-system
spec:
  rules:
  - host: '*.dev.example.com'
    http:
      paths:
      - backend:
          serviceName: istio-ingressgateway
          servicePort: 443
        path: /*
  - host: '*.example.com'
    http:
      paths:
      - backend:
          serviceName: istio-ingressgateway
          servicePort: 443
        path: /*
status:
  loadBalancer:
    ingress:
    - hostname: XXX.elb.amazonaws.com
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: example
  namespace: playground
spec:
  gateways:
  - istio-system/istio-gateway
  hosts:
  - a1.dev.example.com
  - test.examaple.com
  http:
  - match:
    - port: 443
    route:
    - destination:
        host: application
        port:
          number: 80
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: application
    service: application
  name: application
  namespace: playground
spec:
  ports:
  - name: https-web
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    service: application
  type: ClusterIP
```
According to https://archive.istio.io/v1.3/docs/ops/traffic-management/protocol-selection/ I prefixed port name with `https-` as mTLS is enabled - so traffic should be encrypted.
By doing that and curling application endpoint I'm getting
```
upstream connect error or disconnect/reset before headers. reset reason: connection termination
```
In istio-proxy logs (proxy in application pod) I see: `HTTP/1.1" 503 UC,URX`
I can confirm that mTLS is enabled between Ingress and Application:
```
$ istioctl authn tls-check istio-ingressgateway-f96555749-8l57r application.playground.svc.cluster.local
HOST:PORT STATUS SERVER CLIENT AUTHN POLICY DESTINATION RULE
application.playground.svc.cluster.local:80 OK mTLS mTLS default/playground istio-mutual/playground
```
`playground` namespace mTLS configuration:
```
apiVersion: v1
items:
- apiVersion: authentication.istio.io/v1alpha1
  kind: Policy
  metadata:
    name: default
    namespace: playground
  spec:
    peers:
    - mtls: {}
```
The thing is that when I rename `application` Service port to be prefixed with `http-` then everything works as expected... **Is it expected behavior?** Or I have something misconfigured? I also sniffed the traffic inside the application container and looks like traffic is encrypted - all traffic grabbed by ngrep is random signs.
Moreover, `application` pod connects to other Services as a part of request processing. Those services have port names prefixed with `https-`. When playing with ports naming I observed such behavior:
- application svc `https-` -> background svc `https-` - does not work
- application svc `https-` -> background svc `http-` - does not work
- application svc `http-` -> background svc `https-` - does work
- application svc `http-` -> background svc `http-` - does not work
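One plausible reading of the matrix above (a hedged interpretation, not an official diagnosis): in Istio 1.3, the protocol for a service port is selected from the port-name prefix, so an `https-` name tells the mesh that the application itself speaks TLS on that port, while the application here actually serves plain HTTP on port 80. The selection rule itself is trivial to sketch:

```python
def protocol_from_port_name(name):
    """Istio-1.3-style manual protocol selection: use the prefix before the first '-'."""
    known = {"http", "http2", "https", "grpc", "tls", "tcp", "mongo", "mysql", "redis"}
    prefix = name.split("-", 1)[0].lower()
    return prefix if prefix in known else "tcp"  # unknown names fall back to plain TCP
```

Under this rule `https-web` is treated as TLS passthrough rather than HTTP, which is consistent with the connection-termination errors seen when the port is named `https-` but works when named `http-`; the sidecar's own mTLS encrypts `http-` traffic regardless of the name.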
**Expected behavior**
Istio passes encrypted traffic through `https-` prefixed ports when mTLS is enabled.
**Version (include the output of `istioctl version --remote` and `kubectl version` and `helm version` if you used Helm)**
```
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T11:13:54Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.12-eks-eb1860", GitCommit:"eb1860579253bb5bf83a5a03eb0330307ae26d18", GitTreeState:"clean", BuildDate:"2019-12-23T08:58:45Z", GoVersion:"go1.11.13", Compiler:"gc", Platform:"linux/amd64"}
$ helm3 version
version.BuildInfo{Version:"v3.0.0", GitCommit:"e29ce2a54e96cd02ccfce88bee4f58bb6e2a28b6", GitTreeState:"clean", GoVersion:"go1.13.4"}
$ istioctl version --remote
client version: 1.3.1
citadel version: 1.3.8
galley version: 1.3.8
ingressgateway version: 1.3.8
pilot version: 1.3.8
policy version: 1.3.8
sidecar-injector version: 1.3.8
telemetry version: 1.3.8
```
**How was Istio installed?**
Helm chart using https://storage.googleapis.com/istio-release/releases/1.3.8/charts/
**Environment where bug was observed (cloud vendor, OS, etc)**
AWS EKS | True |
AWS EKS | non_main | response when port name is prefixed with https mtls enabled bug description i have eks cluster with istio on it mtls is enabled traffic flow aws alb istio ingressgateway virtualservice service pod configuration of particular components apiversion networking istio io kind gateway metadata name istio gateway namespace istio system spec selector istio ingressgateway servers hosts port name https port number protocol https tls credentialname platform cert minprotocolversion mode simple on ingress gateway we terminate ssl traffic coming from alb then we have configured mtls for ingressgateway apiversion authentication istio io kind policy metadata name istio ingress to svc namespace istio system spec peers mtls targets name istio ingressgateway apiversion networking istio io kind destinationrule metadata name istio mutual namespace istio system spec host svc cluster local trafficpolicy tls mode istio mutual ingress virtualservice and service definitions apiversion extensions kind ingress metadata name pub alb namespace istio system spec rules host dev example com http paths backend servicename istio ingressgateway serviceport path host example com http paths backend servicename istio ingressgateway serviceport path status loadbalancer ingress hostname xxx elb amazonaws com apiversion networking istio io kind virtualservice metadata name example namespace playground spec gateways istio system istio gateway hosts dev example com test examaple com http match port route destination host application port number apiversion kind service metadata labels app application service application name application namespace playground spec ports name https web port protocol tcp targetport selector service application type clusterip according to i prefixed port name with https as mtls is enabled so traffic should be encrypted by doing that and curling application endpoint i m getting upstream connect error or disconnect reset before headers reset reason connection 
termination in istio proxy logs proxy in application pod i see http uc urx i can confirm that mtls is enabled between ingress and application istioctl authn tls check istio ingressgateway application playground svc cluster local host port status server client authn policy destination rule application playground svc cluster local ok mtls mtls default playground istio mutual playground playground namespace mtls configuration apiversion items apiversion authentication istio io kind policy metadata name default namespace playground spec peers mtls the thing is that when i rename application service port to be prefixed with http then everything works as expected is it expected behavior or i have something misconfigured i also sniffed the traffic inside the application container and looks like traffic is encrypted all traffic grabbed by ngrep is random signs moreover application pod connects to other services as a part of request processing those services have port names prefixed with https when playing with ports naming i observed such behavior application svc https background svc https does not work application svc https background svc http does not work application svc http background svc https do work application svc http background svc http does not work expected behavior istio passes encrypted traffic through https prefixed ports when mtls is enabled version include the output of istioctl version remote and kubectl version and helm version if you used helm kubectl version client version version info major minor gitversion gitcommit gittreestate clean builddate goversion compiler gc platform darwin server version version info major minor gitversion eks gitcommit gittreestate clean builddate goversion compiler gc platform linux version version buildinfo version gitcommit gittreestate clean goversion istioctl version remote client version citadel version galley version ingressgateway version pilot version policy version sidecar injector version telemetry version how 
was istio installed helm chart using environment where bug was observed cloud vendor os etc aws eks | 0 |
173,225 | 27,404,165,689 | IssuesEvent | 2023-03-01 04:44:42 | Moa-dev-team/Moa-frontend | https://api.github.com/repos/Moa-dev-team/Moa-frontend | opened | 로그인 페이지 디자인 | component design router | ## 💬 Description
Complete the login page design, excluding the login functionality itself.
## ✅ To Do
- [ ] Add navigation to the login page using React Router.
- [ ] Create the SNS login button component.
| 1.0 | 로그인 페이지 디자인 -
| non_main | 로그인 페이지 디자인 💬 description 로그인 페이지에서 로그인 기능을 제외한 디자인을 완성한다 ✅ to do 리액트 라우터로 로그인 페이지 이동 기능을 추가한다 sns 로그인 버튼 컴포넌트를 생성한다 | 0 |
1,579 | 6,572,341,810 | IssuesEvent | 2017-09-11 01:32:55 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | AWS ec2_vpc_route_table.py Unable to append a route to an existing route table. | affects_2.2 aws cloud feature_idea waiting_on_maintainer | ##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
ec2_vpc_route_table.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible --version
ansible 2.2.0
```
##### CONFIGURATION
<!---
AWS Module
-->
##### OS / ENVIRONMENT
<!---
AWS
-->
##### SUMMARY
There is no way to modify an existing AWS route table without building it completely form scratch, the only options are Create or Delete. This feature has been implemented in a different fork below, could this be implemented or is there a reason why this would not be acceptable?
https://github.com/preo/ansible-modules-core/pull/2/files
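The requested behavior reduces to an idempotent merge: add the route only when it is missing, and leave the rest of the table untouched. A minimal Python sketch of that logic, using a plain dict as a hypothetical stand-in for the route-table state the module would read back from AWS (the names here are illustrative, not the module's real API):

```python
def ensure_route(route_table, dest_cidr, target):
    """Append a route to an existing table only if it is missing.

    Returns True when the table was changed (Ansible's `changed` flag),
    False when the route already existed.
    """
    for route in route_table["routes"]:
        if route["dest"] == dest_cidr:
            # Already present: do not rebuild the table from scratch.
            return False
    route_table["routes"].append({"dest": dest_cidr, "target": target})
    return True
```

Calling it twice with the same arguments reports no change on the second call, which matches the append semantics implemented in the linked fork.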
##### STEPS TO REPRODUCE
<!---
N/A
-->
| True | AWS ec2_vpc_route_table.py Unable to append a route to an existing route table. -
| main | aws vpc route table py unable to append a route to an existing route table issue type feature idea component name vpc route table py ansible version ansible version ansible configuration aws module os environment aws summary there is no way to modify an existing aws route table without building it completely form scratch the only options are create or delete this feature has been implemented in a different fork below could this be implemented or is there a reason why this would not be acceptable steps to reproduce n a | 1 |
5,102 | 26,009,507,447 | IssuesEvent | 2022-12-20 23:20:21 | mozilla/foundation.mozilla.org | https://api.github.com/repos/mozilla/foundation.mozilla.org | opened | Fix HTML validation errors | bug 🦠 engineering Maintain needs grooming | We have A LOT of HTML errors on the site. Numbers are quite high 🙈 Here are the results of the 4 random pages I tested
- https://foundation.mozilla.org/en/ 52 errors, warnings, or improvement suggestions
- https://foundation.mozilla.org/en/privacynotincluded/categories/toys-games/ 48 errors, warnings, or improvement suggestions
- https://foundation.mozilla.org/en/privacynotincluded/articles/ 34 errors, warnings, or improvement suggestions
- https://foundation.mozilla.org/en/blog/ 77 errors, warnings, or improvement suggestions
### Why should we fix these errors?
From an online source
> There are some markup cases defined as errors because they are potential problems for accessibility, usability, interoperability, security, or maintainability—or because they can result in poor performance, or that might cause your scripts to fail in ways that are hard to troubleshoot.
Along with those, some markup cases are defined as errors because they can cause you to run into potential problems in HTML parsing and error-handling behavior—so that, say, you’d end up with some unintuitive, unexpected result in the DOM.
### How to address these issues?
- I'm thinking we can fix the pages with highest traffic first. Since some code snippets are shared, we will be fixing issues on other pages along this process as well.
- Devs should refresh our knowledge around semantic HTML.
- For more tedious checks (e.g., redundant trailing slash, extra closing tag), let's find if there's any existing tool/script that can do the trick for us.
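On that last point, one class of error (stray or unclosed tags) can be pre-checked with nothing but the Python standard library. A rough sketch, intended to complement rather than replace a real validator such as the W3C one:

```python
from html.parser import HTMLParser

# Void elements never take a closing tag, so they are excluded from balancing.
VOID_TAGS = {"area", "base", "br", "col", "embed", "hr", "img",
             "input", "link", "meta", "source", "track", "wbr"}

class TagBalanceChecker(HTMLParser):
    """Flags mismatched or unclosed non-void tags."""

    def __init__(self):
        super().__init__()
        self.stack = []
        self.errors = []

    def handle_starttag(self, tag, attrs):
        if tag not in VOID_TAGS:
            self.stack.append(tag)

    def handle_startendtag(self, tag, attrs):
        pass  # self-closing tags cannot be unbalanced

    def handle_endtag(self, tag):
        if self.stack and self.stack[-1] == tag:
            self.stack.pop()
        else:
            self.errors.append(f"unexpected </{tag}>")

def check(markup):
    checker = TagBalanceChecker()
    checker.feed(markup)
    checker.close()
    checker.errors.extend(f"unclosed <{tag}>" for tag in checker.stack)
    return checker.errors
```

`check("<div><p>bad</div>")` reports both the unexpected `</div>` and the unclosed `<p>`, the "extra closing tag" case mentioned above.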
| True | Fix HTML validation errors -
| main | fix html validation errors we have a lot of html errors on the site numbers are quite high 🙈 here are the results of the random pages i tested errors warnings or improvement suggestions errors warnings or improvement suggestions errors warnings or improvement suggestions errors warnings or improvement suggestions why should we fix these errors from an online source there are some markup cases defined as errors because they are potential problems for accessibility usability interoperability security or maintainability—or because they can result in poor performance or that might cause your scripts to fail in ways that are hard to troubleshoot along with those some markup cases are defined as errors because they can cause you to run into potential problems in html parsing and error handling behavior—so that say you’d end up with some unintuitive unexpected result in the dom how to address these issues i m thinking we can fix the pages with highest traffic first since some code snippets are shared we will be fixing issues on other pages along this process as well devs should refresh our knowledge around semantic html for more tedious checks e g redundant trailing slash extra closing tag let s find if there s any existing tool script that can do the trick for us | 1 |
1,182 | 5,097,754,030 | IssuesEvent | 2017-01-03 22:32:32 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | vmware_guest module documentation is broken | affects_2.3 bug_report cloud vmware waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
vmware_guest module
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
2.3
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
##### SUMMARY
ansible-doc vmware_guest fails ...
```
jtanner-OSX:AME-3237 jtanner$ ansible-doc -vvvv vmware_guest
No config file found; using defaults
Traceback (most recent call last):
File "/Users/jtanner/workspace/issues/AME-3237/ansible/lib/ansible/cli/doc.py", line 130, in run
text += self.get_man_text(doc)
File "/Users/jtanner/workspace/issues/AME-3237/ansible/lib/ansible/cli/doc.py", line 287, in get_man_text
text.append(textwrap.fill(CLI.tty_ify(choices + default), limit, initial_indent=opt_indent, subsequent_indent=opt_indent))
UnboundLocalError: local variable 'choices' referenced before assignment
None
ERROR! module vmware_guest missing documentation (or could not parse documentation): local variable 'choices' referenced before assignment
```
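The traceback pins the failure to `get_man_text`: `choices` is assigned only inside a conditional, so an option that documents a `default` but no `choices` reaches the `choices + default` concatenation with `choices` unbound. A simplified illustration of the pattern and the usual fix (initialize both names up front); this is a sketch, not the actual `doc.py` source:

```python
import textwrap

def format_option_suffix(opt, width=70, indent="        "):
    # The fix: both names get a value before the combined use below,
    # instead of `choices` being assigned only when the key exists.
    choices = ""
    default = ""
    if opt.get("choices"):
        choices = "(Choices: " + ", ".join(str(c) for c in opt["choices"]) + ") "
    if opt.get("default") is not None:
        default = "[Default: " + str(opt["default"]) + "]"
    if not (choices or default):
        return ""
    # Mirrors the failing call: textwrap.fill(... choices + default ...)
    return textwrap.fill(choices + default, width,
                         initial_indent=indent, subsequent_indent=indent)
```

With the initialization in place, an option dict containing only a `default` no longer raises `UnboundLocalError`.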
| True | vmware_guest module documentation is broken -
| main | vmware guest module documentation is broken issue type bug report component name vmware guest module ansible version configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific summary ansible doc vmware guest fails jtanner osx ame jtanner ansible doc vvvv vmware guest no config file found using defaults traceback most recent call last file users jtanner workspace issues ame ansible lib ansible cli doc py line in run text self get man text doc file users jtanner workspace issues ame ansible lib ansible cli doc py line in get man text text append textwrap fill cli tty ify choices default limit initial indent opt indent subsequent indent opt indent unboundlocalerror local variable choices referenced before assignment none error module vmware guest missing documentation or could not parse documentation local variable choices referenced before assignment | 1 |
8,284 | 3,703,067,419 | IssuesEvent | 2016-02-29 19:04:08 | stkent/amplify | https://api.github.com/repos/stkent/amplify | closed | Install time doesn't seem to be working correctly | bug code difficulty-medium | After integrating Amplify and running it with the default options it seems to be prompting right away. The desired behavior should wait one week before prompting. | 1.0 | Install time doesn't seem to be working correctly - | non_main | install time doesn t seem to be working correctly after integrating amplify and running it with the default options it seems to be prompting right away the desired behavior should wait one week before prompting | 0
3,037 | 11,258,985,354 | IssuesEvent | 2020-01-13 06:57:33 | microsoft/DirectXTK | https://api.github.com/repos/microsoft/DirectXTK | closed | Retire support for VS 2015 | maintainence | In 2020, I plan to retire support for VS 2015. The following projects will be removed, and the NuGet ``directxtk_desktop_2015`` package will be deprecated in favor of one built with VS 2017 and/or VS 2019:
DirectXTK_Desktop_2015
DirectXTKAudio_Desktop_2015_Win8
DirectXTKAudio_Desktop_2015_DXSDK
DirectXTK_Desktop_2015_Win10
DirectXTK_Windows10_2015
DirectXTK_XboxOneXDK_2015
xwbtool_Desktop_2015
Please put any requests for continued support for one or more of these here.
| True | Retire support for VS 2015 -
| main | retire support for vs in i plan to retire support for vs the following projects will be removed and the nuget directxtk desktop package will be deprecated in favor of one built with vs and or vs directxtk desktop directxtkaudio desktop directxtkaudio desktop dxsdk directxtk desktop directxtk directxtk xboxonexdk xwbtool desktop please put any requests for continued support for one or more of these here | 1 |
3,033 | 11,234,045,707 | IssuesEvent | 2020-01-09 03:35:30 | sympy/sympy | https://api.github.com/repos/sympy/sympy | closed | Use __all__ top-level variable in module files | Easy to Fix Maintainability | I have seen `__all__` top-level variable set to list of objects that are imported when we do `from sympy.combinatorics.coset_table import *` I get `bisect_left` which isn't even supposed to be in the module.
Also I think this will reduce import time when doing `from sympy.combinatorics.coset_table import *` (will this also reduce import time for `from sympy import *`? after considering the files like `coset_table.py` has public available classes and functions)
I have seen this often used in scikit-learn source code. This will be easy to fix, once someone comments that it is a good decision to have this.
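For reference, the star-import rule at play: without `__all__`, every top-level name not starting with an underscore is exported, including helpers imported for internal use; with `__all__`, only the listed names are. A small self-contained demonstration that emulates `from module import *` on exec'd source (`coset_enumeration` is an illustrative stand-in, not the real API):

```python
MODULE_WITHOUT_ALL = (
    "from bisect import bisect_left\n"  # internal helper; leaks without __all__
    "def coset_enumeration():\n"
    "    return 'public'\n"
)
MODULE_WITH_ALL = MODULE_WITHOUT_ALL + "__all__ = ['coset_enumeration']\n"

def star_import_names(source):
    """Return the names that `from <module> import *` would bind."""
    namespace = {}
    exec(source, namespace)
    if "__all__" in namespace:
        return set(namespace["__all__"])
    # Default rule: everything whose name does not start with an underscore.
    return {name for name in namespace if not name.startswith("_")}
```

Without `__all__`, `bisect_left` shows up among the exported names, which is exactly the leak described above.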
[Edit]: I know the existence of `__all__` in some of module `__init__` files, but still a large number of them are missing, ex. combinatorics module. | True | Use __all__ top-level variable in module files - I have seen `__all__` top-level variable set to list of objects that are imported when we do `from sympy.combinatorics.coset_table import *` I get `bisect_left` which isn't even supposed to be in the module.
Also I think this will reduce import time when doing `from sympy.combinatorics.coset_table import *` (will this also reduce import time for `from sympy import *`? after considering the files like `coset_table.py` has public available classes and functions)
I have seen this often used in scikit-learn source code. This will be easy to fix, once someone comments that it is a good decision to have this.
[Edit]: I know the existence of `__all__` in some of module `__init__` files, but still a large number of them are missing, ex. combinatorics module. | main | use all top level variable in module files i have seen all top level variable set to list of objects that are imported when we do from sympy combinatorics coset table import i get bisect left which isn t even supposed to be in the module also i think this will reduce import time when doing from sympy combinatorics coset table import will this also reduce import time for from sympy import after considering the files like coset table py has public available classes and functions i have seen this often used in scikit learn source code this will be easy to fix once someone comments that it is a good decision to have this i know the existence of all in some of module init files but still a large number of them are missing ex combinatorics module | 1 |
2,928 | 10,454,072,959 | IssuesEvent | 2019-09-19 18:01:23 | chocolatey-community/chocolatey-package-requests | https://api.github.com/repos/chocolatey-community/chocolatey-package-requests | closed | RFP - hdrmerge | Status: Available For Maintainer(s) | ## Checklist
- [x] The package I am requesting does not already exist on https://chocolatey.org/packages;
- [x] There is no open issue for this package;
- [x] The issue title starts 'RFP - ';
- [x] The download URL is public and not locked behind a paywall / login;
## Package Details
Software project URL : http://jcelaya.github.io/hdrmerge/
Direct download URL for the software / installer :
https://github.com/jcelaya/hdrmerge/releases/download/v0.5.0/hdrmerge-setup-0.5.0.exe
https://github.com/jcelaya/hdrmerge/releases/download/v0.5.0/hdrmerge-setup64-0.5.0.exe
Software summary / short description:
> HDRMerge combines two or more raw images into a single raw with an extended dynamic range. It can import any raw image supported by LibRaw, and outputs a DNG 1.4 image with floating point data. The output raw is built from the less noisy pixels of the input, so that shadows maintain as much detail as possible. This tool also offers a GUI to remove ghosts from the resulting image.
ref: https://github.com/jcelaya/hdrmerge/issues/166 | True | RFP - hdrmerge -
ref: https://github.com/jcelaya/hdrmerge/issues/166 | main | rfp hdrmerge checklist the package i am requesting does not already exist on there is no open issue for this package the issue title starts rfp the download url is public and not locked behind a paywall login package details software project url direct download url for the software installer software summary short description hdrmerge combines two or more raw images into a single raw with an extended dynamic range it can import any raw image supported by libraw and outputs a dng image with floating point data the output raw is built from the less noisy pixels of the input so that shadows maintain as much detail as possible this tool also offers a gui to remove ghosts from the resulting image ref | 1 |
44,949 | 13,097,422,482 | IssuesEvent | 2020-08-03 17:26:07 | jtimberlake/COSMOS | https://api.github.com/repos/jtimberlake/COSMOS | opened | CVE-2018-20676 (Medium) detected in bootstrap-3.0.3.js, bootstrap-3.0.3.min.js | security vulnerability | ## CVE-2018-20676 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>bootstrap-3.0.3.js</b>, <b>bootstrap-3.0.3.min.js</b></p></summary>
<p>
<details><summary><b>bootstrap-3.0.3.js</b></p></summary>
<p>The most popular front-end framework for developing responsive, mobile first projects on the web.</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.0.3/js/bootstrap.js">https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.0.3/js/bootstrap.js</a></p>
<p>Path to vulnerable library: /COSMOS/test/performance/config/tools/handbook_creator/assets/js/bootstrap.js,/COSMOS/demo/config/tools/handbook_creator/assets/js/bootstrap.js,/COSMOS/autohotkey/config/tools/handbook_creator/assets/js/bootstrap.js,/COSMOS/install/config/tools/handbook_creator/assets/js/bootstrap.js</p>
<p>
Dependency Hierarchy:
- :x: **bootstrap-3.0.3.js** (Vulnerable Library)
</details>
<details><summary><b>bootstrap-3.0.3.min.js</b></p></summary>
<p>The most popular front-end framework for developing responsive, mobile first projects on the web.</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.0.3/js/bootstrap.min.js">https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.0.3/js/bootstrap.min.js</a></p>
<p>Path to vulnerable library: /COSMOS/autohotkey/config/tools/handbook_creator/assets/js/bootstrap.min.js,/COSMOS/demo/config/tools/handbook_creator/assets/js/bootstrap.min.js,/COSMOS/test/performance/config/tools/handbook_creator/assets/js/bootstrap.min.js,/COSMOS/install/config/tools/handbook_creator/assets/js/bootstrap.min.js</p>
<p>
Dependency Hierarchy:
- :x: **bootstrap-3.0.3.min.js** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/jtimberlake/COSMOS/commit/e967c77766e1b731bec2bab4f5fafb6af874c2c1">e967c77766e1b731bec2bab4f5fafb6af874c2c1</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Bootstrap before 3.4.0, XSS is possible in the tooltip data-viewport attribute.
<p>Publish Date: 2019-01-09
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-20676>CVE-2018-20676</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20676">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20676</a></p>
<p>Release Date: 2019-01-09</p>
<p>Fix Resolution: bootstrap - 3.4.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"JavaScript","packageName":"twitter-bootstrap","packageVersion":"3.0.3","isTransitiveDependency":false,"dependencyTree":"twitter-bootstrap:3.0.3","isMinimumFixVersionAvailable":true,"minimumFixVersion":"bootstrap - 3.4.0"},{"packageType":"JavaScript","packageName":"twitter-bootstrap","packageVersion":"3.0.3","isTransitiveDependency":false,"dependencyTree":"twitter-bootstrap:3.0.3","isMinimumFixVersionAvailable":true,"minimumFixVersion":"bootstrap - 3.4.0"}],"vulnerabilityIdentifier":"CVE-2018-20676","vulnerabilityDetails":"In Bootstrap before 3.4.0, XSS is possible in the tooltip data-viewport attribute.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-20676","cvss3Severity":"medium","cvss3Score":"6.1","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> --> | True | CVE-2018-20676 (Medium) detected in bootstrap-3.0.3.js, bootstrap-3.0.3.min.js - ## CVE-2018-20676 - Medium Severity Vulnerability
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"JavaScript","packageName":"twitter-bootstrap","packageVersion":"3.0.3","isTransitiveDependency":false,"dependencyTree":"twitter-bootstrap:3.0.3","isMinimumFixVersionAvailable":true,"minimumFixVersion":"bootstrap - 3.4.0"},{"packageType":"JavaScript","packageName":"twitter-bootstrap","packageVersion":"3.0.3","isTransitiveDependency":false,"dependencyTree":"twitter-bootstrap:3.0.3","isMinimumFixVersionAvailable":true,"minimumFixVersion":"bootstrap - 3.4.0"}],"vulnerabilityIdentifier":"CVE-2018-20676","vulnerabilityDetails":"In Bootstrap before 3.4.0, XSS is possible in the tooltip data-viewport attribute.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-20676","cvss3Severity":"medium","cvss3Score":"6.1","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> --> | non_main | cve medium detected in bootstrap js bootstrap min js cve medium severity vulnerability vulnerable libraries bootstrap js bootstrap min js bootstrap js the most popular front end framework for developing responsive mobile first projects on the web library home page a href path to vulnerable library cosmos test performance config tools handbook creator assets js bootstrap js cosmos demo config tools handbook creator assets js bootstrap js cosmos autohotkey config tools handbook creator assets js bootstrap js cosmos install config tools handbook creator assets js bootstrap js dependency hierarchy x bootstrap js vulnerable library bootstrap min js the most popular front end framework for developing responsive mobile first projects on the web library home page a href path to vulnerable library cosmos autohotkey config tools handbook creator assets js bootstrap min js cosmos demo config tools handbook creator assets js bootstrap min js cosmos test performance 
config tools handbook creator assets js bootstrap min js cosmos install config tools handbook creator assets js bootstrap min js dependency hierarchy x bootstrap min js vulnerable library found in head commit a href vulnerability details in bootstrap before xss is possible in the tooltip data viewport attribute publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution bootstrap isopenpronvulnerability false ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails in bootstrap before xss is possible in the tooltip data viewport attribute vulnerabilityurl | 0 |
293,556 | 25,303,645,231 | IssuesEvent | 2022-11-17 12:39:01 | wazuh/wazuh-qa | https://api.github.com/repos/wazuh/wazuh-qa | opened | Avoid warnings during MSI building | team/qa type/dev-testing status/not-tracked | | Target version | Related issue | Related PR |
|--------------------|--------------------|-----------------|
| 4.4.0 | https://github.com/wazuh/wazuh/issues/14716 | https://github.com/wazuh/wazuh/pull/15422 |
<!-- Important: No section may be left blank. If not, delete it directly (in principle only Steps to reproduce could be left blank in case of not proceeding, although there are always exceptions). -->
## Description
<!-- Description that puts into context and shows the QA tester the changes that have been made by the developer and need to be tested. -->
In this [PR](https://github.com/wazuh/wazuh/pull/15422) we modified the WXS file and the installation scripts used in the MSI to avoid some of the warnings that appeared when building the package.
## Proposed checks
<!-- Indicate through a list of checkboxes the suggested checks to be carried out by the QA tester -->
|System|Installation | Upgrade from 3.13.6 |Upgrade from 4.3.9| Uninstall |
|---|---|---|---|---|
| XP | | | | |
| Vista | | | | |
| 2008 | | | | |
| Windows 7 | | | | |
| 2012 | | | | |
| 2016 | | | | |
| 2019 | | | | |
| 10 | | | | |
| 11 | | | | |
The 4.4.0 package should install the correct template for each version; to verify this, check the ossec.conf
```
<!--
Wazuh - Agent - Default configuration for Windows
More info at: https://documentation.wazuh.com
Mailing list: https://groups.google.com/forum/#!forum/wazuh
-->
<ossec_config>
<client>
<server>
<address>0.0.0.0</address>
<port>1514</port>
<protocol>tcp</protocol>
</server>
<config-profile>windows, windows10</config-profile>
....
```
After upgrading, the service must be running if the service was previously running. | 1.0 | Avoid warnings during MSI building - | Target version | Related issue | Related PR |
|--------------------|--------------------|-----------------|
| 4.4.0 | https://github.com/wazuh/wazuh/issues/14716 | https://github.com/wazuh/wazuh/pull/15422 |
<!-- Important: No section may be left blank. If not, delete it directly (in principle only Steps to reproduce could be left blank in case of not proceeding, although there are always exceptions). -->
## Description
<!-- Description that puts into context and shows the QA tester the changes that have been made by the developer and need to be tested. -->
In this [PR](https://github.com/wazuh/wazuh/pull/15422) we have modified the WXS file and the installation scripts used in the MSI to avoid some of the warnings that appeared when building the package.
## Proposed checks
<!-- Indicate through a list of checkboxes the suggested checks to be carried out by the QA tester -->
|System|Installation | Upgrade from 3.13.6 |Upgrade from 4.3.9| Uninstall |
|---|---|---|---|---|
| XP | | | | |
| Vista | | | | |
| 2008 | | | | |
| Windows 7 | | | | |
| 2012 | | | | |
| 2016 | | | | |
| 2019 | | | | |
| 10 | | | | |
| 11 | | | | |
The 4.4.0 package should install the correct template for each version; to verify this, check the ossec.conf
```
<!--
Wazuh - Agent - Default configuration for Windows
More info at: https://documentation.wazuh.com
Mailing list: https://groups.google.com/forum/#!forum/wazuh
-->
<ossec_config>
<client>
<server>
<address>0.0.0.0</address>
<port>1514</port>
<protocol>tcp</protocol>
</server>
<config-profile>windows, windows10</config-profile>
....
```
After upgrading the service must be running if the service was previously running. | non_main | avoid warnings during msi building target version related issue related pr description in this we have modified the wxs file and the installations scripts used in the msi to avoid some of the warnings that appeared when building the package proposed checks system installation upgrade from upgrade from uninstall xp vista windows the package should install the correct template for each version to verify this check the ossec conf wazuh agent default configuration for windows more info at mailing list tcp windows after upgrading the service must be running if the service was previously running | 0 |
519,428 | 15,051,021,601 | IssuesEvent | 2021-02-03 13:37:40 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | error in samples/subsys/usb/cdc_acm | area: Samples area: USB bug priority: low | **Describe the bug**
The CDC ACM sample isn't usable on native_posix. It needlessly pulls in GPIO drivers without using them.
**To Reproduce**
1. cd samples/subsys/usb/cdc_acm
2. west build --pristine -b native_posix
3. Error at compilation time:
```
-- Configuring done
CMake Error at ../../../../cmake/extensions.cmake:398 (add_library):
No SOURCES given to target: drivers__gpio
```
**Expected behavior**
I expected that the compilation actually worked, as USB seems to be emulated when compiled for native_posix and can be used via the Linux "usbip" command line tool.
BTW, removing the offending line fixes the issue:
1. sed -i '/CONFIG_GPIO/d' prj.conf
2. west build --pristine -b native_posix
**Impact**
Unable to test the sample in native posix mode.
**Environment (please complete the following information):**
- OS: Linux
- git describe --tags: zephyr-v2.4.0-388-g5bda586c64 (which is git HEAD as of now)
| 1.0 | error in samples/subsys/usb/cdc_acm - **Describe the bug**
The CDC ACM sample isn't usable on native_posix. It needlessly pulls in GPIO drivers without using them.
**To Reproduce**
1. cd samples/subsys/usb/cdc_acm
2. west build --pristine -b native_posix
3. Error at compilation time:
```
-- Configuring done
CMake Error at ../../../../cmake/extensions.cmake:398 (add_library):
No SOURCES given to target: drivers__gpio
```
**Expected behavior**
I expected that the compilation actually worked, as USB seems to be emulated when compiled for native_posix and can be used via the Linux "usbip" command line tool.
BTW, removing the offending line fixes the issue:
1. sed -i '/CONFIG_GPIO/d' prj.conf
2. west build --pristine -b native_posix
**Impact**
Unable to test the sample in native posix mode.
**Environment (please complete the following information):**
- OS: Linux
- git describe --tags: zephyr-v2.4.0-388-g5bda586c64 (which is git HEAD as of now)
| non_main | error in samples subsys usb cdc acm describe the bug the cdc acm sample isn t usable on native posix it pulls needlessly gpio drivers in without using them to reproduce cd samples subsys usb cdc acm west build pristine b native posix error at compilation time configuring done cmake error at cmake extensions cmake add library no sources given to target drivers gpio expected behavior i expected that the compilation actually worked as usb seems to be emulated when compiled for native posix and can be used via the linux usbip command line tool btw removing the offending line fixes the issue sed i config gpio d prj conf west build pristine b native posix impact unable to test the sample in native posix mode environment please complete the following information os linux git describe tags zephyr which is git head as of now | 0 |
107,531 | 9,216,608,376 | IssuesEvent | 2019-03-11 08:38:26 | TEAMMATES/teammates | https://api.github.com/repos/TEAMMATES/teammates | opened | Fix unstable FeedbackQuestionAttributesTest | a-Testing c.Bug d.FirstTimers t-Java |
- **Environment**: da079783605f64265d6ba9324e79768ad613a844
**Steps to reproduce**

1. Run `FeedbackQuestionAttributesTest#testToEntity()`.
2. It will fail if the execution of the test is slow.
**Expected behaviour**
1. The test should pass no matter how slow/fast the test is executed.
**Actual behaviour**
1. It fails sometimes.
**Additional info**
Probably we should just assert not null for `getCreatedAt` and `getUpdatedAt`.
| 1.0 | Fix unstable FeedbackQuestionAttributesTest -
- **Environment**: da079783605f64265d6ba9324e79768ad613a844
**Steps to reproduce**

1. Run `FeedbackQuestionAttributesTest#testToEntity()`.
2. It will fail if the execution of the test is slow.
**Expected behaviour**
1. The test should pass no matter how slow/fast the test is executed.
**Actual behaviour**
1. It fails sometimes.
**Additional info**
Probably we should just assert not null for `getCreatedAt` and `getUpdatedAt`.
| non_main | fix unstable feedbackquestionattributestest environment steps to reproduce run feedbackquestionattributestest testtoentity it will fail if the execution of the test is slow expected behaviour the test should pass no matter how slow fast the test is executed actual behaviour it fails sometimes additional info probably we should just assert not null for getcreatedat and getupdatedat | 0 |
26,774 | 4,019,397,421 | IssuesEvent | 2016-05-16 14:48:34 | mozilla/Community-Gatherings | https://api.github.com/repos/mozilla/Community-Gatherings | opened | Synthesize request from team on where they need George + present draft George schedule | Event Design | - [ ] Get a draft of George's schedule out to the team for reactions | 1.0 | Synthesize request from team on where they need George + present draft George schedule - - [ ] Get a draft of George's schedule out to the team for reactions | non_main | synthesize request from team on where they need george present draft george schedule get a draft of george s schedule out to the team for reactions | 0 |
639,277 | 20,750,476,525 | IssuesEvent | 2022-03-15 06:50:09 | AY2122S2-CS2113-T11-1/tp | https://api.github.com/repos/AY2122S2-CS2113-T11-1/tp | closed | As a hotel manager, I can view room vacancy status | type.Story priority.High | so that I can identify how many rooms are vacant per floor in order to determine how many housekeepers will be assigned to each floor | 1.0 | As a hotel manager, I can view room vacancy status - so that I can identify how many rooms are vacant per floor in order to determine how many housekeepers will be assigned to each floor | non_main | as a hotel manager i can view room vacancy status so that i can identify how many rooms are vacant per floor in order to determine how many housekeepers will be assigned to each floor | 0 |
3,005 | 31,046,452,390 | IssuesEvent | 2023-08-11 00:26:47 | NVIDIA/spark-rapids | https://api.github.com/repos/NVIDIA/spark-rapids | closed | [BUG] NDS query 14 parts 1 and 2 both fail at SF100K | bug reliability | query14_part1 exception:
```
Job aborted due to stage failure: Task 60 in stage 44.0 failed 4 times, most recent failure: Lost task 60.3 in stage 44.0 (TID 16851) (10.153.158.11 executor 5): com.nvidia.spark.rapids.jni.SplitAndRetryOOM: GPU OutOfMemory: could not split inputs and retry
at com.nvidia.spark.rapids.RmmRapidsRetryIterator$NoInputSpliterator.split(RmmRapidsRetryIterator.scala:359)
at com.nvidia.spark.rapids.RmmRapidsRetryIterator$RmmRapidsRetryIterator.next(RmmRapidsRetryIterator.scala:530)
at com.nvidia.spark.rapids.RmmRapidsRetryIterator$RmmRapidsRetryAutoCloseableIterator.next(RmmRapidsRetryIterator.scala:468)
at com.nvidia.spark.rapids.RmmRapidsRetryIterator$.drainSingleWithVerification(RmmRapidsRetryIterator.scala:275)
at com.nvidia.spark.rapids.RmmRapidsRetryIterator$.withRetryNoSplit(RmmRapidsRetryIterator.scala:181)
at org.apache.spark.sql.rapids.execution.BaseHashJoinIterator.createGatherer(GpuHashJoin.scala:305)
at com.nvidia.spark.rapids.SplittableJoinIterator.$anonfun$setupNextGatherer$2(AbstractGpuJoinIterator.scala:239)
at com.nvidia.spark.rapids.Arm$.withResource(Arm.scala:29)
at com.nvidia.spark.rapids.SplittableJoinIterator.$anonfun$setupNextGatherer$1(AbstractGpuJoinIterator.scala:221)
at com.nvidia.spark.rapids.GpuMetric.ns(GpuExec.scala:150)
at com.nvidia.spark.rapids.SplittableJoinIterator.setupNextGatherer(AbstractGpuJoinIterator.scala:221)
at com.nvidia.spark.rapids.AbstractGpuJoinIterator.hasNext(AbstractGpuJoinIterator.scala:95)
at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
at org.apache.spark.sql.rapids.execution.BaseSubHashJoinIterator.$anonfun$hasNext$9(GpuSubPartitionHashJoin.scala:537)
at org.apache.spark.sql.rapids.execution.BaseSubHashJoinIterator.$anonfun$hasNext$9$adapted(GpuSubPartitionHashJoin.scala:537)
at scala.Option.exists(Option.scala:376)
at org.apache.spark.sql.rapids.execution.BaseSubHashJoinIterator.hasNext(GpuSubPartitionHashJoin.scala:537)
at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
at scala.collection.Iterator.foreach(Iterator.scala:943)
at scala.collection.Iterator.foreach$(Iterator.scala:943)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62)
at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49)
at scala.collection.TraversableOnce.to(TraversableOnce.scala:366)
at scala.collection.TraversableOnce.to$(TraversableOnce.scala:364)
at scala.collection.AbstractIterator.to(Iterator.scala:1431)
at scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:358)
at scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:358)
at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1431)
at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:345)
at scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:339)
at scala.collection.AbstractIterator.toArray(Iterator.scala:1431)
at org.apache.spark.rdd.RDD.$anonfun$collect$2(RDD.scala:1030)
at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2254)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:131)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1462)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
```
query14_part2 exception (looks to be same as part 1 exception)
```
Job aborted due to stage failure: Task 60 in stage 60.0 failed 4 times, most recent failure: Lost task 60.3 in stage 60.0 (TID 21368) (10.153.158.16 executor 6): com.nvidia.spark.rapids.jni.SplitAndRetryOOM: GPU OutOfMemory: could not split inputs and retry
at com.nvidia.spark.rapids.RmmRapidsRetryIterator$NoInputSpliterator.split(RmmRapidsRetryIterator.scala:359)
at com.nvidia.spark.rapids.RmmRapidsRetryIterator$RmmRapidsRetryIterator.next(RmmRapidsRetryIterator.scala:530)
at com.nvidia.spark.rapids.RmmRapidsRetryIterator$RmmRapidsRetryAutoCloseableIterator.next(RmmRapidsRetryIterator.scala:468)
at com.nvidia.spark.rapids.RmmRapidsRetryIterator$.drainSingleWithVerification(RmmRapidsRetryIterator.scala:275)
at com.nvidia.spark.rapids.RmmRapidsRetryIterator$.withRetryNoSplit(RmmRapidsRetryIterator.scala:181)
at org.apache.spark.sql.rapids.execution.BaseHashJoinIterator.createGatherer(GpuHashJoin.scala:305)
at com.nvidia.spark.rapids.SplittableJoinIterator.$anonfun$setupNextGatherer$2(AbstractGpuJoinIterator.scala:239)
at com.nvidia.spark.rapids.Arm$.withResource(Arm.scala:29)
at com.nvidia.spark.rapids.SplittableJoinIterator.$anonfun$setupNextGatherer$1(AbstractGpuJoinIterator.scala:221)
at com.nvidia.spark.rapids.GpuMetric.ns(GpuExec.scala:150)
at com.nvidia.spark.rapids.SplittableJoinIterator.setupNextGatherer(AbstractGpuJoinIterator.scala:221)
at com.nvidia.spark.rapids.AbstractGpuJoinIterator.hasNext(AbstractGpuJoinIterator.scala:95)
at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
at org.apache.spark.sql.rapids.execution.BaseSubHashJoinIterator.$anonfun$hasNext$9(GpuSubPartitionHashJoin.scala:537)
at org.apache.spark.sql.rapids.execution.BaseSubHashJoinIterator.$anonfun$hasNext$9$adapted(GpuSubPartitionHashJoin.scala:537)
at scala.Option.exists(Option.scala:376)
at org.apache.spark.sql.rapids.execution.BaseSubHashJoinIterator.hasNext(GpuSubPartitionHashJoin.scala:537)
at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
at scala.collection.Iterator.foreach(Iterator.scala:943)
at scala.collection.Iterator.foreach$(Iterator.scala:943)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62)
at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49)
at scala.collection.TraversableOnce.to(TraversableOnce.scala:366)
at scala.collection.TraversableOnce.to$(TraversableOnce.scala:364)
at scala.collection.AbstractIterator.to(Iterator.scala:1431)
at scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:358)
at scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:358)
at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1431)
at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:345)
at scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:339)
at scala.collection.AbstractIterator.toArray(Iterator.scala:1431)
at org.apache.spark.rdd.RDD.$anonfun$collect$2(RDD.scala:1030)
at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2254)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:131)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1462)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
```
Note that this failure is reproducible even after potential q95 fix for SF30K/SF100K (ref: https://github.com/NVIDIA/spark-rapids/pull/8936). | True | [BUG] NDS query 14 parts 1 and 2 both fail at SF100K - query14_part1 exception:
```
Job aborted due to stage failure: Task 60 in stage 44.0 failed 4 times, most recent failure: Lost task 60.3 in stage 44.0 (TID 16851) (10.153.158.11 executor 5): com.nvidia.spark.rapids.jni.SplitAndRetryOOM: GPU OutOfMemory: could not split inputs and retry
at com.nvidia.spark.rapids.RmmRapidsRetryIterator$NoInputSpliterator.split(RmmRapidsRetryIterator.scala:359)
at com.nvidia.spark.rapids.RmmRapidsRetryIterator$RmmRapidsRetryIterator.next(RmmRapidsRetryIterator.scala:530)
at com.nvidia.spark.rapids.RmmRapidsRetryIterator$RmmRapidsRetryAutoCloseableIterator.next(RmmRapidsRetryIterator.scala:468)
at com.nvidia.spark.rapids.RmmRapidsRetryIterator$.drainSingleWithVerification(RmmRapidsRetryIterator.scala:275)
at com.nvidia.spark.rapids.RmmRapidsRetryIterator$.withRetryNoSplit(RmmRapidsRetryIterator.scala:181)
at org.apache.spark.sql.rapids.execution.BaseHashJoinIterator.createGatherer(GpuHashJoin.scala:305)
at com.nvidia.spark.rapids.SplittableJoinIterator.$anonfun$setupNextGatherer$2(AbstractGpuJoinIterator.scala:239)
at com.nvidia.spark.rapids.Arm$.withResource(Arm.scala:29)
at com.nvidia.spark.rapids.SplittableJoinIterator.$anonfun$setupNextGatherer$1(AbstractGpuJoinIterator.scala:221)
at com.nvidia.spark.rapids.GpuMetric.ns(GpuExec.scala:150)
at com.nvidia.spark.rapids.SplittableJoinIterator.setupNextGatherer(AbstractGpuJoinIterator.scala:221)
at com.nvidia.spark.rapids.AbstractGpuJoinIterator.hasNext(AbstractGpuJoinIterator.scala:95)
at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
at org.apache.spark.sql.rapids.execution.BaseSubHashJoinIterator.$anonfun$hasNext$9(GpuSubPartitionHashJoin.scala:537)
at org.apache.spark.sql.rapids.execution.BaseSubHashJoinIterator.$anonfun$hasNext$9$adapted(GpuSubPartitionHashJoin.scala:537)
at scala.Option.exists(Option.scala:376)
at org.apache.spark.sql.rapids.execution.BaseSubHashJoinIterator.hasNext(GpuSubPartitionHashJoin.scala:537)
at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
at scala.collection.Iterator.foreach(Iterator.scala:943)
at scala.collection.Iterator.foreach$(Iterator.scala:943)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62)
at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49)
at scala.collection.TraversableOnce.to(TraversableOnce.scala:366)
at scala.collection.TraversableOnce.to$(TraversableOnce.scala:364)
at scala.collection.AbstractIterator.to(Iterator.scala:1431)
at scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:358)
at scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:358)
at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1431)
at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:345)
at scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:339)
at scala.collection.AbstractIterator.toArray(Iterator.scala:1431)
at org.apache.spark.rdd.RDD.$anonfun$collect$2(RDD.scala:1030)
at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2254)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:131)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1462)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
```
query14_part2 exception (looks to be same as part 1 exception)
```
Job aborted due to stage failure: Task 60 in stage 60.0 failed 4 times, most recent failure: Lost task 60.3 in stage 60.0 (TID 21368) (10.153.158.16 executor 6): com.nvidia.spark.rapids.jni.SplitAndRetryOOM: GPU OutOfMemory: could not split inputs and retry
at com.nvidia.spark.rapids.RmmRapidsRetryIterator$NoInputSpliterator.split(RmmRapidsRetryIterator.scala:359)
at com.nvidia.spark.rapids.RmmRapidsRetryIterator$RmmRapidsRetryIterator.next(RmmRapidsRetryIterator.scala:530)
at com.nvidia.spark.rapids.RmmRapidsRetryIterator$RmmRapidsRetryAutoCloseableIterator.next(RmmRapidsRetryIterator.scala:468)
at com.nvidia.spark.rapids.RmmRapidsRetryIterator$.drainSingleWithVerification(RmmRapidsRetryIterator.scala:275)
at com.nvidia.spark.rapids.RmmRapidsRetryIterator$.withRetryNoSplit(RmmRapidsRetryIterator.scala:181)
at org.apache.spark.sql.rapids.execution.BaseHashJoinIterator.createGatherer(GpuHashJoin.scala:305)
at com.nvidia.spark.rapids.SplittableJoinIterator.$anonfun$setupNextGatherer$2(AbstractGpuJoinIterator.scala:239)
at com.nvidia.spark.rapids.Arm$.withResource(Arm.scala:29)
at com.nvidia.spark.rapids.SplittableJoinIterator.$anonfun$setupNextGatherer$1(AbstractGpuJoinIterator.scala:221)
at com.nvidia.spark.rapids.GpuMetric.ns(GpuExec.scala:150)
at com.nvidia.spark.rapids.SplittableJoinIterator.setupNextGatherer(AbstractGpuJoinIterator.scala:221)
at com.nvidia.spark.rapids.AbstractGpuJoinIterator.hasNext(AbstractGpuJoinIterator.scala:95)
at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
at org.apache.spark.sql.rapids.execution.BaseSubHashJoinIterator.$anonfun$hasNext$9(GpuSubPartitionHashJoin.scala:537)
at org.apache.spark.sql.rapids.execution.BaseSubHashJoinIterator.$anonfun$hasNext$9$adapted(GpuSubPartitionHashJoin.scala:537)
at scala.Option.exists(Option.scala:376)
at org.apache.spark.sql.rapids.execution.BaseSubHashJoinIterator.hasNext(GpuSubPartitionHashJoin.scala:537)
at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
at scala.collection.Iterator.foreach(Iterator.scala:943)
at scala.collection.Iterator.foreach$(Iterator.scala:943)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62)
at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49)
at scala.collection.TraversableOnce.to(TraversableOnce.scala:366)
at scala.collection.TraversableOnce.to$(TraversableOnce.scala:364)
at scala.collection.AbstractIterator.to(Iterator.scala:1431)
at scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:358)
at scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:358)
at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1431)
at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:345)
at scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:339)
at scala.collection.AbstractIterator.toArray(Iterator.scala:1431)
at org.apache.spark.rdd.RDD.$anonfun$collect$2(RDD.scala:1030)
at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2254)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:131)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1462)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
```
Note that this failure is reproducible even after potential q95 fix for SF30K/SF100K (ref: https://github.com/NVIDIA/spark-rapids/pull/8936). | non_main | nds query parts and both fail at exception job aborted due to stage failure task in stage failed times most recent failure lost task in stage tid executor com nvidia spark rapids jni splitandretryoom gpu outofmemory could not split inputs and retry at com nvidia spark rapids rmmrapidsretryiterator noinputspliterator split rmmrapidsretryiterator scala at com nvidia spark rapids rmmrapidsretryiterator rmmrapidsretryiterator next rmmrapidsretryiterator scala at com nvidia spark rapids rmmrapidsretryiterator rmmrapidsretryautocloseableiterator next rmmrapidsretryiterator scala at com nvidia spark rapids rmmrapidsretryiterator drainsinglewithverification rmmrapidsretryiterator scala at com nvidia spark rapids rmmrapidsretryiterator withretrynosplit rmmrapidsretryiterator scala at org apache spark sql rapids execution basehashjoiniterator creategatherer gpuhashjoin scala at com nvidia spark rapids splittablejoiniterator anonfun setupnextgatherer abstractgpujoiniterator scala at com nvidia spark rapids arm withresource arm scala at com nvidia spark rapids splittablejoiniterator anonfun setupnextgatherer abstractgpujoiniterator scala at com nvidia spark rapids gpumetric ns gpuexec scala at com nvidia spark rapids splittablejoiniterator setupnextgatherer abstractgpujoiniterator scala at com nvidia spark rapids abstractgpujoiniterator hasnext abstractgpujoiniterator scala at scala collection iterator anon hasnext iterator scala at org apache spark sql rapids execution basesubhashjoiniterator anonfun hasnext gpusubpartitionhashjoin scala at org apache spark sql rapids execution basesubhashjoiniterator anonfun hasnext adapted gpusubpartitionhashjoin scala at scala option exists option scala at org apache spark sql rapids execution basesubhashjoiniterator hasnext gpusubpartitionhashjoin scala at scala collection iterator anon 
hasnext iterator scala at scala collection iterator foreach iterator scala at scala collection iterator foreach iterator scala at scala collection abstractiterator foreach iterator scala at scala collection generic growable plus plus eq growable scala at scala collection generic growable plus plus eq growable scala at scala collection mutable arraybuffer plus plus eq arraybuffer scala at scala collection mutable arraybuffer plus plus eq arraybuffer scala at scala collection traversableonce to traversableonce scala at scala collection traversableonce to traversableonce scala at scala collection abstractiterator to iterator scala at scala collection traversableonce tobuffer traversableonce scala at scala collection traversableonce tobuffer traversableonce scala at scala collection abstractiterator tobuffer iterator scala at scala collection traversableonce toarray traversableonce scala at scala collection traversableonce toarray traversableonce scala at scala collection abstractiterator toarray iterator scala at org apache spark rdd rdd anonfun collect rdd scala at org apache spark sparkcontext anonfun runjob sparkcontext scala at org apache spark scheduler resulttask runtask resulttask scala at org apache spark scheduler task run task scala at org apache spark executor executor taskrunner anonfun run executor scala at org apache spark util utils trywithsafefinally utils scala at org apache spark executor executor taskrunner run executor scala at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java exception looks to be same as part exception job aborted due to stage failure task in stage failed times most recent failure lost task in stage tid executor com nvidia spark rapids jni splitandretryoom gpu outofmemory could not split inputs and retry at com nvidia spark rapids rmmrapidsretryiterator noinputspliterator split 
rmmrapidsretryiterator scala at com nvidia spark rapids rmmrapidsretryiterator rmmrapidsretryiterator next rmmrapidsretryiterator scala at com nvidia spark rapids rmmrapidsretryiterator rmmrapidsretryautocloseableiterator next rmmrapidsretryiterator scala at com nvidia spark rapids rmmrapidsretryiterator drainsinglewithverification rmmrapidsretryiterator scala at com nvidia spark rapids rmmrapidsretryiterator withretrynosplit rmmrapidsretryiterator scala at org apache spark sql rapids execution basehashjoiniterator creategatherer gpuhashjoin scala at com nvidia spark rapids splittablejoiniterator anonfun setupnextgatherer abstractgpujoiniterator scala at com nvidia spark rapids arm withresource arm scala at com nvidia spark rapids splittablejoiniterator anonfun setupnextgatherer abstractgpujoiniterator scala at com nvidia spark rapids gpumetric ns gpuexec scala at com nvidia spark rapids splittablejoiniterator setupnextgatherer abstractgpujoiniterator scala at com nvidia spark rapids abstractgpujoiniterator hasnext abstractgpujoiniterator scala at scala collection iterator anon hasnext iterator scala at org apache spark sql rapids execution basesubhashjoiniterator anonfun hasnext gpusubpartitionhashjoin scala at org apache spark sql rapids execution basesubhashjoiniterator anonfun hasnext adapted gpusubpartitionhashjoin scala at scala option exists option scala at org apache spark sql rapids execution basesubhashjoiniterator hasnext gpusubpartitionhashjoin scala at scala collection iterator anon hasnext iterator scala at scala collection iterator foreach iterator scala at scala collection iterator foreach iterator scala at scala collection abstractiterator foreach iterator scala at scala collection generic growable plus plus eq growable scala at scala collection generic growable plus plus eq growable scala at scala collection mutable arraybuffer plus plus eq arraybuffer scala at scala collection mutable arraybuffer plus plus eq arraybuffer scala at scala collection 
traversableonce to traversableonce scala at scala collection traversableonce to traversableonce scala at scala collection abstractiterator to iterator scala at scala collection traversableonce tobuffer traversableonce scala at scala collection traversableonce tobuffer traversableonce scala at scala collection abstractiterator tobuffer iterator scala at scala collection traversableonce toarray traversableonce scala at scala collection traversableonce toarray traversableonce scala at scala collection abstractiterator toarray iterator scala at org apache spark rdd rdd anonfun collect rdd scala at org apache spark sparkcontext anonfun runjob sparkcontext scala at org apache spark scheduler resulttask runtask resulttask scala at org apache spark scheduler task run task scala at org apache spark executor executor taskrunner anonfun run executor scala at org apache spark util utils trywithsafefinally utils scala at org apache spark executor executor taskrunner run executor scala at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java note that this failure is reproducible even after potential fix for ref | 0 |
402,231 | 27,359,822,245 | IssuesEvent | 2023-02-27 15:10:56 | Unstructured-IO/unstructured | https://api.github.com/repos/Unstructured-IO/unstructured | closed | ModuleNotFoundError: No module named 'unstructured.documents.pdf' | bug documentation | Hello,
Following the documentation for parsing a pdf as found here: https://unstructured-io.github.io/unstructured/examples.html#pdf-parsing
It seems that the import statement:
`from unstructured.documents.pdf import PDFDocument
`
results in a not found error.
Indeed, checking unstructured/unstructured/documents, I can't seem to find anything relevant for PDF parsing.
Thank you
| 1.0 | ModuleNotFoundError: No module named 'unstructured.documents.pdf' - Hello,
Following the documentation for parsing a pdf as found here: https://unstructured-io.github.io/unstructured/examples.html#pdf-parsing
It seems that the import statement:
`from unstructured.documents.pdf import PDFDocument
`
results in a not found error.
Indeed, checking unstructured/unstructured/documents, I can't seem to find anything relevant for PDF parsing.
Thank you
| non_main | modulenotfounderror no module named unstructured documents pdf hello following the documentation for parsing a pdf as found here it seems that the import statement from unstructured documents pdf import pdfdocument results in a not found error indeed checking unstructured unstructured documents i can t seem to find anything relevant for pdf parsing thank you | 0 |
1,833 | 6,577,362,666 | IssuesEvent | 2017-09-12 00:23:08 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | list supported devices for ios and nxos | affects_2.1 docs_report networking waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Documentation Report
##### COMPONENT NAME
ios_*
nxos_*
##### ANSIBLE VERSION
```
ansible 2.1.0.0
config file = /home/admin-0/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
N/A
-->
##### OS / ENVIRONMENT
<!---
N/A
-->
##### SUMMARY
It would be good to have a complete list of supported devices. For example, ansible state here https://www.ansible.com/press/red-hat-brings-devops-to-the-network-with-new-ansible-capabilities
that IOS-XE is supported. I have tried running against an IOS-XE 4500X switch with the ios_command module and there is no Python interpreter installed on the switch for ansible to start.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
```
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with high verbosity (-vvvv) -->
```
```
| True | list supported devices for ios and nxos - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Documentation Report
##### COMPONENT NAME
ios_*
nxos_*
##### ANSIBLE VERSION
```
ansible 2.1.0.0
config file = /home/admin-0/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
N/A
-->
##### OS / ENVIRONMENT
<!---
N/A
-->
##### SUMMARY
It would be good to have a complete list of supported devices. For example, ansible state here https://www.ansible.com/press/red-hat-brings-devops-to-the-network-with-new-ansible-capabilities
that IOS-XE is supported. I have tried running against an IOS-XE 4500X switch with the ios_command module and there is no Python interpreter installed on the switch for ansible to start.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
```
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with high verbosity (-vvvv) -->
```
```
| main | list supported devices for ios and nxos issue type documentation report component name ios nxos ansible version ansible config file home admin ansible ansible cfg configured module search path default w o overrides configuration n a os environment n a summary it would be good to have a complete list of supported devices for example ansible state here that ios xe is supported i have tried running against an ios xe switch with the ios command module and there is no python interpreter installed on the switch for ansible to start steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used expected results actual results | 1 |
1,937 | 6,609,886,306 | IssuesEvent | 2017-09-19 15:51:41 | Kristinita/Erics-Green-Room | https://api.github.com/repos/Kristinita/Erics-Green-Room | closed | [Feature request] Отыгрыш только неотвеченных вопросов | need-maintainer | ### 1. Запрос
Было бы неплохо, если б у игрока в комнате была возможность повторно отыграть вопросы, на которые он не ответил при первом отыгрыше.
### 2. Пример желаемого поведения
Игрок отыграл пакет → не ответил, например, на 7 вопросов, → появляется диалог с примерным текстом «Повторить отыгрыш пакета на неотвеченных вопросах?»
Если игрок вводит, к примеру, `1`, → 7 неотвеченных вопросов запускаются по новой.
Если игрок вводит, к примеру, `2`, → появляется главное меню комнаты.
### 3. Аргументация
Эффективное расходование времени. На повторение вопросов, на которые уже ответил, игрок тратит лишнее время.
я уже применял данный метод, когда повторял вопросы после выписки их в ежедневник. По субъективным ощущениям, усвояемость материала улучшилась.
Спасибо. | True | [Feature request] Отыгрыш только неотвеченных вопросов - ### 1. Запрос
Было бы неплохо, если б у игрока в комнате была возможность повторно отыграть вопросы, на которые он не ответил при первом отыгрыше.
### 2. Пример желаемого поведения
Игрок отыграл пакет → не ответил, например, на 7 вопросов, → появляется диалог с примерным текстом «Повторить отыгрыш пакета на неотвеченных вопросах?»
Если игрок вводит, к примеру, `1`, → 7 неотвеченных вопросов запускаются по новой.
Если игрок вводит, к примеру, `2`, → появляется главное меню комнаты.
### 3. Аргументация
Эффективное расходование времени. На повторение вопросов, на которые уже ответил, игрок тратит лишнее время.
я уже применял данный метод, когда повторял вопросы после выписки их в ежедневник. По субъективным ощущениям, усвояемость материала улучшилась.
Спасибо. | main | отыгрыш только неотвеченных вопросов запрос было бы неплохо если б у игрока в комнате была возможность повторно отыграть вопросы на которые он не ответил при первом отыгрыше пример желаемого поведения игрок отыграл пакет → не ответил например на вопросов → появляется диалог с примерным текстом «повторить отыгрыш пакета на неотвеченных вопросах » если игрок вводит к примеру → неотвеченных вопросов запускаются по новой если игрок вводит к примеру → появляется главное меню комнаты аргументация эффективное расходование времени на повторение вопросов на которые уже ответил игрок тратит лишнее время я уже применял данный метод когда повторял вопросы после выписки их в ежедневник по субъективным ощущениям усвояемость материала улучшилась спасибо | 1 |
2,470 | 8,639,904,719 | IssuesEvent | 2018-11-23 22:33:57 | F5OEO/rpitx | https://api.github.com/repos/F5OEO/rpitx | closed | rpitx sometimes hang on exit | V1 related (not maintained) | Sometimes, rpitx could not be stopped by a CTL-C.
by a sudo killall rpitx solve the problem.
Maybe a DMA related issue. Still don't know how to be sure that a DMA channel is available : depends on model and Linux version, and maybe Device Tree.
| True | rpitx sometimes hang on exit - Sometimes, rpitx could not be stopped by a CTL-C.
by a sudo killall rpitx solve the problem.
Maybe a DMA related issue. Still don't know how to be sure that a DMA channel is available : depends on model and Linux version, and maybe Device Tree.
| main | rpitx sometimes hang on exit sometimes rpitx could not be stopped by a ctl c by a sudo killall rpitx solve the problem maybe a dma related issue still don t know how to be sure that a dma channel is available depends on model and linux version and maybe device tree | 1 |
4,338 | 21,883,411,849 | IssuesEvent | 2022-05-19 16:10:11 | carbon-design-system/carbon | https://api.github.com/repos/carbon-design-system/carbon | closed | [Bug]: Possible missing tokens | type: bug 🐛 status: needs triage 🕵️♀️ status: waiting for maintainer response 💬 | ### Package
@carbon/react
### Browser
_No response_
### Package version
v11
### React version
16
### Description
I may be missing something, or very new to sass and carbon 11, but I been grabbing tokens of the `@carbon/styles` as such
```
@use '@carbon/styles';
...
.someClass {
// background-color: $interactive-01; <----carbon v10 with import
background-color: styles.$button-primary; <-----carbon 11 using use
}
```
but for some reason, I cannot find the replaced tokens for
```
interactive-01
interactive-02
interactive-03
interactive-04
```
<img width="739" alt="image" src="https://user-images.githubusercontent.com/8866319/161900435-403d5ab6-5f78-455c-8619-25f206eab9e5.png">
using `@use '@carbon/styles';` or `@use '@carbon/colors;`
I been able to find other colors/styles tokens but not these ones.
### CodeSandbox example
N/A
### Steps to reproduce
See description
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/carbon-design-system/carbon/blob/f555616971a03fd454c0f4daea184adf41fff05b/.github/CODE_OF_CONDUCT.md)
- [X] I checked the [current issues](https://github.com/carbon-design-system/carbon/issues) for duplicate problems | True | [Bug]: Possible missing tokens - ### Package
@carbon/react
### Browser
_No response_
### Package version
v11
### React version
16
### Description
I may be missing something, or very new to sass and carbon 11, but I been grabbing tokens of the `@carbon/styles` as such
```
@use '@carbon/styles';
...
.someClass {
// background-color: $interactive-01; <----carbon v10 with import
background-color: styles.$button-primary; <-----carbon 11 using use
}
```
but for some reason, I cannot find the replaced tokens for
```
interactive-01
interactive-02
interactive-03
interactive-04
```
<img width="739" alt="image" src="https://user-images.githubusercontent.com/8866319/161900435-403d5ab6-5f78-455c-8619-25f206eab9e5.png">
using `@use '@carbon/styles';` or `@use '@carbon/colors;`
I been able to find other colors/styles tokens but not these ones.
### CodeSandbox example
N/A
### Steps to reproduce
See description
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/carbon-design-system/carbon/blob/f555616971a03fd454c0f4daea184adf41fff05b/.github/CODE_OF_CONDUCT.md)
- [X] I checked the [current issues](https://github.com/carbon-design-system/carbon/issues) for duplicate problems | main | possible missing tokens package carbon react browser no response package version react version description i may be missing something or very new to sass and carbon but i been grabbing tokens of the carbon styles as such use carbon styles someclass background color interactive carbon with import background color styles button primary carbon using use but for some reason i cannot find the replaced tokens for interactive interactive interactive interactive img width alt image src using use carbon styles or use carbon colors i been able to find other colors styles tokens but not these ones codesandbox example n a steps to reproduce see description code of conduct i agree to follow this project s i checked the for duplicate problems | 1 |
2,640 | 8,960,233,824 | IssuesEvent | 2019-01-28 04:34:52 | chocolatey/chocolatey-package-requests | https://api.github.com/repos/chocolatey/chocolatey-package-requests | closed | RFM - DuckieTV & DuckieTV Nightly | Status: Available For Maintainer(s) | Looking for someone to take over this package https://chocolatey.org/packages/duckietv in accordance of https://github.com/JourneyOver/chocolatey-packages/issues/19#issuecomment-449665066
I honestly just don't feel like maintaining it anymore, and duckietv nightly was just having issues to begin with; with pushing up to chocolatey due to an upstream error (though that should hopefully be fixed eventually)..
If anyone fancies trying to take a crack at running with this package let me know and I will gladly send it your way. | True | RFM - DuckieTV & DuckieTV Nightly - Looking for someone to take over this package https://chocolatey.org/packages/duckietv in accordance of https://github.com/JourneyOver/chocolatey-packages/issues/19#issuecomment-449665066
I honestly just don't feel like maintaining it anymore, and duckietv nightly was just having issues to begin with; with pushing up to chocolatey due to an upstream error (though that should hopefully be fixed eventually)..
If anyone fancies trying to take a crack at running with this package let me know and I will gladly send it your way. | main | rfm duckietv duckietv nightly looking for someone to take over this package in accordance of i honestly just don t feel like maintaining it anymore and duckietv nightly was just having issues to begin with with pushing up to chocolatey due to an upstream error though that should hopefully be fixed eventually if anyone fancies trying to take a crack at running with this package let me know and i will gladly send it your way | 1 |
467,343 | 13,446,230,469 | IssuesEvent | 2020-09-08 12:40:05 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | [0.9.0 staging-1488] Crash for investigation | Priority: Medium Status: Investigate | 1. It happened when I press Draft Law button:
```
erver encountered an exception:
<size=60.00%>Exception: KeyNotFoundException
Message:The given key was not present in the dictionary.
Source:System.Collections.Immutable
System.Collections.Generic.KeyNotFoundException: The given key was not present in the dictionary.
at System.Collections.Immutable.ImmutableDictionary`2.get_Item(TKey key)
at Eco.Core.Controller.ControllerManager.GetControllerInstanceInfo(IController controller)
at Eco.Core.Controller.ControllerExtensions.SubscribeAllRecursive(ISubscriptions subs, IController controller, Action`2 changedCallback, Func`2 filter)
at Eco.Core.Controller.ControllerExtensions.SubscribeAllRecursive(ISubscriptions subs, IController controller, Action`2 changedCallback, Func`2 filter)
at Eco.Core.Controller.ControllerExtensions.SubscribeAllRecursive(ISubscriptions subs, IController controller, Action`2 changedCallback, Func`2 filter)
at Eco.Gameplay.Systems.ViewEditor.UpdateValidity(PropertyInfo prop, Object objParent)
at Eco.Gameplay.Systems.ViewEditor.<Setup>b__32_0()
at Eco.Shared.View.SubscriptionsExtensions.SubscribeAndCall(ISubscriptions ss, IObservable s, String propname, Action changedCallback)
at Eco.Gameplay.Systems.ViewEditor.Setup(User user, ISaveablePlugin toSave, String buttonText, String overrideTitle)
at Eco.Gameplay.Systems.ViewEditor..ctor(User user, IController toEdit, ISaveablePlugin toSave, String buttonText, String overrideTitle, Type restrictToType, Boolean readOnly)
at Eco.Gameplay.Systems.ViewEditor.Edit(User user, IController controller, ISaveablePlugin toSave, Action`1 onSubmit, String buttonText, String overrideTitle, String uiStyle, Type restrictToType, Boolean readOnly)
at Eco.Gameplay.Components.CivicObjectComponent.Edit(Player player, Int32 index)</size>
```
[Crash start to draft Law.txt](https://github.com/StrangeLoopGames/EcoIssues/files/4394606/Crash.start.to.draft.Law.txt)
2. It happened when I placed Capitol (after reconnect after crash capitol was placed and working good.)
```
Server encountered an exception:
<size=60,00%>Exception: ArgumentException
Message:An element with the same key but a different value already exists. Key: 7186
Source:System.Collections.Immutable
System.ArgumentException: An element with the same key but a different value already exists. Key: 7186
at System.Collections.Immutable.ImmutableDictionary`2.HashBucket.Add(TKey key, TValue value, IEqualityComparer`1 keyOnlyComparer, IEqualityComparer`1 valueComparer, KeyCollisionBehavior behavior, OperationResult& result)
at System.Collections.Immutable.ImmutableDictionary`2.Add(TKey key, TValue value, KeyCollisionBehavior behavior, MutationInput origin)
at System.Collections.Immutable.ImmutableDictionary`2.Add(TKey key, TValue value)
at Eco.Core.Utils.ImmutableHelper.ApplyImmutable[T](T& original, Func`2 apply)
at Eco.Core.Controller.ControllerManager.TryBindController(IController controller, IMvcNetClient boundClient)
at Eco.Core.Controller.ControllerManager.PackageController(IController controller, IMvcNetClient boundClient)
at Eco.Core.Controller.ControllerManager.PackageValue(IMvcNetClient boundClient, Object value, Int32 nameID)
at Eco.Core.Controller.ControllerManager.PackageMember(IMvcNetClient boundClient, IController controller, Int32 nameID)
at Eco.Core.Controller.ControllerManager.AddPendingView(IController controller, IMvcNetClient client)
at Eco.Core.Controller.ControllerManager.PackageController(IController controller, IMvcNetClient boundClient)
at Eco.Core.Controller.ControllerManager.PackageValue(IMvcNetClient boundClient, Object value, Int32 nameID)
at Eco.Core.Controller.ControllerManager.PackageMember(IMvcNetClient boundClient, IController controller, Int32 nameID)
at Eco.Core.Controller.ControllerManager.AddPendingView(IController controller, IMvcNetClient client)
at Eco.Core.Controller.ControllerManager.PackageController(IController controller, IMvcNetClient boundClient)
at Eco.Shared.Serialization.BsonManipulator.ToBson(Object val, INetClient client, Boolean useReflection)
at Eco.Shared.Utils.EnumerableExtensions.ToBson(IEnumerable enumeration, INetClient client, Boolean doAllParams)
at Eco.Shared.Serialization.BsonManipulator.ToBson(Object val, INetClient client, Boolean useReflection)
at Eco.Shared.Serialization.BsonManipulator.ToBson(IDictionary dictionary, INetClient client, Boolean useReflection)
at Eco.Shared.Serialization.BsonManipulator.ToBson(Object val, INetClient client, Boolean useReflection)
at Eco.Core.Controller.ControllerManager.PackageValue(IMvcNetClient boundClient, Object value, Int32 nameID)
at Eco.Core.Controller.ControllerManager.PackageMember(IMvcNetClient boundClient, IController controller, Int32 nameID)
at Eco.Core.Controller.ControllerManager.GetChanges(IMvcNetClient client)
at Eco.Plugins.Networking.Client.Update()
at Eco.Plugins.Networking.Client.<.ctor>b__61_0()</size>
```
[Crash place capitol.txt](https://github.com/StrangeLoopGames/EcoIssues/files/4394605/Crash.place.capitol.txt) | 1.0 | [0.9.0 staging-1488] Crash for investigation - 1. It happened when I press Draft Law button:
```
erver encountered an exception:
<size=60.00%>Exception: KeyNotFoundException
Message:The given key was not present in the dictionary.
Source:System.Collections.Immutable
System.Collections.Generic.KeyNotFoundException: The given key was not present in the dictionary.
at System.Collections.Immutable.ImmutableDictionary`2.get_Item(TKey key)
at Eco.Core.Controller.ControllerManager.GetControllerInstanceInfo(IController controller)
at Eco.Core.Controller.ControllerExtensions.SubscribeAllRecursive(ISubscriptions subs, IController controller, Action`2 changedCallback, Func`2 filter)
at Eco.Core.Controller.ControllerExtensions.SubscribeAllRecursive(ISubscriptions subs, IController controller, Action`2 changedCallback, Func`2 filter)
at Eco.Core.Controller.ControllerExtensions.SubscribeAllRecursive(ISubscriptions subs, IController controller, Action`2 changedCallback, Func`2 filter)
at Eco.Gameplay.Systems.ViewEditor.UpdateValidity(PropertyInfo prop, Object objParent)
at Eco.Gameplay.Systems.ViewEditor.<Setup>b__32_0()
at Eco.Shared.View.SubscriptionsExtensions.SubscribeAndCall(ISubscriptions ss, IObservable s, String propname, Action changedCallback)
at Eco.Gameplay.Systems.ViewEditor.Setup(User user, ISaveablePlugin toSave, String buttonText, String overrideTitle)
at Eco.Gameplay.Systems.ViewEditor..ctor(User user, IController toEdit, ISaveablePlugin toSave, String buttonText, String overrideTitle, Type restrictToType, Boolean readOnly)
at Eco.Gameplay.Systems.ViewEditor.Edit(User user, IController controller, ISaveablePlugin toSave, Action`1 onSubmit, String buttonText, String overrideTitle, String uiStyle, Type restrictToType, Boolean readOnly)
at Eco.Gameplay.Components.CivicObjectComponent.Edit(Player player, Int32 index)</size>
```
[Crash start to draft Law.txt](https://github.com/StrangeLoopGames/EcoIssues/files/4394606/Crash.start.to.draft.Law.txt)
2. It happened when I placed Capitol (after reconnect after crash capitol was placed and working good.)
```
Server encountered an exception:
<size=60,00%>Exception: ArgumentException
Message:An element with the same key but a different value already exists. Key: 7186
Source:System.Collections.Immutable
System.ArgumentException: An element with the same key but a different value already exists. Key: 7186
at System.Collections.Immutable.ImmutableDictionary`2.HashBucket.Add(TKey key, TValue value, IEqualityComparer`1 keyOnlyComparer, IEqualityComparer`1 valueComparer, KeyCollisionBehavior behavior, OperationResult& result)
at System.Collections.Immutable.ImmutableDictionary`2.Add(TKey key, TValue value, KeyCollisionBehavior behavior, MutationInput origin)
at System.Collections.Immutable.ImmutableDictionary`2.Add(TKey key, TValue value)
at Eco.Core.Utils.ImmutableHelper.ApplyImmutable[T](T& original, Func`2 apply)
at Eco.Core.Controller.ControllerManager.TryBindController(IController controller, IMvcNetClient boundClient)
at Eco.Core.Controller.ControllerManager.PackageController(IController controller, IMvcNetClient boundClient)
at Eco.Core.Controller.ControllerManager.PackageValue(IMvcNetClient boundClient, Object value, Int32 nameID)
at Eco.Core.Controller.ControllerManager.PackageMember(IMvcNetClient boundClient, IController controller, Int32 nameID)
at Eco.Core.Controller.ControllerManager.AddPendingView(IController controller, IMvcNetClient client)
at Eco.Core.Controller.ControllerManager.PackageController(IController controller, IMvcNetClient boundClient)
at Eco.Core.Controller.ControllerManager.PackageValue(IMvcNetClient boundClient, Object value, Int32 nameID)
at Eco.Core.Controller.ControllerManager.PackageMember(IMvcNetClient boundClient, IController controller, Int32 nameID)
at Eco.Core.Controller.ControllerManager.AddPendingView(IController controller, IMvcNetClient client)
at Eco.Core.Controller.ControllerManager.PackageController(IController controller, IMvcNetClient boundClient)
at Eco.Shared.Serialization.BsonManipulator.ToBson(Object val, INetClient client, Boolean useReflection)
at Eco.Shared.Utils.EnumerableExtensions.ToBson(IEnumerable enumeration, INetClient client, Boolean doAllParams)
at Eco.Shared.Serialization.BsonManipulator.ToBson(Object val, INetClient client, Boolean useReflection)
at Eco.Shared.Serialization.BsonManipulator.ToBson(IDictionary dictionary, INetClient client, Boolean useReflection)
at Eco.Shared.Serialization.BsonManipulator.ToBson(Object val, INetClient client, Boolean useReflection)
at Eco.Core.Controller.ControllerManager.PackageValue(IMvcNetClient boundClient, Object value, Int32 nameID)
at Eco.Core.Controller.ControllerManager.PackageMember(IMvcNetClient boundClient, IController controller, Int32 nameID)
at Eco.Core.Controller.ControllerManager.GetChanges(IMvcNetClient client)
at Eco.Plugins.Networking.Client.Update()
at Eco.Plugins.Networking.Client.<.ctor>b__61_0()</size>
```
[Crash place capitol.txt](https://github.com/StrangeLoopGames/EcoIssues/files/4394605/Crash.place.capitol.txt) | non_main | crash for investigation it happened when i press draft law button erver encountered an exception exception keynotfoundexception message the given key was not present in the dictionary source system collections immutable system collections generic keynotfoundexception the given key was not present in the dictionary at system collections immutable immutabledictionary get item tkey key at eco core controller controllermanager getcontrollerinstanceinfo icontroller controller at eco core controller controllerextensions subscribeallrecursive isubscriptions subs icontroller controller action changedcallback func filter at eco core controller controllerextensions subscribeallrecursive isubscriptions subs icontroller controller action changedcallback func filter at eco core controller controllerextensions subscribeallrecursive isubscriptions subs icontroller controller action changedcallback func filter at eco gameplay systems vieweditor updatevalidity propertyinfo prop object objparent at eco gameplay systems vieweditor b at eco shared view subscriptionsextensions subscribeandcall isubscriptions ss iobservable s string propname action changedcallback at eco gameplay systems vieweditor setup user user isaveableplugin tosave string buttontext string overridetitle at eco gameplay systems vieweditor ctor user user icontroller toedit isaveableplugin tosave string buttontext string overridetitle type restricttotype boolean readonly at eco gameplay systems vieweditor edit user user icontroller controller isaveableplugin tosave action onsubmit string buttontext string overridetitle string uistyle type restricttotype boolean readonly at eco gameplay components civicobjectcomponent edit player player index it happened when i placed capitol after reconnect after crash capitol was placed and working good server encountered an exception exception argumentexception 
message an element with the same key but a different value already exists key source system collections immutable system argumentexception an element with the same key but a different value already exists key at system collections immutable immutabledictionary hashbucket add tkey key tvalue value iequalitycomparer keyonlycomparer iequalitycomparer valuecomparer keycollisionbehavior behavior operationresult result at system collections immutable immutabledictionary add tkey key tvalue value keycollisionbehavior behavior mutationinput origin at system collections immutable immutabledictionary add tkey key tvalue value at eco core utils immutablehelper applyimmutable t original func apply at eco core controller controllermanager trybindcontroller icontroller controller imvcnetclient boundclient at eco core controller controllermanager packagecontroller icontroller controller imvcnetclient boundclient at eco core controller controllermanager packagevalue imvcnetclient boundclient object value nameid at eco core controller controllermanager packagemember imvcnetclient boundclient icontroller controller nameid at eco core controller controllermanager addpendingview icontroller controller imvcnetclient client at eco core controller controllermanager packagecontroller icontroller controller imvcnetclient boundclient at eco core controller controllermanager packagevalue imvcnetclient boundclient object value nameid at eco core controller controllermanager packagemember imvcnetclient boundclient icontroller controller nameid at eco core controller controllermanager addpendingview icontroller controller imvcnetclient client at eco core controller controllermanager packagecontroller icontroller controller imvcnetclient boundclient at eco shared serialization bsonmanipulator tobson object val inetclient client boolean usereflection at eco shared utils enumerableextensions tobson ienumerable enumeration inetclient client boolean doallparams at eco shared serialization 
bsonmanipulator tobson object val inetclient client boolean usereflection at eco shared serialization bsonmanipulator tobson idictionary dictionary inetclient client boolean usereflection at eco shared serialization bsonmanipulator tobson object val inetclient client boolean usereflection at eco core controller controllermanager packagevalue imvcnetclient boundclient object value nameid at eco core controller controllermanager packagemember imvcnetclient boundclient icontroller controller nameid at eco core controller controllermanager getchanges imvcnetclient client at eco plugins networking client update at eco plugins networking client b | 0 |
16,871 | 22,151,690,069 | IssuesEvent | 2022-06-03 17:33:35 | hashgraph/hedera-mirror-node | https://api.github.com/repos/hashgraph/hedera-mirror-node | opened | Release checklist 0.58 | enhancement P1 process | ### Problem
We need a checklist to verify the release is rolled out successfully.
### Solution
## Preparation
- [ ] Milestone field populated on [relevant issues](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aclosed+no%3Amilestone+sort%3Aupdated-desc)
- [x] Nothing [open](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aopen+sort%3Aupdated-desc+milestone%3A0.58.0) for milestone
- [x] GitHub checks for branch are passing
- [x] Automated Kubernetes deployment successful
- [x] Tag release
- [x] Upload release artifacts
- [x] Publish release
## Integration
- [ ] Deploy to VM
## Performance
- [ ] Deploy to Kubernetes
- [ ] Deploy to VM
- [ ] gRPC API performance tests
- [ ] Importer performance tests
- [ ] REST API performance tests
- [ ] Migrations tested against mainnet clone
## Previewnet
- [ ] Deploy to VM
## Testnet
- [ ] Deploy to VM
## Mainnet
- [ ] Deploy to Kubernetes EU
- [ ] Deploy to Kubernetes NA
- [ ] Deploy to VM
### Alternatives
_No response_ | 1.0 | Release checklist 0.58 - ### Problem
We need a checklist to verify the release is rolled out successfully.
### Solution
## Preparation
- [ ] Milestone field populated on [relevant issues](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aclosed+no%3Amilestone+sort%3Aupdated-desc)
- [x] Nothing [open](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aopen+sort%3Aupdated-desc+milestone%3A0.58.0) for milestone
- [x] GitHub checks for branch are passing
- [x] Automated Kubernetes deployment successful
- [x] Tag release
- [x] Upload release artifacts
- [x] Publish release
## Integration
- [ ] Deploy to VM
## Performance
- [ ] Deploy to Kubernetes
- [ ] Deploy to VM
- [ ] gRPC API performance tests
- [ ] Importer performance tests
- [ ] REST API performance tests
- [ ] Migrations tested against mainnet clone
## Previewnet
- [ ] Deploy to VM
## Testnet
- [ ] Deploy to VM
## Mainnet
- [ ] Deploy to Kubernetes EU
- [ ] Deploy to Kubernetes NA
- [ ] Deploy to VM
### Alternatives
_No response_ | non_main | release checklist problem we need a checklist to verify the release is rolled out successfully solution preparation milestone field populated on nothing for milestone github checks for branch are passing automated kubernetes deployment successful tag release upload release artifacts publish release integration deploy to vm performance deploy to kubernetes deploy to vm grpc api performance tests importer performance tests rest api performance tests migrations tested against mainnet clone previewnet deploy to vm testnet deploy to vm mainnet deploy to kubernetes eu deploy to kubernetes na deploy to vm alternatives no response | 0 |
89,200 | 25,601,992,129 | IssuesEvent | 2022-12-01 21:07:04 | NixOS/nixpkgs | https://api.github.com/repos/NixOS/nixpkgs | closed | Baserow is broken | 0.kind: build failure | ### Steps To Reproduce
Steps to reproduce the behavior:
1. build `baserow`
2. it will err out with missing dependency: `uvloop`
### Build log
```
[...]
Finished executing setuptoolsBuildPhase
installing
Executing pipInstallPhase
/build/source/backend/dist /build/source/backend
Processing ./baserow-1.10.2-py3-none-any.whl
ERROR: Could not find a version that satisfies the requirement uvloop (from baserow) (from versions: none)
ERROR: No matching distribution found for uvloop
```
### Additional context
It seems I am building the 1.10.2, but it was initialized at 1.10.1 in 6323269208f036bc6c4140fc72251f8505bf8073 and I do not find the upgrade commit, I do not understand if it was a mistake.
### Notify maintainers
@onny
### Metadata
Please run `nix-shell -p nix-info --run "nix-info -m"` and paste the result.
```console
❯ nsp nix-info --run "nix-info -m"
- system: `"x86_64-linux"`
- host os: `Linux 5.15.58, NixOS, 22.11 (Raccoon), 22.11pre402830.5e804cd8a27`
- multi-user?: `yes`
- sandbox: `yes`
- version: `nix-env (Nix) 2.10.3`
- channels(root): `"home-manager, nixos, sops-nix"`
- channels(raito): `"home-manager, nixgl, nixpkgs-21.11pre319254.b5182c214fa"`
- nixpkgs: `/nix/var/nix/profiles/per-user/root/channels/nixos`
``` | 1.0 | Baserow is broken - ### Steps To Reproduce
Steps to reproduce the behavior:
1. build `baserow`
2. it will err out with missing dependency: `uvloop`
### Build log
```
[...]
Finished executing setuptoolsBuildPhase
installing
Executing pipInstallPhase
/build/source/backend/dist /build/source/backend
Processing ./baserow-1.10.2-py3-none-any.whl
ERROR: Could not find a version that satisfies the requirement uvloop (from baserow) (from versions: none)
ERROR: No matching distribution found for uvloop
```
### Additional context
It seems I am building the 1.10.2, but it was initialized at 1.10.1 in 6323269208f036bc6c4140fc72251f8505bf8073 and I do not find the upgrade commit, I do not understand if it was a mistake.
### Notify maintainers
@onny
### Metadata
Please run `nix-shell -p nix-info --run "nix-info -m"` and paste the result.
```console
❯ nsp nix-info --run "nix-info -m"
- system: `"x86_64-linux"`
- host os: `Linux 5.15.58, NixOS, 22.11 (Raccoon), 22.11pre402830.5e804cd8a27`
- multi-user?: `yes`
- sandbox: `yes`
- version: `nix-env (Nix) 2.10.3`
- channels(root): `"home-manager, nixos, sops-nix"`
- channels(raito): `"home-manager, nixgl, nixpkgs-21.11pre319254.b5182c214fa"`
- nixpkgs: `/nix/var/nix/profiles/per-user/root/channels/nixos`
``` | non_main | baserow is broken steps to reproduce steps to reproduce the behavior build baserow it will err out with missing dependency uvloop build log finished executing setuptoolsbuildphase installing executing pipinstallphase build source backend dist build source backend processing baserow none any whl error could not find a version that satisfies the requirement uvloop from baserow from versions none error no matching distribution found for uvloop additional context it seems i am building the but it was initialized at in and i do not find the upgrade commit i do not understand if it was a mistake notify maintainers onny metadata please run nix shell p nix info run nix info m and paste the result console ❯ nsp nix info run nix info m system linux host os linux nixos raccoon multi user yes sandbox yes version nix env nix channels root home manager nixos sops nix channels raito home manager nixgl nixpkgs nixpkgs nix var nix profiles per user root channels nixos | 0 |
4,361 | 22,062,138,654 | IssuesEvent | 2022-05-30 19:33:11 | Clever-ISA/Clever-ISA | https://api.github.com/repos/Clever-ISA/Clever-ISA | opened | Tracking Issue for Version 1.0 | S-blocked-on-maintainer C-tracking-issue V-1.0 | This tracks the review and finalization of Version 1.0 of the Clever-ISA Specification.
The following extensions will be included in version 1.0:
- X-main (#2)
- X-float (#3)
- X-vector (#1)
- X-float-ext (#20)
The following technical documents will accompany the publication of version 1.0:
- D-abi (#5)
- D-asm (#22)
- D-stability-policy (#17)
- D-toolchain (#23) | True | Tracking Issue for Version 1.0 - This tracks the review and finalization of Version 1.0 of the Clever-ISA Specification.
The following extensions will be included in version 1.0:
- X-main (#2)
- X-float (#3)
- X-vector (#1)
- X-float-ext (#20)
The following technical documents will accompany the publication of version 1.0:
- D-abi (#5)
- D-asm (#22)
- D-stability-policy (#17)
- D-toolchain (#23) | main | tracking issue for version this tracks the review and finalization of version of the clever isa specification the following extensions will be included in version x main x float x vector x float ext the following technical documents will accompany the publication of version d abi d asm d stability policy d toolchain | 1 |
214,873 | 24,121,050,097 | IssuesEvent | 2022-09-20 18:44:43 | Azure/AKS | https://api.github.com/repos/Azure/AKS | closed | AKS in VNET behind company HTTP proxy | enhancement security feature-request resolution/shipped | I need to deploy AKS into a custom VNET, that is behind a company HTTP proxy to access the public internet.
With ACS or acs-engine I couldn't get this working out-of-the-box as the cloud-init scripts need internet access before I'm able to set the http_proxy on all nodes.
Is this possible with AKS once #27 is supported? | True | AKS in VNET behind company HTTP proxy - I need to deploy AKS into a custom VNET, that is behind a company HTTP proxy to access the public internet.
With ACS or acs-engine I couldn't get this working out-of-the-box as the cloud-init scripts need internet access before I'm able to set the http_proxy on all nodes.
Is this possible with AKS once #27 is supported? | non_main | aks in vnet behind company http proxy i need to deploy aks into a custom vnet that is behind a company http proxy to access the public internet with acs or acs engine i couldn t get this working out of the box as the cloud init scripts need internet access before i m able to set the http proxy on all nodes is this possible with aks once is supported | 0 |
1,756 | 6,574,983,455 | IssuesEvent | 2017-09-11 14:41:24 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | Support for Logon to Amazon EC2 Container Registry | affects_2.1 cloud docker feature_idea waiting_on_maintainer | ##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Feature Idea
##### COMPONENT NAME
docker_login
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
2.1.1.0
```
##### SUMMARY
<!--- Explain the problem briefly -->
As far as I know, the only way to logon to (and pull docker images from / push to) [Amazon ECR](https://aws.amazon.com/de/ecr/) is via the shell module. It would be nice if the docker_login module supports logon to ECR so that further ansible docker tasks can directly work with ECR.
This is how I handle this currently:
```
- name: ECR login
shell: "$(aws ecr get-login --region eu-central-1)"
- name: Pull image from ECR
shell: "docker pull myid.dkr.ecr.eu-central-1.amazonaws.com/my-app:latest"
```
The output from `aws ecr get-login` is:
```
docker login -u AWS -p [VERY-LONG-PASSWORD] -e none https://myid.dkr.ecr.eu-central-1.amazonaws.com
```
The generated password is valid for 12 hours.
Of course, I could use awk to get the password from the output and then use docker_login. But the output format may change...
| True | Support for Logon to Amazon EC2 Container Registry - ##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Feature Idea
##### COMPONENT NAME
docker_login
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
2.1.1.0
```
##### SUMMARY
<!--- Explain the problem briefly -->
As far as I know, the only way to logon to (and pull docker images from / push to) [Amazon ECR](https://aws.amazon.com/de/ecr/) is via the shell module. It would be nice if the docker_login module supports logon to ECR so that further ansible docker tasks can directly work with ECR.
This is how I handle this currently:
```
- name: ECR login
shell: "$(aws ecr get-login --region eu-central-1)"
- name: Pull image from ECR
shell: "docker pull myid.dkr.ecr.eu-central-1.amazonaws.com/my-app:latest"
```
The output from `aws ecr get-login` is:
```
docker login -u AWS -p [VERY-LONG-PASSWORD] -e none https://myid.dkr.ecr.eu-central-1.amazonaws.com
```
The generated password is valid for 12 hours.
Of course, I could use awk to get the password from the output and then use docker_login. But the output format may change...
| main | support for logon to amazon container registry issue type feature idea component name docker login ansible version summary as far as i know the only way to logon to and pull docker images from push to is via the shell module it would be nice if the docker login module supports logon to ecr so that further ansible docker tasks can directly work with ecr this is how i handle this currently name ecr login shell aws ecr get login region eu central name pull image from ecr shell docker pull myid dkr ecr eu central amazonaws com my app latest the output from aws ecr get login is docker login u aws p e none the generated password is valid for hours of course i could use awk to get the password from the output and then use docker login but the output format may change | 1 |
868 | 4,536,112,575 | IssuesEvent | 2016-09-08 19:23:55 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | get_url fails with some ftp servers | bug_report waiting_on_maintainer | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
get_url
##### ANSIBLE VERSION
```
ansible 2.0.2.0
config file =
configured module search path = Default w/o overrides
```
##### CONFIGURATION
Ansible.cfg does not exist. So it is using all default values.
##### OS / ENVIRONMENT
I think that is not platform-specific.
But I have tested un ubuntu 14.04 and CentOS 6.
##### SUMMARY
The get_url module fails with "fatal" error with some ftp servers.
This is the error shown:
```
fatal: [localhost]: FAILED! => {"changed": false, "dest": "/tmp", "failed": true, "gid": 0, "group": "root", "mode": "01777", "msg": "Request failed", "owner": "root", "response": "OK (282593 bytes)", "size": 4096, "state": "directory", "status_code": null, "uid": 0, "url": "ftp://ftp.surfsara.nl/pub/outgoing/pbs_python-4.6.0.tar.gz"}
```
##### STEPS TO REPRODUCE
Create a simple playbook like that and call ansible-playbook:
```
- hosts: localhost
connection: local
tasks:
- get_url: url=ftp://ftp.surfsara.nl/pub/outgoing/pbs_python-4.6.0.tar.gz dest=/tmp
# the next one also fails
- get_url: url=ftp://mirror.cc.columbia.edu/pub/software/apache/hadoop/core/hadoop-1.2.1//hadoop-1.2.1.tar.gz dest=/tmp
```
##### EXPECTED RESULTS
Correctly download the file and return this output:
```
PLAY [localhost] ***************************************************************
TASK [setup] *******************************************************************
ok: [localhost]
TASK [get_url] *****************************************************************
changed: [localhost]
PLAY RECAP *********************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0
```
##### ACTUAL RESULTS
```
PLAY [localhost] ***************************************************************
TASK [setup] *******************************************************************
ok: [localhost]
TASK [get_url] *****************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "dest": "/tmp", "failed": true, "gid": 0, "group": "root", "mode": "01777", "msg": "Request failed", "owner": "root", "response": "OK (282593 bytes)", "size": 4096, "state": "directory", "status_code": null, "uid": 0, "url": "ftp://ftp.surfsara.nl/pub/outgoing/pbs_python-4.6.0.tar.gz"}
NO MORE HOSTS LEFT *************************************************************
to retry, use: --limit @test.retry
PLAY RECAP *********************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=1
```
| True | get_url fails with some ftp servers - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
get_url
##### ANSIBLE VERSION
```
ansible 2.0.2.0
config file =
configured module search path = Default w/o overrides
```
##### CONFIGURATION
Ansible.cfg does not exist. So it is using all default values.
##### OS / ENVIRONMENT
I think that is not platform-specific.
But I have tested un ubuntu 14.04 and CentOS 6.
##### SUMMARY
The get_url module fails with "fatal" error with some ftp servers.
This is the error shown:
```
fatal: [localhost]: FAILED! => {"changed": false, "dest": "/tmp", "failed": true, "gid": 0, "group": "root", "mode": "01777", "msg": "Request failed", "owner": "root", "response": "OK (282593 bytes)", "size": 4096, "state": "directory", "status_code": null, "uid": 0, "url": "ftp://ftp.surfsara.nl/pub/outgoing/pbs_python-4.6.0.tar.gz"}
```
##### STEPS TO REPRODUCE
Create a simple playbook like that and call ansible-playbook:
```
- hosts: localhost
connection: local
tasks:
- get_url: url=ftp://ftp.surfsara.nl/pub/outgoing/pbs_python-4.6.0.tar.gz dest=/tmp
# the next one also fails
- get_url: url=ftp://mirror.cc.columbia.edu/pub/software/apache/hadoop/core/hadoop-1.2.1//hadoop-1.2.1.tar.gz dest=/tmp
```
##### EXPECTED RESULTS
Correctly download the file and return this output:
```
PLAY [localhost] ***************************************************************
TASK [setup] *******************************************************************
ok: [localhost]
TASK [get_url] *****************************************************************
changed: [localhost]
PLAY RECAP *********************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0
```
##### ACTUAL RESULTS
```
PLAY [localhost] ***************************************************************
TASK [setup] *******************************************************************
ok: [localhost]
TASK [get_url] *****************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "dest": "/tmp", "failed": true, "gid": 0, "group": "root", "mode": "01777", "msg": "Request failed", "owner": "root", "response": "OK (282593 bytes)", "size": 4096, "state": "directory", "status_code": null, "uid": 0, "url": "ftp://ftp.surfsara.nl/pub/outgoing/pbs_python-4.6.0.tar.gz"}
NO MORE HOSTS LEFT *************************************************************
to retry, use: --limit @test.retry
PLAY RECAP *********************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=1
```
| main | get url fails with some ftp servers issue type bug report component name get url ansible version ansible config file configured module search path default w o overrides configuration ansible cfg does not exist so it is using all default values os environment i think that is not platform specific but i have tested un ubuntu and centos summary the get url module fails with fatal error with some ftp servers this is the error shown fatal failed changed false dest tmp failed true gid group root mode msg request failed owner root response ok bytes size state directory status code null uid url ftp ftp surfsara nl pub outgoing pbs python tar gz steps to reproduce create a simple playbook like that and call ansible playbook hosts localhost connection local tasks get url url ftp ftp surfsara nl pub outgoing pbs python tar gz dest tmp the next one also fails get url url ftp mirror cc columbia edu pub software apache hadoop core hadoop hadoop tar gz dest tmp expected results correctly download the file and return this output play task ok task changed play recap localhost ok changed unreachable failed actual results play task ok task fatal failed changed false dest tmp failed true gid group root mode msg request failed owner root response ok bytes size state directory status code null uid url ftp ftp surfsara nl pub outgoing pbs python tar gz no more hosts left to retry use limit test retry play recap localhost ok changed unreachable failed | 1 |
3,047 | 11,372,575,531 | IssuesEvent | 2020-01-28 02:24:29 | openwrt/packages | https://api.github.com/repos/openwrt/packages | closed | vim: vim-fuller package should install vimdiff symlink | waiting for maintainer | Maintainer: @neheb or @ratkaj
Description:
Vim will automatically go into a diff mode when invoked as `vimdiff`. There are no build options that need to be changed, instead there just needs to be a symlink with source `/usr/bin/vim` dest `/usr/bin/vimdiff`. I verified this works with the `vim-fuller` package, I'm not sure exactly which other vim builds this functionality applies to (I think it might be disabled for the tiny builds?).
You can test this works by invoking a command like `vimdiff /etc/config/luci /etc/config/luci-okg` and checking that you get the diff mode of vim. | True | vim: vim-fuller package should install vimdiff symlink - Maintainer: @neheb or @ratkaj
Description:
Vim will automatically go into a diff mode when invoked as `vimdiff`. There are no build options that need to be changed, instead there just needs to be a symlink with source `/usr/bin/vim` dest `/usr/bin/vimdiff`. I verified this works with the `vim-fuller` package, I'm not sure exactly which other vim builds this functionality applies to (I think it might be disabled for the tiny builds?).
You can test this works by invoking a command like `vimdiff /etc/config/luci /etc/config/luci-okg` and checking that you get the diff mode of vim. | main | vim vim fuller package should install vimdiff symlink maintainer neheb or ratkaj description vim will automatically go into a diff mode when invoked as vimdiff there are no build options that need to be changed instead there just needs to be a symlink with source usr bin vim dest usr bin vimdiff i verified this works with the vim fuller package i m not sure exactly which other vim builds this functionality applies to i think it might be disabled for the tiny builds you can test this works by invoking a command like vimdiff etc config luci etc config luci okg and checking that you get the diff mode of vim | 1 |
172,751 | 14,381,012,985 | IssuesEvent | 2020-12-02 04:17:36 | fga-eps-mds/2020.1-stay-safe-docs | https://api.github.com/repos/fga-eps-mds/2020.1-stay-safe-docs | opened | US17 - Edit the information of an registered occurrence | documentation | ## Story Description
Me, as a user, would like to edit the information of an occurrence that I registered so that I can correct a wrong information.
## Tasks
The following tasks must be completed for the story to be completed:
- [ ] [Screen to view, edit and exclude occurrences - Frontend](https://github.com/fga-eps-mds/2020.1-stay-safe-front-end/issues/23) | 1.0 | US17 - Edit the information of an registered occurrence - ## Story Description
Me, as a user, would like to edit the information of an occurrence that I registered so that I can correct a wrong information.
## Tasks
The following tasks must be completed for the story to be completed:
- [ ] [Screen to view, edit and exclude occurrences - Frontend](https://github.com/fga-eps-mds/2020.1-stay-safe-front-end/issues/23) | non_main | edit the information of an registered occurrence story description me as a user would like to edit the information of an occurrence that i registered so that i can correct a wrong information tasks the following tasks must be completed for the story to be completed | 0 |
4,450 | 23,148,319,552 | IssuesEvent | 2022-07-29 05:04:38 | K-54N7H05H/burst-engine | https://api.github.com/repos/K-54N7H05H/burst-engine | opened | Issue and Pull Request Templates | Type: Maintainance | Currently the repo does not have a template for openging issues and pull requests.
It would be good idea to have a minimal template going on to create a good format for related issues and make pull requests' goal clear. | True | Issue and Pull Request Templates - Currently the repo does not have a template for openging issues and pull requests.
It would be good idea to have a minimal template going on to create a good format for related issues and make pull requests' goal clear. | main | issue and pull request templates currently the repo does not have a template for openging issues and pull requests it would be good idea to have a minimal template going on to create a good format for related issues and make pull requests goal clear | 1 |
77,206 | 3,506,271,962 | IssuesEvent | 2016-01-08 05:10:51 | OregonCore/OregonCore | https://api.github.com/repos/OregonCore/OregonCore | closed | comiling error (BB #269) | migrated Priority: Medium Type: Bug | This issue was migrated from bitbucket.
**Original Reporter:** cemak
**Original Date:** 21.08.2010 00:41:30 GMT+0000
**Original Priority:** major
**Original Type:** bug
**Original State:** invalid
**Direct Link:** https://bitbucket.org/oregon/oregoncore/issues/269
<hr>
It's error will sorry my bad English (
| 1.0 | comiling error (BB #269) - This issue was migrated from bitbucket.
**Original Reporter:** cemak
**Original Date:** 21.08.2010 00:41:30 GMT+0000
**Original Priority:** major
**Original Type:** bug
**Original State:** invalid
**Direct Link:** https://bitbucket.org/oregon/oregoncore/issues/269
<hr>
It's error will sorry my bad English (
| non_main | comiling error bb this issue was migrated from bitbucket original reporter cemak original date gmt original priority major original type bug original state invalid direct link it s error will sorry my bad english | 0 |
130,060 | 18,154,826,504 | IssuesEvent | 2021-09-26 22:06:41 | ghc-dev/Maureen-Castro | https://api.github.com/repos/ghc-dev/Maureen-Castro | opened | CVE-2019-10744 (High) detected in multiple libraries | security vulnerability | ## CVE-2019-10744 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>lodash-0.10.0.tgz</b>, <b>lodash-0.9.2.tgz</b>, <b>lodash-3.7.0.tgz</b>, <b>lodash-3.10.1.tgz</b></p></summary>
<p>
<details><summary><b>lodash-0.10.0.tgz</b></p></summary>
<p>A utility library delivering consistency, customization, performance, and extras.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-0.10.0.tgz">https://registry.npmjs.org/lodash/-/lodash-0.10.0.tgz</a></p>
<p>Path to dependency file: Maureen-Castro/package.json</p>
<p>Path to vulnerable library: Maureen-Castro/node_modules/grunt-bower-task/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- grunt-bower-task-0.5.0.tgz (Root Library)
- :x: **lodash-0.10.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>lodash-0.9.2.tgz</b></p></summary>
<p>A utility library delivering consistency, customization, performance, and extras.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-0.9.2.tgz">https://registry.npmjs.org/lodash/-/lodash-0.9.2.tgz</a></p>
<p>Path to dependency file: Maureen-Castro/package.json</p>
<p>Path to vulnerable library: Maureen-Castro/node_modules/grunt-connect-proxy-updated/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- grunt-connect-proxy-updated-0.2.1.tgz (Root Library)
- :x: **lodash-0.9.2.tgz** (Vulnerable Library)
</details>
<details><summary><b>lodash-3.7.0.tgz</b></p></summary>
<p>The modern build of lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-3.7.0.tgz">https://registry.npmjs.org/lodash/-/lodash-3.7.0.tgz</a></p>
<p>Path to dependency file: Maureen-Castro/package.json</p>
<p>Path to vulnerable library: Maureen-Castro/node_modules/htmlhint/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- grunt-htmlhint-0.9.13.tgz (Root Library)
- htmlhint-0.9.13.tgz
- jshint-2.8.0.tgz
- :x: **lodash-3.7.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>lodash-3.10.1.tgz</b></p></summary>
<p>The modern build of lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-3.10.1.tgz">https://registry.npmjs.org/lodash/-/lodash-3.10.1.tgz</a></p>
<p>Path to dependency file: Maureen-Castro/package.json</p>
<p>Path to vulnerable library: Maureen-Castro/node_modules/grunt-usemin/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- grunt-usemin-3.1.1.tgz (Root Library)
- :x: **lodash-3.10.1.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/ghc-dev/Maureen-Castro/commit/107ca9ecc01ce5d215c82af7e092f6674d915cf4">107ca9ecc01ce5d215c82af7e092f6674d915cf4</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Versions of lodash lower than 4.17.12 are vulnerable to Prototype Pollution. The function defaultsDeep could be tricked into adding or modifying properties of Object.prototype using a constructor payload.
<p>Publish Date: 2019-07-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-10744>CVE-2019-10744</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-jf85-cpcp-j695">https://github.com/advisories/GHSA-jf85-cpcp-j695</a></p>
<p>Release Date: 2019-07-08</p>
<p>Fix Resolution: lodash-4.17.12, lodash-amd-4.17.12, lodash-es-4.17.12, lodash.defaultsdeep-4.6.1, lodash.merge- 4.6.2, lodash.mergewith-4.6.2, lodash.template-4.5.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"lodash","packageVersion":"0.10.0","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"grunt-bower-task:0.5.0;lodash:0.10.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"lodash-4.17.12, lodash-amd-4.17.12, lodash-es-4.17.12, lodash.defaultsdeep-4.6.1, lodash.merge- 4.6.2, lodash.mergewith-4.6.2, lodash.template-4.5.0"},{"packageType":"javascript/Node.js","packageName":"lodash","packageVersion":"0.9.2","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"grunt-connect-proxy-updated:0.2.1;lodash:0.9.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"lodash-4.17.12, lodash-amd-4.17.12, lodash-es-4.17.12, lodash.defaultsdeep-4.6.1, lodash.merge- 4.6.2, lodash.mergewith-4.6.2, lodash.template-4.5.0"},{"packageType":"javascript/Node.js","packageName":"lodash","packageVersion":"3.7.0","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"grunt-htmlhint:0.9.13;htmlhint:0.9.13;jshint:2.8.0;lodash:3.7.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"lodash-4.17.12, lodash-amd-4.17.12, lodash-es-4.17.12, lodash.defaultsdeep-4.6.1, lodash.merge- 4.6.2, lodash.mergewith-4.6.2, lodash.template-4.5.0"},{"packageType":"javascript/Node.js","packageName":"lodash","packageVersion":"3.10.1","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"grunt-usemin:3.1.1;lodash:3.10.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"lodash-4.17.12, lodash-amd-4.17.12, lodash-es-4.17.12, lodash.defaultsdeep-4.6.1, lodash.merge- 4.6.2, lodash.mergewith-4.6.2, lodash.template-4.5.0"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2019-10744","vulnerabilityDetails":"Versions of lodash lower than 4.17.12 are vulnerable to Prototype Pollution. 
The function defaultsDeep could be tricked into adding or modifying properties of Object.prototype using a constructor payload.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-10744","cvss3Severity":"high","cvss3Score":"9.1","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | True | CVE-2019-10744 (High) detected in multiple libraries - ## CVE-2019-10744 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>lodash-0.10.0.tgz</b>, <b>lodash-0.9.2.tgz</b>, <b>lodash-3.7.0.tgz</b>, <b>lodash-3.10.1.tgz</b></p></summary>
<p>
<details><summary><b>lodash-0.10.0.tgz</b></p></summary>
<p>A utility library delivering consistency, customization, performance, and extras.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-0.10.0.tgz">https://registry.npmjs.org/lodash/-/lodash-0.10.0.tgz</a></p>
<p>Path to dependency file: Maureen-Castro/package.json</p>
<p>Path to vulnerable library: Maureen-Castro/node_modules/grunt-bower-task/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- grunt-bower-task-0.5.0.tgz (Root Library)
- :x: **lodash-0.10.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>lodash-0.9.2.tgz</b></p></summary>
<p>A utility library delivering consistency, customization, performance, and extras.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-0.9.2.tgz">https://registry.npmjs.org/lodash/-/lodash-0.9.2.tgz</a></p>
<p>Path to dependency file: Maureen-Castro/package.json</p>
<p>Path to vulnerable library: Maureen-Castro/node_modules/grunt-connect-proxy-updated/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- grunt-connect-proxy-updated-0.2.1.tgz (Root Library)
- :x: **lodash-0.9.2.tgz** (Vulnerable Library)
</details>
<details><summary><b>lodash-3.7.0.tgz</b></p></summary>
<p>The modern build of lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-3.7.0.tgz">https://registry.npmjs.org/lodash/-/lodash-3.7.0.tgz</a></p>
<p>Path to dependency file: Maureen-Castro/package.json</p>
<p>Path to vulnerable library: Maureen-Castro/node_modules/htmlhint/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- grunt-htmlhint-0.9.13.tgz (Root Library)
- htmlhint-0.9.13.tgz
- jshint-2.8.0.tgz
- :x: **lodash-3.7.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>lodash-3.10.1.tgz</b></p></summary>
<p>The modern build of lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-3.10.1.tgz">https://registry.npmjs.org/lodash/-/lodash-3.10.1.tgz</a></p>
<p>Path to dependency file: Maureen-Castro/package.json</p>
<p>Path to vulnerable library: Maureen-Castro/node_modules/grunt-usemin/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- grunt-usemin-3.1.1.tgz (Root Library)
- :x: **lodash-3.10.1.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/ghc-dev/Maureen-Castro/commit/107ca9ecc01ce5d215c82af7e092f6674d915cf4">107ca9ecc01ce5d215c82af7e092f6674d915cf4</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Versions of lodash lower than 4.17.12 are vulnerable to Prototype Pollution. The function defaultsDeep could be tricked into adding or modifying properties of Object.prototype using a constructor payload.
<p>Publish Date: 2019-07-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-10744>CVE-2019-10744</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-jf85-cpcp-j695">https://github.com/advisories/GHSA-jf85-cpcp-j695</a></p>
<p>Release Date: 2019-07-08</p>
<p>Fix Resolution: lodash-4.17.12, lodash-amd-4.17.12, lodash-es-4.17.12, lodash.defaultsdeep-4.6.1, lodash.merge- 4.6.2, lodash.mergewith-4.6.2, lodash.template-4.5.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"lodash","packageVersion":"0.10.0","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"grunt-bower-task:0.5.0;lodash:0.10.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"lodash-4.17.12, lodash-amd-4.17.12, lodash-es-4.17.12, lodash.defaultsdeep-4.6.1, lodash.merge- 4.6.2, lodash.mergewith-4.6.2, lodash.template-4.5.0"},{"packageType":"javascript/Node.js","packageName":"lodash","packageVersion":"0.9.2","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"grunt-connect-proxy-updated:0.2.1;lodash:0.9.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"lodash-4.17.12, lodash-amd-4.17.12, lodash-es-4.17.12, lodash.defaultsdeep-4.6.1, lodash.merge- 4.6.2, lodash.mergewith-4.6.2, lodash.template-4.5.0"},{"packageType":"javascript/Node.js","packageName":"lodash","packageVersion":"3.7.0","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"grunt-htmlhint:0.9.13;htmlhint:0.9.13;jshint:2.8.0;lodash:3.7.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"lodash-4.17.12, lodash-amd-4.17.12, lodash-es-4.17.12, lodash.defaultsdeep-4.6.1, lodash.merge- 4.6.2, lodash.mergewith-4.6.2, lodash.template-4.5.0"},{"packageType":"javascript/Node.js","packageName":"lodash","packageVersion":"3.10.1","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"grunt-usemin:3.1.1;lodash:3.10.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"lodash-4.17.12, lodash-amd-4.17.12, lodash-es-4.17.12, lodash.defaultsdeep-4.6.1, lodash.merge- 4.6.2, lodash.mergewith-4.6.2, lodash.template-4.5.0"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2019-10744","vulnerabilityDetails":"Versions of lodash lower than 4.17.12 are vulnerable to Prototype Pollution. 
The function defaultsDeep could be tricked into adding or modifying properties of Object.prototype using a constructor payload.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-10744","cvss3Severity":"high","cvss3Score":"9.1","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | non_main | cve high detected in multiple libraries cve high severity vulnerability vulnerable libraries lodash tgz lodash tgz lodash tgz lodash tgz lodash tgz a utility library delivering consistency customization performance and extras library home page a href path to dependency file maureen castro package json path to vulnerable library maureen castro node modules grunt bower task node modules lodash package json dependency hierarchy grunt bower task tgz root library x lodash tgz vulnerable library lodash tgz a utility library delivering consistency customization performance and extras library home page a href path to dependency file maureen castro package json path to vulnerable library maureen castro node modules grunt connect proxy updated node modules lodash package json dependency hierarchy grunt connect proxy updated tgz root library x lodash tgz vulnerable library lodash tgz the modern build of lodash modular utilities library home page a href path to dependency file maureen castro package json path to vulnerable library maureen castro node modules htmlhint node modules lodash package json dependency hierarchy grunt htmlhint tgz root library htmlhint tgz jshint tgz x lodash tgz vulnerable library lodash tgz the modern build of lodash modular utilities library home page a href path to dependency file maureen castro package json path to vulnerable library maureen castro node modules grunt usemin node modules lodash package json dependency hierarchy grunt usemin tgz root library x lodash tgz vulnerable library found in head commit a href found in base branch 
master vulnerability details versions of lodash lower than are vulnerable to prototype pollution the function defaultsdeep could be tricked into adding or modifying properties of object prototype using a constructor payload publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution lodash lodash amd lodash es lodash defaultsdeep lodash merge lodash mergewith lodash template isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree grunt bower task lodash isminimumfixversionavailable true minimumfixversion lodash lodash amd lodash es lodash defaultsdeep lodash merge lodash mergewith lodash template packagetype javascript node js packagename lodash packageversion packagefilepaths istransitivedependency true dependencytree grunt connect proxy updated lodash isminimumfixversionavailable true minimumfixversion lodash lodash amd lodash es lodash defaultsdeep lodash merge lodash mergewith lodash template packagetype javascript node js packagename lodash packageversion packagefilepaths istransitivedependency true dependencytree grunt htmlhint htmlhint jshint lodash isminimumfixversionavailable true minimumfixversion lodash lodash amd lodash es lodash defaultsdeep lodash merge lodash mergewith lodash template packagetype javascript node js packagename lodash packageversion packagefilepaths istransitivedependency true dependencytree grunt usemin lodash isminimumfixversionavailable true minimumfixversion lodash lodash amd lodash es lodash defaultsdeep lodash merge lodash mergewith lodash template basebranches vulnerabilityidentifier cve vulnerabilitydetails versions of lodash 
lower than are vulnerable to prototype pollution the function defaultsdeep could be tricked into adding or modifying properties of object prototype using a constructor payload vulnerabilityurl | 0 |
248,642 | 7,934,613,983 | IssuesEvent | 2018-07-08 21:18:21 | openhab/openhab-docs | https://api.github.com/repos/openhab/openhab-docs | closed | Refactor Readme.md | orga: question :grey_question: priority: high :fire: | Since the docs are generated with vuepress in the future and we are changing the repo structure in #665, we have to adapt the readme file too.
I would wait for the ne generation and resctructuring process to be finished, until we refactor that.
We already could remove the jekyll related stuff to prevent some confusion.
Wdyt? | 1.0 | Refactor Readme.md - Since the docs are generated with vuepress in the future and we are changing the repo structure in #665, we have to adapt the readme file too.
I would wait for the ne generation and resctructuring process to be finished, until we refactor that.
We already could remove the jekyll related stuff to prevent some confusion.
Wdyt? | non_main | refactor readme md since the docs are generated with vuepress in the future and we are changing the repo structure in we have to adapt the readme file too i would wait for the ne generation and resctructuring process to be finished until we refactor that we already could remove the jekyll related stuff to prevent some confusion wdyt | 0 |
405,269 | 11,870,699,137 | IssuesEvent | 2020-03-26 13:15:49 | hotosm/tasking-manager | https://api.github.com/repos/hotosm/tasking-manager | opened | Project favorites filter | Component: Backend Priority: Critical Status: In Progress Type: Bug | `projects/?favoritedByMe=true` filter should return results based on auth token. However, at present, this filter is returning all projects in published state. | 1.0 | Project favorites filter - `projects/?favoritedByMe=true` filter should return results based on auth token. However, at present, this filter is returning all projects in published state. | non_main | project favorites filter projects favoritedbyme true filter should return results based on auth token however at present this filter is returning all projects in published state | 0 |
598,611 | 18,248,760,397 | IssuesEvent | 2021-10-01 23:02:28 | usc-isi-i2/kgtk | https://api.github.com/repos/usc-isi-i2/kgtk | closed | `kgtk sort2` Should Default to an OS-specific sort | enhancement priority 2 | `kgtk sort2` should default to `sort` on Linux and `gsort` on macOS.
Perhaps implement by defining OSSORT as the default, then expand that? | 1.0 | `kgtk sort2` Should Default to an OS-specific sort - `kgtk sort2` should default to `sort` on Linux and `gsort` on macOS.
Perhaps implement by defining OSSORT as the default, then expand that? | non_main | kgtk should default to an os specific sort kgtk should default to sort on linux and gsort on macos perhaps implement by defining ossort as the default then expand that | 0 |
42 | 2,589,615,808 | IssuesEvent | 2015-02-18 14:06:56 | mranney/node_pcap | https://api.github.com/repos/mranney/node_pcap | closed | Unit testing framework | maintainance | ~~I am looking to add some unit tests and am very partial to jasmine~~. Going with Mocha, forgot jasmine did not have a non-browser version. Any objections?
Current progress
- [x] Add mocha to grunt
- [x] Calculate code coverage with Istanbul
- [x] Open pull request/push to upstream. | True | Unit testing framework - ~~I am looking to add some unit tests and am very partial to jasmine~~. Going with Mocha, forgot jasmine did not have a non-browser version. Any objections?
Current progress
- [x] Add mocha to grunt
- [x] Calculate code coverage with Istanbul
- [x] Open pull request/push to upstream. | main | unit testing framework i am looking to add some unit tests and am very partial to jasmine going with mocha forgot jasmine did not have a non browser version any objections current progress add mocha to grunt calculate code coverage with istanbul open pull request push to upstream | 1 |
421,845 | 12,262,152,988 | IssuesEvent | 2020-05-06 21:28:38 | metabase/metabase | https://api.github.com/repos/metabase/metabase | closed | sorting broken if result contains null values | Database/BigQuery Priority:P3 Type:Bug | - Your browser and the version: Chrome latest
- Your operating system: OSX latest
- Your databases: BigQuery
- Metabase version: 0.28.2
- Metabase internal database: MySQL
- *Repeatable steps to reproduce the issue*
If your result contains null values the sorting breaks with a null pointer exception. Tested with numerical columns. | 1.0 | sorting broken if result contains null values - - Your browser and the version: Chrome latest
- Your operating system: OSX latest
- Your databases: BigQuery
- Metabase version: 0.28.2
- Metabase internal database: MySQL
- *Repeatable steps to reproduce the issue*
If your result contains null values the sorting breaks with a null pointer exception. Tested with numerical columns. | non_main | sorting broken if result contains null values your browser and the version chrome latest your operating system osx latest your databases bigquery metabase version metabase internal database mysql repeatable steps to reproduce the issue if your result contains null values the sorting breaks with a null pointer exception tested with numerical columns | 0 |
383,763 | 11,362,130,203 | IssuesEvent | 2020-01-26 19:18:28 | arfc/2020-fairhurst-hydrogen-production | https://api.github.com/repos/arfc/2020-fairhurst-hydrogen-production | closed | "The same report estimates" | Comp:Core Difficulty:1-Beginner Priority:2-Normal Status:1-New Type:Bug | In the abstract, section 2.3 : "The same report estimates" is out of place, because no report has been discussed. I recognize you are referring to reference 6, but to use "The same" in this way, there must be an antecedent to which "same" refers.
This issue can be closed with a PR that corrects this sentence. | 1.0 | "The same report estimates" - In the abstract, section 2.3 : "The same report estimates" is out of place, because no report has been discussed. I recognize you are referring to reference 6, but to use "The same" in this way, there must be an antecedent to which "same" refers.
This issue can be closed with a PR that corrects this sentence. | non_main | the same report estimates in the abstract section the same report estimates is out of place because no report has been discussed i recognize you are referring to reference but to use the same in this way there must be an antecedent to which same refers this issue can be closed with a pr that corrects this sentence | 0 |
4,792 | 24,676,698,561 | IssuesEvent | 2022-10-18 17:36:51 | centerofci/mathesar | https://api.github.com/repos/centerofci/mathesar | closed | Improve Record Page | type: enhancement work: frontend status: ready restricted: maintainers | - Record
- [ ] Allow deleting the record
- [x] ~~Display the record summary in the `<h1>` element and the `<title>` element.~~ (covered in #1600)
- Direct fields
- [ ] Add a status bar at the top to let users know their data is saved (like we do in the sheet)
- [ ] Fix updating boolean field
- [ ] Show loading status when updating an FK field
- [ ] Focus next input when pressing Enter (unless it's a `textarea`)
- [ ] Allow user to set values to NULL
- [ ] Display a tooltip like "Press Tab to Save" or "Press Enter to Save" when focused
- Table widget
- [ ] Don't show page size input within the pagination widget (so that user can't modify it)
- [ ] Make it collapsible
- [ ] Don't show Pagination UI when there is only one page
- [ ] Allow resizing columns
- [ ] Header gets cut off while horizontal scrolling within each table widget
| True | Improve Record Page - - Record
- [ ] Allow deleting the record
- [x] ~~Display the record summary in the `<h1>` element and the `<title>` element.~~ (covered in #1600)
- Direct fields
- [ ] Add a status bar at the top to let users know their data is saved (like we do in the sheet)
- [ ] Fix updating boolean field
- [ ] Show loading status when updating an FK field
- [ ] Focus next input when pressing Enter (unless it's a `textarea`)
- [ ] Allow user to set values to NULL
- [ ] Display a tooltip like "Press Tab to Save" or "Press Enter to Save" when focused
- Table widget
- [ ] Don't show page size input within the pagination widget (so that user can't modify it)
- [ ] Make it collapsible
- [ ] Don't show Pagination UI when there is only one page
- [ ] Allow resizing columns
- [ ] Header gets cut off while horizontal scrolling within each table widget
| main | improve record page record allow deleting the record display the record summary in the element and the element covered in direct fields add a status bar at the top to let users know their data is saved like we do in the sheet fix updating boolean field show loading status when updating an fk field focus next input when pressing enter unless it s a textarea allow user to set values to null display a tooltip like press tab to save or press enter to save when focused table widget don t show page size input within the pagination widget so that user can t modify it make it collapsible don t show pagination ui when there is only one page allow resizing columns header gets cut off while horizontal scrolling within each table widget | 1 |
159,768 | 13,771,522,376 | IssuesEvent | 2020-10-07 22:11:53 | bcgov/cloud-pathfinder-technology-and-ux | https://api.github.com/repos/bcgov/cloud-pathfinder-technology-and-ux | closed | SPIKE: Determine sessions/sections of Azure Phase 2 work to be part of | Technology documentation | **Describe the issue**
ES & MSFT are going to be starting phase 2 of the Azure foundational work. CP team contributed to the SoW & should participate in some (all?) sessions.
Tenancy: Azure Active directory
Subscription: Billing construct (each client/ministry?)
Resource Group: Within one subscription, there are multiple workspaces (Dev, Test, Prod)
We need a mini refinement session to go over the following todo list.
**Which Sprint Priority is this issue related to?**
Sprint 12
**Additional context**
See Phase 2 SoW & the following details from Dennis
Azure Phase 2 is 16 weeks half-time rather than 8 weeks full time
Streams Description
Platform maturity A list of proposed tasks is:
1. Security (higher priority)
a. Security & network logs management solution must be reviewed
(based on existing Design) and implemented (retention, access, etc..)
b. ASC configuration must be assessed and any gaps fixed.
c. Evaluate and configure ATP for services (VMs, DB etc..)
d. Azure Policies listed in the Design must be reviewed and implemented. Additional Out-of-the-box compliance score must be enabled (I.e PBMM)
2. Azure platform access management must be defined and deployed
a. Build/Review RACI based on our recommended segregation (in the Design document) & Break class accounts use.
b. Enable MFA and PIM for privileged accounts.
c. Plan for Cloud PAW.
d. Evaluate the integration Hashicorp Keycloak integration
e. Evaluate the integration Hashicorp Vault integration
3. Hub infrastructure and service HA and DR must be defined, coded and deployed.
a. Hub “shared” services must be coded and deployed (FW, DNS at least and optionally AD-DS). The Design is done (we should revisit the NVA one)
b. Hub Ingress/Egress services (including Azure to and from Internet and to and from Datacenters) must be designed, coded, configured and monitored
c. transfer existing S2S VPN to the permanent place in one of the datacenters.
d. Evaluate Availability zone for HA (if GA in Canada Central).
e. Build another Landing Zone in Canada East for DR (including hybrid connectivity).
4. Operation & automation
a. Azure monitoring to be configured as per Design.
b. Azure backup (including DR scenario) and VMs update to be configured for IaaS (optional)
c. Deployment pipeline must be built, and Infrastructure-as-code must be reviewed and integrated. It includes evaluating the integration with Terraform cloud.
The Customer will collaborate with Microsoft in GitHub repos in which the Infrastructure-as-code deployment artifacts (Templates, scripts) will be worked on.
Identify, Design, and build new workloads in Azure Analyze the list of workloads requests submitted to the BC Government and assist in their prioritization. One of the candidates is OpenShift (IaaS or as a service).
For each of the prioritized workloads:
Streams Description
• Design the target solution in Azure using PaaS as a primary target and IaaS when no native services can be used.
• Build the automation (Terraform as first option or ARM Template & Scripts)
• Identify any gaps in term of operation and Security and propose Azure foundation improvement.
• Assist the BC Government in deploying the workload in Production.
**Definition of done**
- Agreement to have CP team member(s) involved in all sessions
- CP team member(s) identified for involvement in the various streams
- Tickets with DoD are created based on this work
| 1.0 | SPIKE: Determine sessions/sections of Azure Phase 2 work to be part of - **Describe the issue**
ES & MSFT are going to be starting phase 2 of the Azure foundational work. CP team contributed to the SoW & should participate in some (all?) sessions.
Tenancy: Azure Active directory
Subscription: Billing construct (each client/ministry?)
Resource Group: Within one subscription, there are multiple workspaces (Dev, Test, Prod)
We need a mini refinement session to go over the following todo list.
**Which Sprint Priority is this issue related to?**
Sprint 12
**Additional context**
See Phase 2 SoW & the following details from Dennis
Azure Phase 2 is 16 weeks half-time rather than 8 weeks full time
Streams Description
Platform maturity A list of proposed tasks is:
1. Security (higher priority)
a. Security & network logs management solution must be reviewed
(based on existing Design) and implemented (retention, access, etc..)
b. ASC configuration must be assessed and any gaps fixed.
c. Evaluate and configure ATP for services (VMs, DB etc..)
d. Azure Policies listed in the Design must be reviewed and implemented. Additional Out-of-the-box compliance score must be enabled (I.e PBMM)
2. Azure platform access management must be defined and deployed
a. Build/Review RACI based on our recommended segregation (in the Design document) & Break class accounts use.
b. Enable MFA and PIM for privileged accounts.
c. Plan for Cloud PAW.
d. Evaluate the integration Hashicorp Keycloak integration
e. Evaluate the integration Hashicorp Vault integration
3. Hub infrastructure and service HA and DR must be defined, coded and deployed.
a. Hub “shared” services must be coded and deployed (FW, DNS at least and optionally AD-DS). The Design is done (we should revisit the NVA one)
b. Hub Ingress/Egress services (including Azure to and from Internet and to and from Datacenters) must be designed, coded, configured and monitored
c. transfer existing S2S VPN to the permanent place in one of the datacenters.
d. Evaluate Availability zone for HA (if GA in Canada Central).
e. Build another Landing Zone in Canada East for DR (including hybrid connectivity).
4. Operation & automation
a. Azure monitoring to be configured as per Design.
b. Azure backup (including DR scenario) and VMs update to be configured for IaaS (optional)
c. Deployment pipeline must be built, and Infrastructure-as-code must be reviewed and integrated. It includes evaluating the integration with Terraform cloud.
The Customer will collaborate with Microsoft in GitHub repos in which the Infrastructure-as-code deployment artifacts (Templates, scripts) will be worked on.
Identify, Design, and build new workloads in Azure Analyze the list of workloads requests submitted to the BC Government and assist in their prioritization. One of the candidates is OpenShift (IaaS or as a service).
For each of the prioritized workloads:
Streams Description
• Design the target solution in Azure using PaaS as a primary target and IaaS when no native services can be used.
• Build the automation (Terraform as first option or ARM Template & Scripts)
• Identify any gaps in term of operation and Security and propose Azure foundation improvement.
• Assist the BC Government in deploying the workload in Production.
**Definition of done**
- Agreement to have CP team member(s) involved in all sessions
- CP team member(s) identified for involvement in the various streams
- Tickets with DoD are created based on this work
| non_main | spike determine sessions sections of azure phase work to be part of describe the issue es msft are going to be starting phase of the azure foundational work cp team contributed to the sow should participate in some all sessions tenancy azure active directory subscription billing construct each client ministry resource group within one subscription there are multiple workspaces dev test prod we need a mini refinement session to go over the following todo list which sprint priority is this issue related to sprint additional context see phase sow the following details from dennis azure phase is weeks half time rather than weeks full time streams description platform maturity a list of proposed tasks is security higher priority a security network logs management solution must be reviewed based on existing design and implemented retention access etc b asc configuration must be assessed and any gaps fixed c evaluate and configure atp for services vms db etc d azure policies listed in the design must be reviewed and implemented additional out of the box compliance score must be enabled i e pbmm azure platform access management must be defined and deployed a build review raci based on our recommended segregation in the design document break class accounts use b enable mfa and pim for privileged accounts c plan for cloud paw d evaluate the integration hashicorp keycloak integration e evaluate the integration hashicorp vault integration hub infrastructure and service ha and dr must be defined coded and deployed a hub “shared” services must be coded and deployed fw dns at least and optionally ad ds the design is done we should revisit the nva one b hub ingress egress services including azure to and from internet and to and from datacenters must be designed coded configured and monitored c transfer existing vpn to the permanent place in one of the datacenters d evaluate availability zone for ha if ga in canada central e build another landing zone in canada east 
for dr including hybrid connectivity operation automation a azure monitoring to be configured as per design b azure backup including dr scenario and vms update to be configured for iaas optional c deployment pipeline must be built and infrastructure as code must be reviewed and integrated it includes evaluating the integration with terraform cloud the customer will collaborate with microsoft in github repos in which the infrastructure as code deployment artifacts templates scripts will be worked on identify design and build new workloads in azure analyze the list of workloads requests submitted to the bc government and assist in their prioritization one of the candidates is openshift iaas or as a service for each of the prioritized workloads streams description • design the target solution in azure using paas as a primary target and iaas when no native services can be used • build the automation terraform as first option or arm template scripts • identify any gaps in term of operation and security and propose azure foundation improvement • assist the bc government in deploying the workload in production definition of done agreement to have cp team member s involved in all sessions cp team member s identified for involvement in the various streams tickets with dod are created based on this work | 0 |
2,506 | 8,655,459,707 | IssuesEvent | 2018-11-27 16:00:30 | codestation/qcma | https://api.github.com/repos/codestation/qcma | closed | QCMA on Slackware 14.2 | unmaintained | Hi,
I just want to report that there is also a SlackBuild script to build QCMA on Slackware. You can find it [here](https://slackbuilds.org/repository/14.2/misc/QCMA/).
Best regards,
Cristiano. | True | QCMA on Slackware 14.2 - Hi,
I just want to report that there is also a SlackBuild script to build QCMA on Slackware. You can find it [here](https://slackbuilds.org/repository/14.2/misc/QCMA/).
Best regards,
Cristiano. | main | qcma on slackware hi i just want to report that there is also a slackbuild script to build qcma on slackware you can find it best regards cristiano | 1 |
151,990 | 23,901,251,665 | IssuesEvent | 2022-09-08 19:00:04 | phetsims/sun | https://api.github.com/repos/phetsims/sun | closed | How to set `phetioIsVisiblePropertyInstrumented: false` for the content of Checkbox and AquaRadioButton | dev:phet-io meeting:phet-io design:phet-io | How do we want to generally prevent PhET-iO clients from making the content of a Checkbox or AquaRadioButton invisible? We don't want to end up with just the box of a Checkbox, or just the circular button of an AquaRadioButton.
Currently, we have to add `phetioIsVisiblePropertyInstrumented: false`. And I find myself having to do this for every Checkbox and AquaRadioButton.
How do we make this more foolproof, or (even better) something developers don't have to deal with.
This also applies to VerticalCheckboxGroup and AquaRadioButtonGroup. Another option would be not to worry about this. Let clients hide these things, and come to their own conclusion that it doesn't make sense. That's a question for designers. | 1.0 | How to set `phetioIsVisiblePropertyInstrumented: false` for the content of Checkbox and AquaRadioButton - How do we want to generally prevent PhET-iO clients from making the content of a Checkbox or AquaRadioButton invisible? We don't want to end up with just the box of a Checkbox, or just the circular button of an AquaRadioButton.
Currently, we have to add `phetioIsVisiblePropertyInstrumented: false`. And I find myself having to do this for every Checkbox and AquaRadioButton.
How do we make this more foolproof, or (even better) something developers don't have to deal with.
This also applies to VerticalCheckboxGroup and AquaRadioButtonGroup. Another option would be not to worry about this. Let clients hide these things, and come to their own conclusion that it doesn't make sense. That's a question for designers. | non_main | how to set phetioisvisiblepropertyinstrumented false for the content of checkbox and aquaradiobutton how do we want to generally prevent phet io clients from making the content of a checkbox or aquaradiobutton invisible we don t want to end up with just the box of a checkbox or just the circular button of an aquaradiobutton currently we have to add phetioisvisiblepropertyinstrumented false and i find myself having to do this for every checkbox and aquaradiobutton how do we make this more foolproof or even better something developers don t have to deal with this also applies to verticalcheckboxgroup and aquaradiobuttongroup another option would be not to worry about this let clients hide these things and come to their own conclusion that it doesn t make sense that s a question for designers | 0 |
5,400 | 27,115,670,275 | IssuesEvent | 2023-02-15 18:22:01 | VA-Explorer/va_explorer | https://api.github.com/repos/VA-Explorer/va_explorer | closed | Make validator functions into validator classes | Type: Maintainance Domain: Deployment/ Integration Status: Inactive | **What is the expected state?**
```
validate_user_object``` and ```validate_user_form``` should be converted to validator classes.
**What is the actual state?**
```
validate_user_object``` and ```validate_user_form``` appear to be standalone functions outside of the validator class.
**Relevant context**
- **```va_explorer/va_explorer/users/validators.py```**
| True | Make validator functions into validator classes - **What is the expected state?**
```
validate_user_object``` and ```validate_user_form``` should be converted to validator classes.
**What is the actual state?**
```
validate_user_object``` and ```validate_user_form``` appear to be standalone functions outside of the validator class.
**Relevant context**
- **```va_explorer/va_explorer/users/validators.py```**
| main | make validator functions into validator classes what is the expected state
validate user object and validate user form should be converted to validator classes what is the actual state
validate user object and validate user form appear to be standalone functions outside of the validator class relevant context va explorer va explorer users validators py | 1 |
2,669 | 9,166,415,362 | IssuesEvent | 2019-03-02 03:03:51 | Homebrew/homebrew-cask | https://api.github.com/repos/Homebrew/homebrew-cask | closed | Delete Homebrew/homebrew-cask-eid | awaiting maintainer feedback discussion | https://github.com/Homebrew/homebrew-cask-eid was well-intentioned, but it might be best if we delete it. It has few casks, they get few updates, when they do many times it requires more than a simple bump, and analytics shows too few downloads (the most popular one had 25 downloads in the last 90 days, and more than half had less than 7).
Leaving this open for a bit for disagreements. | True | Delete Homebrew/homebrew-cask-eid - https://github.com/Homebrew/homebrew-cask-eid was well-intentioned, but it might be best if we delete it. It has few casks, they get few updates, when they do many times it requires more than a simple bump, and analytics shows too few downloads (the most popular one had 25 downloads in the last 90 days, and more than half had less than 7).
Leaving this open for a bit for disagreements. | main | delete homebrew homebrew cask eid was well intentioned but it might be best if we delete it it has few casks they get few updates when they do many times it requires more than a simple bump and analytics shows too few downloads the most popular one had downloads in the last days and more than half had less than leaving this open for a bit for disagreements | 1 |
60,023 | 6,669,377,688 | IssuesEvent | 2017-10-03 19:07:20 | CNMAT/CNMAT-Externs | https://api.github.com/repos/CNMAT/CNMAT-Externs | closed | OSC-timetag 64bit bugs | bug testing | OSC-timetag has some issues in 64bit
Andy's timetag scrubber example in the help patch (one of my favorites) only works in 32bit mode.
Also a crash when converting to iso8601:
<pre><code>
----------begin_max5_patcher----------
469.3ocqT0zaiBCD8L7qvxmShrQD1jdpR8GPOr6ospZkCLMwUXajsoahp5+8
ZOFTRSR+JpHDnYXlGu2aF4myynqLaAGkbE4NRV1y4YYXpXhrg3LpRrstU3vx
n0FkBzd5jz27vVOl2uQ5H0VgaC3HRMopbkzSTlFfT2asgVZ2M1jtWI0sfGQj
Ojz420BHTik0I70aj50+yB09DGKYEyXSHkEKiu3KwWErYLx86w1z6GAmMjU1
fHaV83zxJZL2K44wGS9hxVC+Oz7IptyJ26EehrZfGD8s9ORc74Q8LmyPQhOK
J9NhawknME3bh0vIhS5LKpX7f+xXSY7v8eXrqv6Yr30eOqxKtbkWNmiZNE8A
RmepzWNBepL+tNHgMkRt+mahe6uuYpWp.uXM4ZugLXR+z6.kKPmfWU88cBN6
cshQ19FGAQl1J0GeR.RuX92ZSNSusdDwAqmrmeMfyK0BuznOnlHoNnnMxlFP
e3VbizIV0BnLXmcd8UoyQ+pyymEWHcRlknq6Iv5FfDYRX64QiMF9qIXnTmBQ
DoV3I4X8kXFgML68gAeuMsbrspjlZMbpoU2Kw4adzCxGVhNZ.oEpTqpvRkjM
tFFbp3AtuyZWp3ArQEk+R9q.BbVZDA
-----------end_max5_patcher-----------
</code></pre>
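The iso8601 conversion that crashes above can be sketched independently in Python. This is a reference sketch, not the CNMAT extern's actual code: it only assumes the OSC 1.0 timetag layout (a 64-bit NTP-style value whose upper 32 bits are seconds since 1900-01-01 UTC and whose lower 32 bits are fractional seconds).

```python
from datetime import datetime, timedelta, timezone

# OSC timetags use NTP format: 32-bit seconds since the 1900 epoch,
# followed by 32 bits of fractional seconds.
NTP_EPOCH = datetime(1900, 1, 1, tzinfo=timezone.utc)

def timetag_to_iso8601(timetag: int) -> str:
    seconds = timetag >> 32            # upper 32 bits: whole seconds
    fraction = timetag & 0xFFFFFFFF    # lower 32 bits: fractional seconds
    micros = round(fraction / 2**32 * 1_000_000)
    return (NTP_EPOCH + timedelta(seconds=seconds, microseconds=micros)).isoformat()
```

For example, `timetag_to_iso8601(2208988800 << 32)` yields `1970-01-01T00:00:00+00:00`, since 2,208,988,800 is the well-known seconds offset between the NTP and Unix epochs.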
| 1.0 | OSC-timetag 64bit bugs - OSC-timetag has some issues in 64bit
Andy's timetag scrubber example in the help patch (one of my favorites) only works in 32bit mode.
Also a crash when converting to iso8601:
<pre><code>
----------begin_max5_patcher----------
469.3ocqT0zaiBCD8L7qvxmShrQD1jdpR8GPOr6ospZkCLMwUXajsoahp5+8
ZOFTRSR+JpHDnYXlGu2aF4myynqLaAGkbE4NRV1y4YYXpXhrg3LpRrstU3vx
n0FkBzd5jz27vVOl2uQ5H0VgaC3HRMopbkzSTlFfT2asgVZ2M1jtWI0sfGQj
Ojz420BHTik0I70aj50+yB09DGKYEyXSHkEKiu3KwWErYLx86w1z6GAmMjU1
fHaV83zxJZL2K44wGS9hxVC+Oz7IptyJ26EehrZfGD8s9ORc74Q8LmyPQhOK
J9NhawknME3bh0vIhS5LKpX7f+xXSY7v8eXrqv6Yr30eOqxKtbkWNmiZNE8A
RmepzWNBepL+tNHgMkRt+mahe6uuYpWp.uXM4ZugLXR+z6.kKPmfWU88cBN6
cshQ19FGAQl1J0GeR.RuX92ZSNSusdDwAqmrmeMfyK0BuznOnlHoNnnMxlFP
e3VbizIV0BnLXmcd8UoyQ+pyymEWHcRlknq6Iv5FfDYRX64QiMF9qIXnTmBQ
DoV3I4X8kXFgML68gAeuMsbrspjlZMbpoU2Kw4adzCxGVhNZ.oEpTqpvRkjM
tFFbp3AtuyZWp3ArQEk+R9q.BbVZDA
-----------end_max5_patcher-----------
</code></pre>
| non_main | osc timetag bugs osc timetag has some issues in andy s timetag scrubber example in the help patch one of my favorites only works in mode also a crash when converting to begin patcher zoftrsr rmepzwnbepl tnhgmkrt oeptqpvrkjm bbvzda end patcher | 0 |
82,117 | 3,603,307,231 | IssuesEvent | 2016-02-03 18:34:43 | bitDubai/fermat-org | https://api.github.com/repos/bitDubai/fermat-org | closed | Register app | Priority: HIGH server | Register the app with the owner id of the user who logged in and the data of the application registered on github
@fuelusumar | 1.0 | Register app - Register the app with the owner id of the user who logged in and the data of the application registered on github
@fuelusumar | non_main | register app register the app with the owner id of the user who logged in and the data of the application registered on github fuelusumar | 0 |
224,154 | 24,769,705,119 | IssuesEvent | 2022-10-23 01:12:26 | rgordon95/github-search-redux-thunk | https://api.github.com/repos/rgordon95/github-search-redux-thunk | opened | CVE-2022-37598 (High) detected in uglify-js-3.4.10.tgz | security vulnerability | ## CVE-2022-37598 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>uglify-js-3.4.10.tgz</b></p></summary>
<p>JavaScript parser, mangler/compressor and beautifier toolkit</p>
<p>Library home page: <a href="https://registry.npmjs.org/uglify-js/-/uglify-js-3.4.10.tgz">https://registry.npmjs.org/uglify-js/-/uglify-js-3.4.10.tgz</a></p>
<p>Path to dependency file: /github-search-redux-thunk/package.json</p>
<p>Path to vulnerable library: /node_modules/uglify-js/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-3.0.1.tgz (Root Library)
- html-webpack-plugin-4.0.0-beta.5.tgz
- html-minifier-3.5.21.tgz
- :x: **uglify-js-3.4.10.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Prototype pollution vulnerability in function DEFNODE in ast.js in mishoo UglifyJS 3.13.2 via the name variable in ast.js.
<p>Publish Date: 2022-10-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-37598>CVE-2022-37598</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2022-10-20</p>
<p>Fix Resolution (uglify-js): 3.13.10</p>
<p>Direct dependency fix Resolution (react-scripts): 3.3.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2022-37598 (High) detected in uglify-js-3.4.10.tgz - ## CVE-2022-37598 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>uglify-js-3.4.10.tgz</b></p></summary>
<p>JavaScript parser, mangler/compressor and beautifier toolkit</p>
<p>Library home page: <a href="https://registry.npmjs.org/uglify-js/-/uglify-js-3.4.10.tgz">https://registry.npmjs.org/uglify-js/-/uglify-js-3.4.10.tgz</a></p>
<p>Path to dependency file: /github-search-redux-thunk/package.json</p>
<p>Path to vulnerable library: /node_modules/uglify-js/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-3.0.1.tgz (Root Library)
- html-webpack-plugin-4.0.0-beta.5.tgz
- html-minifier-3.5.21.tgz
- :x: **uglify-js-3.4.10.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Prototype pollution vulnerability in function DEFNODE in ast.js in mishoo UglifyJS 3.13.2 via the name variable in ast.js.
<p>Publish Date: 2022-10-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-37598>CVE-2022-37598</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2022-10-20</p>
<p>Fix Resolution (uglify-js): 3.13.10</p>
<p>Direct dependency fix Resolution (react-scripts): 3.3.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve high detected in uglify js tgz cve high severity vulnerability vulnerable library uglify js tgz javascript parser mangler compressor and beautifier toolkit library home page a href path to dependency file github search redux thunk package json path to vulnerable library node modules uglify js package json dependency hierarchy react scripts tgz root library html webpack plugin beta tgz html minifier tgz x uglify js tgz vulnerable library vulnerability details prototype pollution vulnerability in function defnode in ast js in mishoo uglifyjs via the name variable in ast js publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution uglify js direct dependency fix resolution react scripts step up your open source security game with mend | 0 |
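Per the advisory above, the bundled uglify-js 3.4.10 is vulnerable and the fix resolution is 3.13.10. A quick local check of whether an installed version falls below the fixed version can be sketched with a naive tuple comparison — a stdlib-only illustration, not Mend's actual tooling, and it ignores full semver pre-release ordering rules:

```python
def parse_semver(v: str) -> tuple:
    # Split "3.13.10" into (3, 13, 10); drop any pre-release suffix and
    # pad missing components with zeros so "3.13" compares as (3, 13, 0).
    parts = v.split("-")[0].split(".")
    return tuple(int(p) for p in parts) + (0,) * (3 - len(parts))

# uglify-js fix version taken from the advisory's Fix Resolution field.
FIXED = parse_semver("3.13.10")

def is_vulnerable(installed: str) -> bool:
    # Tuple comparison orders versions component by component.
    return parse_semver(installed) < FIXED
```

Here `is_vulnerable("3.4.10")` is true (the version this report flags), while `is_vulnerable("3.13.10")` and anything newer are false.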