Unnamed: 0 int64 1 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 7 112 | repo_url stringlengths 36 141 | action stringclasses 3 values | title stringlengths 3 438 | labels stringlengths 4 308 | body stringlengths 7 254k | index stringclasses 7 values | text_combine stringlengths 96 254k | label stringclasses 2 values | text stringlengths 96 246k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
437 | 3,560,229,144 | IssuesEvent | 2016-01-23 00:36:57 | tgstation/-tg-station | https://api.github.com/repos/tgstation/-tg-station | closed | APC wire pulses should be working with a real timer, not spawn() | Maintainability - Hinders improvements Not a bug Usability | Airlocks use a "real" timer in the game. | True | APC wire pulses should be working with a real timer, not spawn() - Airlocks use a "real" timer in the game. | main | apc wire pulses should be working with a real timer not spawn airlocks use a real timer in the game | 1 |
373,043 | 11,032,347,490 | IssuesEvent | 2019-12-06 19:57:23 | seccomp/libseccomp | https://api.github.com/repos/seccomp/libseccomp | opened | REF: setup libseccomp documentation on Read the Docs | enhancement priority/low | It might be nice to look into hosting some of the libseccomp documentation on [Read the Docs](https://readthedocs.org); at the very least a short intro into the project and the info from the manpages.
However we do this, it should be automated. I do not want to have to maintain multiple sets of documentation; the Read the Docs documentation should pull from either the manpages or doxygen style comments in the code. | 1.0 | REF: setup libseccomp documentation on Read the Docs - It might be nice to look into hosting some of the libseccomp documentation on [Read the Docs](https://readthedocs.org); at the very least a short intro into the project and the info from the manpages.
However we do this, it should be automated. I do not want to have to maintain multiple sets of documentation; the Read the Docs documentation should pull from either the manpages or doxygen style comments in the code. | non_main | ref setup libseccomp documentation on read the docs it might be nice to look into hosting some of the libseccomp documentation on at the very least a short intro into the project and the info from the manpages however we do this it should be automated i do not want to have to maintain multiple sets of documentation the read the docs documentation should pull from either the manpages or doxygen style comments in the code | 0 |
384 | 3,419,355,134 | IssuesEvent | 2015-12-08 09:21:26 | Homebrew/homebrew | https://api.github.com/repos/Homebrew/homebrew | closed | [RFC] Improve handling of --verbose/-v argument | maintainer feedback | Currently, the handling of the `--verbose` argument and its short version `-v` is pretty inconsistent and gives a horrible user experience. Compare these outputs (the `list` command is just an example):
**Short Form `-v`**
```
$ brew list -v colordiff
/opt/homebrew/Cellar/colordiff/1.0.16/bin/cdiff
/opt/homebrew/Cellar/colordiff/1.0.16/bin/colordiff
/opt/homebrew/Cellar/colordiff/1.0.16/CHANGES
/opt/homebrew/Cellar/colordiff/1.0.16/COPYING
/opt/homebrew/Cellar/colordiff/1.0.16/INSTALL_RECEIPT.json
/opt/homebrew/Cellar/colordiff/1.0.16/README
/opt/homebrew/Cellar/colordiff/1.0.16/share/man/man1/cdiff.1
/opt/homebrew/Cellar/colordiff/1.0.16/share/man/man1/colordiff.1
```
```
$ brew -v list colordiff
Homebrew 0.9.5 (git revision 5979e; last commit 2015-11-24)
/opt/homebrew/Cellar/colordiff/1.0.16/bin/cdiff
/opt/homebrew/Cellar/colordiff/1.0.16/bin/colordiff
/opt/homebrew/Cellar/colordiff/1.0.16/CHANGES
/opt/homebrew/Cellar/colordiff/1.0.16/COPYING
/opt/homebrew/Cellar/colordiff/1.0.16/INSTALL_RECEIPT.json
/opt/homebrew/Cellar/colordiff/1.0.16/README
/opt/homebrew/Cellar/colordiff/1.0.16/share/man/man1/cdiff.1
/opt/homebrew/Cellar/colordiff/1.0.16/share/man/man1/colordiff.1
```
Notice how the header `Homebrew 0.9.5 (git revision 5979e; last commit 2015-11-24)` is either printed or not depending on the position of the `-v` option. In both cases, it is interpreted to mean “verbose”.
**Long Form `--verbose`**
```
$ brew list --verbose colordiff
/opt/homebrew/Cellar/colordiff/1.0.16/bin/cdiff
/opt/homebrew/Cellar/colordiff/1.0.16/bin/colordiff
/opt/homebrew/Cellar/colordiff/1.0.16/CHANGES
/opt/homebrew/Cellar/colordiff/1.0.16/COPYING
/opt/homebrew/Cellar/colordiff/1.0.16/INSTALL_RECEIPT.json
/opt/homebrew/Cellar/colordiff/1.0.16/README
/opt/homebrew/Cellar/colordiff/1.0.16/share/man/man1/cdiff.1
/opt/homebrew/Cellar/colordiff/1.0.16/share/man/man1/colordiff.1
```
```
$ brew --verbose list colordiff
Error: Unknown command: --verbose
```
Suddenly, `--verbose` is recognized as a command and thus not accepted in the same place where its short counterpart is (so it isn't exactly equivalent).
**Other Related Cases**
```
$ brew -v
Homebrew 0.9.5 (git revision 5979e; last commit 2015-11-24)
```
```
$ brew --version
0.9.5 (git revision 5979e; last commit 2015-11-24)
```
Without any other arguments, `-v` is just an odd variation of `--version` with `Homebrew` prefixed.
**Suggestion**
Passing `-v` as the first argument is undocumented, as is its special handling and the additional header it prints. Passing `-v` as the sole argument doesn't seem very useful given that `--version` also exists (and is documented). If I'm not the only one who feels that way, I volunteer to clean up `brew.rb` accordingly. | True | [RFC] Improve handling of --verbose/-v argument - Currently, the handling of the `--verbose` argument and its short version `-v` is pretty inconsistent and gives a horrible user experience. Compare these outputs (the `list` command is just an example):
**Short Form `-v`**
```
$ brew list -v colordiff
/opt/homebrew/Cellar/colordiff/1.0.16/bin/cdiff
/opt/homebrew/Cellar/colordiff/1.0.16/bin/colordiff
/opt/homebrew/Cellar/colordiff/1.0.16/CHANGES
/opt/homebrew/Cellar/colordiff/1.0.16/COPYING
/opt/homebrew/Cellar/colordiff/1.0.16/INSTALL_RECEIPT.json
/opt/homebrew/Cellar/colordiff/1.0.16/README
/opt/homebrew/Cellar/colordiff/1.0.16/share/man/man1/cdiff.1
/opt/homebrew/Cellar/colordiff/1.0.16/share/man/man1/colordiff.1
```
```
$ brew -v list colordiff
Homebrew 0.9.5 (git revision 5979e; last commit 2015-11-24)
/opt/homebrew/Cellar/colordiff/1.0.16/bin/cdiff
/opt/homebrew/Cellar/colordiff/1.0.16/bin/colordiff
/opt/homebrew/Cellar/colordiff/1.0.16/CHANGES
/opt/homebrew/Cellar/colordiff/1.0.16/COPYING
/opt/homebrew/Cellar/colordiff/1.0.16/INSTALL_RECEIPT.json
/opt/homebrew/Cellar/colordiff/1.0.16/README
/opt/homebrew/Cellar/colordiff/1.0.16/share/man/man1/cdiff.1
/opt/homebrew/Cellar/colordiff/1.0.16/share/man/man1/colordiff.1
```
Notice how the header `Homebrew 0.9.5 (git revision 5979e; last commit 2015-11-24)` is either printed or not depending on the position of the `-v` option. In both cases, it is interpreted to mean “verbose”.
**Long Form `--verbose`**
```
$ brew list --verbose colordiff
/opt/homebrew/Cellar/colordiff/1.0.16/bin/cdiff
/opt/homebrew/Cellar/colordiff/1.0.16/bin/colordiff
/opt/homebrew/Cellar/colordiff/1.0.16/CHANGES
/opt/homebrew/Cellar/colordiff/1.0.16/COPYING
/opt/homebrew/Cellar/colordiff/1.0.16/INSTALL_RECEIPT.json
/opt/homebrew/Cellar/colordiff/1.0.16/README
/opt/homebrew/Cellar/colordiff/1.0.16/share/man/man1/cdiff.1
/opt/homebrew/Cellar/colordiff/1.0.16/share/man/man1/colordiff.1
```
```
$ brew --verbose list colordiff
Error: Unknown command: --verbose
```
Suddenly, `--verbose` is recognized as a command and thus not accepted in the same place where its short counterpart is (so it isn't exactly equivalent).
**Other Related Cases**
```
$ brew -v
Homebrew 0.9.5 (git revision 5979e; last commit 2015-11-24)
```
```
$ brew --version
0.9.5 (git revision 5979e; last commit 2015-11-24)
```
Without any other arguments, `-v` is just an odd variation of `--version` with `Homebrew` prefixed.
**Suggestion**
Passing `-v` as the first argument is undocumented, as is its special handling and the additional header it prints. Passing `-v` as the sole argument doesn't seem very useful given that `--version` also exists (and is documented). If I'm not the only one who feels that way, I volunteer to clean up `brew.rb` accordingly. | main | improve handling of verbose v argument currently the handling of the verbose argument and its short version v is pretty inconsistent and gives a horrible user experience compare these outputs the list command is just an example short form v brew list v colordiff opt homebrew cellar colordiff bin cdiff opt homebrew cellar colordiff bin colordiff opt homebrew cellar colordiff changes opt homebrew cellar colordiff copying opt homebrew cellar colordiff install receipt json opt homebrew cellar colordiff readme opt homebrew cellar colordiff share man cdiff opt homebrew cellar colordiff share man colordiff brew v list colordiff homebrew git revision last commit opt homebrew cellar colordiff bin cdiff opt homebrew cellar colordiff bin colordiff opt homebrew cellar colordiff changes opt homebrew cellar colordiff copying opt homebrew cellar colordiff install receipt json opt homebrew cellar colordiff readme opt homebrew cellar colordiff share man cdiff opt homebrew cellar colordiff share man colordiff notice how the header homebrew git revision last commit is either printed or not depending on the position of the v option in both cases it is interpreted to mean “verbose” long form verbose brew list verbose colordiff opt homebrew cellar colordiff bin cdiff opt homebrew cellar colordiff bin colordiff opt homebrew cellar colordiff changes opt homebrew cellar colordiff copying opt homebrew cellar colordiff install receipt json opt homebrew cellar colordiff readme opt homebrew cellar colordiff share man cdiff opt homebrew cellar colordiff share man colordiff brew verbose list colordiff error unknown command verbose suddenly verbose is recognized as a 
command and thus not accepted in the same place where its short counterpart is so it isn t exactly equivalent other related cases brew v homebrew git revision last commit brew version git revision last commit without any other arguments v is just an odd variation of version with homebrew prefixed suggestion passing v as the first argument is undocumented as is its special handling and the additional header it prints passing v as the sole argument doesn t seem very useful given that version also exists and is documented if i m not the only one who feels that way i volunteer to clean up brew rb accordingly | 1 |
1,159 | 5,050,869,124 | IssuesEvent | 2016-12-20 20:04:14 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | locale_gen creating duplicate lines in /etc/locale.gen | affects_1.9 bug_report P3 waiting_on_maintainer | ##### Issue Type:
- Bug Report
##### COMPONENT NAME
locale_gen
##### Ansible Version:
```
ansible 1.9.0.1
configured module search path = None
```
##### Ansible Configuration:
No custom config
##### Environment:
Control OS: OS X 10.10.3
Target OS: Arch Linux ARM
##### Summary:
`locale_gen` is generating duplicate locale entries in `/etc/locale.gen` because it uncomments all lines matching the `name` provided to the module. Some of the `locale` names exist twice due to an example at the top of the file in addition to the list of all available `locales`.
##### Steps To Reproduce:
The base `/etc/locale.gen` file looks like this:
```
# Configuration file for locale-gen
#
# lists of locales that are to be generated by the locale-gen command.
#
# Each line is of the form:
#
# <locale> <charset>
#
# where <locale> is one of the locales given in /usr/share/i18n/locales
# and <charset> is one of the character sets listed in /usr/share/i18n/charmaps
#
# Examples:
# en_US ISO-8859-1
# en_US.UTF-8 UTF-8
# de_DE ISO-8859-1
# de_DE@euro ISO-8859-15
#
# The locale-gen command will generate all the locales,
# placing them in /usr/lib/locale.
#
# A list of supported locales is included in this file.
# Uncomment the ones you need.
#
...
#en_SG.UTF-8 UTF-8
#en_SG ISO-8859-1
#en_US.UTF-8 UTF-8
#en_US ISO-8859-1
#en_ZA.UTF-8 UTF-8
#en_ZA ISO-8859-1
#en_ZM UTF-8
...
```
After running the task:
``` yaml
- name: generate en utf8 locale
locale_gen: name=en_US.UTF-8 state=present
```
```
# Configuration file for locale-gen
#
# lists of locales that are to be generated by the locale-gen command.
#
# Each line is of the form:
#
# <locale> <charset>
#
# where <locale> is one of the locales given in /usr/share/i18n/locales
# and <charset> is one of the character sets listed in /usr/share/i18n/charmaps
#
# Examples:
# en_US ISO-8859-1
en_US.UTF-8 UTF-8
# de_DE ISO-8859-1
# de_DE@euro ISO-8859-15
#
# The locale-gen command will generate all the locales,
# placing them in /usr/lib/locale.
#
# A list of supported locales is included in this file.
# Uncomment the ones you need.
#
...
#en_SG ISO-8859-1
en_US.UTF-8 UTF-8
#en_US ISO-8859-1
#en_ZA.UTF-8 UTF-8
#en_ZA ISO-8859-1
...
```
##### Expected Results:
I would expect only a single line matching the desired locale to exist.
##### Actual Results:
Two lines matching the locale are added. I'm not sure if it's an actual problem, but it seems incorrect.
| True | locale_gen creating duplicate lines in /etc/locale.gen - ##### Issue Type:
- Bug Report
##### COMPONENT NAME
locale_gen
##### Ansible Version:
```
ansible 1.9.0.1
configured module search path = None
```
##### Ansible Configuration:
No custom config
##### Environment:
Control OS: OS X 10.10.3
Target OS: Arch Linux ARM
##### Summary:
`locale_gen` is generating duplicate locale entries in `/etc/locale.gen` because it uncomments all lines matching the `name` provided to the module. Some of the `locale` names exist twice due to an example at the top of the file in addition to the list of all available `locales`.
##### Steps To Reproduce:
The base `/etc/locale.gen` file looks like this:
```
# Configuration file for locale-gen
#
# lists of locales that are to be generated by the locale-gen command.
#
# Each line is of the form:
#
# <locale> <charset>
#
# where <locale> is one of the locales given in /usr/share/i18n/locales
# and <charset> is one of the character sets listed in /usr/share/i18n/charmaps
#
# Examples:
# en_US ISO-8859-1
# en_US.UTF-8 UTF-8
# de_DE ISO-8859-1
# de_DE@euro ISO-8859-15
#
# The locale-gen command will generate all the locales,
# placing them in /usr/lib/locale.
#
# A list of supported locales is included in this file.
# Uncomment the ones you need.
#
...
#en_SG.UTF-8 UTF-8
#en_SG ISO-8859-1
#en_US.UTF-8 UTF-8
#en_US ISO-8859-1
#en_ZA.UTF-8 UTF-8
#en_ZA ISO-8859-1
#en_ZM UTF-8
...
```
After running the task:
``` yaml
- name: generate en utf8 locale
locale_gen: name=en_US.UTF-8 state=present
```
```
# Configuration file for locale-gen
#
# lists of locales that are to be generated by the locale-gen command.
#
# Each line is of the form:
#
# <locale> <charset>
#
# where <locale> is one of the locales given in /usr/share/i18n/locales
# and <charset> is one of the character sets listed in /usr/share/i18n/charmaps
#
# Examples:
# en_US ISO-8859-1
en_US.UTF-8 UTF-8
# de_DE ISO-8859-1
# de_DE@euro ISO-8859-15
#
# The locale-gen command will generate all the locales,
# placing them in /usr/lib/locale.
#
# A list of supported locales is included in this file.
# Uncomment the ones you need.
#
...
#en_SG ISO-8859-1
en_US.UTF-8 UTF-8
#en_US ISO-8859-1
#en_ZA.UTF-8 UTF-8
#en_ZA ISO-8859-1
...
```
##### Expected Results:
I would expect only a single line matching the desired locale to exist.
##### Actual Results:
Two lines matching the locale are added. I'm not sure if it's an actual problem, but it seems incorrect.
| main | locale gen creating duplicate lines in etc locale gen issue type bug report component name locale gen ansible version ansible configured module search path none ansible configuration no custom config environment control os os x target os arch linux arm summary locale gen is generating duplicate locale entries in etc locale gen because it uncomments all lines matching the name provided to the modle some of the locale names exist twice due to an example at the top of the file in addition to the list of all available locales steps to reproduce the base etc locale gen file looks like this configuration file for locale gen lists of locales that are to be generated by the locale gen command each line is of the form where is one of the locales given in usr share locales and is one of the character sets listed in usr share charmaps examples en us iso en us utf utf de de iso de de euro iso the locale gen command will generate all the locales placing them in usr lib locale a list of supported locales is included in this file uncomment the ones you need en sg utf utf en sg iso en us utf utf en us iso en za utf utf en za iso en zm utf after running the task yaml name generate en locale locale gen name en us utf state present configuration file for locale gen lists of locales that are to be generated by the locale gen command each line is of the form where is one of the locales given in usr share locales and is one of the character sets listed in usr share charmaps examples en us iso en us utf utf de de iso de de euro iso the locale gen command will generate all the locales placing them in usr lib locale a list of supported locales is included in this file uncomment the ones you need en sg iso en us utf utf en us iso en za utf utf en za iso expected results i would expect only a single line matching the desired locale to exist actual results two lines matching the locale are added i m not sure if its an actual problem but it seems incorrect | 1 |
755,789 | 26,439,735,322 | IssuesEvent | 2023-01-15 20:38:25 | EddieHubCommunity/LinkFree | https://api.github.com/repos/EddieHubCommunity/LinkFree | closed | [FEATURE] Auto-update github if username changes. | ⭐ goal: addition 🟩 priority: low 💬 talk: discussion | ### Description
An API or something to have a look every 10 minutes if the github username exists; if it doesn't, create an issue thread and a PR fixing the issue thread, updating the github avatar url and github url, according to new username, using commit history.
### Screenshots
_No response_
### Additional information
It may be confusing coz I didn't explain well lol. | 1.0 | [FEATURE] Auto-update github if username changes. - ### Description
An API or something to have a look every 10 minutes if the github username exists; if it doesn't, create an issue thread and a PR fixing the issue thread, updating the github avatar url and github url, according to new username, using commit history.
### Screenshots
_No response_
### Additional information
It may be confusing coz I didn't explain well lol. | non_main | auto update github if username changes description an api or something to have a look every minutes if the github username exists if it doesn t create an issue thread and a pr fixing the issue thread updating the github avatar url and github url according to new username using commit history screenshots no response additional information it may be confusing coz i didn t explain well lol | 0 |
1,777 | 6,575,809,800 | IssuesEvent | 2017-09-11 17:24:47 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | mysql_user invalid privileges string: Invalid privileges specified: frozenset(['\"REPLICATION SLAVE\"']) | affects_2.1 bug_report waiting_on_maintainer | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
mysql_user
##### ANSIBLE VERSION
```
ansible 2.1.2.0
config file =
configured module search path = Default w/o overrides
```
##### CONFIGURATION
No custom configuration
##### OS / ENVIRONMENT
OSX 10.11.6
##### SUMMARY
mysql_user throws `invalid privileges string` for the following task
```
- name: create repl mysql user
mysql_user: name=repl password={{ mysql_repl_password }} host=% priv=*.*:"REPLICATION SLAVE",REQUIRESSL
```
##### STEPS TO REPRODUCE
Run the following task, MySQL version is `5.5.51`
```
- name: create repl mysql user
mysql_user: name=repl password={{ mysql_repl_password }} host=% priv=*.*:"REPLICATION SLAVE",REQUIRESSL
```
##### EXPECTED RESULTS
Creation of `repl` user to perform mysql replication.
##### ACTUAL RESULTS
```
TASK [mysql : create repl mysql user] ******************************************
task path: /Users/dbusby/Documents/Projects/Github/*******/playbooks/roles/mysql/tasks/main.yml:56
<127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: *****
<127.0.0.1> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o Port=8222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=***** -o ConnectTimeout=10 -o ControlPath=/Users/dbusby/.ansible/cp/ansible-ssh-%h-%p-%r 127.0.0.1 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1475578388.34-258119882161583 `" && echo ansible-tmp-1475578388.34-258119882161583="` echo $HOME/.ansible/tmp/ansible-tmp-1475578388.34-258119882161583 `" ) && sleep 0'"'"''
<127.0.0.1> PUT /var/folders/2g/cfqb94g549q05wndh_w9h7sh0000gn/T/tmpXkrbnY TO /home/*****/.ansible/tmp/ansible-tmp-1475578388.34-258119882161583/mysql_user
<127.0.0.1> SSH: EXEC sftp -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o Port=8222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=***** -o ConnectTimeout=10 -o ControlPath=/Users/dbusby/.ansible/cp/ansible-ssh-%h-%p-%r '[127.0.0.1]'
<127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: *****
<127.0.0.1> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o Port=8222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=***** -o ConnectTimeout=10 -o ControlPath=/Users/dbusby/.ansible/cp/ansible-ssh-%h-%p-%r 127.0.0.1 '/bin/sh -c '"'"'chmod u+x /home/*****/.ansible/tmp/ansible-tmp-1475578388.34-258119882161583/ /home/*****/.ansible/tmp/ansible-tmp-1475578388.34-258119882161583/mysql_user && sleep 0'"'"''
<127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: *****
<127.0.0.1> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o Port=8222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=***** -o ConnectTimeout=10 -o ControlPath=/Users/dbusby/.ansible/cp/ansible-ssh-%h-%p-%r -tt 127.0.0.1 '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-bmocsvhjerihxuoppgzgxoupupdtgxgi; LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/*****/.ansible/tmp/ansible-tmp-1475578388.34-258119882161583/mysql_user; rm -rf "/home/*****/.ansible/tmp/ansible-tmp-1475578388.34-258119882161583/" > /dev/null 2>&1'"'"'"'"'"'"'"'"' && sleep 0'"'"''
fatal: [***-dev]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_args": {"append_privs": false, "check_implicit_admin": false, "config_file": "/root/.my.cnf", "connect_timeout": 30, "encrypted": false, "host": "%", "host_all": false, "login_host": "localhost", "login_password": null, "login_port": 3306, "login_unix_socket": null, "login_user": null, "name": "repl", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "priv": "*.*:\"REPLICATION SLAVE\",REQUIRESSL", "sql_log_bin": true, "ssl_ca": null, "ssl_cert": null, "ssl_key": null, "state": "present", "update_password": "always", "user": "repl"}, "module_name": "mysql_user"}, "msg": "invalid privileges string: Invalid privileges specified: frozenset(['\"REPLICATION SLAVE\"'])"}
```
| True | mysql_user invalid privileges string: Invalid privileges specified: frozenset(['\"REPLICATION SLAVE\"']) - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
mysql_user
##### ANSIBLE VERSION
```
ansible 2.1.2.0
config file =
configured module search path = Default w/o overrides
```
##### CONFIGURATION
No custom configuration
##### OS / ENVIRONMENT
OSX 10.11.6
##### SUMMARY
mysql_user throws `invalid privileges string` for the following task
```
- name: create repl mysql user
mysql_user: name=repl password={{ mysql_repl_password }} host=% priv=*.*:"REPLICATION SLAVE",REQUIRESSL
```
##### STEPS TO REPRODUCE
Run the following task, MySQL version is `5.5.51`
```
- name: create repl mysql user
mysql_user: name=repl password={{ mysql_repl_password }} host=% priv=*.*:"REPLICATION SLAVE",REQUIRESSL
```
##### EXPECTED RESULTS
Creation of `repl` user to perform mysql replication.
##### ACTUAL RESULTS
```
TASK [mysql : create repl mysql user] ******************************************
task path: /Users/dbusby/Documents/Projects/Github/*******/playbooks/roles/mysql/tasks/main.yml:56
<127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: *****
<127.0.0.1> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o Port=8222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=***** -o ConnectTimeout=10 -o ControlPath=/Users/dbusby/.ansible/cp/ansible-ssh-%h-%p-%r 127.0.0.1 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1475578388.34-258119882161583 `" && echo ansible-tmp-1475578388.34-258119882161583="` echo $HOME/.ansible/tmp/ansible-tmp-1475578388.34-258119882161583 `" ) && sleep 0'"'"''
<127.0.0.1> PUT /var/folders/2g/cfqb94g549q05wndh_w9h7sh0000gn/T/tmpXkrbnY TO /home/*****/.ansible/tmp/ansible-tmp-1475578388.34-258119882161583/mysql_user
<127.0.0.1> SSH: EXEC sftp -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o Port=8222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=***** -o ConnectTimeout=10 -o ControlPath=/Users/dbusby/.ansible/cp/ansible-ssh-%h-%p-%r '[127.0.0.1]'
<127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: *****
<127.0.0.1> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o Port=8222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=***** -o ConnectTimeout=10 -o ControlPath=/Users/dbusby/.ansible/cp/ansible-ssh-%h-%p-%r 127.0.0.1 '/bin/sh -c '"'"'chmod u+x /home/*****/.ansible/tmp/ansible-tmp-1475578388.34-258119882161583/ /home/*****/.ansible/tmp/ansible-tmp-1475578388.34-258119882161583/mysql_user && sleep 0'"'"''
<127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: *****
<127.0.0.1> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o Port=8222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=***** -o ConnectTimeout=10 -o ControlPath=/Users/dbusby/.ansible/cp/ansible-ssh-%h-%p-%r -tt 127.0.0.1 '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-bmocsvhjerihxuoppgzgxoupupdtgxgi; LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/*****/.ansible/tmp/ansible-tmp-1475578388.34-258119882161583/mysql_user; rm -rf "/home/*****/.ansible/tmp/ansible-tmp-1475578388.34-258119882161583/" > /dev/null 2>&1'"'"'"'"'"'"'"'"' && sleep 0'"'"''
fatal: [***-dev]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_args": {"append_privs": false, "check_implicit_admin": false, "config_file": "/root/.my.cnf", "connect_timeout": 30, "encrypted": false, "host": "%", "host_all": false, "login_host": "localhost", "login_password": null, "login_port": 3306, "login_unix_socket": null, "login_user": null, "name": "repl", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "priv": "*.*:\"REPLICATION SLAVE\",REQUIRESSL", "sql_log_bin": true, "ssl_ca": null, "ssl_cert": null, "ssl_key": null, "state": "present", "update_password": "always", "user": "repl"}, "module_name": "mysql_user"}, "msg": "invalid privileges string: Invalid privileges specified: frozenset(['\"REPLICATION SLAVE\"'])"}
```
| main | mysql user invalid privileges string invalid privileges specified frozenset issue type bug report component name mysql user ansible version ansible config file configured module search path default w o overrides configuration no custom configuration os environment osx summary mysql user throws invalid privileges string for the following task name create repl mysql user mysql user name repl password mysql repl password host priv replication slave requiressl steps to reproduce run the following task mysql version is name create repl mysql user mysql user name repl password mysql repl password host priv replication slave requiressl expected results creation of repl user to perform mysql replication actual results task task path users dbusby documents projects github playbooks roles mysql tasks main yml establish ssh connection for user ssh exec ssh c vvv o controlmaster auto o controlpersist o port o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user o connecttimeout o controlpath users dbusby ansible cp ansible ssh h p r bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put var folders t tmpxkrbny to home ansible tmp ansible tmp mysql user ssh exec sftp b c vvv o controlmaster auto o controlpersist o port o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user o connecttimeout o controlpath users dbusby ansible cp ansible ssh h p r establish ssh connection for user ssh exec ssh c vvv o controlmaster auto o controlpersist o port o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user o connecttimeout o controlpath users dbusby ansible cp ansible ssh h p r bin sh c chmod u x home ansible tmp ansible tmp home ansible tmp ansible tmp 
mysql user sleep establish ssh connection for user ssh exec ssh c vvv o controlmaster auto o controlpersist o port o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user o connecttimeout o controlpath users dbusby ansible cp ansible ssh h p r tt bin sh c sudo h s n u root bin sh c echo become success bmocsvhjerihxuoppgzgxoupupdtgxgi lang en us utf lc all en us utf lc messages en us utf usr bin python home ansible tmp ansible tmp mysql user rm rf home ansible tmp ansible tmp dev null sleep fatal failed changed false failed true invocation module args append privs false check implicit admin false config file root my cnf connect timeout encrypted false host host all false login host localhost login password null login port login unix socket null login user null name repl password value specified in no log parameter priv replication slave requiressl sql log bin true ssl ca null ssl cert null ssl key null state present update password always user repl module name mysql user msg invalid privileges string invalid privileges specified frozenset | 1 |
1,684 | 6,574,154,670 | IssuesEvent | 2017-09-11 11:44:06 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | ios_command module fails with "msg": "matched error in response: ..." | affects_2.3 bug_report networking waiting_on_maintainer |
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ios_command
##### ANSIBLE VERSION
```
ansible 2.3.0
```
##### CONFIGURATION
##### OS / ENVIRONMENT
N/A
##### SUMMARY
When running the ios_command module with "show ip bgp" on a Cisco 6500 the module fails with;
"msg": "matched error in response: up-path, f RT-Filter, \r\n x best-external, a additional-path, c RIB-compressed, \r\nOrigin codes: i - IGP, e - EGP, ? - incomplete\r\nRPKI validation codes: V valid, I invalid, N Not found\r\n\r\n"
}
There is a difference in the output and Ansible seems to react to the output from the Cisco 6500 containing some keywords that causes it to believe the command failed.
Actual output from a Cisco 6500;
BGP table version is 61521, local router ID is x.x.x.x
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,
r RIB-failure, S Stale, m multipath, b backup-path, f RT-Filter,
x best-external, a additional-path, c RIB-compressed,
Origin codes: i - IGP, e - EGP, ? - incomplete
RPKI validation codes: V valid, I invalid, N Not found
--- output cut ---
Same command from a Cisco 3750(works);
BGP table version is 789, local router ID is 185.25.44.53
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,
r RIB-failure, S Stale
Origin codes: i - IGP, e - EGP, ? - incomplete
--- output cut ----
##### STEPS TO REPRODUCE
Running this playbook towards a Cisco 6500
```
---
- hosts: [prod_rt]
connection: local
gather_facts: no
tasks:
- name: Run command
ios_command:
host: "{{ inventory_hostname }}"
username: "{{ username }}"
password: "{{ password }}"
commands:
- 'show ip bgp'
register: output
- debug: msg={{ output.stdout_lines }}
```
##### EXPECTED RESULTS
Running this command on another router(tried on 3750) works and shows the router output as intended.
##### ACTUAL RESULTS
Fails with;
"msg": "matched error in response: up-path, f RT-Filter, \r\n x best-external, a additional-path, c RIB-compressed, \r\nOrigin codes: i - IGP, e - EGP, ? - incomplete\r\nRPKI validation codes: V valid, I invalid, N Not found\r\n\r\n"
}
```
pa@PA-Hanssons-MacBook-Pro:~/PycharmProjects/network_ansible$ ansible-playbook show_ip_bgp_test.yml -l rt1.age -vvv
Using /Users/pa/PycharmProjects/network_ansible/ansible.cfg as config file
PLAYBOOK: show_ip_bgp_test.yml *************************************************
1 plays in show_ip_bgp_test.yml
PLAY [prod_rt] *****************************************************************
TASK [Run command] *************************************************************
task path: /Users/pa/PycharmProjects/network_ansible/show_ip_bgp_test.yml:7
Using module file /Library/Python/2.7/site-packages/ansible/modules/core/network/ios/ios_command.py
<rt1.age> ESTABLISH LOCAL CONNECTION FOR USER: pa
<rt1.age> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1479388715.07-39385131693979 `" && echo ansible-tmp-1479388715.07-39385131693979="` echo $HOME/.ansible/tmp/ansible-tmp-1479388715.07-39385131693979 `" ) && sleep 0'
<rt1.age> PUT /var/folders/d1/xrrfzjfd52n_9rrtyl2wckl40000gp/T/tmp6bkoai TO /Users/pa/.ansible/tmp/ansible-tmp-1479388715.07-39385131693979/ios_command.py
<rt1.age> EXEC /bin/sh -c 'chmod u+x /Users/pa/.ansible/tmp/ansible-tmp-1479388715.07-39385131693979/ /Users/pa/.ansible/tmp/ansible-tmp-1479388715.07-39385131693979/ios_command.py && sleep 0'
<rt1.age> EXEC /bin/sh -c '/usr/bin/python /Users/pa/.ansible/tmp/ansible-tmp-1479388715.07-39385131693979/ios_command.py; rm -rf "/Users/pa/.ansible/tmp/ansible-tmp-1479388715.07-39385131693979/" > /dev/null 2>&1 && sleep 0'
fatal: [rt1.age]: FAILED! => {
"changed": false,
"failed": true,
"invocation": {
"module_args": {
"auth_pass": null,
"authorize": false,
"commands": [
"show ip bgp"
],
"host": "rt1.age",
"interval": 1,
"match": "all",
"password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"port": null,
"provider": null,
"retries": 10,
"ssh_keyfile": null,
"timeout": 10,
"transport": null,
"use_ssl": true,
"username": "<removed>",
"validate_certs": true,
"wait_for": null
},
"module_name": "ios_command"
},
"msg": "matched error in response: up-path, f RT-Filter, \r\n x best-external, a additional-path, c RIB-compressed, \r\nOrigin codes: i - IGP, e - EGP, ? - incomplete\r\nRPKI validation codes: V valid, I invalid, N Not found\r\n\r\n"
}
to retry, use: --limit @/Users/pa/PycharmProjects/network_ansible/show_ip_bgp_test.retry
PLAY RECAP *********************************************************************
rt1.age : ok=0 changed=0 unreachable=0 failed=1
```
| True |
| main | ios command module fails with msg matched error in response issue type bug report component name ios command ansible version ansible configuration os environment n a summary when running the ios command module with show ip bgp on a cisco the module fails with msg matched error in response up path f rt filter r n x best external a additional path c rib compressed r norigin codes i igp e egp incomplete r nrpki validation codes v valid i invalid n not found r n r n there is a difference in the output and ansible seems to react to the output from the cisco containing some keywords that causes it to believe the command failed actual output from a cisco bgp table version is local router id is x x x x status codes s suppressed d damped h history valid best i internal r rib failure s stale m multipath b backup path f rt filter x best external a additional path c rib compressed origin codes i igp e egp incomplete rpki validation codes v valid i invalid n not found output cut same command from a cisco works bgp table version is local router id is status codes s suppressed d damped h history valid best i internal r rib failure s stale origin codes i igp e egp incomplete output cut steps to reproduce running this playbook towards a cisco hosts connection local gather facts no tasks name run command ios command host inventory hostname username username password password commands show ip bgp register output debug msg output stdout lines expected results running this command on another router tried on works and shows the router output as intended actual results fails with msg matched error in response up path f rt filter r n x best external a additional path c rib compressed r norigin codes i igp e egp incomplete r nrpki validation codes v valid i invalid n not found r n r n pa pa hanssons macbook pro pycharmprojects network ansible ansible playbook show ip bgp test yml l age vvv using users pa pycharmprojects network ansible ansible cfg as config file playbook show ip 
bgp test yml plays in show ip bgp test yml play task task path users pa pycharmprojects network ansible show ip bgp test yml using module file library python site packages ansible modules core network ios ios command py establish local connection for user pa exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put var folders t to users pa ansible tmp ansible tmp ios command py exec bin sh c chmod u x users pa ansible tmp ansible tmp users pa ansible tmp ansible tmp ios command py sleep exec bin sh c usr bin python users pa ansible tmp ansible tmp ios command py rm rf users pa ansible tmp ansible tmp dev null sleep fatal failed changed false failed true invocation module args auth pass null authorize false commands show ip bgp host age interval match all password value specified in no log parameter port null provider null retries ssh keyfile null timeout transport null use ssl true username validate certs true wait for null module name ios command msg matched error in response up path f rt filter r n x best external a additional path c rib compressed r norigin codes i igp e egp incomplete r nrpki validation codes v valid i invalid n not found r n r n to retry use limit users pa pycharmprojects network ansible show ip bgp test retry play recap age ok changed unreachable failed | 1 |
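The failure in the ios_command row above is a false positive: the module scans command output against error regexes, and a loose pattern can match the 6500's banner text ("I invalid", "N Not found") even though the command succeeded, while the 3750's shorter banner never trips it. A hedged illustration — the patterns here are made up for the demo, not the module's actual list:

```python
import re

# The extra banner lines a Cisco 6500 emits for "show ip bgp":
banner = ("Origin codes: i - IGP, e - EGP, ? - incomplete\r\n"
          "RPKI validation codes: V valid, I invalid, N Not found\r\n")

# A loose error check matches the harmless banner...
loose = re.compile(r"invalid", re.I)
assert loose.search(banner) is not None

# ...while anchoring to how IOS actually reports errors ("% Invalid input
# detected" at the start of a line) does not.
strict = re.compile(r"^%\s*invalid input", re.I | re.M)
assert strict.search(banner) is None
assert strict.search("% Invalid input detected at '^' marker.") is not None
print("loose pattern false-positives on the banner; anchored pattern does not")
```

Anchoring error patterns to the router's own error prefix is the usual way such "matched error in response" false positives get fixed.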
3,098 | 11,784,345,986 | IssuesEvent | 2020-03-17 08:09:25 | EMS-TU-Ilmenau/fastmat | https://api.github.com/repos/EMS-TU-Ilmenau/fastmat | closed | Specifying unprocessed keyworded arguments to an __init__ should raise a warning | maintainance polishing | Since we are refactoring also names of options, this would cause silent breaks due to refactoring if we do not output sensible warnings. Especially since the names of the row and column selection parameters in Partial were changed, this may cause intransparent problems with code using the legacy interface.
Also, the use of the old names should also be allowed a grace period of deprecation warnings. | True | | main | specifying unprocessed keyworded arguments to an init should raise a warning since we are refactoring also names of options this would cause silent breaks due to refactoring if we do not output sensible warnings especially since the names of the row and column selection parameters in partial were changed this may cause intransparent problems with code using the legacy interface also the use of the old names should also be allowed a grace period of deprecation warnings | 1 |
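A minimal sketch of what the fastmat issue above asks for — warn on unprocessed keyword arguments in `__init__` and keep renamed options working for a deprecation grace period. The class and option names here are hypothetical, not fastmat's actual API:

```python
import warnings

class Partial:
    """Hypothetical sketch: accept old option names with a DeprecationWarning,
    and warn loudly about any keyword argument that was never processed."""
    _RENAMED = {"rowSelection": "rows", "colSelection": "cols"}  # illustrative names

    def __init__(self, rows=None, cols=None, **options):
        values = {"rows": rows, "cols": cols}
        for old, new in self._RENAMED.items():
            if old in options:
                warnings.warn("option '%s' is deprecated, use '%s'" % (old, new),
                              DeprecationWarning, stacklevel=2)
                values[new] = options.pop(old)
        if options:  # anything left over would previously have been silently ignored
            warnings.warn("unprocessed options: %s" % sorted(options),
                          UserWarning, stacklevel=2)
        self.rows, self.cols = values["rows"], values["cols"]

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    p = Partial(rowSelection=[0, 2], typo_option=True)

print([w.category.__name__ for w in caught])
```

The legacy name still works (with a warning), and a typo'd option no longer breaks silently — exactly the two behaviors the issue requests.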
71,541 | 15,207,774,937 | IssuesEvent | 2021-02-17 00:59:33 | billmcchesney1/foxtrot | https://api.github.com/repos/billmcchesney1/foxtrot | opened | CVE-2020-9546 (High) detected in jackson-databind-2.9.9.1.jar | security vulnerability | ## CVE-2020-9546 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.9.1.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: foxtrot/foxtrot-sql/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9.1/jackson-databind-2.9.9.1.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9.1/jackson-databind-2.9.9.1.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9.1/jackson-databind-2.9.9.1.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9.1/jackson-databind-2.9.9.1.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9.1/jackson-databind-2.9.9.1.jar</p>
<p>
Dependency Hierarchy:
- dropwizard-jackson-1.3.13.jar (Root Library)
- :x: **jackson-databind-2.9.9.1.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/billmcchesney1/foxtrot/commit/ffb8a6014463ce8aac1bf6e7dc9a23fc4a2a8adc">ffb8a6014463ce8aac1bf6e7dc9a23fc4a2a8adc</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to org.apache.hadoop.shaded.com.zaxxer.hikari.HikariConfig (aka shaded hikari-config).
<p>Publish Date: 2020-03-02
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-9546>CVE-2020-9546</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-9546">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-9546</a></p>
<p>Release Date: 2020-03-02</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.10.3</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.9.1","packageFilePaths":["/foxtrot-sql/pom.xml","/foxtrot-core/pom.xml","/foxtrot-server/pom.xml","/foxtrot-common/pom.xml","/foxtrot-translator/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"io.dropwizard:dropwizard-jackson:1.3.13;com.fasterxml.jackson.core:jackson-databind:2.9.9.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.10.3"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-9546","vulnerabilityDetails":"FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to org.apache.hadoop.shaded.com.zaxxer.hikari.HikariConfig (aka shaded hikari-config).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-9546","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | True | CVE-2020-9546 (High) detected in jackson-databind-2.9.9.1.jar - ## CVE-2020-9546 - High Severity Vulnerability
| non_main | cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file foxtrot foxtrot sql pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy
dropwizard jackson jar root library x jackson databind jar vulnerable library found in head commit a href found in base branch master vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to org apache hadoop shaded com zaxxer hikari hikariconfig aka shaded hikari config publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree io dropwizard dropwizard jackson com fasterxml jackson core jackson databind isminimumfixversionavailable true minimumfixversion com fasterxml jackson core jackson databind basebranches vulnerabilityidentifier cve vulnerabilitydetails fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to org apache hadoop shaded com zaxxer hikari hikariconfig aka shaded hikari config vulnerabilityurl | 0 |
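For the jackson-databind row above, the reported fix resolution is upgrading to 2.10.3; since the vulnerable jar arrives transitively via dropwizard-jackson, one conventional approach is pinning the patched version in Maven's `<dependencyManagement>`. This is a sketch of that mechanism, not the foxtrot project's actual pom:

```xml
<dependencyManagement>
  <dependencies>
    <!-- Force the patched version over the transitive 2.9.9.1 -->
    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-databind</artifactId>
      <version>2.10.3</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```

Declaring the version here makes Maven's dependency mediation pick 2.10.3 everywhere the artifact is pulled in, without editing each module's direct dependencies.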
3,949 | 17,910,025,121 | IssuesEvent | 2021-09-09 03:04:20 | tgstation/tgstation-server | https://api.github.com/repos/tgstation/tgstation-server | closed | Check that Version 4 isn't hardcoded anywhere | Maintainability Issue Good First Issue Backlog | We will bump to TGS5 eventually. We'll need to ensure we're not hardcoding the number anywhere.
Goddamn ReleaseNotes is a big offender. | True | | main | check that version isn t hardcoded anywhere we will bump to eventually we ll need to ensure we re not hardcoding the number anywhere goddamn releasenotes is a big offender | 1 |
287,494 | 8,816,105,993 | IssuesEvent | 2018-12-30 05:16:36 | askdfjlas/src_textbook | https://api.github.com/repos/askdfjlas/src_textbook | closed | Organize by Subjects | back-end front-end priority: high | Make a separate table in the database as a list of subjects, which should be read from a file. Client-side should show offers listed under subjects. | 1.0 | | non_main | organize by subjects make a separate table in the database as a list of subjects which should be read from a file client side should show offers listed under subjects | 0 |
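A sketch of the schema the src_textbook issue above describes — a separate subjects table loaded from a file, with offers referencing it so the client can list offers under subjects. Table and column names are invented for illustration:

```python
import sqlite3

# Hypothetical stand-in for the subjects file the issue mentions.
SUBJECTS_FILE = ["Math", "Physics", "Chemistry"]

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE subjects (id INTEGER PRIMARY KEY, name TEXT UNIQUE NOT NULL);
    CREATE TABLE offers (
        id INTEGER PRIMARY KEY,
        title TEXT NOT NULL,
        subject_id INTEGER NOT NULL REFERENCES subjects(id)
    );
""")
conn.executemany("INSERT INTO subjects (name) VALUES (?)",
                 [(s,) for s in SUBJECTS_FILE])
conn.execute("INSERT INTO offers (title, subject_id) VALUES (?, ?)",
             ("Calculus textbook", 1))

# Client side: offers grouped under their subject.
rows = conn.execute("""
    SELECT subjects.name, offers.title
    FROM offers JOIN subjects ON offers.subject_id = subjects.id
""").fetchall()
print(rows)
```

Keeping subjects in their own table means the list can be reloaded from the file without touching the offers, and the join gives the per-subject listing the client needs.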
21,619 | 14,671,338,604 | IssuesEvent | 2020-12-30 07:53:38 | omapsapp/omapsapp | https://api.github.com/repos/omapsapp/omapsapp | closed | Find Linux machine for GitLab CI/CD Runner | devops infrastructure | This project is heavy-weight. We need our own CI/CD runner to build Android/iOS binaries.
| 1.0 |
| non_main | find linux machine for gitlab ci cd runner this project is heavy weight we need our own ci cd runner to build android ios binaries | 0 |
76,656 | 9,477,889,373 | IssuesEvent | 2019-04-19 20:21:14 | quicwg/base-drafts | https://api.github.com/repos/quicwg/base-drafts | closed | Don't change CID on peer CID change | -transport design | ```Endpoints that use connection IDs with length greater than zero could have their
activity correlated if their peers keep using the same destination connection ID
after migration. Endpoints that receive packets with a previously unused
Destination Connection ID SHOULD change to sending packets with a connection ID
that has not been used on any other network path. The goal here is to ensure
that packets sent on different paths cannot be correlated. To fulfill this
privacy requirement, endpoints that initiate migration and use connection IDs
with length greater than zero SHOULD provide their peers with new connection IDs
before migration.
Caution:
: If both endpoints change connection ID in response to seeing a change in
connection ID from their peer, then this can trigger an infinite sequence of
changes.
```
I don't remember this being what we agreed, and it's not necessary. You only need to change when you see a new path, not after a new CID. That's how we got rid of counting to infinity. | 1.0 |
| non_main | don t change cid on peer cid change endpoints that use connection ids with length greater than zero could have their activity correlated if their peers keep using the same destination connection id after migration endpoints that receive packets with a previously unused destination connection id should change to sending packets with a connection id that has not been used on any other network path the goal here is to ensure that packets sent on different paths cannot be correlated to fulfill this privacy requirement endpoints that initiate migration and use connection ids with length greater than zero should provide their peers with new connection ids before migration caution if both endpoints change connection id in response to seeing a change in connection id from their peer then this can trigger an infinite sequence of changes i don t remember this being what we agreed and it s not necessary you only need to change when you see a new path not after a new cid that s how we got rid of counting to infinity | 0 |
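The argument in the quicwg row above — rotate your connection ID when you observe a new network path, not when the peer's CID changes — can be modeled in a few lines: with the path-based trigger, two endpoints reacting to each other cannot ping-pong forever. This is purely illustrative toy logic, not a QUIC implementation:

```python
class Endpoint:
    """Toy model: CID rotation keyed to new paths, never to peer CID changes."""
    def __init__(self, name):
        self.name = name
        self.cid = 0
        self.seen_paths = set()

    def on_packet(self, peer_cid, path):
        rotated = False
        if path not in self.seen_paths:   # new path -> switch to a fresh CID
            self.seen_paths.add(path)
            self.cid += 1
            rotated = True
        # Deliberately no reaction to peer_cid by itself: responding to a
        # peer's CID change is what would trigger an infinite change sequence.
        return rotated

a, b = Endpoint("a"), Endpoint("b")
changes = 0
for _ in range(10):                       # exchange packets on one stable path
    if a.on_packet(b.cid, "path-1"):
        changes += 1
    if b.on_packet(a.cid, "path-1"):
        changes += 1
print("rotations on a stable path:", changes)
```

Each side rotates exactly once (on first sight of the path) and then stabilizes, which is the "got rid of counting to infinity" property the issue refers to.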
2,589 | 8,813,246,760 | IssuesEvent | 2018-12-28 19:09:07 | tgstation/tgstation | https://api.github.com/repos/tgstation/tgstation | closed | Weapon firing mechanism | Maintainability/Hinders improvements | Issue reported from Round ID: 93745 (/tg/Station Terry [EU] [100% LAG FREE])
Reporting client version: 512
If you scrap the weapon firing mechanism with the integrated curcuit printer, the weapon inside gets deleted | True | | main | weapon firing mechanism issue reported from round id tg station terry reporting client version if you scrap the weapon firing mechanism with the integrated curcuit printer the weapon inside gets deleted | 1 |
5,298 | 26,766,248,919 | IssuesEvent | 2023-01-31 10:50:59 | Windham-High-School/CubeServer-api-python | https://api.github.com/repos/Windham-High-School/CubeServer-api-python | closed | Versioning system | enhancement maintainability | The API wrapper needs to be versioned properly and have releases that state compatibility with different versions of the server. | True | | main | versioning system the api wrapper needs to be versioned properly and have releases that state compatibility with different versions of the server | 1 |
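For the CubeServer row above, "releases that state compatibility with different versions of the server" usually reduces to each client release declaring a supported server-version range and comparing version tuples against it. A hypothetical sketch — the range values and names are invented:

```python
# Hypothetical compatibility declaration a client release could ship with.
MIN_SERVER = (1, 2)   # inclusive lower bound
MAX_SERVER = (2, 0)   # exclusive upper bound

def parse_version(s):
    """'1.4.7' -> (1, 4, 7)"""
    return tuple(int(part) for part in s.split("."))

def compatible(server_version):
    """True when the server's major.minor falls inside the declared range."""
    v = parse_version(server_version)[:2]
    return MIN_SERVER <= v < MAX_SERVER

print(compatible("1.4.7"), compatible("2.1.0"))
```

Tuple comparison gives correct ordering without string tricks ("1.10" sorts after "1.9"), which is why version ranges are usually compared this way.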
52,064 | 21,961,298,007 | IssuesEvent | 2022-05-24 16:02:45 | Azure/azure-sdk-for-net | https://api.github.com/repos/Azure/azure-sdk-for-net | closed | Manage claim required errors | Service Bus Client customer-reported question Functions issue-addressed | I have the following setup:
- azure function with a ServiceBus trigger (topic subscription)
- for authentication I'm using the ManagedIdentity approach and for this I assigned the _"Azure Service Bus Data Receiver"_ role to the MI for that specific topic subscription
- the function is using the dotnet runtime with Microsoft.Azure.WebJobs.Extensions.ServiceBus nuget version used is 5.3.0
- the function runtime version is 4 (dotnet 6)
The documentation states the following regarding the Access attribute and Manage access permission:
"Access rights for the connection string. Available values are manage and listen. The default is manage, which indicates that the connection has the Manage permission. If you use a connection string that does not have the Manage permission, set accessRights to "listen". Otherwise, the Functions runtime might fail trying to do operations that require manage rights. In Azure Functions version 2.x and higher, this property is not available because the latest version of the Service Bus SDK doesn't support manage operations."
https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-service-bus-trigger?tabs=in-process%2Cextensionv5&pivots=programming-language-csharp#attributes
My understanding is that the Manage claim is not supported for version 2 or higher.
The issue:
I can see a constant stream of _"Manage,EntityRead claims required for this operation."_ errors in Service Bus AzureDiagnostics logs.
The function on the other side is triggered and there are no logs on its side complaining about any kind of ServiceBus access.
What would be the cause of this? Is Manage still required? | 1.0 | Manage claim required errors - I have the following setup:
- azure function with a ServiceBus trigger (topic subscription)
- for authentication I'm using the ManagedIdentity approach and for this I assigned the _"Azure Service Bus Data Receiver"_ role to the MI for that specific topic subscription
- the function is using the dotnet runtime with Microsoft.Azure.WebJobs.Extensions.ServiceBus nuget version used is 5.3.0
- the function runtime version is 4 (dotnet 6)
The documentation states the following regarding the Access attribute and Manage access permission:
"Access rights for the connection string. Available values are manage and listen. The default is manage, which indicates that the connection has the Manage permission. If you use a connection string that does not have the Manage permission, set accessRights to "listen". Otherwise, the Functions runtime might fail trying to do operations that require manage rights. In Azure Functions version 2.x and higher, this property is not available because the latest version of the Service Bus SDK doesn't support manage operations."
https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-service-bus-trigger?tabs=in-process%2Cextensionv5&pivots=programming-language-csharp#attributes
My understanding is that the Manage claim is not supported for version 2 or higher.
The issue:
I can see a constant stream of _"Manage,EntityRead claims required for this operation."_ errors in Service Bus AzureDiagnostics logs.
The function on the other side is triggered and there are no logs on its side complaining about any kind of ServiceBus access.
What would be the cause of this? Is Manage still required? | non_main | manage claim required errors i have the following setup azure function with a servicebus trigger topic subscription for authentication i m using the managedidentity approach and for this i assigned the azure service bus data receiver role to the mi for that specific topic subscription the function is using the dotnet runtime with microsoft azure webjobs extensions servicebus nuget version used is the function runtime version is dotnet the documentation states the following regarding the access attribute and manage access permission access rights for the connection string available values are manage and listen the default is manage which indicates that the connection has the manage permission if you use a connection string that does not have the manage permission set accessrights to listen otherwise the functions runtime might fail trying to do operations that require manage rights in azure functions version x and higher this property is not available because the latest version of the service bus sdk doesn t support manage operations my understanding is that manage claim in not supported for version or higher the issue i can see a constant stream of manage entityread claims required for this operation errors in service bus azurediagnostics logs the function on the other side is triggered and there are no logs on its side complaining about any kind of servicebus access what would be the cause of this is manage still required | 0 |
5,010 | 25,758,665,060 | IssuesEvent | 2022-12-08 18:28:09 | mozilla/foundation.mozilla.org | https://api.github.com/repos/mozilla/foundation.mozilla.org | closed | SPIKE [PNI] Improve performance of product grid load time | engineering Buyer's Guide 🛍 Maintain p3 | # Description
There is currently an issue where the PNI pages with the product grid (namely home page and the review/category pages) load extremely slowly for logged in users.
I am guessing the template caching gets around this for the anonymous users.
```html+django
{% if request.user.is_anonymous %}
{# User is not logged in. Return cached results. 24 hour caching applied. #}
{% cache 86400 pni_home_page template_cache_key_fragment %}
{% for product in products %}
{% product_in_category product category as matched %}
{% include "fragments/buyersguide/item.html" with product=product.localized matched=matched %}
{% endfor %}
{% endcache %}
{% else %}
{# User is logged in. Don't cache their results so they can see live and draft products here. #}
{% for product in products %}
{% product_in_category product category as matched %}
{% include "fragments/buyersguide/item.html" with product=product.localized matched=matched %}
{% endfor %}
{% endif %}
```
But it still affects logged in users very badly and makes development extremely painful.
I am guessing the issue is related to the large queryset we are using and that we use 1+n queries to retrieve the localized version of each product (`product=product.localized`).
We should investigate the root cause of the bad load times and implement some performance improvements. Move the retrieval of the localized products into the view and make sure we only need 1 or 2 queries for this (anything below n would be good). Also, prefetch related items like the image on each product. That should also be possible to include in the above query. | True | SPIKE [PNI] Improve performance of product grid load time - # Description
There is currently an issue where the PNI pages with the product grid (namely home page and the review/category pages) load extremely slowly for logged in users.
I am guessing the template caching gets around this for the anonymous users.
```html+django
{% if request.user.is_anonymous %}
{# User is not logged in. Return cached results. 24 hour caching applied. #}
{% cache 86400 pni_home_page template_cache_key_fragment %}
{% for product in products %}
{% product_in_category product category as matched %}
{% include "fragments/buyersguide/item.html" with product=product.localized matched=matched %}
{% endfor %}
{% endcache %}
{% else %}
{# User is logged in. Don't cache their results so they can see live and draft products here. #}
{% for product in products %}
{% product_in_category product category as matched %}
{% include "fragments/buyersguide/item.html" with product=product.localized matched=matched %}
{% endfor %}
{% endif %}
```
But it still affects logged in users very badly and makes development extremely painful.
I am guessing the issue is related to the large queryset we are using and that we use 1+n queries to retrieve the localized version of each product (`product=product.localized`).
We should investigate the root cause for the bad load times implement some performance improvements. Move the retrieval of the localized products into the view and make sure we only need 1 or 2 queries for this (anything below n would be good). Also, prefetch related items like the image on each product. That should also be possible to include in the above query. | main | spike improve performance of product grid load time description there is currently an issue where the pni pages with the product grid namely home page and the review category pages load extremely slowly for logged in users i am guessing the template caching gets around this for the anonymous users html django if request user is anonymous user is not logged in return cached results hour caching applied cache pni home page template cache key fragment for product in products product in category product category as matched include fragments buyersguide item html with product product localized matched matched endfor endcache else user is logged in don t cache their results so they can see live and draft products here for product in products product in category product category as matched include fragments buyersguide item html with product product localized matched matched endfor endif but it still affects logged in users very badly and makes development extremely painful i am guessing the issue is related to the large queryset we are using and that we use n queries to retrieve the localized version of each product product product localized we should investigate the root cause for the bad load times implement some performance improvements move the retrieval of the localized products into the view and make sure we only need or queries for this anything below n would be good also prefetch related items like the image on each product that should also be possible to include in the above query | 1 |
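The 1+n lookup described in this record — `product=product.localized` resolved once per product in the template — can be collapsed into a single batched fetch. The sketch below simulates that with plain dicts; the real fix would use the ORM's prefetching, so the data shapes and names here are illustrative assumptions only:

```python
def localize_products(products, translations, locale):
    """Resolve localized versions with one batched lookup instead of 1+n.

    products: list of dicts, each with an 'id' and default fields.
    translations: mapping of (product_id, locale) -> translated dict,
    standing in for a single ORM query that fetches all translations at once.
    """
    # One "query": index every translation available for this locale.
    by_id = {pid: t for (pid, loc), t in translations.items() if loc == locale}
    # Fall back to the default record when no translation exists.
    return [by_id.get(p["id"], p) for p in products]

products = [{"id": 1, "name": "Camera"}, {"id": 2, "name": "Doorbell"}]
translations = {(1, "fr"): {"id": 1, "name": "Caméra"}}
# First product is localized, second falls back to the default record.
print(localize_products(products, translations, "fr"))
```

Whatever the ORM specifics, the shape of the fix is the same: one query builds the lookup table, and each product resolves against it in memory instead of issuing its own query.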
1,221 | 5,216,898,376 | IssuesEvent | 2017-01-26 12:00:22 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | ec2_group should allow for security group revocations | affects_2.1 aws cloud feature_idea waiting_on_maintainer | ##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
ec2_group
##### ANSIBLE VERSION
ansible 2.1.0.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
##### OS / ENVIRONMENT
N/A
##### SUMMARY
ec2_group allows us to create security groups, to remove security groups and to add new rules to security groups. What it doesn't allow us to do is to remove rules, so we can't easily revoke rules that have been set. This feature is already available in AWS and boto3 ([relevant documentation](https://boto3.readthedocs.io/en/latest/reference/services/ec2.html#EC2.SecurityGroup.revoke_ingress)).
##### STEPS TO REPRODUCE
Let's say you have a playbook that launches a new EC2 instance. That role also adds a new rule to an existing ELB, so that the new EC2 instance can access it. If you now have a `cleanup` playbook that destroys that EC2 instance, you ideally want to remove the rule you just added to the ELB. A simple revoke would be ideal, but that feature is not available.
We could have two new options to the module, `revoke_rules` and `revoke_rules_egress`, to take care of this.
| True | ec2_group should allow for security group revocations - ##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
ec2_group
##### ANSIBLE VERSION
ansible 2.1.0.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
##### OS / ENVIRONMENT
N/A
##### SUMMARY
ec2_group allows us to create security groups, to remove security groups and to add new rules to security groups. What it doesn't allow us to do is to remove rules, so we can't easily revoke rules that have been set. This feature is already available in AWS and boto3 ([relevant documentation](https://boto3.readthedocs.io/en/latest/reference/services/ec2.html#EC2.SecurityGroup.revoke_ingress)).
##### STEPS TO REPRODUCE
Let's say you have a playbook that launches a new EC2 instance. That role also adds a new rule to an existing ELB, so that the new EC2 instance can access it. If you now have a `cleanup` playbook that destroys that EC2 instance, you ideally want to remove the rule you just added to the ELB. A simple revoke would be ideal, but that feature is not available.
We could have two new options to the module, `revoke_rules` and `revoke_rules_egress`, to take care of this.
| main | group should allow for security group revocations issue type feature idea component name group ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides os environment n a summary group allows us to create security groups to remove security groups and to add new rules to security groups what it doesn t allow us to do is to remove rules so we can t easily revoke rules that have been set this feature is already available in aws and steps to reproduce let s say you have a playbook that launches a new instance that role also adds a new rule to an existing elb so that the new instance can access it if you now have a cleanup playbook that destroys that instance you ideally want to remove the rule you just added to the elb a simple revoke would be ideal but that feature is not available we could have two new options to the module revoke rules and revoke rules egress to take care of this | 1 |
2,757 | 9,872,900,315 | IssuesEvent | 2019-06-22 09:15:36 | arcticicestudio/snowsaw | https://api.github.com/repos/arcticicestudio/snowsaw | opened | Prettier | context-workflow scope-maintainability type-feature | <p align="center"><img src="https://user-images.githubusercontent.com/7836623/48644231-4556d780-e9e2-11e8-862e-e8ce630fd0ba.png" width="30%" /></p>
Integrate [Prettier][], the opinionated code formatter with support for many languages and integrations with most editors. It ensures that all outputted code conforms to a consistent style.
### Configuration
This is one of the main features of Prettier: It already provides the best and recommended style configurations out-of-the-box™.
The only option we will change is the [print width][prettier-docs-pwidth]. It is set to 80 by default, which is not up-to-date for modern screens (might only be relevant when working in terminals, e.g. with Vim). It'll be changed to 120, used by all of [_Arctic Ice Studio's_ style guides][gh-search-repo-stylg].
The `prettier.config.js` configuration file will be placed in the project root as well as the `.prettierignore` file to also define ignore pattern.
### NPM script/task
To allow to format all sources a `format:pretty` npm script/task will be added to be included in the main `format` script flow.
## Tasks
- [ ] Install [prettier][npm-prettier] package.
- [ ] Implement `prettier.config.js` configuration file.
- [ ] Implement `.prettierignore` ignore pattern file.
- [ ] Implement NPM `format:pretty` script/task.
- [ ] Format current code base for the first time and fix possible style guide violations using the configured linters of the project.
[npm-prettier]: https://www.npmjs.com/package/prettier
[prettier]: https://prettier.io
[prettier-blog-1.15-mdx]: https://prettier.io/blog/2018/11/07/1.15.0.html#mdx
[prettier-docs-pwidth]: https://prettier.io/docs/en/options.html#print-width
[gh-search-repo-stylg]: https://github.com/arcticicestudio?tab=repositories&q=styleguide&type=source | True | Prettier - <p align="center"><img src="https://user-images.githubusercontent.com/7836623/48644231-4556d780-e9e2-11e8-862e-e8ce630fd0ba.png" width="30%" /></p>
Integrate [Prettier][], the opinionated code formatter with support for many languages and integrations with most editors. It ensures that all outputted code conforms to a consistent style.
### Configuration
This is one of the main features of Prettier: It already provides the best and recommended style configurations out-of-the-box™.
The only option we will change is the [print width][prettier-docs-pwidth]. It is set to 80 by default, which is not up-to-date for modern screens (might only be relevant when working in terminals, e.g. with Vim). It'll be changed to 120, used by all of [_Arctic Ice Studio's_ style guides][gh-search-repo-stylg].
The `prettier.config.js` configuration file will be placed in the project root as well as the `.prettierignore` file to also define ignore pattern.
### NPM script/task
To allow to format all sources a `format:pretty` npm script/task will be added to be included in the main `format` script flow.
## Tasks
- [ ] Install [prettier][npm-prettier] package.
- [ ] Implement `prettier.config.js` configuration file.
- [ ] Implement `.prettierignore` ignore pattern file.
- [ ] Implement NPM `format:pretty` script/task.
- [ ] Format current code base for the first time and fix possible style guide violations using the configured linters of the project.
[npm-prettier]: https://www.npmjs.com/package/prettier
[prettier]: https://prettier.io
[prettier-blog-1.15-mdx]: https://prettier.io/blog/2018/11/07/1.15.0.html#mdx
[prettier-docs-pwidth]: https://prettier.io/docs/en/options.html#print-width
[gh-search-repo-stylg]: https://github.com/arcticicestudio?tab=repositories&q=styleguide&type=source | main | prettier integrate the opinionated code formatter with support for many languages and integrations with most editors it ensures that all outputted code conforms to a consistent style configuration this is one of the main features of prettier it already provides the best and recommended style configurations of out the box™ the only option we will change is the it is set to by default which not up to date for modern screens might only be relevant when working in terminals only like e g with vim it ll be changed to used by all of the prettier config js configuration file will be placed in the project root as well as the prettierignore file to also define ignore pattern npm script task to allow to format all sources a format pretty npm script task will be added to be included in the main format script flow tasks install package implement prettier config js configuration file implement prettierignore ignore pattern file implement npm format pretty script task format current code base for the first time and fix possible style guide violations using the configured linters of the project | 1 |
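For reference, a `prettier.config.js` matching the plan above could be as small as the following sketch — only the print width deviates from Prettier's defaults, and the exact file contents here are an assumption, not the project's final configuration:

```js
// prettier.config.js
// Rely on Prettier's recommended defaults; only widen the print width to 120
// to match Arctic Ice Studio's style guides.
module.exports = {
  printWidth: 120,
};
```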
118,464 | 15,300,985,355 | IssuesEvent | 2021-02-24 13:02:50 | department-of-veterans-affairs/va.gov-cms | https://api.github.com/repos/department-of-veterans-affairs/va.gov-cms | opened | Design Vet Center form design | Content forms Design Needs refining | ## Description
Form design for top Vet Center tasks (orange labels in this diagram)

## Acceptance Criteria
- [ ] TBD
| 1.0 | Design Vet Center form design - ## Description
Form design for top Vet Center tasks (orange labels in this diagram)

## Acceptance Criteria
- [ ] TBD
| non_main | design vet center form design description form design for top vet center tasks orange labels in this diagram acceptance criteria tbd | 0 |
841 | 4,488,907,567 | IssuesEvent | 2016-08-30 09:06:37 | duckduckgo/zeroclickinfo-goodies | https://api.github.com/repos/duckduckgo/zeroclickinfo-goodies | closed | Quack and Hack Cheat Sheet - Add forum link | Maintainer Input Requested | The new forum at forum.duckduckhack.com should be added to this cheat sheet.
------
IA Page: http://duck.co/ia/view/quackhack_cheat_sheet
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @zekiel | True | Quack and Hack Cheat Sheet - Add forum link - The new forum at forum.duckduckhack.com should be added to this cheat sheet.
------
IA Page: http://duck.co/ia/view/quackhack_cheat_sheet
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @zekiel | main | quack and hack cheat sheet add forum link the new forum at forum duckduckhack com should be added to this cheat sheet ia page zekiel | 1 |
3,988 | 18,443,399,399 | IssuesEvent | 2021-10-14 21:07:10 | aws/aws-sam-cli | https://api.github.com/repos/aws/aws-sam-cli | closed | sam local invoke throws "Error: Error building docker image" | blocked/more-info-needed area/local/invoke maintainer/need-followup |
### Description:
We are encountering the below error while locally invoking a lambda in an AWS CodeBuild machine
Image was not found.
842 | Building image............................................................................................................................
843 | Failed to build Docker Image
844 | NoneType: None
845 | Exception on /2015-03-31/functions/FunctionName/invocations [POST]
846 | Traceback (most recent call last):
847 | File "/root/.pyenv/versions/3.8.10/lib/python3.8/site-packages/flask/app.py", line 2447, in wsgi_app
848 | response = self.full_dispatch_request()
849 | File "/root/.pyenv/versions/3.8.10/lib/python3.8/site-packages/flask/app.py", line 1952, in full_dispatch_request
850 | rv = self.handle_user_exception(e)
851 | File "/root/.pyenv/versions/3.8.10/lib/python3.8/site-packages/flask/app.py", line 1821, in handle_user_exception
852 | reraise(exc_type, exc_value, tb)
853 | File "/root/.pyenv/versions/3.8.10/lib/python3.8/site-packages/flask/_compat.py", line 39, in reraise
854 | raise value
855 | File "/root/.pyenv/versions/3.8.10/lib/python3.8/site-packages/flask/app.py", line 1950, in full_dispatch_request
856 | rv = self.dispatch_request()
857 | File "/root/.pyenv/versions/3.8.10/lib/python3.8/site-packages/flask/app.py", line 1936, in dispatch_request
858 | return self.view_functionsrule.endpoint
859 | File "/root/.pyenv/versions/3.8.10/lib/python3.8/site-packages/samcli/local/lambda_service/local_lambda_invoke_service.py", line 151, in _invoke_request_handler
860 | self.lambda_runner.invoke(function_name, request_data, stdout=stdout_stream_writer, stderr=self.stderr)
861 | File "/root/.pyenv/versions/3.8.10/lib/python3.8/site-packages/samcli/commands/local/lib/local_lambda.py", line 130, in invoke
862 | self.local_runtime.invoke(
863 | File "/root/.pyenv/versions/3.8.10/lib/python3.8/site-packages/samcli/lib/telemetry/metric.py", line 217, in wrapped_func
864 | return_value = func(*args, **kwargs)
865 | File "/root/.pyenv/versions/3.8.10/lib/python3.8/site-packages/samcli/local/lambdafn/runtime.py", line 176, in invoke
866 | container = self.create(function_config, debug_context, container_host, container_host_interface)
867 | File "/root/.pyenv/versions/3.8.10/lib/python3.8/site-packages/samcli/local/lambdafn/runtime.py", line 73, in create
868 | container = LambdaContainer(
869 | File "/root/.pyenv/versions/3.8.10/lib/python3.8/site-packages/samcli/local/docker/lambda_container.py", line 87, in init
870 | image = LambdaContainer._get_image(lambda_image, runtime, packagetype, imageuri, layers)
871 | File "/root/.pyenv/versions/3.8.10/lib/python3.8/site-packages/samcli/local/docker/lambda_container.py", line 213, in _get_image
872 | return lambda_image.build(runtime, packagetype, image, layers)
873 | File "/root/.pyenv/versions/3.8.10/lib/python3.8/site-packages/samcli/local/docker/lambda_image.py", line 144, in build
874 | self._build_image(image if image else image_name, image_tag, downloaded_layers, stream=stream_writer)
875 | File "/root/.pyenv/versions/3.8.10/lib/python3.8/site-packages/samcli/local/docker/lambda_image.py", line 245, in _build_image
876 | raise ImageBuildException("Error building docker image: {}".format(log["error"]))
877 | samcli.commands.local.cli_common.user_exceptions.ImageBuildException: Error building docker image: The command '/bin/sh -c chmod +x /var/rapid/aws-lambda-rie' returned a non-zero code: 1
Build machine specs:
AWS CodeBuild - Linux env
SAM CLI --version: 1.27.2
AWS region: us-east-1
### Steps to reproduce:
### Observed result:
### Expected result:
### Additional environment details (Ex: Windows, Mac, Amazon Linux etc)
1. OS:
2. `sam --version`:
3. AWS region:
`Add --debug flag to command you are running`
| True | sam local invoke throws "Error: Error building docker image" -
### Description:
We are encountering the below error while locally invoking a lambda in an AWS CodeBuild machine
Image was not found.
842 | Building image............................................................................................................................
843 | Failed to build Docker Image
844 | NoneType: None
845 | Exception on /2015-03-31/functions/FunctionName/invocations [POST]
846 | Traceback (most recent call last):
847 | File "/root/.pyenv/versions/3.8.10/lib/python3.8/site-packages/flask/app.py", line 2447, in wsgi_app
848 | response = self.full_dispatch_request()
849 | File "/root/.pyenv/versions/3.8.10/lib/python3.8/site-packages/flask/app.py", line 1952, in full_dispatch_request
850 | rv = self.handle_user_exception(e)
851 | File "/root/.pyenv/versions/3.8.10/lib/python3.8/site-packages/flask/app.py", line 1821, in handle_user_exception
852 | reraise(exc_type, exc_value, tb)
853 | File "/root/.pyenv/versions/3.8.10/lib/python3.8/site-packages/flask/_compat.py", line 39, in reraise
854 | raise value
855 | File "/root/.pyenv/versions/3.8.10/lib/python3.8/site-packages/flask/app.py", line 1950, in full_dispatch_request
856 | rv = self.dispatch_request()
857 | File "/root/.pyenv/versions/3.8.10/lib/python3.8/site-packages/flask/app.py", line 1936, in dispatch_request
858 | return self.view_functionsrule.endpoint
859 | File "/root/.pyenv/versions/3.8.10/lib/python3.8/site-packages/samcli/local/lambda_service/local_lambda_invoke_service.py", line 151, in _invoke_request_handler
860 | self.lambda_runner.invoke(function_name, request_data, stdout=stdout_stream_writer, stderr=self.stderr)
861 | File "/root/.pyenv/versions/3.8.10/lib/python3.8/site-packages/samcli/commands/local/lib/local_lambda.py", line 130, in invoke
862 | self.local_runtime.invoke(
863 | File "/root/.pyenv/versions/3.8.10/lib/python3.8/site-packages/samcli/lib/telemetry/metric.py", line 217, in wrapped_func
864 | return_value = func(*args, **kwargs)
865 | File "/root/.pyenv/versions/3.8.10/lib/python3.8/site-packages/samcli/local/lambdafn/runtime.py", line 176, in invoke
866 | container = self.create(function_config, debug_context, container_host, container_host_interface)
867 | File "/root/.pyenv/versions/3.8.10/lib/python3.8/site-packages/samcli/local/lambdafn/runtime.py", line 73, in create
868 | container = LambdaContainer(
869 | File "/root/.pyenv/versions/3.8.10/lib/python3.8/site-packages/samcli/local/docker/lambda_container.py", line 87, in init
870 | image = LambdaContainer._get_image(lambda_image, runtime, packagetype, imageuri, layers)
871 | File "/root/.pyenv/versions/3.8.10/lib/python3.8/site-packages/samcli/local/docker/lambda_container.py", line 213, in _get_image
872 | return lambda_image.build(runtime, packagetype, image, layers)
873 | File "/root/.pyenv/versions/3.8.10/lib/python3.8/site-packages/samcli/local/docker/lambda_image.py", line 144, in build
874 | self._build_image(image if image else image_name, image_tag, downloaded_layers, stream=stream_writer)
875 | File "/root/.pyenv/versions/3.8.10/lib/python3.8/site-packages/samcli/local/docker/lambda_image.py", line 245, in _build_image
876 | raise ImageBuildException("Error building docker image: {}".format(log["error"]))
877 | samcli.commands.local.cli_common.user_exceptions.ImageBuildException: Error building docker image: The command '/bin/sh -c chmod +x /var/rapid/aws-lambda-rie' returned a non-zero code: 1
Build machine specs:
AWS CodeBuild - Linux env
SAM CLI --version: 1.27.2
AWS region: us-east-1
### Steps to reproduce:
### Observed result:
### Expected result:
### Additional environment details (Ex: Windows, Mac, Amazon Linux etc)
1. OS:
2. `sam --version`:
3. AWS region:
`Add --debug flag to command you are running`
| main | sam local invoke throws error error building docker image make sure we don t have an existing issue that reports the bug you are seeing both open and closed if you do find an existing issue re open or add a comment to that issue instead of creating a new one description we are encountering below error while locally invoking a lambda in aws code build machine image was not found building image failed to build docker image nonetype none exception on functions functionname invocations traceback most recent call last file root pyenv versions lib site packages flask app py line in wsgi app response self full dispatch request file root pyenv versions lib site packages flask app py line in full dispatch request rv self handle user exception e file root pyenv versions lib site packages flask app py line in handle user exception reraise exc type exc value tb file root pyenv versions lib site packages flask compat py line in reraise raise value file root pyenv versions lib site packages flask app py line in full dispatch request rv self dispatch request file root pyenv versions lib site packages flask app py line in dispatch request return self view functionsrule endpoint file root pyenv versions lib site packages samcli local lambda service local lambda invoke service py line in invoke request handler self lambda runner invoke function name request data stdout stdout stream writer stderr self stderr file root pyenv versions lib site packages samcli commands local lib local lambda py line in invoke self local runtime invoke file root pyenv versions lib site packages samcli lib telemetry metric py line in wrapped func return value func args kwargs file root pyenv versions lib site packages samcli local lambdafn runtime py line in invoke container self create function config debug context container host container host interface file root pyenv versions lib site packages samcli local lambdafn runtime py line in create container lambdacontainer file root pyenv versions lib site packages samcli local docker lambda container py line in init image lambdacontainer get image lambda image runtime packagetype imageuri layers file root pyenv versions lib site packages samcli local docker lambda container py line in get image return lambda image build runtime packagetype image layers file root pyenv versions lib site packages samcli local docker lambda image py line in build self build image image if image else image name image tag downloaded layers stream stream writer file root pyenv versions lib site packages samcli local docker lambda image py line in build image raise imagebuildexception error building docker image format log samcli commands local cli common user exceptions imagebuildexception error building docker image the command bin sh c chmod x var rapid aws lambda rie returned a non zero code build machine specs aws codebuild linux env sam cli version aws region us east steps to reproduce observed result expected result additional environment details ex windows mac amazon linux etc os sam version aws region add debug flag to command you are running | 1
2,621 | 8,886,471,769 | IssuesEvent | 2019-01-15 00:43:15 | dgets/lasttime | https://api.github.com/repos/dgets/lasttime | closed | Determine how to use a custom field properly for units | bug enhancement help wanted maintainability | En route to #12, it has become apparent that utilizing an `Enum` with either string or integer equivalences is not going to work due to how these are implemented when stored in the database. They're being stored in some format that I haven't come across, so any comparisons always return the default value for that record field. That record field isn't even changeable in the database from the admin view, so this definitely needs to be implemented in a different way. | True | Determine how to use a custom field properly for units - En route to #12, it has become apparent that utilizing an `Enum` with either string or integer equivalences is not going to work due to how these are implemented when stored in the database. They're being stored in some format that I haven't come across, so any comparisons always return the default value for that record field. That record field isn't even changeable in the database from the admin view, so this definitely needs to be implemented in a different way. | main | determine how to use a custom field properly for units en route to it has become apparent that utilizing an enum with either string or integer equivalences is not going to work due to how these are implemented when stored in the database they re being stored in some format that i haven t come across so any comparisons always return the default value for that record field that record field isn t even changeable in the database from the admin view so this definitely needs to be implemented in a different way | 1 |
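The comparison failure described in this record is a common `Enum` pitfall: the database column holds a serialized value, so comparisons must use `member.value` (or rehydrate the member from the stored value), never the member object against the raw value. A small pure-Python sketch of the distinction — the `Units` enum and its values are hypothetical stand-ins for the app's actual field:

```python
from enum import Enum

class Units(Enum):
    MG = 1
    ML = 2

# What a database column would actually hold: the serialized value.
stored = 2

# Comparing the raw stored value to a member always fails...
print(stored == Units.ML)          # False: int vs Enum member
# ...so either compare against member.value, or rehydrate the member first.
print(stored == Units.ML.value)    # True
print(Units(stored) is Units.ML)   # True
```

This is why such comparisons "always return the default value": the equality check never matches, so the fallback branch always wins. A custom field's job is to serialize `member.value` on save and call `Units(stored)` on load.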
3,319 | 12,879,577,569 | IssuesEvent | 2020-07-11 23:12:22 | short-d/short | https://api.github.com/repos/short-d/short | opened | [Refactor] Consider adding custom error type for failed auto alias | maintainability | **What is frustrating you?**
In `CreateShortLink`, if auto alias generation fails, error is thrown to API consumer. This can expose more details than necessary to API caller and could risk API being attacked by hackers.
**Your solution**
Add a new error type to use case layer that signifies to API caller that short link creation failed due to error generating alias, without revealing any information from the implementation of auto alias generation. Throw this new error type in use case layer when auto alias fails.
**Alternatives considered**
Throw error as-is from use case layer and let the GraphQL resolver create a new error type for auto alias fail.
| True | [Refactor] Consider adding custom error type for failed auto alias - **What is frustrating you?**
In `CreateShortLink`, if auto alias generation fails, error is thrown to API consumer. This can expose more details than necessary to API caller and could risk API being attacked by hackers.
**Your solution**
Add a new error type to use case layer that signifies to API caller that short link creation failed due to error generating alias, without revealing any information from the implementation of auto alias generation. Throw this new error type in use case layer when auto alias fails.
**Alternatives considered**
Throw error as-is from use case layer and let the GraphQL resolver create a new error type for auto alias fail.
| main | consider adding custom error type for failed auto alias what is frustrating you in createshortlink if auto alias generation fails error is thrown to api consumer this can expose more details than necessary to api caller and could risk api being attacked by hackers your solution add a new error type to use case layer that signifies to api caller that short link creation failed due to error generating alias without revealing any information from the implementation of auto alias generation throw this new error type in use case layer when auto alias fails alternatives considered throw error as is from use case layer and let the graphql resolver create a new error type for auto alias fail | 1 |
126,694 | 5,002,458,125 | IssuesEvent | 2016-12-11 12:06:45 | mulesoft/api-workbench | https://api.github.com/repos/mulesoft/api-workbench | closed | Syntax highlighting switches off after IDE restart | atom bug in progress priority:critical | Open a Box project -> open a boxAPI.raml file -> Reload IDE
### Result:
Syntax highlighting becomes gray
<img width="629" alt="2016-12-09 18 31 01" src="https://cloud.githubusercontent.com/assets/13314242/21047649/a8b2dcac-be3d-11e6-878d-adb44db80daa.png">
| 1.0 | Syntax highlighting switches off after IDE restart - Open a Box project -> open a boxAPI.raml file -> Reload IDE
### Result:
Syntax highlighting becomes gray
<img width="629" alt="2016-12-09 18 31 01" src="https://cloud.githubusercontent.com/assets/13314242/21047649/a8b2dcac-be3d-11e6-878d-adb44db80daa.png">
| non_main | syntax highlighting switches off after ide restart open a box project open a boxapi raml file reload ide result syntax highlighting becomes gray img width alt src | 0 |
560,342 | 16,594,148,884 | IssuesEvent | 2021-06-01 11:27:04 | Edgeryders-Participio/multi-dreams | https://api.github.com/repos/Edgeryders-Participio/multi-dreams | closed | Unable to use Facebook to authenticate | Priority: 1 (now - within 1 month) | I tried to log into Dreams using Facebook, and got an error that the application was still in development mode.

| 1.0 | Unable to use Facebook to authenticate - I tried to log into Dreams using Facebook, and got an error that the application was still in development mode.

| non_main | unable to use facebook to authenticate i tried to log into dreams using facebook and got an error that the application was still in development mode | 0 |
44,530 | 9,601,817,468 | IssuesEvent | 2019-05-10 13:10:28 | HGustavs/LenaSYS | https://api.github.com/repos/HGustavs/LenaSYS | closed | Codeviewer.js brackets does not follow the standard for functions | CodeViewer gruppC2019 | There are some functions in this document that do not follow the standard when it comes to how brackets should be typed
~~~
function alignBoxesHeight3stack(boxValArray, boxNumBase, boxNumAlign, boxNumAlignSecond){
...
}
**Should be:**
function alignBoxesHeight3stack(boxValArray, boxNumBase, boxNumAlign, boxNumAlignSecond)
{
...
}
~~~ | 1.0 | Codeviewer.js brackets does not follow the standard for functions - There are some functions in this document that do not follow the standard when it comes to how brackets should be typed
~~~
function alignBoxesHeight3stack(boxValArray, boxNumBase, boxNumAlign, boxNumAlignSecond){
...
}
**Should be:**
function alignBoxesHeight3stack(boxValArray, boxNumBase, boxNumAlign, boxNumAlignSecond)
{
...
}
~~~ | non_main | codeviewer js brackets does not follow the standard for functions there are some functions in this document that do not follow the standard when it comes to how brackets should be typed function boxvalarray boxnumbase boxnumalign boxnumalignsecond should be function boxvalarray boxnumbase boxnumalign boxnumalignsecond | 0
1,210 | 5,165,975,944 | IssuesEvent | 2017-01-17 15:08:49 | betaflight/betaflight | https://api.github.com/repos/betaflight/betaflight | closed | Inconsistent cycle time with blackbox enabled, 8k pid loop. | For Target Maintainer | Hi,
I am using custom made, F4 board.
CPU load is pretty low when 8k/8k setting.
But, when I enable blackbox, the cpu cycletime is very inconsistent.
I've tried 1k blackbox setting, still inconsistent cycle time.
What about another board? have you tried to log blackbox with 8k/8k setup? | True | Inconsistent cycle time with blackbox enabled, 8k pid loop. - Hi,
I am using custom made, F4 board.
CPU load is pretty low when 8k/8k setting.
But, when I enable blackbox, the cpu cycletime is very inconsistent.
I've tried 1k blackbox setting, still inconsistent cycle time.
What about another board? have you tried to log blackbox with 8k/8k setup? | main | inconsistent cycle time with blackbox enabled pid loop hi i am using custom made board cpu load is pretty low when setting but when i enable blackbox the cpu cycletime is very inconsistent i ve tried blackbox setting still inconsistent cycle time what about another board have you tried to log blackbox with setup | 1
3,266 | 12,425,870,319 | IssuesEvent | 2020-05-24 18:22:16 | ipfs/go-ds-crdt | https://api.github.com/repos/ipfs/go-ds-crdt | closed | Set "Remove" operation removes all tags repeatedly | P1 kind/enhancement need/analysis need/maintainers-input | Currently, calling set.Rmv(key) runs an element prefix query for the given key, and tombstones all the results it gets. This means a full element removal, every time a Rmv() operation is run. This results in redundant tombstones ending up in the Delta. Should it not only tombstone the elements that haven't yet been tombstoned.
Example.
```
set.Add("key", 1) #produces delta with ID: Qa
set.Add("key", 2) #produces delta with ID: Qb
set.Rmv("key") # produces delta tombstoning Qa, and Qb
set.Add("key", 3) # produces delta with ID: Qc
set.Rmv("key") # produces delta tombstoning Qa, Qb, and Qc
```
(Assume there is a merge operation called between each action)
In the example above, `Qa` and `Qb` were tombstoned twice (Once for each Rmv() call). Additionally, this problem only gets worse if you were to make several hundred updates to a key, followed by removals. Given that the entire BlockID is included in a tombstone Delta, this inflates the storage requirements exponentially.
Given that each Merge call updates the local state key value store, the record of which keys have been tombstoned is already available. In the case where we are syncing from a remote store, we already sync the Merkle Clock DAG and its deltas as well, so again we have a consistent state of which keys have been tombstoned when. In either of these cases, do we not already have the necessary information to only add tombstones of elements which haven't actually been tombstoned yet?
Is this an intended design for the Merkle CRDT semantics of an ORSet, or just taken verbatim from the original design from the delta-state paper?
Example.
```
set.Add("key", 1) #produces delta with ID: Qa
set.Add("key", 2) #produces delta with ID: Qb
set.Rmv("key") # produces delta tombstoning Qa, and Qb
set.Add("key", 3) # produces delta with ID: Qc
set.Rmv("key") # produces delta tombstoning Qa, Qb, and Qc
```
(Assume there is a merge operation called between each action)
In the example above, `Qa` and `Qb` were tombstoned twice (Once for each Rmv() call). Additionally, this problem only gets worse if you were to make several hundred updates to a key, followed by removals. Given that the entire BlockID is included in a tombstone Delta, this inflates the storage requirements exponentially.
Given that each Merge call updates the local state key value store, the record of which keys have been tombstoned is already available. In the case where we are syncing from a remote store, we already sync the Merkle Clock DAG and its deltas as well, so again we have a consistent state of which keys have been tombstoned when. In either of these cases, do we not already have the necessary information to only add tombstones of elements which haven't actually been tombstoned yet?
Is this an intended design for the Merkle CRDT semantics of an ORSet, or just taken verbatim from the original design from the delta-state paper. | main | set remove operation removes all tags repeatedly currently calling set rmv key runs an element prefix query for the given key and tombstones all the results it gets this means a full element removal every time a rmv operation is run this results in redundant tombstones ending up in the delta should it not only tombstone the elements that haven t yet been tombstoned example set add key produces delta with id qa set add key produces delta with id qb set rmv key produces delta tombstoning qa and qb set add key produces delta with id qc set rmv key produces delta tombstoning qa qb and qc assume there is a merge operation called between each action in the example above qa and qb were tombstoned twice once for each rmv call additionally this problem only gets worse if you were to make several hundred updates to a key followed by removals given that the entire blockid is included in a tombstone delta this inflates the storage requirements exponentially given that each merge call updates the local state key value store the record of which keys have been tombstoned is already available in the case where we are syncing from a remote store we already sync the merkle clock dag and its deltas as well so again we have a consistent state of which keys have been tombstoned when in either of these cases do we not already have the necessary information to only add tombstones of elements which havent actually been tombstoned yet is this an intended design for the merkle crdt semantics of an orset or just taken verbatim from the original design from the delta state paper | 1 |
79,823 | 9,956,539,403 | IssuesEvent | 2019-07-05 14:11:01 | Submitty/Submitty | https://api.github.com/repos/Submitty/Submitty | closed | Inconsistent usage of "students" and "users" | question / design / discussion needed | When I am writing API things for course users, I find that graders are included in the students' list in the `Students Enrolled in Registration Section NULL` section. When clicking "Download Users", these graders will be included in the csv.
Is it an expected behaviour? Do we need to modify "students" to "users" or we want to remove graders from students' list? | 1.0 | Inconsistent usage of "students" and "users" - When I am writing API things for course users, I find that graders are included in the students' list in the `Students Enrolled in Registration Section NULL` section. When clicking "Download Users", these graders will be included in the csv.
Is it an expected behaviour? Do we need to modify "students" to "users" or we want to remove graders from students' list? | non_main | inconsistent usage of students and users when i am writing api things for course users i find that graders are included in the students list in the students enrolled in registration section null section when clicking download users these graders will be included in the csv is it an expected behaviour do we need to modify students to users or we want to remove graders from students list | 0 |
1,540 | 6,572,229,716 | IssuesEvent | 2017-09-11 00:20:32 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | yum module should support state "downloaded" | affects_2.1 feature_idea waiting_on_maintainer | Hi, guys!
In some of our clients, the internet connection isn't that great and we have been forced to split our installation using two playbooks: one for preparing the environment and the other for installing effectively.
In the prepare playbook we do things like downloading files to the server. Typically, we run it some hours before the actual installation.
Specifically, using yum module we would like to have the rpm packages present in the server, but not installed. Manually we are able to do this with the yumdownloader command.
Anyway, thanks for the great tool you guys maintain!
Cheers,
-- Rodrigo Couto
| True | yum module should support state "downloaded" - Hi, guys!
In some of our clients, the internet connection isn't that great and we have been forced to split our installation using two playbooks: one for preparing the environment and the other for installing effectively.
In the prepare playbook we do things like downloading files to the server. Typically, we run it some hours before the actual installation.
Specifically, using yum module we would like to have the rpm packages present in the server, but not installed. Manually we are able to do this with the yumdownloader command.
Anyway, thanks for the great tool you guys maintain!
Cheers,
-- Rodrigo Couto
| main | yum module should support state downloaded hi guys in some of our clients the internet connection isn t that great and we have been forced to split our installation using two playbooks one for preparing the environment and the other for installing effectively in the prepare playbook we do things like downloading files to the server typically we run it some hours before the actual installation specifically using yum module we would like to have the rpm packages present in the server but not installed manually we are able to do this with the yumdownloader command anyway thanks for the great tool you guys maintain cheers rodrigo couto | 1 |
5,713 | 30,197,968,576 | IssuesEvent | 2023-07-05 00:59:20 | tModLoader/tModLoader | https://api.github.com/repos/tModLoader/tModLoader | closed | tModLoader development on non windows systems | Requestor-TML Maintainers Type: Change/Feature Request NEW ISSUE | ### Do you intend to personally contribute/program this feature?
Yes
### I would like to see this change made to improve my experience with
tModLoader code as a Contributor/Maintainer
### Description
i have some ideas and am interested in contributing to tmodloader but i don't have a windows vm and would rather prefer to not have to set one up.
there is #1456 but that seems to be about compiling mods on non windows systems.
i've asked around and the patcher uses windows forms so it doesn't run on linux/mac, also no bash/sh script is provided for linux/mac setup.
i would be able and willing to modify/make the patcher to run on linux,
however, i've never contributed a patch to tmodloader before so i am unaware of any details of the development workflow and what features exactly would such a tool need.
### What does this proposal attempt to solve or improve?
it is currently impossible for someone on non windows to contribute to tmodloader. or at least it is not apparent how someone on linux can contribute to tmodloader.
### Which (other) solutions should be considered?
_No response_ | True | tModLoader development on non windows systems - ### Do you intend to personally contribute/program this feature?
Yes
### I would like to see this change made to improve my experience with
tModLoader code as a Contributor/Maintainer
### Description
i have some ideas and am interested in contributing to tmodloader but i don't have a windows vm and would rather prefer to not have to set one up.
there is #1456 but that seems to be about compiling mods on non windows systems.
i've asked around and the patcher uses windows forms so it doesn't run on linux/mac, also no bash/sh script is provided for linux/mac setup.
i would be able and willing to modify/make the patcher to run on linux,
however, i've never contributed a patch to tmodloader before so i am unaware of any details of the development workflow and what features exactly would such a tool need.
### What does this proposal attempt to solve or improve?
it is currently impossible for someone on non windows to contribute to tmodloader. or at least it is not apparent how someone on linux can contribute to tmodloader.
### Which (other) solutions should be considered?
_No response_ | main | tmodloader development on non windows systems do you intend to personally contribute program this feature yes i would like to see this change made to improve my experience with tmodloader code as a contributor maintainer description i have some ideas and am interested in contributing to tmodloader but i don t have a windows vm and would rather prefer to not have to set one up there is but that seems to be about compiling mods on non windows systems i ve asked around and the patcher uses windows forms so it doesn t run on linux mac also no bash sh script is provided for linux mac setup i would be able and willing to modify make the patcher to run on linux however i ve never contributed a patch to tmodloader before so i am unaware of any details of the development workflow and what features exactly would such a tool need what does this proposal attempt to solve or improve it is currently impossible for someone on non windows to contribute to tmodloader or at least it is not apparent how someone on linux can contribute to tmodloader which other solutions should be considered no response | 1 |
398 | 3,442,589,531 | IssuesEvent | 2015-12-14 23:16:22 | espeak-ng/espeak-ng | https://api.github.com/repos/espeak-ng/espeak-ng | closed | Remove commented/#ifdef'd out code. | maintainability resolved/fixed | This code is not used by the program and is accessible by the source control history. Therefore, this code should be removed to make the code more readable. | True | Remove commented/#ifdef'd out code. - This code is not used by the program and is accessible by the source control history. Therefore, this code should be removed to make the code more readable. | main | remove commented ifdef d out code this code is not used by the program and is accessible by the source control history therefore this code should be removed to make the code more readable | 1 |
191,672 | 15,300,802,597 | IssuesEvent | 2021-02-24 12:48:48 | gchriswill/JicJacJoe | https://api.github.com/repos/gchriswill/JicJacJoe | closed | Update README for Module 5 | documentation | Update README file for the following items:
- [ ] Current status
- [ ] Milestone "Module 5" Details
- [ ] Branch Details
- [ ] Typo corrections | 1.0 | Update README for Module 5 - Update README file for the following items:
- [ ] Current status
- [ ] Milestone "Module 5" Details
- [ ] Branch Details
- [ ] Typo corrections | non_main | update readme for module update readme file for the following items current status milestone module details branch details typo corrections | 0 |
1,807 | 6,575,943,739 | IssuesEvent | 2017-09-11 17:55:45 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | How do I remove references in file /etc/services | affects_2.0 feature_idea waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
Yes Not in GitHub
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Feature Idea
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible-2.0
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
##### SUMMARY
<!--- Explain the problem briefly -->
want to uninstall Netbackup for rhel,
Remove NetBackup references in the /etc/services file:
# NetBackup services
#
bpjava-msvc 13722/tcp bpjava-msvc
bpcd 13782/tcp bpcd
vnetd 13724/tcp vnetd
vopied 13783/tcp vopied
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes below -->
```
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
```
| True | How do I remove references in file /etc/services - <!--- Verify first that your issue/request is not already reported in GitHub -->
Yes Not in GitHub
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Feature Idea
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible-2.0
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
##### SUMMARY
<!--- Explain the problem briefly -->
want to uninstall Netbackup for rhel,
Remove NetBackup references in the /etc/services file:
# NetBackup services
#
bpjava-msvc 13722/tcp bpjava-msvc
bpcd 13782/tcp bpcd
vnetd 13724/tcp vnetd
vopied 13783/tcp vopied
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes below -->
```
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
```
| main | how do i remove references in file etc services yes not in github issue type feature idea component name ansible version ansible configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific summary want to uninstall netbackup for rhel remove netbackup references in the etc services file netbackup services bpjava msvc tcp bpjava msvc bpcd tcp bpcd vnetd tcp vnetd vopied tcp vopied steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used expected results actual results | 1 |
5,081 | 25,988,027,472 | IssuesEvent | 2022-12-20 03:25:04 | centerofci/mathesar | https://api.github.com/repos/centerofci/mathesar | closed | Change behavior of cell selection via row/column headers | type: enhancement work: frontend status: draft restricted: maintainers | The cell selection functionality, when performed via the row headers or column headers doesn't match my expected behavior. Our [specs](https://github.com/centerofci/mathesar-wiki/blob/master/design/specs/table-inspector.md) aren't super clear on these details, so I'm using this ticket to get further into the weeds by describing my expected behavior.
I want the cell selection to work the way that LibreOffice works because that seems simple and effective. Some of the UX subtleties are difficult to describe, so if you're confused about what I'm asking for, play with LibreOffice to see how it works. I find it helpful to use LibreOffice as a working prototype.
Below I'll describe the changes I'd like in terms of columns and column headers -- but these changes should apply to rows and row headers too.
## Specific changes
- When all cells in the column are selected, clicking on the column header (or dragging from and to the same column header) resets the selection to zero cells. Instead, this should begin a new selection, selecting all the currently selected cells (effectively retaining the currently selected cells).
- When I drag to select multiple columns, currently no changes are made to the selection. Instead, this should begin a new selection, selecting the cells in all columns between (and including) the starting column and the ending column.
| True | Change behavior of cell selection via row/column headers - The cell selection functionality, when performed via the row headers or column headers doesn't match my expected behavior. Our [specs](https://github.com/centerofci/mathesar-wiki/blob/master/design/specs/table-inspector.md) aren't super clear on these details, so I'm using this ticket to get further into the weeds by describing my expected behavior.
I want the cell selection to work the way that LibreOffice works because that seems simple and effective. Some of the UX subtleties are difficult to describe, so if you're confused about what I'm asking for, play with LibreOffice to see how it works. I find it helpful to use LibreOffice as a working prototype.
Below I'll describe the changes I'd like in terms of columns and column headers -- but these changes should apply to rows and row headers too.
## Specific changes
- When all cells in the column are selected, clicking on the column header (or dragging from and to the same column header) resets the selection to zero cells. Instead, this should begin a new selection, selecting all the currently selected cells (effectively retaining the currently selected cells).
- When I drag to select multiple columns, currently no changes are made to the selection. Instead, this should begin a new selection, selecting the cells in all columns between (and including) the starting column and the ending column.
| main | change behavior of cell selection via row column headers the cell selection functionality when performed via the row headers or column headers doesn t match my expected behavior our aren t super clear on these details so i m using this ticket to get further into the weeds by describing my expected behavior i want the cell selection to work the way that libreoffice works because that seems simple and effective some of the ux subtleties are difficult to describe so if you re confused about what i m asking for play with libreoffice to see how it works i find it helpful to use libreoffice as a working prototype below i ll describe the changes i d like in terms of columns and column headers but these changes should apply to rows and row headers too specific changes when all cells in the column are selected clicking on the column header or dragging from and to the same column header resets the selection to zero cells instead this should begin a new selection selecting all the currently selected cells effectively retaining the currently selected cells when i drag to select multiple columns currently no changes are made to the selection instead this should begin a new selection selecting the cells in all columns between and including the starting column and the ending column | 1 |
797,732 | 28,153,843,494 | IssuesEvent | 2023-04-03 05:20:46 | calcom/cal.com | https://api.github.com/repos/calcom/cal.com | closed | [CAL-1094] Embed modal / Inline - UI/Layout/Spacing issues | ⚡ Quick Wins Low priority | Current

Should be

[View in Figma ](https://www.figma.com/file/xk4HOxtSI82J0F7enMxeak/Cal---Live?node-id=22%3A38140&t=AQX9GWSFsCzlRrEy-1)
**Note:** We might not need to touch the preview part. Mostly check spacing & components in the left panel, and ensure the modal doesn't touch the edges of the screen.
<sub>From [SyncLinear.com](https://synclinear.com) | [CAL-1094](https://linear.app/calcom/issue/CAL-1094/inline-embed-modal-uilayoutspacing-issues)</sub> | 1.0 | [CAL-1094] Embed modal / Inline - UI/Layout/Spacing issues - Current

Should be

[View in Figma ](https://www.figma.com/file/xk4HOxtSI82J0F7enMxeak/Cal---Live?node-id=22%3A38140&t=AQX9GWSFsCzlRrEy-1)
**Note:** We might not need to touch the preview part. Mostly check spacing & components in the left panel, and ensure the modal doesn't touch the edges of the screen.
<sub>From [SyncLinear.com](https://synclinear.com) | [CAL-1094](https://linear.app/calcom/issue/CAL-1094/inline-embed-modal-uilayoutspacing-issues)</sub> | non_main | embed modal inline ui layout spacing issues current should be note we might not need to touch the preview part mostly check spacing components in the left panel and ensure the modal doesn t touch the edges of the screen from | 0 |
2,838 | 10,212,124,117 | IssuesEvent | 2019-08-14 18:38:47 | arcticicestudio/styleguide-javascript | https://api.github.com/repos/arcticicestudio/styleguide-javascript | opened | Monorepo with ESLint packages | context-pkg context-workflow scope-configurability scope-dx scope-maintainability scope-quality scope-stability target-pkg-eslint target-pkg-eslint-base type-feature | Currently this repository only contains the actual styleguide documentation while specific projects that implement the guidelines for linters and code style analyzer live in separate repositories. This is the best approach for modularity and a small and clear code base, but it increases the maintenance overhead by 1(n) since changes to the development workflow or toolbox, general project documentations as well as dependency management requires changes in every repository with dedicated tickets/issues and PRs. In particular, Node packages require frequent dependency management due to their fast development cycles to keep up-to-date with the latest package changes like (security) bug fixes.
This styleguide is currently implemented by the [eslint-config-arcticicestudio-base][npm-esl-c-base] and [eslint-config-arcticicestudio][npm-esl-c] Node packages living in their own repositories. The development workflow is clean using most of GitHub's awesome features like project boards, _codeowner_ assignments, issue & PR automation and so on, but changes to one of them often requires actions for the other package too since they are based on each other and they are using the same development tooling and documentation standards.
In order to reduce the maintenance overhead both packages will migrate into this repository using [Yarn workspaces][y-d-ws]. This simplifies the development tooling setup and allows to use a unified documentation base as well as a smoother development and testing workflow.
:construction: This issue is **work in progress** and is still incomplete! :construction:
[npm-esl-c-base]: https://www.npmjs.com/package/eslint-config-arcticicestudio-base
[npm-esl-c]: https://www.npmjs.com/package/eslint-config-arcticicestudio
[y-d-ws]: https://yarnpkg.com/en/docs/workspaces
| True | Monorepo with ESLint packages - Currently this repository only contains the actual styleguide documentation while specific projects that implement the guidelines for linters and code style analyzer live in separate repositories. This is the best approach for modularity and a small and clear code base, but it increases the maintenance overhead by 1(n) since changes to the development workflow or toolbox, general project documentations as well as dependency management requires changes in every repository with dedicated tickets/issues and PRs. In particular, Node packages require frequent dependency management due to their fast development cycles to keep up-to-date with the latest package changes like (security) bug fixes.
This styleguide is currently implemented by the [eslint-config-arcticicestudio-base][npm-esl-c-base] and [eslint-config-arcticicestudio][npm-esl-c] Node packages living in their own repositories. The development workflow is clean using most of GitHub's awesome features like project boards, _codeowner_ assignments, issue & PR automation and so on, but changes to one of them often requires actions for the other package too since they are based on each other and they are using the same development tooling and documentation standards.
In order to reduce the maintenance overhead both packages will migrate into this repository using [Yarn workspaces][y-d-ws]. This simplifies the development tooling setup and allows to use a unified documentation base as well as a smoother development and testing workflow.
:construction: This issue is **work in progress** and is still incomplete! :construction:
[npm-esl-c-base]: https://www.npmjs.com/package/eslint-config-arcticicestudio-base
[npm-esl-c]: https://www.npmjs.com/package/eslint-config-arcticicestudio
[y-d-ws]: https://yarnpkg.com/en/docs/workspaces
| main | monorepo with eslint packages currently this repository only contains the actual styleguide documentation while specific projects that implement the guidelines for linters and code style analyzer live in separate repositories this is the best approach for modularity and a small and clear code base but it increases the maintenance overhead by n since changes to the development workflow or toolbox general project documentations as well as dependency management requires changes in every repository with dedicated tickets issues and prs in particular node packages require frequent dependency management due to their fast development cycles to keep up to date with the latest package changes like security bug fixes this styleguide is currently implemented by the and node packages living in their own repositories the development workflow is clean using most of github s awesome features like project boards codeowner assignments issue pr automation and so on but changes to one of them often requires actions for the other package too since they are based on each other and they are using the same development tooling and documentation standards in order to reduce the maintenance overhead both packages will migrate into this repository using this simplifies the development tooling setup and allows to use a unified documentation base as well as a smoother development and testing workflow construction this issue is work in progress and is still incomplete construction | 1 |
285,182 | 8,755,436,107 | IssuesEvent | 2018-12-14 14:52:35 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | www.humblebundle.com - site is not usable | browser-firefox-mobile priority-important | <!-- @browser: Firefox Mobile 64.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 8.0.0; Mobile; rv:64.0) Gecko/64.0 Firefox/64.0 -->
<!-- @reported_with: mobile-reporter -->
**URL**: https://www.humblebundle.com/
**Browser / Version**: Firefox Mobile 64.0
**Operating System**: Android 8.0.0
**Tested Another Browser**: Yes
**Problem type**: Site is not usable
**Description**: site not usable
**Steps to Reproduce**:
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | www.humblebundle.com - site is not usable - <!-- @browser: Firefox Mobile 64.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 8.0.0; Mobile; rv:64.0) Gecko/64.0 Firefox/64.0 -->
<!-- @reported_with: mobile-reporter -->
**URL**: https://www.humblebundle.com/
**Browser / Version**: Firefox Mobile 64.0
**Operating System**: Android 8.0.0
**Tested Another Browser**: Yes
**Problem type**: Site is not usable
**Description**: site not usable
**Steps to Reproduce**:
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | non_main | site is not usable url browser version firefox mobile operating system android tested another browser yes problem type site is not usable description site not usable steps to reproduce browser configuration none from with ❤️ | 0 |
137,248 | 11,104,197,525 | IssuesEvent | 2019-12-17 06:49:26 | hXtreme/HCP-Project | https://api.github.com/repos/hXtreme/HCP-Project | opened | [Test-Case] Test the search bar -- Location testing | test-case | ### The test case
Test the search bar by searching for a location, fixing or omitting the practice type (not relevant for this test). Ensure that we can successfully obtain a record of a number of local practices.
### Test 1 (failure)
1. Enter a nonsense location name that will definitely not return a valid location, i.e. "dfoigunje"
2. Fetch the results, check that no locations are returned.
### Test 2 (success)
1. Enter a location with a known list of providers, i.e. "Philadelphia", "Los Angeles, CA"... Anything that can return valid info from the Google Maps API.
2. Fetch the results, check that we have a non-empty list of locations.
3. Optionally, check that their contents equal a known set of provider info, to guard against garbage data returns. | 1.0 | [Test-Case] Test the search bar -- Location testing - ### The test case
Test the search bar by searching for a location, fixing or omitting the practice type (not relevant for this test). Ensure that we can successfully obtain a record of a number of local practices.
### Test 1 (failure)
1. Enter a nonsense location name that will definitely not return a valid location, i.e. "dfoigunje"
2. Fetch the results, check that no locations are returned.
### Test 2 (success)
1. Enter a location with a known list of providers, i.e. "Philadelphia", "Los Angeles, CA"... Anything that can return valid info from the Google Maps API.
2. Fetch the results, check that we have a non-empty list of locations.
3. Optionally, check that their contents equal a known set of provider info, to guard against garbage data returns. | non_main | test the search bar location testing the test case test the search bar by searching for a location fixing or omitting the practice type not relevant for this test ensure that we can successfully obtain a record of a number of local practices test failure enter a nonsense location name that will definitely not return a valid location i e dfoigunje fetch the results check that no locations are returned test success enter a location with a known list of providers i e philadelphia los angeles ca anything that can return valid info from the google maps api fetch the results check that we have a non empty list of locations optionally check that their contents equal a known set of provider info to guard against garbage data returns | 0 |
5,427 | 27,237,657,307 | IssuesEvent | 2023-02-21 17:31:07 | centerofci/mathesar | https://api.github.com/repos/centerofci/mathesar | closed | Control access to routes based on user role | work: frontend status: ready restricted: maintainers | - Admin route
- Import upload and preview routes
- Record page route
- Exploration edit route | True | Control access to routes based on user role - - Admin route
- Import upload and preview routes
- Record page route
- Exploration edit route | main | control access to routes based on user role admin route import upload and preview routes record page route exploration edit route | 1 |
9,874 | 6,486,774,641 | IssuesEvent | 2017-08-19 23:09:09 | python/mypy | https://api.github.com/repos/python/mypy | closed | Better messages when type is incompatible due to invariance | topic-usability | Users are often confused by compatibility of invariant collection types. Here is an example from #3351:
```py
from typing import List, Union
def mean(numbers: List[Union[int, float]]) -> float:
return sum(numbers) / len(numbers)
some_numbers = [1, 2, 3, 4]
mean(some_numbers) # Argument 1 to "mean" has incompatible type List[int];
# expected List[Union[int, float]]
```
It might be helpful to add a note with more details, since the message doesn't really provide enough context to start googling for help. Here is one idea:
```
program.py:23:note: "List" is invariant -- see <link to docs>
program.py:23:note: Maybe you can use "Sequence" as the target type, which is covariant?
``` | True | Better messages when type is incompatible due to invariance - Users are often confused by compatibility of invariant collection types. Here is an example from #3351:
```py
from typing import List, Union
def mean(numbers: List[Union[int, float]]) -> float:
return sum(numbers) / len(numbers)
some_numbers = [1, 2, 3, 4]
mean(some_numbers) # Argument 1 to "mean" has incompatible type List[int];
# expected List[Union[int, float]]
```
It might be helpful to add a note with more details, since the message doesn't really provide enough context to start googling for help. Here is one idea:
```
program.py:23:note: "List" is invariant -- see <link to docs>
program.py:23:note: Maybe you can use "Sequence" as the target type, which is covariant?
``` | non_main | better messages when type is incompatible due to invariance users are often confused by compatibility of invariant collection types here is an example from py from typing import list union def mean numbers list float return sum numbers len numbers some numbers mean some numbers argument to mean has incompatible type list expected list it might be helpful to add a note with more details since the message doesn t really provide enough context to start googling for help here is one idea program py note list is invariant see program py note maybe you can use sequence as the target type which is covariant | 0 |
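(Annotation, not part of the dataset record above: the mypy issue suggests `Sequence` as a covariant alternative to the invariant `List`. A minimal illustrative sketch of that fix, adapted from the example in the record itself:)

```python
from typing import List, Sequence, Union

def mean(numbers: Sequence[Union[int, float]]) -> float:
    # Sequence is covariant (and read-only), so a List[int] argument
    # is accepted where Sequence[Union[int, float]] is expected;
    # with List[...] as the parameter type, mypy rejects the same call.
    return sum(numbers) / len(numbers)

some_numbers: List[int] = [1, 2, 3, 4]
print(mean(some_numbers))  # 2.5
```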
14,790 | 9,524,268,539 | IssuesEvent | 2019-04-28 01:30:02 | TheCacophonyProject/cacophonometer | https://api.github.com/repos/TheCacophonyProject/cacophonometer | reopened | Sort groups in group list | good first issue usability | The list of groups shown in the setup wizard is unsorted which makes it hard to find the right one if there's a lot of groups available. | True | Sort groups in group list - The list of groups shown in the setup wizard is unsorted which makes it hard to find the right one if there's a lot of groups available. | non_main | sort groups in group list the list of groups shown in the setup wizard is unsorted which makes it hard to find the right one if there s a lot of groups available | 0 |
5,168 | 26,321,998,332 | IssuesEvent | 2023-01-10 01:02:34 | bazelbuild/intellij | https://api.github.com/repos/bazelbuild/intellij | closed | Bazel build of the plugin fails on a fresh machine | type: bug P2 product: IntelliJ lang: java awaiting-maintainer | ### Description of the bug:
If you try to build the plugin without having java installed and you run
`bazel build --define=ij_product=intellij-2021.2 //ijwb:ijwb_bazel_zip`
You get:
```
ERROR: /private/var/tmp/_bazel_ittaiz/8fcd92dbc3da57be8b1fdbdb0fada9a0/external/bazel_tools/tools/jdk/BUILD:336:14: JavaToolchainCompileBootClasspath external/bazel_tools/tools/jdk/platformclasspath.jar failed: (Exit 1): java failed: error executing command external/remotejdk11_macos/bin/java -XX:+IgnoreUnrecognizedVMOptions '--add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED' '--add-exports=jdk.compiler/com.sun.tools.javac.platform=ALL-UNNAMED' ... (remaining 6 arguments skipped)
Use --sandbox_debug to see verbose messages from the sandbox and retain the sandbox build root for debugging
Exception in thread "main" java.lang.IllegalArgumentException: external/local_jdk
at jdk.compiler/com.sun.tools.javac.file.Locations$SystemModulesLocationHandler.update(Locations.java:1853)
at jdk.compiler/com.sun.tools.javac.file.Locations$SystemModulesLocationHandler.handleOption(Locations.java:1798)
at jdk.compiler/com.sun.tools.javac.file.Locations.handleOption(Locations.java:2062)
at jdk.compiler/com.sun.tools.javac.file.BaseFileManager.handleOption(BaseFileManager.java:269)
at jdk.compiler/com.sun.tools.javac.file.BaseFileManager$2.handleFileManagerOption(BaseFileManager.java:222)
at jdk.compiler/com.sun.tools.javac.main.Option.process(Option.java:1138)
at jdk.compiler/com.sun.tools.javac.main.Option.handleOption(Option.java:1086)
at jdk.compiler/com.sun.tools.javac.file.BaseFileManager.handleOption(BaseFileManager.java:232)
at jdk.compiler/com.sun.tools.javac.main.Arguments.doProcessArgs(Arguments.java:390)
at jdk.compiler/com.sun.tools.javac.main.Arguments.processArgs(Arguments.java:347)
at jdk.compiler/com.sun.tools.javac.main.Arguments.init(Arguments.java:246)
at jdk.compiler/com.sun.tools.javac.api.JavacTool.getTask(JavacTool.java:185)
at DumpPlatformClassPath.dumpJDK9AndNewerBootClassPath(DumpPlatformClassPath.java:106)
at DumpPlatformClassPath.main(DumpPlatformClassPath.java:67)
Target //ijwb:ijwb_bazel_zip failed to build
```
When I ran
`bazel build --define=ij_product=intellij-2021.2 --tool_java_runtime_version=remotejdk_11 --java_runtime_version=remotejdk_11 --java_language_version=8 --tool_java_language_version=11 //ijwb:ijwb_bazel_zip`
then the build passes.
I think that this isn't a problem now because there's an assumption java is installed?
Maybe the plugin should have the above flags in the workspace bazelrc?
### What's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible.
Checkout the plugin on a machine without java installed / on the path
Run `bazel build --define=ij_product=intellij-2021.2 //ijwb:ijwb_bazel_zip`
See the failure
### Which Intellij IDE are you using? Please provide the specific version.
2021.2.4
### What programming languages and tools are you using? Please provide specific versions.
scala 2.12, java 11
### What Bazel plugin version are you using?
v2022.08.09 Stable
### Have you found anything relevant by searching the web?
https://github.com/bazelbuild/bazel/issues/7953
https://github.com/bazelbuild/bazel/issues/7304
### Any other information, logs, or outputs that you want to share?
_No response_ | True | Bazel build of the plugin fails on a fresh machine - ### Description of the bug:
If you try to build the plugin without having java installed and you run
`bazel build --define=ij_product=intellij-2021.2 //ijwb:ijwb_bazel_zip`
You get:
```
ERROR: /private/var/tmp/_bazel_ittaiz/8fcd92dbc3da57be8b1fdbdb0fada9a0/external/bazel_tools/tools/jdk/BUILD:336:14: JavaToolchainCompileBootClasspath external/bazel_tools/tools/jdk/platformclasspath.jar failed: (Exit 1): java failed: error executing command external/remotejdk11_macos/bin/java -XX:+IgnoreUnrecognizedVMOptions '--add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED' '--add-exports=jdk.compiler/com.sun.tools.javac.platform=ALL-UNNAMED' ... (remaining 6 arguments skipped)
Use --sandbox_debug to see verbose messages from the sandbox and retain the sandbox build root for debugging
Exception in thread "main" java.lang.IllegalArgumentException: external/local_jdk
at jdk.compiler/com.sun.tools.javac.file.Locations$SystemModulesLocationHandler.update(Locations.java:1853)
at jdk.compiler/com.sun.tools.javac.file.Locations$SystemModulesLocationHandler.handleOption(Locations.java:1798)
at jdk.compiler/com.sun.tools.javac.file.Locations.handleOption(Locations.java:2062)
at jdk.compiler/com.sun.tools.javac.file.BaseFileManager.handleOption(BaseFileManager.java:269)
at jdk.compiler/com.sun.tools.javac.file.BaseFileManager$2.handleFileManagerOption(BaseFileManager.java:222)
at jdk.compiler/com.sun.tools.javac.main.Option.process(Option.java:1138)
at jdk.compiler/com.sun.tools.javac.main.Option.handleOption(Option.java:1086)
at jdk.compiler/com.sun.tools.javac.file.BaseFileManager.handleOption(BaseFileManager.java:232)
at jdk.compiler/com.sun.tools.javac.main.Arguments.doProcessArgs(Arguments.java:390)
at jdk.compiler/com.sun.tools.javac.main.Arguments.processArgs(Arguments.java:347)
at jdk.compiler/com.sun.tools.javac.main.Arguments.init(Arguments.java:246)
at jdk.compiler/com.sun.tools.javac.api.JavacTool.getTask(JavacTool.java:185)
at DumpPlatformClassPath.dumpJDK9AndNewerBootClassPath(DumpPlatformClassPath.java:106)
at DumpPlatformClassPath.main(DumpPlatformClassPath.java:67)
Target //ijwb:ijwb_bazel_zip failed to build
```
When I ran
`bazel build --define=ij_product=intellij-2021.2 --tool_java_runtime_version=remotejdk_11 --java_runtime_version=remotejdk_11 --java_language_version=8 --tool_java_language_version=11 //ijwb:ijwb_bazel_zip`
then the build passes.
I think that this isn't a problem now because there's an assumption java is installed?
Maybe the plugin should have the above flags in the workspace bazelrc?
### What's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible.
Checkout the plugin on a machine without java installed / on the path
Run `bazel build --define=ij_product=intellij-2021.2 //ijwb:ijwb_bazel_zip`
See the failure
### Which Intellij IDE are you using? Please provide the specific version.
2021.2.4
### What programming languages and tools are you using? Please provide specific versions.
scala 2.12, java 11
### What Bazel plugin version are you using?
v2022.08.09 Stable
### Have you found anything relevant by searching the web?
https://github.com/bazelbuild/bazel/issues/7953
https://github.com/bazelbuild/bazel/issues/7304
### Any other information, logs, or outputs that you want to share?
_No response_ | main | bazel build of the plugin fails on a fresh machine description of the bug if you try to build the plugin without having java installed and you run bazel build define ij product intellij ijwb ijwb bazel zip you get error private var tmp bazel ittaiz external bazel tools tools jdk build javatoolchaincompilebootclasspath external bazel tools tools jdk platformclasspath jar failed exit java failed error executing command external macos bin java xx ignoreunrecognizedvmoptions add exports jdk compiler com sun tools javac api all unnamed add exports jdk compiler com sun tools javac platform all unnamed remaining arguments skipped use sandbox debug to see verbose messages from the sandbox and retain the sandbox build root for debugging exception in thread main java lang illegalargumentexception external local jdk at jdk compiler com sun tools javac file locations systemmoduleslocationhandler update locations java at jdk compiler com sun tools javac file locations systemmoduleslocationhandler handleoption locations java at jdk compiler com sun tools javac file locations handleoption locations java at jdk compiler com sun tools javac file basefilemanager handleoption basefilemanager java at jdk compiler com sun tools javac file basefilemanager handlefilemanageroption basefilemanager java at jdk compiler com sun tools javac main option process option java at jdk compiler com sun tools javac main option handleoption option java at jdk compiler com sun tools javac file basefilemanager handleoption basefilemanager java at jdk compiler com sun tools javac main arguments doprocessargs arguments java at jdk compiler com sun tools javac main arguments processargs arguments java at jdk compiler com sun tools javac main arguments init arguments java at jdk compiler com sun tools javac api javactool gettask javactool java at dumpplatformclasspath dumpplatformclasspath java at dumpplatformclasspath main dumpplatformclasspath java target ijwb ijwb bazel zip failed to build when i ran bazel build define ij product intellij tool java runtime version remotejdk java runtime version remotejdk java language version tool java language version ijwb ijwb bazel zip then the build passes i think that this isn t a problem now because there s an assumption java is installed maybe the plugin should have the above flags in the workspace bazelrc what s the simplest easiest way to reproduce this bug please provide a minimal example if possible checkout the plugin on a machine without java installed on the path run bazel build define ij product intellij ijwb ijwb bazel zip see the failure which intellij ide are you using please provide the specific version what programming languages and tools are you using please provide specific versions scala java what bazel plugin version are you using stable have you found anything relevant by searching the web any other information logs or outputs that you want to share no response | 1
4,428 | 22,843,523,620 | IssuesEvent | 2022-07-13 01:59:49 | DynamoRIO/dynamorio | https://api.github.com/repos/DynamoRIO/dynamorio | opened | Refactor and split drmemtrace tracer.cpp | Type-Feature Maintainability Component-DrCacheSim | Splitting out from https://github.com/DynamoRIO/dynamorio/issues/3995#issuecomment-1063198769
Which came originally from https://github.com/DynamoRIO/dynamorio/pull/5393#discussion_r820264882
tracer.cpp has grown large and complex. The idea is to split it up into separate files and possibly classes or other organization for more modularity and easier maintainability. | True | Refactor and split drmemtrace tracer.cpp - Splitting out from https://github.com/DynamoRIO/dynamorio/issues/3995#issuecomment-1063198769
Which came originally from https://github.com/DynamoRIO/dynamorio/pull/5393#discussion_r820264882
tracer.cpp has grown large and complex. The idea is to split it up into separate files and possibly classes or other organization for more modularity and easier maintainability. | main | refactor and split drmemtrace tracer cpp splitting out from which came originally from tracer cpp has grown large and complex the idea is to split it up into separate files and possibly classes or other organization for more modularity and easier maintainability | 1 |
123,721 | 17,772,304,845 | IssuesEvent | 2021-08-30 14:57:05 | kapseliboi/platform-status | https://api.github.com/repos/kapseliboi/platform-status | opened | CVE-2018-13797 (High) detected in macaddress-0.2.8.tgz | security vulnerability | ## CVE-2018-13797 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>macaddress-0.2.8.tgz</b></p></summary>
<p>Get the MAC addresses (hardware addresses) of the hosts network interfaces.</p>
<p>Library home page: <a href="https://registry.npmjs.org/macaddress/-/macaddress-0.2.8.tgz">https://registry.npmjs.org/macaddress/-/macaddress-0.2.8.tgz</a></p>
<p>Path to dependency file: platform-status/package.json</p>
<p>Path to vulnerable library: platform-status/node_modules/gulp-cssnano/node_modules/macaddress/package.json</p>
<p>
Dependency Hierarchy:
- gulp-cssnano-2.1.2.tgz (Root Library)
- cssnano-3.10.0.tgz
- postcss-filter-plugins-2.0.2.tgz
- uniqid-4.1.1.tgz
- :x: **macaddress-0.2.8.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kapseliboi/platform-status/commit/f1843ccb4f9fa8cac219c196c9bcceb734286e98">f1843ccb4f9fa8cac219c196c9bcceb734286e98</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The macaddress module before 0.2.9 for Node.js is prone to an arbitrary command injection flaw, due to allowing unsanitized input to an exec (rather than execFile) call.
<p>Publish Date: 2018-07-10
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-13797>CVE-2018-13797</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-13797">https://nvd.nist.gov/vuln/detail/CVE-2018-13797</a></p>
<p>Release Date: 2018-07-10</p>
<p>Fix Resolution: 0.2.9</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2018-13797 (High) detected in macaddress-0.2.8.tgz - ## CVE-2018-13797 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>macaddress-0.2.8.tgz</b></p></summary>
<p>Get the MAC addresses (hardware addresses) of the hosts network interfaces.</p>
<p>Library home page: <a href="https://registry.npmjs.org/macaddress/-/macaddress-0.2.8.tgz">https://registry.npmjs.org/macaddress/-/macaddress-0.2.8.tgz</a></p>
<p>Path to dependency file: platform-status/package.json</p>
<p>Path to vulnerable library: platform-status/node_modules/gulp-cssnano/node_modules/macaddress/package.json</p>
<p>
Dependency Hierarchy:
- gulp-cssnano-2.1.2.tgz (Root Library)
- cssnano-3.10.0.tgz
- postcss-filter-plugins-2.0.2.tgz
- uniqid-4.1.1.tgz
- :x: **macaddress-0.2.8.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kapseliboi/platform-status/commit/f1843ccb4f9fa8cac219c196c9bcceb734286e98">f1843ccb4f9fa8cac219c196c9bcceb734286e98</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The macaddress module before 0.2.9 for Node.js is prone to an arbitrary command injection flaw, due to allowing unsanitized input to an exec (rather than execFile) call.
<p>Publish Date: 2018-07-10
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-13797>CVE-2018-13797</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-13797">https://nvd.nist.gov/vuln/detail/CVE-2018-13797</a></p>
<p>Release Date: 2018-07-10</p>
<p>Fix Resolution: 0.2.9</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve high detected in macaddress tgz cve high severity vulnerability vulnerable library macaddress tgz get the mac addresses hardware addresses of the hosts network interfaces library home page a href path to dependency file platform status package json path to vulnerable library platform status node modules gulp cssnano node modules macaddress package json dependency hierarchy gulp cssnano tgz root library cssnano tgz postcss filter plugins tgz uniqid tgz x macaddress tgz vulnerable library found in head commit a href found in base branch master vulnerability details the macaddress module before for node js is prone to an arbitrary command injection flaw due to allowing unsanitized input to an exec rather than execfile call publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
1,462 | 6,362,623,145 | IssuesEvent | 2017-07-31 15:19:48 | chef-cookbooks/omnibus | https://api.github.com/repos/chef-cookbooks/omnibus | closed | omnibus_build: can't modify frozen Hash | Status: Maintainer Review Needed | ### Cookbook version
5.2.0
### Chef-client version
13.1.31
### Platform Details
CentOS/RHEL 7.3
### Scenario:
Converge the omnibus build node using the following recipe:
```ruby
include_recipe 'omnibus::default'
omnibus_build 'myproj' do
project_dir node['omnibus']['build_dir']
log_level :internal
end
```
### Actual Result:
```
Recipe: omnibus-myproj::build_package
* omnibus_build[myproj] action execute
Recipe: <Dynamically Defined Resource>
* directory[/var/cache/omnibus/build/myproj/*.manifest] action delete (up to date)
* directory[/var/cache/omnibus/pkg] action delete (up to date)
* directory[/home/vagrant/omnibus-apica/pkg] action delete (up to date)
* directory[/opt/myproj] action delete
- delete existing directory /opt/myproj
* directory[/var/cache/omnibus] action create (up to date)
* directory[/opt/myproj] action create
- create new directory /opt/myproj
- change owner from '' to 'vagrant'
- restore selinux security context
================================================================================
Error executing action `execute` on resource 'omnibus_build[myproj]'
================================================================================
RuntimeError
------------
can't modify frozen Hash
Cookbook Trace:
---------------
/tmp/kitchen/cache/cookbooks/omnibus/libraries/omnibus_build.rb:153:in `environment'
/tmp/kitchen/cache/cookbooks/omnibus/libraries/omnibus_build.rb:144:in `execute_with_omnibus_toolchain'
/tmp/kitchen/cache/cookbooks/omnibus/libraries/omnibus_build.rb:77:in `block (2 levels) in <class:OmnibusBuild>'
/tmp/kitchen/cache/cookbooks/omnibus/libraries/omnibus_build.rb:74:in `block in <class:OmnibusBuild>'
Resource Declaration:
---------------------
# In /tmp/kitchen/cache/cookbooks/omnibus-myproj/recipes/build_package.rb
7: omnibus_build 'myproj' do
8: project_dir node['omnibus']['build_dir']
9: log_level :internal
10: end
Compiled Resource:
------------------
# Declared in /tmp/kitchen/cache/cookbooks/omnibus-myproj/recipes/build_package.rb:7:in `from_file'
omnibus_build("myproj") do
action [:execute]
default_guard_interpreter :default
declared_type :omnibus_build
cookbook_name "omnibus-myproj"
recipe_name "build_package"
project_dir "/home/vagrant/omnibus-apica"
log_level :internal
base_dir "/var/cache/omnibus"
project_name "myproj"
install_dir "/opt/myproj"
config_file "/home/vagrant/omnibus-apica/omnibus.rb"
end
System Info:
------------
chef_version=13.1.31
platform=centos
platform_version=7.3.1611
ruby=ruby 2.4.1p111 (2017-03-22 revision 58053) [x86_64-linux]
program_name=chef-client worker: ppid=10230;start=10:43:49;
executable=/opt/chef/bin/chef-client
```
| True | omnibus_build: can't modify frozen Hash - ### Cookbook version
5.2.0
### Chef-client version
13.1.31
### Platform Details
CentOS/RHEL 7.3
### Scenario:
Converge the omnibus build node using the following recipe:
```ruby
include_recipe 'omnibus::default'
omnibus_build 'myproj' do
project_dir node['omnibus']['build_dir']
log_level :internal
end
```
### Actual Result:
```
Recipe: omnibus-myproj::build_package
* omnibus_build[myproj] action execute
Recipe: <Dynamically Defined Resource>
* directory[/var/cache/omnibus/build/myproj/*.manifest] action delete (up to date)
* directory[/var/cache/omnibus/pkg] action delete (up to date)
* directory[/home/vagrant/omnibus-apica/pkg] action delete (up to date)
* directory[/opt/myproj] action delete
- delete existing directory /opt/myproj
* directory[/var/cache/omnibus] action create (up to date)
* directory[/opt/myproj] action create
- create new directory /opt/myproj
- change owner from '' to 'vagrant'
- restore selinux security context
================================================================================
Error executing action `execute` on resource 'omnibus_build[myproj]'
================================================================================
RuntimeError
------------
can't modify frozen Hash
Cookbook Trace:
---------------
/tmp/kitchen/cache/cookbooks/omnibus/libraries/omnibus_build.rb:153:in `environment'
/tmp/kitchen/cache/cookbooks/omnibus/libraries/omnibus_build.rb:144:in `execute_with_omnibus_toolchain'
/tmp/kitchen/cache/cookbooks/omnibus/libraries/omnibus_build.rb:77:in `block (2 levels) in <class:OmnibusBuild>'
/tmp/kitchen/cache/cookbooks/omnibus/libraries/omnibus_build.rb:74:in `block in <class:OmnibusBuild>'
Resource Declaration:
---------------------
# In /tmp/kitchen/cache/cookbooks/omnibus-myproj/recipes/build_package.rb
7: omnibus_build 'myproj' do
8: project_dir node['omnibus']['build_dir']
9: log_level :internal
10: end
Compiled Resource:
------------------
# Declared in /tmp/kitchen/cache/cookbooks/omnibus-myproj/recipes/build_package.rb:7:in `from_file'
omnibus_build("myproj") do
action [:execute]
default_guard_interpreter :default
declared_type :omnibus_build
cookbook_name "omnibus-myproj"
recipe_name "build_package"
project_dir "/home/vagrant/omnibus-apica"
log_level :internal
base_dir "/var/cache/omnibus"
project_name "myproj"
install_dir "/opt/myproj"
config_file "/home/vagrant/omnibus-apica/omnibus.rb"
end
System Info:
------------
chef_version=13.1.31
platform=centos
platform_version=7.3.1611
ruby=ruby 2.4.1p111 (2017-03-22 revision 58053) [x86_64-linux]
program_name=chef-client worker: ppid=10230;start=10:43:49;
executable=/opt/chef/bin/chef-client
```
| main | omnibus build can t modify frozen hash cookbook version chef client version platform details centos rhel scenario converge the omnibus build node using the following recipe ruby include recipe omnibus default omnibus build myproj do project dir node log level internal end actual result recipe omnibus myproj build package omnibus build action execute recipe directory action delete up to date directory action delete up to date directory action delete up to date directory action delete delete existing directory opt myproj directory action create up to date directory action create create new directory opt myproj change owner from to vagrant restore selinux security context error executing action execute on resource omnibus build runtimeerror can t modify frozen hash cookbook trace tmp kitchen cache cookbooks omnibus libraries omnibus build rb in environment tmp kitchen cache cookbooks omnibus libraries omnibus build rb in execute with omnibus toolchain tmp kitchen cache cookbooks omnibus libraries omnibus build rb in block levels in tmp kitchen cache cookbooks omnibus libraries omnibus build rb in block in resource declaration in tmp kitchen cache cookbooks omnibus myproj recipes build package rb omnibus build myproj do project dir node log level internal end compiled resource declared in tmp kitchen cache cookbooks omnibus myproj recipes build package rb in from file omnibus build myproj do action default guard interpreter default declared type omnibus build cookbook name omnibus myproj recipe name build package project dir home vagrant omnibus apica log level internal base dir var cache omnibus project name myproj install dir opt myproj config file home vagrant omnibus apica omnibus rb end system info chef version platform centos platform version ruby ruby revision program name chef client worker ppid start executable opt chef bin chef client | 1 |
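For context on the record above, here is a minimal Ruby reproduction of the `can't modify frozen Hash` failure, together with the usual workaround. The key name and the `dup`-based fix are illustrative assumptions for the sketch, not the omnibus cookbook's actual code:

```ruby
# A frozen hash raises when mutated in place (FrozenError on Ruby >= 2.5,
# RuntimeError on 2.4 -- FrozenError subclasses RuntimeError either way).
env = { "PATH" => "/usr/bin" }.freeze

begin
  env["GIT_SSH"] = "/tmp/wrapper"  # mutation of a frozen Hash
rescue RuntimeError
  puts "rescued"                   # => rescued
end

# Common fix: mutate an unfrozen copy instead. Hash#dup does not copy
# the frozen state, so the copy is writable.
patched = env.dup
patched["GIT_SSH"] = "/tmp/wrapper"
puts patched["GIT_SSH"]            # => /tmp/wrapper
```

This mirrors the stack trace in the record (`environment` mutating a hash it did not own); the real fix in such code is to build a fresh hash rather than writing into one handed in by the caller.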
5,634 | 28,297,369,328 | IssuesEvent | 2023-04-10 00:06:10 | NIAEFEUP/website-niaefeup-frontend | https://api.github.com/repos/NIAEFEUP/website-niaefeup-frontend | opened | global: restructure project files | maintainability | Just as explained in [this comment](https://github.com/NIAEFEUP/website-niaefeup-frontend/pull/38#issuecomment-1497964997), we should rethink how the project files are structured. There are currently inconsistencies and there isn't a standard to follow.
My suggestion is to follow the approach mentioned in the comments above and discuss further with the team | True | global: restructure project files - Just as explained in [this comment](https://github.com/NIAEFEUP/website-niaefeup-frontend/pull/38#issuecomment-1497964997), we should rethink how the project files are structured. There are currently inconsistencies and there isn't a standard to follow.
My suggestion is to follow the approach mentioned in the comments above and discuss further with the team | main | global restructure project files just as explained in we should rethink how the project files are structured there are currently inconsistencies and there isn t a standard to follow my suggestion is to follow the approach mentioned in the comments above and discuss further with the team | 1 |
339,575 | 30,456,837,297 | IssuesEvent | 2023-07-17 00:44:15 | TaleStation/TaleStation | https://api.github.com/repos/TaleStation/TaleStation | opened | Flaky test create_and_destroy: /obj/item/bodypart/arm/right/golem hard deleted 1 times out of a total del count of 8 | 🤖 Flaky Test Report | <!-- This issue can be renamed, but do not change the next comment! -->
<!-- title: Flaky test create_and_destroy: /obj/item/bodypart/arm/right/golem hard deleted 1 times out of a total del count of 8 -->
Flaky tests were detected in [this test run](https://github.com/TaleStation/TaleStation/actions/runs/5570461760/attempts/1). This means that there was a failure that was cleared when the tests were simply restarted.
Failures:
```
create_and_destroy: /obj/item/bodypart/arm/right/golem hard deleted 1 times out of a total del count of 8 at code/modules/unit_tests/create_and_destroy.dm:198
```
| 1.0 | Flaky test create_and_destroy: /obj/item/bodypart/arm/right/golem hard deleted 1 times out of a total del count of 8 - <!-- This issue can be renamed, but do not change the next comment! -->
<!-- title: Flaky test create_and_destroy: /obj/item/bodypart/arm/right/golem hard deleted 1 times out of a total del count of 8 -->
Flaky tests were detected in [this test run](https://github.com/TaleStation/TaleStation/actions/runs/5570461760/attempts/1). This means that there was a failure that was cleared when the tests were simply restarted.
Failures:
```
create_and_destroy: /obj/item/bodypart/arm/right/golem hard deleted 1 times out of a total del count of 8 at code/modules/unit_tests/create_and_destroy.dm:198
```
| non_main | flaky test create and destroy obj item bodypart arm right golem hard deleted times out of a total del count of flaky tests were detected in this means that there was a failure that was cleared when the tests were simply restarted failures create and destroy obj item bodypart arm right golem hard deleted times out of a total del count of at code modules unit tests create and destroy dm | 0 |
599,544 | 18,276,664,792 | IssuesEvent | 2021-10-04 19:42:44 | web-illinois/illinois_framework_theme | https://api.github.com/repos/web-illinois/illinois_framework_theme | closed | Galleria for Flickr Galleries | enhancement priority | Add in Galleria for flickr galleries.
We will use the folio one for the animal sciences web galleries.
@lizshalley - please put each gallery link here for Bill for the facilities pages and anywhere else we may need one.
Thanks. | 1.0 | Galleria for Flickr Galleries - Add in Galleria for flickr galleries.
We will use the folio one for the animal sciences web galleries.
@lizshalley - please put each gallery link here for Bill for the facilities pages and anywhere else we may need one.
Thanks. | non_main | galleria for flickr galleries add in galleria for flickr galleries we will use the folio one for the animal sciences web galleries lizshalley please put each gallery link here for bill for the facilities pages and anywhere else we may need one thanks | 0 |
259,322 | 22,466,487,720 | IssuesEvent | 2022-06-22 02:33:48 | rubyforgood/casa | https://api.github.com/repos/rubyforgood/casa | opened | Add test for controllers/case_court_reports_controller.rb | testing | Thank you for working on CASA!
To complete this issue:
1. open `controllers/case_court_reports_controller.rb`
1. make a new test file: controllers/case_court_reports_controller_spec.rb
1. add at least one test for the functionality in controllers/case_court_reports_controller.rb!
1. remove the line controllers/case_court_reports_controller.rb from `.allow_skipping_tests`
This will improve our test coverage and make our code safer to modify and easier to understand.
Example:
```
Before:
require "rails_helper"
RSpec.describe CaseCourtReportsController do
it "adds the numbers" do
expect(described_class.new(1, 1)).to eq(2)
end
end
```
### Questions? Join Slack!
We highly recommend that you join us in slack https://rubyforgood.herokuapp.com/ #casa channel to ask questions quickly and hear about office hours (currently Tuesday 6-8pm Pacific), stakeholder news, and upcoming new issues.
| 1.0 | Add test for controllers/case_court_reports_controller.rb - Thank you for working on CASA!
To complete this issue:
1. open `controllers/case_court_reports_controller.rb`
1. make a new test file: controllers/case_court_reports_controller_spec.rb
1. add at least one test for the functionality in controllers/case_court_reports_controller.rb!
1. remove the line controllers/case_court_reports_controller.rb from `.allow_skipping_tests`
This will improve our test coverage and make our code safer to modify and easier to understand.
Example:
```
Before:
require "rails_helper"
RSpec.describe CaseCourtReportsController do
it "adds the numbers" do
expect(described_class.new(1, 1)).to eq(2)
end
end
```
### Questions? Join Slack!
We highly recommend that you join us in slack https://rubyforgood.herokuapp.com/ #casa channel to ask questions quickly and hear about office hours (currently Tuesday 6-8pm Pacific), stakeholder news, and upcoming new issues.
| non_main | add test for controllers case court reports controller rb thank you for working on casa to complete this issue open controllers case court reports controller rb make a new test file controllers case court reports controller spec rb add at least one test for the functionality in controllers case court reports controller rb remove the line controllers case court reports controller rb from allow skipping tests this will improve our test coverage and make our code safer to modify and easier to understand example before require rails helper rspec describe casecourtreportscontroller do it adds the numbers do expect described class new to eq end end questions join slack we highly recommend that you join us in slack casa channel to ask questions quickly and hear about office hours currently tuesday pacific stakeholder news and upcoming new issues | 0 |
945 | 4,674,946,702 | IssuesEvent | 2016-10-07 04:41:11 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | Template in block is not running filters properly | affects_2.3 bug_report waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
blockinfile module
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.3.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
Centos 6.7
##### SUMMARY
<!--- Explain the problem briefly -->
When using a template as block, if the template uses a filter regex_replace, the filter is not working as expected. The second argument to regex_replace has to be set as \1 instead of \\1 to get it working.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes below -->
```
play.yml
---
# vim: set ft=ansible:
- name: play test
hosts: localhost
gather_facts: no
vars:
url: "http://wibble.wobbble.com:9091/some/path?param=value¶m2=othervalue"
tasks:
- name: abcd
blockinfile:
dest: temp
marker: "#<!-- {mark} ANSIBLE MANAGED BLOCK erp-ssl -->"
insertafter: "#erp-ssl"
state: present
block: |
RewriteCond %{HTTP_HOST} ={{ url | regex_replace('(?:https?://)?([^/:]+)?.*', '\\1') }}
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
file temp should contain :
```
#<!-- BEGIN ANSIBLE MANAGED BLOCK erp-ssl -->
RewriteCond %{HTTP_HOST} =wibble.wobbble.com
#<!-- END ANSIBLE MANAGED BLOCK erp-ssl -->
```
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with high verbosity (-vvvv) -->
```
#<!-- BEGIN ANSIBLE MANAGED BLOCK erp-ssl -->
RewriteCond %{HTTP_HOST} =\1
#<!-- END ANSIBLE MANAGED BLOCK erp-ssl -->
```
<!--- Paste verbatim command output between quotes below -->
```
```
| True | Template in block is not running filters properly - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
blockinfile module
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.3.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
Centos 6.7
##### SUMMARY
<!--- Explain the problem briefly -->
When using a template as block, if the template uses a filter regex_replace, the filter is not working as expected. The second argument to regex_replace has to be set as \1 instead of \\1 to get it working.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes below -->
```
play.yml
---
# vim: set ft=ansible:
- name: play test
hosts: localhost
gather_facts: no
vars:
url: "http://wibble.wobbble.com:9091/some/path?param=value¶m2=othervalue"
tasks:
- name: abcd
blockinfile:
dest: temp
marker: "#<!-- {mark} ANSIBLE MANAGED BLOCK erp-ssl -->"
insertafter: "#erp-ssl"
state: present
block: |
RewriteCond %{HTTP_HOST} ={{ url | regex_replace('(?:https?://)?([^/:]+)?.*', '\\1') }}
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
file temp should contain :
```
#<!-- BEGIN ANSIBLE MANAGED BLOCK erp-ssl -->
RewriteCond %{HTTP_HOST} =wibble.wobbble.com
#<!-- END ANSIBLE MANAGED BLOCK erp-ssl -->
```
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with high verbosity (-vvvv) -->
```
#<!-- BEGIN ANSIBLE MANAGED BLOCK erp-ssl -->
RewriteCond %{HTTP_HOST} =\1
#<!-- END ANSIBLE MANAGED BLOCK erp-ssl -->
```
<!--- Paste verbatim command output between quotes below -->
```
```
| main | template in block is not running filters properly issue type bug report component name blockinfile module ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific centos summary when using a template as block if the template uses a filter regex replace the filter is not working as expected the second argument to regex replace has to be set as instead of to get it working steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used play yml vim set ft ansible name play test hosts localhost gather facts no vars url tasks name abcd blockinfile dest temp marker insertafter erp ssl state present block rewritecond http host url regex replace https expected results file temp should contain rewritecond http host wibble wobbble com actual results rewritecond http host | 1 |
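The intent of the `regex_replace` filter in the record above can be checked outside of Jinja2/YAML quoting with plain Ruby — a sketch only, with `String#sub` standing in for the Ansible filter (the escaping quirk reported in the issue lives in the YAML/Jinja layer, not in the regex itself):

```ruby
# Same pattern and backreference as the issue's expected result:
# strip an optional scheme, keep only the host part of the URL.
url  = "http://wibble.wobbble.com:9091/some/path?param=value&param2=othervalue"
host = url.sub(%r{(?:https?://)?([^/:]+)?.*}, '\1')
puts host  # => wibble.wobbble.com
```

In a plain regex engine a single `\1` backreference is all that is needed; the issue's `\\1`-vs-`\1` discrepancy comes from how many times the template/YAML layers unescape the backslash before the filter sees it.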
4,833 | 24,912,104,906 | IssuesEvent | 2022-10-30 00:51:14 | chocolatey-community/chocolatey-package-requests | https://api.github.com/repos/chocolatey-community/chocolatey-package-requests | closed | RFM - cura-lulzbot | Status: Available For Maintainer(s) | ## Current Maintainer
- [x] I am the maintainer of the package and wish to pass it to someone else;
## Checklist
- [x] Issue title starts with 'RFM - '
## Existing Package Details
Package URL: https://chocolatey.org/packages/cura-lulzbot
Package source URL: https://github.com/jtcmedia/chocolatey-packages/tree/master/cura-lulzbot
| True | RFM - cura-lulzbot - ## Current Maintainer
- [x] I am the maintainer of the package and wish to pass it to someone else;
## Checklist
- [x] Issue title starts with 'RFM - '
## Existing Package Details
Package URL: https://chocolatey.org/packages/cura-lulzbot
Package source URL: https://github.com/jtcmedia/chocolatey-packages/tree/master/cura-lulzbot
| main | rfm cura lulzbot current maintainer i am the maintainer of the package and wish to pass it to someone else checklist issue title starts with rfm existing package details package url package source url | 1 |
5,518 | 27,596,717,054 | IssuesEvent | 2023-03-09 07:04:34 | MozillaFoundation/foundation.mozilla.org | https://api.github.com/repos/MozillaFoundation/foundation.mozilla.org | closed | Move foundation.mozilla.org to different organization | engineering devops maintain | Need to coordinate with cknowles to do this. Needed to bring back heroku review apps! | True | Move foundation.mozilla.org to different organization - Need to coordinate with cknowles to do this. Needed to bring back heroku review apps! | main | move foundation mozilla org to different organization need to coordinate with cknowles to do this needed to bring back heroku review apps | 1 |
266,988 | 23,271,530,092 | IssuesEvent | 2022-08-05 00:00:58 | microsoft/vscode | https://api.github.com/repos/microsoft/vscode | closed | webview - webviews should be able to send and receive messages | integration-test-failure | https://monacotools.visualstudio.com/DefaultCollection/Monaco/_build/results?buildId=172382&view=logs&j=306b2268-2a17-5e97-ab97-f41a26dc5206&t=f99521cb-b554-56f1-45de-e2bc176b0ad3&l=406
```
1 failing
1) vscode API - webview
webviews should be able to send and receive messages:
Error: Timeout of 60000ms exceeded. For async tests and hooks, ensure "done()" is called; if returning a Promise, ensure it resolves. (D:\a\_work\1\s\extensions\vscode-api-tests\out\singlefolder-tests\webview.test.js)
at listOnTimeout (node:internal/timers:557:17)
``` | 1.0 | webview - webviews should be able to send and receive messages - https://monacotools.visualstudio.com/DefaultCollection/Monaco/_build/results?buildId=172382&view=logs&j=306b2268-2a17-5e97-ab97-f41a26dc5206&t=f99521cb-b554-56f1-45de-e2bc176b0ad3&l=406
```
1 failing
1) vscode API - webview
webviews should be able to send and receive messages:
Error: Timeout of 60000ms exceeded. For async tests and hooks, ensure "done()" is called; if returning a Promise, ensure it resolves. (D:\a\_work\1\s\extensions\vscode-api-tests\out\singlefolder-tests\webview.test.js)
at listOnTimeout (node:internal/timers:557:17)
``` | non_main | webview webviews should be able to send and receive messages failing vscode api webview webviews should be able to send and receive messages error timeout of exceeded for async tests and hooks ensure done is called if returning a promise ensure it resolves d a work s extensions vscode api tests out singlefolder tests webview test js at listontimeout node internal timers | 0 |
1,963 | 6,690,043,580 | IssuesEvent | 2017-10-09 07:18:22 | caskroom/homebrew-cask | https://api.github.com/repos/caskroom/homebrew-cask | closed | Pull requests that only update the appcast `checkpoint` | awaiting maintainer feedback discussion meta travis | I realise contributors may be submitting PRs for `checkpoint` updates as part of scripts they have to automate the update process but I don't see any value in a PR that only updates the `checkpoint`.
With Travis CI wait times getting longer I also think it is a bad use of CI time to be running builds that only update the `checkpoint`.
/cc @vitorgalvao | True | Pull requests that only update the appcast `checkpoint` - I realise contributors may be submitting PRs for `checkpoint` updates as part of scripts they have to automate the update process but I don't see any value in a PR that only updates the `checkpoint`.
With Travis CI wait times getting longer I also think it is a bad use of CI time to be running builds that only update the `checkpoint`.
/cc @vitorgalvao | main | pull requests that only update the appcast checkpoint i realise contributors may be submitting prs for checkpoint updates as part of scripts they have to automate the update process but i don t see any value in a pr that only updates the checkpoint with travis ci wait times getting longer i also think it is a bad use of ci time to be running builds that only update the checkpoint cc vitorgalvao | 1 |
678,987 | 23,218,249,891 | IssuesEvent | 2022-08-02 15:44:23 | 5ergiu/deals | https://api.github.com/repos/5ergiu/deals | opened | CRUD Stores | Feature High Priority Backend | CRUD for stores:
- name -> required
- details -> nullable
- status -> a store can be active, disabled, deleted, draft.
- store_category_id -> required -> foreign key for store_categories
- image -> nullable (if no image, use a default one)
- home_page_url (ex: emag.ro)
Note: only admins can add/edit/delete stores. | 1.0 | CRUD Stores - CRUD for stores:
- name -> required
- details -> nullable
- status -> a store can be active, disabled, deleted, draft.
- store_category_id -> required -> foreign key for store_categories
- image -> nullable (if no image, use a default one)
- home_page_url (ex: emag.ro)
Note: only admins can add/edit/delete stores. | non_main | crud stores crud for stores name required details nullable status a store can be active disabled deleted draft store category id required foreign key for store categories image nullable if no image use a default one home page url ex emag ro note only admins can add edit delete stores | 0 |
1,739 | 6,574,877,277 | IssuesEvent | 2017-09-11 14:22:05 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | docker_container networks not working as expected (viz. mac addresses) | affects_2.1 bug_report cloud docker waiting_on_maintainer | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
docker_container
##### ANSIBLE VERSION
```
ansible 2.1.2.0
config file =
configured module search path = Default w/o overrides
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### SUMMARY
[Relevant Docker docs](https://docs.docker.com/engine/reference/run/#/network-settings). Should be able to set mac address for the connected networks (I have my container connected to two networks, but need its mac address set in one of them. Using the `mac_address` configuration option only sets the address in the default bridge network, not in my custom network.
##### STEPS TO REPRODUCE
Try to set mac address on container, fail to do so.
```
- name: start container
become: false
docker_container:
name: blah
state: started
restart_policy: always
image: "{{ blah.image }}"
env: "{{ blah.environment }}"
log_opt: "{{ log_opts }}"
volumes: "{{ blah.volumes }}"
ports: "{{ blah.ports }}"
mac_address: "{{ mac_address.stdout }}"
networks:
- name: "{{ custom_network_name }}"
ipv6_address: "{{ blah_ipv6_address }}"
driver: bridge
```
##### EXPECTED RESULTS
I would expect the custom network to have the mac address specified.
##### ACTUAL RESULTS
Container gets two networks, one being the default bridge, the other being the custom network. Only the former gets the custom mac address.
| True | docker_container networks not working as expected (viz. mac addresses) - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
docker_container
##### ANSIBLE VERSION
```
ansible 2.1.2.0
config file =
configured module search path = Default w/o overrides
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### SUMMARY
[Relevant Docker docs](https://docs.docker.com/engine/reference/run/#/network-settings). Should be able to set mac address for the connected networks (I have my container connected to two networks, but need its mac address set in one of them. Using the `mac_address` configuration option only sets the address in the default bridge network, not in my custom network.
##### STEPS TO REPRODUCE
Try to set mac address on container, fail to do so.
```
- name: start container
become: false
docker_container:
name: blah
state: started
restart_policy: always
image: "{{ blah.image }}"
env: "{{ blah.environment }}"
log_opt: "{{ log_opts }}"
volumes: "{{ blah.volumes }}"
ports: "{{ blah.ports }}"
mac_address: "{{ mac_address.stdout }}"
networks:
- name: "{{ custom_network_name }}"
ipv6_address: "{{ blah_ipv6_address }}"
driver: bridge
```
##### EXPECTED RESULTS
I would expect the custom network to have the mac address specified.
##### ACTUAL RESULTS
Container gets two networks, one being the default bridge, the other being the custom network. Only the former gets the custom mac address.
| main | docker container networks not working as expected viz mac addresses issue type bug report component name docker container ansible version ansible config file configured module search path default w o overrides configuration n a os environment n a summary should be able to set mac address for the connected networks i have my container connected to two networks but need its mac address set in one of them using the mac address configuration option only sets the address in the default bridge network not in my custom network steps to reproduce try to set mac address on container fail to do so name start container become false docker container name blah state started restart policy always image blah image env blah environment log opt log opts volumes blah volumes ports blah ports mac address mac address stdout networks name custom network name address blah address driver bridge expected results i would expect the custom network to have the mac address specified actual results container gets two networks one being the default bridge the other being the custom network only the former gets the custom mac address | 1 |
276,323 | 30,445,186,083 | IssuesEvent | 2023-07-15 15:16:36 | hinoshiba/news | https://api.github.com/repos/hinoshiba/news | closed | [SecurityWeek] 3 Tax Prep Firms Shared ‘Extraordinarily Sensitive’ Data About Taxpayers With Meta, Lawmakers Say | SecurityWeek Stale |
A group of congressional Democrats reported that three large tax preparation firms sent “extraordinarily sensitive” information on tens of millions of taxpayers to Facebook parent company Meta over the course of at least two years.
The post [3 Tax Prep Firms Shared ‘Extraordinarily Sensitive’ Data About Taxpayers With Meta, Lawmakers Say](https://www.securityweek.com/3-tax-prep-firms-shared-extraordinarily-sensitive-data-about-taxpayers-with-meta-lawmakers-say/) appeared first on [SecurityWeek](https://www.securityweek.com).
<https://www.securityweek.com/3-tax-prep-firms-shared-extraordinarily-sensitive-data-about-taxpayers-with-meta-lawmakers-say/>
| True | [SecurityWeek] 3 Tax Prep Firms Shared ‘Extraordinarily Sensitive’ Data About Taxpayers With Meta, Lawmakers Say -
A group of congressional Democrats reported that three large tax preparation firms sent “extraordinarily sensitive” information on tens of millions of taxpayers to Facebook parent company Meta over the course of at least two years.
The post [3 Tax Prep Firms Shared ‘Extraordinarily Sensitive’ Data About Taxpayers With Meta, Lawmakers Say](https://www.securityweek.com/3-tax-prep-firms-shared-extraordinarily-sensitive-data-about-taxpayers-with-meta-lawmakers-say/) appeared first on [SecurityWeek](https://www.securityweek.com).
<https://www.securityweek.com/3-tax-prep-firms-shared-extraordinarily-sensitive-data-about-taxpayers-with-meta-lawmakers-say/>
| non_main | tax prep firms shared ‘extraordinarily sensitive’ data about taxpayers with meta lawmakers say a group of congressional democrats reported that three large tax preparation firms sent “extraordinarily sensitive” information on tens of millions of taxpayers to facebook parent company meta over the course of at least two years the post appeared first on | 0 |
1,878 | 6,577,505,211 | IssuesEvent | 2017-09-12 01:22:59 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | Support for Shell Commands in Check Mode | affects_2.1 feature_idea waiting_on_maintainer | ##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
command.py
##### ANSIBLE VERSION
```
ansible 2.1.0 (devel 5fdac707fd) last updated 2016/03/29 10:45:16 (GMT +000)
lib/ansible/modules/core: (detached HEAD 0268864211) last updated 2016/03/29 10:45:34 (GMT +000)
lib/ansible/modules/extras: (detached HEAD 6978984244) last updated 2016/03/29 10:45:53 (GMT +000)
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### SUMMARY
Ansible has a feature known as check mode. I have a requirement whereby when this mode is run all shell commands would be logged as they would be run during a real deployment. Currently the shell module does not support check mode and just skips the task.
I got around this by forking the command.py to allow check mode to run and log the command on the remote host to a temporary file
[command.py](https://gist.github.com/philltomlinson/46796d5759c6f78180e62857df6ed3e7) (with comments "change here").
As the playbook ends I then collect this file from each host using the fetch.py, by removing the if condition in check mode so that it runs and collects the remote command file
[fetch.py](https://gist.github.com/philltomlinson/0f729aa9a37c492852d8dc857f990ad8) (this file is in the main ansible repo).
However this was a quick fix in order to solve the problem I had. I wondered if:
1. This would be a feature that would be useful to others?
2. Is there a better level at which this change could be made (e.g. at the module execution level)?
I put the change in the command.py module directly so I would know the host that the command was run on and any ansible variables passed to the shell command have already been correctly substituted.
##### STEPS TO REPRODUCE
Run Ansible in check mode with a shell task. Check remote host for command file under /tmp; however, the current gist will need an environment variable set to (for example):
export command_file_name="check-mode-commands"
##### EXPECTED RESULTS
This will produce files in the expected format, with three columns, datetime, hostname and command that would have run. These files will be on the remote hosts filesystem which we then collect.
```
09:37:31.305791 host echo "hello"
09:37:38.549812 host echo "hello again"
```
##### ACTUAL RESULTS
N/A
| True | Support for Shell Commands in Check Mode - ##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
command.py
##### ANSIBLE VERSION
```
ansible 2.1.0 (devel 5fdac707fd) last updated 2016/03/29 10:45:16 (GMT +000)
lib/ansible/modules/core: (detached HEAD 0268864211) last updated 2016/03/29 10:45:34 (GMT +000)
lib/ansible/modules/extras: (detached HEAD 6978984244) last updated 2016/03/29 10:45:53 (GMT +000)
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### SUMMARY
Ansible has a feature known as check mode. I have a requirement whereby when this mode is run all shell commands would be logged as they would be run during a real deployment. Currently the shell module does not support check mode and just skips the task.
I got around this by forking the command.py to allow check mode to run and log the command on the remote host to a temporary file
[command.py](https://gist.github.com/philltomlinson/46796d5759c6f78180e62857df6ed3e7) (with comments "change here").
As the playbook ends I then collect this file from each host using the fetch.py, by removing the if condition in check mode so that it runs and collects the remote command file
[fetch.py](https://gist.github.com/philltomlinson/0f729aa9a37c492852d8dc857f990ad8) (this file is in the main ansible repo).
However this was a quick fix in order to solve the problem I had. I wondered if:
1. This would be a feature that would be useful to others?
2. Is there a better level at which this change could be made (e.g. at the module execution level)?
I put the change in the command.py module directly so I would know the host that the command was run on and any ansible variables passed to the shell command have already been correctly substituted.
##### STEPS TO REPRODUCE
Run Ansible in check mode with a shell task. Check remote host for command file under /tmp; however, the current gist will need an environment variable set to (for example):
export command_file_name="check-mode-commands"
##### EXPECTED RESULTS
This will produce files in the expected format, with three columns, datetime, hostname and command that would have run. These files will be on the remote hosts filesystem which we then collect.
```
09:37:31.305791 host echo "hello"
09:37:38.549812 host echo "hello again"
```
##### ACTUAL RESULTS
N/A
| main | support for shell commands in check mode issue type feature idea component name command py ansible version ansible devel last updated gmt lib ansible modules core detached head last updated gmt lib ansible modules extras detached head last updated gmt configuration n a os environment n a summary ansible has a feature known as check mode i have a requirement whereby when this mode is run all shell commands would be logged as they would be run during a real deployment currently the shell module does not support check mode and just skips the task i got around this by forking the command py to allow check mode to run and log the command on the remote host to a temporary file with comments change here as the playbook ends i then collect this file from each host using the fetch py by removing the if condition in check mode so that it runs and collects the remote command file this file is in the main ansible repo however this was a quick fix in order to solve the problem i had i wondered if this would be a feature that would be useful to others is there a better level this change could be made at the module execution level i put the change in the command py module directly so i would know the host that the command was run on and any ansible variables passed to the shell command have already been correctly substituted steps to reproduce run ansible in check mode with a shell task check remote host for command file under tmp however the current gist will need a environment variable set to for example export command file name check mode commands expected results this will produce files in the expected format with three columns datetime hostname and command that would have run these files will be on the remote hosts filesystem which we then collect host echo hello host echo hello again actual results n a | 1 |
9,976 | 8,300,403,068 | IssuesEvent | 2018-09-21 07:59:27 | dotnet/corefx | https://api.github.com/repos/dotnet/corefx | closed | System.Net.Sockets ,Version=0.0.0.0 in net 4.6.1 environment. | area-Infrastructure needs more info question | The application is built for .NET 4.6.1. It uses a NuGet package built on .NET Standard 2.0, which makes use of another NuGet package using System.Net.Sockets, ver 4.3.2.
The fusion log indicates failure to resolve System.Net.Sockets, Version=0.0.0.0.
The way the system is built, it's not possible to use an app.config file.
1. If the consuming application of the NuGet package is set to the .NET Core environment, the issue is not observed.
2. If the NuGet package source is run as an executable set to .NET 4.6.1 instead of as a library, the problem is not observed.
3. On systems where the Visual Studio development environment and NuGet environment are set up, the fusion log does indicate run-time loading of System.Net.Sockets.
It is observed only on the test systems (Windows 2012 R2, with Visual Studio runtimes)
1.0 | System.Net.Sockets ,Version=0.0.0.0 in net 4.6.1 environment. - The application is built for .NET 4.6.1. It uses a NuGet package built on .NET Standard 2.0, which makes use of another NuGet package using System.Net.Sockets, ver 4.3.2.
The fusion log indicates failure to resolve System.Net.Sockets, Version=0.0.0.0.
The way the system is built, it's not possible to use an app.config file.
1. If the consuming application of the NuGet package is set to the .NET Core environment, the issue is not observed.
2. If the NuGet package source is run as an executable set to .NET 4.6.1 instead of as a library, the problem is not observed.
3. On systems where the Visual Studio development environment and NuGet environment are set up, the fusion log does indicate run-time loading of System.Net.Sockets.
It is observed only on the test systems (Windows 2012 R2, with Visual Studio runtimes)
| non_main | system net sockets version in net environment the application is built for in net it uses a nugget package built on the net standard with makes use of another nugget package using the system net sockets ver the fusion log indicates failure to resolve system net sockets version the way the system is built its not possible to use app config file the consuming application of the nugget package if it set to net core environment the issue is not observed the nugget package source instead of a library if it run as a executable set to net the problem is not observed in the systems where visual studio development environment and nuget environment is set the fusion log does indicate run time loading of the system net sockets it is observed only in the test systems windows s and with visual studio run times | 0 |
1,155 | 5,040,955,647 | IssuesEvent | 2016-12-19 08:30:35 | Optiboot/optiboot | https://api.github.com/repos/Optiboot/optiboot | closed | Use STK related defines instead of hardcoded values. | Maintainability Type-Other Type-Patch | In optiboot.c there are 2 places where 'if' statements compare against some cryptic values (0x81 and 0x82).
There are standard STK defined values (but absent in shortened Optiboot's stk500.h), so I think for consistency and code readability it should be changed to names/defines.
Patch in pull request #157.
True | Use STK related defines instead of hardcoded values. - In optiboot.c there are 2 places where 'if' statements compare against some cryptic values (0x81 and 0x82).
There are standard STK defined values (but absent in shortened Optiboot's stk500.h), so I think for consistency and code readability it should be changed to names/defines.
Patch in pull request #157.
| main | use stk related defines instead of hardcoded values in optiboot c there are places where if compare some cryptic values and there are standard stk defined values but absent in shortened optiboot s h so i think for consistency and code readability it should be changed to names defines patch in pull request | 1 |
1,940 | 6,623,391,077 | IssuesEvent | 2017-09-22 06:56:42 | chef/inspec | https://api.github.com/repos/chef/inspec | opened | InSpec `check` and `only_if` | Pending Maintainers Review | We currently execute all code in `only_if` statements during the `inspec check` operation. I have a feeling there may be little value in doing so, as the statement doesn't provide any useful input to the check command. It will execute the code in the block but it may also lead to executing code that doesn't fully compute (e.g. `nil` values because it doesn't talk to a target system). I wonder if we should ignore this block during the execution of `inspec check` to potentially prevent errors with dynamic computations.
On the upside, using `inspec check` in this case also catches potential errors when values aren't computed with the right data dynamically and may lead to better code in the long run. As a downside, users have to actually understand what is going on that is causing the failure and how to solve it, otherwise it only gets frustrating. A failure with dynamic computations at the moment (due to the nature of not catching raised failures) will lead `inspec check` to completely fail. It may be that a combination of catching failures and allowing the user to ignore the error may lead to relief as well. | True | InSpec `check` and `only_if` - We currently execute all code in `only_if` statements during the `inspec check` operation. I have a feeling there may be little value in doing so, as the statement doesn't provide any useful input to the check command. It will execute the code in the block but it may also lead to executing code that doesn't fully compute (e.g. `nil` values because it doesn't talk to a target system). I wonder if we should ignore this block during the execution of `inspec check` to potentially prevent errors with dynamic computations.
On the upside, using `inspec check` in this case also catches potential errors when values aren't computed with the right data dynamically and may lead to better code in the long run. As a downside, users have to actually understand what is going on that is causing the failure and how to solve it, otherwise it only gets frustrating. A failure with dynamic computations at the moment (due to the nature of not catching raised failures) will lead `inspec check` to completely fail. It may be that a combination of catching failures and allowing the user to ignore the error may lead to relief as well. | main | inspec check and only if we currently execute all code in only if statements during the inspec check operation i have a feeling there may be little value in doing so as the statement doesn t provide any useful input to the check command it will execute the code in the block but it may also lead to executing code that doesn t fully compute e g nil values because it doesn t talk to a target system i wonder if we should ignore this block during the execution of inspec check to potentially prevent errors with dynamic computations on the upside using inspec check in this case also catches potential errors when values aren t computed with the right data dynamically and may lead to better code in the long run as a downside users have to actually understand what is going on that is causing the failure and how to solve it otherwise it only gets frustrating a failure with dynamic computations at the moment due to the nature of not catching raised failures will lead inspec check to completely fail it may be that a combination of catching failures and allowing the user to ignore the error may lead to relief as well | 1 |
5,787 | 30,652,759,843 | IssuesEvent | 2023-07-25 09:59:42 | centerofci/mathesar | https://api.github.com/repos/centerofci/mathesar | closed | Support unknown Postgres types | type: enhancement work: backend status: draft restricted: maintainers | Mathesar currently (erroneously) presumes that each Postgres data type it sees will be known via the relevant enums ([see comment](https://github.com/centerofci/mathesar/issues/3024#issuecomment-1632812393)) and, when an unknown type is encountered, Mathesar panics ([see bug report](https://github.com/centerofci/mathesar/issues/2709)). We want Mathesar to handle unknown types gracefully. | True | Support unknown Postgres types - Mathesar currently (erroneously) presumes that each Postgres data type it sees will be known via the relevant enums ([see comment](https://github.com/centerofci/mathesar/issues/3024#issuecomment-1632812393)) and, when an unknown type is encountered, Mathesar panics ([see bug report](https://github.com/centerofci/mathesar/issues/2709)). We want Mathesar to handle unknown types gracefully. | main | support unknown postgres types mathesar currently erroneously presumes that each postgres data type it sees will be known via the relevant enums and when an unknown type is encountered mathesar panics we want mathesar to handle unknown types gracefully | 1
243,817 | 18,727,078,755 | IssuesEvent | 2021-11-03 17:20:38 | AY2122S1-CS2103T-F13-4/tp | https://api.github.com/repos/AY2122S1-CS2103T-F13-4/tp | closed | [PE-D] Unclear note about command format | documentation | 
Can be used as a blank space seems quite confusing as to what is trying to be said here
<!--session: 1635494163592-873bf944-793e-4937-81c8-51e960c4230f-->
<!--Version: Web v3.4.1-->
-------------
Labels: `severity.Medium` `type.DocumentationBug`
original: benluiwj/ped#9 | 1.0 | [PE-D] Unclear note about command format - 
Can be used as a blank space seems quite confusing as to what is trying to be said here
<!--session: 1635494163592-873bf944-793e-4937-81c8-51e960c4230f-->
<!--Version: Web v3.4.1-->
-------------
Labels: `severity.Medium` `type.DocumentationBug`
original: benluiwj/ped#9 | non_main | unclear note about command format can be used as a blank space seems quite confusing as to what is trying to be said here labels severity medium type documentationbug original benluiwj ped | 0 |
4,774 | 24,593,157,074 | IssuesEvent | 2022-10-14 05:32:42 | coq-community/manifesto | https://api.github.com/repos/coq-community/manifesto | closed | Volunteer interim maintainers needed for VsCoq | change-maintainer maintainer-wanted | The Coq core team and Coq-community are looking for volunteer interim maintainers of the [VsCoq](https://github.com/coq-community/vscoq) project.
VsCoq is an extension of the [Visual Studio Code](https://code.visualstudio.com/) (VS Code) editor to support Coq source files and interaction with Coq. In a recent survey of Coq users, around [one third of respondents](https://coq.discourse.group/t/coq-community-survey-2022-results-part-ii/1746#coq-user-interfaces-7) reported that they use VS Code for Coq.
As described in [the public roadmap](https://github.com/coq-community/vscoq/wiki/VsCoq-1.0-Roadmap), VsCoq developers are currently focusing on building a new simplified architecture for VsCoq. This means that there are no longer any resources available for maintaining the current codebase.
VsCoq is an open source project available under the MIT license and developed in the [Coq-community organization](https://github.com/coq-community/manifesto) on GitHub. VsCoq is written in TypeScript and uses an XML-based protocol to communicate with Coq. Interim maintainers are expected to implement bugfixes, respond to pull requests and issues on GitHub, and release new versions of VsCoq on the [Visual Studio Marketplace](https://marketplace.visualstudio.com/) and [Open VSX](https://open-vsx.org/).
An important goal of interim maintenance is to provide a good experience for Coq users relying on VS Code for Coq while the new VsCoq architecture (and support for this architecture in Coq itself) is being developed and tested. However, interim maintainers should be aware that substantial parts of the current codebase are due to be replaced as the roadmap is implemented.
During their tenure, interim maintainers will be considered part of the [Coq Team](https://coq.inria.fr/coq-team.html) and credited for their work in release notes for Coq releases, for example [on Zenodo](https://doi.org/10.5281/zenodo.1003420).
Please respond to this GitHub issue with your motivation, and summary of relevant experience, for becoming an interim maintainer of VsCoq. The interim maintainer(s) will be selected from the issue responders by the Coq core team and Coq-community owners. Those not selected will still be encouraged to contribute to VsCoq in collaboration with the new maintainer(s). | True | Volunteer interim maintainers needed for VsCoq - The Coq core team and Coq-community are looking for volunteer interim maintainers of the [VsCoq](https://github.com/coq-community/vscoq) project.
VsCoq is an extension of the [Visual Studio Code](https://code.visualstudio.com/) (VS Code) editor to support Coq source files and interaction with Coq. In a recent survey of Coq users, around [one third of respondents](https://coq.discourse.group/t/coq-community-survey-2022-results-part-ii/1746#coq-user-interfaces-7) reported that they use VS Code for Coq.
As described in [the public roadmap](https://github.com/coq-community/vscoq/wiki/VsCoq-1.0-Roadmap), VsCoq developers are currently focusing on building a new simplified architecture for VsCoq. This means that there are no longer any resources available for maintaining the current codebase.
VsCoq is an open source project available under the MIT license and developed in the [Coq-community organization](https://github.com/coq-community/manifesto) on GitHub. VsCoq is written in TypeScript and uses an XML-based protocol to communicate with Coq. Interim maintainers are expected to implement bugfixes, respond to pull requests and issues on GitHub, and release new versions of VsCoq on the [Visual Studio Marketplace](https://marketplace.visualstudio.com/) and [Open VSX](https://open-vsx.org/).
An important goal of interim maintenance is to provide a good experience for Coq users relying on VS Code for Coq while the new VsCoq architecture (and support for this architecture in Coq itself) is being developed and tested. However, interim maintainers should be aware that substantial parts of the current codebase are due to be replaced as the roadmap is implemented.
During their tenure, interim maintainers will be considered part of the [Coq Team](https://coq.inria.fr/coq-team.html) and credited for their work in release notes for Coq releases, for example [on Zenodo](https://doi.org/10.5281/zenodo.1003420).
Please respond to this GitHub issue with your motivation, and summary of relevant experience, for becoming an interim maintainer of VsCoq. The interim maintainer(s) will be selected from the issue responders by the Coq core team and Coq-community owners. Those not selected will still be encouraged to contribute to VsCoq in collaboration with the new maintainer(s). | main | volunteer interim maintainers needed for vscoq the coq core team and coq community are looking for volunteer interim maintainers of the project vscoq is an extension of the vs code editor to support coq source files and interaction with coq in a recent survey of coq users around reported that they use vs code for coq as described in vscoq developers are currently focusing on building a new simplified architecture for vscoq this means that there are no longer any resources available for maintaining the current codebase vscoq is an open source project available under the mit license and developed in the on github vscoq is written in typescript and uses an xml based protocol to communicate with coq interim maintainers are expected to implement bugfixes respond to pull requests and issues on github and release new versions of vscoq on the and an important goal of interim maintenance is to provide a good experience for coq users relying on vs code for coq while the new vscoq architecture and support for this architecture in coq itself is being developed and tested however interim maintainers should be aware that substantial parts of the current codebase are due to be replaced as the roadmap is implemented during their tenure interim maintainers will be considered part of the and credited for their work in release notes for coq releases for example please respond to this github issue with your motivation and summary of relevant experience for becoming an interim maintainer of vscoq the interim maintainer s will be selected from the issue responders by the coq core team and coq community owners those 
not selected will still be encouraged to contribute to vscoq in collaboration with the new maintainer s | 1 |
295 | 2,788,997,556 | IssuesEvent | 2015-05-08 16:46:12 | tumblr/colossus | https://api.github.com/repos/tumblr/colossus | opened | printing an HttpRequest prints the entire body | enhancement project:service scope:small | Which is not a bad thing...if the body is small..if you are posting an image, or other binary content..the logs become illegible.
We should introduce some way of controlling what, on both a Request and Response get printed into a log. | 1.0 | printing an HttpRequest prints the entire body - Which is not a bad thing...if the body is small..if you are posting an image, or other binary content..the logs become illegible.
We should introduce some way of controlling what, on both a Request and Response get printed into a log. | non_main | printing an httprequest prints the entire body which is not a bad thing if the body is small if you are posting an image or other binary content the logs become illegible we should introduce some way of controlling what on both a request and response get printed into a log | 0 |
4,197 | 20,601,581,014 | IssuesEvent | 2022-03-06 10:49:23 | truecharts/apps | https://api.github.com/repos/truecharts/apps | reopened | Add Phabricator | New App Request No-Maintainer | Phabricator is a collection of development tools, including project management, SVN and Git repositories, code reviews, bug tracking, etc.
It has Bitnami Helm charts available: https://artifacthub.io/packages/helm/bitnami/phabricator
| True | Add Phabricator - Phabricator is a collection of development tools, including project management, SVN and Git repositories, code reviews, bug tracking, etc.
It has Bitnami Helm charts available: https://artifacthub.io/packages/helm/bitnami/phabricator
| main | add phabricator phabricator is colection of development tools including project management svn and git repositories code reviews bug traking etc it has bitnami helm charts available | 1 |
711,446 | 24,464,496,672 | IssuesEvent | 2022-10-07 13:57:50 | Puzzlepart/prosjektportalen365 | https://api.github.com/repos/Puzzlepart/prosjektportalen365 | closed | Ability to hide certain parts of the Project Information webpart | enhancement easy-task priority: low complexity: small customer funded | **Describe the solution you'd like**
Ability to hide certain parts of the 'ProjectInformation' webpart through webpart props.
Solution:
- Hide all buttons
- Toggle visibility per button
Additional change:
- Change name of webpart based on responses from userforum, new name will be: "Prosjektinformasjon", this includes the buttons
- Also hide "Vis alle egenskaper" when in edit mode. | 1.0 | Ability to hide certain parts of the Project Information webpart - **Describe the solution you'd like**
Ability to hide certain parts of the 'ProjectInformation' webpart through webpart props.
Solution:
- Hide all buttons
- Toggle visibility per button
Additional change:
- Change name of webpart based on responses from userforum, new name will be: "Prosjektinformasjon", this includes the buttons
- Also hide "Vis alle egenskaper" when in edit mode. | non_main | ability to hide certain parts of the project information webpart describe the solution you d like ability to hide certain certain part of the projectinformation webpart through webpart props solution hide all buttons toggle visible per button additional change change name of webpart based on responses from userforum new name will be prosjektinformasjon this includes the buttons also hide vis alle egenskaper when in edit mode | 0 |
494,730 | 14,264,404,752 | IssuesEvent | 2020-11-20 15:42:39 | bounswe/bounswe2020group3 | https://api.github.com/repos/bounswe/bounswe2020group3 | opened | [Android] Connect project page and project main page with backend | Android Priority: High | * **Project:ANDROID**
* **This is a:FEATURE REQUEST**
* **Description of the issue**
Connecting project pages to backend
* **Deadline for resolution:**
22.11.2020
| 1.0 | [Android] Connect project page and project main page with backend - * **Project:ANDROID**
* **This is a:FEATURE REQUEST**
* **Description of the issue**
Connecting project pages to backend
* **Deadline for resolution:**
22.11.2020
| non_main | connect project page and project main page with backend project android this is a feature request description of the issue connectin project pages to backend deadline for resolution | 0 |
507,969 | 14,685,929,661 | IssuesEvent | 2021-01-01 12:10:33 | naev/naev | https://api.github.com/repos/naev/naev | opened | [proposal] generate and pull some assets from naev-artwork-production repo | Priority-High Type-Enhancement | Given that using binary files bloats the repo, the following proposal was discussed on discord.
1. Have naev-artwork generate some files to naev-artwork-production using CI
2. Have naev-artwork-production be a submodule of naev repository
3. Use physfs to look for files in the naev-artwork-production submodule when run from source
4. Install the naev-artwork-production submodule files alongside the regular ones
This will generate automatically production files from source and use them, avoiding bloat in the main repo. Issues that might need to be addressed include making it all run smoothly and ensuring nothing dies when a history rewrite is done to the naev-artwork-production repo (since it will likely bloat heavily).
I consider this to be fairly important given the amount of new images that are needed/planned with the VN framework. | 1.0 | [proposal] generate and pull some assets from naev-artwork-production repo - Given that using binary files bloats the repo, the following proposal was discussed on discord.
1. Have naev-artwork generate some files to naev-artwork-production using CI
2. Have naev-artwork-production be a submodule of naev repository
3. Use physfs to look for files in the naev-artwork-production submodule when run from source
4. Install the naev-artwork-production submodule files alongside the regular ones
This will generate automatically production files from source and use them, avoiding bloat in the main repo. Issues that might need to be addressed include making it all run smoothly and ensuring nothing dies when a history rewrite is done to the naev-artwork-production repo (since it will likely bloat heavily).
I consider this to be fairly important given the amount of new images that are needed/planned with the VN framework. | non_main | generate and pull some assets from naev artwork production repo given that using binary files bloats the repo the following proposal was discussed on discord have naev artwork generate some files to naev artwork production using ci have naev artwork production be a submodule of naev repository use physfs to look for files in the naev artwork production submodule when run from source install the naev artwork production submodule files alongside the regular ones this will generate automatically production files from source and use them avoiding bloat in the main repo issues that might need to be addressed include making it all run smoothly and ensuring nothing dies when a history rewrite is done to the naev artwork production repo since it will likely bloat heavily i consider this to be fairly important given the amount of new images that are needed planned with the vn framework | 0 |
5,387 | 27,072,091,875 | IssuesEvent | 2023-02-14 07:53:06 | kedacore/governance | https://api.github.com/repos/kedacore/governance | opened | Provide guidance on when/how maintainers approve PRs/feature requests | governance community maintainership | Provide guidance on when/how maintainers approve PRs/feature requests.
For example: what are the rules for approving PRs and/or roadmap updates?
See:
- https://docs.google.com/document/d/11Qy7uWginP3gFQVpqnNM4CRp4kQKnEM6FTs2M6pm2o8/edit | True | Provide guidance on when/how maintainers approve PRs/feature requests - Provide guidance on when/how maintainers approve PRs/feature requests.
For example: what are the rules for approving PRs and/or roadmap updates?
See:
- https://docs.google.com/document/d/11Qy7uWginP3gFQVpqnNM4CRp4kQKnEM6FTs2M6pm2o8/edit | main | provide guidance on when how maintainers approve prs feature requests provide guidance on when how maintainers approve prs feature requests for example what are the rules for approving prs and or roadmap updates see | 1 |
1,431 | 6,219,389,040 | IssuesEvent | 2017-07-09 13:17:06 | backdrop-ops/contrib | https://api.github.com/repos/backdrop-ops/contrib | closed | Requesting maintainership for smsframework module | Maintainer application | Hi, [SMS Framework](https://github.com/backdrop-contrib/smsframework) has already been ported over to Backdrop (thanks to all those who contributed). I'm the maintainer on d.o (https://www.drupal.org/u/almaudoh) and would like to update the current branch to include the latest bug fixes. I would also help to maintain the module (looks like some work still left in the 1.x-1.0.0 branch too)
| True | Requesting maintainership for smsframework module - Hi, [SMS Framework](https://github.com/backdrop-contrib/smsframework) has already been ported over to Backdrop (thanks to all those who contributed). I'm the maintainer on d.o (https://www.drupal.org/u/almaudoh) and would like to update the current branch to include the latest bug fixes. I would also help to maintain the module (looks like some work still left in the 1.x-1.0.0 branch too)
| main | requesting maintainership for smsframework module hi has already been ported over to backdrop thanks to all those who contributed i m the maintainer on d o and would like to update the current branch to include the latest bug fixes i would also help to maintain the module looks like some work still left in the x branch too | 1 |
518,921 | 15,037,350,982 | IssuesEvent | 2021-02-02 16:14:42 | kymckay/f21as-project | https://api.github.com/repos/kymckay/f21as-project | opened | Implement Enums | category|menu category|orders priority|high type|addition | - [ ] Size: S, M, L
- [ ] Milk: None, Regular, Soy, Oat
- [ ] DietaryClass: Vegan, Vegetarian, Gluten Free, Dairy Free
- [ ] Label: Coffee, Tea
- [ ] Colour: White, Red, Green
Keep in mind there will need to be a way of tracking the effect on price for some of these (Size, Colour, possibly Milk). See the lecture notes on enums 😃
| 1.0 | Implement Enums - - [ ] Size: S, M, L
- [ ] Milk: None, Regular, Soy, Oat
- [ ] DietaryClass: Vegan, Vegetarian, Gluten Free, Dairy Free
- [ ] Label: Coffee, Tea
- [ ] Colour: White, Red, Green
Keep in mind there will need to be a way of tracking the effect on price for some of these (Size, Colour, possibly Milk). See the lecture notes on enums 😃
| non_main | implement enums size s m l milk none regular soy oat dietaryclass vegan vegetarian gluten free diary free label coffee tea colour white red green keep in mind there will need to be a way of tracking the effect on price for some of these size colour possibly milk see the lecture notes on enums 😃 | 0 |
112,523 | 17,090,988,371 | IssuesEvent | 2021-07-08 17:25:19 | globeandmail/sophi-for-wordpress | https://api.github.com/repos/globeandmail/sophi-for-wordpress | opened | CVE-2020-36048 (High) detected in engine.io-3.5.0.tgz | security vulnerability | ## CVE-2020-36048 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>engine.io-3.5.0.tgz</b></p></summary>
<p>The realtime engine behind Socket.IO. Provides the foundation of a bidirectional connection between client and server</p>
<p>Library home page: <a href="https://registry.npmjs.org/engine.io/-/engine.io-3.5.0.tgz">https://registry.npmjs.org/engine.io/-/engine.io-3.5.0.tgz</a></p>
<p>Path to dependency file: sophi-for-wordpress/package.json</p>
<p>Path to vulnerable library: sophi-for-wordpress/node_modules/engine.io/package.json</p>
<p>
Dependency Hierarchy:
- scripts-1.3.1.tgz (Root Library)
- browser-sync-2.26.14.tgz
- socket.io-2.4.0.tgz
- :x: **engine.io-3.5.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/globeandmail/sophi-for-wordpress/commit/f53708b0035f054189b00c3ec0de1de8c8799b41">f53708b0035f054189b00c3ec0de1de8c8799b41</a></p>
<p>Found in base branch: <b>develop</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Engine.IO before 4.0.0 allows attackers to cause a denial of service (resource consumption) via a POST request to the long polling transport.
<p>Publish Date: 2021-01-08
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36048>CVE-2020-36048</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-36048">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-36048</a></p>
<p>Release Date: 2021-01-08</p>
<p>Fix Resolution: engine.io - 4.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-36048 (High) detected in engine.io-3.5.0.tgz - ## CVE-2020-36048 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>engine.io-3.5.0.tgz</b></p></summary>
<p>The realtime engine behind Socket.IO. Provides the foundation of a bidirectional connection between client and server</p>
<p>Library home page: <a href="https://registry.npmjs.org/engine.io/-/engine.io-3.5.0.tgz">https://registry.npmjs.org/engine.io/-/engine.io-3.5.0.tgz</a></p>
<p>Path to dependency file: sophi-for-wordpress/package.json</p>
<p>Path to vulnerable library: sophi-for-wordpress/node_modules/engine.io/package.json</p>
<p>
Dependency Hierarchy:
- scripts-1.3.1.tgz (Root Library)
- browser-sync-2.26.14.tgz
- socket.io-2.4.0.tgz
- :x: **engine.io-3.5.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/globeandmail/sophi-for-wordpress/commit/f53708b0035f054189b00c3ec0de1de8c8799b41">f53708b0035f054189b00c3ec0de1de8c8799b41</a></p>
<p>Found in base branch: <b>develop</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Engine.IO before 4.0.0 allows attackers to cause a denial of service (resource consumption) via a POST request to the long polling transport.
<p>Publish Date: 2021-01-08
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36048>CVE-2020-36048</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-36048">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-36048</a></p>
<p>Release Date: 2021-01-08</p>
<p>Fix Resolution: engine.io - 4.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve high detected in engine io tgz cve high severity vulnerability vulnerable library engine io tgz the realtime engine behind socket io provides the foundation of a bidirectional connection between client and server library home page a href path to dependency file sophi for wordpress package json path to vulnerable library sophi for wordpress node modules engine io package json dependency hierarchy scripts tgz root library browser sync tgz socket io tgz x engine io tgz vulnerable library found in head commit a href found in base branch develop vulnerability details engine io before allows attackers to cause a denial of service resource consumption via a post request to the long polling transport publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution engine io step up your open source security game with whitesource | 0 |
3,678 | 15,036,638,211 | IssuesEvent | 2021-02-02 15:28:13 | ajour/ajour | https://api.github.com/repos/ajour/ajour | closed | Sort language in drop-down menu | B - missing feature C - waiting on maintainer | **Is your feature request related to a problem? Please describe.**
I noticed that the languages in the drop-down menu were unsorted. I think it would be nice if they were.
**Describe the solution you'd like**
Sort the available locales alphabetically. | True | Sort language in drop-down menu - **Is your feature request related to a problem? Please describe.**
I noticed that the languages in the drop-down menu were unsorted. I think it would be nice if they were.
**Describe the solution you'd like**
Sort the available locales alphabetically. | main | sort language in drop down menu is your feature request related to a problem please describe i noticed that the languages in the drop down menu were unsorted i think it would be nice if they were describe the solution you d like sort the available locales alphabetically | 1 |
105,100 | 16,624,115,009 | IssuesEvent | 2021-06-03 07:24:58 | Thanraj/OpenSSL_1.0.1b | https://api.github.com/repos/Thanraj/OpenSSL_1.0.1b | opened | CVE-2012-2333 (Medium) detected in opensslOpenSSL_1_0_1b, opensslOpenSSL_1_0_1b | security vulnerability | ## CVE-2012-2333 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>opensslOpenSSL_1_0_1b</b>, <b>opensslOpenSSL_1_0_1b</b></p></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Integer underflow in OpenSSL before 0.9.8x, 1.0.0 before 1.0.0j, and 1.0.1 before 1.0.1c, when TLS 1.1, TLS 1.2, or DTLS is used with CBC encryption, allows remote attackers to cause a denial of service (buffer over-read) or possibly have unspecified other impact via a crafted TLS packet that is not properly handled during a certain explicit IV calculation.
<p>Publish Date: 2012-05-14
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2012-2333>CVE-2012-2333</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>6.8</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2012-2333">https://nvd.nist.gov/vuln/detail/CVE-2012-2333</a></p>
<p>Release Date: 2012-05-14</p>
<p>Fix Resolution: 0.9.8x,1.0.0j,1.0.1c</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2012-2333 (Medium) detected in opensslOpenSSL_1_0_1b, opensslOpenSSL_1_0_1b - ## CVE-2012-2333 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>opensslOpenSSL_1_0_1b</b>, <b>opensslOpenSSL_1_0_1b</b></p></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Integer underflow in OpenSSL before 0.9.8x, 1.0.0 before 1.0.0j, and 1.0.1 before 1.0.1c, when TLS 1.1, TLS 1.2, or DTLS is used with CBC encryption, allows remote attackers to cause a denial of service (buffer over-read) or possibly have unspecified other impact via a crafted TLS packet that is not properly handled during a certain explicit IV calculation.
<p>Publish Date: 2012-05-14
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2012-2333>CVE-2012-2333</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>6.8</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2012-2333">https://nvd.nist.gov/vuln/detail/CVE-2012-2333</a></p>
<p>Release Date: 2012-05-14</p>
<p>Fix Resolution: 0.9.8x,1.0.0j,1.0.1c</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve medium detected in opensslopenssl opensslopenssl cve medium severity vulnerability vulnerable libraries opensslopenssl opensslopenssl vulnerability details integer underflow in openssl before before and before when tls tls or dtls is used with cbc encryption allows remote attackers to cause a denial of service buffer over read or possibly have unspecified other impact via a crafted tls packet that is not properly handled during a certain explicit iv calculation publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
1,735 | 6,574,863,741 | IssuesEvent | 2017-09-11 14:19:36 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | shell.py has \r instead of \n ? | affects_2.3 bug_report waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
ansible/module_utils/shell.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.3.0 (devel c064dce791) last updated 2016/10/19 10:54:36 (GMT -400)
lib/ansible/modules/core: (detached HEAD b59b5d36e0) last updated 2016/10/19 10:54:36 (GMT -400)
lib/ansible/modules/extras: (detached HEAD 3f77bb6857) last updated 2016/10/18 11:43:45 (GMT -400)
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
N/A
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
N/A
##### SUMMARY
<!--- Explain the problem briefly -->
I noticed that shell.py has carriage-return (\r) in send strings instead of newline in some places, I'm not sure if there was a particular reason for this? After looking deeper, it does appear that it is allocating a PTY by default in paramiko, not sure if that makes this point somewhat moot. Normal convention is to "send" \n as far as I have experienced in various expect scripts over the years? I guess I'm just curious if there was an implementation detail why CR was used.
Below is a diff that I implemented; it appears to work as expected in my test environments. I can submit a PR if this fix is desired.
<!--- Paste example playbooks or commands between quotes below -->
```
--- a/lib/ansible/module_utils/shell.py
+++ b/lib/ansible/module_utils/shell.py
@@ -152,7 +152,7 @@ class Shell(object):
responses = list()
try:
for command in to_list(commands):
- cmd = '%s\r' % str(command)
+ cmd = '%s\n' % str(command)
self.shell.sendall(cmd)
responses.append(self.receive(command))
except socket.timeout:
@@ -172,7 +172,7 @@ class Shell(object):
for pr, ans in zip(prompt, response):
match = pr.search(resp)
if match:
- answer = '%s\r' % ans
+ answer = '%s\n' % ans
self.shell.sendall(answer)
return True
```
| True | shell.py has \r instead of \n ? - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
ansible/module_utils/shell.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.3.0 (devel c064dce791) last updated 2016/10/19 10:54:36 (GMT -400)
lib/ansible/modules/core: (detached HEAD b59b5d36e0) last updated 2016/10/19 10:54:36 (GMT -400)
lib/ansible/modules/extras: (detached HEAD 3f77bb6857) last updated 2016/10/18 11:43:45 (GMT -400)
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
N/A
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
N/A
##### SUMMARY
<!--- Explain the problem briefly -->
I noticed that shell.py has carriage-return (\r) in send strings instead of newline in some places, I'm not sure if there was a particular reason for this? After looking deeper, it does appear that it is allocating a PTY by default in paramiko, not sure if that makes this point somewhat moot. Normal convention is to "send" \n as far as I have experienced in various expect scripts over the years? I guess I'm just curious if there was an implementation detail why CR was used.
Below is a diff that I implemented; it appears to work as expected in my test environments. I can submit a PR if this fix is desired.
<!--- Paste example playbooks or commands between quotes below -->
```
--- a/lib/ansible/module_utils/shell.py
+++ b/lib/ansible/module_utils/shell.py
@@ -152,7 +152,7 @@ class Shell(object):
responses = list()
try:
for command in to_list(commands):
- cmd = '%s\r' % str(command)
+ cmd = '%s\n' % str(command)
self.shell.sendall(cmd)
responses.append(self.receive(command))
except socket.timeout:
@@ -172,7 +172,7 @@ class Shell(object):
for pr, ans in zip(prompt, response):
match = pr.search(resp)
if match:
- answer = '%s\r' % ans
+ answer = '%s\n' % ans
self.shell.sendall(answer)
return True
```
| main | shell py has r instead of n issue type bug report component name ansible module utils shell py ansible version ansible devel last updated gmt lib ansible modules core detached head last updated gmt lib ansible modules extras detached head last updated gmt configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables n a os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific n a summary i noticed that shell py has carriage return r in send strings instead of newline in some places i m not sure if there was a particular reason for this after looking deeper it does appear that it is allocating a pty by default in paramiko not sure if that makes this point somewhat mute normal convention is to send n as far as i have experienced in various expect scripts over the years i guess i m just curious if there was an implementation detail why cr was used below is a diff that i implemented it appears to work as expected in my test environments i can submit pr if this fix is desired a lib ansible module utils shell py b lib ansible module utils shell py class shell object responses list try for command in to list commands cmd s r str command cmd s n str command self shell sendall cmd responses append self receive command except socket timeout class shell object for pr ans in zip prompt response match pr search resp if match answer s r ans answer s n ans self shell sendall answer return true | 1 |
3,340 | 12,953,786,865 | IssuesEvent | 2020-07-20 01:45:10 | charlesism/manifest_generator | https://api.github.com/repos/charlesism/manifest_generator | closed | File processing code in root package is too convoluted | maintainability | Eg: manifest_generator_process() in "manifest_generator_process.swift" could use refactoring | True | File processing code in root package is too convoluted - Eg: manifest_generator_process() in "manifest_generator_process.swift" could use refactoring | main | file processing code in root package is too convoluted eg manifest generator process in manifest generator process swift could use refactoring | 1 |
5,677 | 29,807,683,343 | IssuesEvent | 2023-06-16 12:55:18 | carbon-design-system/carbon | https://api.github.com/repos/carbon-design-system/carbon | closed | [Bug]: $tokens is defined in multiple places in scss | type: bug 🐛 status: needs triage 🕵️♀️ status: waiting for maintainer response 💬 | ### Package
@carbon/react
### Browser
Chrome, Firefox
### Package version
1.31.0
### React version
17.0.2
### Description
Looks like some changes have been made to https://github.com/carbon-design-system/carbon/commit/a1a528224476cfe00316ff83582fb2859bc09dea
Which include defining the variable $tokens in _layout
https://github.com/carbon-design-system/carbon/blame/ec85fb2920f2921cbd5194fceaf7efcd7608aaf9/packages/styles/scss/utilities/_layout.scss#L15
This is causes issues now as type forwards the variable $tokens https://github.com/carbon-design-system/carbon/blob/ec85fb2920f2921cbd5194fceaf7efcd7608aaf9/packages/styles/scss/type/_index.scss#L83
This is causing our scss not to compile
### Reproduction/example
https://stackblitz.com/edit/github-vwvzgk?file=src%2Findex.scss
### Steps to reproduce
We use a global style sheet that imports the key parts of carbon styles that are required. This global file looks something like this
```
@forward '@carbon/react/scss/config' with (
$css--font-face: false,
$css--body: true,
$css--default-type: false,
$css--reset: true
);
// Reset
@use '@carbon/react/scss/reset';
// Grid
@forward '@carbon/react/scss/grid';
// Theme helper
@use '@carbon/react/scss/compat/themes' as compat;
@use '@carbon/react/scss/themes';
@forward '@carbon/react/scss/theme' with (
$fallback: compat.$g10,
$theme: themes.$g10
);
// Helpers
@forward '@carbon/react/scss/spacing';
@forward '@carbon/react/scss/type';
@forward '@carbon/react/scss/colors';
@forward '@carbon/react/scss/motion';
@forward '@carbon/react/scss/utilities';
```
When we try to use this global file in other places, the build fails when it hits `@forward '@carbon/react/scss/utilities'; `
### Suggested Severity
Severity 2 = User cannot complete task, and/or no workaround within the user experience of a given component.
### Application/PAL
Pak WAIOPS
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/carbon-design-system/carbon/blob/f555616971a03fd454c0f4daea184adf41fff05b/.github/CODE_OF_CONDUCT.md)
- [X] I checked the [current issues](https://github.com/carbon-design-system/carbon/issues) for duplicate problems | True | [Bug]: $tokens is defined in multiple places in scss - ### Package
@carbon/react
### Browser
Chrome, Firefox
### Package version
1.31.0
### React version
17.0.2
### Description
Looks like some changes have been made to https://github.com/carbon-design-system/carbon/commit/a1a528224476cfe00316ff83582fb2859bc09dea
Which include defining the variable $tokens in _layout
https://github.com/carbon-design-system/carbon/blame/ec85fb2920f2921cbd5194fceaf7efcd7608aaf9/packages/styles/scss/utilities/_layout.scss#L15
This is causes issues now as type forwards the variable $tokens https://github.com/carbon-design-system/carbon/blob/ec85fb2920f2921cbd5194fceaf7efcd7608aaf9/packages/styles/scss/type/_index.scss#L83
This is causing our scss not to compile
### Reproduction/example
https://stackblitz.com/edit/github-vwvzgk?file=src%2Findex.scss
### Steps to reproduce
We use a global style sheet that imports the key parts of carbon styles that are required. This global file looks something like this
```
@forward '@carbon/react/scss/config' with (
$css--font-face: false,
$css--body: true,
$css--default-type: false,
$css--reset: true
);
// Reset
@use '@carbon/react/scss/reset';
// Grid
@forward '@carbon/react/scss/grid';
// Theme helper
@use '@carbon/react/scss/compat/themes' as compat;
@use '@carbon/react/scss/themes';
@forward '@carbon/react/scss/theme' with (
$fallback: compat.$g10,
$theme: themes.$g10
);
// Helpers
@forward '@carbon/react/scss/spacing';
@forward '@carbon/react/scss/type';
@forward '@carbon/react/scss/colors';
@forward '@carbon/react/scss/motion';
@forward '@carbon/react/scss/utilities';
```
When we try to use this global file in other places, the build fails when it hits `@forward '@carbon/react/scss/utilities'; `
### Suggested Severity
Severity 2 = User cannot complete task, and/or no workaround within the user experience of a given component.
### Application/PAL
Pak WAIOPS
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/carbon-design-system/carbon/blob/f555616971a03fd454c0f4daea184adf41fff05b/.github/CODE_OF_CONDUCT.md)
- [X] I checked the [current issues](https://github.com/carbon-design-system/carbon/issues) for duplicate problems | main | tokens is defined in multiple places in scss package carbon react browser chrome firefox package version react version description looks like some changes have been made to which include defining the variable tokens in layout this is causes issues now as type forwards the variable tokens this is causing our scss not to compile reproduction example steps to reproduce we use a global style sheet that imports the key parts of carbon styles that are required this global file looks something like this forward carbon react scss config with css font face false css body true css default type false css reset true reset use carbon react scss reset grid forward carbon react scss grid theme helper use carbon react scss compat themes as compat use carbon react scss themes forward carbon react scss theme with fallback compat theme themes helpers forward carbon react scss spacing forward carbon react scss type forward carbon react scss colors forward carbon react scss motion forward carbon react scss utilities when we try to use this global file in other places the build fails when it hits forward carbon react scss utilities suggested severity severity user cannot complete task and or no workaround within the user experience of a given component application pal pak waiops code of conduct i agree to follow this project s i checked the for duplicate problems | 1 |
617 | 4,111,174,116 | IssuesEvent | 2016-06-07 04:10:24 | Particular/ServiceControl | https://api.github.com/repos/Particular/ServiceControl | closed | SCMU instance action buttons shouldn't show dropshadow under tooltips | Tag: Installer Tag: Maintainer Prio Type: Bug | SCMU instance action buttons shouldn't show dropshadow under tooltips:

CC // @distantcam @gbiellem | True | SCMU instance action buttons shouldn't show dropshadow under tooltips - SCMU instance action buttons shouldn't show dropshadow under tooltips:

CC // @distantcam @gbiellem | main | scmu instance action buttons shouldn t show dropshadow under tooltips scmu instance action buttons shouldn t show dropshadow under tooltips cc distantcam gbiellem | 1 |
1,053 | 4,863,884,893 | IssuesEvent | 2016-11-14 16:31:50 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | ec2 module: spot_wait_timeout exceeded, but spot instance launched anyway | affects_1.9 aws bug_report cloud waiting_on_maintainer | Issue Type: Bug Report
Ansible Version:
1.9.3 from https://launchpad.net/~ansible/+archive/ubuntu/ansible
2.0.0 (devel f0efe1ecb0)
Ansible Configuration: default \ as installed
Environment: Ubuntu 14.04
Summary: spot_wait_timeout exceeded and ec2 task failed, but spot instance launched anyway because spot request is not canceled on failure
Steps To Reproduce:
1. Run playbook https://gist.github.com/kai11/09d9bb952d422348a006
2. Playbook will fail with message "msg: wait for spot requests timeout on ..."
3. Check spot requests in AWS console - t1.micro will be open and eventually will be converted to instance without any tags
Expected Results:
Cancel spot request on spot_wait_timeout
Actual Results:
Spot request still open
| True | ec2 module: spot_wait_timeout exceeded, but spot instance launched anyway - Issue Type: Bug Report
Ansible Version:
1.9.3 from https://launchpad.net/~ansible/+archive/ubuntu/ansible
2.0.0 (devel f0efe1ecb0)
Ansible Configuration: default \ as installed
Environment: Ubuntu 14.04
Summary: spot_wait_timeout exceeded and ec2 task failed, but spot instance launched anyway because spot request is not canceled on failure
Steps To Reproduce:
1. Run playbook https://gist.github.com/kai11/09d9bb952d422348a006
2. Playbook will fail with message "msg: wait for spot requests timeout on ..."
3. Check spot requests in AWS console - t1.micro will be open and eventually will be converted to instance without any tags
Expected Results:
Cancel spot request on spot_wait_timeout
Actual Results:
Spot request still open
| main | module spot wait timeout exceeded but spot instance launched anyway issue type bug report ansible version from devel ansible configuration default as installed environment ubuntu summary spot wait timeout exceeded and task failed but spot instance launched anyway because spot request is not canceled on failure steps to reproduce run playbook playbook will fail with message msg wait for spot requests timeout on check spot requests in aws console micro will be open and eventually will be converted to instance without any tags expected results cancel sport request on spot wait timeout actual results spot request still open | 1 |
4,805 | 24,758,163,411 | IssuesEvent | 2022-10-21 20:02:06 | MDAnalysis/membrane-curvature | https://api.github.com/repos/MDAnalysis/membrane-curvature | opened | Modernize setup to comply with PEP518 | Maintainability | Although still functional, installation with `setup.py` is deprecated. According to [PEP518](https://peps.python.org/pep-0518/#file-format):
> The build system dependencies will be stored in a file named pyproject.toml that is written in the TOML format [[6]](https://peps.python.org/pep-0518/#toml).
Additionally, there are two files that can be deleted in the root directory: [.lgtm.yml](https://github.com/MDAnalysis/membrane-curvature/blob/main/.lgtm.yml) and [_config.yml](https://github.com/MDAnalysis/membrane-curvature/blob/main/_config.yml)
To fix this issue:
- [ ] Add `pyproject.toml`.
- [ ] If necessary, modify `setup.cfg`.
- [ ] Remove [.lgtm.yml](https://github.com/MDAnalysis/membrane-curvature/blob/main/.lgtm.yml) and [_config.yml](https://github.com/MDAnalysis/membrane-curvature/blob/main/_config.yml) in root.
| True | Modernize setup to comply with PEP518 - Although still functional, installation with `setup.py` is deprecated. According to [PEP518](https://peps.python.org/pep-0518/#file-format):
> The build system dependencies will be stored in a file named pyproject.toml that is written in the TOML format [[6]](https://peps.python.org/pep-0518/#toml).
Additionally, there are two files that can be deleted in the root directory: [.lgtm.yml](https://github.com/MDAnalysis/membrane-curvature/blob/main/.lgtm.yml) and [_config.yml](https://github.com/MDAnalysis/membrane-curvature/blob/main/_config.yml)
To fix this issue:
- [ ] Add `pyproject.toml`.
- [ ] If necessary, modify `setup.cfg`.
- [ ] Remove [.lgtm.yml](https://github.com/MDAnalysis/membrane-curvature/blob/main/.lgtm.yml) and [_config.yml](https://github.com/MDAnalysis/membrane-curvature/blob/main/_config.yml) in root.
| main | modernize setup to comply with although still functional installation with setup py is deprecated according to the build system dependencies will be stored in a file named pyproject toml that is written in the toml format additionally there are two files that can be deleted in the root directory and to fix this issue add pyproject toml if necessary modify setup cfg remove and in root | 1 |
9,212 | 24,235,737,696 | IssuesEvent | 2022-09-26 23:00:12 | terrapower/armi | https://api.github.com/repos/terrapower/armi | closed | Can we finally remove Settings Rules? | architecture cleanup | For the past few years, the `SettingsRules` system has been antiquated at best, and redundant now that we have settings validators:
https://github.com/terrapower/armi/blob/f0d27e7405bde450b1ba01825c95783080974c53/armi/settings/settingsRules.py#L20-L22
Generally, we want to move away from "this code runs when you import ARMI" to things that more controllable and less magical.
The only place I can see in ARMI this will matter is is `armi/physics/neutronics/settings.py` there are some "rules" that will have to be converted to "validators". Which is a very small lift.
> But how many downstream projects would have to be converted first? That's what determines our level-of-effort here. | 1.0 | Can we finally remove Settings Rules? - For the past few years, the `SettingsRules` system has been antiquated at best, and redundant now that we have settings validators:
https://github.com/terrapower/armi/blob/f0d27e7405bde450b1ba01825c95783080974c53/armi/settings/settingsRules.py#L20-L22
Generally, we want to move away from "this code runs when you import ARMI" to things that more controllable and less magical.
The only place I can see in ARMI this will matter is is `armi/physics/neutronics/settings.py` there are some "rules" that will have to be converted to "validators". Which is a very small lift.
> But how many downstream projects would have to be converted first? That's what determines our level-of-effort here. | non_main | can we finally remove settings rules for the past few years the settingsrules system has been antiquated at best and redundant now that we have settings validators generally we want to move away from this code runs when you import armi to things that more controllable and less magical the only place i can see in armi this will matter is is armi physics neutronics settings py there are some rules that will have to be converted to validators which is a very small lift but how many downstream projects would have to be converted first that s what determines our level of effort here | 0 |
33,711 | 4,857,547,763 | IssuesEvent | 2016-11-12 17:24:49 | mlp6/ADPL | https://api.github.com/repos/mlp6/ADPL | opened | missing bucket tip events | bug testing | @aforbis-stokes has reported that the bucket tip count isn't working, specifically tip events are missing, but without any rhyme or reason that he has noticed. This could be an issue on the hardware and/or software side.
I can tackle the software side, but some hardware debugging that should be done:
1. Capture the output of the one-shot on tip events with the o-scope, and save that trace in the repo. Try to reproduce the missed events, and see if there is a correlation b/w unexpected one-shot output and the missed events.
2. From the o-scope output, characterize the actual voltage and duration of the pulse. We currently have a 2.5 s lockout time before re-measuring the bucket pin (``C4``). Determine if this is appropriate. | 1.0 | missing bucket tip events - @aforbis-stokes has reported that the bucket tip count isn't working, specifically tip events are missing, but without any rhyme or reason that he has noticed. This could be an issue on the hardware and/or software side.
I can tackle the software side, but some hardware debugging that should be done:
1. Capture the output of the one-shot on tip events with the o-scope, and save that trace in the repo. Try to reproduce the missed events, and see if there is a correlation b/w unexpected one-shot output and the missed events.
2. From the o-scope output, characterize the actual voltage and duration of the pulse. We currently have a 2.5 s lockout time before re-measuring the bucket pin (``C4``). Determine if this is appropriate. | non_main | missing bucket tip events aforbis stokes has reported that the bucket tip count isn t working specifically tip events are missing but without any rhyme or reason that he has noticed this could be an issue on the hardware and or software side i can tackle the software side but some hardware debugging that should be done capture the output of the one shot on tip events with the o scope and save that trace in the repo try to reproduce the missed events and see if there is a correlation b w unexpected one shot output and the missed events from the o scope output characterize the actual voltage and duration of the pulse we currently have a s lockout time before re measuring the bucket pin determine if this is appropriate | 0 |
26,112 | 12,854,910,956 | IssuesEvent | 2020-07-09 03:27:40 | wlandau/targets | https://api.github.com/repos/wlandau/targets | closed | Consider a light alternative to the queue for tar_outdated() etc. | topic: performance | ## Prework
* [x] I understand and agree to `targets`' [code of conduct](https://github.com/wlandau/targets/blob/master/CODE_OF_CONDUCT.md).
* [x] I understand and agree to `targets`' [contributing guidelines](https://github.com/wlandau/targets/blob/master/CONTRIBUTING.md).
## Description
The queue contributes a bit of overhead in `tar_outdated()` and `tar_make()`. Maybe for those functions we can consider a subclass of the queue that just runs targets in topological order with no need to modify ranks. Latest results from an internal workflow:

| True | Consider a light alternative to the queue for tar_outdated() etc. - ## Prework
* [x] I understand and agree to `targets`' [code of conduct](https://github.com/wlandau/targets/blob/master/CODE_OF_CONDUCT.md).
* [x] I understand and agree to `targets`' [contributing guidelines](https://github.com/wlandau/targets/blob/master/CONTRIBUTING.md).
## Description
The queue contributes a bit of overhead in `tar_outdated()` and `tar_make()`. Maybe for those functions we can consider a subclass of the queue that just runs targets in topological order with no need to modify ranks. Latest results from an internal workflow:

| non_main | consider a light alternative to the queue for tar outdated etc prework i understand and agree to targets i understand and agree to targets description the queue contributes a bit of overhead in tar outdated and tar make maybe for those functions we can consider a subclass of the queue that just runs targets in topological order with no need to modify ranks latest results from an internal workflow | 0 |
3,801 | 16,417,886,495 | IssuesEvent | 2021-05-19 09:01:01 | fluttercommunity/community | https://api.github.com/repos/fluttercommunity/community | closed | Forking an abandoned project | discussion package maintainer wanted package: responsive_scaffold | # Discussion: [Fork for responsive_scaffold]
Hi there,
I started using [responsive_scaffold](https://github.com/fluttercommunity/responsive_scaffold) and enjoy using it. sadly the development has stagnated so I started my own fork adding functionality I needed. I would be happy if I can contribute those back but the project seems to be abandoned (no interaction in the issues, no PR's closed and no commits for the last year).
I've already migrated the package to null saftey and opened a PR for that but I don't see my efforts helping any time soon.
Is it possible to ask for a maintainer change? I'd be happy to help develop this package further :)
So @rodydavis what about that?
## Summary
[Tell us about the topic and why you'd like to start a discussion.]
| True | Forking an abandoned project - # Discussion: [Fork for responsive_scaffold]
Hi there,
I started using [responsive_scaffold](https://github.com/fluttercommunity/responsive_scaffold) and enjoy using it. sadly the development has stagnated so I started my own fork adding functionality I needed. I would be happy if I can contribute those back but the project seems to be abandoned (no interaction in the issues, no PR's closed and no commits for the last year).
I've already migrated the package to null saftey and opened a PR for that but I don't see my efforts helping any time soon.
Is it possible to ask for a maintainer change? I'd be happy to help develop this package further :)
So @rodydavis what about that?
## Summary
[Tell us about the topic and why you'd like to start a discussion.]
| main | forking an abandoned project discussion hi there i started using and enjoy using it sadly the development has stagnated so i started my own fork adding functionality i needed i would be happy if i can contribute those back but the project seems to be abandoned no interaction in the issues no pr s closed and no commits for the last year i ve already migrated the package to null saftey and opened a pr for that but i don t see my efforts helping any time soon is it possible to ask for a maintainer change i d be happy to help develop this package further so rodydavis what about that summary | 1 |
802 | 4,422,179,295 | IssuesEvent | 2016-08-16 00:59:17 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | Subsequent clone uses depth 1 | bug_report P3 waiting_on_maintainer | ##### Issue Type:
Bug report
##### Component Name:
git module
##### Ansible Version:
ansible 1.8.4
##### Environment:
Ubuntu 14.10. Installed using `sudo pip install ansible`
##### Summary:
I have two subsequent `git clone` from two different repos on github. The first clone uses `depth 1`. The second does not. Looks like Ansible still uses depth 1 for the second clone so checkout to a branch doesn't work.
##### Steps To Reproduce:
```yaml
- name: clone project
git:
repo: git@github.com:user/app_1.git
version: develop
dest: "{{ app_1_dir }}"
depth: 1
accept_hostkey: yes
- name: clone project
git:
repo: git@github.com:user/app_2.git
version: develop
dest: "{{ app_2_dir }}"
accept_hostkey: yes
```
##### Expected Results:
Both clones should be successful.
##### Actual Results:
```
PLAY [webservers] *************************************************************
GATHERING FACTS ***************************************************************
ok: [virtualbox]
TASK: [fail msg="These tasks were made for Ubuntu 14.04 LTS"] *****************
skipping: [virtualbox]
TASK: [clone project] *********************************************************
changed: [virtualbox]
TASK: [clone project] *********************************************************
failed: [virtualbox] => {"failed": true}
msg: Failed to checkout develop
```
When replaying -- the same result.
When commenting `depth: 1` in the first clone -- the same result.
When I remove both cloned repos and re-run -- it goes ok.
| True | Subsequent clone uses depth 1 - ##### Issue Type:
Bug report
##### Component Name:
git module
##### Ansible Version:
ansible 1.8.4
##### Environment:
Ubuntu 14.10. Installed using `sudo pip install ansible`
##### Summary:
I have two subsequent `git clone` from two different repos on github. The first clone uses `depth 1`. The second does not. Looks like Ansible still uses depth 1 for the second clone so checkout to a branch doesn't work.
##### Steps To Reproduce:
```yaml
- name: clone project
git:
repo: git@github.com:user/app_1.git
version: develop
dest: "{{ app_1_dir }}"
depth: 1
accept_hostkey: yes
- name: clone project
git:
repo: git@github.com:user/app_2.git
version: develop
dest: "{{ app_2_dir }}"
accept_hostkey: yes
```
##### Expected Results:
Both clones should be successful.
##### Actual Results:
```
PLAY [webservers] *************************************************************
GATHERING FACTS ***************************************************************
ok: [virtualbox]
TASK: [fail msg="These tasks were made for Ubuntu 14.04 LTS"] *****************
skipping: [virtualbox]
TASK: [clone project] *********************************************************
changed: [virtualbox]
TASK: [clone project] *********************************************************
failed: [virtualbox] => {"failed": true}
msg: Failed to checkout develop
```
When replaying -- the same result.
When commenting `depth: 1` in the first clone -- the same result.
When I remove both cloned repos and re-run -- it goes ok.
| main | subsequent clone uses depth issue type bug report component name git module ansible version ansible environment ubuntu installed using sudo pip install ansible summary i have two subsequent git clone from two different repos on github the first clone uses depth the second does not looks like ansible still uses depth for the second clone so checkout to a branch doesn t work steps to reproduce yaml name clone project git repo git github com user app git version develop dest app dir depth accept hostkey yes name clone project git repo git github com user app git version develop dest app dir accept hostkey yes expected results both clones should be successful actual results play gathering facts ok task skipping task changed task failed failed true msg failed to checkout develop when replaying the same result when commenting depth in the first clone the same result when i remove both cloned repos and re run it goes ok | 1 |
984 | 4,750,365,740 | IssuesEvent | 2016-10-22 09:34:28 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | AttributeError: 'NoneType' object has no attribute 'describe_alarms' when creating new ec2_metric_alarm | affects_1.8 aws bug_report cloud waiting_on_maintainer | ##### ISSUE TYPE
Bug Report
##### COMPONENT NAME
ec2_metric_alarm module
##### ANSIBLE VERSION
ansible (1.8.4)
##### SUMMARY
Versions run on:
ansible (1.8.4)
boto (2.36.0)
botocore (0.93.0)
Playbook run:
- name: Create a Cloudwatch metric alarm for up scaling and associate it with a Scaling Policy
ec2_metric_alarm:
name: Product 15.1.1.0.0 opsdev upscale
description: Triggered when the CPU of a node is more than 50% for 5 minutes
namespace: "AWS/EC2"
metric: CPUUtilization
comparison: ">"
threshold: 50.0
unit: Percent
period: 100
evaluation_periods: 3
statistic: Average
dimensions: {'AutoScalingGroupName':'Product opsdev'}
state: present
alarm_actions: "{{ sp_up_result.arn }}"
The sp_up_result variable is an ec2_scaling_policy task register as demonstrated in this link (http://stackoverflow.com/questions/24686407/unable-to-retrieve-aws-scaling-policy-information-from-ec2-scaling-policy-module).
Error is the following:
failed: [localhost] => {"failed": true, "parsed": false}
Traceback (most recent call last):
File "/home/lostmimic/.ansible/tmp/ansible-tmp-1425074403.05-269505839436583/ec2_metric_alarm", line 2069, in <module>
main()
File "/home/lostmimic/.ansible/tmp/ansible-tmp-1425074403.05-269505839436583/ec2_metric_alarm", line 2065, in main
create_metric_alarm(connection, module)
File "/home/lostmimic/.ansible/tmp/ansible-tmp-1425074403.05-269505839436583/ec2_metric_alarm", line 1934, in create_metric_alarm
alarms = connection.describe_alarms(alarm_names=[name])
AttributeError: 'NoneType' object has no attribute 'describe_alarms'
| True | AttributeError: 'NoneType' object has no attribute 'describe_alarms' when creating new ec2_metric_alarm - ##### ISSUE TYPE
Bug Report
##### COMPONENT NAME
ec2_metric_alarm module
##### ANSIBLE VERSION
ansible (1.8.4)
##### SUMMARY
Versions run on:
ansible (1.8.4)
boto (2.36.0)
botocore (0.93.0)
Playbook run:
- name: Create a Cloudwatch metric alarm for up scaling and associate it with a Scaling Policy
ec2_metric_alarm:
name: Product 15.1.1.0.0 opsdev upscale
description: Triggered when the CPU of a node is more than 50% for 5 minutes
namespace: "AWS/EC2"
metric: CPUUtilization
comparison: ">"
threshold: 50.0
unit: Percent
period: 100
evaluation_periods: 3
statistic: Average
dimensions: {'AutoScalingGroupName':'Product opsdev'}
state: present
alarm_actions: "{{ sp_up_result.arn }}"
The sp_up_result variable is an ec2_scaling_policy task register as demonstrated in this link (http://stackoverflow.com/questions/24686407/unable-to-retrieve-aws-scaling-policy-information-from-ec2-scaling-policy-module).
Error is the following:
failed: [localhost] => {"failed": true, "parsed": false}
Traceback (most recent call last):
File "/home/lostmimic/.ansible/tmp/ansible-tmp-1425074403.05-269505839436583/ec2_metric_alarm", line 2069, in <module>
main()
File "/home/lostmimic/.ansible/tmp/ansible-tmp-1425074403.05-269505839436583/ec2_metric_alarm", line 2065, in main
create_metric_alarm(connection, module)
File "/home/lostmimic/.ansible/tmp/ansible-tmp-1425074403.05-269505839436583/ec2_metric_alarm", line 1934, in create_metric_alarm
alarms = connection.describe_alarms(alarm_names=[name])
AttributeError: 'NoneType' object has no attribute 'describe_alarms'
| main | attributeerror nonetype object has no attribute describe alarms when creating new metric alarm issue type bug report component name metric alarm module ansible version ansible summary versions run on ansible boto botocore playbook run name create a cloudwatch metric alarm for up scaling and associate it with a scaling policy metric alarm name product opsdev upscale description triggered when the cpu of a node is more than for minutes namespace aws metric cpuutilization comparison threshold unit percent period evaluation periods statistic average dimensions autoscalinggroupname product opsdev state present alarm actions sp up result arn the sp up result variable is an scaling policy task register as demonstrated in this link error is the following failed failed true parsed false traceback most recent call last file home lostmimic ansible tmp ansible tmp metric alarm line in main file home lostmimic ansible tmp ansible tmp metric alarm line in main create metric alarm connection module file home lostmimic ansible tmp ansible tmp metric alarm line in create metric alarm alarms connection describe alarms alarm names attributeerror nonetype object has no attribute describe alarms | 1 |
878 | 4,541,245,148 | IssuesEvent | 2016-09-09 17:09:11 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | azure_rm_virualmashine issue | affects_2.1 azure bug_report cloud waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
- Feature Idea
- Documentation Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
azure_rm_virtualmachine module
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible-2.1.0.0-1.fc23.noarch
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
Python 2.7.11
Modules:
azure (2.0.0rc5)
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
fedora 23
##### SUMMARY
<!--- Explain the problem briefly -->
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes below -->
```
---
- hosts: localhost
connection: local
gather_facts: false
become: false
vars_files:
# - environments/Azure/azure_credentials_encrypted.yml
- ../../inventory/environments/Azure/azure_credentials_encrypted_temp_passwd.yml
vars:
roles:
- create_azure_vm
And roles/create_azure_vm/main.yml
- name: Create VM with defaults
azure_rm_virtualmachine:
resource_group: Testing
name: testvm10
admin_username: test_user
admin_password: test_vm
image:
offer: CentOS
publisher: OpenLogic
sku: '7.1'
version: latest
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
creatiion of VM.
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
PLAYBOOK: provision_azure_playbook.yml *****************************************
1 plays in provision_azure_playbook.yml
PLAY [localhost] ***************************************************************
TASK [create_azure_vm : Create VM with defaults] *******************************
task path: /ansible/ansible_home/roles/create_azure_vm/tasks/main.yml:3
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: snemirovsky
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045 `" && echo ansible-tmp-1470326423.51-208881287834045="` echo $HOME/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045 `" ) && sleep 0'
<127.0.0.1> PUT /tmp/tmpiYFkuQ TO /home/snemirovsky/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045/azure_rm_virtualmachine
<127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/snemirovsky/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045/azure_rm_virtualmachine; rm -rf "/home/snemirovsky/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045/" > /dev/null 2>&1 && sleep 0'
An exception occurred during task execution. The full traceback is:
Traceback (most recent call last):
File "/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py", line 1284, in <module>
main()
File "/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py", line 1281, in main
AzureRMVirtualMachine()
File "/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py", line 487, in __init__
for key in VirtualMachineSizeTypes:
NameError: global name 'VirtualMachineSizeTypes' is not defined
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_name": "azure_rm_virtualmachine"}, "module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py\", line 1284, in <module>\n main()\n File \"/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py\", line 1281, in main\n AzureRMVirtualMachine()\n File \"/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py\", line 487, in __init__\n for key in VirtualMachineSizeTypes:\nNameError: global name 'VirtualMachineSizeTypes' is not defined\n", "module_stdout": "", "msg": "MODULE FAILURE", "parsed": false}
NO MORE HOSTS LEFT *************************************************************
to retry, use: --limit @provision_azure_playbook.retry
PLAY RECAP *********************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1
<!--- Paste verbatim command output between quotes below -->
```
PLAYBOOK: provision_azure_playbook.yml *****************************************
1 plays in provision_azure_playbook.yml
PLAY [localhost] ***************************************************************
TASK [create_azure_vm : Create VM with defaults] *******************************
task path: /ansible/ansible_home/roles/create_azure_vm/tasks/main.yml:3
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: snemirovsky
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045 `" && echo ansible-tmp-1470326423.51-208881287834045="` echo $HOME/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045 `" ) && sleep 0'
<127.0.0.1> PUT /tmp/tmpiYFkuQ TO /home/snemirovsky/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045/azure_rm_virtualmachine
<127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/snemirovsky/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045/azure_rm_virtualmachine; rm -rf "/home/snemirovsky/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045/" > /dev/null 2>&1 && sleep 0'
An exception occurred during task execution. The full traceback is:
Traceback (most recent call last):
File "/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py", line 1284, in <module>
main()
File "/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py", line 1281, in main
AzureRMVirtualMachine()
File "/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py", line 487, in __init__
for key in VirtualMachineSizeTypes:
NameError: global name 'VirtualMachineSizeTypes' is not defined
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_name": "azure_rm_virtualmachine"}, "module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py\", line 1284, in <module>\n main()\n File \"/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py\", line 1281, in main\n AzureRMVirtualMachine()\n File \"/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py\", line 487, in __init__\n for key in VirtualMachineSizeTypes:\nNameError: global name 'VirtualMachineSizeTypes' is not defined\n", "module_stdout": "", "msg": "MODULE FAILURE", "parsed": false}
NO MORE HOSTS LEFT *************************************************************
to retry, use: --limit @provision_azure_playbook.retry
PLAY RECAP *********************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1
```
| True | azure_rm_virualmashine issue - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
- Feature Idea
- Documentation Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
azure_rm_virtualmachine module
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible-2.1.0.0-1.fc23.noarch
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
Python 2.7.11
Modules:
azure (2.0.0rc5)
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
fedora 23
##### SUMMARY
<!--- Explain the problem briefly -->
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes below -->
```
---
- hosts: localhost
connection: local
gather_facts: false
become: false
vars_files:
# - environments/Azure/azure_credentials_encrypted.yml
- ../../inventory/environments/Azure/azure_credentials_encrypted_temp_passwd.yml
vars:
roles:
- create_azure_vm
And roles/create_azure_vm/main.yml
- name: Create VM with defaults
azure_rm_virtualmachine:
resource_group: Testing
name: testvm10
admin_username: test_user
admin_password: test_vm
image:
offer: CentOS
publisher: OpenLogic
sku: '7.1'
version: latest
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
creatiion of VM.
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
PLAYBOOK: provision_azure_playbook.yml *****************************************
1 plays in provision_azure_playbook.yml
PLAY [localhost] ***************************************************************
TASK [create_azure_vm : Create VM with defaults] *******************************
task path: /ansible/ansible_home/roles/create_azure_vm/tasks/main.yml:3
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: snemirovsky
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045 `" && echo ansible-tmp-1470326423.51-208881287834045="` echo $HOME/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045 `" ) && sleep 0'
<127.0.0.1> PUT /tmp/tmpiYFkuQ TO /home/snemirovsky/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045/azure_rm_virtualmachine
<127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/snemirovsky/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045/azure_rm_virtualmachine; rm -rf "/home/snemirovsky/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045/" > /dev/null 2>&1 && sleep 0'
An exception occurred during task execution. The full traceback is:
Traceback (most recent call last):
File "/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py", line 1284, in <module>
main()
File "/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py", line 1281, in main
AzureRMVirtualMachine()
File "/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py", line 487, in __init__
for key in VirtualMachineSizeTypes:
NameError: global name 'VirtualMachineSizeTypes' is not defined
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_name": "azure_rm_virtualmachine"}, "module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py\", line 1284, in <module>\n main()\n File \"/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py\", line 1281, in main\n AzureRMVirtualMachine()\n File \"/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py\", line 487, in __init__\n for key in VirtualMachineSizeTypes:\nNameError: global name 'VirtualMachineSizeTypes' is not defined\n", "module_stdout": "", "msg": "MODULE FAILURE", "parsed": false}
NO MORE HOSTS LEFT *************************************************************
to retry, use: --limit @provision_azure_playbook.retry
PLAY RECAP *********************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1
<!--- Paste verbatim command output between quotes below -->
```
PLAYBOOK: provision_azure_playbook.yml *****************************************
1 plays in provision_azure_playbook.yml
PLAY [localhost] ***************************************************************
TASK [create_azure_vm : Create VM with defaults] *******************************
task path: /ansible/ansible_home/roles/create_azure_vm/tasks/main.yml:3
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: snemirovsky
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045 `" && echo ansible-tmp-1470326423.51-208881287834045="` echo $HOME/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045 `" ) && sleep 0'
<127.0.0.1> PUT /tmp/tmpiYFkuQ TO /home/snemirovsky/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045/azure_rm_virtualmachine
<127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/snemirovsky/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045/azure_rm_virtualmachine; rm -rf "/home/snemirovsky/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045/" > /dev/null 2>&1 && sleep 0'
An exception occurred during task execution. The full traceback is:
Traceback (most recent call last):
File "/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py", line 1284, in <module>
main()
File "/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py", line 1281, in main
AzureRMVirtualMachine()
File "/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py", line 487, in __init__
for key in VirtualMachineSizeTypes:
NameError: global name 'VirtualMachineSizeTypes' is not defined
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_name": "azure_rm_virtualmachine"}, "module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py\", line 1284, in <module>\n main()\n File \"/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py\", line 1281, in main\n AzureRMVirtualMachine()\n File \"/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py\", line 487, in __init__\n for key in VirtualMachineSizeTypes:\nNameError: global name 'VirtualMachineSizeTypes' is not defined\n", "module_stdout": "", "msg": "MODULE FAILURE", "parsed": false}
NO MORE HOSTS LEFT *************************************************************
to retry, use: --limit @provision_azure_playbook.retry
PLAY RECAP *********************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1
```
| main | azure rm virualmashine issue issue type bug report feature idea documentation report component name azure rm virtualmachine module ansible version ansible noarch configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables python modules azure os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific fedora summary steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used hosts localhost connection local gather facts false become false vars files environments azure azure credentials encrypted yml inventory environments azure azure credentials encrypted temp passwd yml vars roles create azure vm and roles create azure vm main yml name create vm with defaults azure rm virtualmachine resource group testing name admin username test user admin password test vm image offer centos publisher openlogic sku version latest expected results creatiion of vm actual results playbook provision azure playbook yml plays in provision azure playbook yml play task task path ansible ansible home roles create azure vm tasks main yml establish local connection for user snemirovsky exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp tmpiyfkuq to home snemirovsky ansible tmp ansible tmp azure rm virtualmachine exec bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python home snemirovsky ansible tmp ansible tmp azure rm virtualmachine rm rf home snemirovsky ansible tmp ansible tmp dev null sleep an exception occurred during task execution the full traceback is traceback most recent call last file tmp ansible ansible module azure rm virtualmachine py line in main file tmp ansible ansible module azure rm virtualmachine py line in main azurermvirtualmachine file tmp ansible ansible 
module azure rm virtualmachine py line in init for key in virtualmachinesizetypes nameerror global name virtualmachinesizetypes is not defined fatal failed changed false failed true invocation module name azure rm virtualmachine module stderr traceback most recent call last n file tmp ansible ansible module azure rm virtualmachine py line in n main n file tmp ansible ansible module azure rm virtualmachine py line in main n azurermvirtualmachine n file tmp ansible ansible module azure rm virtualmachine py line in init n for key in virtualmachinesizetypes nnameerror global name virtualmachinesizetypes is not defined n module stdout msg module failure parsed false no more hosts left to retry use limit provision azure playbook retry play recap localhost ok changed unreachable failed playbook provision azure playbook yml plays in provision azure playbook yml play task task path ansible ansible home roles create azure vm tasks main yml establish local connection for user snemirovsky exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp tmpiyfkuq to home snemirovsky ansible tmp ansible tmp azure rm virtualmachine exec bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python home snemirovsky ansible tmp ansible tmp azure rm virtualmachine rm rf home snemirovsky ansible tmp ansible tmp dev null sleep an exception occurred during task execution the full traceback is traceback most recent call last file tmp ansible ansible module azure rm virtualmachine py line in main file tmp ansible ansible module azure rm virtualmachine py line in main azurermvirtualmachine file tmp ansible ansible module azure rm virtualmachine py line in init for key in virtualmachinesizetypes nameerror global name virtualmachinesizetypes is not defined fatal failed changed false failed true invocation module name azure rm virtualmachine module stderr traceback most recent call last n file tmp ansible ansible 
module azure rm virtualmachine py line in n main n file tmp ansible ansible module azure rm virtualmachine py line in main n azurermvirtualmachine n file tmp ansible ansible module azure rm virtualmachine py line in init n for key in virtualmachinesizetypes nnameerror global name virtualmachinesizetypes is not defined n module stdout msg module failure parsed false no more hosts left to retry use limit provision azure playbook retry play recap localhost ok changed unreachable failed | 1 |
5,001 | 25,725,106,677 | IssuesEvent | 2022-12-07 16:08:39 | aws/aws-sam-cli | https://api.github.com/repos/aws/aws-sam-cli | closed | `pywintypes-stubs` is now available through `types-pywin32` | contributors/good-first-issue contributors/welcome maintainer/need-followup | Just a heads-up that `types-pywin32` has been added to typeshed, so you don't need to ignore `pywintypes` for mypy and pylint anymore. | True | `pywintypes-stubs` is now available through `types-pywin32` - Just a heads-up that `types-pywin32` has been added to typeshed, so you don't need to ignore `pywintypes` for mypy and pylint anymore. | main | pywintypes stubs is now available through types just a heads up that types has been added to typeshed so you don t need to ignore pywintypes for mypy and pylint anymore | 1 |
5,109 | 26,031,856,248 | IssuesEvent | 2022-12-21 22:16:11 | aws/aws-sam-cli | https://api.github.com/repos/aws/aws-sam-cli | closed | Unable to deploy successfully when using S3 access point | type/feature stage/pm-review maintainer/need-response | ## Description
Not sure if this is a bug per se, or just a missing feature due to CloudFormation limitations. Basically, I use sam to deploy resources to multiple AWS accounts, but was wondering if a central S3 bucket can be used for artifacts.
SAM itself logs an expected regex for `--s3-bucket` that appears to allow S3 access points.
### Steps to reproduce
* Create a simple sam template
* Configure a public S3 bucket with an access point
* Attempt to deploy
I attempted to create this within the same AWS account. Running `sam deploy` with `--s3-bucket <local_bucket_name>` deploys successfully.
### Observed result
```
$ sam deploy --debug --stack-name test-stack --capabilities CAPABILITY_IAM --s3-bucket arn:aws:s3:eu-west-1:<aws_account_id>:accesspoint/<accesspoint_name> --s3-prefix test
Deploying with following values
===============================
Stack name : test-stack
Region : eu-west-1
Confirm changeset : False
Deployment s3 bucket : <same as above>
Capabilities : ["CAPABILITY_IAM"]
Parameter overrides : {}
Initiating deployment
=====================
No Parameters detected in the template
1 resources found in the template
Found Serverless function with name='TestFunc' and CodeUri='.'
Uploading to test/123456789abcdefghijklmnopqrstuvwxyz.template 591 / 591.0 (100.00%)
Error: Failed to create changeset for the stack: test-stack, An error occurred (InternalFailure) when calling the CreateChangeSet operation (reached max retries: 4): Unknown
```
### Expected result
Stack is successfully created, with artifacts stored in a central S3 bucket
### Additional environment details (Ex: Windows, Mac, Amazon Linux etc)
1. OS:
Mac
2. `sam --version`:
SAM CLI, version 1.2.0
| True | Unable to deploy successfully when using S3 access point - ## Description
Not sure if this is a bug per se, or just a missing feature due to CloudFormation limitations. Basically, I use sam to deploy resources to multiple AWS accounts, but was wondering if a central S3 bucket can be used for artifacts.
SAM itself logs an expected regex for `--s3-bucket` that appears to allow S3 access points.
### Steps to reproduce
* Create a simple sam template
* Configure a public S3 bucket with an access point
* Attempt to deploy
I attempted to create this within the same AWS account. Running `sam deploy` with `--s3-bucket <local_bucket_name>` deploys successfully.
### Observed result
```
$ sam deploy --debug --stack-name test-stack --capabilities CAPABILITY_IAM --s3-bucket arn:aws:s3:eu-west-1:<aws_account_id>:accesspoint/<accesspoint_name> --s3-prefix test
Deploying with following values
===============================
Stack name : test-stack
Region : eu-west-1
Confirm changeset : False
Deployment s3 bucket : <same as above>
Capabilities : ["CAPABILITY_IAM"]
Parameter overrides : {}
Initiating deployment
=====================
No Parameters detected in the template
1 resources found in the template
Found Serverless function with name='TestFunc' and CodeUri='.'
Uploading to test/123456789abcdefghijklmnopqrstuvwxyz.template 591 / 591.0 (100.00%)
Error: Failed to create changeset for the stack: test-stack, An error occurred (InternalFailure) when calling the CreateChangeSet operation (reached max retries: 4): Unknown
```
### Expected result
Stack is successfully created, with artifacts stored in a central S3 bucket
### Additional environment details (Ex: Windows, Mac, Amazon Linux etc)
1. OS:
Mac
2. `sam --version`:
SAM CLI, version 1.2.0
| main | unable to deploy successfully when using access point description not sure if this a bug per se or just a lacking feature due to cloudformation limitations basically i use sam to deploy resources to multiple aws accounts but was wondering if a central bucket can be used for artifacts sam itself logs an expected regex to bucket that seems to support access points steps to reproduce create a simply sam template configure a public bucket with an access point attempt to deploy i attempted to create this within the same aws account running sam deploy with bucket deploys successfully observed result sam deploy debug stack name test stack capabilities capability iam bucket arn aws eu west accesspoint prefix test deploying with following values stack name test stack region eu west confirm changeset false deployment bucket capabilities parameter overrides initiating deployment no parameters detected in the template resources found in the template found serverless function with name testfunc and codeuri uploading to test template error failed to create changeset for the stack test stack an error occurred internalfailure when calling the createchangeset operation reached max retries unknown expected result stack is successfully created with artifacts stored in a central bucket additional environment details ex windows mac amazon linux etc os mac sam version sam cli version | 1 |
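The failing `--s3-bucket` value above is an access-point ARN, which has a different shape than a plain bucket name. A minimal Python sketch of that distinction (the patterns below are illustrative assumptions for this sketch, not the actual regex the SAM CLI validates against):

```python
import re

# Illustrative patterns only; assumptions for this sketch, not SAM CLI's
# actual --s3-bucket validation.
BUCKET_RE = re.compile(r"^[a-z0-9][a-z0-9.\-]{1,61}[a-z0-9]$")
ACCESS_POINT_RE = re.compile(
    r"^arn:aws[a-zA-Z-]*:s3:[a-z0-9-]+:\d{12}:accesspoint[/:][a-zA-Z0-9\-]{3,50}$"
)

def classify_s3_target(value: str) -> str:
    """Classify an --s3-bucket value as 'access-point', 'bucket', or 'unknown'."""
    if ACCESS_POINT_RE.match(value):
        return "access-point"
    if BUCKET_RE.match(value):
        return "bucket"
    return "unknown"
```

Note that an access-point ARN passing a client-side pattern like this does not guarantee the service accepts it, which is consistent with the `InternalFailure` seen from `CreateChangeSet` above.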
8,332 | 3,163,510,528 | IssuesEvent | 2015-09-20 10:41:17 | Pseudoagentur/soa-sentinel | https://api.github.com/repos/Pseudoagentur/soa-sentinel | closed | Elfinder package | documentation | After ```composer update``` I ran ```php artisan vendor:publish --force``` to update files and ```public/packages```, but ```public/packages/barryvdh/elfinder``` was not published; I had to run ```php artisan elfinder:publish``` to publish assets. Maybe this is how it should be)) just asking))) | 1.0 | Elfinder package - After ```composer update``` I ran ```php artisan vendor:publish --force``` to update files and ```public/packages```, but ```public/packages/barryvdh/elfinder``` was not published; I had to run ```php artisan elfinder:publish``` to publish assets. Maybe this is how it should be)) just asking))) | non_main | elfinder package after composer update i ran php artisan vendor publish force to update files and public packages but public packages barryvdh elfinder was not published i had to run php artisan elfinder publish to publish assets maybe this is how it should be just asking | 0
252,529 | 8,037,389,362 | IssuesEvent | 2018-07-30 12:29:39 | smartdevicelink/sdl_core | https://api.github.com/repos/smartdevicelink/sdl_core | opened | Needed fixes sending of UpdateDeviceList notification on device connect | Bug Contributor priority 1: High | ### Bug Report
Fixes sending of UpdateDeviceList notification on device connect
##### Description:
SDL has to notify the system with BC.UpdateDeviceList on device connect even
if the device does not have any SDL-enabled applications running.
The issue was introduced while porting open-source fixes for policy, since
open-source SDL currently notifies the system through a different transport
listener API - OnDeviceAdded/OnDeviceRemoved.
##### OS & Version Information
* OS/Version:
* SDL Core Version:
* Testing Against: | 1.0 | Needed fixes sending of UpdateDeviceList notification on device connect - ### Bug Report
Fixes sending of UpdateDeviceList notification on device connect
##### Description:
SDL has to notify the system with BC.UpdateDeviceList on device connect even
if the device does not have any SDL-enabled applications running.
The issue was introduced while porting open-source fixes for policy, since
open-source SDL currently notifies the system through a different transport
listener API - OnDeviceAdded/OnDeviceRemoved.
##### OS & Version Information
* OS/Version:
* SDL Core Version:
* Testing Against: | non_main | needed fixes sending of updatedevicelist notification on device connect bug report fixes sending of updatedevicelist notification on device connect description sdl has to notify system with bc updatedevicelist on device connect even if device does not have any sdl enabled applications running issue introduced during moving of open source fixes for policy since open source sdl currently notifies system by using another transport listener api ondeviceadded ondeviceremoved os version information os version sdl core version testing against | 0 |
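The required behavior, notifying on every device connect whether or not any SDL-enabled app is running, can be sketched as follows. This is a Python analogy with illustrative names; SDL Core itself is C++ and its actual classes differ:

```python
class DeviceListNotifier:
    """Sketch of the intended behavior: emit BC.UpdateDeviceList on every
    device connect, even when the device has no SDL-enabled apps running.
    Class and method names are illustrative, not SDL Core's actual API."""

    def __init__(self, send_to_hmi):
        # send_to_hmi: callable(method_name, params) that delivers the
        # notification to the system (HMI) side.
        self.send_to_hmi = send_to_hmi
        self.devices = []

    def on_device_connected(self, device_id):
        self.devices.append(device_id)
        # Notify unconditionally, not gated on registered applications.
        self.send_to_hmi("BC.UpdateDeviceList",
                         {"deviceList": list(self.devices)})
```

The point of the fix is that the notification fires from the connect event itself rather than only from the OnDeviceAdded/OnDeviceRemoved listener path.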
383,767 | 26,564,082,034 | IssuesEvent | 2023-01-20 18:23:50 | hackforla/expunge-assist | https://api.github.com/repos/hackforla/expunge-assist | opened | Update Research Wiki: RP1 - Information Gathering Survey | documentation role: research priority: high size: 1pt feature: wiki | ### Overview
The wiki is transforming into both an internal and public facing wiki, and new pages need to be created to document our research processes and outputs. We need to update the page about RP1 to provide new volunteers and potential partners with information about our ongoing research.
### Action Items
- [ ] Review example from Internship Project: [Research Plan 6](https://github.com/hackforla/internship/wiki/Research-Plan-6:-Intern-Intake-Interviews)
- [ ] Locate necessary information and files on our [Research Timeline](https://github.com/hackforla/expunge-assist/wiki/Expunge-Assist-Research-Plans-and-Goals)
- [ ] Research Plan
- [ ] Relevant docs & links
- [ ] Go to the [RP1 wiki page](https://github.com/hackforla/expunge-assist/wiki/Research-Plan-1:-Information-Gathering-Survey) and click "edit"
- [ ] Update section titled "Summary of Research"
- [ ] Under the subheading "Research Questions", write out key research questions using bullets
- [ ] Under the subheading "Purpose", write out the purpose/goals of this research
- [ ] Under the subheading "Methods", provide a 1-2 sentence summary of methods.
- [ ] Update section titled "Current Status"
- [ ] Write "complete", "ongoing", or "future research"
- [ ] Update section titled "Assets"
- [ ] Create subheading Research Plan (use h3 to generate markdown for subheadings)
- [ ] Add links to the research plan beneath heading
- [ ] Create subheading Google Drive Folder - RP 1
- [ ] Add link to survey [Google Drive folder](https://drive.google.com/drive/folders/15lFLxUEEQ8JFSFrWwNC4YAkwSO5OYyto)
- [ ] Update section titled "Relevant Issues"
- [ ] Using bullet points, add links to the github issues listed in [Research Timeline](https://github.com/hackforla/expunge-assist/wiki/Expunge-Assist-Research-Plans-and-Goals)
- [ ] Click "save page"
- [ ] Request peer review:
- [ ] Ask a team member to review this page
- [ ] Assign that team member to this issue
- [ ] Leave a comment on this issue tagging that team member and asking for feedback
- [ ] Peer reviewer:
- [ ] Read the page
- [ ] Click on all links to make sure they work
- [ ] Provide feedback by:
- [ ] Editing the wiki page (see instructions above). Please explain your changes in the field "edit message" before pressing "save page" _and/or_
- [ ] Writing suggestions as a comment to the issue.
- [ ] Make suggested changes (or explain why you're not making a suggested change in the comments to this issue).
- [ ] Close issue.
### Resources/Instructions
- [RP1 wiki page](https://github.com/hackforla/expunge-assist/wiki/Research-Plan-1:-Information-Gathering-Survey)
- [Research Timeline](https://github.com/hackforla/expunge-assist/wiki/Expunge-Assist-Research-Plans-and-Goals)
- [Example from Internship Project: Research Plan 6](https://github.com/hackforla/internship/wiki/Research-Plan-6:-Intern-Intake-Interviews)
- [RP 1 Google Drive folder](https://drive.google.com/drive/folders/15lFLxUEEQ8JFSFrWwNC4YAkwSO5OYyto)
### Related issues
- #774
- #714
- #723
- #473
- #401 | 1.0 | Update Research Wiki: RP1 - Information Gathering Survey - ### Overview
The wiki is transforming into both an internal and public facing wiki, and new pages need to be created to document our research processes and outputs. We need to update the page about RP1 to provide new volunteers and potential partners with information about our ongoing research.
### Action Items
- [ ] Review example from Internship Project: [Research Plan 6](https://github.com/hackforla/internship/wiki/Research-Plan-6:-Intern-Intake-Interviews)
- [ ] Locate necessary information and files on our [Research Timeline](https://github.com/hackforla/expunge-assist/wiki/Expunge-Assist-Research-Plans-and-Goals)
- [ ] Research Plan
- [ ] Relevant docs & links
- [ ] Go to the [RP1 wiki page](https://github.com/hackforla/expunge-assist/wiki/Research-Plan-1:-Information-Gathering-Survey) and click "edit"
- [ ] Update section titled "Summary of Research"
- [ ] Under the subheading "Research Questions", write out key research questions using bullets
- [ ] Under the subheading "Purpose", write out the purpose/goals of this research
- [ ] Under the subheading "Methods", provide a 1-2 sentence summary of methods.
- [ ] Update section titled "Current Status"
- [ ] Write "complete", "ongoing", or "future research"
- [ ] Update section titled "Assets"
- [ ] Create subheading Research Plan (use h3 to generate markdown for subheadings)
- [ ] Add links to the research plan beneath heading
- [ ] Create subheading Google Drive Folder - RP 1
- [ ] Add link to survey [Google Drive folder](https://drive.google.com/drive/folders/15lFLxUEEQ8JFSFrWwNC4YAkwSO5OYyto)
- [ ] Update section titled "Relevant Issues"
- [ ] Using bullet points, add links to the github issues listed in [Research Timeline](https://github.com/hackforla/expunge-assist/wiki/Expunge-Assist-Research-Plans-and-Goals)
- [ ] Click "save page"
- [ ] Request peer review:
- [ ] Ask a team member to review this page
- [ ] Assign that team member to this issue
- [ ] Leave a comment on this issue tagging that team member and asking for feedback
- [ ] Peer reviewer:
- [ ] Read the page
- [ ] Click on all links to make sure they work
- [ ] Provide feedback by:
- [ ] Editing the wiki page (see instructions above). Please explain your changes in the field "edit message" before pressing "save page" _and/or_
- [ ] Writing suggestions as a comment to the issue.
- [ ] Make suggested changes (or explain why you're not making a suggested change in the comments to this issue).
- [ ] Close issue.
### Resources/Instructions
- [RP1 wiki page](https://github.com/hackforla/expunge-assist/wiki/Research-Plan-1:-Information-Gathering-Survey)
- [Research Timeline](https://github.com/hackforla/expunge-assist/wiki/Expunge-Assist-Research-Plans-and-Goals)
- [Example from Internship Project: Research Plan 6](https://github.com/hackforla/internship/wiki/Research-Plan-6:-Intern-Intake-Interviews)
- [RP 1 Google Drive folder](https://drive.google.com/drive/folders/15lFLxUEEQ8JFSFrWwNC4YAkwSO5OYyto)
### Related issues
- #774
- #714
- #723
- #473
- #401 | non_main | update research wiki information gathering survey overview the wiki is transforming into both an internal and public facing wiki and new pages need to be created to document our research processes and outputs we need to update the page about to provide new volunteers and potential partners with information about our ongoing research action items review example from internship project locate necessary information and files on our research plan relevant docs links go to the and click edit update section titled summary of research under the subheading research questions write out key research questions using bullets under the subheading purpose write out the purpose goals of this research under the subheading methods provide a sentence summary of methods update section titled current status write complete ongoing or future research update section titled assets create subheading research plan use to generate markdown for subheadings add links to the research plan beneath heading create subheading google drive folder rp add link to survey update section titled relevant issues using bullet points add links to the github issues listed in click save page request peer review ask a team member to review this page assign that team member to this issue leave a comment on this issue tagging that team member and asking for feedback peer reviewer read the page click on all links to make sure they work provide feedback by editing the wiki page see instructions above please explain your changes in the field edit message before pressing save page and or writing suggestions as a comment to the issue make suggested changes or explain why you re not making a suggested change in the comments to this issue close issue resources instructions related issues | 0 |
622,184 | 19,609,672,022 | IssuesEvent | 2022-01-06 14:01:08 | StephanAkkerman/FinTwit_Bot | https://api.github.com/repos/StephanAkkerman/FinTwit_Bot | closed | Use Twitter API to get tweets from following | Important :exclamation: Top priority :1st_place_medal: | Divide tweets into multiple text channels:
Charts category:
- crypto
- stocks
Text category, Images, Other
images has pictures, but no $
other has text, but no $ | 1.0 | Use Twitter API to get tweets from following - Divide tweets into multiple text channels:
Charts category:
- crypto
- stocks
Text category, Images, Other
images has pictures, but no $
other has text, but no $ | non_main | use twitter api to get tweets from following divide tweets into multiple text channels charts category crypto stocks text category images other images has pictures but no other has text but no | 0 |
914 | 4,600,206,779 | IssuesEvent | 2016-09-22 03:21:19 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | os_image module does not delete image | affects_2.1 bug_report cloud waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
os_image
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.1.0.0
config file =
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
None.
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
OSX 10.11.13
##### SUMMARY
<!--- Explain the problem briefly -->
When trying to use Ansible to delete an OpenStack image, an error saying "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER" for auth values is specified.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes below -->
```
---
- name: Delete the image.
hosts: localhost
tasks:
- name: Delete the old OpenStack image version
os_image:
auth:
auth_url: http://my_openstack_server:5000/v3
password: mypassword
project_name: myproject
username: admin
name: CoreOS-certified-Docker
state: absent
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
At least, a connection to OpenStack
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
The following error is displayed:
<!--- Paste verbatim command output between quotes below -->
```
fatal: [localhost]: FAILED! => {"changed": false, "extra_data": null, "failed": true, "invocation": {"module_args": {"api_timeout": null, "auth": {"auth_url": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "project_name": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "username": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"}, "auth_type": null, "availability_zone": null, "cacert": null, "cert": null, "cloud": null, "container_format": "bare", "disk_format": "qcow2", "endpoint_type": "public", "filename": null, "is_public": false, "kernel": null, "key": null, "min_disk": 0, "min_ram": 0, "name": "CoreOS-certified-Docker", "owner": null, "properties": {}, "ramdisk": null, "region_name": null, "state": "absent", "timeout": 180, "verify": true, "wait": true}, "module_name": "os_image"}, "msg": "Error fetching image list: Could not determine a suitable URL for the plugin"}
``` | True | os_image module does not delete image - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
os_image
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.1.0.0
config file =
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
None.
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
OSX 10.11.13
##### SUMMARY
<!--- Explain the problem briefly -->
When trying to use Ansible to delete an OpenStack image, an error saying "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER" for auth values is specified.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes below -->
```
---
- name: Delete the image.
hosts: localhost
tasks:
- name: Delete the old OpenStack image version
os_image:
auth:
auth_url: http://my_openstack_server:5000/v3
password: mypassword
project_name: myproject
username: admin
name: CoreOS-certified-Docker
state: absent
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
At least, a connection to OpenStack
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
The following error is displayed:
<!--- Paste verbatim command output between quotes below -->
```
fatal: [localhost]: FAILED! => {"changed": false, "extra_data": null, "failed": true, "invocation": {"module_args": {"api_timeout": null, "auth": {"auth_url": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "project_name": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "username": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"}, "auth_type": null, "availability_zone": null, "cacert": null, "cert": null, "cloud": null, "container_format": "bare", "disk_format": "qcow2", "endpoint_type": "public", "filename": null, "is_public": false, "kernel": null, "key": null, "min_disk": 0, "min_ram": 0, "name": "CoreOS-certified-Docker", "owner": null, "properties": {}, "ramdisk": null, "region_name": null, "state": "absent", "timeout": 180, "verify": true, "wait": true}, "module_name": "os_image"}, "msg": "Error fetching image list: Could not determine a suitable URL for the plugin"}
``` | main | os image module does not delete image issue type bug report component name os image ansible version ansible config file configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables none os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific osx summary when trying to use ansible to delete an openstack image an error saying value specified in no log parameter for auth values is specified steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used name delete the image hosts localhost tasks name delete the old openstack image version os image auth auth url password mypassword project name myproject username admin name coreos certified docker state absent expected results at least a connection to openstack actual results the following error is displayed fatal failed changed false extra data null failed true invocation module args api timeout null auth auth url value specified in no log parameter password value specified in no log parameter project name value specified in no log parameter username value specified in no log parameter auth type null availability zone null cacert null cert null cloud null container format bare disk format endpoint type public filename null is public false kernel null key null min disk min ram name coreos certified docker owner null properties ramdisk null region name null state absent timeout verify true wait true module name os image msg error fetching image list could not determine a suitable url for the plugin | 1 |
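A side note on the `VALUE_SPECIFIED_IN_NO_LOG_PARAMETER` strings in the output: they are display-time sanitization of `no_log` parameters, not the cause of the failure. A rough Python sketch of that kind of masking (illustrative only; Ansible's real sanitizer is more involved):

```python
PLACEHOLDER = "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"

def mask_no_log(value, no_log_keys):
    """Recursively replace values of sensitive keys with the placeholder,
    loosely mimicking how no_log parameters end up masked in module output."""
    if isinstance(value, dict):
        return {
            k: PLACEHOLDER if k in no_log_keys else mask_no_log(v, no_log_keys)
            for k, v in value.items()
        }
    if isinstance(value, list):
        return [mask_no_log(v, no_log_keys) for v in value]
    return value
```

The masking explains the odd-looking invocation dump; the underlying error is the separate "Could not determine a suitable URL for the plugin" message.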
2,419 | 8,584,562,472 | IssuesEvent | 2018-11-13 23:15:16 | tgstation/tgstation | https://api.github.com/repos/tgstation/tgstation | closed | spawning a template/creating a new area does not update SSmapping.areas_in_z | Maintainability/Hinders improvements | [Round ID]: # Local server.
[Reproduction]: #
1 - Either:
- Spawn a new template with admin verbs ~~or code if you are a dirty downstream~~
- Use an item like a bluespace capsule
- Create a new area using a verb
2 - Check SSmapping. The new area is not included in areas_in_z
[Why is this an issue]: # Using SSmapping.areas_in_z lets us avoid running costly loops to get all areas in one z level. Anything that uses this may not return all the areas in that z level. | True | spawning a template/creating a new area does not update SSmapping.areas_in_z - [Round ID]: # Local server.
[Reproduction]: #
1 - Either:
- Spawn a new template with admin verbs ~~or code if you are a dirty downstream~~
- Use an item like a bluespace capsule
- Create a new area using a verb
2 - Check SSmapping. The new area is not included in areas_in_z
[Why is this an issue]: # Using SSmapping.areas_in_z lets us avoid running costly loops to get all areas in one z level. Anything that uses this may not return all the areas in that z level. | main | spawning a template creating a new area does not update ssmapping areas in z local server either spawn a new template with admin verbs or code if you are a dirty downstream use an item like a bluespace capsule create a new area using a verb check ssmapping the new area is not included in areas in z using ssmapping areas in z lets us avoid running costly loops to get all areas in one z level anything that uses this may not return all the areas in that z level | 1 |
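The fix pattern the report implies, updating the per-z cache at area creation time instead of only at map load, can be sketched as follows. Names are illustrative and this is a Python analogy; the game code itself is DM:

```python
from collections import defaultdict

class AreaRegistry:
    """Python analogy of SSmapping's per-z cache. The point of the report:
    every code path that creates an area (map load, template spawn, admin
    verb, bluespace capsule) must go through register(), or areas_in_z
    goes stale."""

    def __init__(self):
        self.areas_in_z = defaultdict(list)

    def register(self, area_name, z):
        self.areas_in_z[z].append(area_name)

    def areas_on_level(self, z):
        # Cheap cached lookup instead of a costly loop over every area.
        return list(self.areas_in_z[z])
```

Runtime-spawned areas that skip the registration hook are exactly the ones missing from `areas_in_z`.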
13,597 | 8,598,914,470 | IssuesEvent | 2018-11-15 23:31:00 | zaproxy/zaproxy | https://api.github.com/repos/zaproxy/zaproxy | closed | Include proper Accept header when requesting OpenAPI definitions | IdealFirstBug Usability add-on enhancement good first issue | From https://mozilla.logbot.info/websectools/20180816
(Raising issue so that this is not forgotten, in case the pull request is not open in the end.)
The OpenAPI add-on does not add the Accept header, which might lead the target to serve content other than the API definition.
Add-on:
OpenAPI Support | True | Include proper Accept header when requesting OpenAPI definitions - From https://mozilla.logbot.info/websectools/20180816
(Raising issue so that this is not forgotten, in case the pull request is not open in the end.)
The OpenAPI add-on does not add the Accept header, which might lead the target to serve content other than the API definition.
Add-on:
OpenAPI Support | non_main | include proper accept header when requesting openapi definitions from raising issue so that this is not forgotten in case the pull request is not open in the end the openapi add on does not add the accept header which might lead to the target to serve other content than the api definition add on openapi support | 0 |
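One possible shape of the fix, sketched in Python (the header value and helper name are assumptions for illustration; the actual add-on is written in Java and its exact Accept string may differ):

```python
import urllib.request

# Assumed header value for illustration: prefer JSON/YAML definitions
# over whatever HTML the server would serve by default.
OPENAPI_ACCEPT = "application/json, application/yaml;q=0.9, */*;q=0.1"

def build_definition_request(url: str) -> urllib.request.Request:
    """Build a request that tells the server we want the API definition,
    not an HTML page."""
    return urllib.request.Request(url, headers={"Accept": OPENAPI_ACCEPT})
```

`urllib.request.urlopen(build_definition_request(url))` would then fetch the definition with the explicit Accept header attached.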
9,089 | 4,413,688,691 | IssuesEvent | 2016-08-13 01:02:30 | facebook/osquery | https://api.github.com/repos/facebook/osquery | closed | Flaky tests: DaemonTests::test_5_daemon_sigint variance in return code | build/test test error | See:
```
5/9 Test #5: python_test_osqueryd .............***Failed 33.75 sec
.I0721 12:22:59.882045 26907 options.cpp:61] Verbose logging enabled by config option
I0721 12:22:59.882532 26907 daemon.cpp:38] Not starting the distributed query service: Distributed query service not enabled.
...FI0721 12:23:29.781241 26946 options.cpp:61] Verbose logging enabled by config option
I0721 12:23:29.781740 26946 daemon.cpp:38] Not starting the distributed query service: Distributed query service not enabled.
.
======================================================================
FAIL: test_5_daemon_sigint (__main__.DaemonTests)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/osquery/jenkins/workspace/osqueryPullRequestBuild/TargetSystem/centos7/tools/tests/test_base.py", line 455, in wrapper
raise exceptions[0][0]
AssertionError: -2 != 130
----------------------------------------------------------------------
Ran 6 tests in 33.695s
FAILED (failures=1)
Test (attempt 1) DaemonTests::test_5_daemon_sigint failed: -2 != 130 (test_base.py:437)
Test (attempt 2) DaemonTests::test_5_daemon_sigint failed: -2 != 130 (test_base.py:437)
Test (attempt 3) DaemonTests::test_5_daemon_sigint failed: -2 != 130 (test_base.py:437)
```
For an example see: https://jenkins.osquery.io/job/osqueryPullRequestBuild/2912/TargetSystem=centos7/console | 1.0 | Flaky tests: DaemonTests::test_5_daemon_sigint variance in return code - See:
```
5/9 Test #5: python_test_osqueryd .............***Failed 33.75 sec
.I0721 12:22:59.882045 26907 options.cpp:61] Verbose logging enabled by config option
I0721 12:22:59.882532 26907 daemon.cpp:38] Not starting the distributed query service: Distributed query service not enabled.
...FI0721 12:23:29.781241 26946 options.cpp:61] Verbose logging enabled by config option
I0721 12:23:29.781740 26946 daemon.cpp:38] Not starting the distributed query service: Distributed query service not enabled.
.
======================================================================
FAIL: test_5_daemon_sigint (__main__.DaemonTests)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/osquery/jenkins/workspace/osqueryPullRequestBuild/TargetSystem/centos7/tools/tests/test_base.py", line 455, in wrapper
raise exceptions[0][0]
AssertionError: -2 != 130
----------------------------------------------------------------------
Ran 6 tests in 33.695s
FAILED (failures=1)
Test (attempt 1) DaemonTests::test_5_daemon_sigint failed: -2 != 130 (test_base.py:437)
Test (attempt 2) DaemonTests::test_5_daemon_sigint failed: -2 != 130 (test_base.py:437)
Test (attempt 3) DaemonTests::test_5_daemon_sigint failed: -2 != 130 (test_base.py:437)
```
For an example see: https://jenkins.osquery.io/job/osqueryPullRequestBuild/2912/TargetSystem=centos7/console | non_main | flaky tests daemontests test daemon sigint variance in return code see test python test osqueryd failed sec options cpp verbose logging enabled by config option daemon cpp not starting the distributed query service distributed query service not enabled options cpp verbose logging enabled by config option daemon cpp not starting the distributed query service distributed query service not enabled fail test daemon sigint main daemontests traceback most recent call last file home osquery jenkins workspace osquerypullrequestbuild targetsystem tools tests test base py line in wrapper raise exceptions assertionerror ran tests in failed failures test attempt daemontests test daemon sigint failed test base py test attempt daemontests test daemon sigint failed test base py test attempt daemontests test daemon sigint failed test base py for an example see | 0 |