| Unnamed: 0 (int64, 1-832k) | id (float64, 2.49B-32.1B) | type (stringclasses, 1 value) | created_at (stringlengths, 19-19) | repo (stringlengths, 7-112) | repo_url (stringlengths, 36-141) | action (stringclasses, 3 values) | title (stringlengths, 3-438) | labels (stringlengths, 4-308) | body (stringlengths, 7-254k) | index (stringclasses, 7 values) | text_combine (stringlengths, 96-254k) | label (stringclasses, 2 values) | text (stringlengths, 96-246k) | binary_label (int64, 0-1) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
3,149 | 12,151,514,494 | IssuesEvent | 2020-04-24 20:07:16 | ocaml/opam-repository | https://api.github.com/repos/ocaml/opam-repository | closed | Recent update to Camlp4 4.04+1 broke package's META file | Stale needs maintainer action | A recent update of Camlp4 `4.04+1` broke the package. I noticed the META file was modified, so I replaced it with the old META file, which seems to have fixed the problem. | True | Recent update to Camlp4 4.04+1 broke package's META file - A recent update of Camlp4 `4.04+1` broke the package. I noticed the META file was modified, so I replaced it with the old META file, which seems to have fixed the problem. | main | recent update to broke package s meta file a recent update of broke the package i noticed the meta file was modified so i replaced it with the old meta file which seems to have fixed the problem | 1 |
3,664 | 14,964,448,034 | IssuesEvent | 2021-01-27 11:59:55 | RalfKoban/MiKo-Analyzers | https://api.github.com/repos/RalfKoban/MiKo-Analyzers | closed | Assert should be preceded and followed by a blank line | Area: analyzer Area: maintainability feature | A call to `Assert` should be preceded by a blank line if the preceding line contains a call to something that is not an `Assert`.
The reason is ease of reading (spotting asserts with ease).
Following should report a violation:
```c#
var x = 42;
var y = "something";
Assert.That(x, Is.EqualTo(42));
Assert.That(y, Is.EqualTo("something"));
```
While following should **not** report a violation:
```c#
var x = 42;
var y = "something";
Assert.That(x, Is.EqualTo(42));
Assert.That(y, Is.EqualTo("something"));
``` | True | Assert should be preceded and followed by a blank line - A call to `Assert` should be preceded by a blank line if the preceding line contains a call to something that is no `Assert`.
The reason is ease of reading (spotting asserts with ease).
Following should report a violation:
```c#
var x = 42;
var y = "something";
Assert.That(x, Is.EqualTo(42));
Assert.That(y, Is.EqualTo("something"));
```
While following should **not** report a violation:
```c#
var x = 42;
var y = "something";
Assert.That(x, Is.EqualTo(42));
Assert.That(y, Is.EqualTo("something"));
``` | main | assert should be preceded and followed by a blank line a call to assert should be preceded by a blank line if the preceding line contains a call to something that is no assert the reason is ease of reading spotting asserts with ease following should report a violation c var x var y something assert that x is equalto assert that y is equalto something while following should not report a violation c var x var y something assert that x is equalto assert that y is equalto something | 1 |
59,818 | 7,296,436,266 | IssuesEvent | 2018-02-26 10:45:06 | matomo-org/matomo | https://api.github.com/repos/matomo-org/matomo | closed | update checker display issues | c: Design / UI | Followup to #12463 and #12459, related to #12485
I don't have time to create a fix, so I'll just post it here:
The gif is still on the right and oddly the box disappears and creates another arrow.

| 1.0 | update checker display issues - Followup to #12463 and #12459, related to #12485
I don't have time to create a fix, so I'll just post it here:
The gif is still on the right and oddly the box disappears and creates another arrow.

| non_main | update checker display issues followup to and related to i don t have time to create a fix so i ll just post it here the gif is still on the right and oddly the box disappears and creates another arrow | 0 |
1,779 | 6,575,820,717 | IssuesEvent | 2017-09-11 17:27:16 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | mysql_user to change password fails with Ubuntu 16.04 on MariaDB | affects_2.1 bug_report waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
mysql_user
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.1.2.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
Using Ansible Tower 3.0.2
##### OS / ENVIRONMENT
Ubuntu 14.04
##### SUMMARY
mysql_user to change password fails with Ubuntu 16.04 on MariaDB. Exactly the same scripts work with Ubuntu 14.04. It seems to try to run the wrong command.
##### STEPS TO REPRODUCE
```
1. Create a host running Ubuntu 16.04
2. Install MariaDB Galera Server 10.0
3. Run the following command: mysql_user: name=debian-sys-maint host=localhost password={{ debian_dbpassword }} state=present login_user=root login_password={{ root_dbpassword }}
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
The password for "debian-sys-maint" is changed.
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
TASK [mariadb-cluster : Set common debian password] **************************** task path: /var/lib/awx/projects/_1137__ansible_obelisk/roles/mariadb-cluster/tasks/main.yml:13 <172.24.32.39> ESTABLISH SSH CONNECTION FOR USER: ubuntu <172.24.32.39> SSH: EXEC ssh -C -q -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ubuntu -o ConnectTimeout=10 -o ControlPath=/tmp/ansible_tower_dvzt1P/cp/ansible-ssh-%h-%p-%r 172.24.32.39 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1475559634.15-95308147439585 `" && echo ansible-tmp-1475559634.15-95308147439585="` echo $HOME/.ansible/tmp/ansible-tmp-1475559634.15-95308147439585 `" ) && sleep 0'"'"'' <172.24.32.39> PUT /tmp/tmpjR2PoI TO /home/ubuntu/.ansible/tmp/ansible-tmp-1475559634.15-95308147439585/mysql_user <172.24.32.39> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ubuntu -o ConnectTimeout=10 -o ControlPath=/tmp/ansible_tower_dvzt1P/cp/ansible-ssh-%h-%p-%r '[172.24.32.39]' <172.24.32.39> ESTABLISH SSH CONNECTION FOR USER: ubuntu <172.24.32.39> SSH: EXEC ssh -C -q -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ubuntu -o ConnectTimeout=10 -o ControlPath=/tmp/ansible_tower_dvzt1P/cp/ansible-ssh-%h-%p-%r 172.24.32.39 '/bin/sh -c '"'"'chmod u+x /home/ubuntu/.ansible/tmp/ansible-tmp-1475559634.15-95308147439585/ /home/ubuntu/.ansible/tmp/ansible-tmp-1475559634.15-95308147439585/mysql_user && sleep 0'"'"'' <172.24.32.39> ESTABLISH SSH CONNECTION FOR USER: ubuntu 
<172.24.32.39> SSH: EXEC ssh -C -q -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ubuntu -o ConnectTimeout=10 -o ControlPath=/tmp/ansible_tower_dvzt1P/cp/ansible-ssh-%h-%p-%r -tt 172.24.32.39 '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-mxijegtsjwtriiriwsiwjlltyyadswhu; LANG=en_AU.UTF-8 LC_ALL=en_AU.UTF-8 LC_MESSAGES=en_AU.UTF-8 /usr/bin/python /home/ubuntu/.ansible/tmp/ansible-tmp-1475559634.15-95308147439585/mysql_user; rm -rf "/home/ubuntu/.ansible/tmp/ansible-tmp-1475559634.15-95308147439585/" > /dev/null 2>&1'"'"'"'"'"'"'"'"' && sleep 0'"'"'' fatal: [Larry Database1]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_args": {"append_privs": false, "check_implicit_admin": false, "config_file": "/root/.my.cnf", "connect_timeout": 30, "encrypted": false, "host": "localhost", "host_all": false, "login_host": "localhost", "login_password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "login_port": 3306, "login_unix_socket": null, "login_user": "root", "name": "debian-sys-maint", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "priv": null, "sql_log_bin": true, "ssl_ca": null, "ssl_cert": null, "ssl_key": null, "state": "present", "update_password": "always", "user": "debian-sys-maint"}, "module_name": "mysql_user"}, "msg": "(1064, \"You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near ''' at line 1\")"}
```
| True | mysql_user to change password fails with Ubuntu 16.04 on MariaDB - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
mysql_user
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.1.2.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
Using Ansible Tower 3.0.2
##### OS / ENVIRONMENT
Ubuntu 14.04
##### SUMMARY
mysql_user to change password fails with Ubuntu 16.04 on MariaDB. Exactly the same scripts work with Ubuntu 14.04. It seems to try to run the wrong command.
##### STEPS TO REPRODUCE
```
1. Create a host running Ubuntu 16.04
2. Install MariaDB Galera Server 10.0
3. Run the following command: mysql_user: name=debian-sys-maint host=localhost password={{ debian_dbpassword }} state=present login_user=root login_password={{ root_dbpassword }}
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
The password for "debian-sys-maint" is changed.
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
TASK [mariadb-cluster : Set common debian password] **************************** task path: /var/lib/awx/projects/_1137__ansible_obelisk/roles/mariadb-cluster/tasks/main.yml:13 <172.24.32.39> ESTABLISH SSH CONNECTION FOR USER: ubuntu <172.24.32.39> SSH: EXEC ssh -C -q -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ubuntu -o ConnectTimeout=10 -o ControlPath=/tmp/ansible_tower_dvzt1P/cp/ansible-ssh-%h-%p-%r 172.24.32.39 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1475559634.15-95308147439585 `" && echo ansible-tmp-1475559634.15-95308147439585="` echo $HOME/.ansible/tmp/ansible-tmp-1475559634.15-95308147439585 `" ) && sleep 0'"'"'' <172.24.32.39> PUT /tmp/tmpjR2PoI TO /home/ubuntu/.ansible/tmp/ansible-tmp-1475559634.15-95308147439585/mysql_user <172.24.32.39> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ubuntu -o ConnectTimeout=10 -o ControlPath=/tmp/ansible_tower_dvzt1P/cp/ansible-ssh-%h-%p-%r '[172.24.32.39]' <172.24.32.39> ESTABLISH SSH CONNECTION FOR USER: ubuntu <172.24.32.39> SSH: EXEC ssh -C -q -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ubuntu -o ConnectTimeout=10 -o ControlPath=/tmp/ansible_tower_dvzt1P/cp/ansible-ssh-%h-%p-%r 172.24.32.39 '/bin/sh -c '"'"'chmod u+x /home/ubuntu/.ansible/tmp/ansible-tmp-1475559634.15-95308147439585/ /home/ubuntu/.ansible/tmp/ansible-tmp-1475559634.15-95308147439585/mysql_user && sleep 0'"'"'' <172.24.32.39> ESTABLISH SSH CONNECTION FOR USER: ubuntu 
<172.24.32.39> SSH: EXEC ssh -C -q -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ubuntu -o ConnectTimeout=10 -o ControlPath=/tmp/ansible_tower_dvzt1P/cp/ansible-ssh-%h-%p-%r -tt 172.24.32.39 '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-mxijegtsjwtriiriwsiwjlltyyadswhu; LANG=en_AU.UTF-8 LC_ALL=en_AU.UTF-8 LC_MESSAGES=en_AU.UTF-8 /usr/bin/python /home/ubuntu/.ansible/tmp/ansible-tmp-1475559634.15-95308147439585/mysql_user; rm -rf "/home/ubuntu/.ansible/tmp/ansible-tmp-1475559634.15-95308147439585/" > /dev/null 2>&1'"'"'"'"'"'"'"'"' && sleep 0'"'"'' fatal: [Larry Database1]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_args": {"append_privs": false, "check_implicit_admin": false, "config_file": "/root/.my.cnf", "connect_timeout": 30, "encrypted": false, "host": "localhost", "host_all": false, "login_host": "localhost", "login_password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "login_port": 3306, "login_unix_socket": null, "login_user": "root", "name": "debian-sys-maint", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "priv": null, "sql_log_bin": true, "ssl_ca": null, "ssl_cert": null, "ssl_key": null, "state": "present", "update_password": "always", "user": "debian-sys-maint"}, "module_name": "mysql_user"}, "msg": "(1064, \"You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near ''' at line 1\")"}
```
| main | mysql user to change password fails with ubuntu on mariadb issue type bug report component name mysql user ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration using ansible tower os environment ubuntu summary mysql user to change password fails with ubuntu on mariadb exactly the same scripts work with ubuntu it seems to try to run the wrong command steps to reproduce create a host running ubuntu install mariadb galera server run the following command mysql user name debian sys maint host localhost password debian dbpassword state present login user root login password root dbpassword expected results the password for debian sys maint is changed actual results task task path var lib awx projects ansible obelisk roles mariadb cluster tasks main yml establish ssh connection for user ubuntu ssh exec ssh c q o controlmaster auto o controlpersist o stricthostkeychecking no o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user ubuntu o connecttimeout o controlpath tmp ansible tower cp ansible ssh h p r bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to home ubuntu ansible tmp ansible tmp mysql user ssh exec sftp b c o controlmaster auto o controlpersist o stricthostkeychecking no o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user ubuntu o connecttimeout o controlpath tmp ansible tower cp ansible ssh h p r establish ssh connection for user ubuntu ssh exec ssh c q o controlmaster auto o controlpersist o stricthostkeychecking no o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user ubuntu o connecttimeout o controlpath tmp ansible tower cp ansible 
ssh h p r bin sh c chmod u x home ubuntu ansible tmp ansible tmp home ubuntu ansible tmp ansible tmp mysql user sleep establish ssh connection for user ubuntu ssh exec ssh c q o controlmaster auto o controlpersist o stricthostkeychecking no o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user ubuntu o connecttimeout o controlpath tmp ansible tower cp ansible ssh h p r tt bin sh c sudo h s n u root bin sh c echo become success mxijegtsjwtriiriwsiwjlltyyadswhu lang en au utf lc all en au utf lc messages en au utf usr bin python home ubuntu ansible tmp ansible tmp mysql user rm rf home ubuntu ansible tmp ansible tmp dev null sleep fatal failed changed false failed true invocation module args append privs false check implicit admin false config file root my cnf connect timeout encrypted false host localhost host all false login host localhost login password value specified in no log parameter login port login unix socket null login user root name debian sys maint password value specified in no log parameter priv null sql log bin true ssl ca null ssl cert null ssl key null state present update password always user debian sys maint module name mysql user msg you have an error in your sql syntax check the manual that corresponds to your mariadb server version for the right syntax to use near at line | 1 |
4,678 | 24,175,475,408 | IssuesEvent | 2022-09-23 00:50:00 | Pycord-Development/pycord | https://api.github.com/repos/Pycord-Development/pycord | closed | pages.Paginator.send() context does not accept BridgeContext | bug ext.pages (not maintained) ext.bridge | ### Summary
The context argument for paginator.send() function does not currently accept BridgeContext
### Reproduction Steps
1. Make a command using discord.ext.bridge
2. Create paginator object
3. Send the paginator object using `paginator.send()` and use context provided by the command as the context for the function
### Minimal Reproducible Code
```python
import discord
from discord.ext import bridge, commands, pages

client = bridge.Bot(command_prefix=commands.when_mentioned_or("!"), intents=discord.Intents.all())

@client.bridge_command()
async def paginate(ctx):
    embeds = []  # list of embeds, in actual code this will always be filled with at least 1 embed
    paginator = pages.Paginator(pages=embeds)
    await paginator.send(ctx=ctx)

client.run("Bot token")
```
### Expected Results
The page sent correctly.
### Actual Results
An error raised:
```
Ignoring exception in command paginate:
Traceback (most recent call last):
File "C:\Users\ACER\AppData\Local\Programs\Python\Python310\lib\site-packages\discord\commands\core.py", line 126, in wrapped
ret = await coro(arg)
File "C:\Users\ACER\AppData\Local\Programs\Python\Python310\lib\site-packages\discord\commands\core.py", line 856, in _invoke
await self.callback(ctx, **kwargs)
File "e:\Discord Bots (after return lol)\VoteBot\main.py", line 191, in paginate
await paginator.send(ctx=ctx)
File "C:\Users\ACER\AppData\Local\Programs\Python\Python310\lib\site-packages\discord\ext\pages\pagination.py", line 880, in send
raise TypeError(f"expected Context not {ctx.__class__!r}")
TypeError: expected Context not <class 'discord.ext.bridge.context.BridgeApplicationContext'>
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\ACER\AppData\Local\Programs\Python\Python310\lib\site-packages\discord\bot.py", line 993, in invoke_application_command
await ctx.command.invoke(ctx)
File "C:\Users\ACER\AppData\Local\Programs\Python\Python310\lib\site-packages\discord\commands\core.py", line 357, in invoke
await injected(ctx)
File "C:\Users\ACER\AppData\Local\Programs\Python\Python310\lib\site-packages\discord\commands\core.py", line 134, in wrapped
raise ApplicationCommandInvokeError(exc) from exc
discord.errors.ApplicationCommandInvokeError: Application Command raised an exception: TypeError: expected Context not <class 'discord.ext.bridge.context.BridgeApplicationContext'>
```
### Intents
discord.Intents.all()
### System Information
- Python v3.10.2-final
- py-cord v2.0.0-candidate
- py-cord pkg_resources: v2.0.0rc1
- aiohttp v3.8.1
- system info: Windows 10 10.0.19043
### Checklist
- [X] I have searched the open issues for duplicates.
- [X] I have shown the entire traceback, if possible.
- [X] I have removed my token from display, if visible.
### Additional Context
_No response_ | True | pages.Paginator.send() context does not accept BridgeContext - ### Summary
The context argument for paginator.send() function does not currently accept BridgeContext
### Reproduction Steps
1. Make a command using discord.ext.bridge
2. Create paginator object
3. Send the paginator object using `paginator.send()` and use context provided by the command as the context for the function
### Minimal Reproducible Code
```python
import discord
from discord.ext import bridge, commands, pages

client = bridge.Bot(command_prefix=commands.when_mentioned_or("!"), intents=discord.Intents.all())

@client.bridge_command()
async def paginate(ctx):
    embeds = []  # list of embeds, in actual code this will always be filled with at least 1 embed
    paginator = pages.Paginator(pages=embeds)
    await paginator.send(ctx=ctx)

client.run("Bot token")
```
### Expected Results
The page sent correctly.
### Actual Results
An error raised:
```
Ignoring exception in command paginate:
Traceback (most recent call last):
File "C:\Users\ACER\AppData\Local\Programs\Python\Python310\lib\site-packages\discord\commands\core.py", line 126, in wrapped
ret = await coro(arg)
File "C:\Users\ACER\AppData\Local\Programs\Python\Python310\lib\site-packages\discord\commands\core.py", line 856, in _invoke
await self.callback(ctx, **kwargs)
File "e:\Discord Bots (after return lol)\VoteBot\main.py", line 191, in paginate
await paginator.send(ctx=ctx)
File "C:\Users\ACER\AppData\Local\Programs\Python\Python310\lib\site-packages\discord\ext\pages\pagination.py", line 880, in send
raise TypeError(f"expected Context not {ctx.__class__!r}")
TypeError: expected Context not <class 'discord.ext.bridge.context.BridgeApplicationContext'>
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\ACER\AppData\Local\Programs\Python\Python310\lib\site-packages\discord\bot.py", line 993, in invoke_application_command
await ctx.command.invoke(ctx)
File "C:\Users\ACER\AppData\Local\Programs\Python\Python310\lib\site-packages\discord\commands\core.py", line 357, in invoke
await injected(ctx)
File "C:\Users\ACER\AppData\Local\Programs\Python\Python310\lib\site-packages\discord\commands\core.py", line 134, in wrapped
raise ApplicationCommandInvokeError(exc) from exc
discord.errors.ApplicationCommandInvokeError: Application Command raised an exception: TypeError: expected Context not <class 'discord.ext.bridge.context.BridgeApplicationContext'>
```
### Intents
discord.Intents.all()
### System Information
- Python v3.10.2-final
- py-cord v2.0.0-candidate
- py-cord pkg_resources: v2.0.0rc1
- aiohttp v3.8.1
- system info: Windows 10 10.0.19043
### Checklist
- [X] I have searched the open issues for duplicates.
- [X] I have shown the entire traceback, if possible.
- [X] I have removed my token from display, if visible.
### Additional Context
_No response_ | main | pages paginator send context does not accept bridgecontext summary the context argument for paginator send function does not currently accept bridgecontext reproduction steps make a command using discord ext bridge create paginator object send the paginator object using paginator send and use context provided by the command as the context for the function minimal reproducible code python from discord ext import bridge pages client bridge bot command prefix commands when mentioned or intents discord intents all client bridge command async def paginate ctx embeds list of embeds in actual code this will always be filled with at least embed paginator pages paginator pages embeds await paginator send ctx ctx client run bot token expected results the page sent correctly actual results an error raised ignoring exception in command paginate traceback most recent call last file c users acer appdata local programs python lib site packages discord commands core py line in wrapped ret await coro arg file c users acer appdata local programs python lib site packages discord commands core py line in invoke await self callback ctx kwargs file e discord bots after return lol votebot main py line in paginate await paginator send ctx ctx file c users acer appdata local programs python lib site packages discord ext pages pagination py line in send raise typeerror f expected context not ctx class r typeerror expected context not the above exception was the direct cause of the following exception traceback most recent call last file c users acer appdata local programs python lib site packages discord bot py line in invoke application command await ctx command invoke ctx file c users acer appdata local programs python lib site packages discord commands core py line in invoke await injected ctx file c users acer appdata local programs python lib site packages discord commands core py line in wrapped raise applicationcommandinvokeerror exc from exc discord errors 
applicationcommandinvokeerror application command raised an exception typeerror expected context not intents discord intents all system information python final py cord candidate py cord pkg resources aiohttp system info windows checklist i have searched the open issues for duplicates i have shown the entire traceback if possible i have removed my token from display if visible additional context no response | 1 |
5,127 | 26,141,952,060 | IssuesEvent | 2022-12-29 19:56:43 | backdrop-ops/contrib | https://api.github.com/repos/backdrop-ops/contrib | closed | Application to contributor community | Port in progress Maintainer application | I request joining the contributor community for Backdrop.
My first project is a port of the [Drupal 7 Real Name module](https://www.drupal.org/project/realname). [The repository for this port is here](https://github.com/bizmarkdev/backdrop-realname).
I agree to all items in the Backdrop Contributed Project Agreement.
Thanks for all you do. | True | Application to contributor community - I request joining the contributor community for Backdrop.
My first project is a port of the [Drupal 7 Real Name module](https://www.drupal.org/project/realname). [The repository for this port is here](https://github.com/bizmarkdev/backdrop-realname).
I agree to all items in the Backdrop Contributed Project Agreement.
Thanks for all you do. | main | application to contributor community i request joining the contributor community for backdrop my first project is a port of the i agree to all items in the backdrop contributed project agreement thanks for all you do | 1 |
177,116 | 13,684,013,237 | IssuesEvent | 2020-09-30 03:36:23 | kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines | closed | The compiler tests are brittle: They check that "nothing has changed" instead of verifying the intended behavior. | area/sdk area/testing help wanted lifecycle/stale priority/p2 | Good tests check the intended behavior of a feature.
Most of our compiler tests are only checking that "nothing has changed". Every small change affecting the attributes of a pipeline requires changing dozens of tests. https://travis-ci.com/kubeflow/pipelines/jobs/223317709 or https://github.com/kubeflow/pipelines/pull/1381/files
When the tests are brittle, it's also easy to overlook an actual error, since the tests break so often and in hard to debug ways.
We should replace many of the compiler tests with tests that check for the feature behavior.
Good examples:
https://github.com/kubeflow/pipelines/blob/6f2decf2b1660d16047e477b311cce21c3df1331/sdk/python/tests/compiler/compiler_tests.py#L526
https://github.com/kubeflow/pipelines/blob/6f2decf2b1660d16047e477b311cce21c3df1331/sdk/python/tests/compiler/compiler_tests.py#L543
https://github.com/kubeflow/pipelines/blob/6f2decf2b1660d16047e477b311cce21c3df1331/sdk/python/tests/compiler/compiler_tests.py#L568
| 1.0 | The compiler tests are brittle: They check that "nothing has changed" instead of verifying the intended behavior. - Good tests check the intended behavior of a feature.
Most of our compiler tests are only checking that "nothing has changed". Every small change affecting the attributes of a pipeline requires changing dozens of tests. https://travis-ci.com/kubeflow/pipelines/jobs/223317709 or https://github.com/kubeflow/pipelines/pull/1381/files
When the tests are brittle, it's also easy to overlook an actual error, since the tests break so often and in hard to debug ways.
We should replace many of the compiler tests with tests that check for the feature behavior.
Good examples:
https://github.com/kubeflow/pipelines/blob/6f2decf2b1660d16047e477b311cce21c3df1331/sdk/python/tests/compiler/compiler_tests.py#L526
https://github.com/kubeflow/pipelines/blob/6f2decf2b1660d16047e477b311cce21c3df1331/sdk/python/tests/compiler/compiler_tests.py#L543
https://github.com/kubeflow/pipelines/blob/6f2decf2b1660d16047e477b311cce21c3df1331/sdk/python/tests/compiler/compiler_tests.py#L568
| non_main | the compiler tests are brittle they check that nothing has changed instead of verifying the intended behavior good tests check the intended behavior of a feature most of our compiler tests are only checking that nothing has changed every small change affecting the attributes of a pipeline requires changing dozens of tests or when the tests are brittle it s also easy to overlook an actual error since the tests break so often and in hard to debug ways we should replace many of the compiler tests with test that check for the feature behavior good examples | 0 |
231,719 | 25,531,934,187 | IssuesEvent | 2022-11-29 09:07:44 | AOSC-Dev/aosc-os-abbs | https://api.github.com/repos/AOSC-Dev/aosc-os-abbs | closed | git: Several Vulnerabilities (2.37.1, CVE-2022-{39253,39260}) | security has-fix | ### CVE IDs
CVE-2022-{39253,39260}
### Other security advisory IDs
Ubuntu: https://ubuntu.com/security/notices/USN-5686-1
### Description
CVE-2022-39253:
When relying on the `--local` clone optimization, Git dereferences
symbolic links in the source repository before creating hardlinks
(or copies) of the dereferenced link in the destination repository.
This can lead to surprising behavior where arbitrary files are
present in a repository's `$GIT_DIR` when cloning from a malicious
repository.
Git will no longer dereference symbolic links via the `--local`
clone mechanism, and will instead refuse to clone repositories that
have symbolic links present in the `$GIT_DIR/objects` directory.
Additionally, the value of `protocol.file.allow` is changed to be
"user" by default.
CVE-2022-39260:
An overly-long command string given to `git shell` can result in
overflow in `split_cmdline()`, leading to arbitrary heap writes and
remote code execution when `git shell` is exposed and the directory
`$HOME/git-shell-commands` exists.
`git shell` is taught to refuse interactive commands that are
longer than 4MiB in size. `split_cmdline()` is hardened to reject
inputs larger than 2GiB.
### Patches
- Update to 2.38.1
### PoC(s)
N/A | True | git: Several Vulnerabilities (2.37.1, CVE-2022-{39253,39260}) - ### CVE IDs
CVE-2022-{39253,39260}
### Other security advisory IDs
Ubuntu: https://ubuntu.com/security/notices/USN-5686-1
### Description
CVE-2022-39253:
When relying on the `--local` clone optimization, Git dereferences
symbolic links in the source repository before creating hardlinks
(or copies) of the dereferenced link in the destination repository.
This can lead to surprising behavior where arbitrary files are
present in a repository's `$GIT_DIR` when cloning from a malicious
repository.
Git will no longer dereference symbolic links via the `--local`
clone mechanism, and will instead refuse to clone repositories that
have symbolic links present in the `$GIT_DIR/objects` directory.
Additionally, the value of `protocol.file.allow` is changed to be
"user" by default.
CVE-2022-39260:
An overly-long command string given to `git shell` can result in
overflow in `split_cmdline()`, leading to arbitrary heap writes and
remote code execution when `git shell` is exposed and the directory
`$HOME/git-shell-commands` exists.
`git shell` is taught to refuse interactive commands that are
longer than 4MiB in size. `split_cmdline()` is hardened to reject
inputs larger than 2GiB.
### Patches
- Update to 2.38.1
### PoC(s)
N/A | non_main | git several vulnerabilities cve cve ids cve other security advisory ids ubuntu description cve when relying on the local clone optimization git dereferences symbolic links in the source repository before creating hardlinks or copies of the dereferenced link in the destination repository this can lead to surprising behavior where arbitrary files are present in a repository s git dir when cloning from a malicious repository git will no longer dereference symbolic links via the local clone mechanism and will instead refuse to clone repositories that have symbolic links present in the git dir objects directory additionally the value of protocol file allow is changed to be user by default cve an overly long command string given to git shell can result in overflow in split cmdline leading to arbitrary heap writes and remote code execution when git shell is exposed and the directory home git shell commands exists git shell is taught to refuse interactive commands that are longer than in size split cmdline is hardened to reject inputs larger than patches update to poc s n a | 0 |
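The `$GIT_DIR/objects` symlink refusal described in this advisory is easy to mirror outside of Git. A minimal Python sketch (the function name and approach are mine, not Git's actual C implementation) that flags a clone source whose objects directory contains symbolic links:

```python
import os

def find_symlinks_in_objects(git_dir):
    """Return paths of symbolic links under <git_dir>/objects.

    Mirrors, in spirit, the 2.38.1 hardening: a local-clone source whose
    objects directory contains symlinks should be refused rather than
    having the links dereferenced into the destination repository.
    """
    objects_dir = os.path.join(git_dir, "objects")
    hits = []
    for root, dirs, files in os.walk(objects_dir):
        for name in dirs + files:
            path = os.path.join(root, name)
            if os.path.islink(path):
                hits.append(path)
    return hits
```

On a patched Git this guard, together with the new `protocol.file.allow` default of "user", is what closes CVE-2022-39253.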
3,221 | 12,342,462,604 | IssuesEvent | 2020-05-15 00:52:08 | frej/fast-export | https://api.github.com/repos/frej/fast-export | closed | ../fast-export/hg-fast-export.sh: 156: python: not found | not-available-to-maintainer user-support wintendo | I get an error when I migrate the Mercurial repository to Git. I am stuck on this step -- ../fast-export/hg-fast-export.sh -r ../demoapp
Windows 10 64 bit
Python 2.7.6 installed and in the path
Mercurial 4.8 installed, and in the path
Git 2.26.2.windows.1 installed, and in the path
Running the following command in git-bash:
../fast-export/hg-fast-export.sh -r ../demoapp/ --fe cp1251
Looking at ../fast-export/hg-fast-export.sh: 156: python: not found, I am not sure how to fix it. Can anyone help figure out the reason for this issue?
Any help appreciated. | True | ../fast-export/hg-fast-export.sh: 156: python: not found - I get an error when I migrate the Mercurial repository to Git. I am stuck on this step -- ../fast-export/hg-fast-export.sh -r ../demoapp
Windows 10 64 bit
Python 2.7.6 installed and in the path
Mercurial 4.8 installed, and in the path
Git 2.26.2.windows.1 installed, and in the path
Running the following command in git-bash:
../fast-export/hg-fast-export.sh -r ../demoapp/ --fe cp1251
Looking at ../fast-export/hg-fast-export.sh: 156: python: not found, I am not sure how to fix it. Can anyone help figure out the reason for this issue?
Any help appreciated. | main | fast export hg fast export sh python not found i have a error when i migrate the mercurial to git i block on this step fast export hg fast export sh r demoapp windows bit python installed and in the path mercurial install and in the path git windows installed and in the path running the following command in git bash fast export hg fast export sh r demoapp fe looking at fast export hg fast export sh python not found so i am not sure how to fix it if anyone can help to figure out what s reason about this issue any help appreciated | 1 |
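The root cause reported here is simply that the bare name `python` does not resolve on git-bash's PATH (line 156 of the script invokes it directly). A small Python 3 probe (a hypothetical helper, not part of fast-export) that performs the same lookup the shell does:

```python
import shutil

def missing_interpreters(names=("python", "python2", "python3")):
    """Return the interpreter names that do not resolve on PATH.

    hg-fast-export.sh calls the bare name "python"; if that name shows
    up in the result, the script fails with "python: not found" exactly
    as in the report, even though python.exe exists elsewhere on disk.
    """
    return [name for name in names if shutil.which(name) is None]
```

A common workaround on Windows is to alias or symlink `python` to the installed `python.exe` inside git-bash.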
2,655 | 9,083,504,549 | IssuesEvent | 2019-02-17 21:00:14 | pound-python/infobob | https://api.github.com/repos/pound-python/infobob | closed | Add a dockerfile | maintainability | The one in pound-python/infobob-docker is a good start, but it should just run the bot. No ancillary script, config hacking, or database init, instead accept the config file path through an env var (`INFOBOB_CONFIG`), and allow overriding the irc.password config entry via another env var (`INFOBOB_IRC_PASSWORD`).
Probably requires #8 | True | Add a dockerfile - The one in pound-python/infobob-docker is a good start, but it should just run the bot. No ancillary script, config hacking, or database init, instead accept the config file path through an env var (`INFOBOB_CONFIG`), and allow overriding the irc.password config entry via another env var (`INFOBOB_IRC_PASSWORD`).
Probably requires #8 | main | add a dockerfile the one in pound python infobob docker is a good start but it should just run the bot no ancillary script config hacking or database init instead accept the config file path through an env var infobob config and allow overriding the irc password config entry via another env var infobob irc password probably requires | 1 |
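The env-var contract described above can be sketched in a few lines. A hypothetical entrypoint helper (variable names come from the issue; the exact behaviour is my assumption) that resolves the config path and the optional password override:

```python
import os

def load_runtime_config(environ=None):
    """Resolve the bot's config path and optional IRC password override.

    INFOBOB_CONFIG must point at the config file; INFOBOB_IRC_PASSWORD,
    when present, overrides the irc.password config entry. No ancillary
    script, config hacking, or database init -- the container just runs
    the bot with what the environment provides.
    """
    env = os.environ if environ is None else environ
    config_path = env.get("INFOBOB_CONFIG")
    if not config_path:
        raise RuntimeError("INFOBOB_CONFIG must be set")
    overrides = {}
    if "INFOBOB_IRC_PASSWORD" in env:
        overrides["irc.password"] = env["INFOBOB_IRC_PASSWORD"]
    return config_path, overrides
```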
4,757 | 24,524,388,417 | IssuesEvent | 2022-10-11 12:04:40 | centerofci/mathesar | https://api.github.com/repos/centerofci/mathesar | opened | Use a URL query param instead of a hash to store the filtering/sorting/grouping/pagination | type: enhancement work: frontend status: ready restricted: maintainers | ## Current behavior
- On the table page, the filtering/sorting/grouping/pagination is synchronized into the URL via a hash.
- The hash won't survive login redirection and also makes it a tiny bit more annoying to refresh a page.
## Desired behavior
- We'd like to use a URL query param instead.
## Additional context
- We originally had been using a query param, but I changed it to a hash in #1517.
CC @pavish
| True | Use a URL query param instead of a hash to store the filtering/sorting/grouping/pagination - ## Current behavior
- On the table page, the filtering/sorting/grouping/pagination is synchronized into the URL via a hash.
- The hash won't survive login redirection and also makes it a tiny bit more annoying to refresh a page.
## Desired behavior
- We'd like to use a URL query param instead.
## Additional context
- We originally had been using a query param, but I changed it to a hash in #1517.
CC @pavish
| main | use a url query param instead of a hash to store the filtering sorting grouping pagination current behavior on the table page the filtering sorting grouping pagination is synchronized into the url via a hash the hash won t survive login redirection and also makes it a tiny bit more annoying to refresh a page desired behavior we d like to use a url query param instead additional context we originally had been using a query param but i changed it to a hash in cc pavish | 1 |
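The migration itself is mechanical; a Python sketch (illustrative only — Mathesar's frontend is TypeScript) of moving `#key=value` state into the query string, which survives login redirects because URL fragments are never sent to the server:

```python
from urllib.parse import parse_qs, urlencode, urlsplit, urlunsplit

def move_state_to_query(url):
    """Move '#key=value&...' table state into the URL's query string."""
    parts = urlsplit(url)
    if not parts.fragment:
        return url
    params = parse_qs(parts.query)
    # Fragment uses the same key=value&... shape, so it parses the same way.
    params.update(parse_qs(parts.fragment))
    query = urlencode(params, doseq=True)
    return urlunsplit((parts.scheme, parts.netloc, parts.path, query, ""))
```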
2,348 | 8,394,096,878 | IssuesEvent | 2018-10-09 22:52:33 | MDAnalysis/mdanalysis | https://api.github.com/repos/MDAnalysis/mdanalysis | opened | Use versioneer (or similar) | Difficulty-easy maintainability release | Part of reducing release "friction".
The current method of incrementing the version number of the package involves editing 5 separate files*. This could be better. There is a script in `maintainer/` but it doesn't seem to work for me? (seems to `cat` a lot of the directory, no changes visible in git afterwards).
Ideally something like [versioneer](https://github.com/warner/python-versioneer) might be good, but it would have to handle the double-package that this repo is.
* MDAnalysis is 2 packages, `package/` and `testsuite`.
* 2 files per package (`setup.py`, `package/version.py`, `testsuite/__init__.py`)
* conda recipe | True | Use versioneer (or similar) - Part of reducing release "friction".
The current method of incrementing the version number of the package involves editing 5 separate files*. This could be better. There is a script in `maintainer/` but it doesn't seem to work for me (it seems to `cat` a lot of the directory, with no changes visible in git afterwards).
Ideally something like [versioneer](https://github.com/warner/python-versioneer) might be good, but it would have to handle the double-package that this repo is.
* MDAnalysis is 2 packages, `package/` and `testsuite`.
* 2 files per package (`setup.py`, `package/version.py`, `testsuite/__init__.py`)
* conda recipe | main | use versioneer or similar part of reducing release friction the current method of incrementing the version number of the package involves editing separate files this could be better there is a script in maintainer but it doesn t seem to work for me seems to cat a lot of the directory no changes visible in git afterwards ideally something like might be good but it would have to handle the double package that this repo is mdanalysis is packages package and testsuite files per package setup py package version py testsuite init py conda recipe | 1 |
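Until a versioneer-style single source is adopted, the per-file edit the maintainer script should perform looks like this (a sketch; the regex assumes a conventional `__version__ = "..."` assignment, which may not match every one of the 5 files):

```python
import re

_VERSION_RE = re.compile(r'(__version__\s*=\s*")[^"]*(")')

def bump_version(source_text, new_version):
    """Rewrite the __version__ assignment in one file's contents.

    versioneer removes the need for this entirely by deriving the
    version from git tags at build time, which also sidesteps the
    dual-package (package/ + testsuite/) bookkeeping.
    """
    new_text, count = _VERSION_RE.subn(
        lambda m: m.group(1) + new_version + m.group(2), source_text
    )
    if count == 0:
        raise ValueError("no __version__ assignment found")
    return new_text
```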
422,937 | 28,488,321,760 | IssuesEvent | 2023-04-18 09:23:03 | camunda/camunda-bpm-platform | https://api.github.com/repos/camunda/camunda-bpm-platform | closed | Fix privacy link in documentation website footer | type:task scope:documentation version:7.20.0 version:7.18.7 version:7.17.12 version:7.19.1 | ### Acceptance Criteria (Required on creation)
* The link to our privacy statement is changed to https://camunda.com/legal/privacy/
### Hints
* Needs to be adjusted in the theme and the docs-manual accordingly
### Links
<!--
- https://jira.camunda.com/browse/CAM-12398
-->
### Breakdown
- [x] https://github.com/camunda/camunda-docs-theme/pull/40
- [X] https://github.com/camunda/camunda-docs-manual/pull/1439
- [ ] Update static docs (getting started guides, etc.)
### Dev2QA handover
- [ ] Does this ticket need a QA test and the testing goals are not clear from the description? Add a [Dev2QA handover comment](https://confluence.camunda.com/display/AP/Handover+Dev+-%3E+Testing) | 1.0 | Fix privacy link in documentation website footer - ### Acceptance Criteria (Required on creation)
* The link to our privacy statement is changed to https://camunda.com/legal/privacy/
### Hints
* Needs to be adjusted in the theme and the docs-manual accordingly
### Links
<!--
- https://jira.camunda.com/browse/CAM-12398
-->
### Breakdown
- [x] https://github.com/camunda/camunda-docs-theme/pull/40
- [X] https://github.com/camunda/camunda-docs-manual/pull/1439
- [ ] Update static docs (getting started guides, etc.)
### Dev2QA handover
- [ ] Does this ticket need a QA test and the testing goals are not clear from the description? Add a [Dev2QA handover comment](https://confluence.camunda.com/display/AP/Handover+Dev+-%3E+Testing) | non_main | fix privacy link in documentation website footer acceptance criteria required on creation the link to our privacy statement is changed to hints needs to be adjusted in the theme and the docs manual accordingly links breakdown update static docs getting started guides etc handover does this ticket need a qa test and the testing goals are not clear from the description add a | 0 |
15,638 | 27,585,958,353 | IssuesEvent | 2023-03-08 19:47:34 | renovatebot/renovate | https://api.github.com/repos/renovatebot/renovate | opened | GITHUB_COM_TOKEN locally with cli | type:bug status:requirements priority-5-triage | ### How are you running Renovate?
Self-hosted
### If you're self-hosting Renovate, tell us what version of Renovate you run.
npm cli
### If you're self-hosting Renovate, select which platform you are using.
None
### If you're self-hosting Renovate, tell us what version of the platform you run.
_No response_
### Was this something which used to work for you, and then stopped?
I never saw this working
### Describe the bug
I can confirm my GH token works:
```
❯ curl -s -X GET -u $GITHUB_COM_TOKEN:x-oauth-basic 'https://api.github.com/user' | head -3
{
"login": "cdenneen",
"id": 720097,
```
but doesn't seem to work with renovate
```
❯ renovate --dry-run
WARN: cli config dryRun property has been changed to full
FATAL: Authentication failure
INFO: Renovate is exiting with a non-zero code due to the following logged errors
"loggerErrors": [
{
"name": "renovate",
"level": 60,
"logContext": "IgqlRoFjTli0EtVo5M66K",
"msg": "Authentication failure"
}
]
```
### Relevant debug logs
<details><summary>Logs</summary>
```
DEBUG: Using RE2 as regex engine
DEBUG: Parsing configs
DEBUG: Checking for config file in /Users/cdenneen/src/gitlab/gitops/cluster/config/config.js
DEBUG: No config file found on disk - skipping
WARN: cli config dryRun property has been changed to full
DEBUG: Converting GITHUB_COM_TOKEN into a global host rule
DEBUG: File config
"config": {}
DEBUG: CLI config
"config": {"dryRun": "full"}
DEBUG: Env config
"config": {
"hostRules": [
{"hostType": "github", "matchHost": "github.com", "token": "***********"}
],
"token": "***********"
}
DEBUG: Combined config
"config": {
"hostRules": [
{"hostType": "github", "matchHost": "github.com", "token": "***********"}
],
"token": "***********",
"dryRun": "full"
}
DEBUG: Found valid git version: 2.39.2
DEBUG: Using default github endpoint: https://api.github.com/
DEBUG: GET https://api.github.com/user = (code=ERR_NON_2XX_3XX_RESPONSE, statusCode=401 retryCount=0, duration=162)
DEBUG: GitHub failure: Bad credentials
"token": "***********",
"err": {
"name": "HTTPError",
"code": "ERR_NON_2XX_3XX_RESPONSE",
"timings": {
"start": 1678304262505,
"socket": 1678304262507,
"lookup": 1678304262550,
"connect": 1678304262582,
"secureConnect": 1678304262618,
"upload": 1678304262618,
"response": 1678304262664,
"end": 1678304262667,
"phases": {
"wait": 2,
"dns": 43,
"tcp": 32,
"tls": 36,
"request": 0,
"firstByte": 46,
"download": 3,
"total": 162
}
},
"message": "Response code 401 (Unauthorized)",
"stack": "HTTPError: Response code 401 (Unauthorized)\n at Request.<anonymous> (/usr/local/lib/node_modules/renovate/node_modules/got/dist/source/as-promise/index.js:118:42)\n at processTicksAndRejections (node:internal/process/task_queues:95:5)",
"options": {
"headers": {
"user-agent": "RenovateBot/34.159.1 (https://github.com/renovatebot/renovate)",
"accept": "application/json, application/vnd.github.v3+json",
"authorization": "***********",
"accept-encoding": "gzip, deflate, br"
},
"url": "https://api.github.com/user",
"hostType": "github",
"username": "",
"password": "",
"method": "GET",
"http2": false
},
"response": {
"statusCode": 401,
"statusMessage": "Unauthorized",
"body": {
"message": "Bad credentials",
"documentation_url": "https://docs.github.com/rest"
},
"headers": {
"server": "GitHub.com",
"date": "Wed, 08 Mar 2023 19:37:42 GMT",
"content-type": "application/json; charset=utf-8",
"content-length": "80",
"x-github-media-type": "github.v3",
"x-ratelimit-limit": "60",
"x-ratelimit-remaining": "42",
"x-ratelimit-reset": "1678306329",
"x-ratelimit-used": "18",
"x-ratelimit-resource": "core",
"access-control-expose-headers": "ETag, Link, Location, Retry-After, X-GitHub-OTP, X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Used, X-RateLimit-Resource, X-RateLimit-Reset, X-OAuth-Scopes, X-Accepted-OAuth-Scopes, X-Poll-Interval, X-GitHub-Media-Type, X-GitHub-SSO, X-GitHub-Request-Id, Deprecation, Sunset",
"access-control-allow-origin": "*",
"strict-transport-security": "max-age=31536000; includeSubdomains; preload",
"x-frame-options": "deny",
"x-content-type-options": "nosniff",
"x-xss-protection": "0",
"referrer-policy": "origin-when-cross-origin, strict-origin-when-cross-origin",
"content-security-policy": "default-src 'none'",
"vary": "Accept-Encoding, Accept, X-Requested-With",
"x-github-request-id": "9DAF:6543:46329D5:8F7109F:6408E406"
},
"httpVersion": "1.1",
"retryCount": 0
}
}
DEBUG: Error authenticating with GitHub
"err": {
"hostType": "github",
"err": {
"name": "HTTPError",
"code": "ERR_NON_2XX_3XX_RESPONSE",
"timings": {
"start": 1678304262505,
"socket": 1678304262507,
"lookup": 1678304262550,
"connect": 1678304262582,
"secureConnect": 1678304262618,
"upload": 1678304262618,
"response": 1678304262664,
"end": 1678304262667,
"phases": {
"wait": 2,
"dns": 43,
"tcp": 32,
"tls": 36,
"request": 0,
"firstByte": 46,
"download": 3,
"total": 162
}
},
"message": "Response code 401 (Unauthorized)",
"stack": "HTTPError: Response code 401 (Unauthorized)\n at Request.<anonymous> (/usr/local/lib/node_modules/renovate/node_modules/got/dist/source/as-promise/index.js:118:42)\n at processTicksAndRejections (node:internal/process/task_queues:95:5)",
"options": {
"headers": {
"user-agent": "RenovateBot/34.159.1 (https://github.com/renovatebot/renovate)",
"accept": "application/json, application/vnd.github.v3+json",
"authorization": "***********",
"accept-encoding": "gzip, deflate, br"
},
"url": "https://api.github.com/user",
"hostType": "github",
"username": "",
"password": "",
"method": "GET",
"http2": false
},
"response": {
"statusCode": 401,
"statusMessage": "Unauthorized",
"body": {
"message": "Bad credentials",
"documentation_url": "https://docs.github.com/rest"
},
"headers": {
"server": "GitHub.com",
"date": "Wed, 08 Mar 2023 19:37:42 GMT",
"content-type": "application/json; charset=utf-8",
"content-length": "80",
"x-github-media-type": "github.v3",
"x-ratelimit-limit": "60",
"x-ratelimit-remaining": "42",
"x-ratelimit-reset": "1678306329",
"x-ratelimit-used": "18",
"x-ratelimit-resource": "core",
"access-control-expose-headers": "ETag, Link, Location, Retry-After, X-GitHub-OTP, X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Used, X-RateLimit-Resource, X-RateLimit-Reset, X-OAuth-Scopes, X-Accepted-OAuth-Scopes, X-Poll-Interval, X-GitHub-Media-Type, X-GitHub-SSO, X-GitHub-Request-Id, Deprecation, Sunset",
"access-control-allow-origin": "*",
"strict-transport-security": "max-age=31536000; includeSubdomains; preload",
"x-frame-options": "deny",
"x-content-type-options": "nosniff",
"x-xss-protection": "0",
"referrer-policy": "origin-when-cross-origin, strict-origin-when-cross-origin",
"content-security-policy": "default-src 'none'",
"vary": "Accept-Encoding, Accept, X-Requested-With",
"x-github-request-id": "9DAF:6543:46329D5:8F7109F:6408E406"
},
"httpVersion": "1.1",
"retryCount": 0
}
},
"message": "external-host-error",
"stack": "Error: external-host-error\n at handleGotError (/usr/local/lib/node_modules/renovate/lib/util/http/github.ts:128:14)\n at GithubHttp.request (/usr/local/lib/node_modules/renovate/lib/util/http/github.ts:370:13)\n at processTicksAndRejections (node:internal/process/task_queues:95:5)\n at GithubHttp.requestJson (/usr/local/lib/node_modules/renovate/lib/util/http/index.ts:256:17)\n at getUserDetails (/usr/local/lib/node_modules/renovate/lib/modules/platform/github/user.ts:13:7)\n at Proxy.initPlatform (/usr/local/lib/node_modules/renovate/lib/modules/platform/github/index.ts:153:36)\n at initPlatform (/usr/local/lib/node_modules/renovate/lib/modules/platform/index.ts:49:24)\n at globalInitialize (/usr/local/lib/node_modules/renovate/lib/workers/global/initialize.ts:71:12)\n at /usr/local/lib/node_modules/renovate/lib/workers/global/index.ts:131:16\n at start (/usr/local/lib/node_modules/renovate/lib/workers/global/index.ts:120:5)\n at /usr/local/lib/node_modules/renovate/lib/renovate.ts:18:22"
}
FATAL: Authentication failure
```
</details>
### Have you created a minimal reproduction repository?
I have explained in the description why a minimal reproduction is impossible | 1.0 | GITHUB_COM_TOKEN locally with cli - ### How are you running Renovate?
Self-hosted
### If you're self-hosting Renovate, tell us what version of Renovate you run.
npm cli
### If you're self-hosting Renovate, select which platform you are using.
None
### If you're self-hosting Renovate, tell us what version of the platform you run.
_No response_
### Was this something which used to work for you, and then stopped?
I never saw this working
### Describe the bug
I can confirm my GH token works:
```
❯ curl -s -X GET -u $GITHUB_COM_TOKEN:x-oauth-basic 'https://api.github.com/user' | head -3
{
"login": "cdenneen",
"id": 720097,
```
but doesn't seem to work with renovate
```
❯ renovate --dry-run
WARN: cli config dryRun property has been changed to full
FATAL: Authentication failure
INFO: Renovate is exiting with a non-zero code due to the following logged errors
"loggerErrors": [
{
"name": "renovate",
"level": 60,
"logContext": "IgqlRoFjTli0EtVo5M66K",
"msg": "Authentication failure"
}
]
```
### Relevant debug logs
<details><summary>Logs</summary>
```
DEBUG: Using RE2 as regex engine
DEBUG: Parsing configs
DEBUG: Checking for config file in /Users/cdenneen/src/gitlab/gitops/cluster/config/config.js
DEBUG: No config file found on disk - skipping
WARN: cli config dryRun property has been changed to full
DEBUG: Converting GITHUB_COM_TOKEN into a global host rule
DEBUG: File config
"config": {}
DEBUG: CLI config
"config": {"dryRun": "full"}
DEBUG: Env config
"config": {
"hostRules": [
{"hostType": "github", "matchHost": "github.com", "token": "***********"}
],
"token": "***********"
}
DEBUG: Combined config
"config": {
"hostRules": [
{"hostType": "github", "matchHost": "github.com", "token": "***********"}
],
"token": "***********",
"dryRun": "full"
}
DEBUG: Found valid git version: 2.39.2
DEBUG: Using default github endpoint: https://api.github.com/
DEBUG: GET https://api.github.com/user = (code=ERR_NON_2XX_3XX_RESPONSE, statusCode=401 retryCount=0, duration=162)
DEBUG: GitHub failure: Bad credentials
"token": "***********",
"err": {
"name": "HTTPError",
"code": "ERR_NON_2XX_3XX_RESPONSE",
"timings": {
"start": 1678304262505,
"socket": 1678304262507,
"lookup": 1678304262550,
"connect": 1678304262582,
"secureConnect": 1678304262618,
"upload": 1678304262618,
"response": 1678304262664,
"end": 1678304262667,
"phases": {
"wait": 2,
"dns": 43,
"tcp": 32,
"tls": 36,
"request": 0,
"firstByte": 46,
"download": 3,
"total": 162
}
},
"message": "Response code 401 (Unauthorized)",
"stack": "HTTPError: Response code 401 (Unauthorized)\n at Request.<anonymous> (/usr/local/lib/node_modules/renovate/node_modules/got/dist/source/as-promise/index.js:118:42)\n at processTicksAndRejections (node:internal/process/task_queues:95:5)",
"options": {
"headers": {
"user-agent": "RenovateBot/34.159.1 (https://github.com/renovatebot/renovate)",
"accept": "application/json, application/vnd.github.v3+json",
"authorization": "***********",
"accept-encoding": "gzip, deflate, br"
},
"url": "https://api.github.com/user",
"hostType": "github",
"username": "",
"password": "",
"method": "GET",
"http2": false
},
"response": {
"statusCode": 401,
"statusMessage": "Unauthorized",
"body": {
"message": "Bad credentials",
"documentation_url": "https://docs.github.com/rest"
},
"headers": {
"server": "GitHub.com",
"date": "Wed, 08 Mar 2023 19:37:42 GMT",
"content-type": "application/json; charset=utf-8",
"content-length": "80",
"x-github-media-type": "github.v3",
"x-ratelimit-limit": "60",
"x-ratelimit-remaining": "42",
"x-ratelimit-reset": "1678306329",
"x-ratelimit-used": "18",
"x-ratelimit-resource": "core",
"access-control-expose-headers": "ETag, Link, Location, Retry-After, X-GitHub-OTP, X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Used, X-RateLimit-Resource, X-RateLimit-Reset, X-OAuth-Scopes, X-Accepted-OAuth-Scopes, X-Poll-Interval, X-GitHub-Media-Type, X-GitHub-SSO, X-GitHub-Request-Id, Deprecation, Sunset",
"access-control-allow-origin": "*",
"strict-transport-security": "max-age=31536000; includeSubdomains; preload",
"x-frame-options": "deny",
"x-content-type-options": "nosniff",
"x-xss-protection": "0",
"referrer-policy": "origin-when-cross-origin, strict-origin-when-cross-origin",
"content-security-policy": "default-src 'none'",
"vary": "Accept-Encoding, Accept, X-Requested-With",
"x-github-request-id": "9DAF:6543:46329D5:8F7109F:6408E406"
},
"httpVersion": "1.1",
"retryCount": 0
}
}
DEBUG: Error authenticating with GitHub
"err": {
"hostType": "github",
"err": {
"name": "HTTPError",
"code": "ERR_NON_2XX_3XX_RESPONSE",
"timings": {
"start": 1678304262505,
"socket": 1678304262507,
"lookup": 1678304262550,
"connect": 1678304262582,
"secureConnect": 1678304262618,
"upload": 1678304262618,
"response": 1678304262664,
"end": 1678304262667,
"phases": {
"wait": 2,
"dns": 43,
"tcp": 32,
"tls": 36,
"request": 0,
"firstByte": 46,
"download": 3,
"total": 162
}
},
"message": "Response code 401 (Unauthorized)",
"stack": "HTTPError: Response code 401 (Unauthorized)\n at Request.<anonymous> (/usr/local/lib/node_modules/renovate/node_modules/got/dist/source/as-promise/index.js:118:42)\n at processTicksAndRejections (node:internal/process/task_queues:95:5)",
"options": {
"headers": {
"user-agent": "RenovateBot/34.159.1 (https://github.com/renovatebot/renovate)",
"accept": "application/json, application/vnd.github.v3+json",
"authorization": "***********",
"accept-encoding": "gzip, deflate, br"
},
"url": "https://api.github.com/user",
"hostType": "github",
"username": "",
"password": "",
"method": "GET",
"http2": false
},
"response": {
"statusCode": 401,
"statusMessage": "Unauthorized",
"body": {
"message": "Bad credentials",
"documentation_url": "https://docs.github.com/rest"
},
"headers": {
"server": "GitHub.com",
"date": "Wed, 08 Mar 2023 19:37:42 GMT",
"content-type": "application/json; charset=utf-8",
"content-length": "80",
"x-github-media-type": "github.v3",
"x-ratelimit-limit": "60",
"x-ratelimit-remaining": "42",
"x-ratelimit-reset": "1678306329",
"x-ratelimit-used": "18",
"x-ratelimit-resource": "core",
"access-control-expose-headers": "ETag, Link, Location, Retry-After, X-GitHub-OTP, X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Used, X-RateLimit-Resource, X-RateLimit-Reset, X-OAuth-Scopes, X-Accepted-OAuth-Scopes, X-Poll-Interval, X-GitHub-Media-Type, X-GitHub-SSO, X-GitHub-Request-Id, Deprecation, Sunset",
"access-control-allow-origin": "*",
"strict-transport-security": "max-age=31536000; includeSubdomains; preload",
"x-frame-options": "deny",
"x-content-type-options": "nosniff",
"x-xss-protection": "0",
"referrer-policy": "origin-when-cross-origin, strict-origin-when-cross-origin",
"content-security-policy": "default-src 'none'",
"vary": "Accept-Encoding, Accept, X-Requested-With",
"x-github-request-id": "9DAF:6543:46329D5:8F7109F:6408E406"
},
"httpVersion": "1.1",
"retryCount": 0
}
},
"message": "external-host-error",
"stack": "Error: external-host-error\n at handleGotError (/usr/local/lib/node_modules/renovate/lib/util/http/github.ts:128:14)\n at GithubHttp.request (/usr/local/lib/node_modules/renovate/lib/util/http/github.ts:370:13)\n at processTicksAndRejections (node:internal/process/task_queues:95:5)\n at GithubHttp.requestJson (/usr/local/lib/node_modules/renovate/lib/util/http/index.ts:256:17)\n at getUserDetails (/usr/local/lib/node_modules/renovate/lib/modules/platform/github/user.ts:13:7)\n at Proxy.initPlatform (/usr/local/lib/node_modules/renovate/lib/modules/platform/github/index.ts:153:36)\n at initPlatform (/usr/local/lib/node_modules/renovate/lib/modules/platform/index.ts:49:24)\n at globalInitialize (/usr/local/lib/node_modules/renovate/lib/workers/global/initialize.ts:71:12)\n at /usr/local/lib/node_modules/renovate/lib/workers/global/index.ts:131:16\n at start (/usr/local/lib/node_modules/renovate/lib/workers/global/index.ts:120:5)\n at /usr/local/lib/node_modules/renovate/lib/renovate.ts:18:22"
}
FATAL: Authentication failure
```
</details>
### Have you created a minimal reproduction repository?
I have explained in the description why a minimal reproduction is impossible | non_main | github com token locally with cli how are you running renovate self hosted if you re self hosting renovate tell us what version of renovate you run npm cli if you re self hosting renovate select which platform you are using none if you re self hosting renovate tell us what version of the platform you run no response was this something which used to work for you and then stopped i never saw this working describe the bug i can confirm my gh token works ❯ curl s x get u github com token x oauth basic head login cdenneen id but doesn t seem to work with renovate ❯ renovate dry run warn cli config dryrun property has been changed to full fatal authentication failure info renovate is exiting with a non zero code due to the following logged errors loggererrors name renovate level logcontext msg authentication failure relevant debug logs logs debug using as regex engine debug parsing configs debug checking for config file in users cdenneen src gitlab gitops cluster config config js debug no config file found on disk skipping warn cli config dryrun property has been changed to full debug converting github com token into a global host rule debug file config config debug cli config config dryrun full debug env config config hostrules hosttype github matchhost github com token token debug combined config config hostrules hosttype github matchhost github com token token dryrun full debug found valid git version debug using default github endpoint debug get code err non response statuscode retrycount duration debug github failure bad credentials token err name httperror code err non response timings start socket lookup connect secureconnect upload response end phases wait dns tcp tls request firstbyte download total message response code unauthorized stack httperror response code unauthorized n at request usr local lib node modules renovate node modules got dist source as promise index js n 
at processticksandrejections node internal process task queues options headers user agent renovatebot accept application json application vnd github json authorization accept encoding gzip deflate br url hosttype github username password method get false response statuscode statusmessage unauthorized body message bad credentials documentation url headers server github com date wed mar gmt content type application json charset utf content length x github media type github x ratelimit limit x ratelimit remaining x ratelimit reset x ratelimit used x ratelimit resource core access control expose headers etag link location retry after x github otp x ratelimit limit x ratelimit remaining x ratelimit used x ratelimit resource x ratelimit reset x oauth scopes x accepted oauth scopes x poll interval x github media type x github sso x github request id deprecation sunset access control allow origin strict transport security max age includesubdomains preload x frame options deny x content type options nosniff x xss protection referrer policy origin when cross origin strict origin when cross origin content security policy default src none vary accept encoding accept x requested with x github request id httpversion retrycount debug error authenticating with github err hosttype github err name httperror code err non response timings start socket lookup connect secureconnect upload response end phases wait dns tcp tls request firstbyte download total message response code unauthorized stack httperror response code unauthorized n at request usr local lib node modules renovate node modules got dist source as promise index js n at processticksandrejections node internal process task queues options headers user agent renovatebot accept application json application vnd github json authorization accept encoding gzip deflate br url hosttype github username password method get false response statuscode statusmessage unauthorized body message bad credentials documentation url headers 
server github com date wed mar gmt content type application json charset utf content length x github media type github x ratelimit limit x ratelimit remaining x ratelimit reset x ratelimit used x ratelimit resource core access control expose headers etag link location retry after x github otp x ratelimit limit x ratelimit remaining x ratelimit used x ratelimit resource x ratelimit reset x oauth scopes x accepted oauth scopes x poll interval x github media type x github sso x github request id deprecation sunset access control allow origin strict transport security max age includesubdomains preload x frame options deny x content type options nosniff x xss protection referrer policy origin when cross origin strict origin when cross origin content security policy default src none vary accept encoding accept x requested with x github request id httpversion retrycount message external host error stack error external host error n at handlegoterror usr local lib node modules renovate lib util http github ts n at githubhttp request usr local lib node modules renovate lib util http github ts n at processticksandrejections node internal process task queues n at githubhttp requestjson usr local lib node modules renovate lib util http index ts n at getuserdetails usr local lib node modules renovate lib modules platform github user ts n at proxy initplatform usr local lib node modules renovate lib modules platform github index ts n at initplatform usr local lib node modules renovate lib modules platform index ts n at globalinitialize usr local lib node modules renovate lib workers global initialize ts n at usr local lib node modules renovate lib workers global index ts n at start usr local lib node modules renovate lib workers global index ts n at usr local lib node modules renovate lib renovate ts fatal authentication failure have you created a minimal reproduction repository i have explained in the description why a minimal reproduction is impossible | 0 |
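One quick way to rule out shell-quoting or whitespace damage in the token is to compare what the two clients send: the `curl -u` probe above uses HTTP Basic auth (base64 of `TOKEN:x-oauth-basic`), while Renovate sends a token-scheme header. A small sketch (helper names are mine) that rebuilds the Basic header and checks the token for stray whitespace:

```python
import base64

def github_basic_header(token):
    """Build the Authorization header `curl -u TOKEN:x-oauth-basic` sends."""
    raw = "{}:x-oauth-basic".format(token).encode("ascii")
    return "Basic " + base64.b64encode(raw).decode("ascii")

def token_looks_clean(token):
    """Reject tokens with surrounding whitespace or embedded newlines --
    a plausible cause of 'works in curl, 401 elsewhere' discrepancies,
    e.g. when the value comes from an env file or YAML rather than the
    interactive shell."""
    return token == token.strip() and "\n" not in token and " " not in token
```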
3,697 | 15,094,024,665 | IssuesEvent | 2021-02-07 03:55:19 | IITIDIDX597/sp_2021_team1 | https://api.github.com/repos/IITIDIDX597/sp_2021_team1 | reopened | Reporting issues on the platform | Epic: 5 Maintaining the system Story Week 3 | **Project Goal:** S Lab is a tailored integrative learning and collaboration platform for clinicians that combines the latest research and tacit knowledge gained from experience in a practical way, while at the same time foster deeper learning experiences in order to deliver better AbilityLab Patient care.
**Hill Statement:** Individual Clinicians can reference relevant, continuously evolving information for their patient's therapy needs to self-manage their approach & patient care plan development in a single platform.
**Sub-Hill Statements:**
1. The learning platform will be routinely updated with S Lab's own research advancements, as well as outside discoveries and best practices developed for rehabilitation treatments.
### **Story Details:**
As a: clinician
I want: to be able to report any challenges that I am experiencing with the system
So that: it can get fixed and does not hamper my learning process | True | Reporting issues on the platform - **Project Goal:** S Lab is a tailored integrative learning and collaboration platform for clinicians that combines the latest research and tacit knowledge gained from experience in a practical way, while at the same time foster deeper learning experiences in order to deliver better AbilityLab Patient care.
**Hill Statement:** Individual Clinicians can reference relevant, continuously evolving information for their patient's therapy needs to self-manage their approach & patient care plan development in a single platform.
**Sub-Hill Statements:**
1. The learning platform will be routinely updated with S Lab's own research advancements, as well as outside discoveries and best practices developed for rehabilitation treatments.
### **Story Details:**
As a: clinician
I want: to be able to report any challenges that I am experiencing with the system
So that: it can get fixed and does not hamper my learning process | main | reporting issues on the platform project goal s lab is a tailored integrative learning and collaboration platform for clinicians that combines the latest research and tacit knowledge gained from experience in a practical way while at the same time foster deeper learning experiences in order to deliver better abilitylab patient care hill statement individual clinicians can reference relevant continuously evolving information for their patient s therapy needs to self manage their approach patient care plan development in a single platform sub hill statements the learning platform will be routinely updated with s lab s own research advancements as well as outside discoveries and best practices developed for rehabilitation treatments story details as a clinician i want to able to report any challenges that i am experiencing with the system so that it can get fixed and does not hamper my learning process | 1 |
246,711 | 18,851,934,739 | IssuesEvent | 2021-11-11 22:11:43 | opencv/opencv | https://api.github.com/repos/opencv/opencv | closed | gen_pattern.py doesn't work | bug category: documentation category: calib3d | ##### System information (version)
<!-- Example
opencv-contrib-python 4.5.4.58
opencv-python 4.5.4.58
- Operating System / Platform => Ubuntu 18.04
-->
##### Detailed description
I followed [tutorial_camera_calibration_pattern](https://docs.opencv.org/4.x/da/d0d/tutorial_camera_calibration_pattern.html) to generate a circle pattern:
```bash
python gen_pattern.py -o circleboard.svg --rows 7 --columns 5 --type circles --square_size 15
```
and then I got this error:
```bash
Traceback (most recent call last):
File "/home/kb/gen_pattern.py", line 217, in <module>
main()
File "/home/kb/gen_pattern.py", line 198, in main
if len(args.markers) % 2 == 1:
TypeError: object of type 'NoneType' has no len()
```
It seems that `-m` is **ONLY** for the radon checkerboard, but I cannot generate the other pattern types without this parameter. | 1.0 | gen_pattern.py doesn't work - ##### System information (version)
<!-- Example
opencv-contrib-python 4.5.4.58
opencv-python 4.5.4.58
- Operating System / Platform => Ubuntu 18.04
-->
##### Detailed description
I followed [tutorial_camera_calibration_pattern](https://docs.opencv.org/4.x/da/d0d/tutorial_camera_calibration_pattern.html) to generate a circle pattern:
```bash
python gen_pattern.py -o circleboard.svg --rows 7 --columns 5 --type circles --square_size 15
```
and then I got this error:
```bash
Traceback (most recent call last):
File "/home/kb/gen_pattern.py", line 217, in <module>
main()
File "/home/kb/gen_pattern.py", line 198, in main
if len(args.markers) % 2 == 1:
TypeError: object of type 'NoneType' has no len()
```
It seems that `-m` is **ONLY** for radon checkerboard, but i cannot generate the other types of pattern without this parameter. | non_main | gen pattern py doesn t work system information version example opencv contrib python opencv python operating system platform ubuntu detailed description i followed to generate circle pattern bash python gen pattern py o circleboard svg rows columns type circles square size and then i got error bash traceback most recent call last file home kb gen pattern py line in main file home kb gen pattern py line in main if len args markers typeerror object of type nonetype has no len it seems that m is only for radon checkerboard but i cannot generate the other types of pattern without this parameter | 0 |
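The traceback in the row above shows `gen_pattern.py` calling `len(args.markers)` when the `-m`/`--markers` option was never passed, so `args.markers` is `None`. A minimal, hypothetical sketch of a guard that would avoid that `TypeError` (the function name and shape here are illustrative, not the script's actual code):

```python
# Hypothetical guard for the crash reported above: argparse leaves an omitted
# optional argument as None, and calling len(None) raises TypeError.
# Treating a missing -m/--markers option as "no markers" sidesteps the crash.

def has_odd_marker_count(markers):
    """Return True only when markers were supplied and their count is odd."""
    if markers is None:  # -m/--markers not passed on the command line
        return False
    return len(markers) % 2 == 1
```

Whether upstream prefers a guard like this, a non-`None` default, or making `-m` an error outside the radon checkerboard type is a maintainer decision.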
3,601 | 14,545,379,165 | IssuesEvent | 2020-12-15 19:33:40 | adda-team/adda | https://api.github.com/repos/adda-team/adda | closed | Finish migration to GitHub | maintainability pri-Critical task | This includes a number of components:
- [x] notifying/incorporating other contributors/project members. Possibly update acknowledgments.
- [x] issues (assignees, labels, milestones, links in comments).
- [x] wiki pages (placement at wiki tab, images, cross links).
- [x] readme at main landing page.
- [x] try to place all downloads in "releases" (Release Notes can also go there)
- [x] update links in documentation (manual and text files).
- [x] update build scripts
- [x] set some git property on `win32\README` and `win64\README` so that they are always checked with Windows-style EOL
- [x] transfer svn properties to git attributes
- [x] change code license to a github style, as `LICENSE`
- [x] use README in different folders as much, as possible. Especially, in `misc/`. Very convenient for code browsing.
- [x] add files CONTRIBUTING and CODE_OF_CONDUCT
- [x] add issue templates for bugs, questions, etc. | True | Finish migration to GitHub - This includes a number of components:
- [x] notifying/incorporating other contributors/project members. Possibly update acknowledgments.
- [x] issues (assignees, labels, milestones, links in comments).
- [x] wiki pages (placement at wiki tab, images, cross links).
- [x] readme at main landing page.
- [x] try to place all downloads in "releases" (Release Notes can also go there)
- [x] update links in documentation (manual and text files).
- [x] update build scripts
- [x] set some git property on `win32\README` and `win64\README` so that they are always checked with Windows-style EOL
- [x] transfer svn properties to git attributes
- [x] change code license to a github style, as `LICENSE`
- [x] use README in different folders as much, as possible. Especially, in `misc/`. Very convenient for code browsing.
- [x] add files CONTRIBUTING and CODE_OF_CONDUCT
- [x] add issue templates for bugs, questions, etc. | main | finish migration to github this includes a number of components notifying incorporating other contributors project members possibly update acknowledgments issues assignees labels milestones links in comments wiki pages placement at wiki tab images cross links readme at main landing page try to place all downloads in releases release notes can also go there update links in documentation manual and text files update build scripts set some git property on readme and readme so that they are always checked with windows style eol transfer svn properties to git attributes change code license to a github style as license use readme in different folders as much as possible especially in misc very convenient for code browsing add files contributing and code of conduct add issue templates for bugs questions etc | 1 |
3,352 | 13,018,009,379 | IssuesEvent | 2020-07-26 15:17:30 | RapidField/solid-instruments | https://api.github.com/repos/RapidField/solid-instruments | closed | Fix 'Complex Method' issue in src\RapidField.SolidInstruments.Cryptography\Extensions\RandomNumberGeneratorExtensions.cs | Category-Maintenance Source-Maintainer Stage-4-Complete Subcategory-Conventions Subsystem-Cryptography Tag-AddReleaseNote Verdict-Released Version-1.0.25 WindowForDelivery-2020-Q4 | # Maintenance Request
This issue represents a request for documentation, testing, refactoring or other non-functional changes.
## Overview
[CodeFactor](https://www.codefactor.io/repository/github/rapidfield/solid-instruments) found an issue: Complex Method
It's currently on:
[src\RapidField.SolidInstruments.Cryptography\Extensions\RandomNumberGeneratorExtensions.cs:1666-1724
](https://www.codefactor.io/repository/github/rapidfield/solid-instruments/source/master/src/RapidField.SolidInstruments.Cryptography/Extensions/RandomNumberGeneratorExtensions.cs#L1666)Commit 93a0cd4d6caba3732f36ad3ffa12fc08337971a0
## Statement of work
The following list describes the work to be done and defines acceptance criteria for the feature.
1. Resolve the complex method.
## Revision control plan
**Solid Instruments** uses the [**RapidField Revision Control Workflow**](https://github.com/RapidField/solid-instruments/blob/master/CONTRIBUTING.md#revision-control-strategy). Individual contributors should follow the branching plan below when working on this issue.
- `master` is the pull request target for
- `release/v1.0.25-preview1`, which is the pull request target for
- `develop`, which is the pull request target for
- `feature/0055_complex-rng-extensions`, which is the pull request target for contributing user branches, which should be named using the pattern
- `user/{username}/0055_complex-rng-extensions` | True | Fix 'Complex Method' issue in src\RapidField.SolidInstruments.Cryptography\Extensions\RandomNumberGeneratorExtensions.cs - # Maintenance Request
This issue represents a request for documentation, testing, refactoring or other non-functional changes.
## Overview
[CodeFactor](https://www.codefactor.io/repository/github/rapidfield/solid-instruments) found an issue: Complex Method
It's currently on:
[src\RapidField.SolidInstruments.Cryptography\Extensions\RandomNumberGeneratorExtensions.cs:1666-1724
](https://www.codefactor.io/repository/github/rapidfield/solid-instruments/source/master/src/RapidField.SolidInstruments.Cryptography/Extensions/RandomNumberGeneratorExtensions.cs#L1666)Commit 93a0cd4d6caba3732f36ad3ffa12fc08337971a0
## Statement of work
The following list describes the work to be done and defines acceptance criteria for the feature.
1. Resolve the complex method.
## Revision control plan
**Solid Instruments** uses the [**RapidField Revision Control Workflow**](https://github.com/RapidField/solid-instruments/blob/master/CONTRIBUTING.md#revision-control-strategy). Individual contributors should follow the branching plan below when working on this issue.
- `master` is the pull request target for
- `release/v1.0.25-preview1`, which is the pull request target for
- `develop`, which is the pull request target for
- `feature/0055_complex-rng-extensions`, which is the pull request target for contributing user branches, which should be named using the pattern
- `user/{username}/0055_complex-rng-extensions` | main | fix complex method issue in src rapidfield solidinstruments cryptography extensions randomnumbergeneratorextensions cs maintenance request this issue represents a request for documentation testing refactoring or other non functional changes overview found an issue complex method it s currently on src rapidfield solidinstruments cryptography extensions randomnumbergeneratorextensions cs statement of work the following list describes the work to be done and defines acceptance criteria for the feature resolve the complex method revision control plan solid instruments uses the individual contributors should follow the branching plan below when working on this issue master is the pull request target for release which is the pull request target for develop which is the pull request target for feature complex rng extensions which is the pull request target for contributing user branches which should be named using the pattern user username complex rng extensions | 1 |
587,333 | 17,613,371,885 | IssuesEvent | 2021-08-18 06:28:41 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | 9gag.com - site is not usable | priority-important browser-focus-geckoview engine-gecko | <!-- @browser: Firefox Mobile 91.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:91.0) Gecko/91.0 Firefox/91.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/83746 -->
<!-- @extra_labels: browser-focus-geckoview -->
**URL**: https://9gag.com/
**Browser / Version**: Firefox Mobile 91.0
**Operating System**: Android 9
**Tested Another Browser**: Yes Other
**Problem type**: Site is not usable
**Description**: Buttons or links not working
**Steps to Reproduce**:
I can't slide the drawer to select open un browser.... Then the home page is locked....
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | 9gag.com - site is not usable - <!-- @browser: Firefox Mobile 91.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:91.0) Gecko/91.0 Firefox/91.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/83746 -->
<!-- @extra_labels: browser-focus-geckoview -->
**URL**: https://9gag.com/
**Browser / Version**: Firefox Mobile 91.0
**Operating System**: Android 9
**Tested Another Browser**: Yes Other
**Problem type**: Site is not usable
**Description**: Buttons or links not working
**Steps to Reproduce**:
I can't slide the drawer to select open un browser.... Then the home page is locked....
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | non_main | com site is not usable url browser version firefox mobile operating system android tested another browser yes other problem type site is not usable description buttons or links not working steps to reproduce i can t slide the drawer to select open un browser then the home page is locked browser configuration none from with ❤️ | 0 |
131,683 | 18,359,884,498 | IssuesEvent | 2021-10-09 03:11:14 | fjontran/fa21-cse110-lab3 | https://api.github.com/repos/fjontran/fa21-cse110-lab3 | closed | [Task]: Clean up form section of MM | enhancement polish required push for later design | ### Contact Details
_No response_
### What happened?
The form section of the meeting minutes page is really messy and needs to be cleaned up
### Version
1.0.2 (Default)
### What browsers are you seeing the problem on?
_No response_
### Relevant log output
_No response_
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct | 1.0 | [Task]: Clean up form section of MM - ### Contact Details
_No response_
### What happened?
The form section of the meeting minutes page is really messy and needs to be cleaned up
### Version
1.0.2 (Default)
### What browsers are you seeing the problem on?
_No response_
### Relevant log output
_No response_
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct | non_main | clean up form section of mm contact details no response what happened the form section of the meeting minutes page is really messy and needs to be cleaned up version default what browsers are you seeing the problem on no response relevant log output no response code of conduct i agree to follow this project s code of conduct | 0 |
213,568 | 16,524,966,155 | IssuesEvent | 2021-05-26 18:49:22 | hashicorp/terraform-provider-google | https://api.github.com/repos/hashicorp/terraform-provider-google | opened | Flakey tests: Quota exceeded for quota metric 'Mutate requests' and limit 'Mutate requests per minute' | test failure | Causing failures in at least the following tests:
- TestAccRedisInstance_redisInstancePrivateServiceExample
- TestAccMemcacheInstance_update | 1.0 | Flakey tests: Quota exceeded for quota metric 'Mutate requests' and limit 'Mutate requests per minute' - Causing failures in at least the following tests:
- TestAccRedisInstance_redisInstancePrivateServiceExample
- TestAccMemcacheInstance_update | non_main | flakey tests quota exceeded for quota metric mutate requests and limit mutate requests per minute causing failures in at least the following tests testaccredisinstance redisinstanceprivateserviceexample testaccmemcacheinstance update | 0 |
283,377 | 8,719,392,154 | IssuesEvent | 2018-12-08 00:32:31 | briggySmalls/skin-deep-server | https://api.github.com/repos/briggySmalls/skin-deep-server | closed | Suggestions widget no longer styles dark on video archive | bug high-priority | 26b1d39755f54e51e6727428f142057955cf1cb8 introduced a regression that causes the styles for the suggested posts widget to not be applied now it is no longer a widget.
Small update required to target correct classes (note probably requires a wrapper element). | 1.0 | Suggestions widget no longer styles dark on video archive - 26b1d39755f54e51e6727428f142057955cf1cb8 introduced a regression that causes the styles for the suggested posts widget to not be applied now it is no longer a widget.
Small update required to target correct classes (note probably requires a wrapper element). | non_main | suggestions widget no longer styles dark on video archive introduced a regression that causes the styles for the suggested posts widget to not be applied now it is no longer a widget small update required to target correct classes note probably requires a wrapper element | 0 |
5,365 | 26,987,887,854 | IssuesEvent | 2023-02-09 17:26:05 | pulp/pulp-oci-images | https://api.github.com/repos/pulp/pulp-oci-images | opened | Deduplicate the CI jobs for single-process pulp vs single-process galaxy images | Triage-Needed Maintainability | These 2 sections are so similar, that they should be differentiated via variables. | True | Deduplicate the CI jobs for single-process pulp vs single-process galaxy images - These 2 sections are so similar, that they should be differentiated via variables. | main | deduplicate the ci jobs for single process pulp vs single process galaxy images these sections are so similar that they should be differentiated via variables | 1 |
40,108 | 8,729,100,932 | IssuesEvent | 2018-12-10 19:16:39 | CDCgov/MicrobeTrace | https://api.github.com/repos/CDCgov/MicrobeTrace | closed | Warning Message for IE Users | [effort] small [issue-type] enhancement [skill-level] beginner code.gov help-wanted | **Background**
Internet Explorer has not been actively supported by Microsoft for a number of years. It has hobbled along beyond its life cycle and lingers on as a relic of the past. That being said, MicrobeTrace does not currently warn Internet Explorer users of its incompatibility. We require a banner that detects Internet Explorer and warns users that they should join the 21st century.
**Open Task Description**
We should really add a banner warning people that their [terrible](https://www.wired.com/2016/01/the-sorry-legacy-of-microsoft-internet-explorer/), [unsupported](https://www.microsoft.com/en-us/windowsforbusiness/end-of-ie-support), non-standards-compliant browser is rubbish and they should switch to _literally anything else_. Not because I have an axe to grind, mind you, but because [MicrobeTrace does not and will never work on Internet Explorer](https://github.com/CDCgov/WebMicrobeTrace/wiki/Internet-Explorer). | 1.0 | Warning Message for IE Users - **Background**
Internet Explorer has not been actively supported by Microsoft for a number of years. It has hobbled along beyond its life cycle and lingers on as a relic of the past. That being said, MicrobeTrace does not currently warn Internet Explorer users of its incompatibility. We require a banner that detects Internet Explorer and warns users that they should join the 21st century.
**Open Task Description**
We should really add a banner warning people that their [terrible](https://www.wired.com/2016/01/the-sorry-legacy-of-microsoft-internet-explorer/), [unsupported](https://www.microsoft.com/en-us/windowsforbusiness/end-of-ie-support), non-standards-compliant browser is rubbish and they should switch to _literally anything else_. Not because I have an axe to grind, mind you, but because [MicrobeTrace does not and will never work on Internet Explorer](https://github.com/CDCgov/WebMicrobeTrace/wiki/Internet-Explorer). | non_main | warning message for ie users background internet explorer has not been actively supported by microsoft for a number of years it has hobbled along beyond its life cycle and lingers on as a relic of the past that being said microbetrace does not currently warn internet explorer users of its incompatibility we require a banner that detects internet explorer and warns users that they should join the century open task description we should really add a banner warning people that their non standards compliant browser is rubbish and they should switch to literally anything else not because i have an axe to grind mind you but because | 0 |
804,926 | 29,505,553,849 | IssuesEvent | 2023-06-03 09:02:19 | googleapis/google-cloud-ruby | https://api.github.com/repos/googleapis/google-cloud-ruby | opened | [Nightly CI Failures] Failures detected for google-cloud-gsuite_add_ons-v1 | type: bug priority: p1 nightly failure | At 2023-06-03 09:02:17 UTC, detected failures in google-cloud-gsuite_add_ons-v1 for: rubocop.
The CI logs can be found [here](https://github.com/googleapis/google-cloud-ruby/actions/runs/5162726703)
report_key_710bb3618b6e205d3ed8673da7682fe8 | 1.0 | [Nightly CI Failures] Failures detected for google-cloud-gsuite_add_ons-v1 - At 2023-06-03 09:02:17 UTC, detected failures in google-cloud-gsuite_add_ons-v1 for: rubocop.
The CI logs can be found [here](https://github.com/googleapis/google-cloud-ruby/actions/runs/5162726703)
report_key_710bb3618b6e205d3ed8673da7682fe8 | non_main | failures detected for google cloud gsuite add ons at utc detected failures in google cloud gsuite add ons for rubocop the ci logs can be found report key | 0 |
163,418 | 6,198,052,094 | IssuesEvent | 2017-07-05 18:15:48 | mozilla/MozDef | https://api.github.com/repos/mozilla/MozDef | closed | Front End Log Processing: last seen plugin | category:feature priority:low state:stale | Use the esworker plugin system to add reference to last event (or last 10 or last day) for a user/server/endpoint
| 1.0 | Front End Log Processing: last seen plugin - Use the esworker plugin system to add reference to last event (or last 10 or last day) for a user/server/endpoint
| non_main | front end log processing last seen plugin use the esworker plugin system to add reference to last event or last or last day for a user server endpoint | 0 |
1,974 | 6,694,172,483 | IssuesEvent | 2017-10-10 00:05:05 | duckduckgo/zeroclickinfo-spice | https://api.github.com/repos/duckduckgo/zeroclickinfo-spice | closed | Amazon: Review Stars not appearing | Maintainer Input Requested | All of these have reviews on amazon but only one displays stars.
---

IA Page: http://duck.co/ia/view/products
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @bsstoner
| True | Amazon: Review Stars not appearing - All of these have reviews on amazon but only one displays stars.
---

IA Page: http://duck.co/ia/view/products
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @bsstoner
| main | amazon review stars not appearing all of these have reviews on amazon but only one displays stars ia page bsstoner | 1 |
5,635 | 28,302,721,318 | IssuesEvent | 2023-04-10 07:55:18 | cncf/glossary | https://api.github.com/repos/cncf/glossary | closed | `Browse by Tags` not works with a Tag title including a space | bug maintainers |
`Browse by Tags` does not work with a Tag title that includes a space
For instance,
[https://glossary.cncf.io/ko/tags/?핵심%20개념=true](https://glossary.cncf.io/ko/tags/?%ED%95%B5%EC%8B%AC%20%EA%B0%9C%EB%85%90=true)
https://glossary.cncf.io/ko/scalability/ should be listed, but the result list is empty.

This feature is important considering some languages need to utilize spaces to localize Tag titles.
(requested from Urdu localization)
| True | `Browse by Tags` not works with a Tag title including a space -
`Browse by Tags` does not work with a Tag title that includes a space
For instance,
[https://glossary.cncf.io/ko/tags/?핵심%20개념=true](https://glossary.cncf.io/ko/tags/?%ED%95%B5%EC%8B%AC%20%EA%B0%9C%EB%85%90=true)
https://glossary.cncf.io/ko/scalability/ should be listed, but the result list is empty.

This feature is important considering some languages need to utilize spaces to localize Tag titles.
(requested from Urdu localization)
| main | browse by tags not works with a tag title including a space browse by tags not works with a tag title including a space for instance should be listed but the result list is empty this feature is important considering some languages need to utilize spaces to localize tag titles requested from urdu localization | 1 |
16,879 | 3,573,161,979 | IssuesEvent | 2016-01-27 04:03:42 | dotnet/roslyn | https://api.github.com/repos/dotnet/roslyn | closed | Flaky test : project system csharp unit test failed randomly in an unrelated PR against future branch. | Area-IDE Bug Flaky Test | ###### Failing build: [link](http://dotnet-ci.cloudapp.net/job/roslyn_prtest_win_dbg_unit64/2948/)
###### Failure log : [link](http://dotnet-ci.cloudapp.net/job/roslyn_prtest_win_dbg_unit64/2948/artifact/Binaries/Debug/ProjectSystem/Tests/xUnitResults/Microsoft.VisualStudio.ProjectSystem.CSharp.UnitTests.dll.out.log)
~~~
xUnit.net Console Runner (64-bit .NET 4.0.30319.42000)
Discovering: Microsoft.VisualStudio.ProjectSystem.CSharp.UnitTests
Discovered: Microsoft.VisualStudio.ProjectSystem.CSharp.UnitTests
Starting: Microsoft.VisualStudio.ProjectSystem.CSharp.UnitTests
ApplyModifications_TreeWithNestedPropertiesFolder_ReturnsUnmodifiedTree(input: "\r\nRoot (capabilities: {ProjectRoot})\r\n Folder ("...)
ApplyModifications_TreeWithNestedPropertiesFolder_ReturnsUnmodifiedTree(input: "\r\nRoot (capabilities: {ProjectRoot})\r\n Folder ("...)
ApplyModifications_TreeWithNestedPropertiesFolder_ReturnsUnmodifiedTree(input: "\r\nRoot (capabilities: {ProjectRoot})\r\n Folder1 "...)
Constructor_NullAsImageProvider_ThrowsArgumentNull
Constructor_NullAsDesignerService_ThrowsArgumentNull
ApplyModifications_TreeWithPropertiesCandidateButSupportsProjectDesignerFalse_ReturnsUnmodifiedTree
ApplyModifications_TreeWithPropertiesCandidateAlreadyMarkedAsAppDesigner_ReturnsUnmodifiedTree(input: "\r\nRoot(capabilities: {ProjectRoot})\r\n Propertie"...)
ApplyModifications_TreeWithPropertiesCandidateAlreadyMarkedAsAppDesigner_ReturnsUnmodifiedTree(input: "\r\nRoot(capabilities: {ProjectRoot})\r\n Propertie"...)
ApplyModifications_TreeWithPropertiesCandidateAlreadyMarkedAsAppDesigner_ReturnsUnmodifiedTree(input: "\r\nRoot(capabilities: {ProjectRoot})\r\n Propertie"...)
ApplyModifications2_NullAsTree_ThrowsArgumentNull
ApplyModifications_TreeWithPropertiesCandidate_ReturnsCandidateMarkedWithAppDesignerFolderAndBubbleUp(input: "\r\nRoot (capabilities: {ProjectRoot})\r\n Properti"..., expected: "\r\nRoot (capabilities: {ProjectRoot})\r\n Properti"...)
ApplyModifications_TreeWithPropertiesCandidate_ReturnsCandidateMarkedWithAppDesignerFolderAndBubbleUp(input: "\r\nRoot (capabilities: {ProjectRoot})\r\n Properti"..., expected: "\r\nRoot (capabilities: {ProjectRoot})\r\n Properti"...)
ApplyModifications_TreeWithPropertiesCandidate_ReturnsCandidateMarkedWithAppDesignerFolderAndBubbleUp(input: "\r\nRoot (capabilities: {ProjectRoot})\r\n properti"..., expected: "\r\nRoot (capabilities: {ProjectRoot})\r\n properti"...)
ApplyModifications_TreeWithPropertiesCandidate_ReturnsCandidateMarkedWithAppDesignerFolderAndBubbleUp(input: "\r\nRoot (capabilities: {ProjectRoot})\r\n PROPERTI"..., expected: "\r\nRoot (capabilities: {ProjectRoot})\r\n PROPERTI"...)
ApplyModifications_TreeWithPropertiesCandidate_ReturnsCandidateMarkedWithAppDesignerFolderAndBubbleUp(input: "\r\nRoot (capabilities: {ProjectRoot})\r\n Properti"..., expected: "\r\nRoot (capabilities: {ProjectRoot})\r\n Properti"...)
ApplyModifications_TreeWithPropertiesCandidate_ReturnsCandidateMarkedWithAppDesignerFolderAndBubbleUp(input: "\r\nRoot (capabilities: {ProjectRoot})\r\n Properti"..., expected: "\r\nRoot (capabilities: {ProjectRoot})\r\n Properti"...)
ApplyModifications_TreeWithPropertiesCandidate_ReturnsCandidateMarkedWithAppDesignerFolderAndBubbleUp(input: "\r\nRoot (capabilities: {ProjectRoot})\r\n Properti"..., expected: "\r\nRoot (capabilities: {ProjectRoot})\r\n Properti"...)
ApplyModifications_TreeWithPropertiesCandidate_ReturnsCandidateMarkedWithAppDesignerFolderAndBubbleUp(input: "\r\nRoot (capabilities: {ProjectRoot})\r\n Properti"..., expected: "\r\nRoot (capabilities: {ProjectRoot})\r\n Properti"...)
ApplyModifications1_NullAsTreeProvider_ThrowsArgumentNull
ApplyModifications_ProjectWithNonDefaultPropertiesFolder_ReturnsCandidateMarkedWithAppDesignerFolderAndBubbleUp
ApplyModifications2_NullAsTreeProvider_ThrowsArgumentNull
ApplyModifications_TreeWithFileCalledProperties_ReturnsUnmodifiedTree(input: "\r\nRoot (capabilities: {ProjectRoot})\r\n Properti"...)
ApplyModifications_TreeWithFileCalledProperties_ReturnsUnmodifiedTree(input: "\r\nRoot (capabilities: {ProjectRoot})\r\n Properti"...)
ApplyModifications_TreeWithFileCalledProperties_ReturnsUnmodifiedTree(input: "\r\nRoot (capabilities: {ProjectRoot})\r\n Properti"...)
Constructor_NullAsProjectServices_ThrowsArgumentNull
ApplyModifications_TreeWithExcludedPropertiesFolder_ReturnsUnmodifiedTree(input: "\r\nRoot (capabilities: {ProjectRoot})\r\n Properti"...)
ApplyModifications_TreeWithExcludedPropertiesFolder_ReturnsUnmodifiedTree(input: "\r\nRoot (capabilities: {ProjectRoot})\r\n Properti"...)
ApplyModifications_TreeWithExcludedPropertiesFolder_ReturnsUnmodifiedTree(input: "\r\nRoot (capabilities: {ProjectRoot})\r\n Properti"...)
ApplyModifications_TreeWithMyProjectFolder_ReturnsUnmodifiedTree(input: "\r\nRoot (capabilities: {ProjectRoot})\r\n My Proje"...)
ApplyModifications_TreeWithoutPropertiesCandidate_ReturnsUnmodifiedTree(input: "\r\nRoot (capabilities: {ProjectRoot})\r\n")
ApplyModifications_TreeWithoutPropertiesCandidate_ReturnsUnmodifiedTree(input: "\r\nRoot (capabilities: {ProjectRoot})\r\n Folder ("...)
ApplyModifications_TreeWithoutPropertiesCandidate_ReturnsUnmodifiedTree(input: "\r\nRoot (capabilities: {ProjectRoot})\r\n Folder ("...)
ApplyModifications_TreeWithoutPropertiesCandidate_ReturnsUnmodifiedTree(input: "\r\nRoot (capabilities: {ProjectRoot})\r\n Folder ("...)
ApplyModifications_ProjectWithEmptyPropertiesFolder_DefaultsToProperties
ApplyModifications1_NullAsTree_ThrowsArgumentNull
ApplyModifications_ProjectWithNullPropertiesFolder_DefaultsToProperties
Finished: Microsoft.VisualStudio.ProjectSystem.CSharp.UnitTests
System.InvalidOperationException: Collection was modified; enumeration operation may not execute.
at System.ThrowHelper.ThrowInvalidOperationException(ExceptionResource resource)
at System.Collections.Generic.List`1.Enumerator.MoveNextRare()
at System.Diagnostics.Tracing.EventListener.DisposeOnShutdown(Object sender, EventArgs e)
~~~ | 1.0 | Flaky test : project system csharp unit test failed randomly in an unrelated PR against future branch. - ###### Failing build: [link](http://dotnet-ci.cloudapp.net/job/roslyn_prtest_win_dbg_unit64/2948/)
###### Failure log : [link](http://dotnet-ci.cloudapp.net/job/roslyn_prtest_win_dbg_unit64/2948/artifact/Binaries/Debug/ProjectSystem/Tests/xUnitResults/Microsoft.VisualStudio.ProjectSystem.CSharp.UnitTests.dll.out.log)
~~~
xUnit.net Console Runner (64-bit .NET 4.0.30319.42000)
Discovering: Microsoft.VisualStudio.ProjectSystem.CSharp.UnitTests
Discovered: Microsoft.VisualStudio.ProjectSystem.CSharp.UnitTests
Starting: Microsoft.VisualStudio.ProjectSystem.CSharp.UnitTests
ApplyModifications_TreeWithNestedPropertiesFolder_ReturnsUnmodifiedTree(input: "\r\nRoot (capabilities: {ProjectRoot})\r\n Folder ("...)
ApplyModifications_TreeWithNestedPropertiesFolder_ReturnsUnmodifiedTree(input: "\r\nRoot (capabilities: {ProjectRoot})\r\n Folder ("...)
ApplyModifications_TreeWithNestedPropertiesFolder_ReturnsUnmodifiedTree(input: "\r\nRoot (capabilities: {ProjectRoot})\r\n Folder1 "...)
Constructor_NullAsImageProvider_ThrowsArgumentNull
Constructor_NullAsDesignerService_ThrowsArgumentNull
ApplyModifications_TreeWithPropertiesCandidateButSupportsProjectDesignerFalse_ReturnsUnmodifiedTree
ApplyModifications_TreeWithPropertiesCandidateAlreadyMarkedAsAppDesigner_ReturnsUnmodifiedTree(input: "\r\nRoot(capabilities: {ProjectRoot})\r\n Propertie"...)
ApplyModifications_TreeWithPropertiesCandidateAlreadyMarkedAsAppDesigner_ReturnsUnmodifiedTree(input: "\r\nRoot(capabilities: {ProjectRoot})\r\n Propertie"...)
ApplyModifications_TreeWithPropertiesCandidateAlreadyMarkedAsAppDesigner_ReturnsUnmodifiedTree(input: "\r\nRoot(capabilities: {ProjectRoot})\r\n Propertie"...)
ApplyModifications2_NullAsTree_ThrowsArgumentNull
ApplyModifications_TreeWithPropertiesCandidate_ReturnsCandidateMarkedWithAppDesignerFolderAndBubbleUp(input: "\r\nRoot (capabilities: {ProjectRoot})\r\n Properti"..., expected: "\r\nRoot (capabilities: {ProjectRoot})\r\n Properti"...)
ApplyModifications_TreeWithPropertiesCandidate_ReturnsCandidateMarkedWithAppDesignerFolderAndBubbleUp(input: "\r\nRoot (capabilities: {ProjectRoot})\r\n Properti"..., expected: "\r\nRoot (capabilities: {ProjectRoot})\r\n Properti"...)
ApplyModifications_TreeWithPropertiesCandidate_ReturnsCandidateMarkedWithAppDesignerFolderAndBubbleUp(input: "\r\nRoot (capabilities: {ProjectRoot})\r\n properti"..., expected: "\r\nRoot (capabilities: {ProjectRoot})\r\n properti"...)
ApplyModifications_TreeWithPropertiesCandidate_ReturnsCandidateMarkedWithAppDesignerFolderAndBubbleUp(input: "\r\nRoot (capabilities: {ProjectRoot})\r\n PROPERTI"..., expected: "\r\nRoot (capabilities: {ProjectRoot})\r\n PROPERTI"...)
ApplyModifications_TreeWithPropertiesCandidate_ReturnsCandidateMarkedWithAppDesignerFolderAndBubbleUp(input: "\r\nRoot (capabilities: {ProjectRoot})\r\n Properti"..., expected: "\r\nRoot (capabilities: {ProjectRoot})\r\n Properti"...)
ApplyModifications_TreeWithPropertiesCandidate_ReturnsCandidateMarkedWithAppDesignerFolderAndBubbleUp(input: "\r\nRoot (capabilities: {ProjectRoot})\r\n Properti"..., expected: "\r\nRoot (capabilities: {ProjectRoot})\r\n Properti"...)
ApplyModifications_TreeWithPropertiesCandidate_ReturnsCandidateMarkedWithAppDesignerFolderAndBubbleUp(input: "\r\nRoot (capabilities: {ProjectRoot})\r\n Properti"..., expected: "\r\nRoot (capabilities: {ProjectRoot})\r\n Properti"...)
ApplyModifications_TreeWithPropertiesCandidate_ReturnsCandidateMarkedWithAppDesignerFolderAndBubbleUp(input: "\r\nRoot (capabilities: {ProjectRoot})\r\n Properti"..., expected: "\r\nRoot (capabilities: {ProjectRoot})\r\n Properti"...)
ApplyModifications1_NullAsTreeProvider_ThrowsArgumentNull
ApplyModifications_ProjectWithNonDefaultPropertiesFolder_ReturnsCandidateMarkedWithAppDesignerFolderAndBubbleUp
ApplyModifications2_NullAsTreeProvider_ThrowsArgumentNull
ApplyModifications_TreeWithFileCalledProperties_ReturnsUnmodifiedTree(input: "\r\nRoot (capabilities: {ProjectRoot})\r\n Properti"...)
ApplyModifications_TreeWithFileCalledProperties_ReturnsUnmodifiedTree(input: "\r\nRoot (capabilities: {ProjectRoot})\r\n Properti"...)
ApplyModifications_TreeWithFileCalledProperties_ReturnsUnmodifiedTree(input: "\r\nRoot (capabilities: {ProjectRoot})\r\n Properti"...)
Constructor_NullAsProjectServices_ThrowsArgumentNull
ApplyModifications_TreeWithExcludedPropertiesFolder_ReturnsUnmodifiedTree(input: "\r\nRoot (capabilities: {ProjectRoot})\r\n Properti"...)
ApplyModifications_TreeWithExcludedPropertiesFolder_ReturnsUnmodifiedTree(input: "\r\nRoot (capabilities: {ProjectRoot})\r\n Properti"...)
ApplyModifications_TreeWithExcludedPropertiesFolder_ReturnsUnmodifiedTree(input: "\r\nRoot (capabilities: {ProjectRoot})\r\n Properti"...)
ApplyModifications_TreeWithMyProjectFolder_ReturnsUnmodifiedTree(input: "\r\nRoot (capabilities: {ProjectRoot})\r\n My Proje"...)
ApplyModifications_TreeWithoutPropertiesCandidate_ReturnsUnmodifiedTree(input: "\r\nRoot (capabilities: {ProjectRoot})\r\n")
ApplyModifications_TreeWithoutPropertiesCandidate_ReturnsUnmodifiedTree(input: "\r\nRoot (capabilities: {ProjectRoot})\r\n Folder ("...)
ApplyModifications_TreeWithoutPropertiesCandidate_ReturnsUnmodifiedTree(input: "\r\nRoot (capabilities: {ProjectRoot})\r\n Folder ("...)
ApplyModifications_TreeWithoutPropertiesCandidate_ReturnsUnmodifiedTree(input: "\r\nRoot (capabilities: {ProjectRoot})\r\n Folder ("...)
ApplyModifications_ProjectWithEmptyPropertiesFolder_DefaultsToProperties
ApplyModifications1_NullAsTree_ThrowsArgumentNull
ApplyModifications_ProjectWithNullPropertiesFolder_DefaultsToProperties
Finished: Microsoft.VisualStudio.ProjectSystem.CSharp.UnitTests
System.InvalidOperationException: Collection was modified; enumeration operation may not execute.
at System.ThrowHelper.ThrowInvalidOperationException(ExceptionResource resource)
at System.Collections.Generic.List`1.Enumerator.MoveNextRare()
at System.Diagnostics.Tracing.EventListener.DisposeOnShutdown(Object sender, EventArgs e)
~~~ | non_main | flaky test project system csharp unit test failed randomly in an unrelated pr against future branch failing build failure log xunit net console runner bit net discovering microsoft visualstudio projectsystem csharp unittests discovered microsoft visualstudio projectsystem csharp unittests starting microsoft visualstudio projectsystem csharp unittests applymodifications treewithnestedpropertiesfolder returnsunmodifiedtree input r nroot capabilities projectroot r n folder applymodifications treewithnestedpropertiesfolder returnsunmodifiedtree input r nroot capabilities projectroot r n folder applymodifications treewithnestedpropertiesfolder returnsunmodifiedtree input r nroot capabilities projectroot r n constructor nullasimageprovider throwsargumentnull constructor nullasdesignerservice throwsargumentnull applymodifications treewithpropertiescandidatebutsupportsprojectdesignerfalse returnsunmodifiedtree applymodifications treewithpropertiescandidatealreadymarkedasappdesigner returnsunmodifiedtree input r nroot capabilities projectroot r n propertie applymodifications treewithpropertiescandidatealreadymarkedasappdesigner returnsunmodifiedtree input r nroot capabilities projectroot r n propertie applymodifications treewithpropertiescandidatealreadymarkedasappdesigner returnsunmodifiedtree input r nroot capabilities projectroot r n propertie nullastree throwsargumentnull applymodifications treewithpropertiescandidate returnscandidatemarkedwithappdesignerfolderandbubbleup input r nroot capabilities projectroot r n properti expected r nroot capabilities projectroot r n properti applymodifications treewithpropertiescandidate returnscandidatemarkedwithappdesignerfolderandbubbleup input r nroot capabilities projectroot r n properti expected r nroot capabilities projectroot r n properti applymodifications treewithpropertiescandidate returnscandidatemarkedwithappdesignerfolderandbubbleup input r nroot capabilities projectroot r n properti expected r nroot 
capabilities projectroot r n properti applymodifications treewithpropertiescandidate returnscandidatemarkedwithappdesignerfolderandbubbleup input r nroot capabilities projectroot r n properti expected r nroot capabilities projectroot r n properti applymodifications treewithpropertiescandidate returnscandidatemarkedwithappdesignerfolderandbubbleup input r nroot capabilities projectroot r n properti expected r nroot capabilities projectroot r n properti applymodifications treewithpropertiescandidate returnscandidatemarkedwithappdesignerfolderandbubbleup input r nroot capabilities projectroot r n properti expected r nroot capabilities projectroot r n properti applymodifications treewithpropertiescandidate returnscandidatemarkedwithappdesignerfolderandbubbleup input r nroot capabilities projectroot r n properti expected r nroot capabilities projectroot r n properti applymodifications treewithpropertiescandidate returnscandidatemarkedwithappdesignerfolderandbubbleup input r nroot capabilities projectroot r n properti expected r nroot capabilities projectroot r n properti nullastreeprovider throwsargumentnull applymodifications projectwithnondefaultpropertiesfolder returnscandidatemarkedwithappdesignerfolderandbubbleup nullastreeprovider throwsargumentnull applymodifications treewithfilecalledproperties returnsunmodifiedtree input r nroot capabilities projectroot r n properti applymodifications treewithfilecalledproperties returnsunmodifiedtree input r nroot capabilities projectroot r n properti applymodifications treewithfilecalledproperties returnsunmodifiedtree input r nroot capabilities projectroot r n properti constructor nullasprojectservices throwsargumentnull applymodifications treewithexcludedpropertiesfolder returnsunmodifiedtree input r nroot capabilities projectroot r n properti applymodifications treewithexcludedpropertiesfolder returnsunmodifiedtree input r nroot capabilities projectroot r n properti applymodifications treewithexcludedpropertiesfolder 
returnsunmodifiedtree input r nroot capabilities projectroot r n properti applymodifications treewithmyprojectfolder returnsunmodifiedtree input r nroot capabilities projectroot r n my proje applymodifications treewithoutpropertiescandidate returnsunmodifiedtree input r nroot capabilities projectroot r n applymodifications treewithoutpropertiescandidate returnsunmodifiedtree input r nroot capabilities projectroot r n folder applymodifications treewithoutpropertiescandidate returnsunmodifiedtree input r nroot capabilities projectroot r n folder applymodifications treewithoutpropertiescandidate returnsunmodifiedtree input r nroot capabilities projectroot r n folder applymodifications projectwithemptypropertiesfolder defaultstoproperties nullastree throwsargumentnull applymodifications projectwithnullpropertiesfolder defaultstoproperties finished microsoft visualstudio projectsystem csharp unittests system invalidoperationexception collection was modified enumeration operation may not execute at system throwhelper throwinvalidoperationexception exceptionresource resource at system collections generic list enumerator movenextrare at system diagnostics tracing eventlistener disposeonshutdown object sender eventargs e | 0 |
2,934 | 10,514,312,370 | IssuesEvent | 2019-09-27 23:54:16 | laravel-notification-channels/new-channels | https://api.github.com/repos/laravel-notification-channels/new-channels | opened | Pubnub | needs-maintainer | Repo exists here:
https://github.com/laravel-notification-channels/pubnub
Maintainer needed, let me know if you would like to adopt the package | True | Pubnub - Repo exists here:
https://github.com/laravel-notification-channels/pubnub
Maintainer needed, let me know if you would like to adopt the package | main | pubnub repo exists here maintainer needed let me know if you would like to adopt the package | 1 |
1,636 | 6,572,661,761 | IssuesEvent | 2017-09-11 04:11:08 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | dnsimple does not return all records | affects_2.1 bug_report networking waiting_on_maintainer |
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
dnsimple
##### ANSIBLE VERSION
2.1.3.0
##### SUMMARY
According to dnsimple docs [1] it should be possible to list all records of a given domain but it actually is not.
[1] http://docs.ansible.com/ansible/dnsimple_module.html
##### STEPS TO REPRODUCE
- Register a domain in dnsimple.com
- Create a record
- Try to fetch its record using the documentation example:
# fetch my.com domain records
- local_action: dnsimple domain=my.com state=present
register: records
##### EXPECTED RESULTS
'records' should contain the list of records of the domain
##### ACTUAL RESULTS
Some information replaced by "xxx":
{
"changed": false,
"result": {
"account_id": xxxxx,
"auto_renew": false,
"created_at": "2016-11-12T20:48:13.481Z",
"expires_on": null,
"id": xxxxxx,
"lockable": true,
"name": "xxxxxxx",
"record_count": 19,
"registrant_id": null,
"service_count": 0,
"state": "hosted",
"token": "xxxxxxxxxxxxxxxxxxxxxxxxxxxx",
"unicode_name": "xxxxxxxx",
"updated_at": "2016-11-12T20:48:13.481Z",
"user_id": null,
"whois_protected": false
}
} | True | dnsimple does not return all records -
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
dnsimple
##### ANSIBLE VERSION
2.1.3.0
##### SUMMARY
According to dnsimple docs [1] it should be possible to list all records of a given domain but it actually is not.
[1] http://docs.ansible.com/ansible/dnsimple_module.html
##### STEPS TO REPRODUCE
- Register a domain in dnsimple.com
- Create a record
- Try to fetch its record using the documentation example:
# fetch my.com domain records
- local_action: dnsimple domain=my.com state=present
register: records
##### EXPECTED RESULTS
'records' should contain the list of records of the domain
##### ACTUAL RESULTS
Some information replaced by "xxx":
{
"changed": false,
"result": {
"account_id": xxxxx,
"auto_renew": false,
"created_at": "2016-11-12T20:48:13.481Z",
"expires_on": null,
"id": xxxxxx,
"lockable": true,
"name": "xxxxxxx",
"record_count": 19,
"registrant_id": null,
"service_count": 0,
"state": "hosted",
"token": "xxxxxxxxxxxxxxxxxxxxxxxxxxxx",
"unicode_name": "xxxxxxxx",
"updated_at": "2016-11-12T20:48:13.481Z",
"user_id": null,
"whois_protected": false
}
} | main | dnsimple does not return all records issue type bug report component name dnsimple ansible version summary according to dnsimple docs it should be possible to list all records of a given domain but it actually is not steps to reproduce register a domain in dnsimple com create a record try to fetch its record using the documentation example fetch my com domain records local action dnsimple domain my com state present register records expected results records should contain the list of records of the domain actual results some information replaced by xxx changed false result account id xxxxx auto renew false created at expires on null id xxxxxx lockable true name xxxxxxx record count registrant id null service count state hosted token xxxxxxxxxxxxxxxxxxxxxxxxxxxx unicode name xxxxxxxx updated at user id null whois protected false | 1 |
753,894 | 26,366,812,924 | IssuesEvent | 2023-01-11 17:11:40 | apache/arrow | https://api.github.com/repos/apache/arrow | closed | [R] Add StructArray$create() | Type: enhancement Component: R Priority: Critical | In ARROW-13371 we implemented the `make_struct` compute function bound to `data.frame()` / `tibble()` in dplyr evaluation; however, we didn't actually implement `StructArray$create()`. In ARROW-15168, it turns out that we need to do this to support `StructArray` creation from data.frames whose columns aren't all convertable using the internal C++ conversion. The hack used in that PR is below (but we should clearly implement the C++ function instead of using the hack):
```R
library(arrow, warn.conflicts = FALSE)
struct_array <- function(...) {
batch <- record_batch(...)
array_ptr <- arrow:::allocate_arrow_array()
schema_ptr <- arrow:::allocate_arrow_schema()
batch$export_to_c(array_ptr, schema_ptr)
Array$import_from_c(array_ptr, schema_ptr)
}
struct_array(a = 1, b = "two")
#> StructArray
#> <struct<a: double, b: string>>
#> -- is_valid: all not null
#> -- child 0 type: double
#> [
#> 1
#> ]
#> -- child 1 type: string
#> [
#> "two"
#> ]
```
**Reporter**: [Dewey Dunnington](https://issues.apache.org/jira/browse/ARROW-16266) / @paleolimbot
**Assignee**: [Nicola Crane](https://issues.apache.org/jira/browse/ARROW-16266) / @thisisnic
#### PRs and other links:
- [GitHub Pull Request #14922](https://github.com/apache/arrow/pull/14922)
<sub>**Note**: *This issue was originally created as [ARROW-16266](https://issues.apache.org/jira/browse/ARROW-16266). Please see the [migration documentation](https://github.com/apache/arrow/issues/14542) for further details.*</sub> | 1.0 | [R] Add StructArray$create() - In ARROW-13371 we implemented the `make_struct` compute function bound to `data.frame()` / `tibble()` in dplyr evaluation; however, we didn't actually implement `StructArray$create()`. In ARROW-15168, it turns out that we need to do this to support `StructArray` creation from data.frames whose columns aren't all convertable using the internal C++ conversion. The hack used in that PR is below (but we should clearly implement the C++ function instead of using the hack):
```R
library(arrow, warn.conflicts = FALSE)
struct_array <- function(...) {
batch <- record_batch(...)
array_ptr <- arrow:::allocate_arrow_array()
schema_ptr <- arrow:::allocate_arrow_schema()
batch$export_to_c(array_ptr, schema_ptr)
Array$import_from_c(array_ptr, schema_ptr)
}
struct_array(a = 1, b = "two")
#> StructArray
#> <struct<a: double, b: string>>
#> -- is_valid: all not null
#> -- child 0 type: double
#> [
#> 1
#> ]
#> -- child 1 type: string
#> [
#> "two"
#> ]
```
**Reporter**: [Dewey Dunnington](https://issues.apache.org/jira/browse/ARROW-16266) / @paleolimbot
**Assignee**: [Nicola Crane](https://issues.apache.org/jira/browse/ARROW-16266) / @thisisnic
#### PRs and other links:
- [GitHub Pull Request #14922](https://github.com/apache/arrow/pull/14922)
<sub>**Note**: *This issue was originally created as [ARROW-16266](https://issues.apache.org/jira/browse/ARROW-16266). Please see the [migration documentation](https://github.com/apache/arrow/issues/14542) for further details.*</sub> | non_main | add structarray create in arrow we implemented the make struct compute function bound to data frame tibble in dplyr evaluation however we didn t actually implement structarray create in arrow it turns out that we need to do this to support structarray creation from data frames whose columns aren t all convertable using the internal c conversion the hack used in that pr is below but we should clearly implement the c function instead of using the hack r library arrow warn conflicts false struct array function batch record batch array ptr arrow allocate arrow array schema ptr arrow allocate arrow schema batch export to c array ptr schema ptr array import from c array ptr schema ptr struct array a b two structarray is valid all not null child type double child type string two reporter paleolimbot assignee thisisnic prs and other links note this issue was originally created as please see the for further details | 0 |
556 | 4,005,694,384 | IssuesEvent | 2016-05-12 12:34:44 | duckduckgo/zeroclickinfo-goodies | https://api.github.com/repos/duckduckgo/zeroclickinfo-goodies | closed | SQL Cheat Sheet: the 'FOREIGN KEY' statement appear 2 times | Maintainer Input Requested | Hi,
I was read sql cheat sheet when I note that the 'FOREIGN KEY' appear 2 times. For information I use firefox 49.0.1 on updated Archlinux.
------
IA Page: http://duck.co/ia/view/sql_cheat_sheet
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @jophab | True | SQL Cheat Sheet: the 'FOREIGN KEY' statement appear 2 times - Hi,
I was read sql cheat sheet when I note that the 'FOREIGN KEY' appear 2 times. For information I use firefox 49.0.1 on updated Archlinux.
------
IA Page: http://duck.co/ia/view/sql_cheat_sheet
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @jophab | main | sql cheat sheet the foreign key statement appear times hi i was read sql cheat sheet when i note that the foreign key appear times for information i use firefox on updated archlinux ia page jophab | 1 |
218 | 2,873,273,638 | IssuesEvent | 2015-06-08 16:11:20 | github/hubot-scripts | https://api.github.com/repos/github/hubot-scripts | closed | Wolfram.coffee error | needs-maintainer | I recently ran this wolfram.coffee script and got this error, could anyone help me resolve the issue. Thanks
Error: Cannot find module 'wolfram-alpha'
at Function.Module._resolveFilename (module.js:338:15)
at Function.Module._load (module.js:280:25)
at Module.require (module.js:364:17)
at require (module.js:380:17)
at Object.<anonymous> (/Users/angmingliang/myhubot/scripts/wolfram.coffee:1:11, <js>:4:13)
at Object.<anonymous> (/Users/angmingliang/myhubot/scripts/wolfram.coffee:1:1, <js>:24:4)
at Module._compile (module.js:456:26)
at Object.loadFile (/Users/angmingliang/myhubot/node_modules/hubot/node_modules/coffee-script/lib/coffee-script/coffee-script.js:182:19)
at Module.load (/Users/angmingliang/myhubot/node_modules/hubot/node_modules/coffee-script/lib/coffee-script/coffee-script.js:211:36)
at Function.Module._load (module.js:312:12)
at Module.require (module.js:364:17)
at require (module.js:380:17)
at Robot.loadFile (/Users/angmingliang/myHubot/node_modules/hubot/src/robot.coffee:218:9, <js>:164:11)
at /Users/angmingliang/myHubot/node_modules/hubot/src/robot.coffee:234:11, <js>:184:33
at Object.cb [as oncomplete] (fs.js:168:19) | True | Wolfram.coffee error - I recently ran this wolfram.coffee script and got this error, could anyone help me resolve the issue. Thanks
Error: Cannot find module 'wolfram-alpha'
at Function.Module._resolveFilename (module.js:338:15)
at Function.Module._load (module.js:280:25)
at Module.require (module.js:364:17)
at require (module.js:380:17)
at Object.<anonymous> (/Users/angmingliang/myhubot/scripts/wolfram.coffee:1:11, <js>:4:13)
at Object.<anonymous> (/Users/angmingliang/myhubot/scripts/wolfram.coffee:1:1, <js>:24:4)
at Module._compile (module.js:456:26)
at Object.loadFile (/Users/angmingliang/myhubot/node_modules/hubot/node_modules/coffee-script/lib/coffee-script/coffee-script.js:182:19)
at Module.load (/Users/angmingliang/myhubot/node_modules/hubot/node_modules/coffee-script/lib/coffee-script/coffee-script.js:211:36)
at Function.Module._load (module.js:312:12)
at Module.require (module.js:364:17)
at require (module.js:380:17)
at Robot.loadFile (/Users/angmingliang/myHubot/node_modules/hubot/src/robot.coffee:218:9, <js>:164:11)
at /Users/angmingliang/myHubot/node_modules/hubot/src/robot.coffee:234:11, <js>:184:33
at Object.cb [as oncomplete] (fs.js:168:19) | main | wolfram coffee error i recently ran this wolfram coffee script and got this error could anyone help me resolve the issue thanks error cannot find module wolfram alpha at function module resolvefilename module js at function module load module js at module require module js at require module js at object users angmingliang myhubot scripts wolfram coffee at object users angmingliang myhubot scripts wolfram coffee at module compile module js at object loadfile users angmingliang myhubot node modules hubot node modules coffee script lib coffee script coffee script js at module load users angmingliang myhubot node modules hubot node modules coffee script lib coffee script coffee script js at function module load module js at module require module js at require module js at robot loadfile users angmingliang myhubot node modules hubot src robot coffee at users angmingliang myhubot node modules hubot src robot coffee at object cb fs js | 1 |
450,283 | 12,992,852,691 | IssuesEvent | 2020-07-23 07:46:23 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | [master-preview] VoiceManager ArgumentException | Category: Tech Priority: Medium Status: Investigate |
[Player-prev.log](https://github.com/StrangeLoopGames/EcoIssues/files/3647723/Player-prev.log)
```
ArgumentException: An item with the same key has already been added. Key: 76561198072228212
at System.Collections.Generic.Dictionary`2[TKey,TValue].TryInsert (TKey key, TValue value, System.Collections.Generic.InsertionBehavior behavior) [0x00000] in <00000000000000000000000000000000>:0
at System.Collections.Generic.Dictionary`2[TKey,TValue].Add (TKey key, TValue value) [0x00000] in <00000000000000000000000000000000>:0
at VoiceManager.Participants_AfterKeyAdded (System.Object sender, VivoxUnity.KeyEventArg`1[TK] keyEventArg) [0x00000] in <00000000000000000000000000000000>:0
at System.EventHandler`1[TEventArgs].Invoke (System.Object sender, TEventArgs e) [0x00000] in <00000000000000000000000000000000>:0
at VivoxUnity.Common.ReadWriteDictionary`3[TK,TI,T].set_Item (TK key, TI value) [0x00000] in <00000000000000000000000000000000>:0
at VivoxUnity.Private.ChannelSession.InstanceOnEventMessageReceived (vx_evt_base_t eventMessage) [0x00000] in <00000000000000000000000000000000>:0
at System.Action`1[T].Invoke (T obj) [0x00000] in <00000000000000000000000000000000>:0
at VivoxUnity.VxClient.InstanceOnMainLoopRun (System.Boolean& didWork) [0x00000] in <00000000000000000000000000000000>:0
at System.Threading.OSSpecificSynchronizationContext+InvocationEntryDelegate.Invoke (System.IntPtr arg) [0x00000] in <00000000000000000000000000000000>:0
at MessagePump.RunUntil (LoopDone done) [0x00000] in <00000000000000000000000000000000>:0
at VoiceManager.Update () [0x00000] in <00000000000000000000000000000000>:0
``` | 1.0 | [master-preview] VoiceManager ArgumentException -
[Player-prev.log](https://github.com/StrangeLoopGames/EcoIssues/files/3647723/Player-prev.log)
```
ArgumentException: An item with the same key has already been added. Key: 76561198072228212
at System.Collections.Generic.Dictionary`2[TKey,TValue].TryInsert (TKey key, TValue value, System.Collections.Generic.InsertionBehavior behavior) [0x00000] in <00000000000000000000000000000000>:0
at System.Collections.Generic.Dictionary`2[TKey,TValue].Add (TKey key, TValue value) [0x00000] in <00000000000000000000000000000000>:0
at VoiceManager.Participants_AfterKeyAdded (System.Object sender, VivoxUnity.KeyEventArg`1[TK] keyEventArg) [0x00000] in <00000000000000000000000000000000>:0
at System.EventHandler`1[TEventArgs].Invoke (System.Object sender, TEventArgs e) [0x00000] in <00000000000000000000000000000000>:0
at VivoxUnity.Common.ReadWriteDictionary`3[TK,TI,T].set_Item (TK key, TI value) [0x00000] in <00000000000000000000000000000000>:0
at VivoxUnity.Private.ChannelSession.InstanceOnEventMessageReceived (vx_evt_base_t eventMessage) [0x00000] in <00000000000000000000000000000000>:0
at System.Action`1[T].Invoke (T obj) [0x00000] in <00000000000000000000000000000000>:0
at VivoxUnity.VxClient.InstanceOnMainLoopRun (System.Boolean& didWork) [0x00000] in <00000000000000000000000000000000>:0
at System.Threading.OSSpecificSynchronizationContext+InvocationEntryDelegate.Invoke (System.IntPtr arg) [0x00000] in <00000000000000000000000000000000>:0
at MessagePump.RunUntil (LoopDone done) [0x00000] in <00000000000000000000000000000000>:0
at VoiceManager.Update () [0x00000] in <00000000000000000000000000000000>:0
``` | non_main | voicemanager argumentexception argumentexception an item with the same key has already been added key at system collections generic dictionary tryinsert tkey key tvalue value system collections generic insertionbehavior behavior in at system collections generic dictionary add tkey key tvalue value in at voicemanager participants afterkeyadded system object sender vivoxunity keyeventarg keyeventarg in at system eventhandler invoke system object sender teventargs e in at vivoxunity common readwritedictionary set item tk key ti value in at vivoxunity private channelsession instanceoneventmessagereceived vx evt base t eventmessage in at system action invoke t obj in at vivoxunity vxclient instanceonmainlooprun system boolean didwork in at system threading osspecificsynchronizationcontext invocationentrydelegate invoke system intptr arg in at messagepump rununtil loopdone done in at voicemanager update in | 0 |
2,125 | 7,267,077,798 | IssuesEvent | 2018-02-20 02:19:06 | dgets/DANT2a | https://api.github.com/repos/dgets/DANT2a | closed | Switch enum & related code to EntryType.Entries.* | enhancement maintainability | Doing this should make it possible to simplify the code in the `GreenwichAtomic_Tick` and other areas considerably, as well as make them more maintainable/expandable. | True | Switch enum & related code to EntryType.Entries.* - Doing this should make it possible to simplify the code in the `GreenwichAtomic_Tick` and other areas considerably, as well as make them more maintainable/expandable. | main | switch enum related code to entrytype entries doing this should make it possible to simplify the code in the greenwichatomic tick and other areas considerably as well as make them more maintainable expandable | 1 |
59,036 | 17,015,010,287 | IssuesEvent | 2021-07-02 10:42:16 | tomhughes/trac-tickets | https://api.github.com/repos/tomhughes/trac-tickets | opened | TypeError: wrong argument type nil (expected String) | Component: merkaartor Priority: major Type: defect | **[Submitted to the original trac issue database at 9.19am, Friday, 21st May 2010]**
Sometimes when I upload, the server refuses it with this message:
```
There was an error uploading this request (500)
"TypeError: wrong argument type nil (expected String)"
Please redownload the problematic feature to handle the conflict.
```
There doesn't seem to be any way to upload after that, until a New File is started and the current edits discarded. | 1.0 | TypeError: wrong argument type nil (expected String) - **[Submitted to the original trac issue database at 9.19am, Friday, 21st May 2010]**
Sometimes when I upload, the server refuses it with this message:
```
There was an error uploading this request (500)
"TypeError: wrong argument type nil (expected String)"
Please redownload the problematic feature to handle the conflict.
```
There doesn't seem to be any way to upload after that, until a New File is started and the current edits discarded. | non_main | typeerror wrong argument type nil expected string sometimes when i upload the server refuses it with this message there was an error uploading this request typeerror wrong argument type nil expected string please redownload the problematic feature to handle the conflict there doesn t seem to be any way to upload after that until a new file is started and the current edits discarded | 0 |
4,480 | 23,353,535,978 | IssuesEvent | 2022-08-10 04:17:36 | centerofci/mathesar | https://api.github.com/repos/centerofci/mathesar | closed | Set special column icons based on constraints | type: enhancement work: frontend status: ready restricted: new maintainers | ## Current behavior
- The column header displays an icon to the left of the column name.
- The icon changes depending on the column type.
## Desired behavior
- The icon _also_ changes depending on the _constraints_ associated with the column.
- If the column has a PK constraint, then it gets a [key](https://fontawesome.com/icons/key?s=solid) icon.
- If the column has a FK constraint, then it gets a [link](https://fontawesome.com/icons/link?s=solid) icon.
- Otherwise, the existing logic applies, giving the column a type-dependent icon.
| True | Set special column icons based on constraints - ## Current behavior
- The column header displays an icon to the left of the column name.
- The icon changes depending on the column type.
## Desired behavior
- The icon _also_ changes depending on the _constraints_ associated with the column.
- If the column has a PK constraint, then it gets a [key](https://fontawesome.com/icons/key?s=solid) icon.
- If the column has a FK constraint, then it gets a [link](https://fontawesome.com/icons/link?s=solid) icon.
- Otherwise, the existing logic applies, giving the column a type-dependent icon.
| main | set special column icons based on constraints current behavior the column header displays an icon to the left of the column name the icon changes depending on the column type desired behavior the icon also changes depending on the constraints associated with the column if the column has a pk constraint then it gets a icon if the column has a fk constraint then it gets a icon otherwise the existing logic applies giving the column a type dependent icon | 1 |
4,865 | 25,016,062,807 | IssuesEvent | 2022-11-03 18:55:25 | deislabs/spiderlightning | https://api.github.com/repos/deislabs/spiderlightning | closed | add local implementor for pubsub | ✨ feature 📐 proposal 🚧 maintainer issue | **Describe the solution you'd like**
Something that can be run in a dev environment, similar to the filesystem implementors for kv/mq, but for pubsub.
**Additional context**
n/a
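For illustration only, a minimal in-memory pub/sub of the kind requested above might look like the sketch below; `LocalPubSub` and its method names are hypothetical and not part of SpiderLightning's actual interface:

```python
from collections import defaultdict

class LocalPubSub:
    """In-memory pub/sub suitable for a dev environment (no broker needed)."""

    def __init__(self):
        # topic -> list of handler callables
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, message):
        # Deliver the message synchronously to every subscriber of the topic.
        for handler in self._subscribers[topic]:
            handler(message)

bus = LocalPubSub()
received = []
bus.subscribe("events", received.append)
bus.publish("events", "hello")
print(received)  # ['hello']
```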
| True | add local implementor for pubsub - **Describe the solution you'd like**
Something that can be run in a dev environment, similar to the filesystem implementors for kv/mq, but for pubsub.
**Additional context**
n/a
| main | add local implementor for pubsub describe the solution you d like something that can be run in a dev environment like filesystem kv mq for pubsub additional context n a | 1 |
165,676 | 20,614,178,049 | IssuesEvent | 2022-03-07 11:33:34 | elastic/kibana | https://api.github.com/repos/elastic/kibana | opened | [Security Solution] Deleted query is getting populated when click on save changes option and "Save" button is enabled. | bug triage_needed impact:medium Team:Detections and Resp Team: SecuritySolution v8.1.0 | **Describe the bug:**
The deleted query is re-populated when clicking the "Save Changes" option, and the "Save" button is enabled.
**Build Details:**
```
Version: 8.1.0 BC6
Build: 50485
Commit: 4aaeda23aea9c3bf29698878c70a0107ea3c1659
```
**Preconditions**
1. Elasticsearch should be up and running
2. Kibana should be up and running
**Steps to Reproduce**
1. Navigate to Rules under security.
2. Click on "Create New rule" button.
3. Select Custom query rule.
4. Save two queries under queries text box.
5. Delete any one query.
6. Click on the "Save Changes" button.
**Expected Result:**
The deleted query should not be pre-populated, and the "Save" button should be disabled.
**Actual Result:**
The deleted query is pre-populated, and the "Save" button is enabled.
**Screen Records:**
https://user-images.githubusercontent.com/91867110/157023238-01f517ea-23b4-40ce-9369-f6be4a570ded.mp4
| True | [Security Solution] Deleted query is getting populated when click on save changes option and "Save" button is enabled. - **Describe the bug:**
The deleted query is re-populated when clicking the "Save Changes" option, and the "Save" button is enabled.
**Build Details:**
```
Version: 8.1.0 BC6
Build: 50485
Commit: 4aaeda23aea9c3bf29698878c70a0107ea3c1659
```
**Preconditions**
1. Elasticsearch should be up and running
2. Kibana should be up and running
**Steps to Reproduce**
1. Navigate to Rules under security.
2. Click on "Create New rule" button.
3. Select Custom query rule.
4. Save two queries under queries text box.
5. Delete any one query.
6. Click on the "Save Changes" button.
**Expected Result:**
The deleted query should not be pre-populated, and the "Save" button should be disabled.
**Actual Result:**
The deleted query is pre-populated, and the "Save" button is enabled.
**Screen Records:**
https://user-images.githubusercontent.com/91867110/157023238-01f517ea-23b4-40ce-9369-f6be4a570ded.mp4
| non_main | deleted query is getting populated when click on save changes option and save button is enabled describe the bug deleted query is getting populated when click on save changes option and save button is enabled build details version build commit preconditions elasticsearch should be up and running kibana should be up and running steps to reproduce navigate to rules under security click on create new rule button select custom query rule save two queries under queries text box delete any one query after click on save changes button expected result deleted query should not be pre populated and save button should be disabled actual result deleted query is getting pre populated and save button is enabled screen records | 0 |
54,575 | 3,069,706,842 | IssuesEvent | 2015-08-18 21:52:09 | airoldilab/sgd | https://api.github.com/repos/airoldilab/sgd | opened | print method for sgd objects | enhancement low priority | It should do something more interesting than just printing out the coefficients. | 1.0 | print method for sgd objects - It should do something more interesting than just printing out the coefficients. | non_main | print method for sgd objects it should do something more interesting than just printing out the coefficients | 0 |
309,525 | 26,667,449,553 | IssuesEvent | 2023-01-26 06:31:36 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | opened | pkg/ccl/importerccl/importerccl_test: TestImportMultiRegion failed | C-test-failure O-robot branch-master | pkg/ccl/importerccl/importerccl_test.TestImportMultiRegion [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/8455889?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/8455889?buildTab=artifacts#/) on master @ [2ad8df3df3272110705984efc32f1453631ce602](https://github.com/cockroachdb/cockroach/commits/2ad8df3df3272110705984efc32f1453631ce602):
```
github.com/cockroachdb/cockroach/pkg/jobs/adopt.go:413 +0x7ab
github.com/cockroachdb/cockroach/pkg/jobs.(*Registry).resumeJob.func1()
github.com/cockroachdb/cockroach/pkg/jobs/adopt.go:333 +0x128
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunAsyncTaskEx.func2()
github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:470 +0x1f6
Goroutine 478 (running) created at:
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunAsyncTaskEx()
github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:461 +0x619
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunAsyncTask()
github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:332 +0x1cb
github.com/cockroachdb/cockroach/pkg/kv/kvclient/kvcoord.GRPCTransportFactory()
github.com/cockroachdb/cockroach/pkg/kv/kvclient/kvcoord/transport_race.go:98 +0x161
github.com/cockroachdb/cockroach/pkg/kv/kvclient/kvcoord.(*DistSender).sendToReplicas()
github.com/cockroachdb/cockroach/pkg/kv/kvclient/kvcoord/dist_sender.go:2060 +0xd0d
github.com/cockroachdb/cockroach/pkg/kv/kvclient/kvcoord.(*DistSender).sendPartialBatch()
github.com/cockroachdb/cockroach/pkg/kv/kvclient/kvcoord/dist_sender.go:1668 +0xa44
github.com/cockroachdb/cockroach/pkg/kv/kvclient/kvcoord.(*DistSender).divideAndSendBatchToRanges()
github.com/cockroachdb/cockroach/pkg/kv/kvclient/kvcoord/dist_sender.go:1240 +0x592
github.com/cockroachdb/cockroach/pkg/kv/kvclient/kvcoord.(*RangeIterator).Seek()
github.com/cockroachdb/cockroach/pkg/kv/kvclient/kvcoord/range_iter.go:208 +0x73a
github.com/cockroachdb/cockroach/pkg/kv/kvclient/kvcoord.(*DistSender).divideAndSendBatchToRanges()
github.com/cockroachdb/cockroach/pkg/kv/kvclient/kvcoord/dist_sender.go:1234 +0x2b7
github.com/cockroachdb/cockroach/pkg/kv/kvclient/kvcoord.(*DistSender).Send()
github.com/cockroachdb/cockroach/pkg/kv/kvclient/kvcoord/dist_sender.go:861 +0xa59
github.com/cockroachdb/cockroach/pkg/kv.lookupRangeFwdScan()
github.com/cockroachdb/cockroach/pkg/kv/range_lookup.go:330 +0x832
github.com/cockroachdb/cockroach/pkg/kv.RangeLookup()
github.com/cockroachdb/cockroach/pkg/kv/range_lookup.go:205 +0x315
github.com/cockroachdb/cockroach/pkg/kv/kvclient/kvcoord.(*DistSender).RangeLookup()
github.com/cockroachdb/cockroach/pkg/kv/kvclient/kvcoord/dist_sender.go:570 +0x128
github.com/cockroachdb/cockroach/pkg/kv/kvclient/rangecache.(*RangeCache).performRangeLookup()
github.com/cockroachdb/cockroach/pkg/kv/kvclient/rangecache/range_cache.go:1032 +0x3fe
github.com/cockroachdb/cockroach/pkg/kv/kvclient/rangecache.tryLookupImpl.func1()
github.com/cockroachdb/cockroach/pkg/kv/kvclient/rangecache/range_cache.go:920 +0xc5
github.com/cockroachdb/cockroach/pkg/util/contextutil.RunWithTimeout()
github.com/cockroachdb/cockroach/pkg/util/contextutil/context.go:104 +0x1a9
github.com/cockroachdb/cockroach/pkg/kv/kvclient/rangecache.tryLookupImpl()
github.com/cockroachdb/cockroach/pkg/kv/kvclient/rangecache/range_cache.go:917 +0x1a8
github.com/cockroachdb/cockroach/pkg/kv/kvclient/rangecache.(*RangeCache).tryLookup.func3()
github.com/cockroachdb/cockroach/pkg/kv/kvclient/rangecache/range_cache.go:815 +0xd9
github.com/cockroachdb/cockroach/pkg/util/syncutil/singleflight.(*Group).doCall.func1()
github.com/cockroachdb/cockroach/pkg/util/syncutil/singleflight/singleflight.go:387 +0x51
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunTask()
github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:305 +0x147
github.com/cockroachdb/cockroach/pkg/util/syncutil/singleflight.(*Group).doCall()
github.com/cockroachdb/cockroach/pkg/util/syncutil/singleflight/singleflight.go:386 +0x2a4
github.com/cockroachdb/cockroach/pkg/util/syncutil/singleflight.(*Group).DoChan.func1()
github.com/cockroachdb/cockroach/pkg/util/syncutil/singleflight/singleflight.go:356 +0xd0
==================
```
<p>Parameters: <code>TAGS=bazel,gss,race</code>
</p>
<details><summary>Help</summary>
<p>
See also: [How To Investigate a Go Test Failure \(internal\)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM)
</p>
</details>
<details><summary>Same failure on other branches</summary>
<p>
- #79412 pkg/ccl/importerccl/importerccl_test: TestImportMultiRegion failed [C-test-failure O-robot branch-release-22.1]
</p>
</details>
/cc @cockroachdb/sql-sessions
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*TestImportMultiRegion.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
| 1.0 | pkg/ccl/importerccl/importerccl_test: TestImportMultiRegion failed - pkg/ccl/importerccl/importerccl_test.TestImportMultiRegion [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/8455889?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/8455889?buildTab=artifacts#/) on master @ [2ad8df3df3272110705984efc32f1453631ce602](https://github.com/cockroachdb/cockroach/commits/2ad8df3df3272110705984efc32f1453631ce602):
```
github.com/cockroachdb/cockroach/pkg/jobs/adopt.go:413 +0x7ab
github.com/cockroachdb/cockroach/pkg/jobs.(*Registry).resumeJob.func1()
github.com/cockroachdb/cockroach/pkg/jobs/adopt.go:333 +0x128
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunAsyncTaskEx.func2()
github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:470 +0x1f6
Goroutine 478 (running) created at:
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunAsyncTaskEx()
github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:461 +0x619
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunAsyncTask()
github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:332 +0x1cb
github.com/cockroachdb/cockroach/pkg/kv/kvclient/kvcoord.GRPCTransportFactory()
github.com/cockroachdb/cockroach/pkg/kv/kvclient/kvcoord/transport_race.go:98 +0x161
github.com/cockroachdb/cockroach/pkg/kv/kvclient/kvcoord.(*DistSender).sendToReplicas()
github.com/cockroachdb/cockroach/pkg/kv/kvclient/kvcoord/dist_sender.go:2060 +0xd0d
github.com/cockroachdb/cockroach/pkg/kv/kvclient/kvcoord.(*DistSender).sendPartialBatch()
github.com/cockroachdb/cockroach/pkg/kv/kvclient/kvcoord/dist_sender.go:1668 +0xa44
github.com/cockroachdb/cockroach/pkg/kv/kvclient/kvcoord.(*DistSender).divideAndSendBatchToRanges()
github.com/cockroachdb/cockroach/pkg/kv/kvclient/kvcoord/dist_sender.go:1240 +0x592
github.com/cockroachdb/cockroach/pkg/kv/kvclient/kvcoord.(*RangeIterator).Seek()
github.com/cockroachdb/cockroach/pkg/kv/kvclient/kvcoord/range_iter.go:208 +0x73a
github.com/cockroachdb/cockroach/pkg/kv/kvclient/kvcoord.(*DistSender).divideAndSendBatchToRanges()
github.com/cockroachdb/cockroach/pkg/kv/kvclient/kvcoord/dist_sender.go:1234 +0x2b7
github.com/cockroachdb/cockroach/pkg/kv/kvclient/kvcoord.(*DistSender).Send()
github.com/cockroachdb/cockroach/pkg/kv/kvclient/kvcoord/dist_sender.go:861 +0xa59
github.com/cockroachdb/cockroach/pkg/kv.lookupRangeFwdScan()
github.com/cockroachdb/cockroach/pkg/kv/range_lookup.go:330 +0x832
github.com/cockroachdb/cockroach/pkg/kv.RangeLookup()
github.com/cockroachdb/cockroach/pkg/kv/range_lookup.go:205 +0x315
github.com/cockroachdb/cockroach/pkg/kv/kvclient/kvcoord.(*DistSender).RangeLookup()
github.com/cockroachdb/cockroach/pkg/kv/kvclient/kvcoord/dist_sender.go:570 +0x128
github.com/cockroachdb/cockroach/pkg/kv/kvclient/rangecache.(*RangeCache).performRangeLookup()
github.com/cockroachdb/cockroach/pkg/kv/kvclient/rangecache/range_cache.go:1032 +0x3fe
github.com/cockroachdb/cockroach/pkg/kv/kvclient/rangecache.tryLookupImpl.func1()
github.com/cockroachdb/cockroach/pkg/kv/kvclient/rangecache/range_cache.go:920 +0xc5
github.com/cockroachdb/cockroach/pkg/util/contextutil.RunWithTimeout()
github.com/cockroachdb/cockroach/pkg/util/contextutil/context.go:104 +0x1a9
github.com/cockroachdb/cockroach/pkg/kv/kvclient/rangecache.tryLookupImpl()
github.com/cockroachdb/cockroach/pkg/kv/kvclient/rangecache/range_cache.go:917 +0x1a8
github.com/cockroachdb/cockroach/pkg/kv/kvclient/rangecache.(*RangeCache).tryLookup.func3()
github.com/cockroachdb/cockroach/pkg/kv/kvclient/rangecache/range_cache.go:815 +0xd9
github.com/cockroachdb/cockroach/pkg/util/syncutil/singleflight.(*Group).doCall.func1()
github.com/cockroachdb/cockroach/pkg/util/syncutil/singleflight/singleflight.go:387 +0x51
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunTask()
github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:305 +0x147
github.com/cockroachdb/cockroach/pkg/util/syncutil/singleflight.(*Group).doCall()
github.com/cockroachdb/cockroach/pkg/util/syncutil/singleflight/singleflight.go:386 +0x2a4
github.com/cockroachdb/cockroach/pkg/util/syncutil/singleflight.(*Group).DoChan.func1()
github.com/cockroachdb/cockroach/pkg/util/syncutil/singleflight/singleflight.go:356 +0xd0
==================
```
<p>Parameters: <code>TAGS=bazel,gss,race</code>
</p>
<details><summary>Help</summary>
<p>
See also: [How To Investigate a Go Test Failure \(internal\)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM)
</p>
</details>
<details><summary>Same failure on other branches</summary>
<p>
- #79412 pkg/ccl/importerccl/importerccl_test: TestImportMultiRegion failed [C-test-failure O-robot branch-release-22.1]
</p>
</details>
/cc @cockroachdb/sql-sessions
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*TestImportMultiRegion.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
| non_main | pkg ccl importerccl importerccl test testimportmultiregion failed pkg ccl importerccl importerccl test testimportmultiregion with on master github com cockroachdb cockroach pkg jobs adopt go github com cockroachdb cockroach pkg jobs registry resumejob github com cockroachdb cockroach pkg jobs adopt go github com cockroachdb cockroach pkg util stop stopper runasynctaskex github com cockroachdb cockroach pkg util stop stopper go goroutine running created at github com cockroachdb cockroach pkg util stop stopper runasynctaskex github com cockroachdb cockroach pkg util stop stopper go github com cockroachdb cockroach pkg util stop stopper runasynctask github com cockroachdb cockroach pkg util stop stopper go github com cockroachdb cockroach pkg kv kvclient kvcoord grpctransportfactory github com cockroachdb cockroach pkg kv kvclient kvcoord transport race go github com cockroachdb cockroach pkg kv kvclient kvcoord distsender sendtoreplicas github com cockroachdb cockroach pkg kv kvclient kvcoord dist sender go github com cockroachdb cockroach pkg kv kvclient kvcoord distsender sendpartialbatch github com cockroachdb cockroach pkg kv kvclient kvcoord dist sender go github com cockroachdb cockroach pkg kv kvclient kvcoord distsender divideandsendbatchtoranges github com cockroachdb cockroach pkg kv kvclient kvcoord dist sender go github com cockroachdb cockroach pkg kv kvclient kvcoord rangeiterator seek github com cockroachdb cockroach pkg kv kvclient kvcoord range iter go github com cockroachdb cockroach pkg kv kvclient kvcoord distsender divideandsendbatchtoranges github com cockroachdb cockroach pkg kv kvclient kvcoord dist sender go github com cockroachdb cockroach pkg kv kvclient kvcoord distsender send github com cockroachdb cockroach pkg kv kvclient kvcoord dist sender go github com cockroachdb cockroach pkg kv lookuprangefwdscan github com cockroachdb cockroach pkg kv range lookup go github com cockroachdb cockroach pkg kv rangelookup github com 
cockroachdb cockroach pkg kv range lookup go github com cockroachdb cockroach pkg kv kvclient kvcoord distsender rangelookup github com cockroachdb cockroach pkg kv kvclient kvcoord dist sender go github com cockroachdb cockroach pkg kv kvclient rangecache rangecache performrangelookup github com cockroachdb cockroach pkg kv kvclient rangecache range cache go github com cockroachdb cockroach pkg kv kvclient rangecache trylookupimpl github com cockroachdb cockroach pkg kv kvclient rangecache range cache go github com cockroachdb cockroach pkg util contextutil runwithtimeout github com cockroachdb cockroach pkg util contextutil context go github com cockroachdb cockroach pkg kv kvclient rangecache trylookupimpl github com cockroachdb cockroach pkg kv kvclient rangecache range cache go github com cockroachdb cockroach pkg kv kvclient rangecache rangecache trylookup github com cockroachdb cockroach pkg kv kvclient rangecache range cache go github com cockroachdb cockroach pkg util syncutil singleflight group docall github com cockroachdb cockroach pkg util syncutil singleflight singleflight go github com cockroachdb cockroach pkg util stop stopper runtask github com cockroachdb cockroach pkg util stop stopper go github com cockroachdb cockroach pkg util syncutil singleflight group docall github com cockroachdb cockroach pkg util syncutil singleflight singleflight go github com cockroachdb cockroach pkg util syncutil singleflight group dochan github com cockroachdb cockroach pkg util syncutil singleflight singleflight go parameters tags bazel gss race help see also same failure on other branches pkg ccl importerccl importerccl test testimportmultiregion failed cc cockroachdb sql sessions | 0 |
367 | 3,355,501,447 | IssuesEvent | 2015-11-18 16:39:43 | christoff-buerger/racr | https://api.github.com/repos/christoff-buerger/racr | opened | Refactoring of the modularisation of the Questionnaires example | low maintainability | In contradiction to best practices in the design of _RACR_-based languages, the `user-interface` library instead of the actual `language`/`analyses` library defines query support functions for language analyses. In all the other examples, the `language` module comprises the language specification _and_ its respective query API. This makes much more sense from a developer and user perspective.
The reason why in the Questionnaires example the `user-interface` library defines not only the user but also the language API is, that the loading and saving user functions are referenced when widgets are constructed throughout analyses. This has to be disentangled, such that the language module defines its API and becomes independent of the user API. | True | Refactoring of the modularisation of the Questionnaires example - In contradiction to best practices in the design of _RACR_-based languages, the `user-interface` library instead of the actual `language`/`analyses` library defines query support functions for language analyses. In all the other examples, the `language` module comprises the language specification _and_ its respective query API. This makes much more sense from a developer and user perspective.
The reason why in the Questionnaires example the `user-interface` library defines not only the user but also the language API is, that the loading and saving user functions are referenced when widgets are constructed throughout analyses. This has to be disentangled, such that the language module defines its API and becomes independent of the user API. | main | refactoring of the modularisation of the questionnaires example in contradiction to best practices in the design of racr based languages the user interface library instead of the actual language analyses library defines query support functions for language analyses in all the other examples the language module comprises the language specification and its respective query api this makes much more sense from a developer and user perspective the reason why in the questionnaires example the user interface library defines not only the user but also the language api is that the loading and saving user functions are referenced when widgets are constructed throughout analyses this has to be disentangled such that the language module defines its api and becomes independent of the user api | 1 |
224,676 | 24,783,423,457 | IssuesEvent | 2022-10-24 07:50:25 | sast-automation-dev/openidm-community-edition-43 | https://api.github.com/repos/sast-automation-dev/openidm-community-edition-43 | opened | orientdb-server-1.3.0.jar: 1 vulnerabilities (highest severity is: 8.8) | security vulnerability | <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>orientdb-server-1.3.0.jar</b></p></summary>
<p>OrientDB NoSQL document graph dbms</p>
<p>Library home page: <a href="http://www.orientechnologies.com/orientdb-server">http://www.orientechnologies.com/orientdb-server</a></p>
<p>Path to dependency file: /openidm-repo-orientdb/pom.xml</p>
<p>Path to vulnerable library: /entechnologies/orientdb-server/1.3.0/orientdb-server-1.3.0.jar</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/sast-automation-dev/openidm-community-edition-43/commit/0aad6d987ba225eeadc591c7c188b6deef985e1b">0aad6d987ba225eeadc591c7c188b6deef985e1b</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (orientdb-server version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2015-2912](https://www.mend.io/vulnerability-database/CVE-2015-2912) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 8.8 | orientdb-server-1.3.0.jar | Direct | 2.0.15 | ✅ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2015-2912</summary>
### Vulnerable Library - <b>orientdb-server-1.3.0.jar</b></p>
<p>OrientDB NoSQL document graph dbms</p>
<p>Library home page: <a href="http://www.orientechnologies.com/orientdb-server">http://www.orientechnologies.com/orientdb-server</a></p>
<p>Path to dependency file: /openidm-repo-orientdb/pom.xml</p>
<p>Path to vulnerable library: /entechnologies/orientdb-server/1.3.0/orientdb-server-1.3.0.jar</p>
<p>
Dependency Hierarchy:
- :x: **orientdb-server-1.3.0.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/sast-automation-dev/openidm-community-edition-43/commit/0aad6d987ba225eeadc591c7c188b6deef985e1b">0aad6d987ba225eeadc591c7c188b6deef985e1b</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
The JSONP endpoint in the Studio component in OrientDB Server Community Edition before 2.0.15 and 2.1.x before 2.1.1 does not properly restrict callback values, which allows remote attackers to conduct cross-site request forgery (CSRF) attacks, and obtain sensitive information, via a crafted HTTP request.
<p>Publish Date: Dec 31, 2015 5:59:09 AM
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2015-2912>CVE-2015-2912</a></p>
</p>
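A common mitigation for this class of JSONP issue is to restrict the callback parameter to a plain identifier before echoing it back; the sketch below illustrates the general technique only and is not OrientDB's actual fix:

```python
import re

# Only allow plain JavaScript-identifier-like callback names.
CALLBACK_RE = re.compile(r"^[A-Za-z_$][A-Za-z0-9_$.]{0,63}$")

def safe_jsonp(callback: str, payload: str) -> str:
    """Reject callback values that are not simple identifiers, so attacker
    markup or script cannot be reflected into the JSONP response."""
    if not CALLBACK_RE.match(callback):
        raise ValueError("invalid JSONP callback")
    return f"{callback}({payload});"

print(safe_jsonp("handleData", '{"ok": true}'))  # handleData({"ok": true});
```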
<p></p>
### CVSS 3 Score Details (<b>8.8</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2015-2912">https://nvd.nist.gov/vuln/detail/CVE-2015-2912</a></p>
<p>Release Date: Dec 31, 2015 5:59:09 AM</p>
<p>Fix Resolution: 2.0.15</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details>
***
<p>:rescue_worker_helmet: Automatic Remediation is available for this issue.</p> | True | orientdb-server-1.3.0.jar: 1 vulnerabilities (highest severity is: 8.8) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>orientdb-server-1.3.0.jar</b></p></summary>
<p>OrientDB NoSQL document graph dbms</p>
<p>Library home page: <a href="http://www.orientechnologies.com/orientdb-server">http://www.orientechnologies.com/orientdb-server</a></p>
<p>Path to dependency file: /openidm-repo-orientdb/pom.xml</p>
<p>Path to vulnerable library: /entechnologies/orientdb-server/1.3.0/orientdb-server-1.3.0.jar</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/sast-automation-dev/openidm-community-edition-43/commit/0aad6d987ba225eeadc591c7c188b6deef985e1b">0aad6d987ba225eeadc591c7c188b6deef985e1b</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (orientdb-server version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2015-2912](https://www.mend.io/vulnerability-database/CVE-2015-2912) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 8.8 | orientdb-server-1.3.0.jar | Direct | 2.0.15 | ✅ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2015-2912</summary>
### Vulnerable Library - <b>orientdb-server-1.3.0.jar</b></p>
<p>OrientDB NoSQL document graph dbms</p>
<p>Library home page: <a href="http://www.orientechnologies.com/orientdb-server">http://www.orientechnologies.com/orientdb-server</a></p>
<p>Path to dependency file: /openidm-repo-orientdb/pom.xml</p>
<p>Path to vulnerable library: /entechnologies/orientdb-server/1.3.0/orientdb-server-1.3.0.jar</p>
<p>
Dependency Hierarchy:
- :x: **orientdb-server-1.3.0.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/sast-automation-dev/openidm-community-edition-43/commit/0aad6d987ba225eeadc591c7c188b6deef985e1b">0aad6d987ba225eeadc591c7c188b6deef985e1b</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
The JSONP endpoint in the Studio component in OrientDB Server Community Edition before 2.0.15 and 2.1.x before 2.1.1 does not properly restrict callback values, which allows remote attackers to conduct cross-site request forgery (CSRF) attacks, and obtain sensitive information, via a crafted HTTP request.
<p>Publish Date: Dec 31, 2015 5:59:09 AM
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2015-2912>CVE-2015-2912</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>8.8</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2015-2912">https://nvd.nist.gov/vuln/detail/CVE-2015-2912</a></p>
<p>Release Date: Dec 31, 2015 5:59:09 AM</p>
<p>Fix Resolution: 2.0.15</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details>
***
<p>:rescue_worker_helmet: Automatic Remediation is available for this issue.</p> | non_main | orientdb server jar vulnerabilities highest severity is vulnerable library orientdb server jar orientdb nosql document graph dbms library home page a href path to dependency file openidm repo orientdb pom xml path to vulnerable library entechnologies orientdb server orientdb server jar found in head commit a href vulnerabilities cve severity cvss dependency type fixed in orientdb server version remediation available high orientdb server jar direct details cve vulnerable library orientdb server jar orientdb nosql document graph dbms library home page a href path to dependency file openidm repo orientdb pom xml path to vulnerable library entechnologies orientdb server orientdb server jar dependency hierarchy x orientdb server jar vulnerable library found in head commit a href found in base branch master vulnerability details the jsonp endpoint in the studio component in orientdb server community edition before and x before does not properly restrict callback values which allows remote attackers to conduct cross site request forgery csrf attacks and obtain sensitive information via a crafted http request publish date dec am url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date dec am fix resolution rescue worker helmet automatic remediation is available for this issue rescue worker helmet automatic remediation is available for this issue | 0 |
686,589 | 23,497,200,809 | IssuesEvent | 2022-08-18 03:40:18 | LuanRT/YouTube.js | https://api.github.com/repos/LuanRT/YouTube.js | closed | getComments will throw out an error when the comment count of the video is 0 | bug Stale priority: medium | ### Steps to reproduce
Example:
1. Find a video that has no comments (for example: Ja2H4xHnX-0).
2. Take its video ID and invoke the getComments function.
### Failure Logs
```shell
(node:17108) UnhandledPromiseRejectionWarning: Error: Expected to find "onResponseReceivedEndpoints" with content "commentRenderer" but got undefined
at Object.findNode (D:\code\demo\youtubeReptile\node_modules\youtubei.js\lib\utils\Utils.js:39:22)
at parseComments (D:\code\demo\youtubeReptile\node_modules\youtubei.js\lib\parser\index.js:281:27)
at Parser.#processComments (D:\code\demo\youtubeReptile\node_modules\youtubei.js\lib\parser\index.js:351:12)
at Object.COMMENTS (D:\code\demo\youtubeReptile\node_modules\youtubei.js\lib\parser\index.js:33:48)
at Parser.parse (D:\code\demo\youtubeReptile\node_modules\youtubei.js\lib\parser\index.js:37:22)
at Innertube.getComments (D:\code\demo\youtubeReptile\node_modules\youtubei.js\lib\Innertube.js:239:8)
at processTicksAndRejections (internal/process/task_queues.js:95:5)
(Use `node --trace-warnings ...` to show where the warning was created)
(node:17108) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1)
(node:17108) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
```
### Expected behavior
return normal response with 0 comment count.
### Current behavior
throw out an error when the comment count of the video is 0
### Version
Default
### Anything else?
_No response_
### Checklist
- [X] I am running the latest version.
- [X] I checked the documentation and found no answer.
- [X] I have searched the existing issues and made sure this is not a duplicate.
- [X] I have provided sufficient information. | 1.0 | getComments will throw out an error when the comment count of the video is 0 - ### Steps to reproduce
Example:
1. Find a video with no comments (for example: Ja2H4xHnX-0)
2. Take its video id and invoke the getComments function
### Failure Logs
```shell
(node:17108) UnhandledPromiseRejectionWarning: Error: Expected to find "onResponseReceivedEndpoints" with content "commentRenderer" but got undefined
at Object.findNode (D:\code\demo\youtubeReptile\node_modules\youtubei.js\lib\utils\Utils.js:39:22)
at parseComments (D:\code\demo\youtubeReptile\node_modules\youtubei.js\lib\parser\index.js:281:27)
at Parser.#processComments (D:\code\demo\youtubeReptile\node_modules\youtubei.js\lib\parser\index.js:351:12)
at Object.COMMENTS (D:\code\demo\youtubeReptile\node_modules\youtubei.js\lib\parser\index.js:33:48)
at Parser.parse (D:\code\demo\youtubeReptile\node_modules\youtubei.js\lib\parser\index.js:37:22)
at Innertube.getComments (D:\code\demo\youtubeReptile\node_modules\youtubei.js\lib\Innertube.js:239:8)
at processTicksAndRejections (internal/process/task_queues.js:95:5)
(Use `node --trace-warnings ...` to show where the warning was created)
(node:17108) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1)
(node:17108) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
```
### Expected behavior
return normal response with 0 comment count.
### Current behavior
throw out an error when the comment count of the video is 0
### Version
Default
### Anything else?
_No response_
### Checklist
- [X] I am running the latest version.
- [X] I checked the documentation and found no answer.
- [X] I have searched the existing issues and made sure this is not a duplicate.
- [X] I have provided sufficient information. | non_main | getcomments will throw out an error when the comment count of the video is steps to reproduce example find a video that with no coment for example take its video id and invoke getcoments function failure logs shell node unhandledpromiserejectionwarning error expected to find onresponsereceivedendpoints with content commentrenderer but got undefined at object findnode d code demo youtubereptile node modules youtubei js lib utils utils js at parsecomments d code demo youtubereptile node modules youtubei js lib parser index js at parser processcomments d code demo youtubereptile node modules youtubei js lib parser index js at object comments d code demo youtubereptile node modules youtubei js lib parser index js at parser parse d code demo youtubereptile node modules youtubei js lib parser index js at innertube getcomments d code demo youtubereptile node modules youtubei js lib innertube js at processticksandrejections internal process task queues js use node trace warnings to show where the warning was created node unhandledpromiserejectionwarning unhandled promise rejection this error originated either by throwing inside of an async function without a catch block or by rejecting a promise which was not handled with catch to terminate the node process on unhandled promise rejection use the cli flag unhandled rejections strict see rejection id node deprecationwarning unhandled promise rejections are deprecated in the future promise rejections that are not handled will terminate the node js process with a non zero exit code expected behavior return normal response with comment count current behavior throw out an error when the comment count of the video is version default anything else no response checklist i am running the latest version i checked the documentation and found no answer i have searched the existing issues and made sure this is not a duplicate i have provided sufficient information | 0 |
408,187 | 11,942,930,902 | IssuesEvent | 2020-04-02 22:00:12 | leveler-dba/leveler | https://api.github.com/repos/leveler-dba/leveler | closed | spike: venmo payment link issues | bug priority 1 | as we chatted about during sprint planning there's an intermittent issue with some people's venmo payment links becoming appended to the end of the leveler url | 1.0 | spike: venmo payment link issues - as we chatted about during sprint planning there's an intermittent issue with some people's venmo payment links becoming appended to the end of the leveler url | non_main | spike venmo payment link issues as we chatted about during sprint planning there s an intermittent issue with some people s venmo payment links becoming appended to the end of the leveler url | 0 |
1,763 | 6,575,013,535 | IssuesEvent | 2017-09-11 14:46:40 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | Cannot pass args to docker container | affects_2.1 cloud docker feature_idea waiting_on_maintainer | ##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
docker_container
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.1.1.0
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
N/A
##### SUMMARY
<!--- Explain the problem briefly -->
Sometimes there is a need to pass additional arguments when invoking `docker run`. Currently, there is no such option, so it led us to running docker containers from shell.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes below -->
```
- docker_container:
name: consul
image: progrium/consul
args: -server
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
```
| True | Cannot pass args to docker container - ##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
docker_container
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.1.1.0
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
N/A
##### SUMMARY
<!--- Explain the problem briefly -->
Sometimes there is a need to pass additional arguments when invoking `docker run`. Currently, there is no such option, so it led us to running docker containers from shell.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes below -->
```
- docker_container:
name: consul
image: progrium/consul
args: -server
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
```
| main | cannot pass args to docker container issue type feature idea component name docker container ansible version ansible configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific n a summary sometimes there is a need to pass additional arguments when invoking docker run currently there is no such option so it led us to running docker containers from shell steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used docker container name consul image progrium consul args server expected results actual results | 1 |
2,656 | 9,087,410,050 | IssuesEvent | 2019-02-18 13:41:12 | RDIL/area4 | https://api.github.com/repos/RDIL/area4 | closed | The move... | pinned subject: maintaining-project | PSA: Soon this repo will be moved to `github.com/area4lib`. All collaborators should be receiving an invite soon. | True | The move... - PSA: Soon this repo will be moved to `github.com/area4lib`. All collaborators should be receiving an invite soon. | main | the move psa soon this repo will be moved to github com all collaborators should be receiving an invite soon | 1 |
47,562 | 25,072,553,979 | IssuesEvent | 2022-11-07 13:17:25 | oracle/opengrok | https://api.github.com/repos/oracle/opengrok | closed | xref watcher should use thread pool with >1 parallelism level | bug indexer performance | In my run within IDEA, I noticed this in one of the thread snapshots. There were 30 threads (running with `--threads 32`) parking at this spot:
```
"ForkJoinPool-1-worker-37" #22 daemon prio=5 os_prio=0 cpu=7362,01ms elapsed=223,51s tid=0x00007f7300285800 nid=0x3d33b waiting on condition [0x00007f72eb2f6000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.16/Native Method)
- parking to wait for <merged>(a java.util.concurrent.CompletableFuture$Signaller)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.16/LockSupport.java:194)
at java.util.concurrent.CompletableFuture$Signaller.block(java.base@11.0.16/CompletableFuture.java:1796)
at java.util.concurrent.ForkJoinPool.managedBlock(java.base@11.0.16/ForkJoinPool.java:3118)
at java.util.concurrent.CompletableFuture.waitingGet(java.base@11.0.16/CompletableFuture.java:1823)
at java.util.concurrent.CompletableFuture.get(java.base@11.0.16/CompletableFuture.java:1998)
at org.opengrok.indexer.analysis.plain.PlainAnalyzer.analyze(PlainAnalyzer.java:176)
at org.opengrok.indexer.analysis.AnalyzerGuru.populateDocument(AnalyzerGuru.java:626)
at org.opengrok.indexer.index.IndexDatabase.addFile(IndexDatabase.java:1068)
at org.opengrok.indexer.index.IndexDatabase.lambda$indexParallel$4(IndexDatabase.java:1687)
at org.opengrok.indexer.index.IndexDatabase$$Lambda$274/0x00007f72eb688968.apply(Unknown Source)
at java.util.stream.Collectors.lambda$groupingByConcurrent$59(java.base@11.0.16/Collectors.java:1304)
at java.util.stream.Collectors$$Lambda$276/0x00007f72eb2fed08.accept(java.base@11.0.16/Unknown
```
which matches https://github.com/oracle/opengrok/blob/8a7aa08e2f11ac1acfba209e18a1444600fe18df/opengrok-indexer/src/main/java/org/opengrok/indexer/analysis/plain/PlainAnalyzer.java#L168-L175
and interestingly the executor is instantiated like this: https://github.com/oracle/opengrok/blob/8a7aa08e2f11ac1acfba209e18a1444600fe18df/opengrok-indexer/src/main/java/org/opengrok/indexer/index/IndexerParallelizer.java#L249-L253
so it looks like there is just a [single thread in the pool](https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/ScheduledThreadPoolExecutor.html#ScheduledThreadPoolExecutor(int,%20java.util.concurrent.ThreadFactory)). Not completely sure if this is the problem, though.
_Originally posted by @vladak in https://github.com/oracle/opengrok/discussions/4089#discussioncomment-4057808_
| True | xref watcher should use thread pool with >1 parallelism level - In my run within IDEA, I noticed this in one of the thread snapshots. There were 30 threads (running with `--threads 32`) parking at this spot:
```
"ForkJoinPool-1-worker-37" #22 daemon prio=5 os_prio=0 cpu=7362,01ms elapsed=223,51s tid=0x00007f7300285800 nid=0x3d33b waiting on condition [0x00007f72eb2f6000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.16/Native Method)
- parking to wait for <merged>(a java.util.concurrent.CompletableFuture$Signaller)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.16/LockSupport.java:194)
at java.util.concurrent.CompletableFuture$Signaller.block(java.base@11.0.16/CompletableFuture.java:1796)
at java.util.concurrent.ForkJoinPool.managedBlock(java.base@11.0.16/ForkJoinPool.java:3118)
at java.util.concurrent.CompletableFuture.waitingGet(java.base@11.0.16/CompletableFuture.java:1823)
at java.util.concurrent.CompletableFuture.get(java.base@11.0.16/CompletableFuture.java:1998)
at org.opengrok.indexer.analysis.plain.PlainAnalyzer.analyze(PlainAnalyzer.java:176)
at org.opengrok.indexer.analysis.AnalyzerGuru.populateDocument(AnalyzerGuru.java:626)
at org.opengrok.indexer.index.IndexDatabase.addFile(IndexDatabase.java:1068)
at org.opengrok.indexer.index.IndexDatabase.lambda$indexParallel$4(IndexDatabase.java:1687)
at org.opengrok.indexer.index.IndexDatabase$$Lambda$274/0x00007f72eb688968.apply(Unknown Source)
at java.util.stream.Collectors.lambda$groupingByConcurrent$59(java.base@11.0.16/Collectors.java:1304)
at java.util.stream.Collectors$$Lambda$276/0x00007f72eb2fed08.accept(java.base@11.0.16/Unknown
```
which matches https://github.com/oracle/opengrok/blob/8a7aa08e2f11ac1acfba209e18a1444600fe18df/opengrok-indexer/src/main/java/org/opengrok/indexer/analysis/plain/PlainAnalyzer.java#L168-L175
and interestingly the executor is instantiated like this: https://github.com/oracle/opengrok/blob/8a7aa08e2f11ac1acfba209e18a1444600fe18df/opengrok-indexer/src/main/java/org/opengrok/indexer/index/IndexerParallelizer.java#L249-L253
so it looks like there is just a [single thread in the pool](https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/ScheduledThreadPoolExecutor.html#ScheduledThreadPoolExecutor(int,%20java.util.concurrent.ThreadFactory)). Not completely sure if this is the problem, though.
_Originally posted by @vladak in https://github.com/oracle/opengrok/discussions/4089#discussioncomment-4057808_
| non_main | xref watcher should use thread pool with parallelism level in my run within idea i noticed this in one of the thread snapshots there were threads running with threads parking at this spot forkjoinpool worker daemon prio os prio cpu elapsed tid nid waiting on condition java lang thread state waiting parking at jdk internal misc unsafe park java base native method parking to wait for a java util concurrent completablefuture signaller at java util concurrent locks locksupport park java base locksupport java at java util concurrent completablefuture signaller block java base completablefuture java at java util concurrent forkjoinpool managedblock java base forkjoinpool java at java util concurrent completablefuture waitingget java base completablefuture java at java util concurrent completablefuture get java base completablefuture java at org opengrok indexer analysis plain plainanalyzer analyze plainanalyzer java at org opengrok indexer analysis analyzerguru populatedocument analyzerguru java at org opengrok indexer index indexdatabase addfile indexdatabase java at org opengrok indexer index indexdatabase lambda indexparallel indexdatabase java at org opengrok indexer index indexdatabase lambda apply unknown source at java util stream collectors lambda groupingbyconcurrent java base collectors java at java util stream collectors lambda accept java base unknown which matches and interestingly the executor is instantiated like this so it looks like there is just a not completely sure if this is the problem though originally posted by vladak in | 0 |
1,115 | 4,989,020,312 | IssuesEvent | 2016-12-08 10:23:36 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | apt_repository: does not understand arch= option | affects_2.1 feature_idea waiting_on_maintainer | In my systems I have a lot of repositories with an arch-qualifier.
e.g.:
```
deb [arch=ppc64el] http://ftp.debian.org/debian/ sid main
deb [arch=amd64,i386] http://dl.google.com/linux/talkplugin/deb/ stable main
```
apt_repository does not understand them and tries to re-add that repos, actually duplicating them.
Moreover I'd like to be able to specify these option in ansible. Currently available options are (from sources.list(5):
```
· arch=arch1,arch2,... can be used to specify for which architectures information should be downloaded. If this option is not set all architectures
defined by the APT::Architectures option will be downloaded.
· arch+=arch1,arch2,... and arch-=arch1,arch2,... which can be used to add/remove architectures from the set which will be downloaded.
· trusted=yes can be set to indicate that packages from this source are always authenticated even if the Release file is not signed or the signature can't
be checked. This disables parts of apt-secure(8) and should therefore only be used in a local and trusted context. trusted=no is the opposite which
handles even correctly authenticated sources as not authenticated.
```
these options are surrounded by square brackets, and can or cannot have leading and trailing spaces between the options and the brackets.
| True | apt_repository: does not understand arch= option - I'm my systems I have a lot of repositories with an arch-qualifier.
e.g.:
```
deb [arch=ppc64el] http://ftp.debian.org/debian/ sid main
deb [arch=amd64,i386] http://dl.google.com/linux/talkplugin/deb/ stable main
```
apt_repository does not understand them and tries to re-add that repos, actually duplicating them.
Moreover I'd like to be able to specify these option in ansible. Currently available options are (from sources.list(5):
```
· arch=arch1,arch2,... can be used to specify for which architectures information should be downloaded. If this option is not set all architectures
defined by the APT::Architectures option will be downloaded.
· arch+=arch1,arch2,... and arch-=arch1,arch2,... which can be used to add/remove architectures from the set which will be downloaded.
· trusted=yes can be set to indicate that packages from this source are always authenticated even if the Release file is not signed or the signature can't
be checked. This disables parts of apt-secure(8) and should therefore only be used in a local and trusted context. trusted=no is the opposite which
handles even correctly authenticated sources as not authenticated.
```
these options are surrounded by square brackets, and can or cannot have leading and trailing spaces between the options and the brackets.
| main | apt repository does not understand arch option i m my systems i have a lot of repositories with an arch qualifier e g deb sid main deb stable main apt repository does not understand them and tries to re add that repos actually duplicating them moreover i d like to be able to specify these option in ansible currently available options are from sources list · arch can be used to specify for which architectures information should be downloaded if this option is not set all architectures defined by the apt architectures option will be downloaded · arch and arch which can be used to add remove architectures from the set which will be downloaded · trusted yes can be set to indicate that packages from this source are always authenticated even if the release file is not signed or the signature can t be checked this disables parts of apt secure and should therefore only be used in a local and trusted context trusted no is the opposite which handles even correctly authenticated sources as not authenticated these options are surrounded by square brackets and can or cannot have leading and trailing spaces between the options and the brackets | 1 |
2,812 | 10,059,748,136 | IssuesEvent | 2019-07-22 17:09:59 | clearlinux/swupd-client | https://api.github.com/repos/clearlinux/swupd-client | closed | Move verify_file to consistent location | maintainability | The `verify_file` function is called in several unexpected and obscure parts of the code, making it hard to determine which codepaths are actually verifying file hashes and which are not. The `verify_file` function should be moved to a more visible location that touches all code paths.
The best location that I can tell is in `do_staging` which is called by all paths that actually put files on the system. | True | Move verify_file to consistent location - The `verify_file` function is called in several unexpected and obscure parts of the code, making it hard to determine which codepaths are actually verifying file hashes and which are not. The `verify_file` function should be moved to a more visible location that touches all code paths.
The best location that I can tell is in `do_staging` which is called by all paths that actually put files on the system. | main | move verify file to consistent location the verify file function is called in several unexpected and obscure parts of the code making it hard to determine which codepaths are actually verifying file hashes and which are not the verify file function should be moved to a more visible location that touches all code paths the best location that i can tell is in do staging which is called by all paths that actually put files on the system | 1 |
35,058 | 4,963,517,920 | IssuesEvent | 2016-12-03 08:41:19 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | closed | github.com/cockroachdb/cockroach/vendor/cloud.google.com/go/logging/apiv2: (unknown) failed under stress | Robot test-failure | SHA: https://github.com/cockroachdb/cockroach/commits/3b96bf09c468253ae24064665b2fa2fa1796f417
Parameters:
```
COCKROACH_PROPOSER_EVALUATED_KV=
TAGS=
GOFLAGS=-race
```
Stress build found a failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=75248&tab=buildLog
```
Makefile:231: .bootstrap: No such file or directory
git submodule update --init
cmd ./pkg/cmd/github-post [[0;32mOK[0m]
cmd ./pkg/cmd/github-pull-request-make [[0;32mOK[0m]
cmd ./pkg/cmd/glock-diff-parser [[0;32mOK[0m]
cmd ./pkg/cmd/metacheck [[0;32mOK[0m]
cmd ./pkg/cmd/protoc-gen-gogoroach [[0;32mOK[0m]
cmd ./pkg/cmd/teamcity-trigger [[0;32mOK[0m]
cmd ./vendor/github.com/client9/misspell/cmd/misspell [[0;32mOK[0m]
cmd ./vendor/github.com/cockroachdb/c-protobuf/cmd/protoc [[0;32mOK[0m]
cmd ./vendor/github.com/cockroachdb/crlfmt [[0;32mOK[0m]
cmd ./vendor/github.com/cockroachdb/stress [[0;32mOK[0m]
cmd ./vendor/github.com/golang/lint/golint [[0;32mOK[0m]
cmd ./vendor/github.com/grpc-ecosystem/grpc-gateway/protoc-gen [[0;32mOK[0m]
cmd ./vendor/github.com/jteeuwen/go-bindata/go-bindata [[0;32mOK[0m]
cmd ./vendor/github.com/kisielk/errcheck [[0;32mOK[0m]
cmd ./vendor/github.com/kkaneda/returncheck [[0;32mOK[0m]
cmd ./vendor/github.com/mattn/goveralls [[0;32mOK[0m]
cmd ./vendor/github.com/mdempsky/unconvert [[0;32mOK[0m]
cmd ./vendor/github.com/mibk/dupl [[0;32mOK[0m]
cmd ./vendor/github.com/robfig/glock [[0;32mOK[0m]
cmd ./vendor/github.com/wadey/gocovmerge [[0;32mOK[0m]
cmd ./vendor/golang.org/x/tools/cmd/goimports [[0;32mOK[0m]
cmd ./vendor/golang.org/x/tools/cmd/goyacc [[0;32mOK[0m]
cmd ./vendor/golang.org/x/tools/cmd/stringer [[0;32mOK[0m]
touch .bootstrap
go list -tags '' -f 'go test -v -race -tags '\'''\'' -ldflags '\'''\'' -i -c {{.ImportPath}} -o {{.Dir}}/stress.test && (cd {{.Dir}} && if [ -f stress.test ]; then stress -maxtime 15m -maxfails 1 -stderr ./stress.test -test.run '\''.'\'' -test.timeout 30m -test.v; fi)' github.com/cockroachdb/cockroach/vendor/cloud.google.com/go/logging/apiv2 | /bin/bash
vendor/cloud.google.com/go/logging/apiv2/logging_client.go:30:2: cannot find package "google.golang.org/genproto/googleapis/api/monitoredres" in any of:
/go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/genproto/googleapis/api/monitoredres (vendor tree)
/usr/local/go/src/google.golang.org/genproto/googleapis/api/monitoredres (from $GOROOT)
/go/src/google.golang.org/genproto/googleapis/api/monitoredres (from $GOPATH)
vendor/cloud.google.com/go/logging/apiv2/config_client.go:30:2: cannot find package "google.golang.org/genproto/googleapis/logging/v2" in any of:
/go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/genproto/googleapis/logging/v2 (vendor tree)
/usr/local/go/src/google.golang.org/genproto/googleapis/logging/v2 (from $GOROOT)
/go/src/google.golang.org/genproto/googleapis/logging/v2 (from $GOPATH)
Makefile:138: recipe for target 'stress' failed
make: *** [stress] Error 1
``` | 1.0 | github.com/cockroachdb/cockroach/vendor/cloud.google.com/go/logging/apiv2: (unknown) failed under stress - SHA: https://github.com/cockroachdb/cockroach/commits/3b96bf09c468253ae24064665b2fa2fa1796f417
Parameters:
```
COCKROACH_PROPOSER_EVALUATED_KV=
TAGS=
GOFLAGS=-race
```
Stress build found a failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=75248&tab=buildLog
```
Makefile:231: .bootstrap: No such file or directory
git submodule update --init
cmd ./pkg/cmd/github-post [[0;32mOK[0m]
cmd ./pkg/cmd/github-pull-request-make [[0;32mOK[0m]
cmd ./pkg/cmd/glock-diff-parser [[0;32mOK[0m]
cmd ./pkg/cmd/metacheck [[0;32mOK[0m]
cmd ./pkg/cmd/protoc-gen-gogoroach [[0;32mOK[0m]
cmd ./pkg/cmd/teamcity-trigger [[0;32mOK[0m]
cmd ./vendor/github.com/client9/misspell/cmd/misspell [[0;32mOK[0m]
cmd ./vendor/github.com/cockroachdb/c-protobuf/cmd/protoc [[0;32mOK[0m]
cmd ./vendor/github.com/cockroachdb/crlfmt [[0;32mOK[0m]
cmd ./vendor/github.com/cockroachdb/stress [[0;32mOK[0m]
cmd ./vendor/github.com/golang/lint/golint [[0;32mOK[0m]
cmd ./vendor/github.com/grpc-ecosystem/grpc-gateway/protoc-gen [[0;32mOK[0m]
cmd ./vendor/github.com/jteeuwen/go-bindata/go-bindata [[0;32mOK[0m]
cmd ./vendor/github.com/kisielk/errcheck [[0;32mOK[0m]
cmd ./vendor/github.com/kkaneda/returncheck [[0;32mOK[0m]
cmd ./vendor/github.com/mattn/goveralls [[0;32mOK[0m]
cmd ./vendor/github.com/mdempsky/unconvert [[0;32mOK[0m]
cmd ./vendor/github.com/mibk/dupl [[0;32mOK[0m]
cmd ./vendor/github.com/robfig/glock [[0;32mOK[0m]
cmd ./vendor/github.com/wadey/gocovmerge [[0;32mOK[0m]
cmd ./vendor/golang.org/x/tools/cmd/goimports [[0;32mOK[0m]
cmd ./vendor/golang.org/x/tools/cmd/goyacc [[0;32mOK[0m]
cmd ./vendor/golang.org/x/tools/cmd/stringer [[0;32mOK[0m]
touch .bootstrap
go list -tags '' -f 'go test -v -race -tags '\'''\'' -ldflags '\'''\'' -i -c {{.ImportPath}} -o {{.Dir}}/stress.test && (cd {{.Dir}} && if [ -f stress.test ]; then stress -maxtime 15m -maxfails 1 -stderr ./stress.test -test.run '\''.'\'' -test.timeout 30m -test.v; fi)' github.com/cockroachdb/cockroach/vendor/cloud.google.com/go/logging/apiv2 | /bin/bash
vendor/cloud.google.com/go/logging/apiv2/logging_client.go:30:2: cannot find package "google.golang.org/genproto/googleapis/api/monitoredres" in any of:
/go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/genproto/googleapis/api/monitoredres (vendor tree)
/usr/local/go/src/google.golang.org/genproto/googleapis/api/monitoredres (from $GOROOT)
/go/src/google.golang.org/genproto/googleapis/api/monitoredres (from $GOPATH)
vendor/cloud.google.com/go/logging/apiv2/config_client.go:30:2: cannot find package "google.golang.org/genproto/googleapis/logging/v2" in any of:
/go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/genproto/googleapis/logging/v2 (vendor tree)
/usr/local/go/src/google.golang.org/genproto/googleapis/logging/v2 (from $GOROOT)
/go/src/google.golang.org/genproto/googleapis/logging/v2 (from $GOPATH)
Makefile:138: recipe for target 'stress' failed
make: *** [stress] Error 1
``` | non_main | github com cockroachdb cockroach vendor cloud google com go logging unknown failed under stress sha parameters cockroach proposer evaluated kv tags goflags race stress build found a failed test makefile bootstrap no such file or directory git submodule update init cmd pkg cmd github post cmd pkg cmd github pull request make cmd pkg cmd glock diff parser cmd pkg cmd metacheck cmd pkg cmd protoc gen gogoroach cmd pkg cmd teamcity trigger cmd vendor github com misspell cmd misspell cmd vendor github com cockroachdb c protobuf cmd protoc cmd vendor github com cockroachdb crlfmt cmd vendor github com cockroachdb stress cmd vendor github com golang lint golint cmd vendor github com grpc ecosystem grpc gateway protoc gen cmd vendor github com jteeuwen go bindata go bindata cmd vendor github com kisielk errcheck cmd vendor github com kkaneda returncheck cmd vendor github com mattn goveralls cmd vendor github com mdempsky unconvert cmd vendor github com mibk dupl cmd vendor github com robfig glock cmd vendor github com wadey gocovmerge cmd vendor golang org x tools cmd goimports cmd vendor golang org x tools cmd goyacc cmd vendor golang org x tools cmd stringer touch bootstrap go list tags f go test v race tags ldflags i c importpath o dir stress test cd dir if then stress maxtime maxfails stderr stress test test run test timeout test v fi github com cockroachdb cockroach vendor cloud google com go logging bin bash vendor cloud google com go logging logging client go cannot find package google golang org genproto googleapis api monitoredres in any of go src github com cockroachdb cockroach vendor google golang org genproto googleapis api monitoredres vendor tree usr local go src google golang org genproto googleapis api monitoredres from goroot go src google golang org genproto googleapis api monitoredres from gopath vendor cloud google com go logging config client go cannot find package google golang org genproto googleapis logging in any of go src github com cockroachdb cockroach vendor google golang org genproto googleapis logging vendor tree usr local go src google golang org genproto googleapis logging from goroot go src google golang org genproto googleapis logging from gopath makefile recipe for target stress failed make error | 0
56,372 | 3,079,504,061 | IssuesEvent | 2015-08-21 16:37:23 | pavel-pimenov/flylinkdc-r5xx | https://api.github.com/repos/pavel-pimenov/flylinkdc-r5xx | closed | On first launch of the program without settings, English is set in the settings instead of Russian. | bug Component-Logic Component-UI imported Priority-High Usability | _From [a.rain...@gmail.com](https://code.google.com/u/117892482479228821242/) on November 13, 2011 19:14:02_
On the first launch of the program, English is selected on the "General" settings tab. However, until the settings are opened, the language is Russian, as it should be.
**Attachment:** [Без-имени-1.png](http://code.google.com/p/flylinkdc/issues/detail?id=593)
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=593_ | 1.0 | non_main | 0
176,004 | 13,623,041,800 | IssuesEvent | 2020-09-24 05:23:09 | hpi-swa/smalltalkCI | https://api.github.com/repos/hpi-swa/smalltalkCI | closed | Make it easier to use Travis CI for projects on SmalltalkHub | All Dialects testing | Hi @estebanlm,
I understand you are one of the SmalltalkHub admins?
For [Fuel](https://github.com/theseion/Fuel), @theseion has set up a Jenkins job that triggers a Travis build when it detects code changes on SmalltalkHub.
I was thinking that it'd be cool to make it easy to use Travis for SmalltalkHub. Unfortunately, one will still need to use a GitHub project, but that repository does not need to contain the actual code.
For this, SmalltalkHub would need to notify Travis on push via a web hook.
How hard would it be to add support for web hooks to SmalltalkHub?
| 1.0 | non_main | 0
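The Jenkins-polling workaround above could be replaced by a web hook that triggers Travis directly. A sketch assuming the Travis API v3 "trigger build" endpoint and a mirror GitHub repository; the request is only assembled here, never sent, and the token is a placeholder:

```python
# Build a Travis API v3 request that would trigger a build of the mirror
# repo when SmalltalkHub reports a push. (Assembled only, not sent.)
import json
import urllib.parse

def travis_trigger_request(repo_slug, branch, token):
    url = ("https://api.travis-ci.com/repo/"
           + urllib.parse.quote(repo_slug, safe="") + "/requests")
    headers = {
        "Travis-API-Version": "3",
        "Authorization": "token " + token,
        "Content-Type": "application/json",
    }
    body = json.dumps({"request": {"branch": branch}})
    return url, headers, body

url, headers, body = travis_trigger_request("theseion/Fuel", "master", "TOKEN")
print(url)  # https://api.travis-ci.com/repo/theseion%2FFuel/requests
```

SmalltalkHub would only need to POST such a request from its push hook; the GitHub repository need not contain the actual code.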
4,209 | 20,756,567,400 | IssuesEvent | 2022-03-15 12:47:06 | Lissy93/dashy | https://api.github.com/repos/Lissy93/dashy | closed | [BUG] Config options in router not honoured | 🐛 Bug 👤 Awaiting Maintainer Response | ### Discussed in https://github.com/Lissy93/dashy/discussions/542
<div type='discussions-op-text'>
<sup>Originally posted by **lephtHanded** March 8, 2022</sup>
heya!
latest update (or my config) seems borked with the latest docker pull.
the `routingMode` and `startingView` options don't seem to have an effect.
i have mine set to `hash` and `minimal` respectively and the page loads to the default view.
links to `../minimal` or `../#/minimal` that I got working from [this thread](https://github.com/Lissy93/dashy/discussions/148#discussioncomment-1660414_) just open the default view now.
</div>
### Probable Cause
The config isn't finished fetching when the [`router`](https://github.com/Lissy93/dashy/blob/master/src/router.js) is initialised | True | main | 1
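Dashy itself is a Vue app, so the following is only a toy asyncio illustration of the race (the config keys come from the report; everything else is invented): initialise the router only after the config fetch has resolved.

```python
# Await the config before building the router, so routingMode and
# startingView are honoured. (Toy model of the described race.)
import asyncio

async def fetch_config():
    await asyncio.sleep(0)  # stand-in for the network request
    return {"routingMode": "hash", "startingView": "minimal"}

async def start_app():
    config = await fetch_config()  # wait BEFORE initialising the router
    return {"mode": config["routingMode"], "view": config["startingView"]}

print(asyncio.run(start_app()))  # {'mode': 'hash', 'view': 'minimal'}
```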
85,154 | 24,525,038,281 | IssuesEvent | 2022-10-11 12:32:37 | microsoft/fluentui | https://api.github.com/repos/microsoft/fluentui | closed | [Feature]: Remove unnecessary checks from screener code | Area: Build System Type: Epic | ### Library
React / v8 (@fluentui/react)
### Describe the feature that you would like added
With the changes from #24292, the ``skipScreener`` environment variable can now be used to determine whether a screener check should be run or not, so other redundant checks and usage of other environment variables can be discarded.
### Have you discussed this feature with our team
ling1726
### Additional context
_No response_
### Validations
- [X] Check that there isn't already an issue that request the same feature to avoid creating a duplicate. | 1.0 | non_main | 0
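The gate described above can be sketched as a single helper (the variable name `skipScreener` comes from the issue; the helper name and defaults are assumptions):

```python
# One environment variable decides whether screener (visual regression)
# checks run, replacing scattered redundant checks.
import os

def should_run_screener(env=None):
    env = os.environ if env is None else env
    return env.get("skipScreener", "false").lower() != "true"

print(should_run_screener({"skipScreener": "true"}))  # False
print(should_run_screener({}))                        # True
```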
362,561 | 25,381,495,573 | IssuesEvent | 2022-11-21 17:55:48 | gnosischain/documentation | https://api.github.com/repos/gnosischain/documentation | closed | Validators - Rewards & Penalties | documentation validators | Current Page:
https://docs.gnosischain.com/node/incentives
## Tasks
- [ ] Check validity of information
### Rewards
- [ ] Calculate and show the current Yield
- [ ] Embed key information: https://dune.xyz/maxaleks/Gnosis-Beacon-Chain-(Deposits)
### Penalties
- [ ] Penalties
- [ ] Link to Penalties docs (?)
- [ ] Emphasize slashing
- [ ] Emphasize inactivity leaks (including mass-inactivity leaks)
| 1.0 | non_main | 0
279,452 | 21,160,802,065 | IssuesEvent | 2022-04-07 09:11:19 | metal3-io/metal3-docs | https://api.github.com/repos/metal3-io/metal3-docs | closed | User-guide: add introduction document for IPAM | kind/documentation triage/accepted | Write down below sections for the Introduction section to ip-address-manager section in the user-guide book.
Describe:
- What is IPAM
- Why and when should/can it be used
- Its relationship with CAPM3
/kind documentation
/help | 1.0 | non_main | 0
137,197 | 20,092,915,559 | IssuesEvent | 2022-02-06 03:09:02 | oshi/oshi | https://api.github.com/repos/oshi/oshi | closed | Use Maven Profiles for finer-grained control over releases | external dependency documentation design discussion performance maven dependencies github_actions | We should switch our pom to use [Maven Profiles](https://maven.apache.org/guides/introduction/introduction-to-profiles.html) to help address several current workflows and issues we're trying to address with band-aid fixes.
1. JPMS release
2. Site publishing
3. Shaded JAR vs. JMPS conflicts
**JPMS.** I decided against an MRJAR for the modular build, instead doing a full release of a separate project. Initially I thought I might try to make some JDK11+ optimizations in the code, but eventually decided it wasn't worth the effort. Right now it's essentially a clone of master with a few things changed in the POM files (for maven coordinates or site publishing) or removed from the POM (for compatibility or speed), plus the module descriptor and a related config file adding command lines for the test environment. See this commit that basically sums up the differences: https://github.com/oshi/oshi/commit/ff455f78a81b8f677357ed15b3e69bb4c974fa3b
One problem here is that my current approach (fork the master branch, apply the above commit, and regularly cherry-pick every other commit, occasionally restarting from a fresh fork) is manually intensive (with lots of silly conflict resolution in the cherry-picking). Another problem is that the Java11 SNAPSHOT doesn't get updated until I go through the above exercise.
All of this can be handled in the POM with profiles. The incompatible or time-consuming plugins/goals (animal-sniffer) can be configured to only run on a jdk8 profile (the default) and we can streamline a jdk11 release by only including things necessary to publish the artifacts. We can configure the snapshot deployment GHA to run both profiles sequentially.
**Site Publishing.** I'll admit that I've never really used the site. OK, not never. Maybe once or twice. It never made any sense to me that the reports on things to fix (PMD, Checkstyle, etc.) got published on a release... those are the things that should have been fixed *before* the release! In any case, our Sonarqube integration and other code quality tools (LGTM, Scrutinizer, Sonatype-lift, Coverity, CodeQL) are pretty dang good at catching things, and I'm not sure any of the reporting is really that useful.
Now with #1819 there's an interest in reorganizing the documentation, and in fact our "site" now looks different. But in my experience unless you have your own custom URL the place most people will visit is the README so it is the place that needs tuning up. There are lots of things we could do (bumping stuff down into a single directory to make the content appear earlier, streamlining content with links to pages) to improve the site. But that should be updated when there's a major change to the docs, not just on release.
Plus it's now become a painful part of the release:
> Not exactly sure why this is taking so long. Waffle does too. But I've never had others take so long. Maven does seem to do things incredibly backwards here.
>
> What I mean by that...you have to run the site first then deploy it so that is really mvn site followed by mvn site:deploy as far as I know. So you have the site sitting in the target directory. Wagon performs a checkout which looks like all immediately to Temp space. It then appears to copy one file at a time to new location. They even claim they do no cleanup of old stuff so it doesn't make any sense that they don't just copy recursively, commit, and push. That would take seconds not hours. In fact, I'm pretty sure that is how the other plugin was working.
_Originally posted by @hazendaz in https://github.com/oshi/oshi/issues/1854#issuecomment-1018287534_
We can just remove the site stuff from the regular profile and deploy it only when we want to (trigger the site GHA or push master to site branch I think does it).
**Shaded JAR.**
The thing that triggered the shaded jar incompatibility was an automatic module name, our first step toward JPMS support before actually making a modular release. Since we have a full modular release with a full module descriptor, we could probably remove that, to see if that fixes the javadoc plugin issues. (That will change the module name back to the default "oshi.core" but people shouldn't be using that, and perhaps that "breakage" will be a forcing function to switch to the correct artifact. In fact, it may actually be Harmful™️ to have both; people can include `oshi-core` in a modular project now when they should be including `oshi-core-java11`.
This is one other thing we can do with the profiles, only do the `oshi-core-java11` release with the modular stuff, and put the rest of the "jdk8 support" release back into full "we don't talk about that" mode.
| 1.0 | non_main | 0
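A profiles layout matching the proposal could look roughly like this. This is a sketch, not OSHI's actual pom; the profile ids and plugin placement are assumptions:

```xml
<profiles>
  <!-- Default profile: JDK 8 build with the slow compatibility checks -->
  <profile>
    <id>jdk8</id>
    <activation>
      <activeByDefault>true</activeByDefault>
    </activation>
    <!-- animal-sniffer and other incompatible/slow plugins go here -->
  </profile>
  <!-- Streamlined modular release: only what is needed to publish artifacts -->
  <profile>
    <id>jdk11</id>
    <activation>
      <jdk>[11,)</jdk>
    </activation>
    <!-- module descriptor compilation and deploy plugins go here -->  
  </profile>
</profiles>
```

The snapshot-deployment workflow could then run both profiles sequentially, as suggested above.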
4,554 | 23,724,886,711 | IssuesEvent | 2022-08-30 18:36:45 | rustsec/advisory-db | https://api.github.com/repos/rustsec/advisory-db | closed | `rusttype` is unmaintained | Unmaintained | See: https://gitlab.redox-os.org/redox-os/rusttype/-/issues/148
The author says they don't plan on making any new releases and suggests their new [`ab_glyph`](https://github.com/alexheretic/ab-glyph) crate as a successor. | True | main | 1
8,662 | 27,172,055,000 | IssuesEvent | 2023-02-17 20:24:54 | OneDrive/onedrive-api-docs | https://api.github.com/repos/OneDrive/onedrive-api-docs | closed | Deleting subscriptions as a whole | automation:Closed | Currently the only way to delete a subscription is to know the access token or respond to the OneDrive server with errors.
During development (and sometimes production), this can be a annoying problem with multiple subscriptions coming back if not deleted.
Are there any plans to have a different mechanism to delete subscriptions, perhaps using the application token? | 1.0 | non_main | 0
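A cleanup pass along the lines asked for above might look like this. It assumes the Microsoft Graph `/subscriptions` endpoints; the helper only assembles the requests (no network calls), and the function name and ids are made up:

```python
# Assemble the Graph calls needed to clean up leftover webhook subscriptions:
# one GET to enumerate, then a DELETE per subscription id.
GRAPH = "https://graph.microsoft.com/v1.0"

def cleanup_requests(subscription_ids):
    """Return (method, url) pairs; sending them is left to the caller."""
    reqs = [("GET", GRAPH + "/subscriptions")]
    reqs += [("DELETE", GRAPH + "/subscriptions/" + sid)
             for sid in subscription_ids]
    return reqs

for method, url in cleanup_requests(["abc-123"]):
    print(method, url)
```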
465,660 | 13,389,850,051 | IssuesEvent | 2020-09-02 19:33:43 | NMGRL/pychron | https://api.github.com/repos/NMGRL/pychron | closed | Error when fitting Felix Air IC to Felix Data | Bug Data Reduction Data Specific Duplicate Pipeline Priority | Traceback (most recent call last):
File "/Users/argonlab2/miniconda2/envs/pychron3/lib/python3.7/site-packages/pyface/ui/qt4/action/action_item.py", line 371, in _qt4_on_triggered
self.controller.perform(action, action_event)
File "/Users/argonlab2/miniconda2/envs/pychron3/lib/python3.7/site-packages/pyface/tasks/action/task_action_controller.py", line 31, in perform
return action.perform(event)
File "/Users/argonlab2/miniconda2/envs/pychron3/lib/python3.7/site-packages/pyface/action/listening_action.py", line 74, in perform
method()
File "/Users/argonlab2/.pychron.1/updates/pychron/pipeline/tasks/task.py", line 348, in run
self._run_pipeline()
File "/Users/argonlab2/.pychron.1/updates/pychron/pipeline/tasks/task.py", line 535, in _run_pipeline
self._run('run pipeline', 'run_pipeline')
File "/Users/argonlab2/.pychron.1/updates/pychron/pipeline/tasks/task.py", line 514, in _run
if not getattr(self.engine, func)():
File "/Users/argonlab2/.pychron.1/updates/pychron/pipeline/engine.py", line 728, in run_pipeline
node.run(state)
File "/Users/argonlab2/.pychron.1/updates/pychron/pipeline/nodes/fit.py", line 118, in run
self.editor.force_update(force=True)
File "/Users/argonlab2/.pychron.1/updates/pychron/pipeline/plot/editors/figure_editor.py", line 85, in force_update
model.refresh(force=force)
File "/Users/argonlab2/.pychron.1/updates/pychron/pipeline/plot/models/figure_model.py", line 43, in refresh
p.make_graph()
File "/Users/argonlab2/.pychron.1/updates/pychron/pipeline/plot/panels/figure_panel.py", line 120, in make_graph
fig.plot(plots, legend)
File "/Users/argonlab2/.pychron.1/updates/pychron/pipeline/plot/plotter/references_series.py", line 94, in plot
self._new_fit_series(i, p)
File "/Users/argonlab2/.pychron.1/updates/pychron/pipeline/plot/plotter/references_series.py", line 145, in _new_fit_series
args = self._plot_references(pid, po)
File "/Users/argonlab2/.pychron.1/updates/pychron/pipeline/plot/plotter/references_series.py", line 259, in _plot_references
data = self.reference_data(po)
File "/Users/argonlab2/.pychron.1/updates/pychron/pipeline/plot/plotter/references_series.py", line 180, in reference_data
ans, xs, ys = self._get_reference_data(po)
File "/Users/argonlab2/.pychron.1/updates/pychron/pipeline/plot/plotter/icfactor.py", line 78, in _get_reference_data
rys = nys / dys
File "/Users/argonlab2/miniconda2/envs/pychron3/lib/python3.7/site-packages/uncertainties/core.py", line 697, in f_with_affine_output
f_nominal_value = f(*args_values, **kwargs)
ZeroDivisionError: float division by zero
| 1.0 | non_main | 0
195,019 | 6,901,906,834 | IssuesEvent | 2017-11-25 14:08:09 | s1kx/unison | https://api.github.com/repos/s1kx/unison | opened | Socket duplication issue | point: 8 priority: highest type:bug type:chore | The log shows that this prints again after a day or two https://github.com/s1kx/unison/blob/master/bot.go#L269
and that behavior repeats, creating N connections and causing the bot to reply N times to any command, run the same hook N times, etc.
fishysnake (in Discord Gophers) pointed out it might be related to https://discordapp.com/developers/docs/topics/gateway#resuming
This is where the listener is added: https://github.com/s1kx/unison/blob/master/bot.go#L171 | 1.0 | non_main | 0
3,770 | 15,834,650,429 | IssuesEvent | 2021-04-06 17:03:04 | zaproxy/zaproxy | https://api.github.com/repos/zaproxy/zaproxy | closed | Add-ons Log4j 2.x Uplift | Maintainability add-on tracker | Align add-ons with core logging changes per:
* https://github.com/zaproxy/zaproxy/pull/6228
* https://github.com/zaproxy/zaproxy/pull/6327
See also: http://logging.apache.org/log4j/2.x/manual/api.html
- [x] accessControl (Done in: https://github.com/zaproxy/zap-extensions/pull/2690)
- [x] alertFilters (Done in: https://github.com/zaproxy/zap-extensions/pull/2687)
- [x] alertReport (Done in: https://github.com/zaproxy/zap-extensions/pull/2681)
- [x] allinonenotes (Done in: https://github.com/zaproxy/zap-extensions/pull/2682)
- [x] ascanrules (Done in: https://github.com/zaproxy/zap-extensions/pull/2673)
- [x] ascanrulesAlpha (Done in: https://github.com/zaproxy/zap-extensions/pull/2689)
- [x] ascanrulesBeta (Done in: https://github.com/zaproxy/zap-extensions/pull/2715)
- [x] authstats (Done in: https://github.com/zaproxy/zap-extensions/pull/2716)
- [x] beanshell (Done in: https://github.com/zaproxy/zap-extensions/pull/2717)
- [x] birtreports (Done in: https://github.com/zaproxy/zap-extensions/pull/2777)
- [x] browserView (Done in: https://github.com/zaproxy/zap-extensions/pull/2718)
- [x] bruteforce (Done in: zaproxy/zap-extensions#2818)
- [x] bugtracker (Done in: https://github.com/zaproxy/zap-extensions/pull/2719)
- [x] callgraph (Done in: https://github.com/zaproxy/zap-extensions/pull/2720)
- [x] codedx (Done in: https://github.com/zaproxy/zap-extensions/pull/2721)
- [x] commonlib (Done in: https://github.com/zaproxy/zap-extensions/pull/2722)
- [x] custompayloads (Done in: zaproxy/zap-extensions#2828)
- ~~customreport~~
- [x] diff (Done in: https://github.com/zaproxy/zap-extensions/pull/2728)
- [x] domxss (Done in: https://github.com/zaproxy/zap-extensions/pull/2729)
- [x] encoder (Done in: https://github.com/zaproxy/zap-extensions/pull/2730)
- ~~exportreport~~
- [x] formhandler (Done in: https://github.com/zaproxy/zap-extensions/pull/2732)
- [x] frontendscanner (Done in: https://github.com/zaproxy/zap-extensions/pull/2733)
- [x] fuzz (Done in: zaproxy/zap-extensions#2804)
- [x] gettingStarted (Done in: https://github.com/zaproxy/zap-extensions/pull/2734)
- [x] graphql (Done in: https://github.com/zaproxy/zap-extensions/pull/2742)
- [x] httpsInfo (Done in: https://github.com/zaproxy/zap-extensions/pull/2775)
- [x] hud (Done in: zaproxy/zap-hud#920)
- [x] imagelocationscanner (Done in: https://github.com/zaproxy/zap-extensions/pull/2674)
- [x] importLogFiles (Done in: https://github.com/zaproxy/zap-extensions/pull/2743)
- [x] importurls (Done in: https://github.com/zaproxy/zap-extensions/pull/2744)
- [x] invoke (Done in: https://github.com/zaproxy/zap-extensions/pull/2724)
- [x] jython (Done in: https://github.com/zaproxy/zap-extensions/pull/2746)
- [x] openapi (Done in: https://github.com/zaproxy/zap-extensions/pull/2767)
- [x] plugnhack (Done in: https://github.com/zaproxy/zap-extensions/pull/2747)
- [x] portscan (Done in: https://github.com/zaproxy/zap-extensions/pull/2748)
- [x] pscanrules (Done in: https://github.com/zaproxy/zap-extensions/pull/2749)
- [x] pscanrulesAlpha (Done in: https://github.com/zaproxy/zap-extensions/pull/2774)
- [x] pscanrulesBeta (Done in: https://github.com/zaproxy/zap-extensions/pull/2680)
- [x] quickstart (Done in: https://github.com/zaproxy/zap-extensions/pull/2776)
- [x] replacer (Done in: https://github.com/zaproxy/zap-extensions/pull/2800)
- [x] requester (Done in: https://github.com/zaproxy/zap-extensions/pull/2750)
- [x] retire (Done in: https://github.com/zaproxy/zap-extensions/pull/2741)
- [x] reveal (Done in: zaproxy/zap-extensions#2801)
- [x] revisit (Done in: zaproxy/zap-extensions#2826)
- [x] saml (Done in: zaproxy/zap-extensions#2823)
- [x] saverawmessage (Done in: https://github.com/zaproxy/zap-extensions/pull/2672)
- [x] savexmlmessage (Done in: zaproxy/zap-extensions#2805)
- [x] scripts (Done in: zaproxy/zap-extensions#2807)
- [x] selenium (Done in: zaproxy/zap-extensions#2806)
- [x] sequence (Done in: zaproxy/zap-extensions#2809)
- [x] simpleexample (Done in: zaproxy/zap-extensions#2808)
- [x] soap (Done in: zaproxy/zap-extensions#2810)
- [x] spiderAjax (Done in: zaproxy/zap-extensions#2811)
- [x] sqliplugin (Done in: zaproxy/zap-extensions#2822)
- [x] sse (Done in: zaproxy/zap-extensions#2814)
- [x] tlsdebug (Done in: zaproxy/zap-extensions#2815)
- [x] todo (Done in: zaproxy/zap-extensions#2816)
- [x] tokengen (Done in: zaproxy/zap-extensions#2820)
- [x] viewstate (Done in: zaproxy/zap-extensions#2817)
- [x] wappalyzer (Done in: https://github.com/zaproxy/zap-extensions/pull/2679)
- ~~wavsepRpt~~
- [x] websocket (Done in: zaproxy/zap-extensions#2813)
- [x] zestAddOn (Done in: zaproxy/zap-extensions#2812) | True | main | 1
1,353 | 5,829,570,324 | IssuesEvent | 2017-05-08 14:52:00 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | [Regression] ios_command module failing with paramiko.hostkeys.InvalidHostKey | affects_2.2 bug_report networking waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
`ios_command`
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.2.0.0
config file = /home/rob/code/ansible/anchor-ansible/ansible.cfg
configured module search path = ['../ntc-ansible/library', '../napalm-ansible/library']
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
[defaults]
host_key_checking = False
inventory = ./hosts
library = ../ntc-ansible/library:../napalm-ansible/library
log_path = ./logfile
retry_files_save_path = ./retry/
forks = 50
[paramiko_connection]
record_host_keys = False
[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=60s -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
Ansible Host Machine: Ubuntu 16.04
Connecting to Cisco routers/switches
##### SUMMARY
<!--- Explain the problem briefly -->
Using the below playbook to pull running configs from a number of Cisco devices, I keep getting the below error for many of the devices. This is only part of the error output.
These are devices I can ssh to successfully, both manually and with third-party modules. Also note that the invalid host key reported is the same for every device.
```
paramiko.hostkeys.InvalidHostKey: ('10.250.0.28 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCqfdQq3Qd77fugHDUAEMHrwz87klhADA7ysowuc74l20Jj8KCaCubacRMhi9KRFcAsXtAQlUV2krnGrdMksWjsOeBTUb7IxV4SW+65VD8lGKPYZLJATuUfnD1pJMwYCAb8eiukcMNNxgtG7M8lGEEF8kDNn5H5kozxFoIHS4MP9Fn7SGvpWPVMrHnipNJFB3RdDJyDHee5nmEf3wawsMmAs7sqK+utjzKSCpGHFjkxNS7cw0kKA5F/fn8g/PectES+ZqoUz7Hr6AKExQOVFzMFzFXq/IgAxrRSz9gzqana5xNlEGEHY9/j4ICXmbz88LQzp1QK+XTP3gthuYWzofoKb', Error('Incorrect padding',))
```
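The `Incorrect padding` error is a base64 failure: the traceback shows paramiko's `HostKeyEntry.from_line` choking while parsing a `known_hosts` line, which means the key blob on that line is not valid base64. A stdlib-only sketch of the underlying failure (the sample key is illustrative, not the real one from the error):

```python
import base64
import binascii

# A well-formed base64 blob, the format known_hosts uses for public keys.
good = base64.b64encode(b"ssh-rsa-key-bytes").decode()

# Dropping a character leaves a length that is not a multiple of 4,
# which is exactly the condition that yields Error('Incorrect padding',).
truncated = good[:-1]

try:
    base64.b64decode(truncated)
except binascii.Error as exc:
    print(exc)  # -> Incorrect padding
```

This is consistent with the same "invalid" key appearing for every target host: the bad entry lives in the control machine's `known_hosts` file, not on the devices.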
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes below -->
```
---
- name: Rented Leasehold Superfast Routers Show Run to Local File
gather_facts: no
hosts: rl_superfast
tasks:
- name: Execute Show Run Command
ios_command:
provider: "{{ provider }}"
commands:
- show run
register: output
- name: Write Output to File
template:
src: output.txt.j2
dest: "./files/show_files/rl_superfast/superfast/{{ ansible_host }}.txt"
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
Using module file /usr/local/lib/python2.7/dist-packages/ansible/modules/core/network/ios/ios_command.py
<10.250.0.38> ESTABLISH LOCAL CONNECTION FOR USER: rob
<10.250.0.38> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1479917911.16-22520618528361 `" && echo ansible-tmp-1479917911.16-22520618528361="` echo $HOME/.ansible/tmp/ansible-tmp-1479917911.16-22520618528361 `" ) && sleep 0'
<10.250.0.38> PUT /tmp/tmpvgXT4L TO /home/rob/.ansible/tmp/ansible-tmp-1479917911.16-22520618528361/ios_command.py
<10.250.0.38> EXEC /bin/sh -c 'chmod u+x /home/rob/.ansible/tmp/ansible-tmp-1479917911.16-22520618528361/ /home/rob/.ansible/tmp/ansible-tmp-1479917911.16-22520618528361/ios_command.py && sleep 0'
<10.250.0.38> EXEC /bin/sh -c '/usr/bin/python /home/rob/.ansible/tmp/ansible-tmp-1479917911.16-22520618528361/ios_command.py; rm -rf "/home/rob/.ansible/tmp/ansible-tmp-1479917911.16-22520618528361/" > /dev/null 2>&1 && sleep 0'
An exception occurred during task execution. The full traceback is:
Traceback (most recent call last):
File "/tmp/ansible_POTd5B/ansible_module_ios_command.py", line 237, in <module>
main()
File "/tmp/ansible_POTd5B/ansible_module_ios_command.py", line 200, in main
runner.add_command(**cmd)
File "/tmp/ansible_POTd5B/ansible_modlib.zip/ansible/module_utils/netcli.py", line 147, in add_command
File "/tmp/ansible_POTd5B/ansible_modlib.zip/ansible/module_utils/network.py", line 117, in cli
File "/tmp/ansible_POTd5B/ansible_modlib.zip/ansible/module_utils/network.py", line 148, in connect
File "/tmp/ansible_POTd5B/ansible_modlib.zip/ansible/module_utils/ios.py", line 180, in connect
File "/tmp/ansible_POTd5B/ansible_modlib.zip/ansible/module_utils/shell.py", line 226, in connect
File "/tmp/ansible_POTd5B/ansible_modlib.zip/ansible/module_utils/shell.py", line 76, in open
File "/usr/local/lib/python2.7/dist-packages/paramiko/client.py", line 101, in load_system_host_keys
self._system_host_keys.load(filename)
File "/usr/local/lib/python2.7/dist-packages/paramiko/hostkeys.py", line 101, in load
e = HostKeyEntry.from_line(line, lineno)
File "/usr/local/lib/python2.7/dist-packages/paramiko/hostkeys.py", line 341, in from_line
raise InvalidHostKey(line, e)
paramiko.hostkeys.InvalidHostKey: ('10.250.0.28 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCqfdQq3Qd77fugHDUAEMHrwz87klhADA7ysowuc74l20Jj8KCaCubacRMhi9KRFcAsXtAQlUV2krnGrdMksWjsOeBTUb7IxV4SW+65VD8lGKPYZLJATuUfnD1pJMwYCAb8eiukcMNNxgtG7M8lGEEF8kDNn5H5kozxFoIHS4MP9Fn7SGvpWPVMrHnipNJFB3RdDJyDHee5nmEf3wawsMmAs7sqK+utjzKSCpGHFjkxNS7cw0kKA5F/fn8g/PectES+ZqoUz7Hr6AKExQOVFzMFzFXq/IgAxrRSz9gzqana5xNlEGEHY9/j4ICXmbz88LQzp1QK+XTP3gthuYWzofoKb', Error('Incorrect padding',))
fatal: [10.250.0.38]: FAILED! => {
"changed": false,
"failed": true,
"invocation": {
"module_name": "ios_command"
},
"module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_POTd5B/ansible_module_ios_command.py\", line 237, in <module>\n main()\n File \"/tmp/ansible_POTd5B/ansible_module_ios_command.py\", line 200, in main\n runner.add_command(**cmd)\n File \"/tmp/ansible_POTd5B/ansible_modlib.zip/ansible/module_utils/netcli.py\", line 147, in add_command\n File \"/tmp/ansible_POTd5B/ansible_modlib.zip/ansible/module_utils/network.py\", line 117, in cli\n File \"/tmp/ansible_POTd5B/ansible_modlib.zip/ansible/module_utils/network.py\", line 148, in connect\n File \"/tmp/ansible_POTd5B/ansible_modlib.zip/ansible/module_utils/ios.py\", line 180, in connect\n File \"/tmp/ansible_POTd5B/ansible_modlib.zip/ansible/module_utils/shell.py\", line 226, in connect\n File \"/tmp/ansible_POTd5B/ansible_modlib.zip/ansible/module_utils/shell.py\", line 76, in open\n File \"/usr/local/lib/python2.7/dist-packages/paramiko/client.py\", line 101, in load_system_host_keys\n self._system_host_keys.load(filename)\n File \"/usr/local/lib/python2.7/dist-packages/paramiko/hostkeys.py\", line 101, in load\n e = HostKeyEntry.from_line(line, lineno)\n File \"/usr/local/lib/python2.7/dist-packages/paramiko/hostkeys.py\", line 341, in from_line\n raise InvalidHostKey(line, e)\nparamiko.hostkeys.InvalidHostKey: ('10.250.0.28 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCqfdQq3Qd77fugHDUAEMHrwz87klhADA7ysowuc74l20Jj8KCaCubacRMhi9KRFcAsXtAQlUV2krnGrdMksWjsOeBTUb7IxV4SW+65VD8lGKPYZLJATuUfnD1pJMwYCAb8eiukcMNNxgtG7M8lGEEF8kDNn5H5kozxFoIHS4MP9Fn7SGvpWPVMrHnipNJFB3RdDJyDHee5nmEf3wawsMmAs7sqK+utjzKSCpGHFjkxNS7cw0kKA5F/fn8g/PectES+ZqoUz7Hr6AKExQOVFzMFzFXq/IgAxrRSz9gzqana5xNlEGEHY9/j4ICXmbz88LQzp1QK+XTP3gthuYWzofoKb', Error('Incorrect padding',))\n",
"module_stdout": "",
"msg": "MODULE FAILURE"
}
Using module file /usr/local/lib/python2.7/dist-packages/ansible/modules/core/network/ios/ios_command.py
<10.250.0.26> ESTABLISH LOCAL CONNECTION FOR USER: rob
<10.250.0.26> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1479917911.73-151987012680090 `" && echo ansible-tmp-1479917911.73-151987012680090="` echo $HOME/.ansible/tmp/ansible-tmp-1479917911.73-151987012680090 `" ) && sleep 0'
<10.250.0.26> PUT /tmp/tmpMkESve TO /home/rob/.ansible/tmp/ansible-tmp-1479917911.73-151987012680090/ios_command.py
<10.250.0.26> EXEC /bin/sh -c 'chmod u+x /home/rob/.ansible/tmp/ansible-tmp-1479917911.73-151987012680090/ /home/rob/.ansible/tmp/ansible-tmp-1479917911.73-151987012680090/ios_command.py && sleep 0'
<10.250.0.26> EXEC /bin/sh -c '/usr/bin/python /home/rob/.ansible/tmp/ansible-tmp-1479917911.73-151987012680090/ios_command.py; rm -rf "/home/rob/.ansible/tmp/ansible-tmp-1479917911.73-151987012680090/" > /dev/null 2>&1 && sleep 0'
An exception occurred during task execution. The full traceback is:
Traceback (most recent call last):
File "/tmp/ansible_Wmwv85/ansible_module_ios_command.py", line 237, in <module>
main()
File "/tmp/ansible_Wmwv85/ansible_module_ios_command.py", line 200, in main
runner.add_command(**cmd)
File "/tmp/ansible_Wmwv85/ansible_modlib.zip/ansible/module_utils/netcli.py", line 147, in add_command
File "/tmp/ansible_Wmwv85/ansible_modlib.zip/ansible/module_utils/network.py", line 117, in cli
File "/tmp/ansible_Wmwv85/ansible_modlib.zip/ansible/module_utils/network.py", line 148, in connect
File "/tmp/ansible_Wmwv85/ansible_modlib.zip/ansible/module_utils/ios.py", line 180, in connect
File "/tmp/ansible_Wmwv85/ansible_modlib.zip/ansible/module_utils/shell.py", line 226, in connect
File "/tmp/ansible_Wmwv85/ansible_modlib.zip/ansible/module_utils/shell.py", line 76, in open
File "/usr/local/lib/python2.7/dist-packages/paramiko/client.py", line 101, in load_system_host_keys
self._system_host_keys.load(filename)
File "/usr/local/lib/python2.7/dist-packages/paramiko/hostkeys.py", line 101, in load
e = HostKeyEntry.from_line(line, lineno)
File "/usr/local/lib/python2.7/dist-packages/paramiko/hostkeys.py", line 341, in from_line
raise InvalidHostKey(line, e)
paramiko.hostkeys.InvalidHostKey: ('10.250.0.28 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCqfdQq3Qd77fugHDUAEMHrwz87klhADA7ysowuc74l20Jj8KCaCubacRMhi9KRFcAsXtAQlUV2krnGrdMksWjsOeBTUb7IxV4SW+65VD8lGKPYZLJATuUfnD1pJMwYCAb8eiukcMNNxgtG7M8lGEEF8kDNn5H5kozxFoIHS4MP9Fn7SGvpWPVMrHnipNJFB3RdDJyDHee5nmEf3wawsMmAs7sqK+utjzKSCpGHFjkxNS7cw0kKA5F/fn8g/PectES+ZqoUz7Hr6AKExQOVFzMFzFXq/IgAxrRSz9gzqana5xNlEGEHY9/j4ICXmbz88LQzp1QK+XTP3gthuYWzofoKb', Error('Incorrect padding',))
fatal: [10.250.0.26]: FAILED! => {
"changed": false,
"failed": true,
"invocation": {
"module_name": "ios_command"
},
"module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_Wmwv85/ansible_module_ios_command.py\", line 237, in <module>\n main()\n File \"/tmp/ansible_Wmwv85/ansible_module_ios_command.py\", line 200, in main\n runner.add_command(**cmd)\n File \"/tmp/ansible_Wmwv85/ansible_modlib.zip/ansible/module_utils/netcli.py\", line 147, in add_command\n File \"/tmp/ansible_Wmwv85/ansible_modlib.zip/ansible/module_utils/network.py\", line 117, in cli\n File \"/tmp/ansible_Wmwv85/ansible_modlib.zip/ansible/module_utils/network.py\", line 148, in connect\n File \"/tmp/ansible_Wmwv85/ansible_modlib.zip/ansible/module_utils/ios.py\", line 180, in connect\n File \"/tmp/ansible_Wmwv85/ansible_modlib.zip/ansible/module_utils/shell.py\", line 226, in connect\n File \"/tmp/ansible_Wmwv85/ansible_modlib.zip/ansible/module_utils/shell.py\", line 76, in open\n File \"/usr/local/lib/python2.7/dist-packages/paramiko/client.py\", line 101, in load_system_host_keys\n self._system_host_keys.load(filename)\n File \"/usr/local/lib/python2.7/dist-packages/paramiko/hostkeys.py\", line 101, in load\n e = HostKeyEntry.from_line(line, lineno)\n File \"/usr/local/lib/python2.7/dist-packages/paramiko/hostkeys.py\", line 341, in from_line\n raise InvalidHostKey(line, e)\nparamiko.hostkeys.InvalidHostKey: ('10.250.0.28 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCqfdQq3Qd77fugHDUAEMHrwz87klhADA7ysowuc74l20Jj8KCaCubacRMhi9KRFcAsXtAQlUV2krnGrdMksWjsOeBTUb7IxV4SW+65VD8lGKPYZLJATuUfnD1pJMwYCAb8eiukcMNNxgtG7M8lGEEF8kDNn5H5kozxFoIHS4MP9Fn7SGvpWPVMrHnipNJFB3RdDJyDHee5nmEf3wawsMmAs7sqK+utjzKSCpGHFjkxNS7cw0kKA5F/fn8g/PectES+ZqoUz7Hr6AKExQOVFzMFzFXq/IgAxrRSz9gzqana5xNlEGEHY9/j4ICXmbz88LQzp1QK+XTP3gthuYWzofoKb', Error('Incorrect padding',))\n",
"module_stdout": "",
"msg": "MODULE FAILURE"
}
Using module file /usr/local/lib/python2.7/dist-packages/ansible/modules/core/network/ios/ios_command.py
<10.250.0.27> ESTABLISH LOCAL CONNECTION FOR USER: rob
<10.250.0.27> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1479917912.28-248435414010347 `" && echo ansible-tmp-1479917912.28-248435414010347="` echo $HOME/.ansible/tmp/ansible-tmp-1479917912.28-248435414010347 `" ) && sleep 0'
<10.250.0.27> PUT /tmp/tmpVwd3xv TO /home/rob/.ansible/tmp/ansible-tmp-1479917912.28-248435414010347/ios_command.py
<10.250.0.27> EXEC /bin/sh -c 'chmod u+x /home/rob/.ansible/tmp/ansible-tmp-1479917912.28-248435414010347/ /home/rob/.ansible/tmp/ansible-tmp-1479917912.28-248435414010347/ios_command.py && sleep 0'
<10.250.0.27> EXEC /bin/sh -c '/usr/bin/python /home/rob/.ansible/tmp/ansible-tmp-1479917912.28-248435414010347/ios_command.py; rm -rf "/home/rob/.ansible/tmp/ansible-tmp-1479917912.28-248435414010347/" > /dev/null 2>&1 && sleep 0'
An exception occurred during task execution. The full traceback is:
Traceback (most recent call last):
File "/tmp/ansible_N6os2U/ansible_module_ios_command.py", line 237, in <module>
main()
File "/tmp/ansible_N6os2U/ansible_module_ios_command.py", line 200, in main
runner.add_command(**cmd)
File "/tmp/ansible_N6os2U/ansible_modlib.zip/ansible/module_utils/netcli.py", line 147, in add_command
File "/tmp/ansible_N6os2U/ansible_modlib.zip/ansible/module_utils/network.py", line 117, in cli
File "/tmp/ansible_N6os2U/ansible_modlib.zip/ansible/module_utils/network.py", line 148, in connect
File "/tmp/ansible_N6os2U/ansible_modlib.zip/ansible/module_utils/ios.py", line 180, in connect
File "/tmp/ansible_N6os2U/ansible_modlib.zip/ansible/module_utils/shell.py", line 226, in connect
File "/tmp/ansible_N6os2U/ansible_modlib.zip/ansible/module_utils/shell.py", line 76, in open
File "/usr/local/lib/python2.7/dist-packages/paramiko/client.py", line 101, in load_system_host_keys
self._system_host_keys.load(filename)
File "/usr/local/lib/python2.7/dist-packages/paramiko/hostkeys.py", line 101, in load
e = HostKeyEntry.from_line(line, lineno)
File "/usr/local/lib/python2.7/dist-packages/paramiko/hostkeys.py", line 341, in from_line
raise InvalidHostKey(line, e)
paramiko.hostkeys.InvalidHostKey: ('10.250.0.28 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCqfdQq3Qd77fugHDUAEMHrwz87klhADA7ysowuc74l20Jj8KCaCubacRMhi9KRFcAsXtAQlUV2krnGrdMksWjsOeBTUb7IxV4SW+65VD8lGKPYZLJATuUfnD1pJMwYCAb8eiukcMNNxgtG7M8lGEEF8kDNn5H5kozxFoIHS4MP9Fn7SGvpWPVMrHnipNJFB3RdDJyDHee5nmEf3wawsMmAs7sqK+utjzKSCpGHFjkxNS7cw0kKA5F/fn8g/PectES+ZqoUz7Hr6AKExQOVFzMFzFXq/IgAxrRSz9gzqana5xNlEGEHY9/j4ICXmbz88LQzp1QK+XTP3gthuYWzofoKb', Error('Incorrect padding',))
fatal: [10.250.0.27]: FAILED! => {
"changed": false,
"failed": true,
"invocation": {
"module_name": "ios_command"
},
"module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_N6os2U/ansible_module_ios_command.py\", line 237, in <module>\n main()\n File \"/tmp/ansible_N6os2U/ansible_module_ios_command.py\", line 200, in main\n runner.add_command(**cmd)\n File \"/tmp/ansible_N6os2U/ansible_modlib.zip/ansible/module_utils/netcli.py\", line 147, in add_command\n File \"/tmp/ansible_N6os2U/ansible_modlib.zip/ansible/module_utils/network.py\", line 117, in cli\n File \"/tmp/ansible_N6os2U/ansible_modlib.zip/ansible/module_utils/network.py\", line 148, in connect\n File \"/tmp/ansible_N6os2U/ansible_modlib.zip/ansible/module_utils/ios.py\", line 180, in connect\n File \"/tmp/ansible_N6os2U/ansible_modlib.zip/ansible/module_utils/shell.py\", line 226, in connect\n File \"/tmp/ansible_N6os2U/ansible_modlib.zip/ansible/module_utils/shell.py\", line 76, in open\n File \"/usr/local/lib/python2.7/dist-packages/paramiko/client.py\", line 101, in load_system_host_keys\n self._system_host_keys.load(filename)\n File \"/usr/local/lib/python2.7/dist-packages/paramiko/hostkeys.py\", line 101, in load\n e = HostKeyEntry.from_line(line, lineno)\n File \"/usr/local/lib/python2.7/dist-packages/paramiko/hostkeys.py\", line 341, in from_line\n raise InvalidHostKey(line, e)\nparamiko.hostkeys.InvalidHostKey: ('10.250.0.28 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCqfdQq3Qd77fugHDUAEMHrwz87klhADA7ysowuc74l20Jj8KCaCubacRMhi9KRFcAsXtAQlUV2krnGrdMksWjsOeBTUb7IxV4SW+65VD8lGKPYZLJATuUfnD1pJMwYCAb8eiukcMNNxgtG7M8lGEEF8kDNn5H5kozxFoIHS4MP9Fn7SGvpWPVMrHnipNJFB3RdDJyDHee5nmEf3wawsMmAs7sqK+utjzKSCpGHFjkxNS7cw0kKA5F/fn8g/PectES+ZqoUz7Hr6AKExQOVFzMFzFXq/IgAxrRSz9gzqana5xNlEGEHY9/j4ICXmbz88LQzp1QK+XTP3gthuYWzofoKb', Error('Incorrect padding',))\n",
"module_stdout": "",
"msg": "MODULE FAILURE"
}
```
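Since the traceback shows paramiko loading `~/.ssh/known_hosts` via `load_system_host_keys` even with `host_key_checking = False`, a practical workaround is to locate and delete the malformed entry (the `10.250.0.28` line in the error above). A hedged, stdlib-only sketch that flags entries whose key field is not valid base64 (simplified: real `known_hosts` lines may also carry markers such as `@cert-authority` or hashed hostnames):

```python
import base64
import binascii

def find_bad_entries(lines):
    """Return (lineno, line) pairs whose key field fails base64 decoding."""
    bad = []
    for lineno, line in enumerate(lines, start=1):
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        fields = line.split()
        if len(fields) < 3:
            bad.append((lineno, line))  # too few fields to hold a key
            continue
        try:
            base64.b64decode(fields[2], validate=True)
        except binascii.Error:
            bad.append((lineno, line))
    return bad

sample = [
    "10.250.0.27 ssh-rsa " + base64.b64encode(b"valid-key").decode(),
    "10.250.0.28 ssh-rsa AAAAB3Nza",  # truncated blob: Incorrect padding
]
for lineno, line in find_bad_entries(sample):
    print(f"line {lineno}: {line.split()[0]}")  # -> line 2: 10.250.0.28
```

Pointing the function at the real file (e.g. `open(Path.home() / ".ssh" / "known_hosts")`) and removing the flagged line should let `load_system_host_keys` succeed again.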
| True | [Regression] ios_command module failing with paramiko.hostkeys.InvalidHostKey - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
`ios_command`
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.2.0.0
config file = /home/rob/code/ansible/anchor-ansible/ansible.cfg
configured module search path = ['../ntc-ansible/library', '../napalm-ansible/library']
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
[defaults]
host_key_checking = False
inventory = ./hosts
library = ../ntc-ansible/library:../napalm-ansible/library
log_path = ./logfile
retry_files_save_path = ./retry/
forks = 50
[paramiko_connection]
record_host_keys = False
[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=60s -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
Ansible Host Machine: Ubuntu 16.04
Connecting to Cisco routers/switches
##### SUMMARY
<!--- Explain the problem briefly -->
Using the below playbook to pull running configs from a number of cisco devices, I keep getting the below errors for many fo the devices.
```
```
This is only part of the error output.
These are devices I can successfully ssh to manually and with third party modules. Also note that the invalid host key is the same for each different device.
```
paramiko.hostkeys.InvalidHostKey: ('10.250.0.28 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCqfdQq3Qd77fugHDUAEMHrwz87klhADA7ysowuc74l20Jj8KCaCubacRMhi9KRFcAsXtAQlUV2krnGrdMksWjsOeBTUb7IxV4SW+65VD8lGKPYZLJATuUfnD1pJMwYCAb8eiukcMNNxgtG7M8lGEEF8kDNn5H5kozxFoIHS4MP9Fn7SGvpWPVMrHnipNJFB3RdDJyDHee5nmEf3wawsMmAs7sqK+utjzKSCpGHFjkxNS7cw0kKA5F/fn8g/PectES+ZqoUz7Hr6AKExQOVFzMFzFXq/IgAxrRSz9gzqana5xNlEGEHY9/j4ICXmbz88LQzp1QK+XTP3gthuYWzofoKb', Error('Incorrect padding',))
```
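The `Error('Incorrect padding',)` wrapped inside the error above is a base64 failure: paramiko base64-decodes the key field of every `known_hosts` line while loading system host keys, so a single corrupted entry (here the one for `10.250.0.28`) aborts the whole connection attempt, even though `host_key_checking = False` is set. A minimal sketch of the underlying decode failure, using made-up key material:

```python
import base64
import binascii

# A well-formed ssh-rsa key blob is base64 text, so its length is a
# multiple of 4 once padding is included. The key material below is
# invented purely for illustration.
good_blob = base64.b64encode(b"fake-ssh-rsa-key-material").decode()
base64.b64decode(good_blob)  # decodes cleanly

# A truncated or corrupted known_hosts entry breaks that invariant,
# and base64 raises the very error paramiko wraps in InvalidHostKey.
corrupted_blob = good_blob[:-2]
try:
    base64.b64decode(corrupted_blob)
except binascii.Error as exc:
    print(exc)  # Incorrect padding
```

If that diagnosis is right, removing or restoring the corrupted `10.250.0.28` line in the control machine's `~/.ssh/known_hosts` (the default file read by paramiko's `load_system_host_keys()`) should clear the failure.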
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes below -->
```
---
- name: Rented Leasehold Superfast Routers Show Run to Local File
gather_facts: no
hosts: rl_superfast
tasks:
- name: Execute Show Run Command
ios_command:
provider: "{{ provider }}"
commands:
- show run
register: output
- name: Write Output to File
template:
src: output.txt.j2
dest: "./files/show_files/rl_superfast/superfast/{{ ansible_host }}.txt"
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
Using module file /usr/local/lib/python2.7/dist-packages/ansible/modules/core/network/ios/ios_command.py
<10.250.0.38> ESTABLISH LOCAL CONNECTION FOR USER: rob
<10.250.0.38> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1479917911.16-22520618528361 `" && echo ansible-tmp-1479917911.16-22520618528361="` echo $HOME/.ansible/tmp/ansible-tmp-1479917911.16-22520618528361 `" ) && sleep 0'
<10.250.0.38> PUT /tmp/tmpvgXT4L TO /home/rob/.ansible/tmp/ansible-tmp-1479917911.16-22520618528361/ios_command.py
<10.250.0.38> EXEC /bin/sh -c 'chmod u+x /home/rob/.ansible/tmp/ansible-tmp-1479917911.16-22520618528361/ /home/rob/.ansible/tmp/ansible-tmp-1479917911.16-22520618528361/ios_command.py && sleep 0'
<10.250.0.38> EXEC /bin/sh -c '/usr/bin/python /home/rob/.ansible/tmp/ansible-tmp-1479917911.16-22520618528361/ios_command.py; rm -rf "/home/rob/.ansible/tmp/ansible-tmp-1479917911.16-22520618528361/" > /dev/null 2>&1 && sleep 0'
An exception occurred during task execution. The full traceback is:
Traceback (most recent call last):
File "/tmp/ansible_POTd5B/ansible_module_ios_command.py", line 237, in <module>
main()
File "/tmp/ansible_POTd5B/ansible_module_ios_command.py", line 200, in main
runner.add_command(**cmd)
File "/tmp/ansible_POTd5B/ansible_modlib.zip/ansible/module_utils/netcli.py", line 147, in add_command
File "/tmp/ansible_POTd5B/ansible_modlib.zip/ansible/module_utils/network.py", line 117, in cli
File "/tmp/ansible_POTd5B/ansible_modlib.zip/ansible/module_utils/network.py", line 148, in connect
File "/tmp/ansible_POTd5B/ansible_modlib.zip/ansible/module_utils/ios.py", line 180, in connect
File "/tmp/ansible_POTd5B/ansible_modlib.zip/ansible/module_utils/shell.py", line 226, in connect
File "/tmp/ansible_POTd5B/ansible_modlib.zip/ansible/module_utils/shell.py", line 76, in open
File "/usr/local/lib/python2.7/dist-packages/paramiko/client.py", line 101, in load_system_host_keys
self._system_host_keys.load(filename)
File "/usr/local/lib/python2.7/dist-packages/paramiko/hostkeys.py", line 101, in load
e = HostKeyEntry.from_line(line, lineno)
File "/usr/local/lib/python2.7/dist-packages/paramiko/hostkeys.py", line 341, in from_line
raise InvalidHostKey(line, e)
paramiko.hostkeys.InvalidHostKey: ('10.250.0.28 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCqfdQq3Qd77fugHDUAEMHrwz87klhADA7ysowuc74l20Jj8KCaCubacRMhi9KRFcAsXtAQlUV2krnGrdMksWjsOeBTUb7IxV4SW+65VD8lGKPYZLJATuUfnD1pJMwYCAb8eiukcMNNxgtG7M8lGEEF8kDNn5H5kozxFoIHS4MP9Fn7SGvpWPVMrHnipNJFB3RdDJyDHee5nmEf3wawsMmAs7sqK+utjzKSCpGHFjkxNS7cw0kKA5F/fn8g/PectES+ZqoUz7Hr6AKExQOVFzMFzFXq/IgAxrRSz9gzqana5xNlEGEHY9/j4ICXmbz88LQzp1QK+XTP3gthuYWzofoKb', Error('Incorrect padding',))
fatal: [10.250.0.38]: FAILED! => {
"changed": false,
"failed": true,
"invocation": {
"module_name": "ios_command"
},
"module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_POTd5B/ansible_module_ios_command.py\", line 237, in <module>\n main()\n File \"/tmp/ansible_POTd5B/ansible_module_ios_command.py\", line 200, in main\n runner.add_command(**cmd)\n File \"/tmp/ansible_POTd5B/ansible_modlib.zip/ansible/module_utils/netcli.py\", line 147, in add_command\n File \"/tmp/ansible_POTd5B/ansible_modlib.zip/ansible/module_utils/network.py\", line 117, in cli\n File \"/tmp/ansible_POTd5B/ansible_modlib.zip/ansible/module_utils/network.py\", line 148, in connect\n File \"/tmp/ansible_POTd5B/ansible_modlib.zip/ansible/module_utils/ios.py\", line 180, in connect\n File \"/tmp/ansible_POTd5B/ansible_modlib.zip/ansible/module_utils/shell.py\", line 226, in connect\n File \"/tmp/ansible_POTd5B/ansible_modlib.zip/ansible/module_utils/shell.py\", line 76, in open\n File \"/usr/local/lib/python2.7/dist-packages/paramiko/client.py\", line 101, in load_system_host_keys\n self._system_host_keys.load(filename)\n File \"/usr/local/lib/python2.7/dist-packages/paramiko/hostkeys.py\", line 101, in load\n e = HostKeyEntry.from_line(line, lineno)\n File \"/usr/local/lib/python2.7/dist-packages/paramiko/hostkeys.py\", line 341, in from_line\n raise InvalidHostKey(line, e)\nparamiko.hostkeys.InvalidHostKey: ('10.250.0.28 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCqfdQq3Qd77fugHDUAEMHrwz87klhADA7ysowuc74l20Jj8KCaCubacRMhi9KRFcAsXtAQlUV2krnGrdMksWjsOeBTUb7IxV4SW+65VD8lGKPYZLJATuUfnD1pJMwYCAb8eiukcMNNxgtG7M8lGEEF8kDNn5H5kozxFoIHS4MP9Fn7SGvpWPVMrHnipNJFB3RdDJyDHee5nmEf3wawsMmAs7sqK+utjzKSCpGHFjkxNS7cw0kKA5F/fn8g/PectES+ZqoUz7Hr6AKExQOVFzMFzFXq/IgAxrRSz9gzqana5xNlEGEHY9/j4ICXmbz88LQzp1QK+XTP3gthuYWzofoKb', Error('Incorrect padding',))\n",
"module_stdout": "",
"msg": "MODULE FAILURE"
}
Using module file /usr/local/lib/python2.7/dist-packages/ansible/modules/core/network/ios/ios_command.py
<10.250.0.26> ESTABLISH LOCAL CONNECTION FOR USER: rob
<10.250.0.26> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1479917911.73-151987012680090 `" && echo ansible-tmp-1479917911.73-151987012680090="` echo $HOME/.ansible/tmp/ansible-tmp-1479917911.73-151987012680090 `" ) && sleep 0'
<10.250.0.26> PUT /tmp/tmpMkESve TO /home/rob/.ansible/tmp/ansible-tmp-1479917911.73-151987012680090/ios_command.py
<10.250.0.26> EXEC /bin/sh -c 'chmod u+x /home/rob/.ansible/tmp/ansible-tmp-1479917911.73-151987012680090/ /home/rob/.ansible/tmp/ansible-tmp-1479917911.73-151987012680090/ios_command.py && sleep 0'
<10.250.0.26> EXEC /bin/sh -c '/usr/bin/python /home/rob/.ansible/tmp/ansible-tmp-1479917911.73-151987012680090/ios_command.py; rm -rf "/home/rob/.ansible/tmp/ansible-tmp-1479917911.73-151987012680090/" > /dev/null 2>&1 && sleep 0'
An exception occurred during task execution. The full traceback is:
Traceback (most recent call last):
File "/tmp/ansible_Wmwv85/ansible_module_ios_command.py", line 237, in <module>
main()
File "/tmp/ansible_Wmwv85/ansible_module_ios_command.py", line 200, in main
runner.add_command(**cmd)
File "/tmp/ansible_Wmwv85/ansible_modlib.zip/ansible/module_utils/netcli.py", line 147, in add_command
File "/tmp/ansible_Wmwv85/ansible_modlib.zip/ansible/module_utils/network.py", line 117, in cli
File "/tmp/ansible_Wmwv85/ansible_modlib.zip/ansible/module_utils/network.py", line 148, in connect
File "/tmp/ansible_Wmwv85/ansible_modlib.zip/ansible/module_utils/ios.py", line 180, in connect
File "/tmp/ansible_Wmwv85/ansible_modlib.zip/ansible/module_utils/shell.py", line 226, in connect
File "/tmp/ansible_Wmwv85/ansible_modlib.zip/ansible/module_utils/shell.py", line 76, in open
File "/usr/local/lib/python2.7/dist-packages/paramiko/client.py", line 101, in load_system_host_keys
self._system_host_keys.load(filename)
File "/usr/local/lib/python2.7/dist-packages/paramiko/hostkeys.py", line 101, in load
e = HostKeyEntry.from_line(line, lineno)
File "/usr/local/lib/python2.7/dist-packages/paramiko/hostkeys.py", line 341, in from_line
raise InvalidHostKey(line, e)
paramiko.hostkeys.InvalidHostKey: ('10.250.0.28 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCqfdQq3Qd77fugHDUAEMHrwz87klhADA7ysowuc74l20Jj8KCaCubacRMhi9KRFcAsXtAQlUV2krnGrdMksWjsOeBTUb7IxV4SW+65VD8lGKPYZLJATuUfnD1pJMwYCAb8eiukcMNNxgtG7M8lGEEF8kDNn5H5kozxFoIHS4MP9Fn7SGvpWPVMrHnipNJFB3RdDJyDHee5nmEf3wawsMmAs7sqK+utjzKSCpGHFjkxNS7cw0kKA5F/fn8g/PectES+ZqoUz7Hr6AKExQOVFzMFzFXq/IgAxrRSz9gzqana5xNlEGEHY9/j4ICXmbz88LQzp1QK+XTP3gthuYWzofoKb', Error('Incorrect padding',))
fatal: [10.250.0.26]: FAILED! => {
"changed": false,
"failed": true,
"invocation": {
"module_name": "ios_command"
},
"module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_Wmwv85/ansible_module_ios_command.py\", line 237, in <module>\n main()\n File \"/tmp/ansible_Wmwv85/ansible_module_ios_command.py\", line 200, in main\n runner.add_command(**cmd)\n File \"/tmp/ansible_Wmwv85/ansible_modlib.zip/ansible/module_utils/netcli.py\", line 147, in add_command\n File \"/tmp/ansible_Wmwv85/ansible_modlib.zip/ansible/module_utils/network.py\", line 117, in cli\n File \"/tmp/ansible_Wmwv85/ansible_modlib.zip/ansible/module_utils/network.py\", line 148, in connect\n File \"/tmp/ansible_Wmwv85/ansible_modlib.zip/ansible/module_utils/ios.py\", line 180, in connect\n File \"/tmp/ansible_Wmwv85/ansible_modlib.zip/ansible/module_utils/shell.py\", line 226, in connect\n File \"/tmp/ansible_Wmwv85/ansible_modlib.zip/ansible/module_utils/shell.py\", line 76, in open\n File \"/usr/local/lib/python2.7/dist-packages/paramiko/client.py\", line 101, in load_system_host_keys\n self._system_host_keys.load(filename)\n File \"/usr/local/lib/python2.7/dist-packages/paramiko/hostkeys.py\", line 101, in load\n e = HostKeyEntry.from_line(line, lineno)\n File \"/usr/local/lib/python2.7/dist-packages/paramiko/hostkeys.py\", line 341, in from_line\n raise InvalidHostKey(line, e)\nparamiko.hostkeys.InvalidHostKey: ('10.250.0.28 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCqfdQq3Qd77fugHDUAEMHrwz87klhADA7ysowuc74l20Jj8KCaCubacRMhi9KRFcAsXtAQlUV2krnGrdMksWjsOeBTUb7IxV4SW+65VD8lGKPYZLJATuUfnD1pJMwYCAb8eiukcMNNxgtG7M8lGEEF8kDNn5H5kozxFoIHS4MP9Fn7SGvpWPVMrHnipNJFB3RdDJyDHee5nmEf3wawsMmAs7sqK+utjzKSCpGHFjkxNS7cw0kKA5F/fn8g/PectES+ZqoUz7Hr6AKExQOVFzMFzFXq/IgAxrRSz9gzqana5xNlEGEHY9/j4ICXmbz88LQzp1QK+XTP3gthuYWzofoKb', Error('Incorrect padding',))\n",
"module_stdout": "",
"msg": "MODULE FAILURE"
}
Using module file /usr/local/lib/python2.7/dist-packages/ansible/modules/core/network/ios/ios_command.py
<10.250.0.27> ESTABLISH LOCAL CONNECTION FOR USER: rob
<10.250.0.27> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1479917912.28-248435414010347 `" && echo ansible-tmp-1479917912.28-248435414010347="` echo $HOME/.ansible/tmp/ansible-tmp-1479917912.28-248435414010347 `" ) && sleep 0'
<10.250.0.27> PUT /tmp/tmpVwd3xv TO /home/rob/.ansible/tmp/ansible-tmp-1479917912.28-248435414010347/ios_command.py
<10.250.0.27> EXEC /bin/sh -c 'chmod u+x /home/rob/.ansible/tmp/ansible-tmp-1479917912.28-248435414010347/ /home/rob/.ansible/tmp/ansible-tmp-1479917912.28-248435414010347/ios_command.py && sleep 0'
<10.250.0.27> EXEC /bin/sh -c '/usr/bin/python /home/rob/.ansible/tmp/ansible-tmp-1479917912.28-248435414010347/ios_command.py; rm -rf "/home/rob/.ansible/tmp/ansible-tmp-1479917912.28-248435414010347/" > /dev/null 2>&1 && sleep 0'
An exception occurred during task execution. The full traceback is:
Traceback (most recent call last):
File "/tmp/ansible_N6os2U/ansible_module_ios_command.py", line 237, in <module>
main()
File "/tmp/ansible_N6os2U/ansible_module_ios_command.py", line 200, in main
runner.add_command(**cmd)
File "/tmp/ansible_N6os2U/ansible_modlib.zip/ansible/module_utils/netcli.py", line 147, in add_command
File "/tmp/ansible_N6os2U/ansible_modlib.zip/ansible/module_utils/network.py", line 117, in cli
File "/tmp/ansible_N6os2U/ansible_modlib.zip/ansible/module_utils/network.py", line 148, in connect
File "/tmp/ansible_N6os2U/ansible_modlib.zip/ansible/module_utils/ios.py", line 180, in connect
File "/tmp/ansible_N6os2U/ansible_modlib.zip/ansible/module_utils/shell.py", line 226, in connect
File "/tmp/ansible_N6os2U/ansible_modlib.zip/ansible/module_utils/shell.py", line 76, in open
File "/usr/local/lib/python2.7/dist-packages/paramiko/client.py", line 101, in load_system_host_keys
self._system_host_keys.load(filename)
File "/usr/local/lib/python2.7/dist-packages/paramiko/hostkeys.py", line 101, in load
e = HostKeyEntry.from_line(line, lineno)
File "/usr/local/lib/python2.7/dist-packages/paramiko/hostkeys.py", line 341, in from_line
raise InvalidHostKey(line, e)
paramiko.hostkeys.InvalidHostKey: ('10.250.0.28 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCqfdQq3Qd77fugHDUAEMHrwz87klhADA7ysowuc74l20Jj8KCaCubacRMhi9KRFcAsXtAQlUV2krnGrdMksWjsOeBTUb7IxV4SW+65VD8lGKPYZLJATuUfnD1pJMwYCAb8eiukcMNNxgtG7M8lGEEF8kDNn5H5kozxFoIHS4MP9Fn7SGvpWPVMrHnipNJFB3RdDJyDHee5nmEf3wawsMmAs7sqK+utjzKSCpGHFjkxNS7cw0kKA5F/fn8g/PectES+ZqoUz7Hr6AKExQOVFzMFzFXq/IgAxrRSz9gzqana5xNlEGEHY9/j4ICXmbz88LQzp1QK+XTP3gthuYWzofoKb', Error('Incorrect padding',))
fatal: [10.250.0.27]: FAILED! => {
"changed": false,
"failed": true,
"invocation": {
"module_name": "ios_command"
},
"module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_N6os2U/ansible_module_ios_command.py\", line 237, in <module>\n main()\n File \"/tmp/ansible_N6os2U/ansible_module_ios_command.py\", line 200, in main\n runner.add_command(**cmd)\n File \"/tmp/ansible_N6os2U/ansible_modlib.zip/ansible/module_utils/netcli.py\", line 147, in add_command\n File \"/tmp/ansible_N6os2U/ansible_modlib.zip/ansible/module_utils/network.py\", line 117, in cli\n File \"/tmp/ansible_N6os2U/ansible_modlib.zip/ansible/module_utils/network.py\", line 148, in connect\n File \"/tmp/ansible_N6os2U/ansible_modlib.zip/ansible/module_utils/ios.py\", line 180, in connect\n File \"/tmp/ansible_N6os2U/ansible_modlib.zip/ansible/module_utils/shell.py\", line 226, in connect\n File \"/tmp/ansible_N6os2U/ansible_modlib.zip/ansible/module_utils/shell.py\", line 76, in open\n File \"/usr/local/lib/python2.7/dist-packages/paramiko/client.py\", line 101, in load_system_host_keys\n self._system_host_keys.load(filename)\n File \"/usr/local/lib/python2.7/dist-packages/paramiko/hostkeys.py\", line 101, in load\n e = HostKeyEntry.from_line(line, lineno)\n File \"/usr/local/lib/python2.7/dist-packages/paramiko/hostkeys.py\", line 341, in from_line\n raise InvalidHostKey(line, e)\nparamiko.hostkeys.InvalidHostKey: ('10.250.0.28 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCqfdQq3Qd77fugHDUAEMHrwz87klhADA7ysowuc74l20Jj8KCaCubacRMhi9KRFcAsXtAQlUV2krnGrdMksWjsOeBTUb7IxV4SW+65VD8lGKPYZLJATuUfnD1pJMwYCAb8eiukcMNNxgtG7M8lGEEF8kDNn5H5kozxFoIHS4MP9Fn7SGvpWPVMrHnipNJFB3RdDJyDHee5nmEf3wawsMmAs7sqK+utjzKSCpGHFjkxNS7cw0kKA5F/fn8g/PectES+ZqoUz7Hr6AKExQOVFzMFzFXq/IgAxrRSz9gzqana5xNlEGEHY9/j4ICXmbz88LQzp1QK+XTP3gthuYWzofoKb', Error('Incorrect padding',))\n",
"module_stdout": "",
"msg": "MODULE FAILURE"
}
```
| main | 1 |
2,175 | 7,624,077,370 | IssuesEvent | 2018-05-03 16:50:37 | RalfKoban/MiKo-Analyzers | https://api.github.com/repos/RalfKoban/MiKo-Analyzers | closed | Methods should not use boolean parameters | Area: analyzer Area: maintainability feature in progress | Boolean parameters should never be used by methods. Instead enums should be used.
See:
https://docs.microsoft.com/en-us/dotnet/standard/design-guidelines/parameter-design | main | 1 |
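For illustration only (the analyzer itself targets C#), a hypothetical Python sketch of the same guideline: the enum call site documents itself where a bare `True` does not.

```python
from enum import Enum

class OverwriteBehavior(Enum):
    KEEP_EXISTING = "keep_existing"
    OVERWRITE = "overwrite"

def save_with_flag(path, overwrite):
    # Call site reads save_with_flag("report.txt", True); the bare
    # True gives no hint about what is being toggled.
    return "overwritten" if overwrite else "kept"

def save(path, behavior):
    # Call site reads save("report.txt", OverwriteBehavior.OVERWRITE),
    # and new behaviors (e.g. RENAME_EXISTING) can be added later
    # without a breaking signature change.
    return "overwritten" if behavior is OverwriteBehavior.OVERWRITE else "kept"
```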
7,578 | 10,687,100,460 | IssuesEvent | 2019-10-22 15:31:19 | geneontology/go-ontology | https://api.github.com/repos/geneontology/go-ontology | closed | GO:0080185 effector-dependent induction by symbiont of host immune response | multi-species process | GO:0080185 effector-dependent induction by symbiont of host immune response
GO:0080185 JSON
effector-dependent induction by symbiont of host immune response
Biological Process
Definition (GO:0080185 GONUTS page)
Any process that involves recognition of an effector, and by which a symbiont activates, maintains or increases the frequency, rate or extent of the immune response of the host organism; the immune response is any immune system process that functions in the calibrated response of an organism to a potential internal or invasive threat. The host is defined as the larger of the organisms involved in a symbiotic interaction. Effectors are proteins secreted into the host cell by pathogenic microbes, presumably to alter host immune response signaling. The best characterized effectors are bacterial effectors delivered into the host cell by type III secretion system (TTSS). Effector-triggered immunity (ETI) involves the direct or indirect recognition of an effector protein by the host (for example through plant resistance or R proteins) and subsequent activation of host immune response. PMID:16497589
We need to discuss what this can be used for.
I think you would only annotate this process from a pathogen perspective if it was intentional?
I.e. necrotrophs.
From the pathogen perspective we would be annotating immune avoidance.
On Friday, I'd like to look at the parentage of this.
| non_main | 0 |
2,517 | 4,737,718,816 | IssuesEvent | 2016-10-20 00:00:32 | 18F/cg-dashboard | https://api.github.com/repos/18F/cg-dashboard | closed | Refine service states | Liberator service-instances | As a user, I want a clear description of the current state of my services.
Notes:
- The Cloud Foundry API doesn't give us an official "stopped" state
- API gives us the last operation (create, update, or delete) and whether or not the operation was successful and we determine a state from this
- From this, we are currently displaying these states: Failed, deleting, processing, running, and stopped
- Ref: https://apidocs.cloudfoundry.org/242/service_instances/retrieve_a_particular_service_instance.html
last_operation | non_main | 0 |
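The notes above could translate into a mapping like this hypothetical sketch (the `type`/`state` values follow the Cloud Foundry v2 `last_operation` shape; the dashboard's actual logic may differ):

```python
def display_state(last_operation):
    """Map a service instance's last_operation ({'type': ..., 'state': ...})
    to one of the dashboard's display states."""
    op = last_operation.get("type")      # "create" | "update" | "delete"
    state = last_operation.get("state")  # "in progress" | "succeeded" | "failed"
    if state == "failed":
        return "failed"
    if state == "in progress":
        return "deleting" if op == "delete" else "processing"
    # A successful delete is the closest thing to "stopped", since the
    # API exposes no official stopped state directly.
    if op == "delete":
        return "stopped"
    return "running"
```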
10,881 | 2,622,512,094 | IssuesEvent | 2015-03-04 03:39:30 | tswast/pywiiuse | https://api.github.com/repos/tswast/pywiiuse | closed | example.py does not recogonize which button I pressed on wii | auto-migrated Priority-Medium Type-Defect | ```
What steps will reproduce the problem?
1. I just connect my wii and ran the program
What is the expected output? What do you see instead?
example.py should tell me which button I pressed, but it does not. It only
knows that an event has happened, but does not know which button I pressed.
What version of the product are you using? On what operating system?
I'm using windows 7 64 bits. But I installed 32 bits python 2.5
Thanks.
```
Original issue reported on code.google.com by `yangyh0...@gmail.com` on 14 Jun 2011 at 8:33 | 1.0 | example.py does not recogonize which button I pressed on wii - ```
What steps will reproduce the problem?
1. I just connect my wii and ran the program
What is the expected output? What do you see instead?
example.py should tell me which button I pressed, but it does not. It only
knows that an event has happened, but does not know which button I pressed.
What version of the product are you using? On what operating system?
I'm using windows 7 64 bits. But I installed 32 bits python 2.5
Thanks.
```
Original issue reported on code.google.com by `yangyh0...@gmail.com` on 14 Jun 2011 at 8:33 | non_main | example py does not recogonize which button i pressed on wii what steps will reproduce the problem i just connect my wii and ran the program what is the expected output what do you see instead example py should tell me which button i pressed but it does not it only knows that and event has happened but does not know which button i pressed what version of the product are you using on what operating system i m using windows bits but i installed bits python thanks original issue reported on code google com by gmail com on jun at | 0 |
122,433 | 16,114,555,200 | IssuesEvent | 2021-04-28 05:02:44 | CDCgov/prime-data-hub | https://api.github.com/repos/CDCgov/prime-data-hub | opened | Automated weekly digest email | Content Design engineering | ## Propose
There are currently no notifications about weekly data.
Send receivers a weekly digest that calls out key points ... ex: data that hasn't been downloaded over the last 7 days
## To do
- [ ] Every week (M-F, or S-S?), check to see if data has been reported for each receiver
- [ ] If data has been sent this week, trigger email to send to each receiver every week on [day?]
- [ ] If no data has been sent this week, [send another kind of email?]
- [ ] Write email content. Will include:
- Foo
- Bar
- Baz | 1.0 | Automated weekly digest email - ## Propose
There are currently no notifications about weekly data.
Send receivers a weekly digest that calls out key points ... ex: data that hasn't been downloaded over the last 7 days
## To do
- [ ] Every week (M-F, or S-S?), check to see if data has been reported for each receiver
- [ ] If data has been sent this week, trigger email to send to each receiver every week on [day?]
- [ ] If no data has been sent this week, [send another kind of email?]
- [ ] Write email content. Will include:
- Foo
- Bar
- Baz | non_main | automated weekly digest email propose there are currently no notifications about weekly data send receivers a weekly digest that calls out key points ex data that hasn t been downloaded over the last days to do every week m f or s s check to see if data has been reported for each receiver if data has been sent this week trigger email to send to each receiver every week on if no data has been sent this week write email content will include foo bar baz | 0 |
91,309 | 3,851,763,082 | IssuesEvent | 2016-04-06 04:35:19 | pombase/canto | https://api.github.com/repos/pombase/canto | closed | translate IDs into terms in views | bug extension interface low priority |
Longer term it would be nicer if extensions were presented as terms rather than IDs
i.e
present_during (mitotic S phase)
instead of
present_during( GO:0000084) | 1.0 | translate IDs into terms in views -
Longer term it would be nicer if extensions were presented as terms rather than IDs
i.e
present_during (mitotic S phase)
instead of
present_during( GO:0000084) | non_main | translate ids into terms in views longer term it would be nicer if extensions were presented as terms rather than ids i e present during mitotic s phase instead of present during go | 0 |
151,368 | 5,814,250,220 | IssuesEvent | 2017-05-05 02:30:13 | TCHayes/best-card-v2 | https://api.github.com/repos/TCHayes/best-card-v2 | opened | Allow users to enter cards and rewards information for cards not in our database | enhancement Priority 5 | Allow users to enter cards and rewards information for cards not in our database | 1.0 | Allow users to enter cards and rewards information for cards not in our database - Allow users to enter cards and rewards information for cards not in our database | non_main | allow users to enter cards and rewards information for cards not in our database allow users to enter cards and rewards information for cards not in our database | 0 |
606,221 | 18,757,628,215 | IssuesEvent | 2021-11-05 12:56:35 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | www.google.com - "Street View" image doesn't load | browser-firefox priority-critical priority-normal severity-critical engine-gecko ml-needsdiagnosis-false | <!-- @browser: Firefox 79.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:79.0) Gecko/20100101 Firefox/79.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/55931 -->
**URL**: https://www.google.com/search?q=Wiejska+94%2C+Inwa%C5%82d
**Browser / Version**: Firefox 79.0
**Operating System**: Windows 10
**Tested Another Browser**: Yes Chrome
**Problem type**: Design is broken
**Description**: Images not loaded
**Steps to Reproduce**:
The image background for Google Street View is invisible due to an incorrect linear-gradient generated by Google:
`linear-gradient(top, rgba(0,0,0,0), rgba(0,0,0,.5))`
The Google script for Firefox 57+ omits the needed `-moz-` prefix and also does not try to use the correct syntax:
`linear-gradient(to bottom, rgba(0,0,0,0), rgba(0,0,0,.5))`
Supported by Firefox Quantum.
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2020/7/0ee89900-df43-45f4-8397-712c59d5334f.jpg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 2.0 | www.google.com - "Street View" image doesn't load - <!-- @browser: Firefox 79.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:79.0) Gecko/20100101 Firefox/79.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/55931 -->
**URL**: https://www.google.com/search?q=Wiejska+94%2C+Inwa%C5%82d
**Browser / Version**: Firefox 79.0
**Operating System**: Windows 10
**Tested Another Browser**: Yes Chrome
**Problem type**: Design is broken
**Description**: Images not loaded
**Steps to Reproduce**:
The image background for Google Street View is invisible due to an incorrect linear-gradient generated by Google:
`linear-gradient(top, rgba(0,0,0,0), rgba(0,0,0,.5))`
The Google script for Firefox 57+ omits the needed `-moz-` prefix and also does not try to use the correct syntax:
`linear-gradient(to bottom, rgba(0,0,0,0), rgba(0,0,0,.5))`
Supported by Firefox Quantum.
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2020/7/0ee89900-df43-45f4-8397-712c59d5334f.jpg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | non_main | street view image doesn t load url browser version firefox operating system windows tested another browser yes chrome problem type design is broken description images not loaded steps to reproduce image background for google street view is invisible due incorrect linear gradient generated by gooogle linear gradient top rgba rgba google script for firefox ommit needed moz prefix also no try use corect syntax linear gradient to bottom rgba rgba supported by firefox quantum view the screenshot img alt screenshot src browser configuration none from with ❤️ | 0 |
729 | 4,320,954,936 | IssuesEvent | 2016-07-25 08:15:08 | Particular/NServiceBus.AzureServiceBus | https://api.github.com/repos/Particular/NServiceBus.AzureServiceBus | closed | ICreateQueues implementation doesn't use queueBindings provided value | State: In Progress - Maintainer Prio Type: Refactoring | `TransportResourcesCreator.CreateQueueIfNecessary` receives `QueueBindings` as first parameter with all needed info about resource names to create, but this implementation doesn't pass this value to `ITopologySectionManager` dependency.
`ITopologySectionManager` implementations use `EndpointName` from settings as input queue name | True | ICreateQueues implementation doesn't use queueBindings provided value - `TransportResourcesCreator.CreateQueueIfNecessary` receives `QueueBindings` as first parameter with all needed info about resource names to create, but this implementation doesn't pass this value to `ITopologySectionManager` dependency.
`ITopologySectionManager` implementations use `EndpointName` from settings as input queue name | main | icreatequeues implementation doesn t use queuebindings provided value transportresourcescreator createqueueifnecessary receives queuebindings as first parameter with all needed info about resource names to create but this implementation doesn t pass this value to itopologysectionmanager dependency itopologysectionmanager implementations use endpointname from settings as input queue name | 1 |
54,931 | 13,486,503,517 | IssuesEvent | 2020-09-11 09:33:01 | ckeditor/ckeditor5 | https://api.github.com/repos/ckeditor/ckeditor5 | closed | Consider enabling AutoLink in all builds | intro package:build-balloon package:build-balloon-block package:build-classic package:build-decoupled-document package:build-inline squad:ux type:task | ## Provide a description of the task
Demo is here: https://ckeditor5.github.io/docs/nightly/ckeditor5/latest/features/link.html#autolink-feature
Do we want this feature in all builds? I'm :+1: | 5.0 | Consider enabling AutoLink in all builds - ## Provide a description of the task
Demo is here: https://ckeditor5.github.io/docs/nightly/ckeditor5/latest/features/link.html#autolink-feature
Do we want this feature in all builds? I'm :+1: | non_main | consider enabling autolink in all builds provide a description of the task demo is here do we want this feature in all builds i m | 0 |
4,485 | 23,368,159,469 | IssuesEvent | 2022-08-10 17:12:01 | warengonzaga/gathertown.js | https://api.github.com/repos/warengonzaga/gathertown.js | closed | move the homepage to orphan branch | feature maintainers-only | ### 🤔 Not Existing Feature Request?
- [X] Yes, I'm sure, this is a new requested feature!
### 🤔 Not an Idea or Suggestion?
- [X] Yes, I'm sure, this is not idea or suggestion!
### 📋 Request Details
This is to completely separate the development of the homepage from the SDK.
### 📜 Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/warengonzaga/gathertown.js/blob/main/CODE_OF_CONDUCT.md). | True | move the homepage to orphan branch - ### 🤔 Not Existing Feature Request?
- [X] Yes, I'm sure, this is a new requested feature!
### 🤔 Not an Idea or Suggestion?
- [X] Yes, I'm sure, this is not idea or suggestion!
### 📋 Request Details
This is to completely separate the development of the homepage from the SDK.
### 📜 Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/warengonzaga/gathertown.js/blob/main/CODE_OF_CONDUCT.md). | main | move the homepage to orphan branch 🤔 not existing feature request yes i m sure this is a new requested feature 🤔 not an idea or suggestion yes i m sure this is not idea or suggestion 📋 request details this is to completely separate the development of the homepage from the sdk 📜 code of conduct i agree to follow this project s | 1 |
2,888 | 10,319,607,777 | IssuesEvent | 2019-08-30 18:00:27 | backdrop-ops/contrib | https://api.github.com/repos/backdrop-ops/contrib | closed | Social SimpleSharer | Maintainer application | non-Javascript social (Facebook, Twitter, etc.) sharing module is ported to Backdrop 1.x
https://github.com/ElliotChristenson/backdrop-simplesharer
| True | Social SimpleSharer - non-Javascript social (Facebook, Twitter, etc.) sharing module is ported to Backdrop 1.x
https://github.com/ElliotChristenson/backdrop-simplesharer
| main | social simplesharer non javascript social facebook twitter etc sharing module is ported to backdrop x | 1 |
5,529 | 27,641,193,015 | IssuesEvent | 2023-03-10 18:12:23 | centerofci/mathesar | https://api.github.com/repos/centerofci/mathesar | opened | Ship source maps in prod | type: enhancement work: frontend status: ready restricted: maintainers | [Original discussion (private)](https://matrix.to/#/!xPrzlsyLMNbsMzrcrX:matrix.mathesar.org/$Mlhz6J4SiEI6KVbxTc1ZWrboBZyhqeDnRNHtHputVPU?via=matrix.mathesar.org)
@silentninja suggests shipping source maps, at least for selective installations
@seancolsen says:
> This is a very good idea. We should be shipping sourcemaps to prod for sure
@rajatvijay says:
> I agree. Separate from main bundle.
| True | Ship source maps in prod - [Original discussion (private)](https://matrix.to/#/!xPrzlsyLMNbsMzrcrX:matrix.mathesar.org/$Mlhz6J4SiEI6KVbxTc1ZWrboBZyhqeDnRNHtHputVPU?via=matrix.mathesar.org)
@silentninja suggests shipping source maps, at least for selective installations
@seancolsen says:
> This is a very good idea. We should be shipping sourcemaps to prod for sure
@rajatvijay says:
> I agree. Separate from main bundle.
| main | ship source maps in prod silentninja suggests shipping source maps at least for selective installations seancolsen says this is a very good idea we should be shipping sourcemaps to prod for sure rajatvijay says i agree separate from main bundle | 1 |
548,916 | 16,081,319,878 | IssuesEvent | 2021-04-26 05:18:17 | airbytehq/airbyte | https://api.github.com/repos/airbytehq/airbyte | closed | Source Google Spreadsheets: silent failure when discovering a sheet with duplicate headers | accepting-contributions area/integration good first issue priority/medium type/bug | ## Expected Behavior
If my spreadsheet has a duplicate header and I try to replicate it with the Google Sheets connector, I expect two things to happen:
1. The UI should display a warning message saying "We can't replicate column X because its header is duplicated. Please resolve the duplication and try again".
2. The connector should allow me to replicate every other non-duplicate column.
## Current Behavior
The connector fails to discover the schema entirely, instead returning an empty schema.
## Steps to Reproduce
1. Create a spreadsheet with duplicate headers
1. run `discover`
1. no schema is returned
## Severity of the bug for you
Medium -- an edge case but blocking when it happens
## Airbyte Version
spreadsheets 0.1.7 (latest) -- Airbyte version irrelevant
## Acceptance Criteria
1. If there are duplicate columns, the connector logs warning messages and continues with schema discovery as normal.
2. If there are no non-duplicate columns in a sheet, raise an exception and fail discovery.
| 1.0 | Source Google Spreadsheets: silent failure when discovering a sheet with duplicate headers - ## Expected Behavior
If my spreadsheet has a duplicate header and I try to replicate it with the Google Sheets connector, I expect two things to happen:
1. The UI should display a warning message saying "We can't replicate column X because its header is duplicated. Please resolve the duplication and try again".
2. The connector should allow me to replicate every other non-duplicate column.
## Current Behavior
The connector fails to discover the schema entirely, instead returning an empty schema.
## Steps to Reproduce
1. Create a spreadsheet with duplicate headers
1. run `discover`
1. no schema is returned
## Severity of the bug for you
Medium -- an edge case but blocking when it happens
## Airbyte Version
spreadsheets 0.1.7 (latest) -- Airbyte version irrelevant
## Acceptance Criteria
1. If there are duplicate columns, the connector logs warning messages and continues with schema discovery as normal.
2. If there are no non-duplicate columns in a sheet, raise an exception and fail discovery.
| non_main | source google spreadsheets silent failure when discovering a sheet with duplicate headers expected behavior if my spreadsheet has duplicate header and i try to replicate it with the google sheets connector i expect two things to happen the ui should display a warning message saying we can t replicate column x because its header is duplicated please resolve the duplicity and try again the connector should allow me to replicate every other non duplicate column current behavior the connector fails to discover the schema entirely instead returning an empty schema steps to reproduce create a spreadsheet with duplicate headers run discover no schema is returned severity of the bug for you medium an edge case but blocking when it happens airbyte version spreadsheets latest airbyte version irrelevant acceptance criteria if there are duplicate columns the connector logs warning messages and continues with schema discovery as normal if there are no non duplicate columns in a sheet raise an exception and fail discovery | 0 |
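The acceptance criteria in the record above (warn on duplicated headers, keep the non-duplicated columns, fail only when nothing usable remains) can be sketched as follows. This is a minimal illustration, not the actual Airbyte connector code, and the function names are hypothetical:

```python
# Sketch of the requested behavior: warn about duplicated sheet headers,
# continue discovery with the unique ones, and raise only if none remain.

def split_headers(headers):
    """Return (usable, duplicated) header names from a sheet's first row."""
    from collections import Counter
    counts = Counter(headers)
    duplicated = sorted(name for name, n in counts.items() if n > 1)
    usable = [name for name in headers if counts[name] == 1]
    return usable, duplicated

def discover_columns(headers):
    usable, duplicated = split_headers(headers)
    for name in duplicated:
        print(f"WARNING: ignoring duplicated header {name!r}")  # stand-in for a logger
    if not usable:
        raise ValueError("sheet has no non-duplicated headers; cannot discover schema")
    return usable
```

For example, `discover_columns(["id", "name", "id"])` would log a warning about `id` and return `["name"]`, matching criterion 1, while a sheet whose headers are all duplicated raises, matching criterion 2.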
911 | 4,581,941,584 | IssuesEvent | 2016-09-19 08:22:34 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | eos_eapi: TypeError: load_config() got an unexpected keyword argument 'session'\n" | affects_2.2 bug_report networking P1 waiting_on_maintainer | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
eos_eapi
##### ANSIBLE VERSION
```
ansible 2.2.0 (devel 70e63ddf6c) last updated 2016/09/15 10:17:19 (GMT +100)
lib/ansible/modules/core: (devel 683e5e4d1a) last updated 2016/09/15 10:17:22 (GMT +100)
lib/ansible/modules/extras: (devel 170adf16bd) last updated 2016/09/15 10:17:23 (GMT +100)
```
##### CONFIGURATION
##### OS / ENVIRONMENT
##### SUMMARY
I believe the fix is
```
diff --git a/network/eos/eos_eapi.py b/network/eos/eos_eapi.py
index 7edbe3f..c09cddb 100644
--- a/network/eos/eos_eapi.py
+++ b/network/eos/eos_eapi.py
@@ -190,8 +190,10 @@ urls:
import re
import time
+import ansible.module_utils.eos
+
+from ansible.module_utils.network import NetworkModule, NetworkError
from ansible.module_utils.netcfg import NetworkConfig, dumps
-from ansible.module_utils.eos import NetworkModule, NetworkError
from ansible.module_utils.basic import get_exception
PRIVATE_KEYS_RE = re.compile('__.+__')
@@ -273,7 +275,7 @@ def load_config(module, commands, result):
session = 'ansible_%s' % int(time.time())
commit = not module.check_mode
- diff = module.config.load_config(commands, session=session, commit=commit)
+ diff = module.config.load_config(commands, commit=commit)
# once the configuration is done, remove the config session and
# remove the session name from the result
```
Based on https://github.com/ansible/ansible-modules-core/pull/4804/files
However there may be other things that need changing
##### STEPS TO REPRODUCE
```
```
##### EXPECTED RESULTS
##### ACTUAL RESULTS
| True | eos_eapi: TypeError: load_config() got an unexpected keyword argument 'session'\n" - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
eos_eapi
##### ANSIBLE VERSION
```
ansible 2.2.0 (devel 70e63ddf6c) last updated 2016/09/15 10:17:19 (GMT +100)
lib/ansible/modules/core: (devel 683e5e4d1a) last updated 2016/09/15 10:17:22 (GMT +100)
lib/ansible/modules/extras: (devel 170adf16bd) last updated 2016/09/15 10:17:23 (GMT +100)
```
##### CONFIGURATION
##### OS / ENVIRONMENT
##### SUMMARY
I believe the fix is
```
diff --git a/network/eos/eos_eapi.py b/network/eos/eos_eapi.py
index 7edbe3f..c09cddb 100644
--- a/network/eos/eos_eapi.py
+++ b/network/eos/eos_eapi.py
@@ -190,8 +190,10 @@ urls:
import re
import time
+import ansible.module_utils.eos
+
+from ansible.module_utils.network import NetworkModule, NetworkError
from ansible.module_utils.netcfg import NetworkConfig, dumps
-from ansible.module_utils.eos import NetworkModule, NetworkError
from ansible.module_utils.basic import get_exception
PRIVATE_KEYS_RE = re.compile('__.+__')
@@ -273,7 +275,7 @@ def load_config(module, commands, result):
session = 'ansible_%s' % int(time.time())
commit = not module.check_mode
- diff = module.config.load_config(commands, session=session, commit=commit)
+ diff = module.config.load_config(commands, commit=commit)
# once the configuration is done, remove the config session and
# remove the session name from the result
```
Based on https://github.com/ansible/ansible-modules-core/pull/4804/files
However there may be other things that need changing
##### STEPS TO REPRODUCE
```
```
##### EXPECTED RESULTS
##### ACTUAL RESULTS
| main | eos eapi typeerror load config got an unexpected keyword argument session n issue type bug report component name eos eapi ansible version ansible devel last updated gmt lib ansible modules core devel last updated gmt lib ansible modules extras devel last updated gmt configuration os environment summary i believe the fix is diff git a network eos eos eapi py b network eos eos eapi py index a network eos eos eapi py b network eos eos eapi py urls import re import time import ansible module utils eos from ansible module utils network import networkmodule networkerror from ansible module utils netcfg import networkconfig dumps from ansible module utils eos import networkmodule networkerror from ansible module utils basic import get exception private keys re re compile def load config module commands result session ansible s int time time commit not module check mode diff module config load config commands session session commit commit diff module config load config commands commit commit once the configuration is done remove the config session and remove the session name from the result based on however there may be other things that need changing steps to reproduce expected results actual results | 1 |
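The `TypeError: load_config() got an unexpected keyword argument 'session'` in the record above is an instance of a general failure mode: a caller passes a keyword argument that the callee's newer signature no longer accepts. A minimal reproduction, with hypothetical function names rather than the real eos_eapi source, looks like this:

```python
# Minimal reproduction of the error class: calling a function with a keyword
# its signature does not accept raises TypeError at call time, which is why
# dropping `session=session` from the call fixes the traceback.

def load_config(commands, commit=True):  # newer signature: no 'session' parameter
    return {"commands": commands, "commit": commit}

def buggy_caller(commands):
    # mirrors the pre-fix code path, which still passed session=...
    return load_config(commands, session="ansible_123", commit=False)

def fixed_caller(commands):
    # mirrors the patched code path
    return load_config(commands, commit=False)
```

Calling `buggy_caller` raises the TypeError; `fixed_caller` succeeds.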
623,005 | 19,660,177,923 | IssuesEvent | 2022-01-10 16:15:59 | buddyboss/buddyboss-platform | https://api.github.com/repos/buddyboss/buddyboss-platform | closed | Organizer of Parent Group, Dont show or cant be search in the list of can be send invites on Sub Groups | bug priority-medium Stale | **Describe the bug**
When you create a subgroup, you cannot send an invite to an organizer in the parent group.
**To Reproduce**
[https://prnt.sc/xqtfrp](https://prnt.sc/xqtfrp)
**Expected behavior**
In a subgroup, you can invite regular member and organizer that are already a member in the parent group.
**Screenshots**
[https://prnt.sc/xqtfrp](https://prnt.sc/xqtfrp)
[https://prnt.sc/xsp0k9](https://prnt.sc/xsp0k9)
[https://prnt.sc/xsp6rr](https://prnt.sc/xsp6rr)
**Support ticket links**
https://secure.helpscout.net/conversation/1405111562/121621/
**Jira issue** : [PROD-863]
[PROD-863]: https://buddyboss.atlassian.net/browse/PROD-863?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ | 1.0 | Organizer of Parent Group, Dont show or cant be search in the list of can be send invites on Sub Groups - **Describe the bug**
When you create a subgroup, you cannot send an invite to an organizer in the parent group.
**To Reproduce**
[https://prnt.sc/xqtfrp](https://prnt.sc/xqtfrp)
**Expected behavior**
In a subgroup, you can invite regular member and organizer that are already a member in the parent group.
**Screenshots**
[https://prnt.sc/xqtfrp](https://prnt.sc/xqtfrp)
[https://prnt.sc/xsp0k9](https://prnt.sc/xsp0k9)
[https://prnt.sc/xsp6rr](https://prnt.sc/xsp6rr)
**Support ticket links**
https://secure.helpscout.net/conversation/1405111562/121621/
**Jira issue** : [PROD-863]
[PROD-863]: https://buddyboss.atlassian.net/browse/PROD-863?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ | non_main | organizer of parent group dont show or cant be search in the list of can be send invites on sub groups describe the bug when you create a subgroup you cannot send an invite to an organizer in the parent group to reproduce expected behavior in a subgroup you can invite regular member and organizer that are already a member in the parent group screenshots support ticket links jira issue | 0 |
198 | 2,831,495,787 | IssuesEvent | 2015-05-24 17:47:24 | MDAnalysis/mdanalysis | https://api.github.com/repos/MDAnalysis/mdanalysis | opened | split CI into "core" and "analysis" | maintainability | Given that travis-ci/coverall works so well, we can think about how to get more out of it and the flexibility it provides us. I propose to split the CI and especially the coverage reports into two batches whose status would also be displayed separately in the README.rst:
* analysis: `MDAnalysis.analysis`
* core: everything else (`core`, `topology`, `coordinate`, `lib`, `util`, ...)
advantages
* more accurate reflection of the coverage of the most crucial code parts (the core) because at the moment, analysis has much fewer unit tests than core (we need to change that but it won't happen overnight)
* (small) speed up of the CI runs
disadvantages
* no global view anymore
* mildly confusing when we have multiple badges on the front page (although we could simply put *core library* and *analysis modules* in front of the badges)
Opinions? | True | split CI into "core" and "analysis" - Given that travis-ci/coverall works so well, we can think about how to get more out of it and the flexibility it provides us. I propose to split the CI and especially the coverage reports into two batches whose status would also be displayed separately in the README.rst:
* analysis: `MDAnalysis.analysis`
* core: everything else (`core`, `topology`, `coordinate`, `lib`, `util`, ...)
advantages
* more accurate reflection of the coverage of the most crucial code parts (the core) because at the moment, analysis has much fewer unit tests than core (we need to change that but it won't happen overnight)
* (small) speed up of the CI runs
disadvantages
* no global view anymore
* mildly confusing when we have multiple badges on the front page (although we could simply put *core library* and *analysis modules* in front of the badges)
Opinions? | main | split ci into core and analysis given that travis ci coverall works so well we can think about how to get more out of it and the flexibility it provides us i propose to split the ci and especially the coverage reports into two batches whose status would also be displayed separately in the readme rst analysis mdanalysis analysis core everything else core topology coordinate lib util advantages more accurate reflection of the coverage of the most crucial code parts the core because at the moment analysis has much fewer unit tests than core we need to change that but it won t happen overnight small speed up of the ci runs disadvantages no global view anymore mildly confusing when we have multiple badges on the front page although we could simply put core library and analysis modules in front of the badges opinions | 1 |
1,037 | 4,832,806,444 | IssuesEvent | 2016-11-08 08:53:33 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | Bad consul_kv token default | affects_2.3 docs_report waiting_on_maintainer | Currently the documentation says the token defaults to `None` but the code does not:
https://github.com/ansible/ansible-modules-extras/blob/devel/clustering/consul_kv.py#L244
Note: The current default breaks bootstrapped agent acls.
I can try and send a PR shortly to fix this unless someone else beats me to it!
| True | Bad consul_kv token default - Currently the documentation says the token defaults to `None` but the code does not:
https://github.com/ansible/ansible-modules-extras/blob/devel/clustering/consul_kv.py#L244
Note: The current default breaks bootstrapped agent acls.
I can try and send a PR shortly to fix this unless someone else beats me to it!
| main | bad consul kv token default currently the documentation says the token defaults to none but the code does not note the current default breaks bootstrapped agent acls i can try and send a pr shortly to fix this unless someone else beats me to it | 1 |
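The record above reports that the documented default for the consul_kv `token` option (`None`) did not match the code. A hedged sketch of what an explicit, docs-matching default could look like in an Ansible-style argument spec; the names here are illustrative, not the real consul_kv source:

```python
# Illustrative argument spec where the documented default (None) is made
# explicit, so the module cannot silently inject some other token value.

ARGUMENT_SPEC = {
    "key":   {"type": "str", "required": True},
    "token": {"type": "str", "default": None, "no_log": True},
}

def effective_token(params):
    """Return the token the module would use for a request."""
    return params.get("token", ARGUMENT_SPEC["token"]["default"])
```

With this spec, omitting `token` yields `None` (anonymous access), which is what the documentation promises.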
658 | 4,173,680,628 | IssuesEvent | 2016-06-21 11:30:04 | Particular/ServicePulse | https://api.github.com/repos/Particular/ServicePulse | closed | Number formatting | Impact: S Size: S State: In Progress - Maintainer Prio Tag: Maintainer Prio Tag: Triaged Type: Feature | Noticed that not all numbers are being formatted:

Would be nice if numbers would be containing decimal separators for easy reading. | True | Number formatting - Noticed that not all numbers are being formatted:

Would be nice if numbers would be containing decimal separators for easy reading. | main | number formatting noticed that not all numbers are being formatted would be nice if numbers would be containing decimal separators for easy reading | 1 |
3,732 | 15,588,496,991 | IssuesEvent | 2021-03-18 06:30:42 | yast/yast-auth-client | https://api.github.com/repos/yast/yast-auth-client | closed | Change LDAP auth client setup from binddn and bindpwd to rootbinddn and 600 /etc/ldap.secret | other-maintainer | Hello Team,
I would like to suggest to make this change to improve the security for LDAP Client auth setup, I had a look at the code and does not seem to be so hard to change that, this is related to #70 as well.
The suggestion it is to change the binddn from /etc/ldap.conf to rootbinddn, remove the bindpwd option from ldap.conf and create a file with 0600 as /etc/ldap.secret with the password in clear text.
I would like to start contributing to an openSUSE project, and I can give it a try if you want.
I would like to suggest to make this change to improve the security for LDAP Client auth setup, I had a look at the code and does not seem to be so hard to change that, this is related to #70 as well.
The suggestion it is to change the binddn from /etc/ldap.conf to rootbinddn, remove the bindpwd option from ldap.conf and create a file with 0600 as /etc/ldap.secret with the password in clear text.
I would like to start to contribute with some OpenSUSE project and I can give a try if you want. | main | change ldap auth client setup from binddn and bindpwd to rootbinddn and etc ldap secret hello team i would like to suggest to make this change to improve the security for ldap client auth setup i had a look at the code and does not seem to be so hard to change that this is related to as well the suggestion it is to change the binddn from etc ldap conf to rootbinddn remove the bindpwd option from ldap conf and create a file with as etc ldap secret with the password in clear text i would like to start to contribute with some opensuse project and i can give a try if you want | 1 |
3,650 | 2,610,066,426 | IssuesEvent | 2015-02-26 18:19:31 | chrsmith/jsjsj122 | https://api.github.com/repos/chrsmith/jsjsj122 | opened | Linhai: where to get an effective prostatitis check-up | auto-migrated Priority-Medium Type-Defect | ```
Linhai: where to get an effective prostatitis check-up [Taizhou Wuzhou Reproductive Hospital].
24-hour health hotline: 0576-88066933 (QQ: 800080609, WeChat: tzwzszyy).
Hospital address: 229 Fengnan Road, Jiaojiang District, Taizhou (next to the Fengnan roundabout).
Bus routes: take bus 104, 108, 118 or 198, or the Jiaojiang-Jinqing bus, directly to the Fengnan
residential area; or take bus 107, 105, 109, 112, 901 or 902 to Xingxing Square and walk to the hospital.
Services: impotence, premature ejaculation, prostatitis, prostatic hyperplasia, balanitis,
seminal emission disorders, azoospermia, phimosis, varicocele, gonorrhea, etc.
Taizhou Wuzhou Reproductive Hospital is the largest men's health hospital in Taizhou, with
authoritative experts online for free consultation, complete professional examination and
treatment equipment for men's health, and fees strictly following national standards.
Cutting-edge medical equipment in step with the world. Authoritative experts, a model of
professionalism. Humanized service, with everything centered on the patient.
For men's health, choose Taizhou Wuzhou Reproductive Hospital: professional men's care for men.
```
-----
Original issue reported on code.google.com by `poweragr...@gmail.com` on 30 May 2014 at 8:34 | 1.0 | Linhai: where to get an effective prostatitis check-up - ```
Linhai: where to get an effective prostatitis check-up [Taizhou Wuzhou Reproductive Hospital].
24-hour health hotline: 0576-88066933 (QQ: 800080609, WeChat: tzwzszyy).
Hospital address: 229 Fengnan Road, Jiaojiang District, Taizhou (next to the Fengnan roundabout).
Bus routes: take bus 104, 108, 118 or 198, or the Jiaojiang-Jinqing bus, directly to the Fengnan
residential area; or take bus 107, 105, 109, 112, 901 or 902 to Xingxing Square and walk to the hospital.
Services: impotence, premature ejaculation, prostatitis, prostatic hyperplasia, balanitis,
seminal emission disorders, azoospermia, phimosis, varicocele, gonorrhea, etc.
Taizhou Wuzhou Reproductive Hospital is the largest men's health hospital in Taizhou, with
authoritative experts online for free consultation, complete professional examination and
treatment equipment for men's health, and fees strictly following national standards.
Cutting-edge medical equipment in step with the world. Authoritative experts, a model of
professionalism. Humanized service, with everything centered on the patient.
For men's health, choose Taizhou Wuzhou Reproductive Hospital: professional men's care for men.
```
-----
Original issue reported on code.google.com by `poweragr...@gmail.com` on 30 May 2014 at 8:34 | non_main | linhai where to get an effective prostatitis check up linhai where to get an effective prostatitis check up taizhou wuzhou reproductive hospital hour health hotline qq wechat tzwzszyy hospital address fengnan road jiaojiang district taizhou next to the fengnan roundabout bus routes take bus or or the jiaojiang jinqing bus directly to the fengnan residential area or take bus or to xingxing square and walk to the hospital services impotence premature ejaculation prostatitis prostatic hyperplasia balanitis seminal emission disorders azoospermia phimosis varicocele gonorrhea etc taizhou wuzhou reproductive hospital is the largest men s health hospital in taizhou with authoritative experts online for free consultation complete professional examination and treatment equipment for men s health and fees strictly following national standards cutting edge medical equipment in step with the world authoritative experts a model of professionalism humanized service with everything centered on the patient for men s health choose taizhou wuzhou reproductive hospital professional men s care for men original issue reported on code google com by poweragr gmail com on may at | 0 |
1,743 | 6,574,903,654 | IssuesEvent | 2017-09-11 14:26:52 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | Weird git behavior on branch different from master. | affects_2.1 bug_report waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
Git module
##### ANSIBLE VERSION
```
ansible 2.1.2.0
config file = /root/ansible/ansible.cfg
configured module search path = ['/usr/share/ansible']
```
##### CONFIGURATION
##### OS / ENVIRONMENT
Debian Jessie
##### SUMMARY
Weird behavior on git update
I use the following play to update our sw:
```
- name: Clone git repository
git: >
dest=/var/www/lb.binweevils.com/
repo=xxxxxxxxxxxxxxx
accept_hostkey=true
force=yes
register: cloned
```
And on one branch variation Ansible is constantly pulling back my changes.
So just to explain better:
Let's reset our current branch to its remote counterpart:
```
#git fetch --all
#git checkout new-er
Already on 'new-er'
Your branch is behind 'origin/new-er' by 9 commits, and can be fast-forwarded.
(use "git pull" to update your local branch)
# git reset --hard origin/new-er
HEAD is now at ced92ae Merge remote-tracking branch 'origin/new-er' into new-er
```
Right now git pull is working fine (as in saying up to date)
```
# git pull
Already up-to-date.
```
Show ref shows following info
```
# git show-ref
ad3286b43a528b1e2430f4037d5a6b67fb1691d1 refs/heads/master
...
ced92aeb5e45e9008798f9f81da90a7a2962edb0 refs/heads/new-er
...
```
and git rev-parse (which is used by git module if I am not wrong) shows the same info
```
# git rev-parse HEAD
ced92aeb5e45e9008798f9f81da90a7a2962edb0
```
##### EXPECTED RESULTS
I would expect that launching my playbook nothing will happen (if no changes have been pushed to the repository)
##### ACTUAL RESULTS
```
TASK [deploy/xxx : Clone git repository] *************************
changed: [10.3.0.103] => {"after": "2007c264a5b2639577d279890a45de0dfd93dd23", "before": "ced92aeb5e45e9008798f9f81da90a7a2962edb0", "changed": true, "warnings": []}
```
Now, if I launch git status on remote server I can see
```
# git status
On branch new-er
Your branch and 'origin/new-er' have diverged,
and have 3 and 9 different commits each, respectively.
(use "git pull" to merge the remote branch into yours)
nothing to commit, working directory clean
# git show-ref
ad3286b43a528b1e2430f4037d5a6b67fb1691d1 refs/heads/master
2007c264a5b2639577d279890a45de0dfd93dd23 refs/heads/new-er
# git rev-parse HEAD
2007c264a5b2639577d279890a45de0dfd93dd23
```
If I try to run by hand git pull, it will update all the files
```
# git rev-parse HEAD
278b53fef04e600b8fea989fb04b47910f87ee82
root@lamp-dev-2-socket:/var/www/lb.binweevils.com# git pull
Already up-to-date.
```
but they will be overridden after the next sync (the code is reset to 2007c264a5b2639577d279890a45de0dfd93dd23 again)
TASK [deploy/lb.binweevils.com : Clone git repository] *************************
changed: [10.3.0.103] => {"after": "2007c264a5b2639577d279890a45de0dfd93dd23", "before": "278b53fef04e600b8fea989fb04b47910f87ee82", "changed": true, "warnings": []}
After a bit of investigation I found the issue: it looks like in this case the git module is trying to bring the current version to the commit contained in the master HEAD branch, ignoring which branch it is currently on.
| True | Weird git behavior on branch different from master. - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
Git module
##### ANSIBLE VERSION
```
ansible 2.1.2.0
config file = /root/ansible/ansible.cfg
configured module search path = ['/usr/share/ansible']
```
##### CONFIGURATION
##### OS / ENVIRONMENT
Debian Jessie
##### SUMMARY
Weird behavior on git update
I use the following play to update our sw:
```
- name: Clone git repository
git: >
dest=/var/www/lb.binweevils.com/
repo=xxxxxxxxxxxxxxx
accept_hostkey=true
force=yes
register: cloned
```
And on one branch variation Ansible is constantly pulling back my changes.
So just to explain better:
Let's reset our current branch to its remote counterpart:
```
#git fetch --all
#git checkout new-er
Already on 'new-er'
Your branch is behind 'origin/new-er' by 9 commits, and can be fast-forwarded.
(use "git pull" to update your local branch)
# git reset --hard origin/new-er
HEAD is now at ced92ae Merge remote-tracking branch 'origin/new-er' into new-er
```
Right now git pull is working fine (as in saying up to date)
```
# git pull
Already up-to-date.
```
Show ref shows following info
```
# git show-ref
ad3286b43a528b1e2430f4037d5a6b67fb1691d1 refs/heads/master
...
ced92aeb5e45e9008798f9f81da90a7a2962edb0 refs/heads/new-er
...
```
and git rev-parse (which is used by git module if I am not wrong) shows the same info
```
# git rev-parse HEAD
ced92aeb5e45e9008798f9f81da90a7a2962edb0
```
##### EXPECTED RESULTS
I would expect that launching my playbook nothing will happen (if no changes have been pushed to the repository)
##### ACTUAL RESULTS
```
TASK [deploy/xxx : Clone git repository] *************************
changed: [10.3.0.103] => {"after": "2007c264a5b2639577d279890a45de0dfd93dd23", "before": "ced92aeb5e45e9008798f9f81da90a7a2962edb0", "changed": true, "warnings": []}
```
Now, if I launch git status on remote server I can see
```
# git status
On branch new-er
Your branch and 'origin/new-er' have diverged,
and have 3 and 9 different commits each, respectively.
(use "git pull" to merge the remote branch into yours)
nothing to commit, working directory clean
# git show-ref
ad3286b43a528b1e2430f4037d5a6b67fb1691d1 refs/heads/master
2007c264a5b2639577d279890a45de0dfd93dd23 refs/heads/new-er
# git rev-parse HEAD
2007c264a5b2639577d279890a45de0dfd93dd23
```
If I try to run by hand git pull, it will update all the files
```
# git rev-parse HEAD
278b53fef04e600b8fea989fb04b47910f87ee82
root@lamp-dev-2-socket:/var/www/lb.binweevils.com# git pull
Already up-to-date.
```
but they will be overridden after the next sync (the code is reset to 2007c264a5b2639577d279890a45de0dfd93dd23 again)
TASK [deploy/lb.binweevils.com : Clone git repository] *************************
changed: [10.3.0.103] => {"after": "2007c264a5b2639577d279890a45de0dfd93dd23", "before": "278b53fef04e600b8fea989fb04b47910f87ee82", "changed": true, "warnings": []}
After a bit of investigation I found the issue: it looks like in this case the git module is trying to bring the current version to the commit contained in the master HEAD branch, ignoring which branch it is currently on.
| main | weird git behavior on branch different from master issue type bug report component name git module ansible version ansible config file root ansible ansible cfg configured module search path configuration os environment debian jessie summary weird behavior on git update i use folllowing play to update our sw name clone git repository git dest var www lb binweevils com repo xxxxxxxxxxxxxxx accept hostkey true force yes register cloned and on one branch variation ansible constantly pulling back my changes so just to explain better let s reset our current branch to it s remote counterpart git fetch all git checkout new er already on new er your branch is behind origin new er by commits and can be fast forwarded use git pull to update your local branch git reset hard origin new er head is now at merge remote tracking branch origin new er into new er right now git pull is working fine as in saying up to date git pull already up to date show ref shows following info git show ref refs heads master refs heads new er and git rev parse which is used by git module if i am not wrong shows the same info git rev parse head expected results i would expect that launching my rulebook nothing will happen if no changes has been pushed to repository actual results task changed after before changed true warnings now if i launch git status on remote server i can see git status on branch new er your branch and origin new er have diverged and have and different commits each respectively use git pull to merge the remote branch into yours nothing to commit working directory clean git show ref refs heads master refs heads new er git rev parse head if i try to run by hand git pull it will update all the files git rev parse head root lamp dev socket var www lb binweevils com git pull already up to date but they will be overridden after next sync code is being resetted to again task changed after before changed true warnings after a bit of investigation i found out the issue looks like 
in that occasion git module is trying to bring current version to commit contained in the master head branch ignoring what branch is it on currently | 1 |
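One way to read the behaviour reported above: on update, the module should resolve the remote commit of the branch that is actually checked out, not master's HEAD. A hypothetical sketch of that lookup over `git show-ref` output (the function name is invented for illustration; this is not the git module's real code):

```python
def resolve_branch_commit(show_ref_output, branch):
    """Pick the commit for refs/heads/<branch> out of `git show-ref` output.

    A module updating a work tree should resolve the ref of the branch
    that is checked out, instead of defaulting to master's HEAD.
    """
    refs = {}
    for line in show_ref_output.strip().splitlines():
        sha, _, ref = line.partition(" ")
        refs[ref.strip()] = sha
    return refs.get("refs/heads/" + branch)

# The two refs from the report above:
SHOW_REF = """\
ad3286b43a528b1e2430f4037d5a6b67fb1691d1 refs/heads/master
ced92aeb5e45e9008798f9f81da90a7a2962edb0 refs/heads/new-er
"""
```

In practice, pinning the task with the git module's `version` parameter (e.g. `version: new-er`) is the usual way to make the intended branch explicit in a playbook.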
82,397 | 3,606,304,338 | IssuesEvent | 2016-02-04 10:39:19 | cs2103jan2016-t16-3j/main | https://api.github.com/repos/cs2103jan2016-t16-3j/main | closed | (User Story) People | priority.low type.story | As a user, I can link events with people (e.g. boss) so that I know who to submit to/sort tasks by people. (similar to hashtag? See HashTag) | 1.0 | (User Story) People - As a user, I can link events with people (e.g. boss) so that I know who to submit to/sort tasks by people. (similar to hashtag? See HashTag) | non_main | user story people as a user i can link events with people e g boss so that i know who to submit to sort tasks by people similar to hashtag see hashtag | 0 |
320,650 | 27,448,607,027 | IssuesEvent | 2023-03-02 15:51:06 | PluginBugs/Issues-ItemsAdder | https://api.github.com/repos/PluginBugs/Issues-ItemsAdder | closed | Same `custom model data` with different `material type` cannot merge if using custom json `overrides` definition | Bug Need testing | ### Terms
- [X] I'm using the very latest version of ItemsAdder and its dependencies.
- [X] I am sure this is a bug and it is not caused by a misconfiguration or by another plugin.
- [X] I already searched on this [Github page](https://github.com/PluginBugs/Issues-ItemsAdder/issues) to check if the same issue was already reported.
- [X] I already searched on the [plugin wiki](https://itemsadder.devs.beer/) to know if a solution is already known.
- [X] I already searched on the [forums](https://forum.devs.beer/) to check if anyone already has a solution for this.
### Discord tag (optional)
Nailm#9364
### What happened?
IA cannot correctly merge the item model overrides if:
1. a model is defined in the IA config, with custom model id being X and material being Y
2. then I manually define a model override by writing a json file, with custom model id also being X but for a different material other than Y
Ideally, IA should be able to merge the files because the material is different, even though the custom model id is the same.
### Steps to reproduce the issue
I have defined a custom furniture like this (the file is at `plugins/ItemsAdder/contents/furnitures/configs/anniversary/1st_year_cake.yml`):
```yaml
info:
namespace: furnitures
items:
1st_year_cake:
display_name: 1st Year Cake
resource:
material: PAPER
generate: false
model_id: 40000
model_path: anniversary/1st_year_cake
behaviours:
furniture:
entity: item_frame
gravity: false
small: false
solid: true
fixed_rotation: false
placeable_on:
floor: true
ceiling: false
walls: false
```
**Note that the `material` is `PAPER` and the `model_id` is 40000**
Now if I were to define a custom model override by manually writing a JSON file: the used model and custom model data are identical to the one defined in the IA config above, but it's for **`leather_horse_armor` rather than `paper`**. I put it in the path `plugins/ItemsAdder/contents/_colorable/resourcepack/minecraft/models/item/leather_horse_armor.json`:
```json
{
"parent": "item/generated",
"textures": {
"layer0": "item/leather_horse_armor"
},
"overrides": [
{
"predicate": {
"custom_model_data": 40000
},
"model": "furnitures:anniversary/1st_year_cake"
}
]
}
```
Then, run `iazip`
IA would throw a warning:
> [23:24:12 WARN]: [!] CustomModelData 40000 for item 'leather_horse_armor' already used by ItemsAdder custom item 'furnitures:1st_year_cake'. Skipped.
As a result, the `overrides` that I wrote manually are not included in the output pack.
### Server version
Current: git-Purpur-1920 (MC: 1.19.3)*
Previous: git-Purpur-1919 (MC: 1.19.3)
### ItemsAdder Version
ItemsAdder version 3.3.1
### ProtocolLib Version
ProtocolLib version 5.0.0-SNAPSHOT-b612
### LoneLibs Version
LoneLibs version 1.0.23
### LightAPI Version (optional)
_No response_
### LibsDisguises Version (optional)
_No response_
### FULL server log
_No response_
### Error (optional)
_No response_
### Problematic items yml configuration file (optional)
_No response_
### Other files, you can drag and drop them here to upload. (optional)
My ItemsAdder `config.yml`: https://pastes.dev/uo04MGQkkr
### Screenshots/Videos (you can drag and drop files or paste links)
_No response_ | 1.0 | Same `custom model data` with different `material type` cannot merge if using custom json `overrides` definition - ### Terms
- [X] I'm using the very latest version of ItemsAdder and its dependencies.
- [X] I am sure this is a bug and it is not caused by a misconfiguration or by another plugin.
- [X] I already searched on this [Github page](https://github.com/PluginBugs/Issues-ItemsAdder/issues) to check if the same issue was already reported.
- [X] I already searched on the [plugin wiki](https://itemsadder.devs.beer/) to know if a solution is already known.
- [X] I already searched on the [forums](https://forum.devs.beer/) to check if anyone already has a solution for this.
### Discord tag (optional)
Nailm#9364
### What happened?
IA cannot correctly merge the item model overrides if:
1. a model is defined in the IA config, with custom model id being X and material being Y
2. then I manually define a model override by writing a json file, with custom model id also being X but for a different material other than Y
Ideally, IA should be able to merge the files because the material is different, even though the custom model id is the same.
### Steps to reproduce the issue
I have defined a custom furniture like this (the file is at `plugins/ItemsAdder/contents/furnitures/configs/anniversary/1st_year_cake.yml`):
```yaml
info:
namespace: furnitures
items:
1st_year_cake:
display_name: 1st Year Cake
resource:
material: PAPER
generate: false
model_id: 40000
model_path: anniversary/1st_year_cake
behaviours:
furniture:
entity: item_frame
gravity: false
small: false
solid: true
fixed_rotation: false
placeable_on:
floor: true
ceiling: false
walls: false
```
**Note that the `material` is `PAPER` and the `model_id` is 40000**
Now if I were to define a custom model override by manually writing a JSON file: the used model and custom model data are identical to the one defined in the IA config above, but it's for **`leather_horse_armor` rather than `paper`**. I put it in the path `plugins/ItemsAdder/contents/_colorable/resourcepack/minecraft/models/item/leather_horse_armor.json`:
```json
{
"parent": "item/generated",
"textures": {
"layer0": "item/leather_horse_armor"
},
"overrides": [
{
"predicate": {
"custom_model_data": 40000
},
"model": "furnitures:anniversary/1st_year_cake"
}
]
}
```
Then, run `iazip`
IA would throw a warning:
> [23:24:12 WARN]: [!] CustomModelData 40000 for item 'leather_horse_armor' already used by ItemsAdder custom item 'furnitures:1st_year_cake'. Skipped.
As a result, the `overrides` that I wrote manually are not included in the output pack.
### Server version
Current: git-Purpur-1920 (MC: 1.19.3)*
Previous: git-Purpur-1919 (MC: 1.19.3)
### ItemsAdder Version
ItemsAdder version 3.3.1
### ProtocolLib Version
ProtocolLib version 5.0.0-SNAPSHOT-b612
### LoneLibs Version
LoneLibs version 1.0.23
### LightAPI Version (optional)
_No response_
### LibsDisguises Version (optional)
_No response_
### FULL server log
_No response_
### Error (optional)
_No response_
### Problematic items yml configuration file (optional)
_No response_
### Other files, you can drag and drop them here to upload. (optional)
My ItemsAdder `config.yml`: https://pastes.dev/uo04MGQkkr
### Screenshots/Videos (you can drag and drop files or paste links)
_No response_ | non_main | same custom model data with different material type cannot merge if using custom json overrides definition terms i m using the very latest version of itemsadder and its dependencies i am sure this is a bug and it is not caused by a misconfiguration or by another plugin i already searched on this to check if the same issue was already reported i already searched on the to know if a solution is already known i already searched on the to check if anyone already has a solution for this discord tag optional nailm what happened ia cannot correctly merge the item model overrides if a model is defined in the ia config with custom model id being x and material being y then i manually define a model override by writing a json file with custom model id also being x but for a different material other than y ideally ia should be able to merge the files because the material is different despite the custom model id is the same steps to reproduce the issue i have defined a custom furniture like this the file is at plugins itemsadder contents furnitures configs anniversary year cake yml yaml info namespace furnitures items year cake display name year cake resource material paper generate false model id model path anniversary year cake behaviours furniture entity item frame gravity false small false solid true fixed rotation false placeable on floor true ceiling false walls false note that the material is paper and the model id is now if i were to define a custom model override by manually writing a jsosn file the used model and custom model data are identical to the one defined in ia config above but it s for leather horse armor other than paper i put it in the path plugins itemsadder contents colorable resourcepack minecraft models item leather horse armor json json parent item generated textures item leather horse armor overrides predicate custom model data model furnitures anniversary year cake then run iazip ia would throw a warning custommodeldata 
for item leather horse armor already used by itemsadder custom item furnitures year cake skipped as a result the overrides that are manually written by me is not included in the output pack server version current git purpur mc previous git purpur mc itemsadder version itemsadder version protocollib version protocollib version snapshot lonelibs version lonelibs version lightapi version optional no response libsdisguises version optional no response full server log no response error optional no response problematic items yml configuration file optional no response other files you can drag and drop them here to upload optional my itemsadder config yml screenshots videos you can drag and drop files or paste links no response | 0 |
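The merge behaviour this record asks for can be sketched by keying overrides on the pair (material, custom_model_data) instead of on custom_model_data alone. A hypothetical Python illustration, not ItemsAdder's actual implementation:

```python
def merge_overrides(existing, incoming):
    """Merge item-model overrides keyed by (material, custom_model_data).

    Both arguments map a vanilla material name to a list of override dicts
    shaped like the `overrides` entries of a Minecraft item model JSON.
    """
    merged = {material: list(entries) for material, entries in existing.items()}
    for material, entries in incoming.items():
        slot = merged.setdefault(material, [])
        used = {e["predicate"]["custom_model_data"] for e in slot}
        for entry in entries:
            cmd = entry["predicate"]["custom_model_data"]
            if cmd in used:
                continue  # same material AND same id: a real clash, skip it
            used.add(cmd)
            slot.append(entry)
    return merged

# The situation from the report: id 40000 registered for PAPER in the IA
# config, plus a hand-written leather_horse_armor override with the same id.
ia_side = {"paper": [{"predicate": {"custom_model_data": 40000},
                      "model": "furnitures:anniversary/1st_year_cake"}]}
user_side = {"leather_horse_armor": [{"predicate": {"custom_model_data": 40000},
                                      "model": "furnitures:anniversary/1st_year_cake"}]}
```

Because id 40000 is registered under `paper`, the hand-written `leather_horse_armor` entry no longer counts as a clash and survives the merge; a duplicate id under the same material would still be skipped.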
2,772 | 9,886,760,905 | IssuesEvent | 2019-06-25 07:37:56 | adda-team/adda | https://api.github.com/repos/adda-team/adda | opened | Consider deprecating SSE3 code | comp-Logic maintainability performance pri-Medium sparse | Recent optimizations of imExp (#169) made SSE3 code only marginally faster than the standard c99 one (with compiler optimizations). It also showed that part of the SSE3 speedup is due to unsafe optimizations (can lead to precision loss).
Thus, better maintainability can be obtained if SSE3 code is removed altogether. However, it is worth studying where the remaining 10% speedup due to SSE3 comes from and optimizing the standard code accordingly.
/cc @jleinonen | True | Consider deprecating SSE3 code - Recent optimizations of imExp (#169) made SSE3 code only marginally faster than the standard c99 one (with compiler optimizations). It also showed that part of the SSE3 speedup is due to unsafe optimizations (can lead to precision loss).
Thus, better maintainability can be obtained if SSE3 code is removed altogether. However, it is worth studying where the remaining 10% speedup due to SSE3 comes from and optimizing the standard code accordingly.
/cc @jleinonen | main | consider deprecating code recent optimizations of imexp made code only marginally faster than the standard one with compiler optimizations it also showed that part of the speedup is due to unsafe optimizations can lead to precision loss thus better maintainability can be obtained if code is removed altogether however it is worth studying where the remaining speedup due to comes from and optimizing the standard code accordingly cc jleinonen | 1 |
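For context, imExp is presumably the imaginary-argument exponential exp(i·x) = cos x + i·sin x, and the "standard c99" path amounts to a plain library call. A small check of that identity (written in Python for illustration; ADDA itself is C), showing the accuracy baseline that unsafe SIMD shortcuts can trade away:

```python
import cmath, math

def im_exp(x):
    """exp(i*x) via the standard library: the portable fallback to SIMD code."""
    return cmath.exp(1j * x)

# exp(i*x) == cos(x) + i*sin(x) holds to machine precision.
for x in (0.0, 0.5, math.pi, 10.0):
    assert abs(im_exp(x) - complex(math.cos(x), math.sin(x))) < 1e-12
```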
568,917 | 16,990,528,441 | IssuesEvent | 2021-06-30 19:47:44 | Javacord/Javacord | https://api.github.com/repos/Javacord/Javacord | closed | NPE when listener was running too long | bug low priority | ```
[ERROR] 2021-01-11 00:04:03 [Javacord - Central Scheduler - 1] EventDispatcherBase - Interrupted a a listener thread for SERVERNAME, because it was running over 120 seconds! This was most likely caused by a deadlock or very heavy computation/blocking operations in the listener thread. Make sure to not block listener threads!
[ERROR] 2021-01-11 00:04:03 [Javacord - Central ExecutorService - 991] EventDispatcherBase - Unhandled exception in a listener thread for SERVERNAME
java.lang.NullPointerException: null
at org.javacord.core.util.event.EventDispatcherBase.lambda$checkRunningListenersAndStartIfPossible$17(EventDispatcherBase.java:265) ~[javacord-core-3.1.2.jar:?]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) [?:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]
at java.lang.Thread.run(Thread.java:832) [?:?]
```
Note: The application was absolutely broken and I couldn't find the cause for it so far meaning I can't reproduce this | 1.0 | NPE when listener was running too long - ```
[ERROR] 2021-01-11 00:04:03 [Javacord - Central Scheduler - 1] EventDispatcherBase - Interrupted a a listener thread for SERVERNAME, because it was running over 120 seconds! This was most likely caused by a deadlock or very heavy computation/blocking operations in the listener thread. Make sure to not block listener threads!
[ERROR] 2021-01-11 00:04:03 [Javacord - Central ExecutorService - 991] EventDispatcherBase - Unhandled exception in a listener thread for SERVERNAME
java.lang.NullPointerException: null
at org.javacord.core.util.event.EventDispatcherBase.lambda$checkRunningListenersAndStartIfPossible$17(EventDispatcherBase.java:265) ~[javacord-core-3.1.2.jar:?]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) [?:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]
at java.lang.Thread.run(Thread.java:832) [?:?]
```
Note: The application was absolutely broken and I couldn't find the cause for it so far meaning I can't reproduce this | non_main | npe when listener was running too long eventdispatcherbase interrupted a a listener thread for servername because it was running over seconds this was most likely caused by a deadlock or very heavy computation blocking operations in the listener thread make sure to not block listener threads eventdispatcherbase unhandled exception in a listener thread for servername java lang nullpointerexception null at org javacord core util event eventdispatcherbase lambda checkrunninglistenersandstartifpossible eventdispatcherbase java at java util concurrent executors runnableadapter call executors java at java util concurrent futuretask run futuretask java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java note the application was absolutely broken and i couldn t find the cause for it so far meaning i can t reproduce this | 0 |
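The stack trace above presumably points at a race: the watchdog interrupts a long-running listener while the bookkeeping it reads is cleared concurrently, so a null guard around the tracked task is the obvious defence. A hypothetical Python analogue with `concurrent.futures` (not Javacord's code; the helper name is invented):

```python
from concurrent.futures import ThreadPoolExecutor
import time

def cancel_if_running(future):
    """Guarded cancel: tolerate the tracked future having been cleared."""
    if future is None:  # the race that would otherwise surface as an NPE
        return False
    # True means the task will do no further work (cancelled or already done).
    return future.cancel() or future.done()

pool = ThreadPoolExecutor(max_workers=1)
blocker = pool.submit(time.sleep, 0.5)  # stands in for a long-running listener
```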
48,069 | 7,373,386,199 | IssuesEvent | 2018-03-13 17:07:59 | blackbaud/skyux2 | https://api.github.com/repos/blackbaud/skyux2 | closed | Some demo components are pulling template contents into component class | bug documentation | Look at some of the component classes' `template:` property; the template is printed inline. For example: https://developer.blackbaud.com/skyux/components/avatar#code
<img width="521" alt="screen shot 2018-02-21 at 10 50 07 pm" src="https://user-images.githubusercontent.com/12497062/36519450-98a54fae-1759-11e8-94c7-83fe52ccd720.png">
| 1.0 | Some demo components are pulling template contents into component class - Look at some of the component classes' `template:` property; the template is printed inline. For example: https://developer.blackbaud.com/skyux/components/avatar#code
<img width="521" alt="screen shot 2018-02-21 at 10 50 07 pm" src="https://user-images.githubusercontent.com/12497062/36519450-98a54fae-1759-11e8-94c7-83fe52ccd720.png">
| non_main | some demo components are pulling template contents into component class look at some of the component classes template property the template is printed inline for example img width alt screen shot at pm src | 0 |
362,367 | 25,372,842,155 | IssuesEvent | 2022-11-21 11:54:33 | kubernetes/website | https://api.github.com/repos/kubernetes/website | closed | [hi] Replace Objective with Hindi in content/hi/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html | kind/bug kind/documentation needs-triage | **This is a Bug Report**
Objective should be replaced by उद्देश्य in content/hi/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html
<!--Required Information-->
**Problem:**
Objective is english word in hindi content
**Proposed Solution:**
Replace with उद्देश्य
**Page to Update:**
https://kubernetes.io/hi/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro/
<!--Optional Information (remove the comment tags around information you would like to include)-->
<!--Kubernetes Version:-->
<!--Additional Information:-->
| 1.0 | [hi] Replace Objective with Hindi in content/hi/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html - **This is a Bug Report**
Objective should be replaced by उद्देश्य in content/hi/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html
<!--Required Information-->
**Problem:**
Objective is english word in hindi content
**Proposed Solution:**
Replace with उद्देश्य
**Page to Update:**
https://kubernetes.io/hi/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro/
<!--Optional Information (remove the comment tags around information you would like to include)-->
<!--Kubernetes Version:-->
<!--Additional Information:-->
| non_main | replace objective with hindi in content hi docs tutorials kubernetes basics create cluster cluster intro html this is a bug report objective should be replace by उद्देश्य in content hi docs tutorials kubernetes basics create cluster cluster intro html problem objective is english word in hindi content proposed solution replace with उद्देश्य page to update | 0 |
133,795 | 18,953,955,824 | IssuesEvent | 2021-11-18 17:57:07 | department-of-veterans-affairs/vets-design-system-documentation | https://api.github.com/repos/department-of-veterans-affairs/vets-design-system-documentation | opened | Fix enableAnalytics prop in radio button story | vsp-design-system-team | # Feature Request
- [ ] I’ve searched for any related issues and avoided creating a duplicate issue.
## This update is for:
- [ ] Content styleguide
- [ ] Component
- [ ] Pattern
- [ ] Utility
- [x] Other
## What is the name
`<va-radio>`
## What is the nature of this update?
The story for the radio web component isn't using the `enableAnalytics` prop correctly.

The story needs to be updated so that this prop is used correctly.
- [ ] How to build this component/pattern
- [ ] When to use this component/pattern
- [ ] When to use something else
- [x] Usability guidance
- [ ] Accessibility
- [ ] Implementation
- [ ] Research insights
- [ ] Package information
## Additional Context
None
| 1.0 | Fix enableAnalytics prop in radio button story - # Feature Request
- [ ] I’ve searched for any related issues and avoided creating a duplicate issue.
## This update is for:
- [ ] Content styleguide
- [ ] Component
- [ ] Pattern
- [ ] Utility
- [x] Other
## What is the name
`<va-radio>`
## What is the nature of this update?
The story for the radio web component isn't using the `enableAnalytics` prop correctly.

The story needs to be updated so that this prop is used correctly.
- [ ] How to build this component/pattern
- [ ] When to use this component/pattern
- [ ] When to use something else
- [x] Usability guidance
- [ ] Accessibility
- [ ] Implementation
- [ ] Research insights
- [ ] Package information
## Additional Context
None
| non_main | fix enableanalytics prop in radio button story feature request i’ve searched for any related issues and avoided creating a duplicate issue this update is for content styleguide component pattern utility other what is the name what is the nature of this update the story for the radio web component isn t using the enableanalytics prop correctly the story needs to be updated so that this prop is used correctly how to build this component pattern when to use this component pattern when to use something else usability guidance accessibility implementation research insights package information additional context none | 0 |
4,925 | 25,327,012,112 | IssuesEvent | 2022-11-18 10:06:44 | precice/precice | https://api.github.com/repos/precice/precice | closed | Redesign the XML configuration to avoid attributes | usability maintainability breaking change | _This is supposed to be primarily a place to discuss this_
**Please describe the problem you are trying to solve.**
The preCICE configuration uses an attribute-heavy XML style, while XML is generally used in a tag-heavy style.
The tag-heavy style as some benefits, such as [extensive library support](https://www.boost.org/doc/libs/1_78_0/doc/html/property_tree/tutorial.html) as well as the ability to transform between formats such as json and yaml. Using `boost.property_tree` would allow us to drop the libxml2 dependency and remove part of our custom XML parsing logic.
```xml
<!-- Attribute-heavy -->
<mesh name="MeshA">
<use-data name="Data"/>
</mesh>
<mesh name="MeshB">
<use-data name="Data"/>
</mesh>
<!-- Tag-heavy -->
<meshes>
<mesh>
<name>MeshA</name>
<use-datas>
<use-data>Data</use-data>
</use-datas>
</mesh>
<mesh>
<name>MeshB</name>
<use-datas>
<use-data>Data</use-data>
</use-datas>
</mesh>
</meshes>
```
**YAML and JSON equivalents**
```yml
meshes:
  - name: MeshA
    use-data:
      - Data
  - name: MeshB
    use-data:
      - Data
```
```json
"meshes": [
{
"name": "MeshA",
"use-data": [ "Data" ]
},
{
"name": "MeshB",
"use-data": [ "Data" ]
}
]
```
**Describe the solution you propose.**
Rethink the configuration to be fully tag-based, allowing us to painlessly process various formats and replace our XML processing. This is especially useful in tooling.
**Describe alternatives you've considered**
* Agree to do nothing
* Change to a completely different configuration format (prototypes needed)
* Use tag nesting for grouping all logical units, such as data, meshes, m2ns, etc.
**Additional context**
#1235 #928
| True |
| main | redesign the xml configuration to avoid attributes this is supposed to be primarily a place to discuss this please describe the problem you are trying to solve the precice configuration uses an attribute heavy xml style while xml is generally used in a tag heavy style the tag heavy style as some benefits such as as well as the ability to transform between formats such as json and yaml using boost property tree would allow us to drop the dependency and remove part of our custom xml parsing logic xml mesha data meshb data yaml and json equivalents yml meshes name mesha use data name data name meshb use data name data json meshes name mesha use data name meshb use data describe the solution you propose rethink the configuration to be fully tag based allowing us to painlessly process various formats and replace our xml processing this is especially useful in tooling describe alternatives you ve considered agree to do nothing change to a completely different configuration format prototypes needed use tag nesting for all grouping logical units such as data meshes etc additional context | 1 |
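The preCICE issue above claims that a tag-heavy layout maps cleanly onto generic tree processing and format conversion. A minimal sketch of that claim — using Python's standard-library `xml.etree` instead of the `boost.property_tree` the issue mentions, with element names and dictionary keys taken from the issue's own examples (and the stray `>` after `Data` removed):

```python
import xml.etree.ElementTree as ET

# Tag-heavy layout from the preCICE issue's example.
TAG_HEAVY = """
<meshes>
  <mesh>
    <name>MeshA</name>
    <use-datas><use-data>Data</use-data></use-datas>
  </mesh>
  <mesh>
    <name>MeshB</name>
    <use-datas><use-data>Data</use-data></use-datas>
  </mesh>
</meshes>
"""

def meshes_to_dicts(xml_text):
    """Convert the tag-heavy <meshes> tree into the JSON-like structure
    shown in the issue: a list of {"name": ..., "use-data": [...]}."""
    root = ET.fromstring(xml_text)
    result = []
    for mesh in root.findall("mesh"):
        result.append({
            "name": mesh.findtext("name"),
            "use-data": [d.text for d in mesh.findall("./use-datas/use-data")],
        })
    return result

print(meshes_to_dicts(TAG_HEAVY))
# → [{'name': 'MeshA', 'use-data': ['Data']}, {'name': 'MeshB', 'use-data': ['Data']}]
```

Because every value lives in element text rather than attributes, the same traversal works unchanged after a round-trip through JSON or YAML — which is the portability argument the issue is making.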
5,552 | 27,784,281,592 | IssuesEvent | 2023-03-17 00:55:11 | carbon-design-system/carbon | https://api.github.com/repos/carbon-design-system/carbon | closed | [Bug]: FileUploader does not clear value when file is deleted | type: bug 🐛 severity: 2 role: dev 🤖 status: waiting for maintainer response 💬 | ### Package
@carbon/react
### Browser
Chrome, Safari
### Package version
1.23.1
### React version
17.0.1
### Description
FileUploader: When uploading a file, deleting it, and trying to upload the same file it does not work. It seems to be an issue with the delete functionality not clearing the value, and hence Chrome and Safari do not trigger the onChange event.
NOTE: If you upload a different file, it will work, since the value will change and the onChange will trigger.
### Reproduction/example
https://stackblitz.com/edit/github-94x55y?file=src/App.jsx
### Steps to reproduce
(1) Upload a file.
(2) Delete the file.
(3) Upload the same file.
(4) See that it does not work.
NOTE: If you upload a different file, it will work, since the value will change and the onChange will trigger.
### Suggested Severity
Severity 3 = User can complete task, and/or has a workaround within the user experience of a given component.
### Application/PAL
_No response_
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/carbon-design-system/carbon/blob/f555616971a03fd454c0f4daea184adf41fff05b/.github/CODE_OF_CONDUCT.md)
- [X] I checked the [current issues](https://github.com/carbon-design-system/carbon/issues) for duplicate problems | True |
| main | fileuploader does not clear value when file is deleted package carbon react browser chrome safari package version react version description fileuploader when uploading a file deleting it and trying to upload the same file it does not work it seems to be an issue with the delete functionality not clearing the value and hence chrome and safari do not trigger the onchange event note if you upload a different file it will work since the value will change and the onchange will trigger reproduction example steps to reproduce upload a file delete the file upload the same file see that it does not work note if you upload a different file it will work since the value will change and the onchange will trigger suggested severity severity user can complete task and or has a workaround within the user experience of a given component application pal no response code of conduct i agree to follow this project s i checked the for duplicate problems | 1
3,902 | 17,376,851,908 | IssuesEvent | 2021-07-30 23:28:21 | chorman0773/Clever-ISA | https://api.github.com/repos/chorman0773/Clever-ISA | closed | Encodings (within square brackets) do not denote unit size. | I-unclear S-blocked-on-maintainer X-generic | Currently the document indicates encodings as serieses of named bits delimited by square brackets and designates the meanings of each bit by group. However, it is not made clear that these groups are bits. This should be solved | True | main | encodings within square brackets do not denote unit size currently the document indicates encodings as serieses of named bits delimited by square brackets and designates the meanings of each bit by group however it is not made clear that these groups are bits this should be solved | 1
5,631 | 28,263,067,074 | IssuesEvent | 2023-04-07 02:23:44 | carbon-design-system/carbon | https://api.github.com/repos/carbon-design-system/carbon | closed | [a11y]: Dropdown does not use aria-describedby to link help text with component | type: bug 🐛 severity: 2 type: a11y ♿ role: dev 🤖 component: multi-select status: waiting for maintainer response 💬 screen-reader: JAWS | ### Package
@carbon/react
### Browser
Chrome
### Operating System
MacOS
### Package version
React storybook
### React version
https://react.carbondesignsystem.com/?path=/story/components-multiselect--default
### Automated testing tool and ruleset
n/a
### Assistive technology
JAWS
### Description
Any helper text should be automatically surfaced to assistive technologies through the use of aria-describedby.
This should happen on ALL Carbon components that can have helper text.
it is in place for Text input, so it only needs to be emulated for Dropdown/MultiSelect Dropdown and any other components that do not have it

Steps to resolve:
1. Given the div holding the helper text an appropriate ID
2. use this ID with aria-describedby on the div holding the input.
This means that the input will have its programmatic label announced by a Screen Reader, and then have any helper text read out after a pause.
It also means that any caution or warning text appearing in the helper text area will also be announced.
### WCAG 2.1 Violation
https://www.w3.org/WAI/WCAG21/Understanding/info-and-relationships.html
### Reproduction/example
https://react.carbondesignsystem.com/?path=/story/components-multiselect--default
### Steps to reproduce
1. Inspect the Helper text and confirm there is no ID associated with it.
2. If ID exists, inspect input and confirm there is no use of aria-describedby
OR
Turn on JAWS and tab to Dropdown. JAWS will read the input and any option value, but will not announce the helper text.
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/carbon-design-system/carbon/blob/f555616971a03fd454c0f4daea184adf41fff05b/.github/CODE_OF_CONDUCT.md)
- [X] I checked the [current issues](https://github.com/carbon-design-system/carbon/issues) for duplicate problems | True |
| main | dropdown does not use aria describedby to link help text with component package carbon react browser chrome operating system macos package version react storybook react version automated testing tool and ruleset n a assistive technology jaws description any helper text should be automatically surfaced to assistive technologies through the use of aria describedby this should happen on all carbon components that can have helper text it is in place for text input so it only needs to be emulated for dropdown multiselect dropdown and any other components that do not have it steps to resolve given the div holding the helper text an appropriate id use this id with aria describedby on the div holding the input this means that the input will have its programmatic label announced by a screen reader and then have any helper text read out after a pause it also means that any caution or warning text appearing in the helper text area will also be announced wcag violation reproduction example steps to reproduce inspect the helper text and confirm there is no id associated with it if id exists inspect input and confirm there is no use of aria describedby or turn on jaws and tab to dropdown jaws will read the input and any option value but will not announce the helper text code of conduct i agree to follow this project s i checked the for duplicate problems | 1
105,289 | 23,023,966,346 | IssuesEvent | 2022-07-22 07:44:01 | gitpod-io/gitpod | https://api.github.com/repos/gitpod-io/gitpod | closed | Overriddes VSCode config for SSH Config Location | type: bug team: IDE editor: code (desktop) | ### Bug description
this literally changes the vscode global config for this to work, and it is *quite* annoying to have to switch it back every single time as I have workspaces on remote hardware that I need to be able to access quickly
### Steps to reproduce
have `~/.ssh/config` set as your default config *locally*
then click "open in vscode" on gitpod,
### Workspace affected
_No response_
### Expected behavior
_No response_
### Example repository
_No response_
### Anything else?
It is very annoying and I believe there should be an way to not do this, or rather add an *tempory* config into the pre-defined `.ssh/config` that is then deleted after use
| 1.0 |
| non_main | overriddes vscode config for ssh config location bug description this literally changes the vscode global config for this to work and it is quite annoying to have to switch it back every single time as i have workspaces on remote hardware that i need to be able to access quickly steps to reproduce have ssh config set as your default config locally then click open in vscode on gitpod workspace affected no response expected behavior no response example repository no response anything else it is very annoying and i believe there should be an way to not do this or rather add an tempory config into the pre defined ssh config that is then deleted after use | 0 |
84,915 | 10,572,687,676 | IssuesEvent | 2019-10-07 10:08:03 | Yann4/Thesis | https://api.github.com/repos/Yann4/Thesis | opened | Design power ups | design | Idea is that you'll be able to buy them from a shopkeeper/trainer/whatever and it'll do _something_ to enhance either your combat or movement abilities. Define a small set that each tribe will be able to give you through one way or another. | 1.0 | non_main | design power ups idea is that you ll be able to buy them from a shopkeeper trainer whatever and it ll do something to enhance either your combat or movement abilities define a small set that each tribe will be able to give you through one way or another | 0