Unnamed: 0 int64 1 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 7 112 | repo_url stringlengths 36 141 | action stringclasses 3 values | title stringlengths 3 438 | labels stringlengths 4 308 | body stringlengths 7 254k | index stringclasses 7 values | text_combine stringlengths 96 254k | label stringclasses 2 values | text stringlengths 96 246k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1,301 | 5,541,936,639 | IssuesEvent | 2017-03-22 14:02:21 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | Feature request: allow iterating listeners with with_items inside ec2_elb_lb task | affects_2.1 aws cloud feature_idea waiting_on_maintainer | ISSUE TYPE
Feature Idea
ANSIBLE VERSION
```
$ ansible --version
ansible 2.1.0.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
CONFIGURATION
$ egrep -v "^#|^$" /etc/ansible/ansible.cfg
[defaults]
gathering = explicit
host_key_checking = False
callback_whitelist = profile_tasks
remote_user = ec2-user
private_key_file = /Users/myuser/.ssh/mykey.pem
ansible_managed = Ansible managed: {file} modified on %Y-%m-%d %H:%M:%S by {uid} on {host}
display_skipped_hosts = False
command_warnings = True
retry_files_enabled = False
squash_actions = apk,apt,dnf,package,pacman,pkgng,yum,zypper
[privilege_escalation]
[paramiko_connection]
[ssh_connection]
[accelerate]
[selinux]
[colors]
Env vars:
export ANSIBLE_HOST_KEY_CHECKING='false'
export AWS_REGION='eu-central-1'
and the obvious AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
```
OS / ENVIRONMENT
From Mac OS X El Capitan to N/A (AWS)
SUMMARY
I need to create an elastic load balancer which listens on a lot of ports (i.e. 100+). I am aware that AWS ELBs do not allow for port ranges in their ELB module (just on security groups), so I would like to do something like this:
```
- name: Create ELB for sender servers
local_action:
module: ec2_elb_lb
name: "{{ elb_sender }}"
state: present
zones: "{{ availability_zones }}"
tags:
Name: "{{ elb_sender }}"
listeners:
- protocol: tcp
load_balancer_port: "{{ item }}"
instance_port: "{{ item }}"
proxy_protocol: True
cross_az_load_balancing: "yes"
security_group_names: "{{ security_group_sender }}"
wait: yes
with_items:
- 80
- 81
```
This does not error out, but it only creates an ELB listening on the last port in the items list: each loop iteration runs the module again, replacing the previously configured listener.
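Until the module supports looping listeners natively, one workaround (a sketch, not part of the original request; the `sender_listeners` variable name is illustrative) is to accumulate the listener dictionaries with `set_fact` and then call `ec2_elb_lb` a single time with the full list:

```yaml
- name: Build the full listener list first
  set_fact:
    sender_listeners: "{{ sender_listeners | default([]) + [ {'protocol': 'tcp', 'load_balancer_port': item, 'instance_port': item, 'proxy_protocol': True} ] }}"
  with_items:
    - 80
    - 81

- name: Create ELB for sender servers (single module call)
  local_action:
    module: ec2_elb_lb
    name: "{{ elb_sender }}"
    state: present
    zones: "{{ availability_zones }}"
    security_group_names: "{{ security_group_sender }}"
    listeners: "{{ sender_listeners }}"
    wait: yes
```

Because the module is invoked only once, all listeners survive instead of only the last one.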
STEPS TO REPRODUCE
The new feature would be as above.
```
- name: Create ELB for sender servers
local_action:
module: ec2_elb_lb
name: "{{ elb_sender }}"
state: present
zones: "{{ availability_zones }}"
tags:
Name: "{{ elb_sender }}"
listeners:
- protocol: tcp
load_balancer_port: "{{ item }}"
instance_port: "{{ item }}"
proxy_protocol: True
cross_az_load_balancing: "yes"
security_group_names: "{{ security_group_sender }}"
wait: yes
with_items:
- 80
- 81
```
EXPECTED RESULTS
I expected/hoped for a loop of the listeners part of the task.
ACTUAL RESULTS
An ELB with a listener on port 81.
Thanks for considering it.
| True | Feature request: allow iterating listeners with with_items inside ec2_elb_lb task - [verbatim duplicate of the body above] | main | [normalized duplicate of the body above] | 1 |
4,570 | 23,750,897,652 | IssuesEvent | 2022-08-31 20:30:11 | aws/aws-sam-cli | https://api.github.com/repos/aws/aws-sam-cli | closed | Billing Duration reported in 100ms intervals | stage/in-progress area/local/invoke maintainer/need-followup | ### Description:
When using `sam local start-api` the billing duration is reported in 100ms intervals
### Steps to reproduce:
I used the Hello World quick start template
Run `sam local start-api --port 8080`
Hit the function
### Observed result:
Duration: 255.59 ms Billed Duration: 300 ms
### Expected result:
I would have expected billing duration to be 256ms
### Additional environment details (Ex: Windows, Mac, Amazon Linux etc)
I ran it in Cloud9
I looked through the code and couldn't find anything in this repo relating to printing this out. I'm assuming it's pulled in from somewhere else, so maybe an old dependency?
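For context, the arithmetic behind the two numbers is simple rounding up to a billing increment. AWS Lambda billed in 100 ms increments until December 2020, when it switched to 1 ms granularity, which is why 256 ms is the expected figure today. A minimal sketch:

```python
import math

def billed_duration_ms(duration_ms: float, increment_ms: int = 100) -> int:
    """Round a measured duration up to the next billing increment."""
    return math.ceil(duration_ms / increment_ms) * increment_ms

print(billed_duration_ms(255.59))                  # 300 (old 100 ms billing)
print(billed_duration_ms(255.59, increment_ms=1))  # 256 (current 1 ms billing)
```

The report is essentially that `sam local` still uses `increment_ms=100` while the Lambda service itself now bills per millisecond.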
| True | Billing Duration reported in 100ms intervals - ### Description:
When using `sam local start-api` the billing duration is reported in 100ms intervals
### Steps to reproduce:
I used the Hello World quick start template
Run `sam local start-api --port 8080`
Hit the function
### Observed result:
Duration: 255.59 ms Billed Duration: 300 ms
### Expected result:
I would have expected billing duration to be 256ms
### Additional environment details (Ex: Windows, Mac, Amazon Linux etc)
I ran it in Cloud9
I looked through the code and couldn't find anything in this repo relating to printing this out. I'm assuming its pulling this in from somewhere else so maybe an old dependency?
| main | billing duration reported in intervals description when using sam local start api the billing duration is reported in intervals steps to reproduce i used the hello world quick start template run sam local start api port hit the function observed result duration ms billed duration ms expected result i would have expected billing duration to be additional environment details ex windows mac amazon linux etc i ran it in i looked through the code and couldn t find anything in this repo relating to printing this out i m assuming its pulling this in from somewhere else so maybe an old dependency | 1 |
1,345 | 5,722,625,376 | IssuesEvent | 2017-04-20 09:57:16 | caskroom/homebrew-cask | https://api.github.com/repos/caskroom/homebrew-cask | closed | Cannot uninstall lightkey 1.6.3 - and others | awaiting maintainer feedback | #### General troubleshooting steps
- [X] I have checked the instructions for [reporting bugs](https://github.com/caskroom/homebrew-cask#reporting-bugs) (or [making requests](https://github.com/caskroom/homebrew-cask#requests)) before opening the issue.
- [X] None of the templates was appropriate for my issue, or I’m not sure.
- [X] I ran `brew update-reset && brew update` and retried my command.
- [X] I ran `brew doctor`, fixed as many issues as possible and retried my command.
- [X] I understand that [if I ignore these instructions, my issue may be closed without review](https://github.com/caskroom/homebrew-cask/blob/master/doc/faq/closing_issues_without_review.md).
#### Description of issue
I have been having trouble uninstalling a few casks while performing cask updates (lightkey being one of them) and had thought that it was due to the recent 10.12.4 update. I have finally managed to set up a clean system with 10.12.3 and the same problem exists, so the issue is unrelated to the minor version upgrade.
It turns out that `plist/parser.rb` does not handle the plist, at least on 10.12.3 and 10.12.4, if the plist file ends up tagged as ASCII-8BIT rather than UTF-8. Applying the diff below fixes the uninstall, but of course this is a Homebrew issue rather than a homebrew-cask (hbc) issue.
This same issue affects a few other casks that I have found as well: tableau-public, drobo-dashboard, resolume-avenue, plover.
Just wondering if anyone has advice on this issue, or should I just raise it in Homebrew?
```
diff --git a/Library/Homebrew/vendor/plist/plist/parser.rb b/Library/Homebrew/vendor/plist/plist/parser.rb
index de441fc..7bd60fd 100644
--- a/Library/Homebrew/vendor/plist/plist/parser.rb
+++ b/Library/Homebrew/vendor/plist/plist/parser.rb
@@ -86,7 +86,7 @@ module Plist
require 'strscan'
- @scanner = StringScanner.new( @xml )
+ @scanner = StringScanner.new( @xml.force_encoding("utf-8") )
until @scanner.eos?
if @scanner.scan(COMMENT_START)
@scanner.scan(COMMENT_END)
```
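The failure mode generalizes beyond Ruby: matching a text pattern against bytes whose encoding the runtime does not trust fails until the bytes are explicitly declared or decoded as UTF-8, which is exactly what the `force_encoding("utf-8")` call in the diff does. A Python analogue of the same class of error (the identifiers below are illustrative, not from Homebrew's code):

```python
import re

pattern = re.compile("lightkey")  # a text (str) pattern
raw = "de.monospc.lightkey.pkg.App".encode("utf-8")  # raw bytes, like ASCII-8BIT

try:
    pattern.search(raw)  # str pattern vs. bytes: rejected by the runtime
except TypeError as exc:
    print(exc)  # TypeError: cannot use a string pattern on a bytes-like object

# Declaring the bytes as UTF-8 text, as force_encoding does in the diff,
# makes the match legal again.
assert pattern.search(raw.decode("utf-8")) is not None
```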
#### Output of your command with `--verbose --debug`
```
$ brew cask uninstall lightkey --verbose --debug
==> Uninstalling Cask lightkey
==> Hbc::Installer#uninstall
==> Un-installing artifacts
==> Determining which artifacts are present in Cask lightkey
==> 3 artifact/s defined
#<Hbc::Artifact::Uninstall:0x007f8ba7019a98>
#<Hbc::Artifact::Pkg:0x007f8ba7019b10>
#<Hbc::Artifact::Zap:0x007f8ba7019a20>
==> Un-installing artifact of class Hbc::Artifact::Uninstall
==> Running uninstall process for lightkey; your password may be necessary
==> Uninstalling packages:
==> Executing: ["/usr/sbin/pkgutil", "--pkgs=de.monospc.lightkey.pkg.App"]
de.monospc.lightkey.pkg.App
==> Executing: ["/usr/sbin/pkgutil", "--export-plist", "de.monospc.lightkey.pkg.App"]
Error: incompatible encoding regexp match (UTF-8 regexp with ASCII-8BIT string)
Follow the instructions here:
https://github.com/caskroom/homebrew-cask#reporting-bugs
/usr/local/Homebrew/Library/Homebrew/vendor/plist/plist/parser.rb:91:in `scan'
/usr/local/Homebrew/Library/Homebrew/vendor/plist/plist/parser.rb:91:in `parse'
/usr/local/Homebrew/Library/Homebrew/vendor/plist/plist/parser.rb:29:in `parse_xml'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/system_command.rb:160:in `_parse_plist'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/system_command.rb:131:in `plist'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/pkg.rb:75:in `info'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/pkg.rb:66:in `pkgutil_bom_all'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/pkg.rb:54:in `pkgutil_bom_files'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/pkg.rb:17:in `uninstall'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/artifact/uninstall_base.rb:191:in `block (2 levels) in uninstall_pkgutil'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/artifact/uninstall_base.rb:189:in `each'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/artifact/uninstall_base.rb:189:in `block in uninstall_pkgutil'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/artifact/uninstall_base.rb:188:in `each'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/artifact/uninstall_base.rb:188:in `uninstall_pkgutil'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/artifact/uninstall_base.rb:34:in `block (2 levels) in dispatch_uninstall_directives'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/artifact/uninstall_base.rb:32:in `each'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/artifact/uninstall_base.rb:32:in `block in dispatch_uninstall_directives'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/artifact/uninstall_base.rb:31:in `each'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/artifact/uninstall_base.rb:31:in `dispatch_uninstall_directives'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/artifact/uninstall.rb:7:in `uninstall_phase'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/installer.rb:330:in `block in uninstall_artifacts'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/installer.rb:327:in `each'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/installer.rb:327:in `uninstall_artifacts'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/installer.rb:312:in `uninstall'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli/uninstall.rb:26:in `block in run'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli/uninstall.rb:9:in `each'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli/uninstall.rb:9:in `run'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:115:in `run_command'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:158:in `process'
/usr/local/Homebrew/Library/Homebrew/cmd/cask.rb:8:in `cask'
/usr/local/Homebrew/Library/Homebrew/brew.rb:91:in `<main>'
Error: Kernel.exit
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:169:in `exit'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:169:in `rescue in process'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:149:in `process'
/usr/local/Homebrew/Library/Homebrew/cmd/cask.rb:8:in `cask'
/usr/local/Homebrew/Library/Homebrew/brew.rb:91:in `<main>'
```
#### Output of `brew cask doctor`
```
$ brew doctor
Your system is ready to brew.
```
| True | Cannot uninstall lightkey 1.6.3 - and others - [verbatim duplicate of the body above] | main | [normalized duplicate of the body above] | 1 |
652,118 | 21,522,662,910 | IssuesEvent | 2022-04-28 15:26:05 | magento/pwa-studio | https://api.github.com/repos/magento/pwa-studio | reopened | [bug]: RootComponents can't have // commented code in their files | bug help wanted Priority: P2 Severity: S3 Progress: done | **Describe the bug**
Leaving code comments in RootComponents/Category/category.js prevents webpack from running after a restart
**To reproduce**
Steps to reproduce the behavior:
0. Start watcher with `yarn run watch:venia`
1. Go to packages/venia-ui/lib/RootComponents/Category/category.js
2. comment out
`
if (totalPagesFromData === null) {
return fullPageLoadingIndicator;
}
` with this `//` comment style
3. Note that it seems to compile normally without errors
4. Now restart the watcher
5. It should give you an error like below
<img width="1307" alt="Screenshot 2020-03-11 at 11 36 45" src="https://user-images.githubusercontent.com/19858728/76408037-9be0e580-638c-11ea-9287-b8a508cc35ef.png">
6. Now go ahead and remove the piece of code that we've just commented.
7. Restart watcher and note it now work properly
**Expected behavior**
Runs without a problem when code is commented out
**Additional context**
It seems to happen when the commented-out code contains an `if()` statement.
Now for the weirdest part: when you use multiline comments like `/* */` the problem doesn't appear. So I think somewhere in the RootComponentsPlugin it somehow reads these comments
**Please complete the following device information:**
- Device [e.g. iPhone6, PC, Mac, Pixel3]:
- OS [e.g. iOS8.1, Windows 10]:
- Browser [e.g. Chrome, Safari]:
- Browser Version [e.g. 22]:
- Magento Version:
- PWA Studio Version:
- NPM version `npm -v`:
- Node Version `node -v`:
<!-- Complete the following sections to help us apply appropriate labels! -->
**Please let us know what packages this bug is in regards to:**
- [ ] `venia-concept`
- [ ] `venia-ui`
- [x] `pwa-buildpack`
- [ ] `peregrine`
- [ ] `pwa-devdocs`
- [ ] `upward-js`
- [ ] `upward-spec`
- [ ] `create-pwa`
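One plausible explanation for the reported behaviour (an illustrative guess, not a confirmed reading of pwa-buildpack's source) is a scanner that strips `/* */` block comments before looking for patterns in RootComponent files, but never strips `//` line comments, so code inside them still gets "seen". A minimal sketch of that asymmetry:

```python
import re

SOURCE = """
// if (totalPagesFromData === null) { return fullPageLoadingIndicator; }
/* if (x) { y(); } */
const ok = true;
"""

# Hypothetical scanner that only strips /* */ block comments:
block_only = re.sub(r"/\*.*?\*/", "", SOURCE, flags=re.S)
print("if (" in block_only)  # True -> the // comment leaks through

# Stripping line comments as well hides the commented-out code:
both = re.sub(r"//[^\n]*", "", block_only)
print("if (" in both)        # False
```

That would match the observation that `/* */` comments are safe while `//` comments break the build.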
| 1.0 | [bug]: RootComponents can't have // commented code in their files - [verbatim duplicate of the body above] | non_main | [normalized duplicate of the body above] | 0 |
1,123 | 4,990,295,085 | IssuesEvent | 2016-12-08 14:42:58 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | RFE/Feature Idea: os_object - be able to specify the amount of threads used in operations | affects_2.3 cloud feature_idea openstack waiting_on_maintainer |
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
openstack/os_object
##### ANSIBLE VERSION
Not relevant
##### CONFIGURATION
Not relevant
##### OS / ENVIRONMENT
Not relevant
##### SUMMARY
swiftclient has built-in parameters for splitting tasks across multiple threads to speed up operations, for example:
```
--object-threads=OBJECT_THREADS
Number of threads to use for uploading full objects.
Default is 10.
--segment-threads=SEGMENT_THREADS
Number of threads to use for uploading object
segments. Default is 10.
```
The os_object module does not expose these parameters; adding them would be a useful feature to speed up long operations involving many files.
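The win from multiple threads is plain concurrency across many objects. A minimal stdlib sketch of the idea (the `upload` function here is a stand-in, not the real swiftclient API):

```python
from concurrent.futures import ThreadPoolExecutor

def upload(name: str) -> str:
    # Stand-in for a per-object upload call (network I/O in reality).
    return f"uploaded {name}"

files = [f"huge_folder/file_{i}" for i in range(100)]

# Analogous to --object-threads=10: at most 10 uploads in flight at once.
with ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(upload, files))

assert len(results) == 100
```

An `object_threads` option on os_object could simply forward such a worker count to swiftclient.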
##### EXPECTED RESULTS
```
# Takes two minutes
- os_object:
cloud: mordred
state: present
name: huge_folder
container: destfolder
filename: huge_folder
object_threads: 100
```
##### ACTUAL RESULTS
```
# Takes two hours
- os_object:
cloud: mordred
state: present
name: huge_folder
container: destfolder
filename: huge_folder
```
37,363 | 8,369,339,321 | IssuesEvent | 2018-10-04 17:03:28 | gwaldron/osgearth | https://api.github.com/repos/gwaldron/osgearth | reopened | OSG 3.6.3: AnnotationUtils::createTextDrawable / osgText crash | defect |
Tested with latest osgearth master:
```
* thread #44, stop reason = EXC_BAD_ACCESS (code=EXC_I386_GPFLT)
libOpenThreadsd.21.dylib`OpenThreads::Atomic::operator--(this=0xbaddc0dedeadbebd) at Atomic.cpp:58
libosgTextd.158.dylib`osg::Referenced::unref(this=0xbaddc0dedeadbead) const at Referenced:173
libosgTextd.158.dylib`osg::ref_ptr<osgText::GlyphTexture>::~ref_ptr(this=0x000060000020a270) at ref_ptr:41
libosgTextd.158.dylib`osg::ref_ptr<osgText::GlyphTexture>::~ref_ptr(this=0x000060000020a270) at ref_ptr:41
libosgTextd.158.dylib`std::__1::__split_buffer<osg::ref_ptr<osgText::GlyphTexture>, std::__1::allocator<osg::ref_ptr<osgText::GlyphTexture> >&>::~__split_buffer() [inlined] std::__1::allocator<osg::ref_ptr<osgText::GlyphTexture> >::destroy(this=0x000060c0001c0368, __p=0x000060000020a270) at memory:1807
libosgTextd.158.dylib`std::__1::__split_buffer<osg::ref_ptr<osgText::GlyphTexture>, std::__1::allocator<osg::ref_ptr<osgText::GlyphTexture> >&>::~__split_buffer() [inlined] void std::__1::allocator_traits<std::__1::allocator<osg::ref_ptr<osgText::GlyphTexture> > >::__destroy<osg::ref_ptr<osgText::GlyphTexture> >(__a=0x000060c0001c0368, __p=0x000060000020a270) at memory:1680
libosgTextd.158.dylib`std::__1::__split_buffer<osg::ref_ptr<osgText::GlyphTexture>, std::__1::allocator<osg::ref_ptr<osgText::GlyphTexture> >&>::~__split_buffer() [inlined] void std::__1::allocator_traits<std::__1::allocator<osg::ref_ptr<osgText::GlyphTexture> > >::destroy<osg::ref_ptr<osgText::GlyphTexture> >(__a=0x000060c0001c0368, __p=0x000060000020a270) at memory:1548
libosgTextd.158.dylib`std::__1::__split_buffer<osg::ref_ptr<osgText::GlyphTexture>, std::__1::allocator<osg::ref_ptr<osgText::GlyphTexture> >&>::~__split_buffer() [inlined] std::__1::__split_buffer<osg::ref_ptr<osgText::GlyphTexture>, std::__1::allocator<osg::ref_ptr<osgText::GlyphTexture> >&>::__destruct_at_end(this=0x0000700017659d80, __new_last=0x0000000000000000) at __split_buffer:296
libosgTextd.158.dylib`std::__1::__split_buffer<osg::ref_ptr<osgText::GlyphTexture>, std::__1::allocator<osg::ref_ptr<osgText::GlyphTexture> >&>::~__split_buffer() [inlined] std::__1::__split_buffer<osg::ref_ptr<osgText::GlyphTexture>, std::__1::allocator<osg::ref_ptr<osgText::GlyphTexture> >&>::__destruct_at_end(this=0x0000700017659d80, __new_last=0x0000000000000000) at __split_buffer:141
libosgTextd.158.dylib`std::__1::__split_buffer<osg::ref_ptr<osgText::GlyphTexture>, std::__1::allocator<osg::ref_ptr<osgText::GlyphTexture> >&>::~__split_buffer() [inlined] std::__1::__split_buffer<osg::ref_ptr<osgText::GlyphTexture>, std::__1::allocator<osg::ref_ptr<osgText::GlyphTexture> >&>::clear(this=0x0000700017659d80) at __split_buffer:86
libosgTextd.158.dylib`std::__1::__split_buffer<osg::ref_ptr<osgText::GlyphTexture>, std::__1::allocator<osg::ref_ptr<osgText::GlyphTexture> >&>::~__split_buffer(this=0x0000700017659d80) at __split_buffer:341
libosgTextd.158.dylib`std::__1::__split_buffer<osg::ref_ptr<osgText::GlyphTexture>, std::__1::allocator<osg::ref_ptr<osgText::GlyphTexture> >&>::~__split_buffer(this=0x0000700017659d80) at __split_buffer:340
libosgTextd.158.dylib`void std::__1::vector<osg::ref_ptr<osgText::GlyphTexture>, std::__1::allocator<osg::ref_ptr<osgText::GlyphTexture> > >::__push_back_slow_path<osg::ref_ptr<osgText::GlyphTexture> >(this=0x000060c0001c0358 size=1, __x=0x0000700017659f58) at vector:1580
libosgTextd.158.dylib`osgText::Font::assignGlyphToGlyphTexture(osgText::Glyph*, osgText::ShaderTechnique) [inlined] std::__1::vector<osg::ref_ptr<osgText::GlyphTexture>, std::__1::allocator<osg::ref_ptr<osgText::GlyphTexture> > >::push_back(this=0x000060c0001c0358 size=1, __x=0x0000700017659f58) at vector:1616
libosgTextd.158.dylib`osgText::Font::assignGlyphToGlyphTexture(this=0x000060c0001c02d0, glyph=0x000000010de16650, shaderTechnique=ALL_FEATURES) at Font.cpp:479
libosgTextd.158.dylib`osgText::Glyph::getOrCreateTextureInfo(this=0x000000010de16650, technique=ALL_FEATURES) at Glyph.cpp:493
libosgTextd.158.dylib`osgText::Text::computeGlyphRepresentation(this=0x0000000152171550) at Text.cpp:649
libosgTextd.158.dylib`osgText::TextBase::setText(this=0x0000000152171550, text=0x000070001765add0) at TextBase.cpp:269
libosgTextd.158.dylib`osgText::TextBase::setText(this=0x0000000152171550, text="-", encoding=ENCODING_UTF8) at TextBase.cpp:279
libosgEarthAnnotationd.0.dylib`osgEarth::Annotation::AnnotationUtils::createTextDrawable(text="-", symbol=0x0000000172054a00, box=0x000070001765b158) at AnnotationUtils.cpp:92
libosgEarthAnnotationd.0.dylib`osgEarth::Annotation::PlaceNode::compile(this=0x0000000172042400) at PlaceNode.cpp:300
libosgEarthAnnotationd.0.dylib`osgEarth::Annotation::PlaceNode::setStyle(this=0x0000000172042400, style=0x000070001765b8a8, readOptions=0x000000013f6a9ad0) at PlaceNode.cpp:449
osgdb_osgearth_label_annotationd.so`AnnotationLabelSource::makePlaceNode(this=0x0000600001c89c90, context=0x00007000176603e0, feature=0x00000001537b4e00, style=0x000070001765b8a8) at AnnotationLabelSource.cpp:184
osgdb_osgearth_label_annotationd.so`AnnotationLabelSource::createNode(this=0x0000600001c89c90, input=size=1, style=0x000070001765d5e8, context=0x00007000176603e0) at AnnotationLabelSource.cpp:131
libosgEarthFeaturesd.0.dylib`osgEarth::Features::BuildTextFilter::push(this=0x000070001765d4a8, input=size=1, context=0x00007000176603e0) at BuildTextFilter.cpp:63
libosgEarthFeaturesd.0.dylib`osgEarth::Features::GeometryCompiler::compile(this=0x0000700017662200, workingSet=size=1, style=0x0000700017663200, context=0x0000700017662af0) at GeometryCompiler.cpp:473
libosgEarthFeaturesd.0.dylib`osgEarth::Features::GeometryCompiler::compile(this=0x0000700017662200, cursor=0x0000600001c89150, style=0x0000700017663200, context=0x0000700017662af0) at GeometryCompiler.cpp:219
libosgEarthFeaturesd.0.dylib`osgEarth::Features::GeomFeatureNodeFactory::createOrUpdateNode(this=0x000000014db3de70, features=0x0000600001c89150, style=0x0000700017663200, context=0x0000700017662af0, node=0x0000700017662678) at FeatureModelSource.cpp:393
libosgEarthFeaturesd.0.dylib`osgEarth::Features::FeatureModelGraph::createOrUpdateNode(this=0x000000015e1f2800, cursor=0x0000600001c89150, style=0x0000700017663200, context=0x0000700017662af0, readOptions=0x00000001388d4480, output=0x0000700017662678) at FeatureModelGraph.cpp:1270
libosgEarthFeaturesd.0.dylib`osgEarth::Features::FeatureModelGraph::createStyleGroup(this=0x000000015e1f2800, style=0x0000700017663200, workingSet=size=1, contextPrototype=0x0000700017663530, readOptions=0x00000001388d4480) at FeatureModelGraph.cpp:1453
libosgEarthFeaturesd.0.dylib`osgEarth::Features::FeatureModelGraph::queryAndSortIntoStyleGroups(this=0x000000015e1f2800, query=0x00007000176646d8, styleExpr=0x0000000152ff7880, index=0x000000010db48840, parent=0x00006080003ae700, readOptions=0x00000001388d4480, progress=0x00006080046c8180) at FeatureModelGraph.cpp:1399
libosgEarthFeaturesd.0.dylib`osgEarth::Features::FeatureModelGraph::build(this=0x000000015e1f2800, defaultStyle=0x0000700017665248, baseQuery=0x0000700017665540, workingExtent=0x0000700017665bf0, index=0x000000010db48840, readOptions=0x00000001388d4480, progress=0x00006080046c8180) at FeatureModelGraph.cpp:1212
libosgEarthFeaturesd.0.dylib`osgEarth::Features::FeatureModelGraph::buildTile(this=0x000000015e1f2800, level=0x0000700017665b40, extent=0x0000700017665bf0, key=0x0000700017665ac8, readOptions=0x00000001388d4480) at FeatureModelGraph.cpp:1064
libosgEarthFeaturesd.0.dylib`osgEarth::Features::FeatureModelGraph::load(this=0x000000015e1f2800, lod=1, tileX=1, tileY=1, uri="1_1_1.osgearth_pseudo_fmg", readOptions=0x00000001388d4480) at FeatureModelGraph.cpp:676
libosgEarthFeaturesd.0.dylib`osgEarthFeatureModelPseudoLoader::readNode(this=0x0000608000100bd0, uri="1_1_1.osgearth_pseudo_fmg", readOptions=0x00000001388d4480) const at FeatureModelGraph.cpp:190
```
90,993 | 8,288,048,134 | IssuesEvent | 2018-09-19 10:41:28 | humera987/HumTestData | https://api.github.com/repos/humera987/HumTestData | reopened | testing_hums : api_v1_alerts_get_query_param_sql_injection_postgres_pageSize | testing_hums |
Project : testing_hums
Job : UAT
Env : UAT
Region : FXLabs/US_WEST_1
Result : fail
Status Code : 200
Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Mon, 17 Sep 2018 10:20:12 GMT]}
Endpoint : http://13.56.210.25/api/v1/alerts?pageSize=
Request :
Response :
{
"requestId" : "None",
"requestTime" : "2018-09-17T10:20:12.613+0000",
"errors" : false,
"messages" : [ ],
"data" : [ {
"id" : "8a8080a565e5c6f90165e6140b5e3d3f",
"createdBy" : null,
"createdDate" : "2018-09-17T05:49:57.982+0000",
"modifiedBy" : null,
"modifiedDate" : "2018-09-17T05:49:57.982+0000",
"version" : null,
"inactive" : false,
"taskType" : "PROJECT_SYNC",
"taskState" : "ACTIVE",
"type" : "ERROR",
"status" : "UNREAD",
"refType" : "PROJECT",
"refId" : "8a8080a565e5c6f90165e5dc7d950089",
"refName" : "testproject917",
"subject" : "testproject917",
"message" : null,
"readDate" : null,
"healedDate" : null,
"users" : [ ],
"org" : {
"id" : "8a8080cf65e02c0f0165e031fb9e0003",
"createdBy" : null,
"createdDate" : null,
"modifiedBy" : null,
"modifiedDate" : null,
"version" : null,
"inactive" : false,
"name" : null
}
}, {
"id" : "8a8080a565e5c6f90165e607e43f3ccc",
"createdBy" : null,
"createdDate" : "2018-09-17T05:36:41.535+0000",
"modifiedBy" : null,
"modifiedDate" : "2018-09-17T05:36:41.535+0000",
"version" : null,
"inactive" : false,
"taskType" : "PROJECT_SYNC",
"taskState" : "ACTIVE",
"type" : "ERROR",
"status" : "UNREAD",
"refType" : "PROJECT",
"refId" : "8a8080a565e5c6f90165e5dc7d950089",
"refName" : "testproject917",
"subject" : "testproject917",
"message" : null,
"readDate" : null,
"healedDate" : null,
"users" : [ ],
"org" : {
"id" : "8a8080cf65e02c0f0165e031fb9e0003",
"createdBy" : null,
"createdDate" : null,
"modifiedBy" : null,
"modifiedDate" : null,
"version" : null,
"inactive" : false,
"name" : null
}
}, {
"id" : "8a8080a565e5c6f90165e6063dc13c59",
"createdBy" : null,
"createdDate" : "2018-09-17T05:34:53.377+0000",
"modifiedBy" : null,
"modifiedDate" : "2018-09-17T05:34:53.377+0000",
"version" : null,
"inactive" : false,
"taskType" : "PROJECT_SYNC",
"taskState" : "ACTIVE",
"type" : "ERROR",
"status" : "UNREAD",
"refType" : "PROJECT",
"refId" : "8a8080a565e5c6f90165e5dc7d950089",
"refName" : "testproject917",
"subject" : "testproject917",
"message" : null,
"readDate" : null,
"healedDate" : null,
"users" : [ ],
"org" : {
"id" : "8a8080cf65e02c0f0165e031fb9e0003",
"createdBy" : null,
"createdDate" : null,
"modifiedBy" : null,
"modifiedDate" : null,
"version" : null,
"inactive" : false,
"name" : null
}
}, {
"id" : "8a8080a565e5c6f90165e6031fb73bdf",
"createdBy" : null,
"createdDate" : "2018-09-17T05:31:29.079+0000",
"modifiedBy" : null,
"modifiedDate" : "2018-09-17T05:31:29.079+0000",
"version" : null,
"inactive" : false,
"taskType" : "PROJECT_SYNC",
"taskState" : "ACTIVE",
"type" : "ERROR",
"status" : "UNREAD",
"refType" : "PROJECT",
"refId" : "8a8080a565e5c6f90165e5dc7d950089",
"refName" : "testproject917",
"subject" : "testproject917",
"message" : null,
"readDate" : null,
"healedDate" : null,
"users" : [ ],
"org" : {
"id" : "8a8080cf65e02c0f0165e031fb9e0003",
"createdBy" : null,
"createdDate" : null,
"modifiedBy" : null,
"modifiedDate" : null,
"version" : null,
"inactive" : false,
"name" : null
}
}, {
"id" : "8a8080a565e5c6f90165e601e8863a32",
"createdBy" : null,
"createdDate" : "2018-09-17T05:30:09.414+0000",
"modifiedBy" : null,
"modifiedDate" : "2018-09-17T05:30:09.414+0000",
"version" : null,
"inactive" : false,
"taskType" : "PROJECT_SYNC",
"taskState" : "ACTIVE",
"type" : "INFO",
"status" : "UNREAD",
"refType" : "PROJECT",
"refId" : "8a8080a565e5c6f90165e5f71b49109b",
"refName" : "testing_hums",
"subject" : "testing_hums",
"message" : null,
"readDate" : null,
"healedDate" : null,
"users" : [ ],
"org" : {
"id" : "8a8080cf65e02c0f0165e031fb9e0003",
"createdBy" : null,
"createdDate" : null,
"modifiedBy" : null,
"modifiedDate" : null,
"version" : null,
"inactive" : false,
"name" : null
}
}, {
"id" : "8a8080a565e5c6f90165e601a70d38bf",
"createdBy" : null,
"createdDate" : "2018-09-17T05:29:52.653+0000",
"modifiedBy" : null,
"modifiedDate" : "2018-09-17T05:29:52.653+0000",
"version" : null,
"inactive" : false,
"taskType" : "PROJECT_SYNC",
"taskState" : "ACTIVE",
"type" : "ERROR",
"status" : "UNREAD",
"refType" : "PROJECT",
"refId" : "8a8080a565e5c6f90165e5dc7d950089",
"refName" : "testproject917",
"subject" : "testproject917",
"message" : null,
"readDate" : null,
"healedDate" : null,
"users" : [ ],
"org" : {
"id" : "8a8080cf65e02c0f0165e031fb9e0003",
"createdBy" : null,
"createdDate" : null,
"modifiedBy" : null,
"modifiedDate" : null,
"version" : null,
"inactive" : false,
"name" : null
}
}, {
"id" : "8a8080a565e5c6f90165e5fd64dd1e42",
"createdBy" : null,
"createdDate" : "2018-09-17T05:25:13.565+0000",
"modifiedBy" : null,
"modifiedDate" : "2018-09-17T05:25:13.565+0000",
"version" : null,
"inactive" : false,
"taskType" : "PROJECT_SYNC",
"taskState" : "ACTIVE",
"type" : "ERROR",
"status" : "UNREAD",
"refType" : "PROJECT",
"refId" : "8a8080a565e5c6f90165e5dc7d950089",
"refName" : "testproject917",
"subject" : "testproject917",
"message" : null,
"readDate" : null,
"healedDate" : null,
"users" : [ ],
"org" : {
"id" : "8a8080cf65e02c0f0165e031fb9e0003",
"createdBy" : null,
"createdDate" : null,
"modifiedBy" : null,
"modifiedDate" : null,
"version" : null,
"inactive" : false,
"name" : null
}
}, {
"id" : "8a8080a565e5c6f90165e5fced591c2c",
"createdBy" : null,
"createdDate" : "2018-09-17T05:24:42.969+0000",
"modifiedBy" : null,
"modifiedDate" : "2018-09-17T05:24:42.969+0000",
"version" : null,
"inactive" : false,
"taskType" : "PROJECT_SYNC",
"taskState" : "ACTIVE",
"type" : "ERROR",
"status" : "UNREAD",
"refType" : "PROJECT",
"refId" : "8a8080a565e5c6f90165e5dc7d950089",
"refName" : "testproject917",
"subject" : "testproject917",
"message" : null,
"readDate" : null,
"healedDate" : null,
"users" : [ ],
"org" : {
"id" : "8a8080cf65e02c0f0165e031fb9e0003",
"createdBy" : null,
"createdDate" : null,
"modifiedBy" : null,
"modifiedDate" : null,
"version" : null,
"inactive" : false,
"name" : null
}
}, {
"id" : "8a8080a565e5c6f90165e5f0e622106f",
"createdBy" : null,
"createdDate" : "2018-09-17T05:11:34.689+0000",
"modifiedBy" : null,
"modifiedDate" : "2018-09-17T05:11:34.689+0000",
"version" : null,
"inactive" : false,
"taskType" : "PROJECT_SYNC",
"taskState" : "ACTIVE",
"type" : "ERROR",
"status" : "UNREAD",
"refType" : "PROJECT",
"refId" : "8a8080a565e5c6f90165e5dc7d950089",
"refName" : "testproject917",
"subject" : "testproject917",
"message" : null,
"readDate" : null,
"healedDate" : null,
"users" : [ ],
"org" : {
"id" : "8a8080cf65e02c0f0165e031fb9e0003",
"createdBy" : null,
"createdDate" : null,
"modifiedBy" : null,
"modifiedDate" : null,
"version" : null,
"inactive" : false,
"name" : null
}
}, {
"id" : "8a8080a565e5c6f90165e5f07bb60ffc",
"createdBy" : null,
"createdDate" : "2018-09-17T05:11:07.446+0000",
"modifiedBy" : null,
"modifiedDate" : "2018-09-17T05:11:07.446+0000",
"version" : null,
"inactive" : false,
"taskType" : "PROJECT_SYNC",
"taskState" : "ACTIVE",
"type" : "ERROR",
"status" : "UNREAD",
"refType" : "PROJECT",
"refId" : "8a8080a565e5c6f90165e5dc7d950089",
"refName" : "testproject917",
"subject" : "testproject917",
"message" : null,
"readDate" : null,
"healedDate" : null,
"users" : [ ],
"org" : {
"id" : "8a8080cf65e02c0f0165e031fb9e0003",
"createdBy" : null,
"createdDate" : null,
"modifiedBy" : null,
"modifiedDate" : null,
"version" : null,
"inactive" : false,
"name" : null
}
}, {
"id" : "8a8080a565e5c6f90165e5ef6ff10f89",
"createdBy" : null,
"createdDate" : "2018-09-17T05:09:58.897+0000",
"modifiedBy" : null,
"modifiedDate" : "2018-09-17T05:09:58.897+0000",
"version" : null,
"inactive" : false,
"taskType" : "PROJECT_SYNC",
"taskState" : "ACTIVE",
"type" : "ERROR",
"status" : "UNREAD",
"refType" : "PROJECT",
"refId" : "8a8080a565e5c6f90165e5dc7d950089",
"refName" : "testproject917",
"subject" : "testproject917",
"message" : null,
"readDate" : null,
"healedDate" : null,
"users" : [ ],
"org" : {
"id" : "8a8080cf65e02c0f0165e031fb9e0003",
"createdBy" : null,
"createdDate" : null,
"modifiedBy" : null,
"modifiedDate" : null,
"version" : null,
"inactive" : false,
"name" : null
}
}, {
"id" : "8a8080a565e5c6f90165e5eb82d70dfc",
"createdBy" : null,
"createdDate" : "2018-09-17T05:05:41.591+0000",
"modifiedBy" : null,
"modifiedDate" : "2018-09-17T05:05:41.591+0000",
"version" : null,
"inactive" : false,
"taskType" : "PROJECT_SYNC",
"taskState" : "ACTIVE",
"type" : "ERROR",
"status" : "UNREAD",
"refType" : "PROJECT",
"refId" : "8a8080a565e5c6f90165e5dc7d950089",
"refName" : "testproject917",
"subject" : "testproject917",
"message" : null,
"readDate" : null,
"healedDate" : null,
"users" : [ ],
"org" : {
"id" : "8a8080cf65e02c0f0165e031fb9e0003",
"createdBy" : null,
"createdDate" : null,
"modifiedBy" : null,
"modifiedDate" : null,
"version" : null,
"inactive" : false,
"name" : null
}
}, {
"id" : "8a8080a565e5c6f90165e5e697aa0d83",
"createdBy" : null,
"createdDate" : "2018-09-17T05:00:19.242+0000",
"modifiedBy" : null,
"modifiedDate" : "2018-09-17T05:00:19.242+0000",
"version" : null,
"inactive" : false,
"taskType" : "PROJECT_SYNC",
"taskState" : "ACTIVE",
"type" : "ERROR",
"status" : "UNREAD",
"refType" : "PROJECT",
"refId" : "8a8080a565e5c6f90165e5dc7d950089",
"refName" : "testproject917",
"subject" : "testproject917",
"message" : null,
"readDate" : null,
"healedDate" : null,
"users" : [ ],
"org" : {
"id" : "8a8080cf65e02c0f0165e031fb9e0003",
"createdBy" : null,
"createdDate" : null,
"modifiedBy" : null,
"modifiedDate" : null,
"version" : null,
"inactive" : false,
"name" : null
}
}, {
"id" : "8a8080a565e5c6f90165e5e52ce70d10",
"createdBy" : null,
"createdDate" : "2018-09-17T04:58:46.375+0000",
"modifiedBy" : null,
"modifiedDate" : "2018-09-17T04:58:46.375+0000",
"version" : null,
"inactive" : false,
"taskType" : "PROJECT_SYNC",
"taskState" : "ACTIVE",
"type" : "ERROR",
"status" : "UNREAD",
"refType" : "PROJECT",
"refId" : "8a8080a565e5c6f90165e5dc7d950089",
"refName" : "testproject917",
"subject" : "testproject917",
"message" : null,
"readDate" : null,
"healedDate" : null,
"users" : [ ],
"org" : {
"id" : "8a8080cf65e02c0f0165e031fb9e0003",
"createdBy" : null,
"createdDate" : null,
"modifiedBy" : null,
"modifiedDate" : null,
"version" : null,
"inactive" : false,
"name" : null
}
}, {
"id" : "8a8080a565e5c6f90165e5e44e590c9d",
"createdBy" : null,
"createdDate" : "2018-09-17T04:57:49.401+0000",
"modifiedBy" : null,
"modifiedDate" : "2018-09-17T04:57:49.401+0000",
"version" : null,
"inactive" : false,
"taskType" : "PROJECT_SYNC",
"taskState" : "ACTIVE",
"type" : "ERROR",
"status" : "UNREAD",
"refType" : "PROJECT",
"refId" : "8a8080a565e5c6f90165e5dc7d950089",
"refName" : "testproject917",
"subject" : "testproject917",
"message" : null,
"readDate" : null,
"healedDate" : null,
"users" : [ ],
"org" : {
"id" : "8a8080cf65e02c0f0165e031fb9e0003",
"createdBy" : null,
"createdDate" : null,
"modifiedBy" : null,
"modifiedDate" : null,
"version" : null,
"inactive" : false,
"name" : null
}
} ],
"totalPages" : 1,
"totalElements" : 15
}
Logs :
Assertion [@StatusCode != 200] resolved-to [200 != 200] result [Failed]
--- FX Bot --- | 1.0 | testing_hums : api_v1_alerts_get_query_param_sql_injection_postgres_pageSize - Project : testing_hums
Job : UAT
Env : UAT
Region : FXLabs/US_WEST_1
Result : fail
Status Code : 200
Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Mon, 17 Sep 2018 10:20:12 GMT]}
Endpoint : http://13.56.210.25/api/v1/alerts?pageSize=
Request :
Response :
{
"requestId" : "None",
"requestTime" : "2018-09-17T10:20:12.613+0000",
"errors" : false,
"messages" : [ ],
"data" : [ {
"id" : "8a8080a565e5c6f90165e6140b5e3d3f",
"createdBy" : null,
"createdDate" : "2018-09-17T05:49:57.982+0000",
"modifiedBy" : null,
"modifiedDate" : "2018-09-17T05:49:57.982+0000",
"version" : null,
"inactive" : false,
"taskType" : "PROJECT_SYNC",
"taskState" : "ACTIVE",
"type" : "ERROR",
"status" : "UNREAD",
"refType" : "PROJECT",
"refId" : "8a8080a565e5c6f90165e5dc7d950089",
"refName" : "testproject917",
"subject" : "testproject917",
"message" : null,
"readDate" : null,
"healedDate" : null,
"users" : [ ],
"org" : {
"id" : "8a8080cf65e02c0f0165e031fb9e0003",
"createdBy" : null,
"createdDate" : null,
"modifiedBy" : null,
"modifiedDate" : null,
"version" : null,
"inactive" : false,
"name" : null
}
}, {
"id" : "8a8080a565e5c6f90165e607e43f3ccc",
"createdBy" : null,
"createdDate" : "2018-09-17T05:36:41.535+0000",
"modifiedBy" : null,
"modifiedDate" : "2018-09-17T05:36:41.535+0000",
"version" : null,
"inactive" : false,
"taskType" : "PROJECT_SYNC",
"taskState" : "ACTIVE",
"type" : "ERROR",
"status" : "UNREAD",
"refType" : "PROJECT",
"refId" : "8a8080a565e5c6f90165e5dc7d950089",
"refName" : "testproject917",
"subject" : "testproject917",
"message" : null,
"readDate" : null,
"healedDate" : null,
"users" : [ ],
"org" : {
"id" : "8a8080cf65e02c0f0165e031fb9e0003",
"createdBy" : null,
"createdDate" : null,
"modifiedBy" : null,
"modifiedDate" : null,
"version" : null,
"inactive" : false,
"name" : null
}
}, {
"id" : "8a8080a565e5c6f90165e6063dc13c59",
"createdBy" : null,
"createdDate" : "2018-09-17T05:34:53.377+0000",
"modifiedBy" : null,
"modifiedDate" : "2018-09-17T05:34:53.377+0000",
"version" : null,
"inactive" : false,
"taskType" : "PROJECT_SYNC",
"taskState" : "ACTIVE",
"type" : "ERROR",
"status" : "UNREAD",
"refType" : "PROJECT",
"refId" : "8a8080a565e5c6f90165e5dc7d950089",
"refName" : "testproject917",
"subject" : "testproject917",
"message" : null,
"readDate" : null,
"healedDate" : null,
"users" : [ ],
"org" : {
"id" : "8a8080cf65e02c0f0165e031fb9e0003",
"createdBy" : null,
"createdDate" : null,
"modifiedBy" : null,
"modifiedDate" : null,
"version" : null,
"inactive" : false,
"name" : null
}
}, {
"id" : "8a8080a565e5c6f90165e6031fb73bdf",
"createdBy" : null,
"createdDate" : "2018-09-17T05:31:29.079+0000",
"modifiedBy" : null,
"modifiedDate" : "2018-09-17T05:31:29.079+0000",
"version" : null,
"inactive" : false,
"taskType" : "PROJECT_SYNC",
"taskState" : "ACTIVE",
"type" : "ERROR",
"status" : "UNREAD",
"refType" : "PROJECT",
"refId" : "8a8080a565e5c6f90165e5dc7d950089",
"refName" : "testproject917",
"subject" : "testproject917",
"message" : null,
"readDate" : null,
"healedDate" : null,
"users" : [ ],
"org" : {
"id" : "8a8080cf65e02c0f0165e031fb9e0003",
"createdBy" : null,
"createdDate" : null,
"modifiedBy" : null,
"modifiedDate" : null,
"version" : null,
"inactive" : false,
"name" : null
}
}, {
"id" : "8a8080a565e5c6f90165e601e8863a32",
"createdBy" : null,
"createdDate" : "2018-09-17T05:30:09.414+0000",
"modifiedBy" : null,
"modifiedDate" : "2018-09-17T05:30:09.414+0000",
"version" : null,
"inactive" : false,
"taskType" : "PROJECT_SYNC",
"taskState" : "ACTIVE",
"type" : "INFO",
"status" : "UNREAD",
"refType" : "PROJECT",
"refId" : "8a8080a565e5c6f90165e5f71b49109b",
"refName" : "testing_hums",
"subject" : "testing_hums",
"message" : null,
"readDate" : null,
"healedDate" : null,
"users" : [ ],
"org" : {
"id" : "8a8080cf65e02c0f0165e031fb9e0003",
"createdBy" : null,
"createdDate" : null,
"modifiedBy" : null,
"modifiedDate" : null,
"version" : null,
"inactive" : false,
"name" : null
}
}, {
"id" : "8a8080a565e5c6f90165e601a70d38bf",
"createdBy" : null,
"createdDate" : "2018-09-17T05:29:52.653+0000",
"modifiedBy" : null,
"modifiedDate" : "2018-09-17T05:29:52.653+0000",
"version" : null,
"inactive" : false,
"taskType" : "PROJECT_SYNC",
"taskState" : "ACTIVE",
"type" : "ERROR",
"status" : "UNREAD",
"refType" : "PROJECT",
"refId" : "8a8080a565e5c6f90165e5dc7d950089",
"refName" : "testproject917",
"subject" : "testproject917",
"message" : null,
"readDate" : null,
"healedDate" : null,
"users" : [ ],
"org" : {
"id" : "8a8080cf65e02c0f0165e031fb9e0003",
"createdBy" : null,
"createdDate" : null,
"modifiedBy" : null,
"modifiedDate" : null,
"version" : null,
"inactive" : false,
"name" : null
}
}, {
"id" : "8a8080a565e5c6f90165e5fd64dd1e42",
"createdBy" : null,
"createdDate" : "2018-09-17T05:25:13.565+0000",
"modifiedBy" : null,
"modifiedDate" : "2018-09-17T05:25:13.565+0000",
"version" : null,
"inactive" : false,
"taskType" : "PROJECT_SYNC",
"taskState" : "ACTIVE",
"type" : "ERROR",
"status" : "UNREAD",
"refType" : "PROJECT",
"refId" : "8a8080a565e5c6f90165e5dc7d950089",
"refName" : "testproject917",
"subject" : "testproject917",
"message" : null,
"readDate" : null,
"healedDate" : null,
"users" : [ ],
"org" : {
"id" : "8a8080cf65e02c0f0165e031fb9e0003",
"createdBy" : null,
"createdDate" : null,
"modifiedBy" : null,
"modifiedDate" : null,
"version" : null,
"inactive" : false,
"name" : null
}
}, {
"id" : "8a8080a565e5c6f90165e5fced591c2c",
"createdBy" : null,
"createdDate" : "2018-09-17T05:24:42.969+0000",
"modifiedBy" : null,
"modifiedDate" : "2018-09-17T05:24:42.969+0000",
"version" : null,
"inactive" : false,
"taskType" : "PROJECT_SYNC",
"taskState" : "ACTIVE",
"type" : "ERROR",
"status" : "UNREAD",
"refType" : "PROJECT",
"refId" : "8a8080a565e5c6f90165e5dc7d950089",
"refName" : "testproject917",
"subject" : "testproject917",
"message" : null,
"readDate" : null,
"healedDate" : null,
"users" : [ ],
"org" : {
"id" : "8a8080cf65e02c0f0165e031fb9e0003",
"createdBy" : null,
"createdDate" : null,
"modifiedBy" : null,
"modifiedDate" : null,
"version" : null,
"inactive" : false,
"name" : null
}
}, {
"id" : "8a8080a565e5c6f90165e5f0e622106f",
"createdBy" : null,
"createdDate" : "2018-09-17T05:11:34.689+0000",
"modifiedBy" : null,
"modifiedDate" : "2018-09-17T05:11:34.689+0000",
"version" : null,
"inactive" : false,
"taskType" : "PROJECT_SYNC",
"taskState" : "ACTIVE",
"type" : "ERROR",
"status" : "UNREAD",
"refType" : "PROJECT",
"refId" : "8a8080a565e5c6f90165e5dc7d950089",
"refName" : "testproject917",
"subject" : "testproject917",
"message" : null,
"readDate" : null,
"healedDate" : null,
"users" : [ ],
"org" : {
"id" : "8a8080cf65e02c0f0165e031fb9e0003",
"createdBy" : null,
"createdDate" : null,
"modifiedBy" : null,
"modifiedDate" : null,
"version" : null,
"inactive" : false,
"name" : null
}
}, {
"id" : "8a8080a565e5c6f90165e5f07bb60ffc",
"createdBy" : null,
"createdDate" : "2018-09-17T05:11:07.446+0000",
"modifiedBy" : null,
"modifiedDate" : "2018-09-17T05:11:07.446+0000",
"version" : null,
"inactive" : false,
"taskType" : "PROJECT_SYNC",
"taskState" : "ACTIVE",
"type" : "ERROR",
"status" : "UNREAD",
"refType" : "PROJECT",
"refId" : "8a8080a565e5c6f90165e5dc7d950089",
"refName" : "testproject917",
"subject" : "testproject917",
"message" : null,
"readDate" : null,
"healedDate" : null,
"users" : [ ],
"org" : {
"id" : "8a8080cf65e02c0f0165e031fb9e0003",
"createdBy" : null,
"createdDate" : null,
"modifiedBy" : null,
"modifiedDate" : null,
"version" : null,
"inactive" : false,
"name" : null
}
}, {
"id" : "8a8080a565e5c6f90165e5ef6ff10f89",
"createdBy" : null,
"createdDate" : "2018-09-17T05:09:58.897+0000",
"modifiedBy" : null,
"modifiedDate" : "2018-09-17T05:09:58.897+0000",
"version" : null,
"inactive" : false,
"taskType" : "PROJECT_SYNC",
"taskState" : "ACTIVE",
"type" : "ERROR",
"status" : "UNREAD",
"refType" : "PROJECT",
"refId" : "8a8080a565e5c6f90165e5dc7d950089",
"refName" : "testproject917",
"subject" : "testproject917",
"message" : null,
"readDate" : null,
"healedDate" : null,
"users" : [ ],
"org" : {
"id" : "8a8080cf65e02c0f0165e031fb9e0003",
"createdBy" : null,
"createdDate" : null,
"modifiedBy" : null,
"modifiedDate" : null,
"version" : null,
"inactive" : false,
"name" : null
}
}, {
"id" : "8a8080a565e5c6f90165e5eb82d70dfc",
"createdBy" : null,
"createdDate" : "2018-09-17T05:05:41.591+0000",
"modifiedBy" : null,
"modifiedDate" : "2018-09-17T05:05:41.591+0000",
"version" : null,
"inactive" : false,
"taskType" : "PROJECT_SYNC",
"taskState" : "ACTIVE",
"type" : "ERROR",
"status" : "UNREAD",
"refType" : "PROJECT",
"refId" : "8a8080a565e5c6f90165e5dc7d950089",
"refName" : "testproject917",
"subject" : "testproject917",
"message" : null,
"readDate" : null,
"healedDate" : null,
"users" : [ ],
"org" : {
"id" : "8a8080cf65e02c0f0165e031fb9e0003",
"createdBy" : null,
"createdDate" : null,
"modifiedBy" : null,
"modifiedDate" : null,
"version" : null,
"inactive" : false,
"name" : null
}
}, {
"id" : "8a8080a565e5c6f90165e5e697aa0d83",
"createdBy" : null,
"createdDate" : "2018-09-17T05:00:19.242+0000",
"modifiedBy" : null,
"modifiedDate" : "2018-09-17T05:00:19.242+0000",
"version" : null,
"inactive" : false,
"taskType" : "PROJECT_SYNC",
"taskState" : "ACTIVE",
"type" : "ERROR",
"status" : "UNREAD",
"refType" : "PROJECT",
"refId" : "8a8080a565e5c6f90165e5dc7d950089",
"refName" : "testproject917",
"subject" : "testproject917",
"message" : null,
"readDate" : null,
"healedDate" : null,
"users" : [ ],
"org" : {
"id" : "8a8080cf65e02c0f0165e031fb9e0003",
"createdBy" : null,
"createdDate" : null,
"modifiedBy" : null,
"modifiedDate" : null,
"version" : null,
"inactive" : false,
"name" : null
}
}, {
"id" : "8a8080a565e5c6f90165e5e52ce70d10",
"createdBy" : null,
"createdDate" : "2018-09-17T04:58:46.375+0000",
"modifiedBy" : null,
"modifiedDate" : "2018-09-17T04:58:46.375+0000",
"version" : null,
"inactive" : false,
"taskType" : "PROJECT_SYNC",
"taskState" : "ACTIVE",
"type" : "ERROR",
"status" : "UNREAD",
"refType" : "PROJECT",
"refId" : "8a8080a565e5c6f90165e5dc7d950089",
"refName" : "testproject917",
"subject" : "testproject917",
"message" : null,
"readDate" : null,
"healedDate" : null,
"users" : [ ],
"org" : {
"id" : "8a8080cf65e02c0f0165e031fb9e0003",
"createdBy" : null,
"createdDate" : null,
"modifiedBy" : null,
"modifiedDate" : null,
"version" : null,
"inactive" : false,
"name" : null
}
}, {
"id" : "8a8080a565e5c6f90165e5e44e590c9d",
"createdBy" : null,
"createdDate" : "2018-09-17T04:57:49.401+0000",
"modifiedBy" : null,
"modifiedDate" : "2018-09-17T04:57:49.401+0000",
"version" : null,
"inactive" : false,
"taskType" : "PROJECT_SYNC",
"taskState" : "ACTIVE",
"type" : "ERROR",
"status" : "UNREAD",
"refType" : "PROJECT",
"refId" : "8a8080a565e5c6f90165e5dc7d950089",
"refName" : "testproject917",
"subject" : "testproject917",
"message" : null,
"readDate" : null,
"healedDate" : null,
"users" : [ ],
"org" : {
"id" : "8a8080cf65e02c0f0165e031fb9e0003",
"createdBy" : null,
"createdDate" : null,
"modifiedBy" : null,
"modifiedDate" : null,
"version" : null,
"inactive" : false,
"name" : null
}
} ],
"totalPages" : 1,
"totalElements" : 15
}
Logs :
Assertion [@StatusCode != 200] resolved-to [200 != 200] result [Failed]
--- FX Bot --- | non_main | testing hums api alerts get query param sql injection postgres pagesize project testing hums job uat env uat region fxlabs us west result fail status code headers x content type options x xss protection cache control pragma expires x frame options content type transfer encoding date endpoint request response requestid none requesttime errors false messages data id createdby null createddate modifiedby null modifieddate version null inactive false tasktype project sync taskstate active type error status unread reftype project refid refname subject message null readdate null healeddate null users org id createdby null createddate null modifiedby null modifieddate null version null inactive false name null id createdby null createddate modifiedby null modifieddate version null inactive false tasktype project sync taskstate active type error status unread reftype project refid refname subject message null readdate null healeddate null users org id createdby null createddate null modifiedby null modifieddate null version null inactive false name null id createdby null createddate modifiedby null modifieddate version null inactive false tasktype project sync taskstate active type error status unread reftype project refid refname subject message null readdate null healeddate null users org id createdby null createddate null modifiedby null modifieddate null version null inactive false name null id createdby null createddate modifiedby null modifieddate version null inactive false tasktype project sync taskstate active type error status unread reftype project refid refname subject message null readdate null healeddate null users org id createdby null createddate null modifiedby null modifieddate null version null inactive false name null id createdby null createddate modifiedby null modifieddate version null inactive false tasktype project sync taskstate active type info status unread reftype project refid refname testing hums subject testing 
hums message null readdate null healeddate null users org id createdby null createddate null modifiedby null modifieddate null version null inactive false name null id createdby null createddate modifiedby null modifieddate version null inactive false tasktype project sync taskstate active type error status unread reftype project refid refname subject message null readdate null healeddate null users org id createdby null createddate null modifiedby null modifieddate null version null inactive false name null id createdby null createddate modifiedby null modifieddate version null inactive false tasktype project sync taskstate active type error status unread reftype project refid refname subject message null readdate null healeddate null users org id createdby null createddate null modifiedby null modifieddate null version null inactive false name null id createdby null createddate modifiedby null modifieddate version null inactive false tasktype project sync taskstate active type error status unread reftype project refid refname subject message null readdate null healeddate null users org id createdby null createddate null modifiedby null modifieddate null version null inactive false name null id createdby null createddate modifiedby null modifieddate version null inactive false tasktype project sync taskstate active type error status unread reftype project refid refname subject message null readdate null healeddate null users org id createdby null createddate null modifiedby null modifieddate null version null inactive false name null id createdby null createddate modifiedby null modifieddate version null inactive false tasktype project sync taskstate active type error status unread reftype project refid refname subject message null readdate null healeddate null users org id createdby null createddate null modifiedby null modifieddate null version null inactive false name null id createdby null createddate modifiedby null modifieddate version null inactive false 
tasktype project sync taskstate active type error status unread reftype project refid refname subject message null readdate null healeddate null users org id createdby null createddate null modifiedby null modifieddate null version null inactive false name null id createdby null createddate modifiedby null modifieddate version null inactive false tasktype project sync taskstate active type error status unread reftype project refid refname subject message null readdate null healeddate null users org id createdby null createddate null modifiedby null modifieddate null version null inactive false name null id createdby null createddate modifiedby null modifieddate version null inactive false tasktype project sync taskstate active type error status unread reftype project refid refname subject message null readdate null healeddate null users org id createdby null createddate null modifiedby null modifieddate null version null inactive false name null id createdby null createddate modifiedby null modifieddate version null inactive false tasktype project sync taskstate active type error status unread reftype project refid refname subject message null readdate null healeddate null users org id createdby null createddate null modifiedby null modifieddate null version null inactive false name null id createdby null createddate modifiedby null modifieddate version null inactive false tasktype project sync taskstate active type error status unread reftype project refid refname subject message null readdate null healeddate null users org id createdby null createddate null modifiedby null modifieddate null version null inactive false name null totalpages totalelements logs assertion resolved to result fx bot | 0 |
182,652 | 21,673,914,122 | IssuesEvent | 2022-05-08 12:04:05 | turkdevops/vscode | https://api.github.com/repos/turkdevops/vscode | closed | CVE-2022-0654 (High) detected in requestretry-4.0.0.tgz - autoclosed | security vulnerability | ## CVE-2022-0654 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>requestretry-4.0.0.tgz</b></p></summary>
<p>request-retry wrap nodejs request to retry http(s) requests in case of error</p>
<p>Library home page: <a href="https://registry.npmjs.org/requestretry/-/requestretry-4.0.0.tgz">https://registry.npmjs.org/requestretry/-/requestretry-4.0.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/requestretry/package.json</p>
<p>
Dependency Hierarchy:
- gulp-remote-retry-src-0.6.0.tgz (Root Library)
- :x: **requestretry-4.0.0.tgz** (Vulnerable Library)
<p>Found in base branch: <b>webview-views</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Exposure of Sensitive Information to an Unauthorized Actor in GitHub repository fgribreau/node-request-retry prior to 7.0.0.
<p>Publish Date: 2022-02-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0654>CVE-2022-0654</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0654">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0654</a></p>
<p>Release Date: 2022-02-23</p>
<p>Fix Resolution (requestretry): 7.0.0</p>
<p>Direct dependency fix Resolution (gulp-remote-retry-src): 0.8.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2022-0654 (High) detected in requestretry-4.0.0.tgz - autoclosed - ## CVE-2022-0654 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>requestretry-4.0.0.tgz</b></p></summary>
<p>request-retry wrap nodejs request to retry http(s) requests in case of error</p>
<p>Library home page: <a href="https://registry.npmjs.org/requestretry/-/requestretry-4.0.0.tgz">https://registry.npmjs.org/requestretry/-/requestretry-4.0.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/requestretry/package.json</p>
<p>
Dependency Hierarchy:
- gulp-remote-retry-src-0.6.0.tgz (Root Library)
- :x: **requestretry-4.0.0.tgz** (Vulnerable Library)
<p>Found in base branch: <b>webview-views</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Exposure of Sensitive Information to an Unauthorized Actor in GitHub repository fgribreau/node-request-retry prior to 7.0.0.
<p>Publish Date: 2022-02-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0654>CVE-2022-0654</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0654">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0654</a></p>
<p>Release Date: 2022-02-23</p>
<p>Fix Resolution (requestretry): 7.0.0</p>
<p>Direct dependency fix Resolution (gulp-remote-retry-src): 0.8.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve high detected in requestretry tgz autoclosed cve high severity vulnerability vulnerable library requestretry tgz request retry wrap nodejs request to retry http s requests in case of error library home page a href path to dependency file package json path to vulnerable library node modules requestretry package json dependency hierarchy gulp remote retry src tgz root library x requestretry tgz vulnerable library found in base branch webview views vulnerability details exposure of sensitive information to an unauthorized actor in github repository fgribreau node request retry prior to publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution requestretry direct dependency fix resolution gulp remote retry src step up your open source security game with whitesource | 0 |
96,619 | 20,035,268,811 | IssuesEvent | 2022-02-02 11:11:11 | joomla/joomla-cms | https://api.github.com/repos/joomla/joomla-cms | closed | CAPTCHA - reCAPTCHA | No Code Attached Yet | ### Steps to reproduce the issue
1. Enable CAPTCHA - reCAPTCHA (input site key and secret key) - everything is working normally
2. BUT when you decide to disable the CAPTCHA - reCAPTCHA even if the plugin is disabled - you cannot save it unless you put any character on the site key and secret key
### Expected result
You should be able to save it if the plugin is disabled (But in this case you need to put any character before you can save it)
### Actual result
But in this case, you need to put any character before you can save it
### System information (as much as possible)
Joomla 4.0.6
### Additional comments
| 1.0 | CAPTCHA - reCAPTCHA - ### Steps to reproduce the issue
1. Enable CAPTCHA - reCAPTCHA (input site key and secret key) - everything is working normally
2. BUT when you decide to disable the CAPTCHA - reCAPTCHA even if the plugin is disabled - you cannot save it unless you put any character on the site key and secret key
### Expected result
You should be able to save it if the plugin is disabled (But in this case you need to put any character before you can save it)
### Actual result
But in this case, you need to put any character before you can save it
### System information (as much as possible)
Joomla 4.0.6
### Additional comments
| non_main | captcha recaptcha steps to reproduce the issue enable captcha recaptcha input site key and secret key everything is working normally but when you decide to disable the captcha recaptcha even if the plugin is disabled you cannot save it unless you put any character on the site key and secret key expected result you should be able to save it if the plugin is disabled but in this case you need to put any character before you can save it actual result but in this case you need to put any character before you can save it system information as much as possible joomla additional comments | 0 |
762,591 | 26,724,662,918 | IssuesEvent | 2023-01-29 15:12:25 | azerothcore/azerothcore-wotlk | https://api.github.com/repos/azerothcore/azerothcore-wotlk | closed | Giant Yeti are pickpocketable | Confirmed 30-39 Priority-Low Loot Good first issue | https://github.com/chromiecraft/chromiecraft/issues/4910
### What client do you play on?
enUS
### Faction
Both
### Content Phase:
30-39
### Current Behaviour
Giant Yeti in Alterac Mountains can be pickpocketed
https://user-images.githubusercontent.com/11332559/215287067-c2e34b48-1239-4a01-a26d-25d61ea8d5a0.mp4
### Expected Blizzlike Behaviour
They should not be able to be pickpocketed
### Source
Wrath Classic
https://user-images.githubusercontent.com/11332559/215287077-462808df-f33b-467e-87ee-8ae366831644.mp4
wowhead page with no pickpocket loot section
https://www.wowhead.com/wotlk/npc=2251/giant-yeti
### Steps to reproduce the problem
.learn 1784
.learn 921
.tele alteracmountains
### Extra Notes
https://wowgaming.altervista.org/aowow/?npc=2251
### AC rev. hash/commit
https://github.com/chromiecraft/azerothcore-wotlk/commit/3fee40be7dac90ca99f73e6ae809b18ed7135ef6
### Operating system
Ubuntu 20.04
### Modules
- [mod-ah-bot](https://github.com/azerothcore/mod-ah-bot)
- [mod-bg-item-reward](https://github.com/azerothcore/mod-bg-item-reward)
- [mod-cfbg](https://github.com/azerothcore/mod-cfbg)
- [mod-chat-transmitter](https://github.com/azerothcore/mod-chat-transmitter)
- [mod-chromie-xp](https://github.com/azerothcore/mod-chromie-xp)
- [mod-cta-switch](https://github.com/azerothcore/mod-cta-switch)
- [mod-desertion-warnings](https://github.com/azerothcore/mod-desertion-warnings)
- [mod-duel-reset](https://github.com/azerothcore/mod-duel-reset)
- [mod-eluna](https://github.com/azerothcore/mod-eluna)
- [mod-ip-tracker](https://github.com/azerothcore/mod-ip-tracker)
- [mod-low-level-arena](https://github.com/azerothcore/mod-low-level-arena)
- [mod-low-level-rbg](https://github.com/azerothcore/mod-low-level-rbg)
- [mod-multi-client-check](https://github.com/azerothcore/mod-multi-client-check)
- [mod-progression-system](https://github.com/azerothcore/mod-progression-system)
- [mod-pvp-titles](https://github.com/azerothcore/mod-pvp-titles)
- [mod-pvpstats-announcer](https://github.com/azerothcore/mod-pvpstats-announcer)
- [mod-queue-list-cache](https://github.com/azerothcore/mod-queue-list-cache)
- [mod-rdf-expansion](https://github.com/azerothcore/mod-rdf-expansion)
- [mod-transmog](https://github.com/azerothcore/mod-transmog)
- [mod-weekend-xp](https://github.com/azerothcore/mod-weekend-xp)
- [mod-instanced-worldbosses](https://github.com/nyeriah/mod-instanced-worldbosses)
- [mod-zone-difficulty](https://github.com/azerothcore/mod-zone-difficulty)
- [lua-carbon-copy](https://github.com/55Honey/Acore_CarbonCopy)
- [lua-exchange-npc](https://github.com/55Honey/Acore_ExchangeNpc)
- [lua-event-scripts](https://github.com/55Honey/Acore_eventScripts)
- [lua-level-up-reward](https://github.com/55Honey/Acore_LevelUpReward)
- [lua-recruit-a-friend](https://github.com/55Honey/Acore_RecruitAFriend)
- [lua-send-and-bind](https://github.com/55Honey/Acore_SendAndBind)
- [lua-temp-announcements](https://github.com/55Honey/Acore_TempAnnouncements)
- [lua-zonecheck](https://github.com/55Honey/acore_Zonecheck)
### Customizations
None
### Server
ChromieCraft
| 1.0 | Giant Yeti are pickpocketable - https://github.com/chromiecraft/chromiecraft/issues/4910
### What client do you play on?
enUS
### Faction
Both
### Content Phase:
30-39
### Current Behaviour
Giant Yeti in Alterac Mountains can be pickpocketed
https://user-images.githubusercontent.com/11332559/215287067-c2e34b48-1239-4a01-a26d-25d61ea8d5a0.mp4
### Expected Blizzlike Behaviour
They should not be able to be pickpocketed
### Source
Wrath Classic
https://user-images.githubusercontent.com/11332559/215287077-462808df-f33b-467e-87ee-8ae366831644.mp4
wowhead page with no pickpocket loot section
https://www.wowhead.com/wotlk/npc=2251/giant-yeti
### Steps to reproduce the problem
.learn 1784
.learn 921
.tele alteracmountains
### Extra Notes
https://wowgaming.altervista.org/aowow/?npc=2251
### AC rev. hash/commit
https://github.com/chromiecraft/azerothcore-wotlk/commit/3fee40be7dac90ca99f73e6ae809b18ed7135ef6
### Operating system
Ubuntu 20.04
### Modules
- [mod-ah-bot](https://github.com/azerothcore/mod-ah-bot)
- [mod-bg-item-reward](https://github.com/azerothcore/mod-bg-item-reward)
- [mod-cfbg](https://github.com/azerothcore/mod-cfbg)
- [mod-chat-transmitter](https://github.com/azerothcore/mod-chat-transmitter)
- [mod-chromie-xp](https://github.com/azerothcore/mod-chromie-xp)
- [mod-cta-switch](https://github.com/azerothcore/mod-cta-switch)
- [mod-desertion-warnings](https://github.com/azerothcore/mod-desertion-warnings)
- [mod-duel-reset](https://github.com/azerothcore/mod-duel-reset)
- [mod-eluna](https://github.com/azerothcore/mod-eluna)
- [mod-ip-tracker](https://github.com/azerothcore/mod-ip-tracker)
- [mod-low-level-arena](https://github.com/azerothcore/mod-low-level-arena)
- [mod-low-level-rbg](https://github.com/azerothcore/mod-low-level-rbg)
- [mod-multi-client-check](https://github.com/azerothcore/mod-multi-client-check)
- [mod-progression-system](https://github.com/azerothcore/mod-progression-system)
- [mod-pvp-titles](https://github.com/azerothcore/mod-pvp-titles)
- [mod-pvpstats-announcer](https://github.com/azerothcore/mod-pvpstats-announcer)
- [mod-queue-list-cache](https://github.com/azerothcore/mod-queue-list-cache)
- [mod-rdf-expansion](https://github.com/azerothcore/mod-rdf-expansion)
- [mod-transmog](https://github.com/azerothcore/mod-transmog)
- [mod-weekend-xp](https://github.com/azerothcore/mod-weekend-xp)
- [mod-instanced-worldbosses](https://github.com/nyeriah/mod-instanced-worldbosses)
- [mod-zone-difficulty](https://github.com/azerothcore/mod-zone-difficulty)
- [lua-carbon-copy](https://github.com/55Honey/Acore_CarbonCopy)
- [lua-exchange-npc](https://github.com/55Honey/Acore_ExchangeNpc)
- [lua-event-scripts](https://github.com/55Honey/Acore_eventScripts)
- [lua-level-up-reward](https://github.com/55Honey/Acore_LevelUpReward)
- [lua-recruit-a-friend](https://github.com/55Honey/Acore_RecruitAFriend)
- [lua-send-and-bind](https://github.com/55Honey/Acore_SendAndBind)
- [lua-temp-announcements](https://github.com/55Honey/Acore_TempAnnouncements)
- [lua-zonecheck](https://github.com/55Honey/acore_Zonecheck)
### Customizations
None
### Server
ChromieCraft
| non_main | giant yeti are pickpocketable what client do you play on enus faction both content phase current behaviour giant yeti in alterac mountains can be pickpocketed expected blizzlike behaviour they should not be able to be pickpocketed source wrath classic wowhead page with no pickpocket loot section steps to reproduce the problem learn learn tele alteracmountains extra notes ac rev hash commit operating system ubuntu modules customizations none server chromiecraft | 0 |
1,977 | 6,694,175,665 | IssuesEvent | 2017-10-10 00:06:26 | duckduckgo/zeroclickinfo-spice | https://api.github.com/repos/duckduckgo/zeroclickinfo-spice | closed | Images: size filter | Maintainer Input Requested Type: Question | Hi,
I just have a quick question: when I choose to filter the images by size, if I choose "Large", do I also see the "Extra-large" images? Or just the ones that match what you have set as "large"?
Thanks
------
IA Page: http://duck.co/ia/view/images
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @jagtalon | True | Images: size filter - Hi,
I just have a quick question: when I choose to filter the images by size, if I choose "Large", do I also see the "Extra-large" images? Or just the ones that match what you have set as "large"?
Thanks
------
IA Page: http://duck.co/ia/view/images
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @jagtalon | main | images size filter hi i just have a quick question when i choose to filter the images by size if i choose large do i also see the extra large images or just the ones that match what you have set as large thanks ia page jagtalon | 1 |
397,286 | 11,726,297,149 | IssuesEvent | 2020-03-10 14:19:05 | kubernetes/minikube | https://api.github.com/repos/kubernetes/minikube | closed | Error getting primary cp: could not find master node | co/multinode kind/bug priority/important-longterm | I am on this Mac ...
```
$ sw_vers
ProductName: Mac OS X
ProductVersion: 10.15.2
BuildVersion: 19C57
```
I have installed minikube thus ...
```
$ brew install minikube
```
Here's the version of minikube installed ...
```
$ minikube version
minikube version: v1.8.1
commit: cbda04cf6bbe65e987ae52bb393c10099ab62014
```
I try to start minikube and face issue ...
```
$ minikube start
😄 minikube v1.8.1 on Darwin 10.15.2
✨ Automatically selected the hyperkit driver. Other choices: virtualbox, docker
💣 Error getting primary cp: could not find master node
😿 minikube is exiting due to an error. If the above message is not useful, open an issue:
👉 https://github.com/kubernetes/minikube/issues/new/choose
```
| 1.0 | Error getting primary cp: could not find master node - I am on this Mac ...
```
$ sw_vers
ProductName: Mac OS X
ProductVersion: 10.15.2
BuildVersion: 19C57
```
I have installed minikube thus ...
```
$ brew install minikube
```
Here's the version of minikube installed ...
```
$ minikube version
minikube version: v1.8.1
commit: cbda04cf6bbe65e987ae52bb393c10099ab62014
```
I try to start minikube and face issue ...
```
$ minikube start
😄 minikube v1.8.1 on Darwin 10.15.2
✨ Automatically selected the hyperkit driver. Other choices: virtualbox, docker
💣 Error getting primary cp: could not find master node
😿 minikube is exiting due to an error. If the above message is not useful, open an issue:
👉 https://github.com/kubernetes/minikube/issues/new/choose
```
| non_main | error getting primary cp could not find master node i am on this mac sw vers productname mac os x productversion buildversion i have installed minikube thus brew install minikube here s the version of minikube installed minikube version minikube version commit i try to start minikube and face issue minikube start 😄 minikube on darwin ✨ automatically selected the hyperkit driver other choices virtualbox docker 💣 error getting primary cp could not find master node 😿 minikube is exiting due to an error if the above message is not useful open an issue 👉 | 0 |
2,005 | 6,718,164,096 | IssuesEvent | 2017-10-15 09:03:40 | Kristinita/Erics-Green-Room | https://api.github.com/repos/Kristinita/Erics-Green-Room | closed | [Bug] Incorrect display of the second and subsequent sources during playback | bug need-maintainer | ### 1. Request
It would be nice if the second and subsequent sources were displayed correctly during playback.
### 2. Example question
```markdown
https://i.imgur.com/zp26rLL.png Паприка богата содержанием ЭТОГО алкалоида*Капсаицин*-info-Паприка — красный стручковый перец*-proof-http://dic.academic.ru/dic.nsf/bse/118379/Паприка *-proof-http://dic.academic.ru/dic.nsf/dic_fwords/25743/ПАПРИКА *-proof-193—4
```
### 3. Desired behavior
```markdown
[3:29:28 PM] <орнитоптера_Королевы_Александры> ф
[3:29:28 PM] <VIOLET> Нет, не 'ф'
[3:29:28 PM] <VIOLET> Ответ: "Капсаицин"
[3:29:28 PM] <VIOLET> Комментарии: Паприка — красный стручковый перец
[3:29:28 PM] <VIOLET> Источники: 193—4 , http://dic.academic.ru/dic.nsf/dic_fwords/25743/ПАПРИКА
```
### 4. Actual behavior
```markdown
[3:29:28 PM] <орнитоптера_Королевы_Александры> ф
[3:29:28 PM] <VIOLET> Нет, не 'ф'
[3:29:28 PM] <VIOLET> Правильный ответ: "Капсаицин"
[3:29:28 PM] <VIOLET> Ещё варианты ответов: '-proof-http://dic.academic.ru/dic.nsf/dic_fwords/25743/ПАПРИКА'
[3:29:28 PM] <VIOLET> Комментарий: Паприка — красный стручковый перец
[3:29:28 PM] <VIOLET> Источник: 193—4
```
Thanks. | True | [Bug] Incorrect display of the second and subsequent sources during playback - ### 1. Request
It would be nice if the second and subsequent sources were displayed correctly during playback.
### 2. Example question
```markdown
https://i.imgur.com/zp26rLL.png Паприка богата содержанием ЭТОГО алкалоида*Капсаицин*-info-Паприка — красный стручковый перец*-proof-http://dic.academic.ru/dic.nsf/bse/118379/Паприка *-proof-http://dic.academic.ru/dic.nsf/dic_fwords/25743/ПАПРИКА *-proof-193—4
```
### 3. Desired behavior
```markdown
[3:29:28 PM] <орнитоптера_Королевы_Александры> ф
[3:29:28 PM] <VIOLET> Нет, не 'ф'
[3:29:28 PM] <VIOLET> Ответ: "Капсаицин"
[3:29:28 PM] <VIOLET> Комментарии: Паприка — красный стручковый перец
[3:29:28 PM] <VIOLET> Источники: 193—4 , http://dic.academic.ru/dic.nsf/dic_fwords/25743/ПАПРИКА
```
### 4. Actual behavior
```markdown
[3:29:28 PM] <орнитоптера_Королевы_Александры> ф
[3:29:28 PM] <VIOLET> Нет, не 'ф'
[3:29:28 PM] <VIOLET> Правильный ответ: "Капсаицин"
[3:29:28 PM] <VIOLET> Ещё варианты ответов: '-proof-http://dic.academic.ru/dic.nsf/dic_fwords/25743/ПАПРИКА'
[3:29:28 PM] <VIOLET> Комментарий: Паприка — красный стручковый перец
[3:29:28 PM] <VIOLET> Источник: 193—4
```
Спасибо. | main | некорректное отображение при отыгрыше второго и последующего источников запрос неплохо было бы если при отыгрыше второй и последующий источники отображались бы корректно пример вопроса markdown паприка богата содержанием этого алкалоида капсаицин info паприка — красный стручковый перец proof proof proof — желаемое поведение markdown ф нет не ф ответ капсаицин комментарии паприка — красный стручковый перец источники — актуальное поведение markdown ф нет не ф правильный ответ капсаицин ещё варианты ответов proof комментарий паприка — красный стручковый перец источник — спасибо | 1 |
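The question format above packs the answer, an `-info-` comment, and multiple `-proof-` sources into a single `*`-separated line. A minimal sketch of collecting every `-proof-` segment (an illustrative, hypothetical parser — not the bot's real one):

```python
def parse_question(line):
    # Illustrative sketch: fields are separated by "*"; "-info-" marks the
    # comment and each "-proof-" marks one source. All sources are collected
    # into a list instead of treating the second one as an extra answer.
    parts = [p.strip() for p in line.split("*")]
    question, answer = parts[0], parts[1]
    info = None
    proofs = []
    for part in parts[2:]:
        if part.startswith("-info-"):
            info = part[len("-info-"):]
        elif part.startswith("-proof-"):
            proofs.append(part[len("-proof-"):])
    return question, answer, info, proofs
```

Keeping every `-proof-` segment in one list is what lets the bot print all sources on a single "Sources:" line, rather than misreading the second source as another answer variant.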
246,749 | 7,895,629,771 | IssuesEvent | 2018-06-29 04:39:42 | aowen87/BAR | https://api.github.com/repos/aowen87/BAR | closed | xrayimage query distorts the image when the image width and height are not equal | Likelihood: 3 - Occasional OS: All Priority: Normal Severity: 4 - Crash / Wrong Results Support Group: Any bug version: 2.6.0 | If you take any of the examples in the xrayimage.py tests and change the image height to 600, the image normal image will be stretched to fill the 600 height.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. The following information
could not be accurately captured in the new ticket:
Original author: Eric Brugger
Original creation: 01/18/2013 01:41 pm
Original update: 01/29/2013 05:20 pm
Ticket number: 1316 | 1.0 | xrayimage query distorts the image when the image width and height are not equal - If you take any of the examples in the xrayimage.py tests and change the image height to 600, the image normal image will be stretched to fill the 600 height.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. The following information
could not be accurately captured in the new ticket:
Original author: Eric Brugger
Original creation: 01/18/2013 01:41 pm
Original update: 01/29/2013 05:20 pm
Ticket number: 1316 | non_main | xrayimage query distorts the image when the image width and height are not equal if you take any of the examples in the xrayimage py tests and change the image height to the image normal image will be stretched to fill the height redmine migration this ticket was migrated from redmine the following information could not be accurately captured in the new ticket original author eric brugger original creation pm original update pm ticket number | 0 |
581,279 | 17,290,262,192 | IssuesEvent | 2021-07-24 15:43:36 | GRIS-UdeM/ControlGris | https://api.github.com/repos/GRIS-UdeM/ControlGris | closed | Fix the Y values that are inverted relative to SpatGRIS3 | High priority bug | V120
I'm tagging a version number, but I believe this error has existed for a very long time. The Y values in ControlGRIS are inverted relative to SpatGRIS. A value of -1.00 in ControlGRIS corresponds to a value of +1.00 in SpatGRIS. It is high time to fix this. | 1.0 | Fix the Y values that are inverted relative to SpatGRIS3 - V120
I'm tagging a version number, but I believe this error has existed for a very long time. The Y values in ControlGRIS are inverted relative to SpatGRIS. A value of -1.00 in ControlGRIS corresponds to a value of +1.00 in SpatGRIS. It is high time to fix this. | non_main | corriger les valeurs de y qui sont inversées par rapport à je mets un no de version mais je crois que cette erreur existe depuis très longtemps les valeurs de y sont inversées dans controlgris par rapport à spatgris une valeur de dans controlgris correspond à une valeur de dans spatgris il est grand temps de corriger cela | 0 |
3,705 | 15,169,758,911 | IssuesEvent | 2021-02-12 21:46:08 | backdrop-ops/contrib | https://api.github.com/repos/backdrop-ops/contrib | closed | Application to join: JSitter | Maintainer application | Hello and welcome to the contrib application process! We're happy to have you :)
## Please note these 3 requirements for new contrib projects:
- [x] Include a README.md file containing license and maintainer information.
You can use this example: https://raw.githubusercontent.com/backdrop-ops/contrib/master/examples/README.md
- [x] Include a LICENSE.txt file.
You can use this example: https://raw.githubusercontent.com/backdrop-ops/contrib/master/examples/LICENSE.txt.
- [x] N/A If porting a Drupal 7 project, Maintain the Git history from Drupal.
## Please provide the following information:
**The name of your module, theme, or layout**
JWT
**Post a link to your new Backdrop project under your own GitHub account (option #1)**
https://github.com/jsitter/jwt
**If you have chosen option #2 or #1 above, do you agree to the [Backdrop Contributed Project Agreement](https://github.com/backdrop-ops/contrib#backdrop-contributed-project-agreement)**
YES
<!-- Once we have a chance to review your project, we will check for the 3 requirements at the top of this issue. If those requirements are met, you will be invited to the @backdrop-contrib group. At that point you will be able to transfer the project. -->
<!-- Please note that we may also include additional feedback in the code review, but anything else is only intended to be helpful, and is NOT a requirement for joining the contrib group. -->
| True | Application to join: JSitter - Hello and welcome to the contrib application process! We're happy to have you :)
## Please note these 3 requirements for new contrib projects:
- [x] Include a README.md file containing license and maintainer information.
You can use this example: https://raw.githubusercontent.com/backdrop-ops/contrib/master/examples/README.md
- [x] Include a LICENSE.txt file.
You can use this example: https://raw.githubusercontent.com/backdrop-ops/contrib/master/examples/LICENSE.txt.
- [x] N/A If porting a Drupal 7 project, Maintain the Git history from Drupal.
## Please provide the following information:
**The name of your module, theme, or layout**
JWT
**Post a link to your new Backdrop project under your own GitHub account (option #1)**
https://github.com/jsitter/jwt
**If you have chosen option #2 or #1 above, do you agree to the [Backdrop Contributed Project Agreement](https://github.com/backdrop-ops/contrib#backdrop-contributed-project-agreement)**
YES
<!-- Once we have a chance to review your project, we will check for the 3 requirements at the top of this issue. If those requirements are met, you will be invited to the @backdrop-contrib group. At that point you will be able to transfer the project. -->
<!-- Please note that we may also include additional feedback in the code review, but anything else is only intended to be helpful, and is NOT a requirement for joining the contrib group. -->
| main | application to join jsitter hello and welcome to the contrib application process we re happy to have you please note these requirements for new contrib projects include a readme md file containing license and maintainer information you can use this example include a license txt file you can use this example n a if porting a drupal project maintain the git history from drupal please provide the following information the name of your module theme or layout jwt post a link to your new backdrop project under your own github account option if you have chosen option or above do you agree to the yes | 1 |
5,871 | 31,860,187,048 | IssuesEvent | 2023-09-15 10:19:23 | flemmerich/pysubgroup | https://api.github.com/repos/flemmerich/pysubgroup | closed | Renaming of basic classes | maintainer | The classes NominalSelector, NumericSelector and NominalTarget (potentially also NumericTarget) should be relabeled for clarity. This is especially true for the nominal selector/target as they are mostly geared towards Boolean targets.
The new names I would suggest are:
NominalSelector -> EqualitySelector
NumericSelector -> IntervalSelector
NominalTarget -> BooleanTarget
(NumericTarget -> MeanTarget?) | True | Renaming of basic classes - The classes NominalSelector, NumericSelector and NominalTarget (potentially also NumericTarget) should be relabeled for clarity. This is especially true for the nominal selector/target as they are mostly geared towards Boolean targets.
The new names I would suggest are:
NominalSelector -> EqualitySelector
NumericSelector -> IntervalSelector
NominalTarget -> BooleanTarget
(NumericTarget -> MeanTarget?) | main | renaming of basic classes the classes nominalselector numericselector and nominaltarget potentially also numerictarget should be relabeled for clarity this is especially true for the nominal selector target as they are mostly geared towards boolean targets the new names i would suggest are nominalselector equalityselector numericselector intervalselector nominaltarget booleantarget numerictarget meantarget | 1 |
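If the renaming goes ahead, the old names could stay importable for a deprecation period. A minimal sketch of warning aliases (the class body below is a placeholder, not pysubgroup's actual implementation):

```python
import warnings

class EqualitySelector:
    # Placeholder body; stands in for the renamed NominalSelector.
    def __init__(self, attribute, value):
        self.attribute = attribute
        self.value = value

def deprecated_alias(new_cls, old_name):
    # Build a subclass that warns whenever the old name is instantiated.
    def __init__(self, *args, **kwargs):
        warnings.warn(
            f"{old_name} is deprecated; use {new_cls.__name__} instead",
            DeprecationWarning,
            stacklevel=2,
        )
        new_cls.__init__(self, *args, **kwargs)
    return type(old_name, (new_cls,), {"__init__": __init__})

NominalSelector = deprecated_alias(EqualitySelector, "NominalSelector")
```

Existing code that constructs `NominalSelector(...)` keeps working (and `isinstance` checks against `EqualitySelector` still pass) while emitting a `DeprecationWarning`.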
657,661 | 21,799,666,897 | IssuesEvent | 2022-05-16 02:39:25 | Wiredcraft/pipelines | https://api.github.com/repos/Wiredcraft/pipelines | closed | GitHub authentication | Status: Backlog Priority: High | We are adding a GitHub OAuth authentication. User can select teams that are allowed to access the UI.
Command line argument: `--github-auth=Acme/Team1,Acme/Team2`
If the user does not belong to an allowed team, they get redirected back to the login page. | 1.0 | GitHub authentication - We are adding a GitHub OAuth authentication. User can select teams that are allowed to access the UI.
Command line argument: `--github-auth=Acme/Team1,Acme/Team2`
If the user does not belong to an allowed team, they get redirected back to the login page. | non_main | github authentication we are adding a github oauth authentication user can select teams that are allowed to access the ui command line argument github auth acme acme if user does not belong to allowed teams he gets redirected back to login page | 0 |
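A sketch of how the `--github-auth=Acme/Team1,Acme/Team2` value could be parsed and enforced (hypothetical helper names; the project's actual implementation may differ):

```python
def parse_allowed_teams(arg_value):
    # "Acme/Team1,Acme/Team2" -> {("Acme", "Team1"), ("Acme", "Team2")}
    teams = set()
    for entry in arg_value.split(","):
        org, sep, team = entry.strip().partition("/")
        if sep and org and team:
            teams.add((org, team))
    return teams

def is_allowed(user_teams, allowed):
    # user_teams: (org, team) pairs from the GitHub OAuth profile.
    return any(tuple(t) in allowed for t in user_teams)
```

If `is_allowed` returns `False`, the handler would redirect the user back to the login page.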
673 | 4,215,801,690 | IssuesEvent | 2016-06-30 06:35:41 | Particular/NServiceBus | https://api.github.com/repos/Particular/NServiceBus | closed | Obsolete Bus class | Project: V6 Launch Size: S State: In Progress - Maintainer Prio Tag: Maintainer Prio | When upgrading a saga the `Bus` property on the base class now resolves to the `Bus` class and we get a "cant find static method" error.

But since the Bus class itself is not obsolete it is hard to work out what is wrong
We should obsolete Bus and give a useful message | True | Obsolete Bus class - When upgrading a saga the `Bus` property on the base class now resolves to the `Bus` class and we get a "cant find static method" error.

But since the Bus class itself is not obsolete it is hard to work out what is wrong
We should obsolete Bus and give a useful message | main | obsolete bus class when upgrading a saga the bus property on the base class no resolves to the bus class and we get a cant find static method error but since the bus class itself is not obsolete it is hard to work out what is wrong we should obsolete bus and give a useful message | 1 |
95,499 | 19,703,864,659 | IssuesEvent | 2022-01-12 19:32:25 | binofcode/binofcode-comments | https://api.github.com/repos/binofcode/binofcode-comments | opened | /github/how-to-setup-github-contribution-grid-snake | binofcode gitalk 4ce3236a985b3eadb66b066f23831554 | https://binofcode.web.app/github/how-to-setup-github-contribution-grid-snake
An Open Source Community for beginners in programming. | 1.0 | /github/how-to-setup-github-contribution-grid-snake - https://binofcode.web.app/github/how-to-setup-github-contribution-grid-snake
An Open Source Community for beginners in programming. | non_main | github how to setup github contribution grid snake an open source community for beginners in programming | 0 |
224,555 | 17,193,043,108 | IssuesEvent | 2021-07-16 13:42:03 | djkoloski/rkyv | https://api.github.com/repos/djkoloski/rkyv | closed | Unclear error message after upgrading to 0.7 | documentation question | ```rust
#[derive(Archive, Deserialize, Serialize, Clone, Debug, Eq, PartialEq)]
pub struct Stream {
pub(crate) head: SignedHead,
pub(crate) outboard: Vec<u8>,
}
impl Stream {
pub(crate) fn to_bytes(&self) -> Result<AlignedVec> {
let mut ser = AlignedSerializer::new(AlignedVec::new());
ser.serialize_value(self)?;
Ok(ser.into_inner())
}
}
```
```
error[E0277]: the trait bound `AlignedSerializer<AlignedVec>: ScratchSpace` is not satisfied
--> core/src/stream.rs:186:13
|
186 | ser.serialize_value(self)?;
| ^^^^^^^^^^^^^^^ the trait `ScratchSpace` is not implemented for `AlignedSerializer<AlignedVec>`
|
= note: required because of the requirements on the impl of `rkyv::Serialize<AlignedSerializer<AlignedVec>>` for `Vec<u8>`
= note: 1 redundant requirements hidden
= note: required because of the requirements on the impl of `rkyv::Serialize<AlignedSerializer<AlignedVec>>` for `stream::Stream`
``` | 1.0 | Unclear error message after upgrading to 0.7 - ```rust
#[derive(Archive, Deserialize, Serialize, Clone, Debug, Eq, PartialEq)]
pub struct Stream {
pub(crate) head: SignedHead,
pub(crate) outboard: Vec<u8>,
}
impl Stream {
pub(crate) fn to_bytes(&self) -> Result<AlignedVec> {
let mut ser = AlignedSerializer::new(AlignedVec::new());
ser.serialize_value(self)?;
Ok(ser.into_inner())
}
}
```
```
error[E0277]: the trait bound `AlignedSerializer<AlignedVec>: ScratchSpace` is not satisfied
--> core/src/stream.rs:186:13
|
186 | ser.serialize_value(self)?;
| ^^^^^^^^^^^^^^^ the trait `ScratchSpace` is not implemented for `AlignedSerializer<AlignedVec>`
|
= note: required because of the requirements on the impl of `rkyv::Serialize<AlignedSerializer<AlignedVec>>` for `Vec<u8>`
= note: 1 redundant requirements hidden
= note: required because of the requirements on the impl of `rkyv::Serialize<AlignedSerializer<AlignedVec>>` for `stream::Stream`
``` | non_main | unclear error message after upgrading to rust pub struct stream pub crate head signedhead pub crate outboard vec impl stream pub crate fn to bytes self result let mut ser alignedserializer new alignedvec new ser serialize value self ok ser into inner error the trait bound alignedserializer scratchspace is not satisfied core src stream rs ser serialize value self the trait scratchspace is not implemented for alignedserializer note required because of the requirements on the impl of rkyv serialize for vec note redundant requirements hidden note required because of the requirements on the impl of rkyv serialize for stream stream | 0 |
4,561 | 23,727,691,347 | IssuesEvent | 2022-08-30 21:19:37 | aws/aws-sam-cli | https://api.github.com/repos/aws/aws-sam-cli | closed | getRemainingTimeInMillis is returning a unix time | area/docker type/bug type/duplicate area/local/invoke stage/pr maintainer/need-followup | <!-- Make sure we don't have an existing Issue that reports the bug you are seeing (both open and closed).
If you do find an existing Issue, re-open or add a comment to that Issue instead of creating a new one. -->
### Description:
Since v1.13.1 `context.getRemainingTimeInMillis()` is returning an integer that represents a recent date, rather than the remaining time in milliseconds.
This also occurs in Ruby lambdas; however, Python 2 lambdas give us 0. All three work correctly in v1.12.0.
### Steps to reproduce:
<!-- Provide steps to replicate.-->
```yaml
# template.yaml
Transform: AWS::Serverless-2016-10-31
Resources:
Index:
Type: AWS::Serverless::Function
Properties:
Handler: index.handler
Runtime: nodejs12.x
Timeout: 5
Events:
Api:
Type: Api
Properties:
Path: /
Method: GET
```
```js
// index.js
exports.handler = async (event, context) => {
return {
body: JSON.stringify({
remainingTimeInMillis: context.getRemainingTimeInMillis(),
date: new Date(context.getRemainingTimeInMillis()),
}),
};
};
```
### Observed result:
<!-- Please provide command output with `--debug` flag set.-->
Running the following command with v1.13.1
`sam local invoke Index --debug`
```bash
2021-01-06 14:18:28,422 | Telemetry endpoint configured to be https://aws-serverless-tools-telemetry.us-west-2.amazonaws.com/metrics
2021-01-06 14:18:28,510 | local invoke command is called
2021-01-06 14:18:28,513 | No Parameters detected in the template
2021-01-06 14:18:28,534 | 2 resources found in the template
2021-01-06 14:18:28,534 | Found Serverless function with name='Index' and CodeUri='.'
2021-01-06 14:18:28,540 | Found one Lambda function with name 'Index'
2021-01-06 14:18:28,541 | Invoking index.handler (nodejs12.x)
2021-01-06 14:18:28,541 | No environment variables found for function 'Index'
2021-01-06 14:18:28,541 | Environment variables overrides data is standard format
2021-01-06 14:18:28,541 | Loading AWS credentials from session with profile 'None'
2021-01-06 14:18:28,553 | Resolving code path. Cwd=/Users/danjordan/Sites/misc/sam-cli-test, CodeUri=.
2021-01-06 14:18:28,553 | Resolved absolute path to code is /Users/danjordan/Sites/misc/sam-cli-test
2021-01-06 14:18:28,553 | Code /Users/danjordan/Sites/misc/sam-cli-test is not a zip/jar file
2021-01-06 14:18:28,574 | Skip pulling image and use local one: amazon/aws-sam-cli-emulation-image-nodejs12.x:rapid-1.13.1.
2021-01-06 14:18:28,574 | Mounting /Users/danjordan/Sites/misc/sam-cli-test as /var/task:ro,delegated inside runtime container
2021-01-06 14:18:28,901 | Starting a timer for 5 seconds for function 'Index'
START RequestId: 46414975-e270-45d2-9c24-5ae61b25cd30 Version: $LATEST
END RequestId: 46414975-e270-45d2-9c24-5ae61b25cd30
REPORT RequestId: 46414975-e270-45d2-9c24-5ae61b25cd30 Init Duration: 0.15 ms Duration: 85.89 ms Billed Duration: 100 ms Memory Size: 128 MB Max Memory Used: 128 MB
2021-01-06 14:18:29,186 | Sending Telemetry: {'metrics': [{'commandRun': {'awsProfileProvided': False, 'debugFlagProvided': True, 'region': '', 'commandName': 'sam local invoke', 'duration': 764, 'exitReason': 'success', 'exitCode': 0, 'requestId': 'c6c57255-762e-42af-a8a5-599eee573190', 'installationId': '190b0744-f13d-4279-84cd-860b54d9c540', 'sessionId': '8a9c4bb4-7609-4a78-9705-ff32bd726835', 'executionEnvironment': 'CLI', 'pyversion': '3.8.7', 'samcliVersion': '1.13.1'}}]}
2021-01-06 14:18:29,858 | HTTPSConnectionPool(host='aws-serverless-tools-telemetry.us-west-2.amazonaws.com', port=443): Read timed out. (read timeout=0.1)
{"body":"{\"remainingTimeInMillis\":1609931569204,\"date\":\"2021-01-06T11:12:49.204Z\"}"}%
```
### Expected result:
<!-- Describe what you expected.-->
Running the following command with v1.12.0
`sam local invoke Index --debug`
```bash
2021-01-06 14:21:14,642 | Telemetry endpoint configured to be https://aws-serverless-tools-telemetry.us-west-2.amazonaws.com/metrics
2021-01-06 14:21:15,249 | local invoke command is called
2021-01-06 14:21:15,252 | No Parameters detected in the template
2021-01-06 14:21:15,281 | 2 resources found in the template
2021-01-06 14:21:15,281 | Found Serverless function with name='Index' and CodeUri='.'
2021-01-06 14:21:15,288 | Found one Lambda function with name 'Index'
2021-01-06 14:21:15,288 | Invoking index.handler (nodejs12.x)
2021-01-06 14:21:15,288 | No environment variables found for function 'Index'
2021-01-06 14:21:15,288 | Environment variables overrides data is standard format
2021-01-06 14:21:15,288 | Loading AWS credentials from session with profile 'None'
2021-01-06 14:21:15,299 | Resolving code path. Cwd=/Users/danjordan/Sites/misc/sam-cli-test, CodeUri=.
2021-01-06 14:21:15,299 | Resolved absolute path to code is /Users/danjordan/Sites/misc/sam-cli-test
2021-01-06 14:21:15,299 | Code /Users/danjordan/Sites/misc/sam-cli-test is not a zip/jar file
2021-01-06 14:21:15,309 | Skip pulling image and use local one: amazon/aws-sam-cli-emulation-image-nodejs12.x:rapid-1.12.0.
2021-01-06 14:21:15,309 | Mounting /Users/danjordan/Sites/misc/sam-cli-test as /var/task:ro,delegated inside runtime container
2021-01-06 14:21:15,672 | Starting a timer for 5 seconds for function 'Index'
START RequestId: 033cd921-a9e4-1351-c5a5-aae6c40794ae Version: $LATEST
END RequestId: 033cd921-a9e4-1351-c5a5-aae6c40794ae
REPORT RequestId: 033cd921-a9e4-1351-c5a5-aae6c40794ae Init Duration: 151.36 ms Duration: 3.38 ms Billed Duration: 100 ms Memory Size: 128 MB Max Memory Used: 41 MB
2021-01-06 14:21:15,934 | Sending Telemetry: {'metrics': [{'commandRun': {'awsProfileProvided': False, 'debugFlagProvided': True, 'region': '', 'commandName': 'sam local invoke', 'duration': 1290, 'exitReason': 'success', 'exitCode': 0, 'requestId': '2949a164-fb1e-4eeb-a11c-a8ef3d8c09dc', 'installationId': '190b0744-f13d-4279-84cd-860b54d9c540', 'sessionId': '4ccaf64c-8096-4db9-b4fb-b28c728927e5', 'executionEnvironment': 'CLI', 'pyversion': '3.8.7', 'samcliVersion': '1.12.0'}}]}
2021-01-06 14:21:16,624 | HTTPSConnectionPool(host='aws-serverless-tools-telemetry.us-west-2.amazonaws.com', port=443): Read timed out. (read timeout=0.1)
{"body":"{\"remainingTimeInMillis\":4882,\"date\":\"1970-01-01T00:00:04.882Z\"}"}
```
### Additional environment details (Ex: Windows, Mac, Amazon Linux etc)
1. OS: Mac OS Big Sur
2. `sam --version`: discovered in v1.15.0, but narrowed down to being introduced in v1.13.1 | True | getRemainingTimeInMillis is returning a unix time - <!-- Make sure we don't have an existing Issue that reports the bug you are seeing (both open and closed).
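The magnitude of the observed value (≈1.6 × 10^12) fits the invocation deadline — an epoch timestamp in milliseconds — being returned directly instead of the difference from the current time. An illustrative sketch of the two behaviours (not SAM's actual code):

```python
import time

def remaining_time_in_millis(deadline_ms, now_ms=None):
    # Expected behaviour: milliseconds left until the invocation deadline.
    if now_ms is None:
        now_ms = int(time.time() * 1000)
    return max(deadline_ms - now_ms, 0)

def buggy_remaining_time(deadline_ms, now_ms=None):
    # Behaviour reported above: the deadline itself (an epoch timestamp)
    # leaks through, e.g. 1609931569204 instead of a few thousand.
    return deadline_ms
```

With a 5-second timeout, the correct value starts near 5000 and counts down; the buggy one is a 13-digit epoch timestamp, which is why `new Date(...)` in the handler prints a recent date.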
If you do find an existing Issue, re-open or add a comment to that Issue instead of creating a new one. -->
### Description:
Since v1.13.1 `context.getRemainingTimeInMillis()` is returning an integer that represents a recent date, rather than the remaining time in milliseconds.
This also occurs in Ruby lambdas; however, Python 2 lambdas give us 0. All three work correctly in v1.12.0.
### Steps to reproduce:
<!-- Provide steps to replicate.-->
```yaml
# template.yaml
Transform: AWS::Serverless-2016-10-31
Resources:
Index:
Type: AWS::Serverless::Function
Properties:
Handler: index.handler
Runtime: nodejs12.x
Timeout: 5
Events:
Api:
Type: Api
Properties:
Path: /
Method: GET
```
```js
// index.js
exports.handler = async (event, context) => {
return {
body: JSON.stringify({
remainingTimeInMillis: context.getRemainingTimeInMillis(),
date: new Date(context.getRemainingTimeInMillis()),
}),
};
};
```
### Observed result:
<!-- Please provide command output with `--debug` flag set.-->
Running the following command with v1.13.1
`sam local invoke Index --debug`
```bash
2021-01-06 14:18:28,422 | Telemetry endpoint configured to be https://aws-serverless-tools-telemetry.us-west-2.amazonaws.com/metrics
2021-01-06 14:18:28,510 | local invoke command is called
2021-01-06 14:18:28,513 | No Parameters detected in the template
2021-01-06 14:18:28,534 | 2 resources found in the template
2021-01-06 14:18:28,534 | Found Serverless function with name='Index' and CodeUri='.'
2021-01-06 14:18:28,540 | Found one Lambda function with name 'Index'
2021-01-06 14:18:28,541 | Invoking index.handler (nodejs12.x)
2021-01-06 14:18:28,541 | No environment variables found for function 'Index'
2021-01-06 14:18:28,541 | Environment variables overrides data is standard format
2021-01-06 14:18:28,541 | Loading AWS credentials from session with profile 'None'
2021-01-06 14:18:28,553 | Resolving code path. Cwd=/Users/danjordan/Sites/misc/sam-cli-test, CodeUri=.
2021-01-06 14:18:28,553 | Resolved absolute path to code is /Users/danjordan/Sites/misc/sam-cli-test
2021-01-06 14:18:28,553 | Code /Users/danjordan/Sites/misc/sam-cli-test is not a zip/jar file
2021-01-06 14:18:28,574 | Skip pulling image and use local one: amazon/aws-sam-cli-emulation-image-nodejs12.x:rapid-1.13.1.
2021-01-06 14:18:28,574 | Mounting /Users/danjordan/Sites/misc/sam-cli-test as /var/task:ro,delegated inside runtime container
2021-01-06 14:18:28,901 | Starting a timer for 5 seconds for function 'Index'
START RequestId: 46414975-e270-45d2-9c24-5ae61b25cd30 Version: $LATEST
END RequestId: 46414975-e270-45d2-9c24-5ae61b25cd30
REPORT RequestId: 46414975-e270-45d2-9c24-5ae61b25cd30 Init Duration: 0.15 ms Duration: 85.89 ms Billed Duration: 100 ms Memory Size: 128 MB Max Memory Used: 128 MB
2021-01-06 14:18:29,186 | Sending Telemetry: {'metrics': [{'commandRun': {'awsProfileProvided': False, 'debugFlagProvided': True, 'region': '', 'commandName': 'sam local invoke', 'duration': 764, 'exitReason': 'success', 'exitCode': 0, 'requestId': 'c6c57255-762e-42af-a8a5-599eee573190', 'installationId': '190b0744-f13d-4279-84cd-860b54d9c540', 'sessionId': '8a9c4bb4-7609-4a78-9705-ff32bd726835', 'executionEnvironment': 'CLI', 'pyversion': '3.8.7', 'samcliVersion': '1.13.1'}}]}
2021-01-06 14:18:29,858 | HTTPSConnectionPool(host='aws-serverless-tools-telemetry.us-west-2.amazonaws.com', port=443): Read timed out. (read timeout=0.1)
{"body":"{\"remainingTimeInMillis\":1609931569204,\"date\":\"2021-01-06T11:12:49.204Z\"}"}%
```
### Expected result:
<!-- Describe what you expected.-->
Running the following command with v1.12.0
`sam local invoke Index --debug`
```bash
2021-01-06 14:21:14,642 | Telemetry endpoint configured to be https://aws-serverless-tools-telemetry.us-west-2.amazonaws.com/metrics
2021-01-06 14:21:15,249 | local invoke command is called
2021-01-06 14:21:15,252 | No Parameters detected in the template
2021-01-06 14:21:15,281 | 2 resources found in the template
2021-01-06 14:21:15,281 | Found Serverless function with name='Index' and CodeUri='.'
2021-01-06 14:21:15,288 | Found one Lambda function with name 'Index'
2021-01-06 14:21:15,288 | Invoking index.handler (nodejs12.x)
2021-01-06 14:21:15,288 | No environment variables found for function 'Index'
2021-01-06 14:21:15,288 | Environment variables overrides data is standard format
2021-01-06 14:21:15,288 | Loading AWS credentials from session with profile 'None'
2021-01-06 14:21:15,299 | Resolving code path. Cwd=/Users/danjordan/Sites/misc/sam-cli-test, CodeUri=.
2021-01-06 14:21:15,299 | Resolved absolute path to code is /Users/danjordan/Sites/misc/sam-cli-test
2021-01-06 14:21:15,299 | Code /Users/danjordan/Sites/misc/sam-cli-test is not a zip/jar file
2021-01-06 14:21:15,309 | Skip pulling image and use local one: amazon/aws-sam-cli-emulation-image-nodejs12.x:rapid-1.12.0.
2021-01-06 14:21:15,309 | Mounting /Users/danjordan/Sites/misc/sam-cli-test as /var/task:ro,delegated inside runtime container
2021-01-06 14:21:15,672 | Starting a timer for 5 seconds for function 'Index'
START RequestId: 033cd921-a9e4-1351-c5a5-aae6c40794ae Version: $LATEST
END RequestId: 033cd921-a9e4-1351-c5a5-aae6c40794ae
REPORT RequestId: 033cd921-a9e4-1351-c5a5-aae6c40794ae Init Duration: 151.36 ms Duration: 3.38 ms Billed Duration: 100 ms Memory Size: 128 MB Max Memory Used: 41 MB
2021-01-06 14:21:15,934 | Sending Telemetry: {'metrics': [{'commandRun': {'awsProfileProvided': False, 'debugFlagProvided': True, 'region': '', 'commandName': 'sam local invoke', 'duration': 1290, 'exitReason': 'success', 'exitCode': 0, 'requestId': '2949a164-fb1e-4eeb-a11c-a8ef3d8c09dc', 'installationId': '190b0744-f13d-4279-84cd-860b54d9c540', 'sessionId': '4ccaf64c-8096-4db9-b4fb-b28c728927e5', 'executionEnvironment': 'CLI', 'pyversion': '3.8.7', 'samcliVersion': '1.12.0'}}]}
2021-01-06 14:21:16,624 | HTTPSConnectionPool(host='aws-serverless-tools-telemetry.us-west-2.amazonaws.com', port=443): Read timed out. (read timeout=0.1)
{"body":"{\"remainingTimeInMillis\":4882,\"date\":\"1970-01-01T00:00:04.882Z\"}"}
```
### Additional environment details (Ex: Windows, Mac, Amazon Linux etc)
1. OS: Mac OS Big Sur
2. `sam --version`: discovered in v1.15.0, but narrowed down to being introduced in v1.13.1 | main | getremainingtimeinmillis is returning a unix time make sure we don t have an existing issue that reports the bug you are seeing both open and closed if you do find an existing issue re open or add a comment to that issue instead of creating a new one description since context getremainingtimeinmillis is returning an integer that represents a recent date rather than the remaining time in milliseconds this also occurs in ruby lambdas however python lambdas give us all three work correctly in steps to reproduce yaml template yaml transform aws serverless resources index type aws serverless function properties handler index handler runtime x timeout events api type api properties path method get js index js exports handler async event context return body json stringify remainingtimeinmillis context getremainingtimeinmillis date new date context getremainingtimeinmillis observed result running the following command with sam local invoke index debug bash telemetry endpoint configured to be local invoke command is called no parameters detected in the template resources found in the template found serverless function with name index and codeuri found one lambda function with name index invoking index handler x no environment variables found for function index environment variables overrides data is standard format loading aws credentials from session with profile none resolving code path cwd users danjordan sites misc sam cli test codeuri resolved absolute path to code is users danjordan sites misc sam cli test code users danjordan sites misc sam cli test is not a zip jar file skip pulling image and use local one amazon aws sam cli emulation image x rapid mounting users danjordan sites misc sam cli test as var task ro delegated inside runtime container starting a timer for seconds for function index start requestid version latest end requestid report requestid init duration 
ms duration ms billed duration ms memory size mb max memory used mb sending telemetry metrics httpsconnectionpool host aws serverless tools telemetry us west amazonaws com port read timed out read timeout body remainingtimeinmillis date expected result running the following command with sam local invoke index debug bash telemetry endpoint configured to be local invoke command is called no parameters detected in the template resources found in the template found serverless function with name index and codeuri found one lambda function with name index invoking index handler x no environment variables found for function index environment variables overrides data is standard format loading aws credentials from session with profile none resolving code path cwd users danjordan sites misc sam cli test codeuri resolved absolute path to code is users danjordan sites misc sam cli test code users danjordan sites misc sam cli test is not a zip jar file skip pulling image and use local one amazon aws sam cli emulation image x rapid mounting users danjordan sites misc sam cli test as var task ro delegated inside runtime container starting a timer for seconds for function index start requestid version latest end requestid report requestid init duration ms duration ms billed duration ms memory size mb max memory used mb sending telemetry metrics httpsconnectionpool host aws serverless tools telemetry us west amazonaws com port read timed out read timeout body remainingtimeinmillis date additional environment details ex windows mac amazon linux etc os mac os big sur sam version discovered in but narrowed down to being introduced in | 1 |
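The values in the row above point at the likely failure mode: the `remainingTimeInMillis` of 1609931569204 is an epoch-milliseconds timestamp (hence `new Date(...)` renders a 2021-01-06 date), which suggests the emulator returned the absolute invocation deadline instead of the difference between that deadline and the current time. A minimal Python sketch of the correct computation (hypothetical, not SAM CLI's actual code; the deadline value is back-derived so the result matches the 4882 ms seen in the v1.12.0 log):

```python
import time

def remaining_time_in_millis(deadline_epoch_ms, now_epoch_ms=None):
    """Remaining budget = absolute deadline minus the current time, floored at 0."""
    if now_epoch_ms is None:
        now_epoch_ms = int(time.time() * 1000)
    return max(0, deadline_epoch_ms - now_epoch_ms)

# Numbers from the report: a 5 s timeout, ~118 ms into the invocation.
deadline = 1609931569204   # absolute deadline in epoch ms (assumed value)
now = 1609931564322        # "current" epoch ms (assumed value)

buggy = deadline                               # what v1.13.1 effectively returned
fixed = remaining_time_in_millis(deadline, now)

print(buggy)   # 1609931569204 -> interpreted by JS as a date in 2021
print(fixed)   # 4882 -> matches the v1.12.0 output in the report
```

Returning the raw deadline is a plausible regression when an emulator passes the deadline through to the runtime and the subtraction step is dropped.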
347 | 3,237,017,476 | IssuesEvent | 2015-10-14 09:28:44 | Homebrew/homebrew | https://api.github.com/repos/Homebrew/homebrew | closed | `brew info --json=v1 --installed` should also return all aliases. | features help wanted maintainer feedback usability | Will make fixing e.g. https://github.com/Homebrew/homebrew-bundle/issues/105 much easier. | True | `brew info --json=v1 --installed` should also return all aliases. - Will make fixing e.g. https://github.com/Homebrew/homebrew-bundle/issues/105 much easier. | main | brew info json installed should also return all aliases will make fixing e g much easier | 1 |
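The Homebrew row above asks for aliases in the `brew info --json=v1 --installed` output so tools like homebrew-bundle can map a Brewfile entry back to its canonical formula. A small Python sketch of that mapping, run against a mocked payload (the `aliases` array per formula and the formula names are assumptions for illustration, not a guaranteed schema):

```python
import json

# Mocked payload in the shape of `brew info --json=v1 --installed` output;
# each formula object is assumed to carry an "aliases" array.
payload = json.loads("""
[
  {"name": "python@3.12", "aliases": ["python3", "python"], "installed": [{"version": "3.12.1"}]},
  {"name": "jq", "aliases": [], "installed": [{"version": "1.7"}]}
]
""")

# Build an alias -> canonical-name map, so an entry like "python3" in a
# Brewfile can be resolved to the installed formula "python@3.12".
alias_map = {alias: f["name"] for f in payload for alias in f.get("aliases", [])}
print(alias_map)   # {'python3': 'python@3.12', 'python': 'python@3.12'}
```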
2,339 | 8,372,684,113 | IssuesEvent | 2018-10-05 07:55:43 | ansible/ansible | https://api.github.com/repos/ansible/ansible | closed | HaProxy drain mode 'bool' object is not callable error | affects_2.4 bug module needs_maintainer net_tools support:community traceback | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
HaProxy Module
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes below -->
```
ansible 2.4.0.0
config file = None
configured module search path = [u'/Users/stewart/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/ansible
executable location = /Library/Frameworks/Python.framework/Versions/2.7/bin/ansible
python version = 2.7.13 (v2.7.13:a06454b1afa1, Dec 17 2016, 12:39:47) [GCC 4.2.1 (Apple Inc. build 5666) (dot 3)]
```
##### CONFIGURATION
`ansible-config dump --only-changed` returned nothing. I have changed no config.
##### OS / ENVIRONMENT
Running from OSX 10.10.5
Trying to control HaProxy 1.5.8 on Debian 8
##### SUMMARY
I am able to enable and disable HaProxy backends correctly but when I try to use the "drain" state type I get this error. The playbook works when I use enabled and disabled but fails with the below error when I use drain.
It says, `'bool' object is not callable`. I might just be configuring this wrong but I believe I am doing everything correctly and using the correct Ansible version.
##### STEPS TO REPRODUCE
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Set server to drain
hosts: haproxy
sudo: yes
tasks:
- name: Set server1 to disabled
haproxy:
state: drain
host: server1
socket: /var/run/haproxy.sock
backend: cluster_www
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
I expect the HaProxy backend to change to the drain state.
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
fatal: [10.5.59.15]: FAILED! => {"changed": false, "failed": true, "module_stderr": "Shared connection to 10.5.59.15 closed.\r\n", "module_stdout": "Traceback (most recent call last):\r\n File \"/tmp/ansible_Xzs48s/ansible_module_haproxy.py\", line 454, in <module>\r\n main()\r\n File \"/tmp/ansible_Xzs48s/ansible_module_haproxy.py\", line 450, in main\r\n ansible_haproxy.act()\r\n File \"/tmp/ansible_Xzs48s/ansible_module_haproxy.py\", line 410, in act\r\n self.drain(self.host, self.backend)\r\nTypeError: 'bool' object is not callable\r\n", "msg": "MODULE FAILURE", "rc": 0}
```
| True | HaProxy drain mode 'bool' object is not callable error - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
HaProxy Module
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes below -->
```
ansible 2.4.0.0
config file = None
configured module search path = [u'/Users/stewart/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/ansible
executable location = /Library/Frameworks/Python.framework/Versions/2.7/bin/ansible
python version = 2.7.13 (v2.7.13:a06454b1afa1, Dec 17 2016, 12:39:47) [GCC 4.2.1 (Apple Inc. build 5666) (dot 3)]
```
##### CONFIGURATION
`ansible-config dump --only-changed` returned nothing. I have changed no config.
##### OS / ENVIRONMENT
Running from OSX 10.10.5
Trying to control HaProxy 1.5.8 on Debian 8
##### SUMMARY
I am able to enable and disable HaProxy backends correctly but when I try to use the "drain" state type I get this error. The playbook works when I use enabled and disabled but fails with the below error when I use drain.
It says, `'bool' object is not callable`. I might just be configuring this wrong but I believe I am doing everything correctly and using the correct Ansible version.
##### STEPS TO REPRODUCE
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Set server to drain
hosts: haproxy
sudo: yes
tasks:
- name: Set server1 to disabled
haproxy:
state: drain
host: server1
socket: /var/run/haproxy.sock
backend: cluster_www
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
I expect the HaProxy backend to change to the drain state.
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
fatal: [10.5.59.15]: FAILED! => {"changed": false, "failed": true, "module_stderr": "Shared connection to 10.5.59.15 closed.\r\n", "module_stdout": "Traceback (most recent call last):\r\n File \"/tmp/ansible_Xzs48s/ansible_module_haproxy.py\", line 454, in <module>\r\n main()\r\n File \"/tmp/ansible_Xzs48s/ansible_module_haproxy.py\", line 450, in main\r\n ansible_haproxy.act()\r\n File \"/tmp/ansible_Xzs48s/ansible_module_haproxy.py\", line 410, in act\r\n self.drain(self.host, self.backend)\r\nTypeError: 'bool' object is not callable\r\n", "msg": "MODULE FAILURE", "rc": 0}
```
| main | haproxy drain mode bool object is not callable error issue type bug report component name haproxy module ansible version ansible config file none configured module search path ansible python module location library frameworks python framework versions lib site packages ansible executable location library frameworks python framework versions bin ansible python version dec configuration ansible config dump only changed returned nothing i have changed no config os environment running from osx trying to control haproxy on debian summary i am able to enable and disable haproxy backends correctly but when i try to use the drain state type i get this error the playbook works when i use enabled and disabled but fails with the below error when i use drain it says bool object is not callable i might just be configuring this wrong but i believe i am doing everything correctly and using the correct ansible version steps to reproduce yaml name set server to drain hosts haproxy sudo yes tasks name set to disabled haproxy state drain host socket var run haproxy sock backend cluster www expected results i expect the haproxy backend to change to the drain state actual results fatal failed changed false failed true module stderr shared connection to closed r n module stdout traceback most recent call last r n file tmp ansible ansible module haproxy py line in r n main r n file tmp ansible ansible module haproxy py line in main r n ansible haproxy act r n file tmp ansible ansible module haproxy py line in act r n self drain self host self backend r ntypeerror bool object is not callable r n msg module failure rc | 1 |
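The traceback in the row above (`self.drain(self.host, self.backend)` raising `TypeError: 'bool' object is not callable`) is the classic symptom of an instance attribute shadowing a method: the module stores the boolean `drain` option on `self`, which hides a `drain()` method of the same name. A minimal, hypothetical sketch of that failure mode (not the actual module code):

```python
class HAProxy:
    def __init__(self, drain=False):
        # Storing the 'drain' *option* as an attribute shadows the
        # drain() *method* below: self.drain is now a plain bool.
        self.drain = drain

    def drain(self, host, backend):  # unreachable via self.drain after __init__
        return f"drain {host} on backend {backend}"

    def act(self):
        # Mirrors the traceback: calls the bool attribute, not the method.
        return self.drain("server1", "cluster_www")

try:
    HAProxy(drain=True).act()
except TypeError as exc:
    print(exc)   # 'bool' object is not callable
```

The usual fix is to rename either the attribute or the method so the two no longer collide.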
328,002 | 24,165,815,547 | IssuesEvent | 2022-09-22 14:57:46 | eosnetworkfoundation/devrel | https://api.github.com/repos/eosnetworkfoundation/devrel | closed | Curate all eosio references in the leap docs | documentation | docs sources: https://github.com/AntelopeIO/leap/tree/main/docs
docs view: https://docs.eosnetwork.com/leap/latest/
Replace all eosio/EOSIO references with Antelope if applicable. | 1.0 | Curate all eosio references in the leap docs - docs sources: https://github.com/AntelopeIO/leap/tree/main/docs
docs view: https://docs.eosnetwork.com/leap/latest/
Replace all eosio/EOSIO references with Antelope if applicable. | non_main | curate all eosio references in the leap docs docs sources docs view replace all eosio eosio references with antelope if applicable | 0 |
1,818 | 6,577,323,575 | IssuesEvent | 2017-09-12 00:06:34 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | ec2_vol does not allow adding all available volume types | affects_2.1 aws bug_report cloud waiting_on_maintainer | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ec2_vol
##### ANSIBLE VERSION
```
ansible 2.1.0.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
##### OS / ENVIRONMENT
Linux: `CentOS 7.1`, `Ubuntu 14.04`, `Ubuntu 16.04`
arch: `x86_64`
##### SUMMARY
if you want to attach a volume of type not listed in ec2_vol.py explicitly your task fails.
##### STEPS TO REPRODUCE
The following playbook would fail
```
- name: attach new volume
hosts: localhost
tasks:
module: ec2_vol
instance: '<your instance id>'
region: '<your region>'
device_name: '/dev/xvdf' # or whatever
volume_size: 40
volume_type: 'st1'
iops: 100
delete_on_termination: true
```
##### EXPECTED RESULTS
I would expect the task to succeed :)
##### ACTUAL RESULTS
```
ansible-playbook -i hosts -e <custom params> playbooks/ec2node-provision.yml
TASK [myuser.ec2node : Attach local volumes (storage_type=local)] **************
task path: /home/user/workspace/ansible/ec2/playbooks/roles/myuser.ec2node/tasks/ec2_setup.yml:43
fatal: [localhost]: FAILED! => {"failed": true, "msg": "value of volume_type must be one of: standard,gp2,io1, got: st1"}
to retry, use: --limit @playbooks/ec2node-provision.retry
PLAY RECAP *********************************************************************
```
The code of `ec2_vol.py` clearly limits types in `main()`
| True | ec2_vol does not allow adding all available volume types - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ec2_vol
##### ANSIBLE VERSION
```
ansible 2.1.0.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
##### OS / ENVIRONMENT
Linux: `CentOS 7.1`, `Ubuntu 14.04`, `Ubuntu 16.04`
arch: `x86_64`
##### SUMMARY
if you want to attach a volume of type not listed in ec2_vol.py explicitly your task fails.
##### STEPS TO REPRODUCE
The following playbook would fail
```
- name: attach new volume
hosts: localhost
tasks:
module: ec2_vol
instance: '<your instance id>'
region: '<your region>'
device_name: '/dev/xvdf' # or whatever
volume_size: 40
volume_type: 'st1'
iops: 100
delete_on_termination: true
```
##### EXPECTED RESULTS
I would expect the task to succeed :)
##### ACTUAL RESULTS
```
ansible-playbook -i hosts -e <custom params> playbooks/ec2node-provision.yml
TASK [myuser.ec2node : Attach local volumes (storage_type=local)] **************
task path: /home/user/workspace/ansible/ec2/playbooks/roles/myuser.ec2node/tasks/ec2_setup.yml:43
fatal: [localhost]: FAILED! => {"failed": true, "msg": "value of volume_type must be one of: standard,gp2,io1, got: st1"}
to retry, use: --limit @playbooks/ec2node-provision.retry
PLAY RECAP *********************************************************************
```
The code of `ec2_vol.py` clearly limits types in `main()`
| main | vol does not allow adding all available volume types issue type bug report component name vol ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration os environment linux centos ubuntu ubuntu arch summary if you want to attach a volume of type not listed in vol py explicitly your task fails steps to reproduce the following playbook would fail name attach new volume hosts localhost tasks module vol instance region device name dev xvdf or whatever volume size volume type iops delete on termination true expected results i would expect the task to succeed actual results ansible playbook i hosts e playbooks provision yml task task path home user workspace ansible playbooks roles myuser tasks setup yml fatal failed failed true msg value of volume type must be one of standard got to retry use limit playbooks provision retry play recap the code of vol py clearly limits types in main | 1 |
4,274 | 21,475,469,915 | IssuesEvent | 2022-04-26 13:20:16 | precice/precice | https://api.github.com/repos/precice/precice | opened | Try to remove SolverInterfaceImpl::mesh() | question maintainability | I stumbled across the public getter `SolverInterfaceImpl::mesh()` when working on the documentation of SolverInterfaceImpl in #1249 and think we should remove this getter, because it is only used in tests and has no proper use-case to justify a public function. This is mostly motivated from a class design perspective trying to simplify `SolverInterfaceImpl`.
As a compromise, we could also add this function to a fixture or the `WhiteboxAccessor`, but I think improving the tests would be the better solution. However, for most of the tests I cannot judge whether removing `SolverInterfaceImpl::mesh()` would be ok in the context of the original goal of the test.
I found the following tests that use this function:
### `tests/serial/whitebox/TestExplicitWithDataScaling.cpp`
Easy to remove, because `testing::WhiteboxAccessor::impl(cplInterface).mesh("Test-Square-One").vertices().size()` always returns `nVertices = 4`. No need for this function.
### `tests/parallel/TestFinalize.cpp`
What is it actually testing? Can we remove this test?
### `tests/parallel/NearestProjectionRePartitioning.cpp`
I don't understand the rationale here. But is related to #371
### `/home/benjamin/Programming/precice/tests/serial/mapping-scaled-consistent/helpers.cpp`
Do we need to go that low here? For some functions I have the impression the purpose of these helpers lies somewhere else and not on testing the correct creation of the mesh. (Can we just remove the `BOOST_REQUIRE` calls?)
### `tests/serial/mapping-nearest-projection/helpers.cpp`
At different places. Sometimes a similar situations like above (`BOOST_REQUIRE`). `testQuadMappingNearestProjectionTallKite` and `testQuadMappingNearestProjectionWideKite` are testing `mesh::edgeLength(edge)`. Should this test maybe go somewhere else as a unit test, where accessing the mesh is easier?
| True | Try to remove SolverInterfaceImpl::mesh() - I stumbled across the public getter `SolverInterfaceImpl::mesh()` when working on the documentation of SolverInterfaceImpl in #1249 and think we should remove this getter, because it is only used in tests and has no proper use-case to justify a public function. This is mostly motivated from a class design perspective trying to simplify `SolverInterfaceImpl`.
As a compromise, we could also add this function to a fixture or the `WhiteboxAccessor`, but I think improving the tests would be the better solution. However, for most of the tests I cannot judge whether removing `SolverInterfaceImpl::mesh()` would be ok in the context of the original goal of the test.
I found the following tests that use this function:
### `tests/serial/whitebox/TestExplicitWithDataScaling.cpp`
Easy to remove, because `testing::WhiteboxAccessor::impl(cplInterface).mesh("Test-Square-One").vertices().size()` always returns `nVertices = 4`. No need for this function.
### `tests/parallel/TestFinalize.cpp`
What is it actually testing? Can we remove this test?
### `tests/parallel/NearestProjectionRePartitioning.cpp`
I don't understand the rationale here. But is related to #371
### `/home/benjamin/Programming/precice/tests/serial/mapping-scaled-consistent/helpers.cpp`
Do we need to go that low here? For some functions I have the impression the purpose of these helpers lies somewhere else and not on testing the correct creation of the mesh. (Can we just remove the `BOOST_REQUIRE` calls?)
### `tests/serial/mapping-nearest-projection/helpers.cpp`
At different places. Sometimes a similar situations like above (`BOOST_REQUIRE`). `testQuadMappingNearestProjectionTallKite` and `testQuadMappingNearestProjectionWideKite` are testing `mesh::edgeLength(edge)`. Should this test maybe go somewhere else as a unit test, where accessing the mesh is easier?
| main | try to remove solverinterfaceimpl mesh i stumbled across the public getter solverinterfaceimpl mesh when working on the documentation of solverinterfaceimpl in and think we should remove this getter because it is only used in tests and has no proper use case to justify a public function this is mostly motivated from a class design perspective trying to simplify solverinterfaceimpl as a compromise we could also add this function to a fixture or the whiteboxaccessor but i think improving the tests would be the better solution however for most of the tests i cannot judge whether removing solverinterfaceimpl mesh would be ok in the context of the original goal of the test i found the following tests that use this function tests serial whitebox testexplicitwithdatascaling cpp easy to remove because testing whiteboxaccessor impl cplinterface mesh test square one vertices size always returns nvertices no need for this function tests parallel testfinalize cpp what is it actually testing can we remove this test tests parallel nearestprojectionrepartitioning cpp i don t understand the rationale here but is related to home benjamin programming precice tests serial mapping scaled consistent helpers cpp do we need to go that low here for some functions i have the impression the purpose of these helpers lies somewhere else and not on testing the correct creation of the mesh can we just remove the boost require calls tests serial mapping nearest projection helpers cpp at different places sometimes a similar situations like above boost require testquadmappingnearestprojectiontallkite and testquadmappingnearestprojectionwidekite are testing mesh edgelength edge should this test maybe go somewhere else as a unit test where accessing the mesh is easier | 1 |
33,809 | 4,862,366,298 | IssuesEvent | 2016-11-14 12:08:40 | palantir/atlasdb | https://api.github.com/repos/palantir/atlasdb | closed | Flaky test: CassandraDropwizardEteTest | component: testing priority: P2 | This test has been failing at multiple instances, about 1/2 times recently.
Flaky builds:
- https://circleci.com/gh/palantir/atlasdb/2231
- https://circleci.com/gh/palantir/atlasdb/2237
- https://circleci.com/gh/palantir/atlasdb/2744
| 1.0 | Flaky test: CassandraDropwizardEteTest - This test has been failing at multiple instances, about 1/2 times recently.
Flaky builds:
- https://circleci.com/gh/palantir/atlasdb/2231
- https://circleci.com/gh/palantir/atlasdb/2237
- https://circleci.com/gh/palantir/atlasdb/2744
| non_main | flaky test cassandradropwizardetetest this test has been failing at multiple instances about times recently flaky builds | 0 |
25,123 | 4,146,856,586 | IssuesEvent | 2016-06-15 02:47:31 | steedos/apps | https://api.github.com/repos/steedos/apps | closed | Create a new form and modify the form content: after the condition is evaluated, the next step name changes but the handler does not. The approval node's position-based handler is displayed automatically, but the fill-in node's handler property is "person specified at approval time". | fix:Done test:OK type:bug | Next step: Approval

Next step: Fill-in

 | 1.0 | Create a new form and modify the form content: after the condition is evaluated, the next step name changes but the handler does not. The approval node's position-based handler is displayed automatically, but the fill-in node's handler property is "person specified at approval time". - Next step: Approval

Next step: Fill-in

 | non_main | create a new form and modify the form content after the condition is evaluated the next step name changes but the handler does not the approval node s position based handler is displayed automatically but the fill in node s handler property is person specified at approval time next step approval next step fill in | 0
107,940 | 4,322,371,528 | IssuesEvent | 2016-07-25 13:54:02 | docker/docker | https://api.github.com/repos/docker/docker | closed | [1.12rc4] docker stats infinite loop after removing containers | kind/bug priority/P2 version/1.12 | <!--
If you are reporting a new issue, make sure that we do not have any duplicates
already open. You can ensure this by searching the issue list for this
repository. If there is a duplicate, please close your issue and add a comment
to the existing issue instead.
If you suspect your issue is a bug, please edit your issue description to
include the BUG REPORT INFORMATION shown below. If you fail to provide this
information within 7 days, we cannot debug your issue and will close it. We
will, however, reopen it if you later provide the information.
For more information about reporting issues, see
https://github.com/docker/docker/blob/master/CONTRIBUTING.md#reporting-other-issues
---------------------------------------------------
BUG REPORT INFORMATION
---------------------------------------------------
Use the commands below to provide key information from your environment:
You do NOT have to include this information if this is a FEATURE REQUEST
-->
Removing a container with `docker rm -f` while watching its stats with `docker stats` creates a bug with docker stats that takes 99% of the CPU (or more according to the number of cores you have).
It seems to happen each time a container that 'has been seen' by `docker stats` is removed.
As indicated below, the problem is probably in the daemon rather than in the client, as it is reproducible with docker-py.
**Output of `docker version`:**
(also tested on a Fedora 24 VM with the same version of Docker client/server)
```
Client:
Version: 1.12.0-rc4
API version: 1.24
Go version: go1.6.2
Git commit: e4a0dbc
Built: Wed Jul 13 03:28:51 2016
OS/Arch: darwin/amd64
Experimental: true
Server:
Version: 1.12.0-rc4
API version: 1.24
Go version: go1.6.2
Git commit: e4a0dbc
Built: Wed Jul 13 03:28:51 2016
OS/Arch: linux/amd64
Experimental: true
```
**Output of `docker info`:**
```
Containers: 26
Running: 1
Paused: 0
Stopped: 25
Images: 75
Server Version: 1.12.0-rc4
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 124
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: overlay null bridge host
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options: seccomp
Kernel Version: 4.4.15-moby
Operating System: Alpine Linux v3.4
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 1.954 GiB
Name: moby
ID: HFSP:6PBA:U5F4:5ZV7:EH7F:7OKJ:3ALL:PXGV:RWTY:F2RQ:JDCF:YTUW
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
File Descriptors: 22
Goroutines: 42
System Time: 2016-07-25T07:44:01.746691065Z
EventsListeners: 1
No Proxy: *.local, 169.254/16
Registry: https://index.docker.io/v1/
Experimental: true
Insecure Registries:
127.0.0.0/8
```
**Additional environment details (AWS, VirtualBox, physical, etc.):**
Problem occurs, at least, on the current Docker for Mac 1.12rc4-beta20 and on a Fedora 24 VM with Docker 1.12rc4.
**Steps to reproduce the issue:**
1. start a random container: `docker run -ti --name arandomname centos`, and leave it alive
2. start `docker stats`
3. remove the first container with `docker rm -f arandomname`
It also happens when I do this:
1. start `docker stats`
2. start a random container: `docker run --name arandomname centos sleep 5`, wait until it ends
3. (docker stats is still behaving properly)
4. `docker rm arandomname`
**Describe the results you received:**
CPU usage skyrockets
**Describe the results you expected:**
CPU usage remains low, as usual when using docker stats.
**Additional information you deem important (e.g. issue happens only occasionally):**
When I test the issue with docker-py, `client.stats` returns me an infinite number of None, so it probably means the issue is on the daemon side. | 1.0 | [1.12rc4] docker stats infinite loop after removing containers - <!--
If you are reporting a new issue, make sure that we do not have any duplicates
already open. You can ensure this by searching the issue list for this
repository. If there is a duplicate, please close your issue and add a comment
to the existing issue instead.
If you suspect your issue is a bug, please edit your issue description to
include the BUG REPORT INFORMATION shown below. If you fail to provide this
information within 7 days, we cannot debug your issue and will close it. We
will, however, reopen it if you later provide the information.
For more information about reporting issues, see
https://github.com/docker/docker/blob/master/CONTRIBUTING.md#reporting-other-issues
---------------------------------------------------
BUG REPORT INFORMATION
---------------------------------------------------
Use the commands below to provide key information from your environment:
You do NOT have to include this information if this is a FEATURE REQUEST
-->
Removing a container with `docker rm -f` while watching its stats with `docker stats` triggers a bug in `docker stats` that consumes 99% of the CPU (or more, depending on the number of cores you have).
It seems to happen each time a container that 'has been seen' by `docker stats` is removed.
As indicated below, the problem is probably in the daemon rather than in the client, as it is reproducible with docker-py.
**Output of `docker version`:**
(also tested on a Fedora 24 VM with the same version of Docker client/server)
```
Client:
Version: 1.12.0-rc4
API version: 1.24
Go version: go1.6.2
Git commit: e4a0dbc
Built: Wed Jul 13 03:28:51 2016
OS/Arch: darwin/amd64
Experimental: true
Server:
Version: 1.12.0-rc4
API version: 1.24
Go version: go1.6.2
Git commit: e4a0dbc
Built: Wed Jul 13 03:28:51 2016
OS/Arch: linux/amd64
Experimental: true
```
**Output of `docker info`:**
```
Containers: 26
Running: 1
Paused: 0
Stopped: 25
Images: 75
Server Version: 1.12.0-rc4
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 124
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: overlay null bridge host
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options: seccomp
Kernel Version: 4.4.15-moby
Operating System: Alpine Linux v3.4
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 1.954 GiB
Name: moby
ID: HFSP:6PBA:U5F4:5ZV7:EH7F:7OKJ:3ALL:PXGV:RWTY:F2RQ:JDCF:YTUW
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
File Descriptors: 22
Goroutines: 42
System Time: 2016-07-25T07:44:01.746691065Z
EventsListeners: 1
No Proxy: *.local, 169.254/16
Registry: https://index.docker.io/v1/
Experimental: true
Insecure Registries:
127.0.0.0/8
```
**Additional environment details (AWS, VirtualBox, physical, etc.):**
Problem occurs, at least, on the current Docker for Mac 1.12rc4-beta20 and on a Fedora 24 VM with Docker 1.12rc4.
**Steps to reproduce the issue:**
1. start a random container: `docker run -ti --name arandomname centos`, and leave it alive
2. start `docker stats`
3. remove the first container with `docker rm -f arandomname`
It also happens when I do this:
1. start `docker stats`
2. start a random container: `docker run --name arandomname centos sleep 5`, wait until it ends
3. (docker stats is still behaving properly)
4. `docker rm arandomname`
**Describe the results you received:**
CPU usage skyrockets
**Describe the results you expected:**
CPU usage remains low, as usual when using docker stats.
**Additional information you deem important (e.g. issue happens only occasionally):**
When I test the issue with docker-py, `client.stats` returns me an infinite number of None, so it probably means the issue is on the daemon side. | non_main | docker stats infinite loop after removing containers if you are reporting a new issue make sure that we do not have any duplicates already open you can ensure this by searching the issue list for this repository if there is a duplicate please close your issue and add a comment to the existing issue instead if you suspect your issue is a bug please edit your issue description to include the bug report information shown below if you fail to provide this information within days we cannot debug your issue and will close it we will however reopen it if you later provide the information for more information about reporting issues see bug report information use the commands below to provide key information from your environment you do not have to include this information if this is a feature request removing a container with docker rm f while watching its stats with docker stats creates a bug with docker stats that takes of the cpu or more according to the number of cores you have it seems to happen each time a container that has been seen by docker stats is removed as indicated below the problem is probably in the daemon rather than in the client as it is reproducible with docker py output of docker version also tested on a fedora vm with the same version of docker client server client version api version go version git commit built wed jul os arch darwin experimental true server version api version go version git commit built wed jul os arch linux experimental true output of docker info containers running paused stopped images server version storage driver aufs root dir var lib docker aufs backing filesystem extfs dirs supported true logging driver json file cgroup driver cgroupfs plugins volume local network overlay null bridge host swarm inactive runtimes runc default runtime runc security options seccomp kernel 
version moby operating system alpine linux ostype linux architecture cpus total memory gib name moby id hfsp pxgv rwty jdcf ytuw docker root dir var lib docker debug mode client false debug mode server true file descriptors goroutines system time eventslisteners no proxy local registry experimental true insecure registries additional environment details aws virtualbox physical etc problem occurs at least on the current docker for mac and on a fedora vm with docker steps to reproduce the issue start a random container docker run ti name arandomname centos and leave it alive start docker stats remove the first container with docker rm f arandomname it also happens when i do this start docker stats start a random container docker run name arandomname centos sleep wait until it ends docker stats is still behaving properly docker rm arandomname describe the results you received cpu usage skyrockets describe the results you expected cpu usage remains low as usual when using docker stats additional information you deem important e g issue happens only occasionally when i test the issue with docker py client stats returns me an infinite number of none so it probably means the issue is on the daemon side | 0 |
862 | 4,533,155,487 | IssuesEvent | 2016-09-08 10:31:26 | Particular/NServiceBus.SqlServer | https://api.github.com/repos/Particular/NServiceBus.SqlServer | closed | Unhandled Exception: System.ObjectDisposedException: Cannot access a disposed object. | Tag: Maintainer Prio Type: Bug | When I try to send with two producers into the same SQL Azure database, the endpoints crash with the following exception:
```
Unhandled Exception: System.ObjectDisposedException: Cannot access a disposed object.
Object name: 'SqlDelegatedTransaction'.
at System.Data.SqlClient.SqlDelegatedTransaction.GetValidConnection()
at System.Data.SqlClient.SqlDelegatedTransaction.Rollback(SinglePhaseEnlistment enlistment)
at System.Transactions.DurableEnlistmentAborting.EnterState(InternalEnlistment enlistment)
at System.Transactions.DurableEnlistmentActive.InternalAborted(InternalEnlistment enlistment)
at System.Transactions.TransactionStateAborted.EnterState(InternalTransaction tx)
at System.Transactions.TransactionStateActive.Rollback(InternalTransaction tx, Exception e)
at System.Transactions.EnlistableStates.Timeout(InternalTransaction tx)
at System.Transactions.Bucket.TimeoutTransactions()
at System.Transactions.BucketSet.TimeoutTransactions()
at System.Transactions.TransactionTable.ThreadTimer(Object state)
at System.Threading.TimerQueueTimer.CallCallbackInContext(Object state)
at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
at System.Threading.TimerQueueTimer.CallCallback()
at System.Threading.TimerQueueTimer.Fire()
at System.Threading.TimerQueue.FireNextTimers()
at System.Threading.TimerQueue.AppDomainTimerCallback()
```
I tried to wrap the sends with
```
using (var tx = new TransactionScope(TransactionScopeOption.Suppress, TransactionScopeAsyncFlowOption.Enabled))
{}
```
It doesn't help | True | Unhandled Exception: System.ObjectDisposedException: Cannot access a disposed object. - When I try to send with two producers into the same SQL Azure database, the endpoints crash with the following exception:
```
Unhandled Exception: System.ObjectDisposedException: Cannot access a disposed object.
Object name: 'SqlDelegatedTransaction'.
at System.Data.SqlClient.SqlDelegatedTransaction.GetValidConnection()
at System.Data.SqlClient.SqlDelegatedTransaction.Rollback(SinglePhaseEnlistment enlistment)
at System.Transactions.DurableEnlistmentAborting.EnterState(InternalEnlistment enlistment)
at System.Transactions.DurableEnlistmentActive.InternalAborted(InternalEnlistment enlistment)
at System.Transactions.TransactionStateAborted.EnterState(InternalTransaction tx)
at System.Transactions.TransactionStateActive.Rollback(InternalTransaction tx, Exception e)
at System.Transactions.EnlistableStates.Timeout(InternalTransaction tx)
at System.Transactions.Bucket.TimeoutTransactions()
at System.Transactions.BucketSet.TimeoutTransactions()
at System.Transactions.TransactionTable.ThreadTimer(Object state)
at System.Threading.TimerQueueTimer.CallCallbackInContext(Object state)
at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
at System.Threading.TimerQueueTimer.CallCallback()
at System.Threading.TimerQueueTimer.Fire()
at System.Threading.TimerQueue.FireNextTimers()
at System.Threading.TimerQueue.AppDomainTimerCallback()
```
I tried to wrap the sends with
```
using (var tx = new TransactionScope(TransactionScopeOption.Suppress, TransactionScopeAsyncFlowOption.Enabled))
{}
```
It doesn't help | main | unhandled exception system objectdisposedexception cannot access a disposed object when i try to send with to producers into the same sql azure database the endpoints crash with the following exception unhandled exception system objectdisposedexception cannot access a disposed object object name sqldelegatedtransaction at system data sqlclient sqldelegatedtransaction getvalidconnection at system data sqlclient sqldelegatedtransaction rollback singlephaseenlistment enlistment at system transactions durableenlistmentaborting enterstate internalenlistment enlistment at system transactions durableenlistmentactive internalaborted internalenlistment enlistment at system transactions transactionstateaborted enterstate internaltransaction tx at system transactions transactionstateactive rollback internaltransaction tx exception e at system transactions enlistablestates timeout internaltransaction tx at system transactions bucket timeouttransactions at system transactions bucketset timeouttransactions at system transactions transactiontable threadtimer object state at system threading timerqueuetimer callcallbackincontext object state at system threading executioncontext runinternal executioncontext executioncontext contextcallback callback object state boolean preservesyncctx at system threading executioncontext run executioncontext executioncontext contextcallback callback object state boolean preservesyncctx at system threading timerqueuetimer callcallback at system threading timerqueuetimer fire at system threading timerqueue firenexttimers at system threading timerqueue appdomaintimercallback i tried to wrap the sends with using var tx new transactionscope transactionscopeoption suppress transactionscopeasyncflowoption enabled it doesn t help | 1 |
54,582 | 6,396,695,018 | IssuesEvent | 2017-08-04 16:06:28 | dwyl/best-evidence | https://api.github.com/repos/dwyl/best-evidence | closed | Add Best Evidence heroku link to readme | enhancement please-test priority-2 T25m | **As** someone coming across the best-evidence repo
**I want to** know where to view the best-evidence site
**So that** I can see a working version of it.
- [x] Add Best Evidence heroku link to readme | 1.0 | Add Best Evidence heroku link to readme - **As** someone coming across the best-evidence repo
**I want to** know where to view the best-evidence site
**So that** I can see a working version of it.
- [x] Add Best Evidence heroku link to readme | non_main | add best evidence heroku link to readme as someone coming across the best evidence repo i want to know where to view the best evidence site so that i can see a working version of it add best evidence heroku link to readme | 0 |
2,653 | 9,083,454,093 | IssuesEvent | 2019-02-17 20:36:41 | pound-python/infobob | https://api.github.com/repos/pound-python/infobob | closed | Overhaul retrieval of contents from pastebin links | maintainability | Should be easy enough to refactor this into something a bit clearer, and less reliant on massive generated regexes and string formatting. | True | Overhaul retrieval of contents from pastebin links - Should be easy enough to refactor this into something a bit clearer, and less reliant on massive generated regexes and string formatting. | main | overhaul retrieval of contents from pastebin links should be easy enough to refactor this into something a bit clearer and less reliant on massive generated regexes and string formatting | 1 |
698 | 4,265,383,540 | IssuesEvent | 2016-07-12 10:52:21 | Particular/NServiceBus | https://api.github.com/repos/Particular/NServiceBus | closed | Avoid closure capturing in `Task.ContinueWith` | Tag: Maintainer Prio | <!-- Relates to https://github.com/Particular/PlatformDevelopment/issues/861 -->
If you pass any context (`this`, an instance member, or a local variable) to a lambda, the delegate is compiled into a closure and caching won't work.
If we make sure we don't capture state, we can benefit from lambda delegate caching and generally save the per-closure allocations.
Sounds like it's not worth optimizing for? Keep in mind that when executing hundreds of operations concurrently, allocations like that can easily add up.
```ini
BenchmarkDotNet=v0.9.7.0
OS=Microsoft Windows NT 6.2.9200.0
Processor=Intel(R) Xeon(R) CPU E5-2673 v3 2.40GHz, ProcessorCount=8
Frequency=10000000 ticks, Resolution=100.0000 ns, Timer=UNKNOWN
HostCLR=MS.NET 4.0.30319.42000, Arch=64-bit RELEASE [RyuJIT]
JitModules=clrjit-v4.6.1055.0
Type=ContinueWithBenchmarks Mode=Throughput
```
Method | Median | StdDev | Scaled | Mean | StdError | StdDev | Op/s | Min | Q1 | Median | Q3 | Max | Gen 0 | Gen 1 | Gen 2 | Bytes Allocated/Op |
-------------------- |---------- |---------- |------- |---------- |---------- |---------- |---------- |---------- |---------- |---------- |---------- |---------- |------ |------ |------ |------------------- |
ContinueWith | 1.4619 us | 0.0958 us | 1.00 | 1.4484 us | 0.0110 us | 0.0958 us | 690426.13 | 1.1596 us | 1.4091 us | 1.4619 us | 1.5169 us | 1.6524 us | 39.75 | 19.50 | - | 58.61 |
ContinueWithClosure | 1.4960 us | 0.1938 us | 1.02 | 1.4654 us | 0.0396 us | 0.1938 us | 682392.75 | 0.6727 us | 1.4612 us | 1.4960 us | 1.5093 us | 1.7665 us | 91.00 | 19.00 | - | 82.30 |
| True | Avoid closure capturing in `Task.ContinueWith` - <!-- Relates to https://github.com/Particular/PlatformDevelopment/issues/861 -->
If you pass any context (`this`, an instance member, or a local variable) to a lambda, the delegate is compiled into a closure and caching won't work.
If we make sure we don't capture state, we can benefit from lambda delegate caching and generally save the per-closure allocations.
Sounds like it's not worth optimizing for? Keep in mind that when executing hundreds of operations concurrently, allocations like that can easily add up.
```ini
BenchmarkDotNet=v0.9.7.0
OS=Microsoft Windows NT 6.2.9200.0
Processor=Intel(R) Xeon(R) CPU E5-2673 v3 2.40GHz, ProcessorCount=8
Frequency=10000000 ticks, Resolution=100.0000 ns, Timer=UNKNOWN
HostCLR=MS.NET 4.0.30319.42000, Arch=64-bit RELEASE [RyuJIT]
JitModules=clrjit-v4.6.1055.0
Type=ContinueWithBenchmarks Mode=Throughput
```
Method | Median | StdDev | Scaled | Mean | StdError | StdDev | Op/s | Min | Q1 | Median | Q3 | Max | Gen 0 | Gen 1 | Gen 2 | Bytes Allocated/Op |
-------------------- |---------- |---------- |------- |---------- |---------- |---------- |---------- |---------- |---------- |---------- |---------- |---------- |------ |------ |------ |------------------- |
ContinueWith | 1.4619 us | 0.0958 us | 1.00 | 1.4484 us | 0.0110 us | 0.0958 us | 690426.13 | 1.1596 us | 1.4091 us | 1.4619 us | 1.5169 us | 1.6524 us | 39.75 | 19.50 | - | 58.61 |
ContinueWithClosure | 1.4960 us | 0.1938 us | 1.02 | 1.4654 us | 0.0396 us | 0.1938 us | 682392.75 | 0.6727 us | 1.4612 us | 1.4960 us | 1.5093 us | 1.7665 us | 91.00 | 19.00 | - | 82.30 |
| main | avoid closure capturing in task continuewith if you pass any context this an instance member or a local variable to a lambda caching won’t work to a closure if we make sure we don t capture state we can benefit from lambda delegate caching and generally save allocations per closure sounds like not worth optimizing for keep in mind that when executing hundreds of operations concurrently allocations like that can easily add up ini benchmarkdotnet os microsoft windows nt processor intel r xeon r cpu processorcount frequency ticks resolution ns timer unknown hostclr ms net arch bit release jitmodules clrjit type continuewithbenchmarks mode throughput method median stddev scaled mean stderror stddev op s min median max gen gen gen bytes allocated op continuewith us us us us us us us us us us continuewithclosure us us us us us us us us us us | 1 |
3,727 | 15,537,192,673 | IssuesEvent | 2021-03-15 00:31:45 | exercism/python | https://api.github.com/repos/exercism/python | closed | [v3] Add prerequisites to Practice Exercises | in-progress 🌿 maintainer chore 🔧 | _This issue is part of the migration to v3. You can read full details about the various changes [here](https://github.com/exercism/v3-launch/)._
Exercism v3 introduces a new type of exercise: [Concept Exercises](https://github.com/exercism/v3-docs/blob/main/product/concept-exercises.md). All existing (V2) exercises will become [Practice Exercises](https://github.com/exercism/v3-docs/blob/main/product/practice-exercises.md).
Concept Exercises and Practice Exercises are linked to each other via [Concepts](https://github.com/exercism/v3-docs/blob/main/anatomy/tracks/concepts.md). Concepts are taught by Concept Exercises and practiced in Practice Exercises. Each Exercise (Concept or Practice) has prerequisites, which must be met to unlock an Exercise - once all the prerequisite Concepts have been "taught" by a Concept Exercise, the exercise itself becomes unlocked.
For example, in some languages completing the Concept Exercises that teach the "String Interpolation" and "Optional Parameters" concepts might then unlock the `two-fer` Practice Exercise.
Each Practice Exercise has two fields containing concepts: a `practices` field and a `prerequisites` field.
## Practices
The `practices` key should list the slugs of Concepts that this Practice Exercise actively allows a student to practice.
- These show up in the UI as "Practice this Concept in: TwoFer, Leap, etc"
- Try and choose 3 - 8 Exercises that practice each Concept.
- Try and choose at least two Exercises that allow someone to practice the basics of a Concept.
- Some Concepts are very common (for example `strings`). In those cases we recommend choosing a few good exercises that make people think about those Concepts in interesting ways. For example, exercises that require UTF-8, string concatenation, char enumeration, etc, would all be good examples.
- There should be one or more Concepts to practice per exercise.
## Prerequisites
The `prerequisites` key lists the Concept Exercises that a student must have completed in order to access this Practice Exercise.
- These show up in the UI as "Learn Strings to unlock TwoFer"
- It should include all Concepts that a student needs to have covered to be able to complete the exercise in at least one idiomatic way. For example, for the TwoFer exercise in Ruby, prerequisites might include `strings`, `optional-params`, `implicit-return`.
- For Exercises that can be completed using alternative Concepts (e.g. an Exercise solvable by `loops` or `recursion`), the maintainer should choose the one approach that they would like to unlock the Exercise with, considering the student's journey through the track. In the loops/recursion example, they might think this exercise is a good early practice of `loops`, or they might prefer to leave it until later to teach recursion. They can also make use of an analyzer to prompt the student to try an alternative approach: "Nice work on solving this via loops. You might also like to try solving this using Recursion."
- There should be one or more prerequisites Concepts per exercise.
Although ideally all Concepts should be taught by Concept Exercises, we recognise that it will take time for tracks to achieve that. Any Practice Exercises that have prerequisites which are not taught by Concept Exercises, will become unlocked once the final Concept Exercise has been completed.
## Goal
### Practices
The `"practices"` field of each element in the `"exercises.practice"` field in the `config.json` file should be updated to contain the practice concepts. See [the spec](https://github.com/exercism/v3-docs/blob/main/anatomy/tracks/config-json.md#practice-exercises).
To help with identifying the practice concepts, the `"topics"` field can be used (if it has any contents). Once the practice concepts have been defined for a Practice Exercise, the `"topics"` field should be removed.
Each practice concept should have its own entry in the top-level `"concepts"` array. See [the spec](https://github.com/exercism/v3-docs/blob/main/anatomy/tracks/config-json.md#concepts).
### Prerequisites
The `"prerequisites"` field of each element in the `"exercises.practice"` field in the `config.json` file should be updated to contain the prerequisite concepts. See [the spec](https://github.com/exercism/v3-docs/blob/main/anatomy/tracks/config-json.md#practice-exercises).
To help with identifying the prerequisites, the `"topics"` field can be used (if it has any contents). Once prerequisites have been defined for a Practice Exercise, the `"topics"` field should be removed.
Each prerequisite concept should have its own entry in the top-level `"concepts"` array. See [the spec](https://github.com/exercism/v3-docs/blob/main/anatomy/tracks/config-json.md#concepts).
## Example
```json
{
"exercises": {
"practice": [
{
"uuid": "8ba15933-29a2-49b1-a9ce-70474bad3007",
"slug": "leap",
"name": "Leap",
"practices": ["if-statements", "numbers", "operator-precedence"],
"prerequisites": ["if-statements", "numbers"],
"difficulty": 1
}
]
}
}
```
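The field rules above can also be checked mechanically against a track's `config.json`. The sketch below is illustrative only — the `check_practice_exercises` helper and its messages are assumptions, not part of any Exercism tooling — but it encodes the constraints as stated: non-empty `practices` and `prerequisites`, every slug present in the top-level `concepts` array, and no leftover `topics` field.

```python
import json

def check_practice_exercises(config):
    """Collect violations of the practices/prerequisites rules described above."""
    known = {c["slug"] for c in config.get("concepts", [])}
    problems = []
    for ex in config.get("exercises", {}).get("practice", []):
        for field in ("practices", "prerequisites"):
            slugs = ex.get(field, [])
            if not slugs:  # one or more concepts are required per field
                problems.append(f"{ex['slug']}: empty {field}")
            for slug in slugs:
                if slug not in known:  # every slug must exist in top-level concepts
                    problems.append(f"{ex['slug']}: unknown concept {slug!r} in {field}")
        if "topics" in ex:  # topics should be removed once concepts are assigned
            problems.append(f"{ex['slug']}: leftover 'topics' field")
    return problems

demo = json.loads("""
{
  "concepts": [{"slug": "numbers"}, {"slug": "if-statements"}],
  "exercises": {"practice": [
    {"slug": "leap", "practices": ["numbers"], "prerequisites": ["if-statements"]},
    {"slug": "two-fer", "practices": [], "prerequisites": ["strings"]}
  ]}
}
""")
for problem in check_practice_exercises(demo):
    print(problem)  # flags two-fer twice: empty practices, unknown prerequisite
```

A real validator would also check the `uuid` and `name` fields required on each concept entry per the linked spec.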
## Tracking
https://github.com/exercism/v3-launch/issues/6
| True | [v3] Add prerequisites to Practice Exercises - _This issue is part of the migration to v3. You can read full details about the various changes [here](https://github.com/exercism/v3-launch/)._
Exercism v3 introduces a new type of exercise: [Concept Exercises](https://github.com/exercism/v3-docs/blob/main/product/concept-exercises.md). All existing (V2) exercises will become [Practice Exercises](https://github.com/exercism/v3-docs/blob/main/product/practice-exercises.md).
Concept Exercises and Practice Exercises are linked to each other via [Concepts](https://github.com/exercism/v3-docs/blob/main/anatomy/tracks/concepts.md). Concepts are taught by Concept Exercises and practiced in Practice Exercises. Each Exercise (Concept or Practice) has prerequisites, which must be met to unlock an Exercise - once all the prerequisite Concepts have been "taught" by a Concept Exercise, the exercise itself becomes unlocked.
For example, in some languages completing the Concept Exercises that teach the "String Interpolation" and "Optional Parameters" concepts might then unlock the `two-fer` Practice Exercise.
Each Practice Exercise has two fields containing concepts: a `practices` field and a `prerequisites` field.
## Practices
The `practices` key should list the slugs of Concepts that this Practice Exercise actively allows a student to practice.
- These show up in the UI as "Practice this Concept in: TwoFer, Leap, etc"
- Try and choose 3 - 8 Exercises that practice each Concept.
- Try and choose at least two Exercises that allow someone to practice the basics of a Concept.
- Some Concepts are very common (for example `strings`). In those cases we recommend choosing a few good exercises that make people think about those Concepts in interesting ways. For example, exercises that require UTF-8, string concatenation, char enumeration, etc, would all be good examples.
- There should be one or more Concepts to practice per exercise.
## Prerequisites
The `prerequisites` key lists the Concept Exercises that a student must have completed in order to access this Practice Exercise.
- These show up in the UI as "Learn Strings to unlock TwoFer"
- It should include all Concepts that a student needs to have covered to be able to complete the exercise in at least one idiomatic way. For example, for the TwoFer exercise in Ruby, prerequisites might include `strings`, `optional-params`, `implicit-return`.
- For Exercises that can be completed using alternative Concepts (e.g. an Exercise solvable by `loops` or `recursion`), the maintainer should choose the one approach that they would like to unlock the Exercise with, considering the student's journey through the track. In the loops/recursion example, they might think this exercise is a good early practice of `loops`, or they might prefer to leave it until later to teach recursion. They can also make use of an analyzer to prompt the student to try an alternative approach: "Nice work on solving this via loops. You might also like to try solving this using Recursion."
- There should be one or more prerequisites Concepts per exercise.
Although ideally all Concepts should be taught by Concept Exercises, we recognise that it will take time for tracks to achieve that. Any Practice Exercises that have prerequisites which are not taught by Concept Exercises, will become unlocked once the final Concept Exercise has been completed.
## Goal
### Practices
The `"practices"` field of each element in the `"exercises.practice"` field in the `config.json` file should be updated to contain the practice concepts. See [the spec](https://github.com/exercism/v3-docs/blob/main/anatomy/tracks/config-json.md#practice-exercises).
To help with identifying the practice concepts, the `"topics"` field can be used (if it has any contents). Once the practice concepts have been defined for a Practice Exercise, the `"topics"` field should be removed.
Each practice concept should have its own entry in the top-level `"concepts"` array. See [the spec](https://github.com/exercism/v3-docs/blob/main/anatomy/tracks/config-json.md#concepts).
### Prerequisites
The `"prerequisites"` field of each element in the `"exercises.practice"` field in the `config.json` file should be updated to contain the prerequisite concepts. See [the spec](https://github.com/exercism/v3-docs/blob/main/anatomy/tracks/config-json.md#practice-exercises).
To help with identifying the prerequisites, the `"topics"` field can be used (if it has any contents). Once prerequisites have been defined for a Practice Exercise, the `"topics"` field should be removed.
Each prerequisite concept should have its own entry in the top-level `"concepts"` array. See [the spec](https://github.com/exercism/v3-docs/blob/main/anatomy/tracks/config-json.md#concepts).
## Example
```json
{
"exercises": {
"practice": [
{
"uuid": "8ba15933-29a2-49b1-a9ce-70474bad3007",
"slug": "leap",
"name": "Leap",
"practices": ["if-statements", "numbers", "operator-precedence"],
"prerequisites": ["if-statements", "numbers"],
"difficulty": 1
}
]
}
}
```
## Tracking
https://github.com/exercism/v3-launch/issues/6
| main | add prerequisites to practice exercises this issue is part of the migration to you can read full details about the various changes exercism introduces a new type of exercise all existing exercises will become concept exercises and practice exercises are linked to each other via concepts are taught by concept exercises and practiced in practice exercises each exercise concept or practice has prerequisites which must be met to unlock an exercise once all the prerequisite concepts have been taught by a concept exercise the exercise itself becomes unlocked for example in some languages completing the concept exercises that teach the string interpolation and optional parameters concepts might then unlock the two fer practice exercise each practice exercise has two fields containing concepts a practices field and a prerequisites field practices the practices key should list the slugs of concepts that this practice exercise actively allows a student to practice these show up in the ui as practice this concept in twofer leap etc try and choose exercises that practice each concept try and choose at least two exercises that allow someone to practice the basics of a concept some concepts are very common for example strings in those cases we recommend choosing a few good exercises that make people think about those concepts in interesting ways for example exercises that require utf string concatenation char enumeration etc would all be good examples there should be one or more concepts to practice per exercise prerequisites the prerequisites key lists the concept exercises that a student must have completed in order to access this practice exercise these show up in the ui as learn strings to unlock twofer it should include all concepts that a student needs to have covered to be able to complete the exercise in at least one idiomatic way for example for the twofer exercise in ruby prerequisites might include strings optional params implicit return for exercises that 
can be completed using alternative concepts e g an exercise solvable by loops or recursion the maintainer should choose the one approach that they would like to unlock the exercise considering the student s journey through the track for example the loops recursion example they might think this exercise is a good early practice of loops or that they might like to leave it later to teach recursion they can also make use of an analyzer to prompt the student to try an alternative approach nice work on solving this via loops you might also like to try solving this using recursion there should be one or more prerequisites concepts per exercise although ideally all concepts should be taught by concept exercises we recognise that it will take time for tracks to achieve that any practice exercises that have prerequisites which are not taught by concept exercises will become unlocked once the final concept exercise has been completed goal practices the practices field of each element in the exercises practice field in the config json file should be updated to contain the practice concepts see to help with identifying the practice concepts the topics field can be used if it has any contents once prerequisites have been defined for a practice exercise the topics field should be removed each practice concept should have its own entry in the top level concepts array see prerequisites the prerequisites field of each element in the exercises practice field in the config json file should be updated to contain the prerequisite concepts see to help with identifying the prerequisites the topics field can be used if it has any contents once prerequisites have been defined for a practice exercise the topics field should be removed each prerequisite concept should have its own entry in the top level concepts array see example json exercises practice uuid slug leap name leap practices prerequisites difficulty tracking | 1 |
390,682 | 11,551,740,475 | IssuesEvent | 2020-02-19 02:31:05 | googleapis/python-pubsub | https://api.github.com/repos/googleapis/python-pubsub | closed | Synthesis failed for python-pubsub | :rotating_light: api: pubsub autosynth failure priority: p1 type: bug | Hello! Autosynth couldn't regenerate python-pubsub. :broken_heart:
Here's the output from running `synth.py`:
```
Cloning into 'working_repo'...
Switched to branch 'autosynth'
Running synthtool
['/tmpfs/src/git/autosynth/env/bin/python3', '-m', 'synthtool', 'synth.py', '--']
synthtool > Executing /tmpfs/src/git/autosynth/working_repo/synth.py.
synthtool > Ensuring dependencies.
synthtool > Pulling artman image.
latest: Pulling from googleapis/artman
0a01a72a686c: Pulling fs layer
cc899a5544da: Pulling fs layer
19197c550755: Pulling fs layer
716d454e56b6: Pulling fs layer
356bff9680f9: Pulling fs layer
e9c3c22e42c0: Pulling fs layer
942b7dc330dd: Pulling fs layer
56709773ca4a: Pulling fs layer
8170975fe908: Pulling fs layer
d8b5c7fe7729: Pulling fs layer
dccf1bbd0660: Pulling fs layer
0f1049eb8f0b: Pulling fs layer
0637b8e4edf2: Pulling fs layer
fe441baf69db: Pulling fs layer
93bd4e7a0e34: Pulling fs layer
69ac465d8b6d: Pulling fs layer
ff53b2ac7bd7: Pulling fs layer
7c49755f75be: Pulling fs layer
03847e8fcbe9: Pulling fs layer
b017baf7ee37: Pulling fs layer
bfd64ffd417b: Pulling fs layer
25de3dde113f: Pulling fs layer
880ff89656a7: Pulling fs layer
253833b214f0: Pulling fs layer
6aff358b4b78: Pulling fs layer
bdf245069246: Pulling fs layer
5fa91c043e36: Pulling fs layer
a8dbf9c918af: Pulling fs layer
716d454e56b6: Waiting
356bff9680f9: Waiting
e9c3c22e42c0: Waiting
942b7dc330dd: Waiting
56709773ca4a: Waiting
8170975fe908: Waiting
d8b5c7fe7729: Waiting
dccf1bbd0660: Waiting
ff53b2ac7bd7: Waiting
7c49755f75be: Waiting
03847e8fcbe9: Waiting
b017baf7ee37: Waiting
6aff358b4b78: Waiting
bfd64ffd417b: Waiting
5fa91c043e36: Waiting
bdf245069246: Waiting
25de3dde113f: Waiting
880ff89656a7: Waiting
a8dbf9c918af: Waiting
fe441baf69db: Waiting
93bd4e7a0e34: Waiting
0637b8e4edf2: Waiting
253833b214f0: Waiting
69ac465d8b6d: Waiting
19197c550755: Verifying Checksum
19197c550755: Download complete
cc899a5544da: Verifying Checksum
cc899a5544da: Download complete
0a01a72a686c: Verifying Checksum
0a01a72a686c: Download complete
716d454e56b6: Verifying Checksum
716d454e56b6: Download complete
356bff9680f9: Verifying Checksum
356bff9680f9: Download complete
942b7dc330dd: Verifying Checksum
942b7dc330dd: Download complete
56709773ca4a: Verifying Checksum
56709773ca4a: Download complete
d8b5c7fe7729: Verifying Checksum
d8b5c7fe7729: Download complete
8170975fe908: Verifying Checksum
8170975fe908: Download complete
0f1049eb8f0b: Verifying Checksum
0f1049eb8f0b: Download complete
0a01a72a686c: Pull complete
dccf1bbd0660: Verifying Checksum
dccf1bbd0660: Download complete
cc899a5544da: Pull complete
19197c550755: Pull complete
716d454e56b6: Pull complete
fe441baf69db: Verifying Checksum
fe441baf69db: Download complete
e9c3c22e42c0: Verifying Checksum
e9c3c22e42c0: Download complete
93bd4e7a0e34: Verifying Checksum
93bd4e7a0e34: Download complete
69ac465d8b6d: Verifying Checksum
69ac465d8b6d: Download complete
0637b8e4edf2: Verifying Checksum
0637b8e4edf2: Download complete
ff53b2ac7bd7: Verifying Checksum
ff53b2ac7bd7: Download complete
356bff9680f9: Pull complete
7c49755f75be: Download complete
03847e8fcbe9: Verifying Checksum
03847e8fcbe9: Download complete
bfd64ffd417b: Download complete
880ff89656a7: Verifying Checksum
880ff89656a7: Download complete
253833b214f0: Download complete
6aff358b4b78: Verifying Checksum
6aff358b4b78: Download complete
b017baf7ee37: Verifying Checksum
b017baf7ee37: Download complete
bdf245069246: Download complete
5fa91c043e36: Verifying Checksum
5fa91c043e36: Download complete
a8dbf9c918af: Verifying Checksum
a8dbf9c918af: Download complete
25de3dde113f: Verifying Checksum
25de3dde113f: Download complete
e9c3c22e42c0: Pull complete
942b7dc330dd: Pull complete
56709773ca4a: Pull complete
8170975fe908: Pull complete
d8b5c7fe7729: Pull complete
dccf1bbd0660: Pull complete
0f1049eb8f0b: Pull complete
0637b8e4edf2: Pull complete
fe441baf69db: Pull complete
93bd4e7a0e34: Pull complete
69ac465d8b6d: Pull complete
ff53b2ac7bd7: Pull complete
7c49755f75be: Pull complete
03847e8fcbe9: Pull complete
b017baf7ee37: Pull complete
bfd64ffd417b: Pull complete
25de3dde113f: Pull complete
880ff89656a7: Pull complete
253833b214f0: Pull complete
6aff358b4b78: Pull complete
bdf245069246: Pull complete
5fa91c043e36: Pull complete
a8dbf9c918af: Pull complete
Digest: sha256:6aec9c34db0e4be221cdaf6faba27bdc07cfea846808b3d3b964dfce3a9a0f9b
Status: Downloaded newer image for googleapis/artman:latest
synthtool > Cloning googleapis.
synthtool > Running generator for google/pubsub/artman_pubsub.yaml.
synthtool > Generated code into /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/pubsub-v1.
synthtool > Copy: /home/kbuilder/.cache/synthtool/googleapis/google/pubsub/v1/pubsub.proto to /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/pubsub-v1/google/cloud/pubsub_v1/proto/pubsub.proto
synthtool > Placed proto files into /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/pubsub-v1/google/cloud/pubsub_v1/proto.
synthtool > Replaced 'from google.cloud import pubsub_v1' in tests/unit/gapic/v1/test_publisher_client_v1.py.
synthtool > Replaced ' pubsub_v1' in tests/unit/gapic/v1/test_publisher_client_v1.py.
synthtool > Replaced 'from google.cloud import pubsub_v1' in tests/unit/gapic/v1/test_subscriber_client_v1.py.
synthtool > Replaced ' pubsub_v1' in tests/unit/gapic/v1/test_subscriber_client_v1.py.
synthtool > Replaced '# The name of the interface for this client. This is the key used to' in google/cloud/pubsub_v1/gapic/subscriber_client.py.
synthtool > Replaced '# The name of the interface for this client. This is the key used to' in google/cloud/pubsub_v1/gapic/publisher_client.py.
synthtool > Replaced 'import google.api_core.gapic_v1.method\n' in google/cloud/pubsub_v1/gapic/publisher_client.py.
synthtool > Replaced 'DESCRIPTOR = _MESSAGESTORAGEPOLICY,\n\\s+__module__.*\n\\s+,\n\\s+__doc__ = """' in google/cloud/pubsub_v1/proto/pubsub_pb2.py.
synthtool > No replacements made in google/cloud/pubsub_v1/gapic/subscriber_client.py for pattern subscription \(str\): The subscription whose backlog .*
(.*
)+?\s+Format is .*, maybe replacement is not longer needed?
synthtool > Replaced 'import functools\n' in google/cloud/pubsub_v1/gapic/publisher_client.py.
synthtool > Replaced 'import pkg_resources\n' in google/cloud/pubsub_v1/gapic/publisher_client.py.
synthtool > Replaced 'class PublisherClient' in google/cloud/pubsub_v1/gapic/publisher_client.py.
synthtool > Replaced 'client_config \\(dict\\): DEPRECATED.' in google/cloud/pubsub_v1/gapic/publisher_client.py.
synthtool > Replaced '# Raise deprecation warnings .*\n.*\n.*\n.*\n.*\n.*\n' in google/cloud/pubsub_v1/gapic/publisher_client.py.
synthtool > Replaced '~google.api_core.page_iterator.PageIterator' in google/cloud/pubsub_v1/gapic/publisher_client.py.
synthtool > Replaced '~google.api_core.page_iterator.PageIterator' in google/cloud/pubsub_v1/gapic/subscriber_client.py.
synthtool > Replaced 'from google.iam.v1 import iam_policy_pb2' in google/cloud/pubsub_v1/gapic/transports/subscriber_grpc_transport.py.
synthtool > Replaced 'from google.iam.v1 import iam_policy_pb2' in google/cloud/pubsub_v1/gapic/transports/publisher_grpc_transport.py.
.coveragerc
.flake8
.github/CONTRIBUTING.md
.github/ISSUE_TEMPLATE/bug_report.md
.github/ISSUE_TEMPLATE/feature_request.md
.github/ISSUE_TEMPLATE/support_request.md
.github/PULL_REQUEST_TEMPLATE.md
.github/release-please.yml
.gitignore
.kokoro/build.sh
.kokoro/continuous/common.cfg
.kokoro/continuous/continuous.cfg
.kokoro/docs/common.cfg
.kokoro/docs/docs.cfg
.kokoro/presubmit/common.cfg
.kokoro/presubmit/presubmit.cfg
.kokoro/publish-docs.sh
.kokoro/release.sh
.kokoro/release/common.cfg
.kokoro/release/release.cfg
.kokoro/trampoline.sh
CODE_OF_CONDUCT.md
CONTRIBUTING.rst
LICENSE
MANIFEST.in
docs/_static/custom.css
docs/_templates/layout.html
docs/conf.py.j2
noxfile.py.j2
renovate.json
setup.cfg
synthtool > No replacements made in noxfile.py for pattern session\.install\("-e", "\.\./test_utils/"\), maybe replacement is not longer needed?
Running session blacken
Creating virtual environment (virtualenv) using python3.6 in .nox/blacken
pip install black==19.3b0
Error: pip is not installed into the virtualenv, it is located at /tmpfs/src/git/autosynth/env/bin/pip. Pass external=True into run() to explicitly allow this.
Session blacken failed.
synthtool > Failed executing nox -s blacken:
None
synthtool > Wrote metadata to synth.metadata.
Traceback (most recent call last):
File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/__main__.py", line 99, in <module>
main()
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py", line 764, in __call__
return self.main(*args, **kwargs)
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py", line 717, in main
rv = self.invoke(ctx)
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py", line 956, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py", line 555, in invoke
return callback(*args, **kwargs)
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/__main__.py", line 91, in main
spec.loader.exec_module(synth_module) # type: ignore
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 205, in _call_with_frames_removed
File "/tmpfs/src/git/autosynth/working_repo/synth.py", line 205, in <module>
s.shell.run(["nox", "-s", "blacken"], hide_output=False)
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/shell.py", line 39, in run
raise exc
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/shell.py", line 33, in run
encoding="utf-8",
File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/subprocess.py", line 418, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['nox', '-s', 'blacken']' returned non-zero exit status 1.
Synthesis failed
```
Google internal developers can see the full log [here](https://sponge/71791922-d263-4d09-a962-3c1472f71410).
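For context, the traceback bottoms out in `synthtool.shell.run`, which wraps `subprocess.run` (presumably with `check=True`, given the re-raised `CalledProcessError`); any non-zero exit from `nox -s blacken` therefore aborts synthesis. A minimal sketch of that chain, simplified relative to the real `synthtool.shell.run`:

```python
import subprocess
import sys

def shell_run(args, hide_output=True):
    # Simplified stand-in for synthtool.shell.run: run the command and let a
    # non-zero exit status raise CalledProcessError (check=True).
    return subprocess.run(args, check=True, encoding="utf-8",
                          capture_output=hide_output)

# Simulate a failing session, like `nox -s blacken` exiting with status 1.
try:
    shell_run([sys.executable, "-c", "raise SystemExit(1)"])
    status = 0
except subprocess.CalledProcessError as exc:
    status = exc.returncode
    print(f"Synthesis failed: exit status {status}")
```

The underlying nox error ("pip is not installed into the virtualenv ... Pass `external=True` into `run()`") indicates the blacken session resolved `pip` from outside its virtualenv; in nox, commands that intentionally use binaries outside the session venv must be invoked as `session.run(..., external=True)`, or the tool must be installed into the session venv itself.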
78,472 | 10,062,169,790 | IssuesEvent | 2019-07-22 23:56:27 | pypa/pipenv | https://api.github.com/repos/pypa/pipenv | closed | Animation in README and on RTD is inaccessible | Priority: Medium Type: Documentation :book: help wanted | ### Issue description
The animation referenced in the README and on Read the Docs currently doesn't show up.
### Expected result
Seeing the animation.
### Actual result
The response from S3 is as follows:
```xml
<Error>
<Code>AllAccessDisabled</Code>
<Message>All access to this object has been disabled</Message>
<RequestId>BB41E4330F22C8C6</RequestId>
<HostId>7Nm9ixlUsFQEkIJJcU+jdU/6m47oM5m3/SXvB/+hNSTlJsd4AdcaPSl0FLWB/aFHgMPQJrvsJPU=</HostId>
</Error>
```
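As a side note, the `Code` element is the field worth matching on when handling such an S3 error document programmatically. A minimal sketch, using an abbreviated copy of the response above:

```python
import xml.etree.ElementTree as ET

# Abbreviated copy of the S3 error document shown above.
response_body = """<Error>
  <Code>AllAccessDisabled</Code>
  <Message>All access to this object has been disabled</Message>
</Error>"""

code = ET.fromstring(response_body).findtext("Code")
print(code)  # AllAccessDisabled
```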
### Steps to replicate
1. Go to the [main repo page](https://github.com/pypa/pipenv) or [docs homepage](https://pipenv.readthedocs.io/en/latest/)
2. Scroll to where the animation is supposed to be
3. Observe the image not loading
### Screenshots
<img width="1516" alt="Screen Shot 2019-04-01 at 7 52 28 PM" src="https://user-images.githubusercontent.com/568543/55372753-b2619a00-54b7-11e9-825d-764640160042.png">
<img width="1516" alt="Screen Shot 2019-04-01 at 7 52 17 PM" src="https://user-images.githubusercontent.com/568543/55372752-b2619a00-54b7-11e9-9318-cc91ef9d1b63.png">
4,076 | 19,250,029,278 | IssuesEvent | 2021-12-09 03:18:52 | aws/aws-lambda-builders | https://api.github.com/repos/aws/aws-lambda-builders | closed | Python version validation fails in CircleCI cimg/python3.9 | area/workflow/python_pip blocked/more-info-needed maintainer/need-followup | **Description:**
The SAM CLI calls this library, which then fails while validating the installed Python version, despite a valid installation being present. Interestingly, the legacy image `circleci/python:3.9.6` works fine.
**Steps to reproduce the issue:**
1. Create a CircleCI pipeline with the executor image `cimg/python:3.9.6`
2. Run `sam build` with any template file
**Observed result:**
The command fails when trying to validate the Python version with these logs:
```
2021-09-10 01:43:20,142 | Invalid executable for python at /home/circleci/.pyenv/shims/python3.9
Traceback (most recent call last):
File "aws_lambda_builders/workflow.py", line 58, in wrapper
File "aws_lambda_builders/workflows/python_pip/validator.py", line 48, in validate
aws_lambda_builders.exceptions.MisMatchRuntimeError: python executable found in your path does not match runtime.
Expected version: python3.9, Found version: /home/circleci/.pyenv/shims/python3.9.
Possibly related: https://github.com/awslabs/aws-lambda-builders/issues/30
2021-09-10 01:43:20,157 | Invalid executable for python at /home/circleci/.pyenv/shims/python
Traceback (most recent call last):
File "aws_lambda_builders/workflow.py", line 58, in wrapper
File "aws_lambda_builders/workflows/python_pip/validator.py", line 48, in validate
aws_lambda_builders.exceptions.MisMatchRuntimeError: python executable found in your path does not match runtime.
Expected version: python3.9, Found version: /home/circleci/.pyenv/shims/python.
Possibly related: https://github.com/awslabs/aws-lambda-builders/issues/30
```
**Expected result:**
The command completes successfully.
**Other notes**
When running the command generated by the validation module python_pip.validator manually like so:
```bash
/home/circleci/.pyenv/shims/python3.9 -c "import sys; assert sys.version_info.major == 3 and sys.version_info.minor == 9"
```
it completes successfully. This works with all Python versions found on the PATH checked by the validator.
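For reference, the validator's check can be reproduced with a small sketch: spawn the candidate interpreter and have it assert its own version, treating a nonzero exit as a mismatch. The function name here is illustrative, not actual `aws-lambda-builders` API:

```python
import subprocess
import sys

def validate_python_version(executable: str, major: int, minor: int) -> bool:
    """Ask the candidate interpreter to assert its own version, mirroring
    the command quoted above; a nonzero exit code means a mismatch."""
    check = (
        f"import sys; assert sys.version_info.major == {major} "
        f"and sys.version_info.minor == {minor}"
    )
    result = subprocess.run([executable, "-c", check], capture_output=True)
    return result.returncode == 0

# The interpreter running this script must match its own reported version...
assert validate_python_version(sys.executable, sys.version_info.major,
                               sys.version_info.minor)
# ...and must fail a deliberately wrong expectation.
assert not validate_python_version(sys.executable, 2, 4)
```

Since the shim passes this check when run by hand, the mismatch reported in the logs presumably comes from how the validator resolves or invokes the executable, not from the version assertion itself.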
22,630 | 7,195,417,884 | IssuesEvent | 2018-02-04 16:57:43 | eclipse/openj9 | https://api.github.com/repos/eclipse/openj9 | closed | Github->Jenkins PR Trigger Comments need to support jdk8 | comp:build jdk8 | related #660
We currently support the following PR builds via Github PR comments.
- OpenJ9-JDK9 zLinux (linux_390-64_cmprssptrs)
- compile only
- sanity
- extended
- OpenJ9-JDK9 pLinux (linux_ppc-64_cmprssptrs_le)
- compile only
- sanity
- extended
You can trigger these builds via PR commit comment
`Jenkins compile`
`Jenkins compile zLinux`
`Jenkins compile pLinux`
`Jenkins test sanity`
`Jenkins test sanity zlinux`
`Jenkins test sanity plinux`
`Jenkins test extended`
`Jenkins test extended zlinux`
`Jenkins test extended plinux`
Here is the regex from the zLinux Sanity build:
`.*jenkins test sanity(.*zlinux.*)?(?! xlinux)(?! plinux).*`
Each job has a slightly different regex that allows the job to run when a comment matches it.
We need to update the regexes to support more JDK versions (i.e. 8 and 10 for the immediate future).
The current regex 'model' is harder to maintain because each job has to maintain the list of platforms it is not interested in. The alternate would be to have a regex that looks for `all` or `<my_platform>`. This same problem will apply to JDK versions once we add 8,10.
#### Final regex example
`.*\bjenkins\s+test\s+sanity\b\s*($|\n|depends\s+.*|(all|([a-z]+,)*zlinux(,[a-z]+)*)\s*($|\n|depends\s+.*|all|(jdk[0-9]+,)*jdk8(,jdk[0-9]+)*)(\s+depends.*)?)`
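As a quick check of the proposed pattern, the regex can be exercised against sample trigger comments. This is a sketch assuming Python's `re` semantics are close enough to the Jenkins plugin's matcher; comments are lowercased here since the plugin's case handling isn't shown:

```python
import re

# The "final regex example" above, for the zLinux sanity job.
TRIGGER = re.compile(
    r".*\bjenkins\s+test\s+sanity\b\s*"
    r"($|\n|depends\s+.*|(all|([a-z]+,)*zlinux(,[a-z]+)*)\s*"
    r"($|\n|depends\s+.*|all|(jdk[0-9]+,)*jdk8(,jdk[0-9]+)*)(\s+depends.*)?)"
)

should_match = [
    "jenkins test sanity",               # bare trigger runs every platform
    "jenkins test sanity zlinux",        # explicit platform
    "jenkins test sanity zlinux jdk8",   # platform plus JDK version
    "jenkins test sanity all jdk8,jdk10",
]
should_not_match = [
    "jenkins test sanity plinux",        # another platform's job stays quiet
    "jenkins compile zlinux",            # different trigger verb
]
for comment in should_match:
    assert TRIGGER.match(comment), comment
for comment in should_not_match:
    assert not TRIGGER.match(comment), comment
```

Note that in this formulation a JDK list is only accepted after a platform token (`all` or an explicit platform), which avoids each job having to enumerate the platforms it is *not* interested in.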
78,180 | 27,359,124,526 | IssuesEvent | 2023-02-27 14:47:22 | hazelcast/hazelcast | https://api.github.com/repos/hazelcast/hazelcast | closed | submitJobFromJar uses whole buffer size even if sent part has lower size [HZ-2056] | Type: Defect Source: Internal Module: Jet to-jira Team: Platform | When a jar is submitted to the cluster by `submitJobFromJar`, it allocates a buffer (by default `10MB`), splits the jar into buffer-sized parts, and sends them over the network. However, it sends the full buffer even for the last part of the jar, which is smaller than the allocated buffer. This means that for a small jar of a couple of kilobytes (where the last part is the whole jar) we still have to send `10MB` of data. On a slower network this can increase upload time significantly (for example, at `256KB/s` it currently takes 40 seconds to upload a `7KB` jar).
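The 40-second figure follows directly from the numbers in the report; a worked sketch of the arithmetic:

```python
# Worked arithmetic behind the report: the whole 10 MB buffer goes out even
# when the final (here: only) part holds just ~7 KB of jar bytes.
BUFFER_BYTES = 10 * 1024 * 1024   # default part buffer size: 10 MB
LINK_BYTES_PER_S = 256 * 1024     # network speed from the example: 256 KB/s
JAR_BYTES = 7 * 1024              # small jar: 7 KB

full_buffer_seconds = BUFFER_BYTES / LINK_BYTES_PER_S   # what happens today
actual_bytes_seconds = JAR_BYTES / LINK_BYTES_PER_S     # if only real bytes went out

print(full_buffer_seconds)    # 40.0 -> matches the 40 seconds in the report
print(actual_bytes_seconds)   # ~0.027 seconds
```

In other words, trimming the last part to its actual size would cut this upload by roughly three orders of magnitude on that link.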
910 | 4,581,543,090 | IssuesEvent | 2016-09-19 06:15:32 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | ios_config does not allow looping under parents | affects_2.1 bug_report networking waiting_on_maintainer | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ios_config
##### ANSIBLE VERSION
ansible 2.1.1.0
config file = /home/ansible/ios_config/ansible.cfg
configured module search path = Default w/o overrides
ansible@unl01:~/ios_config$
##### CONFIGURATION
default ansible.cfg
##### OS / ENVIRONMENT
Ubuntu 14.04 LTS
##### SUMMARY
I am unable to use looping (with_items) with the parents option of ios_config.
##### STEPS TO REPRODUCE
```
- name: no shut
ios_config:
provider: "{{ provider }}"
lines:
- no shutdown
parents: "interface {{ item.interface }}"
with_items:
- { interface : Ethernet0/1 }
- { interface : Ethernet0/2 }
- { interface : Ethernet0/3 }
```
OR:
```
- name: no shut
ios_config:
provider: "{{ provider }}"
lines:
- no shutdown
parents: "interface {{ item.interface }}"
with_items:
- "{{ ip_addresses }}"
(in host inventory)
ip_addresses:
- interface: Loopback0
description: Router-ID
ip_address: 1.1.1.1
ip_mask: 255.255.255.255
- interface: Ethernet0/1
description: "To-ISP1"
ip_address: 172.31.1.2
ip_mask: 255.255.255.252
- interface: Ethernet0/2
description: "To-SW1"
ip_address: 10.1.1.2
ip_mask: 255.255.255.0
standby_grp: 1
standby_ip: 10.1.1.1
standby_pri: 200
- interface: Ethernet0/3
description: "To-WAN2"
ip_address: 10.1.254.253
ip_mask: 255.255.255.252
```
---------------- OUTPUT-----------------
________________
< TASK [no shut] >
----------------
```
task path: /home/ansible/ios_config/ios_config_play.yml:38
fatal: [wan1.cisco.com]: FAILED! => {"failed": true, "msg": "the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: 'item' is undefined\n\nThe error appears to have been in '/home/ansible/ios_config/ios_config_play.yml': line 38, column 5, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - name: no shut\n ^ here\n"}
msg: the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: 'item' is undefined
The error appears to have been in '/home/ansible/ios_config/ios_config_play.yml': line 38, column 5, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- name: no shut
^ here
```
##### EXPECTED RESULTS
the no shut command is run under interface X, interface Y, interface Z as defined in the with_items list
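In plain Python, the per-item expansion being asked for looks like this (a hypothetical sketch of what each `with_items` iteration should produce, not the module's actual internals):

```python
ip_addresses = [
    {"interface": "Ethernet0/1"},
    {"interface": "Ethernet0/2"},
    {"interface": "Ethernet0/3"},
]

# Each with_items iteration should template `parents` from the current item
# and apply the same `lines` under that parent section.
rendered = []
for item in ip_addresses:
    rendered.append({
        "parents": [f"interface {item['interface']}"],
        "lines": ["no shutdown"],
    })

assert rendered[0]["parents"] == ["interface Ethernet0/1"]
assert all(task["lines"] == ["no shutdown"] for task in rendered)
```

The error below suggests the module's arguments are validated before `item` is injected into the templating context, so the loop never gets as far as this expansion.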
##### ACTUAL RESULTS
With this debug, I can see that the with_items should work
```
- name: test
debug: msg="{{ item.interface }}"
with_items:
- "{{ ip_addresses }}"
```
```
_____________
< TASK [test] >
-------------
\ ^__^
\ (oo)\_______
(__)\ )\/\
||----w |
|| ||
task path: /home/ansible/ios_config/ios_config_play.yml:33
ok: [wan1.cisco.com] => (item={u'interface': u'Loopback0', u'ip_mask': u'255.255.255.255', u'ip_address': u'1.1.1.1', u'description': u'Router-ID'}) => {
"invocation": {
"module_args": {
"msg": "Loopback0"
},
"module_name": "debug"
},
"item": {
"description": "Router-ID",
"interface": "Loopback0",
"ip_address": "1.1.1.1",
"ip_mask": "255.255.255.255"
},
"msg": "Loopback0"
}
ok: [wan1.cisco.com] => (item={u'interface': u'Ethernet0/1', u'ip_mask': u'255.255.255.252', u'ip_address': u'172.31.1.2', u'description': u'To-ISP1'}) => {
"invocation": {
"module_args": {
"msg": "Ethernet0/1"
},
"module_name": "debug"
},
"item": {
"description": "To-ISP1",
"interface": "Ethernet0/1",
"ip_address": "172.31.1.2",
"ip_mask": "255.255.255.252"
},
"msg": "Ethernet0/1"
```
etc.
This is an extremely standard networking use case (i.e. making selective changes under defined interface parent sections). If I've messed up anything basic then I apologise in advance, but I can't see why the debug works while the real thing fails. I've even gone back to basics and manually defined the dictionary instead of calling it.
51,052 | 3,010,381,038 | IssuesEvent | 2015-07-28 12:56:59 | knime-mpicbg/HCS-Tools | https://api.github.com/repos/knime-mpicbg/HCS-Tools | closed | MAD-Aggregation throws error for some columns | bugs high priority | For some columns the aggregation with HCS-Mad does throw the following error:
ERROR GroupBy : Execute failed: org.knime.core.data.def.IntCell cannot be cast to org.knime.core.data.def.DoubleCell
4,713 | 24,282,099,496 | IssuesEvent | 2022-09-28 18:21:45 | Lissy93/dashy | https://api.github.com/repos/Lissy93/dashy | closed | [QUESTION] How can I control the number of widgets per row in a section? | 🤷♂️ Question 👤 Awaiting Maintainer Response | ### Question
So I have a dashboard containing multiple widgets in a single section. I would like to have all 3 on one row, similar to how they are displayed on the section page. I searched through the documentation and I don't see anything similar to the `itemCountX` field (used for items) that applies to widgets.
Widgets in the Home Page :

Widgets in the Section Page :

If it's useful, here is the config I'm currently using :
```
appConfig:
theme: colorful
layout: auto
iconSize: medium
language: fr
pageInfo:
title: Dashy
description: Welcome to your Home Lab!
navLinks:
- title: GitHub
path: https://github.com/Lissy93/dashy
- title: Documentation
path: https://dashy.to/docs
footerText: ''
sections:
- name: Host Info
displayData:
cols: 5
cutToHeight: true
sectionLayout: grid
itemCountX: 3
widgets:
- type: gl-current-cpu
options:
hostname: http://192.168.1.100:61208
id: 0_842_glcurrentcpu
- type: gl-current-mem
options:
hostname: http://192.168.1.100:61208
id: 1_842_glcurrentmem
- type: gl-disk-space
options:
hostname: http://192.168.1.100:61208
id: 2_842_gldiskspace
```
I'm clearly not an expert when it comes to CSS and custom style coding... Please help
### Category
Widgets
### Please tick the boxes
- [X] You are using a [supported](https://github.com/Lissy93/dashy/blob/master/.github/SECURITY.md#supported-versions) version of Dashy (check the first two digits of the version number)
- [X] You've checked that this [question hasn't already been raised](https://github.com/Lissy93/dashy/issues?q=is%3Aissue)
- [X] You've checked the [docs](https://github.com/Lissy93/dashy/tree/master/docs#readme) and [troubleshooting](https://github.com/Lissy93/dashy/blob/master/docs/troubleshooting.md#troubleshooting) guide
- [X] You agree to the [code of conduct](https://github.com/Lissy93/dashy/blob/master/.github/CODE_OF_CONDUCT.md#contributor-covenant-code-of-conduct)
64,067 | 8,709,319,423 | IssuesEvent | 2018-12-06 13:38:59 | agda/agda | https://api.github.com/repos/agda/agda | closed | make install-bin on a Mac can fail to install text-icu | documentation installation type: enhancement | On a Mac `brew install icu4c` installs icu into a non-standard location (/usr/local/opt/icu4c/).
As a result, `make install-bin` fails when trying to install the `text-icu` package (required due to `-fenable-cluster-counting`) which relies on standard locations for its C dependencies.
```
Missing dependencies on foreign libraries:
* Missing C libraries: icuuc, icui18n, icudata
This problem can usually be solved by installing the system packages that
provide these libraries (you may need the "-dev" versions). If the libraries
are already installed but in a non-standard location then you can use the
flags --extra-include-dirs= and --extra-lib-dirs= to specify where they are.
```
As noted these flags need to be added to `cabal install` somehow:
```
--extra-lib-dirs=/usr/local/opt/icu4c/lib
--extra-include-dirs=/usr/local/opt/icu4c/include
```
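One way to assemble the full command is sketched below (a throwaway sketch; the prefix is the usual `brew --prefix icu4c` output on Intel Macs — Apple Silicon installs under `/opt/homebrew` instead):

```python
import shlex

# Assumed keg prefix; in practice this would come from `brew --prefix icu4c`.
prefix = "/usr/local/opt/icu4c"

cmd = [
    "cabal", "install", "text-icu",
    f"--extra-lib-dirs={prefix}/lib",
    f"--extra-include-dirs={prefix}/include",
]

# Render the command as a shell-safe string for documentation or a Makefile.
print(shlex.join(cmd))
```

Running `brew --prefix icu4c` at build time (rather than hard-coding the path) would keep this working across Homebrew layouts, which is one option for automating it in `make install-bin`.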
We should determine the best way to either automatically add these for Mac users, or to document how Mac users can add them manually when running `make install-bin`.
Note that this problem has also appeared for other Haskell packages (see for example https://github.com/yi-editor/yi/issues/795).
4,817 | 24,817,490,642 | IssuesEvent | 2022-10-25 14:08:24 | centerofci/mathesar | https://api.github.com/repos/centerofci/mathesar | opened | AssertionError while creating new record | type: bug work: backend status: ready restricted: maintainers status: triage | ## Description
I got this error while creating a new record. I could not find a reliable method to reproduce this, but once I got this error, I kept getting it consistently until I restarted our service.
```
Environment:
Request Method: POST
Request URL: http://localhost:8000/api/db/v0/tables/159/records/
Django Version: 3.1.14
Python Version: 3.9.14
Installed Applications:
['django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'rest_framework',
'django_filters',
'django_property_filter',
'mathesar']
Installed Middleware:
['django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware']
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/django/core/handlers/exception.py", line 47, in inner
response = get_response(request)
File "/usr/local/lib/python3.9/site-packages/django/core/handlers/base.py", line 181, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/usr/local/lib/python3.9/site-packages/django/views/decorators/csrf.py", line 54, in wrapped_view
return view_func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/rest_framework/viewsets.py", line 125, in view
return self.dispatch(request, *args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 509, in dispatch
response = self.handle_exception(exc)
File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 466, in handle_exception
response = exception_handler(exc, context)
File "/code/mathesar/exception_handlers.py", line 55, in mathesar_exception_handler
raise exc
File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 506, in dispatch
response = handler(request, *args, **kwargs)
File "/code/mathesar/api/db/viewsets/records.py", line 139, in create
serializer.save()
File "/usr/local/lib/python3.9/site-packages/rest_framework/serializers.py", line 206, in save
assert self.instance is not None, (
Exception Type: AssertionError at /api/db/v0/tables/159/records/
Exception Value: `create()` did not return an object instance.
```
| True | AssertionError while creating new record - ## Description
I got this error while creating a new record. I could not find a reliable method to reproduce this, but once I got this error, I kept getting it consistently until I restarted our service.
```
Environment:
Request Method: POST
Request URL: http://localhost:8000/api/db/v0/tables/159/records/
Django Version: 3.1.14
Python Version: 3.9.14
Installed Applications:
['django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'rest_framework',
'django_filters',
'django_property_filter',
'mathesar']
Installed Middleware:
['django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware']
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/django/core/handlers/exception.py", line 47, in inner
response = get_response(request)
File "/usr/local/lib/python3.9/site-packages/django/core/handlers/base.py", line 181, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/usr/local/lib/python3.9/site-packages/django/views/decorators/csrf.py", line 54, in wrapped_view
return view_func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/rest_framework/viewsets.py", line 125, in view
return self.dispatch(request, *args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 509, in dispatch
response = self.handle_exception(exc)
File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 466, in handle_exception
response = exception_handler(exc, context)
File "/code/mathesar/exception_handlers.py", line 55, in mathesar_exception_handler
raise exc
File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 506, in dispatch
response = handler(request, *args, **kwargs)
File "/code/mathesar/api/db/viewsets/records.py", line 139, in create
serializer.save()
File "/usr/local/lib/python3.9/site-packages/rest_framework/serializers.py", line 206, in save
assert self.instance is not None, (
Exception Type: AssertionError at /api/db/v0/tables/159/records/
Exception Value: `create()` did not return an object instance.
```
| main | assertionerror while creating new record description i got this error while creating a new record i could not find a reliable method to reproduce this but once i got this error i kept getting it consistently until i restarted our service environment request method post request url django version python version installed applications django contrib admin django contrib auth django contrib contenttypes django contrib sessions django contrib messages django contrib staticfiles rest framework django filters django property filter mathesar installed middleware django middleware security securitymiddleware django contrib sessions middleware sessionmiddleware django middleware common commonmiddleware django middleware csrf csrfviewmiddleware django contrib auth middleware authenticationmiddleware django contrib messages middleware messagemiddleware django middleware clickjacking xframeoptionsmiddleware traceback most recent call last file usr local lib site packages django core handlers exception py line in inner response get response request file usr local lib site packages django core handlers base py line in get response response wrapped callback request callback args callback kwargs file usr local lib site packages django views decorators csrf py line in wrapped view return view func args kwargs file usr local lib site packages rest framework viewsets py line in view return self dispatch request args kwargs file usr local lib site packages rest framework views py line in dispatch response self handle exception exc file usr local lib site packages rest framework views py line in handle exception response exception handler exc context file code mathesar exception handlers py line in mathesar exception handler raise exc file usr local lib site packages rest framework views py line in dispatch response handler request args kwargs file code mathesar api db viewsets records py line in create serializer save file usr local lib site packages rest framework 
serializers py line in save assert self instance is not none exception type assertionerror at api db tables records exception value create did not return an object instance | 1 |
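The assertion in the traceback is raised by DRF's `Serializer.save()`, which requires a serializer's `create()` to return the created object. A stdlib-only sketch of that contract (illustrative only; `Record`, `GoodSerializer`, and `BrokenSerializer` are invented names, not Mathesar or DRF code):

```python
class Record:
    """Stand-in for a created model instance."""
    def __init__(self, **fields):
        self.fields = fields

class BaseSerializer:
    """Mimics the relevant check inside rest_framework's Serializer.save()."""
    def __init__(self, data):
        self.validated_data = data
        self.instance = None

    def save(self):
        self.instance = self.create(self.validated_data)
        # The line that raised in the report:
        assert self.instance is not None, "`create()` did not return an object instance."
        return self.instance

class GoodSerializer(BaseSerializer):
    def create(self, validated_data):
        return Record(**validated_data)  # returns the new object

class BrokenSerializer(BaseSerializer):
    def create(self, validated_data):
        Record(**validated_data)  # builds it but forgets to return -> None

GoodSerializer({"name": "a"}).save()  # succeeds
try:
    BrokenSerializer({"name": "a"}).save()
except AssertionError as exc:
    print("caught:", exc)  # reproduces the reported failure mode
```

So any intermittent condition that makes the viewset's `create()` path return `None` (a swallowed exception, a stale connection, a code path with no `return`) would surface as exactly this `AssertionError`.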
84,553 | 24,343,763,250 | IssuesEvent | 2022-10-02 02:50:21 | andymina/seam-carving | https://api.github.com/repos/andymina/seam-carving | closed | Update pip requirements | build | The pip `requirements.txt` released in v1.0.0 are not all necessary. Update `requirements.txt` to only include what's needed. | 1.0 | Update pip requirements - The pip `requirements.txt` released in v1.0.0 are not all necessary. Update `requirements.txt` to only include what's needed. | non_main | update pip requirements the pip requirements txt released in are not all necessary update requirements txt to only include what s needed | 0 |
3,787 | 16,064,921,663 | IssuesEvent | 2021-04-23 17:31:57 | carbon-design-system/carbon | https://api.github.com/repos/carbon-design-system/carbon | closed | Can someone point us to research/validation around the wording for "compact" and "short" in the context of sizes | role: ux 🍿 role: visual 🎨 status: waiting for maintainer response 💬 type: question ❓ | <!--
Hi there! 👋 Hope everything is going okay using projects from the Carbon Design
System. It looks like you might have a question about our work, so we wanted to
share a couple resources that you could use if you haven't tried them yet 🙂.
If you're an IBMer, we have a couple of Slack channels available across all IBM
Workspaces:
- #carbon-design-system for questions about the Design System
- #carbon-components for questions about component styles
- #carbon-react for questions about our React components
If these resources don't work out, help us out by filling out a couple of
details below!
-->
## Summary
We have concerns around the user's understanding of the words "compact" and "short" and how they are different, and which one is actually smaller than the other.
## Relevant information
As seen here https://www.carbondesignsystem.com/components/data-table/usage#sizing
| True | Can someone point us to research/validation around the wording for "compact" and "short" in the context of sizes - <!--
Hi there! 👋 Hope everything is going okay using projects from the Carbon Design
System. It looks like you might have a question about our work, so we wanted to
share a couple resources that you could use if you haven't tried them yet 🙂.
If you're an IBMer, we have a couple of Slack channels available across all IBM
Workspaces:
- #carbon-design-system for questions about the Design System
- #carbon-components for questions about component styles
- #carbon-react for questions about our React components
If these resources don't work out, help us out by filling out a couple of
details below!
-->
## Summary
We have concerns around the user's understanding of the words "compact" and "short" and how they are different, and which one is actually smaller than the other.
## Relevant information
As seen here https://www.carbondesignsystem.com/components/data-table/usage#sizing
| main | can someone point us to research validation around the wording for compact and short in the context of sizes hi there 👋 hope everything is going okay using projects from the carbon design system it looks like you might have a question about our work so we wanted to share a couple resources that you could use if you haven t tried them yet 🙂 if you re an ibmer we have a couple of slack channels available across all ibm workspaces carbon design system for questions about the design system carbon components for questions about component styles carbon react for questions about our react components if these resources don t work out help us out by filling out a couple of details below summary we have concerns around the user s understanding of the words compact and short and how they are different and which one is actually smaller than the other relevant information as seen here | 1 |
1,126 | 4,997,037,971 | IssuesEvent | 2016-12-09 15:40:28 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | git - in Ansible 2.1 switching a shallow clone from tag to branch fails | affects_2.1 bug_report waiting_on_maintainer | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
git module
##### ANSIBLE VERSION
```
ansible 2.1.0.0
config file =
configured module search path = Default w/o overrides
```
##### CONFIGURATION
Vanilla
##### OS / ENVIRONMENT
Happens on both OS X and Ubuntu with different versions of git (1.8 and 1.9)
##### SUMMARY
Updating a shallow clone based on a tag to one based on a branch fails
##### STEPS TO REPRODUCE
Run this with `ansible-playbook -i 127.0.0.1, ~/tmp/test.yml; rm /tmp/testclone`
```
- hosts: 127.0.0.1
connection: local
gather_facts: no
tasks:
- name: initial clone from tag
git:
repo: git@github.com:adamchainz/nose-randomly.git
dest: /tmp/testclone
version: 1.2.0
depth: 1
- name: update to branch
git:
repo: git@github.com:adamchainz/nose-randomly.git
dest: /tmp/testclone
version: master
depth: 1
```
##### EXPECTED RESULTS
On Ansible 2.0.2.0, it works fine:
```
PLAY [127.0.0.1] ***************************************************************
TASK [initial clone from tag] **************************************************
changed: [127.0.0.1]
TASK [update to branch] ********************************************************
changed: [127.0.0.1]
PLAY RECAP *********************************************************************
127.0.0.1 : ok=2 changed=2 unreachable=0 failed=0
```
##### ACTUAL RESULTS
On Ansible 2.1.0.0, I get:
```
PLAY [127.0.0.1] ***************************************************************
TASK [initial clone from tag] **************************************************
changed: [127.0.0.1]
TASK [update to branch] ********************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: IOError: [Errno 2] No such file or directory: '/tmp/testclone/.git/refs/remotes/origin/HEAD'
fatal: [127.0.0.1]: FAILED! => {"changed": false, "failed": true, "module_stderr": "Traceback (most recent call last):\n File \"/var/folders/_z/wpddpg7d5hd6fd2zjn_fct7m0000gn/T/ansible_vBeq0g/ansible_module_git.py\", line 877, in <module>\n main()\n File \"/var/folders/_z/wpddpg7d5hd6fd2zjn_fct7m0000gn/T/ansible_vBeq0g/ansible_module_git.py\", line 832, in main\n fetch(git_path, module, repo, dest, version, remote, depth, bare, refspec)\n File \"/var/folders/_z/wpddpg7d5hd6fd2zjn_fct7m0000gn/T/ansible_vBeq0g/ansible_module_git.py\", line 530, in fetch\n currenthead = get_head_branch(git_path, module, dest, remote)\n File \"/var/folders/_z/wpddpg7d5hd6fd2zjn_fct7m0000gn/T/ansible_vBeq0g/ansible_module_git.py\", line 498, in get_head_branch\n f = open(os.path.join(repo_path, 'refs', 'remotes', remote, 'HEAD'))\nIOError: [Errno 2] No such file or directory: '/tmp/testclone/.git/refs/remotes/origin/HEAD'\n", "module_stdout": "", "msg": "MODULE FAILURE", "parsed": false}
NO MORE HOSTS LEFT *************************************************************
to retry, use: --limit @/Users/adamj/tmp/test.retry
PLAY RECAP *********************************************************************
127.0.0.1 : ok=1 changed=1 unreachable=0 failed=1
```
It looks like `fetch` has started assuming that if we're updating to a remote branch, we must currently be on a branch, as per lines 530-532:
``` py
elif is_remote_branch(git_path, module, dest, repo, version):
currenthead = get_head_branch(git_path, module, dest, remote)
if currenthead != version:
```
or that `get_head_branch` should work, but doesn't, when we're on a tag. N.B. this is the contents of the `.git` directory after the clone from tag:
```
$ tree /tmp/testclone/.git/refs
/tmp/testclone/.git/refs
├── heads
└── tags
2 directories, 0 files
```
| True | git - in Ansible 2.1 switching a shallow clone from tag to branch fails - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
git module
##### ANSIBLE VERSION
```
ansible 2.1.0.0
config file =
configured module search path = Default w/o overrides
```
##### CONFIGURATION
Vanilla
##### OS / ENVIRONMENT
Happens on both OS X and Ubuntu with different versions of git (1.8 and 1.9)
##### SUMMARY
Updating a shallow clone based on a tag to one based on a branch fails
##### STEPS TO REPRODUCE
Run this with `ansible-playbook -i 127.0.0.1, ~/tmp/test.yml; rm /tmp/testclone`
```
- hosts: 127.0.0.1
connection: local
gather_facts: no
tasks:
- name: initial clone from tag
git:
repo: git@github.com:adamchainz/nose-randomly.git
dest: /tmp/testclone
version: 1.2.0
depth: 1
- name: update to branch
git:
repo: git@github.com:adamchainz/nose-randomly.git
dest: /tmp/testclone
version: master
depth: 1
```
##### EXPECTED RESULTS
On Ansible 2.0.2.0, it works fine:
```
PLAY [127.0.0.1] ***************************************************************
TASK [initial clone from tag] **************************************************
changed: [127.0.0.1]
TASK [update to branch] ********************************************************
changed: [127.0.0.1]
PLAY RECAP *********************************************************************
127.0.0.1 : ok=2 changed=2 unreachable=0 failed=0
```
##### ACTUAL RESULTS
On Ansible 2.1.0.0, I get:
```
PLAY [127.0.0.1] ***************************************************************
TASK [initial clone from tag] **************************************************
changed: [127.0.0.1]
TASK [update to branch] ********************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: IOError: [Errno 2] No such file or directory: '/tmp/testclone/.git/refs/remotes/origin/HEAD'
fatal: [127.0.0.1]: FAILED! => {"changed": false, "failed": true, "module_stderr": "Traceback (most recent call last):\n File \"/var/folders/_z/wpddpg7d5hd6fd2zjn_fct7m0000gn/T/ansible_vBeq0g/ansible_module_git.py\", line 877, in <module>\n main()\n File \"/var/folders/_z/wpddpg7d5hd6fd2zjn_fct7m0000gn/T/ansible_vBeq0g/ansible_module_git.py\", line 832, in main\n fetch(git_path, module, repo, dest, version, remote, depth, bare, refspec)\n File \"/var/folders/_z/wpddpg7d5hd6fd2zjn_fct7m0000gn/T/ansible_vBeq0g/ansible_module_git.py\", line 530, in fetch\n currenthead = get_head_branch(git_path, module, dest, remote)\n File \"/var/folders/_z/wpddpg7d5hd6fd2zjn_fct7m0000gn/T/ansible_vBeq0g/ansible_module_git.py\", line 498, in get_head_branch\n f = open(os.path.join(repo_path, 'refs', 'remotes', remote, 'HEAD'))\nIOError: [Errno 2] No such file or directory: '/tmp/testclone/.git/refs/remotes/origin/HEAD'\n", "module_stdout": "", "msg": "MODULE FAILURE", "parsed": false}
NO MORE HOSTS LEFT *************************************************************
to retry, use: --limit @/Users/adamj/tmp/test.retry
PLAY RECAP *********************************************************************
127.0.0.1 : ok=1 changed=1 unreachable=0 failed=1
```
It looks like `fetch` has started assuming that if we're updating to a remote branch, we must currently be on a branch, as per lines 530-532:
``` py
elif is_remote_branch(git_path, module, dest, repo, version):
currenthead = get_head_branch(git_path, module, dest, remote)
if currenthead != version:
```
or that `get_head_branch` should work, but doesn't, when we're on a tag. N.B. this is the contents of the `.git` directory after the clone from tag:
```
$ tree /tmp/testclone/.git/refs
/tmp/testclone/.git/refs
├── heads
└── tags
2 directories, 0 files
```
| main | git in ansible switching a shallow clone from tag to branch fails issue type bug report component name git module ansible version ansible config file configured module search path default w o overrides configuration vanilla os environment happens on both os x and ubuntu with different versions of git and summary updating a shallow clone based on a tag to one based on a branch fails steps to reproduce run this with ansible playbook i tmp test yml rm tmp testclone hosts connection local gather facts no tasks name initial clone from tag git repo git github com adamchainz nose randomly git dest tmp testclone version depth name update to branch git repo git github com adamchainz nose randomly git dest tmp testclone version master depth expected results on ansible it works fine play task changed task changed play recap ok changed unreachable failed actual results on ansible i get play task changed task an exception occurred during task execution to see the full traceback use vvv the error was ioerror no such file or directory tmp testclone git refs remotes origin head fatal failed changed false failed true module stderr traceback most recent call last n file var folders z t ansible ansible module git py line in n main n file var folders z t ansible ansible module git py line in main n fetch git path module repo dest version remote depth bare refspec n file var folders z t ansible ansible module git py line in fetch n currenthead get head branch git path module dest remote n file var folders z t ansible ansible module git py line in get head branch n f open os path join repo path refs remotes remote head nioerror no such file or directory tmp testclone git refs remotes origin head n module stdout msg module failure parsed false no more hosts left to retry use limit users adamj tmp test retry play recap ok changed unreachable failed it looks like fetch has started assuming that if we re updating to a remote branch we must currently be on a branch as per lines py 
elif is remote branch git path module dest repo version currenthead get head branch git path module dest remote if currenthead version or that get head branch should work but doesn t when we re on a tag n b this is the contents of the git directory after the clone from tag tree tmp testclone git refs tmp testclone git refs ├── heads └── tags directories files | 1 |
1,170 | 5,088,165,322 | IssuesEvent | 2016-12-31 15:53:20 | caskroom/homebrew-cask | https://api.github.com/repos/caskroom/homebrew-cask | closed | Is it ok to auto-accept EULAs for Java, CUDA, etc.? | awaiting maintainer feedback discussion | I noticed that the [Java](https://github.com/caskroom/homebrew-cask/blob/master/Casks/java.rb) and [CUDA](https://github.com/caskroom/homebrew-cask/blob/master/Casks/cuda.rb) casks auto-accept license agreements for the user.
Is that ok? I doubt it. I know of no other distributions that do it. With [Debian](https://askubuntu.com/questions/190582/installing-java-automatically-with-silent-option) and [Gentoo](https://wiki.gentoo.org/wiki/Java#Installing_a_JRE.2FJDK) the user has to take some action to indicate what types of licenses they will accept, and only *then* do they allow silent mode.
Note that I'm asking because I have the same problem with [Spack](github.com/llnl/spack). There is some relevant discussion in LLNL/spack#1298 and LLNL/spack#2621.
I am leaning towards what Gentoo and Debian do, described best [in the Gentoo docs](https://wiki.gentoo.org/wiki/Handbook:X86/Working/Portage#Licenses). | True | Is it ok to auto-accept EULAs for Java, CUDA, etc.? - I noticed that the [Java](https://github.com/caskroom/homebrew-cask/blob/master/Casks/java.rb) and [CUDA](https://github.com/caskroom/homebrew-cask/blob/master/Casks/cuda.rb) casks auto-accept license agreements for the user.
Is that ok? I doubt it. I know of no other distributions that do it. With [Debian](https://askubuntu.com/questions/190582/installing-java-automatically-with-silent-option) and [Gentoo](https://wiki.gentoo.org/wiki/Java#Installing_a_JRE.2FJDK) the user has to take some action to indicate what types of licenses they will accept, and only *then* do they allow silent mode.
Note that I'm asking because I have the same problem with [Spack](github.com/llnl/spack). There is some relevant discussion in LLNL/spack#1298 and LLNL/spack#2621.
I am leaning towards what Gentoo and Debian do, described best [in the Gentoo docs](https://wiki.gentoo.org/wiki/Handbook:X86/Working/Portage#Licenses). | main | is it ok to auto accept eulas for java cuda etc i noticed that the and casks auto accept license agreements for the user is that ok i doubt it i know of no other distributions that do it with and the user has to take some action to indicate what types of licenses they will accept and only then do they allow silent mode note that i m asking because i have the same problem with github com llnl spack there is some relevant discussion in llnl spack and llnl spack i am leaning towards what gentoo and debian do described best | 1 |
129,824 | 17,908,851,433 | IssuesEvent | 2021-09-09 00:27:17 | AlaskaAirlines/auro-button | https://api.github.com/repos/AlaskaAirlines/auro-button | opened | Blueprint development | Status: Work In Progress Type: Design auro-button | # General Support Request
There is no matching design reference for the auro-button.
## Support request
Create UI Kit blueprint document.
## Context
The scope of the blueprint document is to match the current development scope of auro-button. | 1.0 | Blueprint development - # General Support Request
There is no matching design reference for the auro-button.
## Support request
Create UI Kit blueprint document.
## Context
The scope of the blueprint document is to match the current development scope of auro-button. | non_main | blueprint development general support request there is no matching design reference for the auro button support request create ui kit blueprint document context the scope of the blueprint document is to match the current development scope of auro button | 0 |
161,238 | 20,123,093,074 | IssuesEvent | 2022-02-08 06:01:17 | elastic/kibana | https://api.github.com/repos/elastic/kibana | opened | [Security Solution] Kibana mapped fields are not present in Kibana environment. | bug triage_needed v8.0.0 impact:high Team: SecuritySolution | **Describe the bug:**
Kibana mapped fields are not present in the Kibana environment.
**Kibana/Elasticsearch Stack version:**
```
Build Info
Version: 8.0.0
Build: 49192
Commit:57ca5e139a33dd2eed927ce98d8231a1f217cd15
```
**Preconditions**
1. Elasticsearch should be up and running
2. Kibana should be up and running
3. Alerts should be present on the environment
**Steps to reproduce**
1. Navigate to Security-->Alerts tab.
2. Click on view details icon on any alerts tab.
3. Search the following fields under table tab.
- kibana.alert.group.id
- kibana.alert.group.index
- kibana.alert.original_event.duration
- kibana.alert.original_event.end
- kibana.alert.original_event.hash
- kibana.alert.original_event.provider
- kibana.alert.original_event.reason
- kibana.alert.original_event.risk_score_norm
- kibana.alert.original_event.start
- kibana.alert.original_event.timezone
[Fields Mapping](https://github.com/elastic/kibana/blob/main/x-pack/plugins/security_solution/server/lib/detection_engine/routes/index/signal_aad_mapping.json)
**Screenshots:**
`kibana.alert.group.id`

`kibana.alert.group.index`

`kibana.alert.original_event.duration`

`kibana.alert.original_event.end`

`kibana.alert.original_event.hash`

`kibana.alert.original_event.provider`

`kibana.alert.original_event.reason`

`kibana.alert.original_event.risk_score_norm`

`kibana.alert.original_event.start`

`kibana.alert.original_event.timezone`

| True | [Security Solution] Kibana mapped fields are not present in Kibana environment. - **Describe the bug:**
Kibana mapped fields are not present in the Kibana environment.
**Kibana/Elasticsearch Stack version:**
```
Build Info
Version: 8.0.0
Build: 49192
Commit:57ca5e139a33dd2eed927ce98d8231a1f217cd15
```
**Preconditions**
1. Elasticsearch should be up and running
2. Kibana should be up and running
3. Alerts should be present on the environment
**Steps to reproduce**
1. Navigate to Security-->Alerts tab.
2. Click on view details icon on any alerts tab.
3. Search the following fields under table tab.
- kibana.alert.group.id
- kibana.alert.group.index
- kibana.alert.original_event.duration
- kibana.alert.original_event.end
- kibana.alert.original_event.hash
- kibana.alert.original_event.provider
- kibana.alert.original_event.reason
- kibana.alert.original_event.risk_score_norm
- kibana.alert.original_event.start
- kibana.alert.original_event.timezone
[Fields Mapping](https://github.com/elastic/kibana/blob/main/x-pack/plugins/security_solution/server/lib/detection_engine/routes/index/signal_aad_mapping.json)
**Screenshots:**
`kibana.alert.group.id`

`kibana.alert.group.index`

`kibana.alert.original_event.duration`

`kibana.alert.original_event.end`

`kibana.alert.original_event.hash`

`kibana.alert.original_event.provider`

`kibana.alert.original_event.reason`

`kibana.alert.original_event.risk_score_norm`

`kibana.alert.original_event.start`

`kibana.alert.original_event.timezone`

| non_main | kibana mapped fields are not present in kibana environment describe the bug kibana mapped fields are not present in the kibana environment kibana elasticsearch stack version build info version build commit preconditions elasticsearch should be up and running kibana should be up and running alerts should be present on the environment steps to reproduce navigate to security alerts tab click on view details icon on any alerts tab search the following fields under table tab kibana alert group id kibana alert group index kibana alert original event duration kibana alert original event end kibana alert original event hash kibana alert original event provider kibana alert original event reason kibana alert original event risk score norm kibana alert original event start kibana alert original event timezone screenshots kibana alert group id kibana alert group index kibana alert original event duration kibana alert original event end kibana alert original event hash kibana alert original event provider kibana alert original event reason kibana alert original event risk score norm kibana alert original event start kibana alert original event timezone | 0 |
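A regression like this can be caught by diffing the expected alias list against what a mapping actually exposes. A hypothetical stdlib-only sketch (the `mapping` dict below is invented for illustration; the authoritative field list lives in the `signal_aad_mapping.json` linked above):

```python
EXPECTED_FIELDS = [
    "kibana.alert.group.id",
    "kibana.alert.group.index",
    "kibana.alert.original_event.duration",
    "kibana.alert.original_event.end",
    "kibana.alert.original_event.hash",
    "kibana.alert.original_event.provider",
    "kibana.alert.original_event.reason",
    "kibana.alert.original_event.risk_score_norm",
    "kibana.alert.original_event.start",
    "kibana.alert.original_event.timezone",
]

def missing_fields(mapping, expected=EXPECTED_FIELDS):
    """Return the expected alert fields absent from a flattened mapping dict."""
    return sorted(f for f in expected if f not in mapping)

# Example: a mapping that only defines two of the ten fields
mapping = {
    "kibana.alert.group.id": {"type": "keyword"},
    "kibana.alert.original_event.start": {"type": "date"},
}
print(missing_fields(mapping))  # lists the eight absent fields
```

A check of this shape could run in CI against the generated index template, so missing aliases fail a build instead of showing up as blank fields in the alert details flyout.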
248,214 | 18,858,049,155 | IssuesEvent | 2021-11-12 09:19:40 | kengjit/pe | https://api.github.com/repos/kengjit/pe | opened | UG -- Edit does not include any examples | severity.Medium type.DocumentationBug | Edit feature in UG did not include any examples. Was difficult for the reader to understand how it works.
<!--session: 1636703410529-d12d539c-8c6d-4415-bfa1-4aeef6033363-->
<!--Version: Web v3.4.1--> | 1.0 | UG -- Edit does not include any examples - Edit feature in UG did not include any examples. Was difficult for the reader to understand how it works.
<!--session: 1636703410529-d12d539c-8c6d-4415-bfa1-4aeef6033363-->
<!--Version: Web v3.4.1--> | non_main | ug edit does not include any examples edit feature in ug did not include any examples was difficult for the reader to understand how it works | 0 |
17,082 | 2,974,593,176 | IssuesEvent | 2015-07-15 02:10:24 | Reimashi/jotai | https://api.github.com/repos/Reimashi/jotai | closed | Sager NP9170 Issues: CPU Speed wrong, HD7970M Temps zero | auto-migrated Priority-Medium Type-Defect wontfix | ```
1. CPU Speeds - I know I got a fast laptop, but the current readings appear to
be large multiples on reality.
I am showing a base value of 7782 Mhz and a max in a cqmouple of cases at 16862
Mhz... This is on / with a Intel Core I7-3720QM processor.
2. HD7970M is showing 0c operating temperature...
Please let me know if there is information you need that I can provide on this
one.
```
Original issue reported on code.google.com by `JayDTea...@gmail.com` on 29 Jun 2012 at 2:23 | 1.0 | Sager NP9170 Issues: CPU Speed wrong, HD7970M Temps zero - ```
1. CPU Speeds - I know I got a fast laptop, but the current readings appear to
be large multiples of reality.
I am showing a base value of 7782 MHz and a max in a couple of cases at 16862
MHz... This is on / with an Intel Core i7-3720QM processor.
2. HD7970M is showing 0c operating temperature...
Please let me know if there is information you need that I can provide on this
one.
```
Original issue reported on code.google.com by `JayDTea...@gmail.com` on 29 Jun 2012 at 2:23 | non_main | sager issues cpu speed wrong temps zoro cpu speeds i know i got a fast laptop but the current readings appear to be large multiples on reality i am showing a base value of mhz and a max in a cqmouple of cases at mhz this is on with a intel core processor is showing operating temperature please let me know if there is information you need that i can provide on this one original issue reported on code google com by jaydtea gmail com on jun at | 0 |
665 | 4,194,452,959 | IssuesEvent | 2016-06-25 03:12:58 | pypiserver/pypiserver | https://api.github.com/repos/pypiserver/pypiserver | closed | New release | status.URGENT type.Maintainance | Hey can we release a new version. My use case requires packages not be overwritten and this was fixed in https://github.com/pypiserver/pypiserver/pull/113
| True | New release - Hey, can we release a new version? My use case requires that packages not be overwritten, and this was fixed in https://github.com/pypiserver/pypiserver/pull/113
| main | new release hey can we release a new version my use case requires packages not be overwritten and this was fixed in | 1 |
1,745 | 6,574,929,578 | IssuesEvent | 2017-09-11 14:31:34 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | Cannot pull all facts with nxos_facts | affects_2.3 bug_report networking waiting_on_maintainer | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
nxos_facts
##### ANSIBLE VERSION
```
ansible --version
2.3.0 (commit c064dce)
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
or
2.2.0.0-0.2.rc2
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
inventory = ./hosts
gathering = explicit
roles_path = /home/actionmystique/Program-Files/Ubuntu/Ansible/git-Ansible/roles
private_role_vars = yes
log_path = /var/log/ansible.log
fact_caching = redis
fact_caching_timeout = 86400
retry_files_enabled = False
##### OS / ENVIRONMENT
- **Local host**: Ubuntu 16.04 4.4.0
- **Target nodes**: NX-OSv 7.3(0)D1(1) (last release available in Cisco VIRL)
##### SUMMARY
Running nxos_facts triggers a fatal error (connection timeout), whereas I can manually log in to the target node with SSH or run nxos-feature on the same targets.
##### STEPS TO REPRODUCE
```
- include_vars: "../defaults/{{ os_family }}/http.yml"
- include_vars: "../defaults/{{ os_family }}/connections.yml"
- name: Fetching facts from the remote node
nxos_facts:
gather_subset: all
provider: "{{ connections.nxapi }}"
register: return
```
##### EXPECTED RESULTS
Successful nxos_facts
##### ACTUAL RESULTS
```
TASK [nxos_pull_facts : Fetching facts from the remote node] *******************
task path: /home/actionmystique/Program-Files/Ubuntu/Ansible/Roles/roles/nxos_pull_facts/tasks/main.yml:74
Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/network/nxos/nxos_facts.py
Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/network/nxos/nxos_facts.py
<172.21.100.12> ESTABLISH LOCAL CONNECTION FOR USER: root
<172.21.100.12> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1476884246.75-158252726280111 `" && echo ansible-tmp-1476884246.75-158252726280111="` echo $HOME/.ansible/tmp/ansible-tmp-1476884246.75-158252726280111 `" ) && sleep 0'
<172.21.100.11> ESTABLISH LOCAL CONNECTION FOR USER: root
<172.21.100.11> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1476884246.75-116017618040720 `" && echo ansible-tmp-1476884246.75-116017618040720="` echo $HOME/.ansible/tmp/ansible-tmp-1476884246.75-116017618040720 `" ) && sleep 0'
<172.21.100.12> PUT /tmp/tmpbsH8md TO /root/.ansible/tmp/ansible-tmp-1476884246.75-158252726280111/nxos_facts.py
<172.21.100.11> PUT /tmp/tmpiPM3JI TO /root/.ansible/tmp/ansible-tmp-1476884246.75-116017618040720/nxos_facts.py
<172.21.100.12> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1476884246.75-158252726280111/ /root/.ansible/tmp/ansible-tmp-1476884246.75-158252726280111/nxos_facts.py && sleep 0'
<172.21.100.11> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1476884246.75-116017618040720/ /root/.ansible/tmp/ansible-tmp-1476884246.75-116017618040720/nxos_facts.py && sleep 0'
<172.21.100.12> EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1476884246.75-158252726280111/nxos_facts.py; rm -rf "/root/.ansible/tmp/ansible-tmp-1476884246.75-158252726280111/" > /dev/null 2>&1 && sleep 0'
<172.21.100.11> EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1476884246.75-116017618040720/nxos_facts.py; rm -rf "/root/.ansible/tmp/ansible-tmp-1476884246.75-116017618040720/" > /dev/null 2>&1 && sleep 0'
fatal: [NX_OSv_Spine_11]: FAILED! => {
"changed": false,
"failed": true,
"invocation": {
"module_args": {
"auth_pass": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"authorize": false,
"gather_subset": [
"all"
],
"host": "172.21.100.11",
"password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"port": 8080,
"provider": {
"auth_pass": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"host": "172.21.100.11",
"password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"port": "8080",
"transport": "nxapi",
"use_ssl": false,
"username": "admin",
"validate_certs": false
},
"ssh_keyfile": null,
"timeout": 10,
"transport": "nxapi",
"use_ssl": false,
"username": "admin",
"validate_certs": false
},
"module_name": "nxos_facts"
},
"msg": "Connection failure: timed out",
"status": -1,
"url": "http://172.21.100.11:8080/ins"
}
```
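The timeout in the output above points at the NX-API HTTP endpoint rather than SSH, which is why the failing URL is `http://172.21.100.11:8080/ins`. As a rough illustration (a sketch, not the module's actual code) of how that URL is derived from the provider settings:

```python
def nxapi_url(host, port=None, use_ssl=False):
    """Build the NX-API endpoint URL that the nxapi transport posts to.

    Illustrative helper only; the /ins path and the derived URL match the
    one shown in the failure output above.
    """
    scheme = "https" if use_ssl else "http"
    if port is None:
        # Default ports when the provider does not set one explicitly.
        port = 443 if use_ssl else 80
    return "{}://{}:{}/ins".format(scheme, host, port)

print(nxapi_url("172.21.100.11", port=8080))  # → http://172.21.100.11:8080/ins
```

A plain `curl http://172.21.100.11:8080/ins` from the Ansible control host would show whether the timeout is a reachability problem (NX-API feature disabled on the switch, or a firewall) rather than an authentication one.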
| True | Cannot pull all facts with nxos_facts - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
nxos_facts
##### ANSIBLE VERSION
```
ansible --version
2.3.0 (commit c064dce)
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
or
2.2.0.0-0.2.rc2
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
inventory = ./hosts
gathering = explicit
roles_path = /home/actionmystique/Program-Files/Ubuntu/Ansible/git-Ansible/roles
private_role_vars = yes
log_path = /var/log/ansible.log
fact_caching = redis
fact_caching_timeout = 86400
retry_files_enabled = False
##### OS / ENVIRONMENT
- **Local host**: Ubuntu 16.04 4.4.0
- **Target nodes**: NX-OSv 7.3(0)D1(1) (last release available in Cisco VIRL)
##### SUMMARY
Running nxos_facts triggers a fatal error (connection timeout), whereas I can manually log in to the target node with SSH or run nxos-feature on the same targets.
##### STEPS TO REPRODUCE
```
- include_vars: "../defaults/{{ os_family }}/http.yml"
- include_vars: "../defaults/{{ os_family }}/connections.yml"
- name: Fetching facts from the remote node
nxos_facts:
gather_subset: all
provider: "{{ connections.nxapi }}"
register: return
```
##### EXPECTED RESULTS
Successful nxos_facts
##### ACTUAL RESULTS
```
TASK [nxos_pull_facts : Fetching facts from the remote node] *******************
task path: /home/actionmystique/Program-Files/Ubuntu/Ansible/Roles/roles/nxos_pull_facts/tasks/main.yml:74
Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/network/nxos/nxos_facts.py
Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/network/nxos/nxos_facts.py
<172.21.100.12> ESTABLISH LOCAL CONNECTION FOR USER: root
<172.21.100.12> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1476884246.75-158252726280111 `" && echo ansible-tmp-1476884246.75-158252726280111="` echo $HOME/.ansible/tmp/ansible-tmp-1476884246.75-158252726280111 `" ) && sleep 0'
<172.21.100.11> ESTABLISH LOCAL CONNECTION FOR USER: root
<172.21.100.11> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1476884246.75-116017618040720 `" && echo ansible-tmp-1476884246.75-116017618040720="` echo $HOME/.ansible/tmp/ansible-tmp-1476884246.75-116017618040720 `" ) && sleep 0'
<172.21.100.12> PUT /tmp/tmpbsH8md TO /root/.ansible/tmp/ansible-tmp-1476884246.75-158252726280111/nxos_facts.py
<172.21.100.11> PUT /tmp/tmpiPM3JI TO /root/.ansible/tmp/ansible-tmp-1476884246.75-116017618040720/nxos_facts.py
<172.21.100.12> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1476884246.75-158252726280111/ /root/.ansible/tmp/ansible-tmp-1476884246.75-158252726280111/nxos_facts.py && sleep 0'
<172.21.100.11> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1476884246.75-116017618040720/ /root/.ansible/tmp/ansible-tmp-1476884246.75-116017618040720/nxos_facts.py && sleep 0'
<172.21.100.12> EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1476884246.75-158252726280111/nxos_facts.py; rm -rf "/root/.ansible/tmp/ansible-tmp-1476884246.75-158252726280111/" > /dev/null 2>&1 && sleep 0'
<172.21.100.11> EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1476884246.75-116017618040720/nxos_facts.py; rm -rf "/root/.ansible/tmp/ansible-tmp-1476884246.75-116017618040720/" > /dev/null 2>&1 && sleep 0'
fatal: [NX_OSv_Spine_11]: FAILED! => {
"changed": false,
"failed": true,
"invocation": {
"module_args": {
"auth_pass": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"authorize": false,
"gather_subset": [
"all"
],
"host": "172.21.100.11",
"password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"port": 8080,
"provider": {
"auth_pass": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"host": "172.21.100.11",
"password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"port": "8080",
"transport": "nxapi",
"use_ssl": false,
"username": "admin",
"validate_certs": false
},
"ssh_keyfile": null,
"timeout": 10,
"transport": "nxapi",
"use_ssl": false,
"username": "admin",
"validate_certs": false
},
"module_name": "nxos_facts"
},
"msg": "Connection failure: timed out",
"status": -1,
"url": "http://172.21.100.11:8080/ins"
}
```
| main | cannot pull all facts with nxos facts issue type bug report component name nxos facts ansible version ansible version commit config file etc ansible ansible cfg configured module search path default w o overrides or config file etc ansible ansible cfg configured module search path default w o overrides configuration inventory hosts gathering explicit roles path home actionmystique program files ubuntu ansible git ansible roles private role vars yes log path var log ansible log fact caching redis fact caching timeout retry files enabled false os environment local host ubuntu target nodes nx osv last release available in cisco virl summary running nxos facts triggers a fatal error connection timeout whereas i can manually login into the target node with ssh or run nxos feature on the same targets steps to reproduce include vars defaults os family http yml include vars defaults os family connections yml name fetching facts from the remote node nxos facts gather subset all provider connections nxapi register return expected results successful nxos facts actual results task task path home actionmystique program files ubuntu ansible roles roles nxos pull facts tasks main yml using module file usr lib dist packages ansible modules core network nxos nxos facts py using module file usr lib dist packages ansible modules core network nxos nxos facts py establish local connection for user root exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep establish local connection for user root exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to root ansible tmp ansible tmp nxos facts py put tmp to root ansible tmp ansible tmp nxos facts py exec bin sh c chmod u x root ansible tmp ansible tmp root ansible tmp ansible tmp nxos facts py sleep exec bin sh c chmod u x root ansible tmp ansible tmp root ansible tmp ansible tmp nxos facts py sleep exec bin sh c usr bin python root ansible tmp ansible tmp nxos facts py rm rf root ansible tmp ansible tmp dev null sleep exec bin sh c usr bin python root ansible tmp ansible tmp nxos facts py rm rf root ansible tmp ansible tmp dev null sleep fatal failed changed false failed true invocation module args auth pass value specified in no log parameter authorize false gather subset all host password value specified in no log parameter port provider auth pass value specified in no log parameter host password value specified in no log parameter port transport nxapi use ssl false username admin validate certs false ssh keyfile null timeout transport nxapi use ssl false username admin validate certs false module name nxos facts msg connection failure timed out status url | 1
712 | 4,306,363,677 | IssuesEvent | 2016-07-21 02:40:33 | duckduckgo/zeroclickinfo-goodies | https://api.github.com/repos/duckduckgo/zeroclickinfo-goodies | closed | Luhn Algorithm: Reduce subtitle redundancy | Bug Low-Hanging Fruit Maintainer Input Requested Relevancy | The subtitle currently repeats the result displayed in the `$title` -- it should rather provide context for the title.
The result should look like:
> **5**
> Luhn check digit for 12345
https://duckduckgo.com/?q=luhn+12345&ia=answer
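For context, the expected **5** above is the standard Luhn check digit for the payload 12345; a minimal sketch of the computation:

```python
def luhn_check_digit(number):
    """Return the Luhn check digit to append to `number` (int or string)."""
    digits = [int(d) for d in str(number)]
    total = 0
    # Walk the payload digits from the right; the check digit will occupy
    # the rightmost position, so every other digit starting here is doubled.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 0:
            d *= 2
            if d > 9:
                d -= 9  # same as summing the two digits of the product
        total += d
    return (10 - total % 10) % 10

print(luhn_check_digit(12345))  # → 5
```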
------
IA Page: http://duck.co/ia/view/luhn_algorithm
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @regagain | True | Luhn Algorithm: Reduce subtitle redundancy - The subtitle currently repeats the result displayed in the `$title` -- it should rather provide context for the title.
The result should look like:
> **5**
> Luhn check digit for 12345
https://duckduckgo.com/?q=luhn+12345&ia=answer
------
IA Page: http://duck.co/ia/view/luhn_algorithm
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @regagain | main | luhn algorithm reduce subtitle redundancy the subtitle currently repeats the result displayed in the title it should rather provide context for the title the result should look like luhn check digit for ia page regagain | 1 |
245,592 | 20,779,671,017 | IssuesEvent | 2022-03-16 13:45:33 | vaop/vaop | https://api.github.com/repos/vaop/vaop | opened | [Admin][Administrators] Unit Testing | testing | Unit testing needs to be added to the `AdminCrudController` covering both admin operations as well as RBAC/Policy/Permission enforcement. | 1.0 | [Admin][Administrators] Unit Testing - Unit testing needs to be added to the `AdminCrudController` covering both admin operations as well as RBAC/Policy/Permission enforcement. | non_main | unit testing unit testing needs to be added to the admincrudcontroller covering both admin operations as well as rbac policy permission enforcement | 0 |
316,460 | 27,165,957,950 | IssuesEvent | 2023-02-17 15:21:36 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | opened | roachtest: pgjdbc failed | C-test-failure O-robot O-roachtest branch-master release-blocker T-sql-sessions | roachtest.pgjdbc [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/8740106?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/8740106?buildTab=artifacts#/pgjdbc) on master @ [7e2df35a2f6bf7a859bb0539c8ca43c4e72ed260](https://github.com/cockroachdb/cockroach/commits/7e2df35a2f6bf7a859bb0539c8ca43c4e72ed260):
```
test artifacts and logs in: /artifacts/pgjdbc/run_1
(orm_helpers.go:201).summarizeFailed:
Tests run on Cockroach v23.1.0-alpha.2-405-g7e2df35a2f
Tests run against pgjdbc REL42.3.3
5788 Total Tests Run
5055 tests passed
733 tests failed
72 tests skipped
182 tests ignored
7 tests passed unexpectedly
0 tests failed unexpectedly
0 tests expected failed but skipped
0 tests expected failed but not run
---
--- PASS: org.postgresql.test.jdbc4.PGCopyInputStreamTest.testReadBytesCorrectlyReadsDataInChunks - https://github.com/cockroachdb/cockroach/issues/41608 (unexpected)
--- PASS: org.postgresql.test.jdbc4.PGCopyInputStreamTest.testMixedAPI - https://github.com/cockroachdb/cockroach/issues/41608 (unexpected)
--- PASS: org.postgresql.test.jdbc2.CopyTest.testCopyOutByRow - https://github.com/cockroachdb/cockroach/issues/41608 (unexpected)
--- PASS: org.postgresql.test.jdbc4.PGCopyInputStreamTest.testReadBytesCorrectlyHandlesEof - https://github.com/cockroachdb/cockroach/issues/41608 (unexpected)
--- PASS: org.postgresql.test.jdbc4.PGCopyInputStreamTest.testCopyAPI - https://github.com/cockroachdb/cockroach/issues/41608 (unexpected)
--- PASS: org.postgresql.test.jdbc2.CopyTest.testCopyOut - https://github.com/cockroachdb/cockroach/issues/41608 (unexpected)
--- PASS: org.postgresql.test.jdbc2.CopyTest.testCopyQuery - https://github.com/cockroachdb/cockroach/issues/41608 (unexpected)
For a full summary look at the pgjdbc artifacts
An updated blocklist (pgjdbcBlocklist) is available in the artifacts' pgjdbc log
```
<p>Parameters: <code>ROACHTEST_cloud=gce</code>
, <code>ROACHTEST_cpu=4</code>
, <code>ROACHTEST_encrypted=false</code>
, <code>ROACHTEST_fs=ext4</code>
, <code>ROACHTEST_localSSD=true</code>
, <code>ROACHTEST_ssd=0</code>
</p>
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
<details><summary>Same failure on other branches</summary>
<p>
- #94292 roachtest: pgjdbc failed [C-test-failure O-roachtest O-robot T-sql-sessions branch-release-22.2]
- #93242 roachtest: pgjdbc failed [C-test-failure O-roachtest O-robot T-sql-sessions branch-release-22.1]
</p>
</details>
/cc @cockroachdb/sql-sessions
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*pgjdbc.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
| 2.0 | roachtest: pgjdbc failed - roachtest.pgjdbc [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/8740106?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/8740106?buildTab=artifacts#/pgjdbc) on master @ [7e2df35a2f6bf7a859bb0539c8ca43c4e72ed260](https://github.com/cockroachdb/cockroach/commits/7e2df35a2f6bf7a859bb0539c8ca43c4e72ed260):
```
test artifacts and logs in: /artifacts/pgjdbc/run_1
(orm_helpers.go:201).summarizeFailed:
Tests run on Cockroach v23.1.0-alpha.2-405-g7e2df35a2f
Tests run against pgjdbc REL42.3.3
5788 Total Tests Run
5055 tests passed
733 tests failed
72 tests skipped
182 tests ignored
7 tests passed unexpectedly
0 tests failed unexpectedly
0 tests expected failed but skipped
0 tests expected failed but not run
---
--- PASS: org.postgresql.test.jdbc4.PGCopyInputStreamTest.testReadBytesCorrectlyReadsDataInChunks - https://github.com/cockroachdb/cockroach/issues/41608 (unexpected)
--- PASS: org.postgresql.test.jdbc4.PGCopyInputStreamTest.testMixedAPI - https://github.com/cockroachdb/cockroach/issues/41608 (unexpected)
--- PASS: org.postgresql.test.jdbc2.CopyTest.testCopyOutByRow - https://github.com/cockroachdb/cockroach/issues/41608 (unexpected)
--- PASS: org.postgresql.test.jdbc4.PGCopyInputStreamTest.testReadBytesCorrectlyHandlesEof - https://github.com/cockroachdb/cockroach/issues/41608 (unexpected)
--- PASS: org.postgresql.test.jdbc4.PGCopyInputStreamTest.testCopyAPI - https://github.com/cockroachdb/cockroach/issues/41608 (unexpected)
--- PASS: org.postgresql.test.jdbc2.CopyTest.testCopyOut - https://github.com/cockroachdb/cockroach/issues/41608 (unexpected)
--- PASS: org.postgresql.test.jdbc2.CopyTest.testCopyQuery - https://github.com/cockroachdb/cockroach/issues/41608 (unexpected)
For a full summary look at the pgjdbc artifacts
An updated blocklist (pgjdbcBlocklist) is available in the artifacts' pgjdbc log
```
<p>Parameters: <code>ROACHTEST_cloud=gce</code>
, <code>ROACHTEST_cpu=4</code>
, <code>ROACHTEST_encrypted=false</code>
, <code>ROACHTEST_fs=ext4</code>
, <code>ROACHTEST_localSSD=true</code>
, <code>ROACHTEST_ssd=0</code>
</p>
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
<details><summary>Same failure on other branches</summary>
<p>
- #94292 roachtest: pgjdbc failed [C-test-failure O-roachtest O-robot T-sql-sessions branch-release-22.2]
- #93242 roachtest: pgjdbc failed [C-test-failure O-roachtest O-robot T-sql-sessions branch-release-22.1]
</p>
</details>
/cc @cockroachdb/sql-sessions
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*pgjdbc.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
| non_main | roachtest pgjdbc failed roachtest pgjdbc with on master test artifacts and logs in artifacts pgjdbc run orm helpers go summarizefailed tests run on cockroach alpha tests run against pgjdbc total tests run tests passed tests failed tests skipped tests ignored tests passed unexpectedly tests failed unexpectedly tests expected failed but skipped tests expected failed but not run pass org postgresql test pgcopyinputstreamtest testreadbytescorrectlyreadsdatainchunks unexpected pass org postgresql test pgcopyinputstreamtest testmixedapi unexpected pass org postgresql test copytest testcopyoutbyrow unexpected pass org postgresql test pgcopyinputstreamtest testreadbytescorrectlyhandleseof unexpected pass org postgresql test pgcopyinputstreamtest testcopyapi unexpected pass org postgresql test copytest testcopyout unexpected pass org postgresql test copytest testcopyquery unexpected for a full summary look at the pgjdbc artifacts an updated blocklist pgjdbcblocklist is available in the artifacts pgjdbc log parameters roachtest cloud gce roachtest cpu roachtest encrypted false roachtest fs roachtest localssd true roachtest ssd help see see same failure on other branches roachtest pgjdbc failed roachtest pgjdbc failed cc cockroachdb sql sessions | 0 |
185,319 | 21,786,157,494 | IssuesEvent | 2022-05-14 06:44:30 | classicvalues/AA-ionic-login | https://api.github.com/repos/classicvalues/AA-ionic-login | closed | CVE-2021-44908 (High) detected in sails-1.5.2.tgz - autoclosed | security vulnerability | ## CVE-2021-44908 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>sails-1.5.2.tgz</b></p></summary>
<p>API-driven framework for building realtime apps, using MVC conventions (based on Express and Socket.io)</p>
<p>Library home page: <a href="https://registry.npmjs.org/sails/-/sails-1.5.2.tgz">https://registry.npmjs.org/sails/-/sails-1.5.2.tgz</a></p>
<p>Path to dependency file: /Application/package.json</p>
<p>Path to vulnerable library: /Application/node_modules/sails/package.json</p>
<p>
Dependency Hierarchy:
- :x: **sails-1.5.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/classicvalues/AA-ionic-login/commit/d4f4480b7ddd8c520e4b02ea2008621ded4be6ab">d4f4480b7ddd8c520e4b02ea2008621ded4be6ab</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
SailsJS Sails.js <=1.4.0 is vulnerable to Prototype Pollution via controller/load-action-modules.js, function loadActionModules().
<p>Publish Date: 2022-03-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-44908>CVE-2021-44908</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-44908">https://nvd.nist.gov/vuln/detail/CVE-2021-44908</a></p>
<p>Release Date: 2022-03-17</p>
<p>Fix Resolution: sails - 1.0.0,0.12.10,0.12.2-0,0.12.11</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-44908 (High) detected in sails-1.5.2.tgz - autoclosed - ## CVE-2021-44908 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>sails-1.5.2.tgz</b></p></summary>
<p>API-driven framework for building realtime apps, using MVC conventions (based on Express and Socket.io)</p>
<p>Library home page: <a href="https://registry.npmjs.org/sails/-/sails-1.5.2.tgz">https://registry.npmjs.org/sails/-/sails-1.5.2.tgz</a></p>
<p>Path to dependency file: /Application/package.json</p>
<p>Path to vulnerable library: /Application/node_modules/sails/package.json</p>
<p>
Dependency Hierarchy:
- :x: **sails-1.5.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/classicvalues/AA-ionic-login/commit/d4f4480b7ddd8c520e4b02ea2008621ded4be6ab">d4f4480b7ddd8c520e4b02ea2008621ded4be6ab</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
SailsJS Sails.js <=1.4.0 is vulnerable to Prototype Pollution via controller/load-action-modules.js, function loadActionModules().
<p>Publish Date: 2022-03-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-44908>CVE-2021-44908</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-44908">https://nvd.nist.gov/vuln/detail/CVE-2021-44908</a></p>
<p>Release Date: 2022-03-17</p>
<p>Fix Resolution: sails - 1.0.0,0.12.10,0.12.2-0,0.12.11</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve high detected in sails tgz autoclosed cve high severity vulnerability vulnerable library sails tgz api driven framework for building realtime apps using mvc conventions based on express and socket io library home page a href path to dependency file application package json path to vulnerable library application node modules sails package json dependency hierarchy x sails tgz vulnerable library found in head commit a href found in base branch master vulnerability details sailsjs sails js is vulnerable to prototype pollution via controller load action modules js function loadactionmodules publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution sails step up your open source security game with whitesource | 0 |
105,089 | 9,022,910,451 | IssuesEvent | 2019-02-07 04:17:39 | owncloud/core | https://api.github.com/repos/owncloud/core | closed | progress of automated acceptance tests for locks | QA-team dev:acceptance-tests | As we discovered that litmus does not test enough of the WebDAV locking, this issue tracks the progress of automated acceptance tests for locks
## Lock symbol appears correctly in the webUI
0.5md to finish but IMO it would be over-engineering
| things to check| user without email address | user with email address set | user with display-name set | lockscope | depth|
|----|----|----|----|----| --- |
| user locks own file/folder| :robot: | :robot: | :robot: | :robot: |
| user locks own folder, lock propagates into subfolders| :robot: | | | | :robot: |
| owner locks shared file/folder| | :robot: | :robot: | :robot: |
| share receiver locks file/folder| | :robot: | :robot: | :robot: |
| share receiver re-shares file/folder| | :robot: | :robot: | :robot: |
| ~~public locks a file/folder~~ | :robot: | N/A | N/A| |
| multiple locks are set by various users| | :robot: | :robot: | N/A|
| unlocking by DAV removes the symbol |:robot: |
| deleting the lock removes the symbol & lock of locked item |:robot: |||:robot: |
| deleting the lock removes the symbol & lock of locked sub items |:robot: |||:robot: |
## Locks can be deleted from the webUI
0.25md to finish
| things to check| exclusive | shared |
|-----|---|--|
| owner deletes lock that was set by herself| :robot: | :robot: |
| share receiver deletes lock that was set by herself |
| owner deletes lock that was set by share receiver |:robot: | :robot: |
| owner deletes lock that was set by the public|
| share receiver deletes lock that was set by owner|
| delete lock from a sub-element |
## operation on locked files/folders
1.5md to finish
owner locked - owner tries the operation: ol-oo
owner locked - share receiver tries the operation: ol-ro
share receiver locked - owner tries the operation: rl-oo
share receiver locked - share receiver tries the operation: rl-ro
~~owner locked - public tries the operation: ol-po~~
~~public locked - owner tries the operation: pl-oo~~
~~public locked - public tries the operation: pl-po~~
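The column codes above combine who set the lock with who attempts the operation; a tiny hypothetical helper makes the encoding explicit, e.g. when generating scenario names for the table:

```python
ROLES = {"o": "owner", "r": "share receiver", "p": "public"}

def expand(code):
    """Expand a column code like "ol-ro" into (who locked, who operates).

    First part is "<role>l" (locked by), second part "<role>o" (operated by).
    """
    locker, actor = code.split("-")
    return ROLES[locker[0]], ROLES[actor[0]]

print(expand("ol-ro"))  # → ('owner', 'share receiver')
```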
|operation|UI - ol-oo|UI - ol-ro|UI - rl-oo|UI - rl-ro|UI - ol-po|UI - pl-oo|UI - pl-po|DAV - ol-oo|DAV - ol-ro|DAV - rl-oo|DAV - rl-ro|DAV - ol-po|DAV - pl-oo|DAV - pl-po|
|----|-----|-----|-----|-----|-----|------|-----|-----|------------|------------|------------|------------|------------|------------|
|download|
|upload in locked folder|:robot:| | | | | | | :robot: |:robot: #34327 |:robot: #34327 |:robot: #34327 | :robot: |:robot: #34327|
|upload overwrite in locked folder|:robot:| | | | :robot: |:robot: #34327 | | | | | | :robot: |
|upload overwrite locked file |:robot:|
|rename|:robot:| | | | :robot: |
|move_locked|:robot:| | | | :robot: |
|move_in|:robot:|
|move_in_overwrite|:robot:|
|move_in_subdir|
|move_in_subdir_overwrite|
|move_out| | | | | | | |:robot: #34370|
|move_out_subdir|
|copy_in|
|copy_in_overwrite|
|delete|:robot:| | | | :robot: |
|mkdir| | | | | | | | :robot: | | | | |
|rmdir|
|share| | | | | N/A| | | | | | | N/A| | N/A|
|unshare| | :robot: | | | N/A | | | | | | | N/A | | N/A|
|unlock|:robot: | | :robot: | | | | | :robot: #34303 | :robot: #34303 | :robot: #34303 | :robot: #34303 | :robot: #34303 | :robot: #34303 | :robot: #34303 |
|lock exclusively locked| | | | | | | | | | | :robot: |
|accept share|N/A| | | | N/A | N/A|N/A|N/A| | | | N/A | N/A| N/A|
|decline share|N/A| :robot: | | | N/A |N/A| N/A|N/A| | | | N/A | N/A| N/A|
|accept declined share|N/A| :robot: | | | N/A |N/A| N/A|N/A| | | | N/A | N/A| N/A|
|decline accepted share|N/A| :robot: | | | N/A |N/A| N/A|N/A| | | | N/A | N/A| N/A|
more variations:
- with own token :robot:
- with token of an other user but own username/password :robot:
- reshared shares
- lockscope shared/exclusive
- ~~locks set in levels above the public link~~
- depth 0/infinity
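All the DAV variants above drive the same `LOCK` request; the shared/exclusive variations differ only in the lockscope element of the request body (per RFC 4918, section 9.10; depth and timeout travel in the Depth and Timeout headers, not in the body). A sketch of building that body, with a hypothetical owner href:

```python
import xml.etree.ElementTree as ET

def lockinfo_xml(scope="exclusive", owner_href="http://example.org/~user"):
    """Build the XML body of a WebDAV LOCK request (RFC 4918, 9.10).

    `scope` is "exclusive" or "shared"; the owner href is illustrative.
    """
    ET.register_namespace("d", "DAV:")
    lockinfo = ET.Element("{DAV:}lockinfo")
    lockscope = ET.SubElement(lockinfo, "{DAV:}lockscope")
    ET.SubElement(lockscope, "{DAV:}%s" % scope)  # exclusive or shared
    locktype = ET.SubElement(lockinfo, "{DAV:}locktype")
    ET.SubElement(locktype, "{DAV:}write")        # only write locks exist in 4918
    owner = ET.SubElement(lockinfo, "{DAV:}owner")
    ET.SubElement(owner, "{DAV:}href").text = owner_href
    return ET.tostring(lockinfo, encoding="unicode")
```

Sending the same body again with the other lockscope is exactly the "lock exclusively locked" row: the server is expected to refuse with 423 Locked when an exclusive lock is already held.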
## LOCKDISCOVERY
0.5md to finish
| | lockroot | locktoken | timeout #33846 | owner | count of locks |
| ---- | ---- | ---- | ---- | ---- | ---- |
| owner locked - owner tries discovery | | | :robot: #34320 | | :robot: |
| owner locked - share receiver tries discovery | | | :robot: #34320 |
| share receiver locked - owner tries discovery | | |:robot: #34320 | | :robot: |
| share receiver locked - share receiver tries discovery | | | | | :robot: |
| owner locked - public tries discovery | :robot: | :robot: |:robot: #34320|
| ~~public locked - owner tries discovery~~ | | | :robot: #34320 |
| ~~public locked - public tries discovery~~ | :robot: | :robot: |
more variations:
- lockscope shared/exclusive
- discovery of various levels
- locks on various levels
- set timeouts
- refresh lock | 1.0 | progress of automated acceptance tests for locks - As we discovered that litmus does not test enough of the WebDAV locking, this issue tracks the progress of automated acceptance tests for locks
## Lock symbol appears correctly in the webUI
0.5md to finish, but IMO it would be over-engineering
| things to check| user without email address | user with email address set | user with display-name set | lockscope | depth|
|----|----|----|----|----| --- |
| user locks own file/folder| :robot: | :robot: | :robot: | :robot: |
| user locks own folder, lock propagates into subfolders| :robot: | | | | :robot: |
| owner locks shared file/folder| | :robot: | :robot: | :robot: |
| share receiver locks file/folder| | :robot: | :robot: | :robot: |
| share receiver re-shares file/folder| | :robot: | :robot: | :robot: |
| ~~public locks a file/folder~~ | :robot: | N/A | N/A| |
| multiple locks are set by various users| | :robot: | :robot: | N/A|
| unlocking by DAV removes the symbol |:robot: |
| deleting the lock removes the symbol & lock of locked item |:robot: |||:robot: |
| deleting the lock removes the symbol & lock of locked sub items |:robot: |||:robot: |
## Locks can be deleted from the webUI
0.25md to finish
| things to check| exclusive | shared |
|-----|---|--|
| owner deletes lock that was set by herself| :robot: | :robot: |
| share receiver deletes lock that was set by herself |
| owner deletes lock that was set by share receiver |:robot: | :robot: |
| owner deletes lock that was set by the public|
| share receiver deletes lock that was set by owner|
| delete lock from a sub-element |
## operation on locked files/folders
1.5md to finish
owner locked - owner tries the operation: ol-oo
owner locked - share receiver tries the operation: ol-ro
share receiver locked - owner tries the operation: rl-oo
share receiver locked - share receiver tries the operation: rl-ro
~~owner locked - public tries the operation: ol-po~~
~~public locked - owner tries the operation: pl-oo~~
~~public locked - public tries the operation: pl-po~~
|operation|UI - ol-oo|UI - ol-ro|UI - rl-oo|UI - rl-ro|UI - ol-po|UI - pl-oo|UI - pl-po|DAV - ol-oo|DAV - ol-ro|DAV - rl-oo|DAV - rl-ro|DAV - ol-po|DAV - pl-oo|DAV - pl-po|
|----|-----|-----|-----|-----|-----|------|-----|-----|------------|------------|------------|------------|------------|------------|
|download|
|upload in locked folder|:robot:| | | | | | | :robot: |:robot: #34327 |:robot: #34327 |:robot: #34327 | :robot: |:robot: #34327|
|upload overwrite in locked folder|:robot:| | | | :robot: |:robot: #34327 | | | | | | :robot: |
|upload overwrite locked file |:robot:|
|rename|:robot:| | | | :robot: |
|move_locked|:robot:| | | | :robot: |
|move_in|:robot:|
|move_in_overwrite|:robot:|
|move_in_subdir|
|move_in_subdir_overwrite|
|move_out| | | | | | | |:robot: #34370|
|move_out_subdir|
|copy_in|
|copy_in_overwrite|
|delete|:robot:| | | | :robot: |
|mkdir| | | | | | | | :robot: | | | | |
|rmdir|
|share| | | | | N/A| | | | | | | N/A| | N/A|
|unshare| | :robot: | | | N/A | | | | | | | N/A | | N/A|
|unlock|:robot: | | :robot: | | | | | :robot: #34303 | :robot: #34303 | :robot: #34303 | :robot: #34303 | :robot: #34303 | :robot: #34303 | :robot: #34303 |
|lock exclusively locked| | | | | | | | | | | :robot: |
|accept share|N/A| | | | N/A | N/A|N/A|N/A| | | | N/A | N/A| N/A|
|decline share|N/A| :robot: | | | N/A |N/A| N/A|N/A| | | | N/A | N/A| N/A|
|accept declined share|N/A| :robot: | | | N/A |N/A| N/A|N/A| | | | N/A | N/A| N/A|
|decline accepted share|N/A| :robot: | | | N/A |N/A| N/A|N/A| | | | N/A | N/A| N/A|
more variations:
- with own token :robot:
- with token of another user but own username/password :robot:
- reshared shares
- lockscope shared/exclusive
- ~~locks set in levels above the public link~~
- depth 0/infinity
## LOCKDISCOVERY
0.5md to finish
| | lockroot | locktoken | timeout #33846 | owner | count of locks |
| ---- | ---- | ---- | ---- | ---- | ---- |
| owner locked - owner tries discovery | | | :robot: #34320 | | :robot: |
| owner locked - share receiver tries discovery | | | :robot: #34320 |
| share receiver locked - owner tries discovery | | |:robot: #34320 | | :robot: |
| share receiver locked - share receiver tries discovery | | | | | :robot: |
| owner locked - public tries discovery | :robot: | :robot: |:robot: #34320|
| ~~public locked - owner tries discovery~~ | | | :robot: #34320 |
| ~~public locked - public tries discovery~~ | :robot: | :robot: |
more variations:
- lockscope shared/exclusive
- discovery of various levels
- locks on various levels
- set timeouts
- refresh lock | non_main | progress of automated acceptance tests for locks as we discovered that litmus does not test enough of the webdav locking this issue tracks the progress of automated acceptance tests for locks lock symbol appears correctly in the webui to finish but imo it would be over engineering things to check user without email address user with email address set user with display name set lockscope depth user locks own file folder robot robot robot robot user locks own folder lock propagates into subfolders robot robot owner locks shared file folder robot robot robot share receiver locks file folder robot robot robot share receiver re shares file folder robot robot robot public locks a file folder robot n a n a multiple locks are set by various users robot robot n a unlocking by dav removes the symbol robot deleting the lock removes the symbol lock of locked item robot robot deleting the lock removes the symbol lock of locked sub items robot robot locks can be deleted from the webui to finish things to check exclusive shared owner deletes lock that was set by herself robot robot share receiver deletes lock that was by herself owner deletes lock that was set by share receiver robot robot owner deletes lock that was set by the public share receiver deletes lock that was set by owner delete lock from a sub element operation on locked files folders to finish owner locked owner tries the operation ol oo owner locked share receiver tries the operation ol ro share receiver locked owner tries the operation rl oo share receiver locked share receiver tries the operation rl ro owner locked public tries the operation ol po public locked owner tries the operation pl oo public locked public tries the operation pl po operation ui ol oo ui ol ro ui rl oo ui rl ro ui ol po ui pl oo ui pl po dav ol oo dav ol ro dav rl oo dav rl ro dav ol po dav pl oo dav pl po download upload in locked folder robot robot robot robot robot robot robot upload overwrite in locked folder 
robot robot robot robot upload overwrite locked file robot rename robot robot move locked robot robot move in robot move in overwrite robot move in subdir move in subdir overwrite move out robot move out subdir copy in copy in overwrite delete robot robot mkdir robot rmdir share n a n a n a unshare robot n a n a n a unlock robot robot robot robot robot robot robot robot robot lock exclusively locked robot accept share n a n a n a n a n a n a n a n a decline share n a robot n a n a n a n a n a n a n a accept declined share n a robot n a n a n a n a n a n a n a decline accepted share n a robot n a n a n a n a n a n a n a more variations with own token robot with token of an other user but own username password robot reshared shares lockscope shared exclusive locks set in levels above the public link depth infinity lockdiscovery to finish lockroot locktoken timeout owner count of locks owner locked owner tries discovery robot robot owner locked share receiver tries discovery robot share receiver locked owner tries discovery robot robot share receiver locked share receiver tries discovery robot owner locked public tries discovery robot robot robot public locked owner tries discovery robot public locked public tries discovery robot robot more variations lockscope shared exclusive discovery of various levels locks on various levels set timeouts refresh lock | 0 |
225,193 | 17,243,777,069 | IssuesEvent | 2021-07-21 05:04:20 | curl/curl | https://api.github.com/repos/curl/curl | closed | dump-header help is awkward | cmdline tool documentation | https://github.com/curl/curl/blob/bc035f5c9dc4e0387a615d2cc5914f37a90be1e2/docs/cmdline-opts/dump-header.d#L11-L19
> This option is handy to use when you want to store the headers that an HTTP
site sends to you. Cookies from the headers could then be read in a second
curl invocation by using the --cookie option! The --cookie-jar option is a
better way to store cookies.
`--cookie-jar` is the recommended way to store cookies; it seems odd to actively suggest `--dump-header` for sending cookies only to immediately say _don't do this_.
> When used in FTP, the FTP server response lines are considered being "headers"
and thus are saved there.
I think I'd write `here` instead of `there`.
I'm not going to write a PR w/o input. | 1.0 | dump-header help is awkward - https://github.com/curl/curl/blob/bc035f5c9dc4e0387a615d2cc5914f37a90be1e2/docs/cmdline-opts/dump-header.d#L11-L19
> This option is handy to use when you want to store the headers that an HTTP
site sends to you. Cookies from the headers could then be read in a second
curl invocation by using the --cookie option! The --cookie-jar option is a
better way to store cookies.
`--cookie-jar` is the recommended way to store cookies; it seems odd to actively suggest `--dump-header` for sending cookies only to immediately say _don't do this_.
> When used in FTP, the FTP server response lines are considered being "headers"
and thus are saved there.
I think I'd write `here` instead of `there`.
I'm not going to write a PR w/o input. | non_main | dump header help is awkward this option is handy to use when you want to store the headers that an http site sends to you cookies from the headers could then be read in a second curl invocation by using the cookie option the cookie jar option is a better way to store cookies cookie jar is the recommended way to store cookies it seems odd to actively suggest dump header for sending cookies only to immediately say don t do this when used in ftp the ftp server response lines are considered being headers and thus are saved there i think i d write here instead of there i m not going to write a pr w o input | 0 |
2,395 | 8,499,960,269 | IssuesEvent | 2018-10-29 18:30:42 | Qo2770/Algorithms | https://api.github.com/repos/Qo2770/Algorithms | closed | Algorithms is searching for Maintainers! | Maintainers Wanted | *This issue was created by [Maintainers Wanted](https://maintainerswanted.com)* :nerd_face:
*Support us by leaving a star on [Github](https://github.com/flxwu/maintainerswanted.com)!* :star2:
## Algorithms is searching for new Maintainers! :man_technologist: :mailbox_with_mail:
Do you use Algorithms personally or at work and would like this project to be further developed and improved?
Or are you already a contributor and ready to take the next step to becoming a maintainer?
If you are interested, comment here below on this issue :point_down: or drop me a message on [Twitter](https://twitter.com/Qo2770)! :raised_hands: | True | Algorithms is searching for Maintainers! - *This issue was created by [Maintainers Wanted](https://maintainerswanted.com)* :nerd_face:
*Support us by leaving a star on [Github](https://github.com/flxwu/maintainerswanted.com)!* :star2:
## Algorithms is searching for new Maintainers! :man_technologist: :mailbox_with_mail:
Do you use Algorithms personally or at work and would like this project to be further developed and improved?
Or are you already a contributor and ready to take the next step to becoming a maintainer?
If you are interested, comment here below on this issue :point_down: or drop me a message on [Twitter](https://twitter.com/Qo2770)! :raised_hands: | main | algorithms is searching for maintainers this issue was created by nerd face support us by leaving a star on algorithms is searching for new maintainers man technologist mailbox with mail do you use algorithms personally or at work and would like this project to be further developed and improved or are you already a contributor and ready to take the next step to becoming a maintainer if you are interested comment here below on this issue point down or drop me a message on raised hands | 1 |
513,773 | 14,926,575,696 | IssuesEvent | 2021-01-24 12:05:36 | robingenz/dhbw-dualis-app | https://api.github.com/repos/robingenz/dhbw-dualis-app | closed | bug: timeout causes unknown error | bug/fix priority: medium | ```
2020-11-29 13:31:53.576 10548-11382/de.robingenz.dhbw.dualis W/Cordova-Plugin-HTTP: Request timed out
com.silkimen.http.HttpRequest$HttpRequestException: java.net.SocketTimeoutException: timeout
at com.silkimen.http.HttpRequest.code(HttpRequest.java:1449)
at com.silkimen.http.HttpRequest.stream(HttpRequest.java:1740)
at com.silkimen.http.HttpRequest.buffer(HttpRequest.java:1729)
at com.silkimen.http.HttpRequest.receive(HttpRequest.java:1856)
at com.silkimen.cordovahttp.CordovaHttpBase.processResponse(CordovaHttpBase.java:195)
at com.silkimen.cordovahttp.CordovaHttpBase.run(CordovaHttpBase.java:81)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:462)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
at java.lang.Thread.run(Thread.java:919)
Caused by: java.net.SocketTimeoutException: timeout
at com.android.okhttp.okio.Okio$3.newTimeoutException(Okio.java:214)
at com.android.okhttp.okio.AsyncTimeout.exit(AsyncTimeout.java:263)
at com.android.okhttp.okio.AsyncTimeout$2.read(AsyncTimeout.java:217)
at com.android.okhttp.okio.RealBufferedSource.indexOf(RealBufferedSource.java:307)
at com.android.okhttp.okio.RealBufferedSource.indexOf(RealBufferedSource.java:301)
at com.android.okhttp.okio.RealBufferedSource.readUtf8LineStrict(RealBufferedSource.java:197)
at com.android.okhttp.internal.http.Http1xStream.readResponse(Http1xStream.java:188)
at com.android.okhttp.internal.http.Http1xStream.readResponseHeaders(Http1xStream.java:129)
at com.android.okhttp.internal.http.HttpEngine.readNetworkResponse(HttpEngine.java:750)
at com.android.okhttp.internal.http.HttpEngine.readResponse(HttpEngine.java:622)
at com.android.okhttp.internal.huc.HttpURLConnectionImpl.execute(HttpURLConnectionImpl.java:475)
at com.android.okhttp.internal.huc.HttpURLConnectionImpl.getResponse(HttpURLConnectionImpl.java:411)
at com.android.okhttp.internal.huc.HttpURLConnectionImpl.getResponseCode(HttpURLConnectionImpl.java:542)
at com.android.okhttp.internal.huc.DelegatingHttpsURLConnection.getResponseCode(DelegatingHttpsURLConnection.java:106)
at com.android.okhttp.internal.huc.HttpsURLConnectionImpl.getResponseCode(HttpsURLConnectionImpl.java:30)
at com.silkimen.http.HttpRequest.code(HttpRequest.java:1447)
at com.silkimen.http.HttpRequest.stream(HttpRequest.java:1740)
at com.silkimen.http.HttpRequest.buffer(HttpRequest.java:1729)
at com.silkimen.http.HttpRequest.receive(HttpRequest.java:1856)
at com.silkimen.cordovahttp.CordovaHttpBase.processResponse(CordovaHttpBase.java:195)
at com.silkimen.cordovahttp.CordovaHttpBase.run(CordovaHttpBase.java:81)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:462)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
at java.lang.Thread.run(Thread.java:919)
Caused by: java.net.SocketException: socket is closed
at com.android.org.conscrypt.ConscryptFileDescriptorSocket$SSLInputStream.read(ConscryptFileDescriptorSocket.java:554)
at com.android.okhttp.okio.Okio$2.read(Okio.java:138)
at com.android.okhttp.okio.AsyncTimeout$2.read(AsyncTimeout.java:213)
at com.android.okhttp.okio.RealBufferedSource.indexOf(RealBufferedSource.java:307)
at com.android.okhttp.okio.RealBufferedSource.indexOf(RealBufferedSource.java:301)
at com.android.okhttp.okio.RealBufferedSource.readUtf8LineStrict(RealBufferedSource.java:197)
at com.android.okhttp.internal.http.Http1xStream.readResponse(Http1xStream.java:188)
at com.android.okhttp.internal.http.Http1xStream.readResponseHeaders(Http1xStream.java:129)
at com.android.okhttp.internal.http.HttpEngine.readNetworkResponse(HttpEngine.java:750)
at com.android.okhttp.internal.http.HttpEngine.readResponse(HttpEngine.java:622)
at com.android.okhttp.internal.huc.HttpURLConnectionImpl.execute(HttpURLConnectionImpl.java:475)
at com.android.okhttp.internal.huc.HttpURLConnectionImpl.getResponse(HttpURLConnectionImpl.java:411)
at com.android.okhttp.internal.huc.HttpURLConnectionImpl.getResponseCode(HttpURLConnectionImpl.java:542)
at com.android.okhttp.internal.huc.DelegatingHttpsURLConnection.getResponseCode(DelegatingHttpsURLConnection.java:106)
at com.android.okhttp.internal.huc.HttpsURLConnectionImpl.getResponseCode(HttpsURLConnectionImpl.java:30)
at com.silkimen.http.HttpRequest.code(HttpRequest.java:1447)
at com.silkimen.http.HttpRequest.stream(HttpRequest.java:1740)
at com.silkimen.http.HttpRequest.buffer(HttpRequest.java:1729)
at com.silkimen.http.HttpRequest.receive(HttpRequest.java:1856)
at com.silkimen.cordovahttp.CordovaHttpBase.processResponse(CordovaHttpBase.java:195)
at com.silkimen.cordovahttp.CordovaHttpBase.run(CordovaHttpBase.java:81)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:462)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
at java.lang.Thread.run(Thread.java:919)
2020-11-29 13:31:53.586 10548-10548/de.robingenz.dhbw.dualis E/Capacitor/Console: File: http://192.168.2.120:8100/main.js - Line 41 - Msg: [NativeHttpService] [object Object]
2020-11-29 13:31:53.814 10548-10548/de.robingenz.dhbw.dualis E/Capacitor/Console: File: http://192.168.2.120:8100/main.js - Line 135 - Msg: Error: Uncaught (in promise): Error: [NativeHttpService] Unknown error occurred.
Error: [NativeHttpService] Unknown error occurred.
``` | 1.0 | bug: timeout causes unknown error - ```
2020-11-29 13:31:53.576 10548-11382/de.robingenz.dhbw.dualis W/Cordova-Plugin-HTTP: Request timed out
com.silkimen.http.HttpRequest$HttpRequestException: java.net.SocketTimeoutException: timeout
at com.silkimen.http.HttpRequest.code(HttpRequest.java:1449)
at com.silkimen.http.HttpRequest.stream(HttpRequest.java:1740)
at com.silkimen.http.HttpRequest.buffer(HttpRequest.java:1729)
at com.silkimen.http.HttpRequest.receive(HttpRequest.java:1856)
at com.silkimen.cordovahttp.CordovaHttpBase.processResponse(CordovaHttpBase.java:195)
at com.silkimen.cordovahttp.CordovaHttpBase.run(CordovaHttpBase.java:81)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:462)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
at java.lang.Thread.run(Thread.java:919)
Caused by: java.net.SocketTimeoutException: timeout
at com.android.okhttp.okio.Okio$3.newTimeoutException(Okio.java:214)
at com.android.okhttp.okio.AsyncTimeout.exit(AsyncTimeout.java:263)
at com.android.okhttp.okio.AsyncTimeout$2.read(AsyncTimeout.java:217)
at com.android.okhttp.okio.RealBufferedSource.indexOf(RealBufferedSource.java:307)
at com.android.okhttp.okio.RealBufferedSource.indexOf(RealBufferedSource.java:301)
at com.android.okhttp.okio.RealBufferedSource.readUtf8LineStrict(RealBufferedSource.java:197)
at com.android.okhttp.internal.http.Http1xStream.readResponse(Http1xStream.java:188)
at com.android.okhttp.internal.http.Http1xStream.readResponseHeaders(Http1xStream.java:129)
at com.android.okhttp.internal.http.HttpEngine.readNetworkResponse(HttpEngine.java:750)
at com.android.okhttp.internal.http.HttpEngine.readResponse(HttpEngine.java:622)
at com.android.okhttp.internal.huc.HttpURLConnectionImpl.execute(HttpURLConnectionImpl.java:475)
at com.android.okhttp.internal.huc.HttpURLConnectionImpl.getResponse(HttpURLConnectionImpl.java:411)
at com.android.okhttp.internal.huc.HttpURLConnectionImpl.getResponseCode(HttpURLConnectionImpl.java:542)
at com.android.okhttp.internal.huc.DelegatingHttpsURLConnection.getResponseCode(DelegatingHttpsURLConnection.java:106)
at com.android.okhttp.internal.huc.HttpsURLConnectionImpl.getResponseCode(HttpsURLConnectionImpl.java:30)
at com.silkimen.http.HttpRequest.code(HttpRequest.java:1447)
at com.silkimen.http.HttpRequest.stream(HttpRequest.java:1740)
at com.silkimen.http.HttpRequest.buffer(HttpRequest.java:1729)
at com.silkimen.http.HttpRequest.receive(HttpRequest.java:1856)
at com.silkimen.cordovahttp.CordovaHttpBase.processResponse(CordovaHttpBase.java:195)
at com.silkimen.cordovahttp.CordovaHttpBase.run(CordovaHttpBase.java:81)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:462)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
at java.lang.Thread.run(Thread.java:919)
Caused by: java.net.SocketException: socket is closed
at com.android.org.conscrypt.ConscryptFileDescriptorSocket$SSLInputStream.read(ConscryptFileDescriptorSocket.java:554)
at com.android.okhttp.okio.Okio$2.read(Okio.java:138)
at com.android.okhttp.okio.AsyncTimeout$2.read(AsyncTimeout.java:213)
at com.android.okhttp.okio.RealBufferedSource.indexOf(RealBufferedSource.java:307)
at com.android.okhttp.okio.RealBufferedSource.indexOf(RealBufferedSource.java:301)
at com.android.okhttp.okio.RealBufferedSource.readUtf8LineStrict(RealBufferedSource.java:197)
at com.android.okhttp.internal.http.Http1xStream.readResponse(Http1xStream.java:188)
at com.android.okhttp.internal.http.Http1xStream.readResponseHeaders(Http1xStream.java:129)
at com.android.okhttp.internal.http.HttpEngine.readNetworkResponse(HttpEngine.java:750)
at com.android.okhttp.internal.http.HttpEngine.readResponse(HttpEngine.java:622)
at com.android.okhttp.internal.huc.HttpURLConnectionImpl.execute(HttpURLConnectionImpl.java:475)
at com.android.okhttp.internal.huc.HttpURLConnectionImpl.getResponse(HttpURLConnectionImpl.java:411)
at com.android.okhttp.internal.huc.HttpURLConnectionImpl.getResponseCode(HttpURLConnectionImpl.java:542)
at com.android.okhttp.internal.huc.DelegatingHttpsURLConnection.getResponseCode(DelegatingHttpsURLConnection.java:106)
at com.android.okhttp.internal.huc.HttpsURLConnectionImpl.getResponseCode(HttpsURLConnectionImpl.java:30)
at com.silkimen.http.HttpRequest.code(HttpRequest.java:1447)
at com.silkimen.http.HttpRequest.stream(HttpRequest.java:1740)
at com.silkimen.http.HttpRequest.buffer(HttpRequest.java:1729)
at com.silkimen.http.HttpRequest.receive(HttpRequest.java:1856)
at com.silkimen.cordovahttp.CordovaHttpBase.processResponse(CordovaHttpBase.java:195)
at com.silkimen.cordovahttp.CordovaHttpBase.run(CordovaHttpBase.java:81)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:462)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
at java.lang.Thread.run(Thread.java:919)
2020-11-29 13:31:53.586 10548-10548/de.robingenz.dhbw.dualis E/Capacitor/Console: File: http://192.168.2.120:8100/main.js - Line 41 - Msg: [NativeHttpService] [object Object]
2020-11-29 13:31:53.814 10548-10548/de.robingenz.dhbw.dualis E/Capacitor/Console: File: http://192.168.2.120:8100/main.js - Line 135 - Msg: Error: Uncaught (in promise): Error: [NativeHttpService] Unknown error occurred.
Error: [NativeHttpService] Unknown error occurred.
``` | non_main | bug timeout causes unknown error de robingenz dhbw dualis w cordova plugin http request timed out com silkimen http httprequest httprequestexception java net sockettimeoutexception timeout at com silkimen http httprequest code httprequest java at com silkimen http httprequest stream httprequest java at com silkimen http httprequest buffer httprequest java at com silkimen http httprequest receive httprequest java at com silkimen cordovahttp cordovahttpbase processresponse cordovahttpbase java at com silkimen cordovahttp cordovahttpbase run cordovahttpbase java at java util concurrent executors runnableadapter call executors java at java util concurrent futuretask run futuretask java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java caused by java net sockettimeoutexception timeout at com android okhttp okio okio newtimeoutexception okio java at com android okhttp okio asynctimeout exit asynctimeout java at com android okhttp okio asynctimeout read asynctimeout java at com android okhttp okio realbufferedsource indexof realbufferedsource java at com android okhttp okio realbufferedsource indexof realbufferedsource java at com android okhttp okio realbufferedsource realbufferedsource java at com android okhttp internal http readresponse java at com android okhttp internal http readresponseheaders java at com android okhttp internal http httpengine readnetworkresponse httpengine java at com android okhttp internal http httpengine readresponse httpengine java at com android okhttp internal huc httpurlconnectionimpl execute httpurlconnectionimpl java at com android okhttp internal huc httpurlconnectionimpl getresponse httpurlconnectionimpl java at com android okhttp internal huc httpurlconnectionimpl getresponsecode httpurlconnectionimpl java at com android okhttp internal huc delegatinghttpsurlconnection 
getresponsecode delegatinghttpsurlconnection java at com android okhttp internal huc httpsurlconnectionimpl getresponsecode httpsurlconnectionimpl java at com silkimen http httprequest code httprequest java at com silkimen http httprequest stream httprequest java at com silkimen http httprequest buffer httprequest java at com silkimen http httprequest receive httprequest java at com silkimen cordovahttp cordovahttpbase processresponse cordovahttpbase java at com silkimen cordovahttp cordovahttpbase run cordovahttpbase java at java util concurrent executors runnableadapter call executors java at java util concurrent futuretask run futuretask java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java caused by java net socketexception socket is closed at com android org conscrypt conscryptfiledescriptorsocket sslinputstream read conscryptfiledescriptorsocket java at com android okhttp okio okio read okio java at com android okhttp okio asynctimeout read asynctimeout java at com android okhttp okio realbufferedsource indexof realbufferedsource java at com android okhttp okio realbufferedsource indexof realbufferedsource java at com android okhttp okio realbufferedsource realbufferedsource java at com android okhttp internal http readresponse java at com android okhttp internal http readresponseheaders java at com android okhttp internal http httpengine readnetworkresponse httpengine java at com android okhttp internal http httpengine readresponse httpengine java at com android okhttp internal huc httpurlconnectionimpl execute httpurlconnectionimpl java at com android okhttp internal huc httpurlconnectionimpl getresponse httpurlconnectionimpl java at com android okhttp internal huc httpurlconnectionimpl getresponsecode httpurlconnectionimpl java at com android okhttp internal huc delegatinghttpsurlconnection getresponsecode 
delegatinghttpsurlconnection java at com android okhttp internal huc httpsurlconnectionimpl getresponsecode httpsurlconnectionimpl java at com silkimen http httprequest code httprequest java at com silkimen http httprequest stream httprequest java at com silkimen http httprequest buffer httprequest java at com silkimen http httprequest receive httprequest java at com silkimen cordovahttp cordovahttpbase processresponse cordovahttpbase java at com silkimen cordovahttp cordovahttpbase run cordovahttpbase java at java util concurrent executors runnableadapter call executors java at java util concurrent futuretask run futuretask java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java de robingenz dhbw dualis e capacitor console file line msg de robingenz dhbw dualis e capacitor console file line msg error uncaught in promise error unknown error occurred error unknown error occurred | 0 |
17,287 | 10,679,095,297 | IssuesEvent | 2019-10-21 18:35:12 | cityofaustin/atd-vz-data | https://api.github.com/repos/cityofaustin/atd-vz-data | opened | VZE: Remove Sidebar? | Project: Vision Zero Crash Data System Service: Dev Type: Enhancement Workgroup: VZ | For discussion: do we even need the sidebar? It takes up a 1/6th of the page width. E.g. can we move to a header row?
| 1.0 | VZE: Remove Sidebar? - For discussion: do we even need the sidebar? It takes up a 1/6th of the page width. E.g. can we move to a header row?
| non_main | vze remove sidebar for discussion do we even need the sidebar it takes up a of the page width e g can we move to a header row | 0 |
128,164 | 5,050,073,350 | IssuesEvent | 2016-12-20 17:36:23 | tableau-mkt/componentize | https://api.github.com/repos/tableau-mkt/componentize | closed | Context to control template vars | est-5 priority-low status-invalid value-1 | Add a Context Component Alter reaction option to set a template variable value... needs to be translatable, which requires string registering, etc.
Pull request: https://github.com/tableau-mkt/componentize/pull/15
| 1.0 | Context to control template vars - Add a Context Component Alter reaction option to set a template variable value... needs to be translatable which requires string registering, etc.
Pull request: https://github.com/tableau-mkt/componentize/pull/15
| non_main | context to control template vars add a context component alter reaction option to set a template variable value needs to be translatable which requires string registering etc pull request | 0 |
4,549 | 2,610,115,645 | IssuesEvent | 2015-02-26 18:35:53 | chrsmith/scribefire-chrome | https://api.github.com/repos/chrsmith/scribefire-chrome | closed | Paragraph Style Problem | auto-migrated Priority-Medium Type-Defect | ```
What's the problem?
The first paragraph of posts I create using ScribeFire always comes through as
the correct style ("Paragraph"). All subsequent paragraphs become some other
style. This is somewhat frustrating since it means that I need to go into
WordPress to make the fix, somewhat defeating the purpose of using Scribefire.
Even just having a true styles dropdown menu (beyond one just for Header
styles) would go a long way.
The same is true for Scribefire for Safari. I'm not using ScribeFire for
Firefox because I have two blogs (one disabled/inactive and one active) and
ScribeFire for Firefox is only recognizing/finding the defunct blog.
Anyway, besides this, this is a GREAT tool. Keep up the good work.
What version of ScribeFire for Chrome are you running?
For Chrome: 1.0.0.1
For Safari: 1.0.0.2
```
-----
Original issue reported on code.google.com by `nassimsu...@gmail.com` on 9 Jul 2010 at 8:13 | 1.0 | Paragraph Style Problem - ```
What's the problem?
The first paragraph of posts I create using ScribeFire always come through as
the correct style ("Paragraph"). All subsequent paragraphs become some other
style. This is somewhat frustrating since it means that I need to go into
WordPress to make the fix, somewhat defeating the purpose of using Scribefire.
Even just having a true styles dropdown menu (beyond one just for Header
styles) would go a long way.
The same is true for Scribefire for Safari. I'm not using ScribeFire for
Firefox because I have two blogs (one disabled/inactive and one active) and
ScribeFire for Firefox is only recognizing/finding the defunct blog.
Anyway, besides this, this is a GREAT tool. Keep up the good work.
What version of ScribeFire for Chrome are you running?
For Chrome: 1.0.0.1
For Safari: 1.0.0.2
```
-----
Original issue reported on code.google.com by `nassimsu...@gmail.com` on 9 Jul 2010 at 8:13 | non_main | paragraph style problem what s the problem the first paragraph of posts i create using scribefire always come through as the correct style paragraph all subsequent paragraphs become some other style this is somewhat frustrating since it means that i need to go into wordpress to make the fix somewhat defeating the purpose of using scribefire even just having a true styles dropdown menu beyond one just for header styles would go a long way the same is true for scribefire for safari i m not using scribefire for firefox because i have two blogs one disabled inactive and one active and scribefire for firefox is only recognizing finding the defunct blog anyway besides this this is a great tool keep up the good work what version of scribefire for chrome are you running for chrome for safari original issue reported on code google com by nassimsu gmail com on jul at | 0 |
2,767 | 9,873,463,327 | IssuesEvent | 2019-06-22 14:43:10 | arcticicestudio/snowsaw | https://api.github.com/repos/arcticicestudio/snowsaw | closed | Git ignore and attribute pattern | context-workflow scope-maintainability type-task | <p align="center"><img src="https://upload.wikimedia.org/wikipedia/commons/e/e0/Git-logo.svg" width="20%" /></p>
> Epic: #33
> Depends on #49
> Blocks #48
Add the [`.gitattributes`][gitattributes] and [`.gitignore`][gitignore] configuration files for pattern handling and to match the latest _Arctic Ice Studio_ project design standards/guidelines.
[gitattributes]: https://git-scm.com/docs/gitattributes
[gitignore]: https://git-scm.com/docs/gitignore | True | Git ignore and attribute pattern - <p align="center"><img src="https://upload.wikimedia.org/wikipedia/commons/e/e0/Git-logo.svg" width="20%" /></p>
> Epic: #33
> Depends on #49
> Blocks #48
Add the [`.gitattributes`][gitattributes] and [`.gitignore`][gitignore] configuration files for pattern handling and to match the latest _Arctic Ice Studio_ project design standards/guidelines.
[gitattributes]: https://git-scm.com/docs/gitattributes
[gitignore]: https://git-scm.com/docs/gitignore | main | git ignore and attribute pattern epic depends on blocks add the and configuration files for pattern handling and to match the latest arctic ice studio project design standards guidelines | 1 |
3,038 | 11,258,995,829 | IssuesEvent | 2020-01-13 06:59:47 | microsoft/DirectXTK12 | https://api.github.com/repos/microsoft/DirectXTK12 | closed | Retire support for VS 2015 | maintainence | In 2020, I plan to retire support for VS 2015. The following projects will be removed, and the NuGet ``directxtk12_desktop_2015`` package will be deprecated in favor of one built with VS 2017 and/or VS 2019:
DirectXTK_Desktop_2015_Win10
DirectXTK_Windows10_2015
DirectXTK_XboxOneXDK_2015
Please put any requests for continued support for one or more of these here.
| True | Retire support for VS 2015 - In 2020, I plan to retire support for VS 2015. The following projects will be removed, and the NuGet ``directxtk12_desktop_2015`` package will be deprecated in favor of one built with VS 2017 and/or VS 2019:
DirectXTK_Desktop_2015_Win10
DirectXTK_Windows10_2015
DirectXTK_XboxOneXDK_2015
Please put any requests for continued support for one or more of these here.
| main | retire support for vs in i plan to retire support for vs the following projects will be removed and the nuget desktop package will be deprecated in favor of one built with vs and or vs directxtk desktop directxtk directxtk xboxonexdk please put any requests for continued support for one or more of these here | 1 |
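Across these rows, `text_combine` is `title + " - " + body`, and the derived `text` column looks like a lowercased copy with digits and ASCII punctuation stripped and whitespace collapsed (code spans and markdown links appear to be dropped by a separate step, not covered here). A minimal sketch of that normalization; this is an assumption about the preprocessing, not its actual code:

```python
import re
import string

# Character class matching any digit or ASCII punctuation character.
_STRIP = re.compile(r"[\d" + re.escape(string.punctuation) + r"]")

def combine(title: str, body: str) -> str:
    """Rebuild the text_combine column from the title and body fields."""
    return f"{title} - {body}"

def normalize(text: str) -> str:
    """Approximate the derived lowercase `text` column for plain prose."""
    no_punct = _STRIP.sub(" ", text.lower())
    return re.sub(r"\s+", " ", no_punct).strip()

print(normalize(combine("Retire support for VS 2015", "In 2020, I plan")))
# retire support for vs in i plan
```

The output matches the `text` field of the row above; non-ASCII characters such as ⚠ survive, as they do in the dataset.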
351,307 | 10,515,470,441 | IssuesEvent | 2019-09-28 10:07:12 | opencaching/opencaching-pl | https://api.github.com/repos/opencaching/opencaching-pl | closed | Additional filters for the map | Component CacheLog Component Map Priority Medium Type Enhancement | - [x] cache size filter - filtering by cache size - e.g. like the filter by cache type, but for me a ratings-style filter would also be sufficient (e.g. at least small, at least regular, at least large)
- [ ] hiding of taken up mobile caches - filtering out mobile caches taken from the field - (how to do this?!) | 1.0 | Additional filters for the map - - [x] cache size filter - filtering by cache size - e.g. like the filter by cache type, but for me a ratings-style filter would also be sufficient (e.g. at least small, at least regular, at least large)
- [ ] hiding of taken up mobile caches - filtering out mobile caches taken from the field - (how to do this?!) | non_main | additional filters for the map cache size filter filtering by cache size e g like the filter by cache type but for me a ratings style filter would also be sufficient e g at least small at least regular at least large hiding of taken up mobile caches filtering out mobile caches taken from the field how to do this | 0
235 | 2,945,489,483 | IssuesEvent | 2015-07-03 13:25:10 | OpenLightingProject/ola | https://api.github.com/repos/OpenLightingProject/ola | closed | Avoid obsolete network functions | enhancement Maintainability OpSys-All | We should use the newer versions of the network functions where available.
inet_aton -> inet_pton
inet_ntoa -> inet_ntop
| True | Avoid obsolete network functions - We should use the newer versions of the network functions where available.
inet_aton -> inet_pton
inet_ntoa -> inet_ntop
| main | avoid obsolete network functions we should use the newer versions of the network functions where available inet aton inet pton inet ntoa inet ntop | 1 |
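The migration above maps onto address-conversion APIs that exist in most languages; Python's `socket` module exposes the same pairs, so here is a quick sketch of why the newer functions are preferable (strict parsing, IPv6 support):

```python
import socket

# inet_pton/inet_ntop: strict parsing, and they work for IPv6 too.
packed = socket.inet_pton(socket.AF_INET, "192.0.2.1")
print(socket.inet_ntop(socket.AF_INET, packed))  # 192.0.2.1
print(socket.inet_ntop(socket.AF_INET6,
                       socket.inet_pton(socket.AF_INET6, "::1")))  # ::1

# The legacy inet_aton accepts shorthand like "127.1"; inet_pton rejects it.
socket.inet_aton("127.1")  # accepted (legacy behaviour)
try:
    socket.inet_pton(socket.AF_INET, "127.1")
except OSError:
    print("inet_pton rejects non-dotted-quad input")
```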
101,126 | 12,644,195,489 | IssuesEvent | 2020-06-16 11:09:17 | zayd62/mkdocs-versioning | https://api.github.com/repos/zayd62/mkdocs-versioning | closed | implementing versioning | design decision enhancement | **Note:**
- whenever software is ready for release, it should have a git annotated tag
- the tag should respect [semver](https://semver.org/) but that's not necessary
There are two ways of implementing versioning
# Method 1: separate nav page
This is the original idea where directory structure looks like this
```
.
├── 1.0.0
│ ├── index.html
│ └── ...
├── 1.1.1
│ ├── index.html
│ └── ...
├── 2.0.0
│ ├── index.html
│ └── ...
├── index.html
└── ...
```
where the numbers (1.0.0, 1.1.0, ...) are folders containing the **built** documentation for that specific version. ```index.html``` in the root directory is a web page with links to index.html for each built version
each built version has a nav item that points to ```../``` (which points to ```index.html``` in the **root directory**). This means that whenever a new version is built, the newly built version has a nav item that points to ```../``` and the `index.html` will have a new pointer to the newly built version
# Method 2
each built version has a nav item pointing to other versions built.
e.g. **1.0.0** has a nav item pointing nowhere. When **1.1.0** is built, **1.0.0** gets a nav item pointing to **1.1.0** and **1.1.0** gets a nav item pointing to **1.0.0**, etc. Whenever a new version is released, ensure that **all built versions** have pointers to all the other **built versions**.
The problem with this approach is that whenever a new version is released, all prior versions need to have the new version added to their nav and be rebuilt. Say you have 10 versions and an 11th is released: all 11 will have to be rebuilt. Method 1 avoids this by having a centrally updated versions page, so only 2 things need to be built (the 11th version and the central versions page). The trade-off of Method 1 is that you have to go to an entirely different page to switch versions.
The beta has method 1 but could change for the 1.0.0 release
| 1.0 | implementing versioning - **Note:**
- whenever software is ready for release, it should have a git annotated tag
- the tag should respect [semver](https://semver.org/) but that's not necessary
There are two ways of implementing versioning
# Method 1: separate nav page
This is the original idea where directory structure looks like this
```
.
├── 1.0.0
│ ├── index.html
│ └── ...
├── 1.1.1
│ ├── index.html
│ └── ...
├── 2.0.0
│ ├── index.html
│ └── ...
├── index.html
└── ...
```
where the numbers (1.0.0, 1.1.0, ...) are folders containing the **built** documentation for that specific version. ```index.html``` in the root directory is a web page with links to index.html for each built version
each built version has a nav item that points to ```../``` (which points to ```index.html``` in the **root directory**). This means that whenever a new version is built, the newly built version has a nav item that points to ```../``` and the `index.html` will have a new pointer to the newly built version
# Method 2
each built version has a nav item pointing to other versions built.
e.g. **1.0.0** has a nav item pointing nowhere. When **1.1.0** is built, **1.0.0** gets a nav item pointing to **1.1.0** and **1.1.0** gets a nav item pointing to **1.0.0**, etc. Whenever a new version is released, ensure that **all built versions** have pointers to all the other **built versions**.
The problem with this approach is that whenever a new version is released, all prior versions need to have the new version added to their nav and be rebuilt. Say you have 10 versions and an 11th is released: all 11 will have to be rebuilt. Method 1 avoids this by having a centrally updated versions page, so only 2 things need to be built (the 11th version and the central versions page). The trade-off of Method 1 is that you have to go to an entirely different page to switch versions.
The beta has method 1 but could change for the 1.0.0 release
| non_main | implementing versioning note whenever software is ready for release it should have a git annotated tag the tag should respect but that s not necessary there are two ways of implementing versioning method seperate nav page this is the original idea where directory structure looks like this ├── │ ├── index html │ └── ├── │ ├── index html │ └── ├── │ ├── index html │ └── ├── index html └── where the numbers are folders containing the built documentation for that specific version index html in the root directory is a web page with links to index html for each built version each built version has a nav item that points to which points to index html in the root directory this means that whenever a new version is built the newly built version has a nav item that points to and the index html will have a new pointer to the newly built version method each built version has a nav item pointing to other versions built eg has a nav item pointing nowhere when is built has a nav item pointing to and has a nav item pointing to etc when ever a new version is released ensure that all built versions have pointers to all the other built versions problem with this approach is that when ever a new version is released all version prior need to have the new version added to the nav and rebuilt say you have versions and an is released all will have to be rebuilt method avoids this by having a centrally updating versions page so only versions need to be built the version and the central version page method means that you have to go to an entirely different page the beta has method but could change for the release | 0 |
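Method 1's central versions page can be sketched as a tiny generator that scans the built version folders and writes the root `index.html` with one link per version. The file layout is taken from the issue, but the function name and output markup are assumptions for illustration, not mkdocs-versioning's actual implementation:

```python
from pathlib import Path

def build_versions_page(site_dir: str) -> str:
    """Write a root index.html linking to every built version folder."""
    root = Path(site_dir)
    # A folder counts as a built version if it contains its own index.html.
    versions = sorted(p.name for p in root.iterdir()
                      if p.is_dir() and (p / "index.html").exists())
    links = "\n".join(f'<li><a href="{v}/index.html">{v}</a></li>' for v in versions)
    html = f"<html><body><h1>Versions</h1><ul>\n{links}\n</ul></body></html>"
    (root / "index.html").write_text(html)
    return html
```

Running it over the layout shown in the issue would produce a versions list linking 1.0.0, 1.1.1 and 2.0.0, and only this one page needs regenerating when a new version is added.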
3,722 | 15,384,858,240 | IssuesEvent | 2021-03-03 05:29:27 | utm-cssc/website | https://api.github.com/repos/utm-cssc/website | closed | 🥅 Initiative: Website Login | Domain: User Experience Role: Maintainer Role: Product Owner | ### Initiative Overview 👁️🗨️
Implement a utor login on the website such that students are able to have their calendars populated with their courses/tutorials, etc.
Students should also be able to export their calendar as well as customize it to their liking on our website | True | 🥅 Initiative: Website Login - ### Initiative Overview 👁️🗨️
Implement a utor login on the website such that students are able to have their calendars populated with their courses/tutorials, etc.
Students should also be able to export their calendar as well as customize it to their liking on our website | main | 🥅 initiative website login initiative overview 👁️🗨️ implement a utor login on the website such that students are able to have their calendars populated with their courses tutorials etc students should also be able to export their calendar as well as customize it to their liking on our website | 1 |
15,703 | 10,337,894,637 | IssuesEvent | 2019-09-03 15:43:46 | MicrosoftDocs/azure-docs | https://api.github.com/repos/MicrosoftDocs/azure-docs | closed | video links are too old. | Pri1 app-service/svc assigned-to-author doc-bug triaged | Please change the resource links from
"For videos about scaling App Service apps, see the following resources:"
They link to an Azure Friday session from 2013, that show the Azure Classic portal.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 44c8de6c-a71b-7c7d-69f5-b1ec446f33ff
* Version Independent ID: 85f9c74a-7039-f466-f2da-bef5eaf106a1
* Content: [Scale up features and capacities - Azure App Service](https://docs.microsoft.com/en-us/azure/app-service/manage-scale-up#feedback)
* Content Source: [articles/app-service/manage-scale-up.md](https://github.com/Microsoft/azure-docs/blob/master/articles/app-service/manage-scale-up.md)
* Service: **app-service**
* GitHub Login: @cephalin
* Microsoft Alias: **cephalin** | 1.0 | video links are too old. - Please change the resource links from
"For videos about scaling App Service apps, see the following resources:"
They link to an Azure Friday session from 2013, that show the Azure Classic portal.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 44c8de6c-a71b-7c7d-69f5-b1ec446f33ff
* Version Independent ID: 85f9c74a-7039-f466-f2da-bef5eaf106a1
* Content: [Scale up features and capacities - Azure App Service](https://docs.microsoft.com/en-us/azure/app-service/manage-scale-up#feedback)
* Content Source: [articles/app-service/manage-scale-up.md](https://github.com/Microsoft/azure-docs/blob/master/articles/app-service/manage-scale-up.md)
* Service: **app-service**
* GitHub Login: @cephalin
* Microsoft Alias: **cephalin** | non_main | video linsk are too old please change the resource links from for videos about scaling app service apps see the following resources they link to an azure friday session from that show the azure classic portal document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service app service github login cephalin microsoft alias cephalin | 0 |
143,409 | 5,515,727,003 | IssuesEvent | 2017-03-17 18:10:16 | codeforamerica/intake | https://api.github.com/repos/codeforamerica/intake | closed | Send partners email so they can set up their usernames and passwords | high priority ready | ON HOLD until they have gotten the go ahead from their supervisors and we have addressed the consent form issue | 1.0 | Send partners email so they can set up their usernames and passwords - ON HOLD until they have gotten the go ahead from their supervisors and we have addressed the consent form issue | non_main | send partners email so they can set up their usernames and passwords on hold until they have gotten the go ahead from their supervisors and we have addressed the consent form issue | 0 |
70,312 | 9,396,890,611 | IssuesEvent | 2019-04-08 08:25:56 | orange-cloudfoundry/paas-templates | https://api.github.com/repos/orange-cloudfoundry/paas-templates | opened | Document jumpbox ssh access to ops portal | documentation enhancement | ### Expected behavior
As a Paas admin, I expect to discover how to connect to the jumpbox using the ops portal, possibly finding a ssh URL (following SSH [URL scheme](https://tools.ietf.org/id/draft-salowey-secsh-uri-00.html) such as ssh://user@host.example.com:22) for the initial connection, and a second url for per user account ssh://user@host.example.com:2224
### Observed behavior
Ops portal "common tools/admin tools" section does not describe the ssh connection url
### Affected release
Reproduced on version 39.x
-->
<!-- specify release note version here -->
<!--
### Traces and logs
Remember this is a public repo. DON'T leak credentials or Orange internal URLs.
Automation may be applied in the future.
* [ ] I have reviewed provided traces against secrets (credentials, internal URLs) that should not be leaked, manually or using some tools such as [truffle-hog file:///user/dxa4481/codeprojects/mytraces.txt](https://github.com/dxa4481/truffleHog#truffle-hog)
-->
| 1.0 | Document jumpbox ssh access to ops portal - ### Expected behavior
As a Paas admin, I expect to discover how to connect to the jumpbox using the ops portal, possibly finding a ssh URL (following SSH [URL scheme](https://tools.ietf.org/id/draft-salowey-secsh-uri-00.html) such as ssh://user@host.example.com:22) for the initial connection, and a second url for per user account ssh://user@host.example.com:2224
### Observed behavior
Ops portal "common tools/admin tools" section does not describe the ssh connection url
### Affected release
Reproduced on version 39.x
-->
<!-- specify release note version here -->
<!--
### Traces and logs
Remember this is a public repo. DON'T leak credentials or Orange internal URLs.
Automation may be applied in the future.
* [ ] I have reviewed provided traces against secrets (credentials, internal URLs) that should not be leaked, manually or using some tools such as [truffle-hog file:///user/dxa4481/codeprojects/mytraces.txt](https://github.com/dxa4481/truffleHog#truffle-hog)
-->
| non_main | document jumpbox ssh access to ops portal expected behavior as a paas admin i expect to discover how to connect to the jumpbox using the ops portal possibly finding a ssh url following ssh such as ssh user host example com for the initial connection and a second url for per user account ssh user host example com observed behavior ops portal common tools admin tools section does not describe the ssh connection url affected release reproduced on version x traces and logs remember this is a public repo don t leak credentials or orange internal urls automation may be applied in the future i have reviewed provided traces against secrets credentials internal urls that should not be leake manually of using some tools such as | 0 |
5,294 | 26,756,024,308 | IssuesEvent | 2023-01-31 00:20:33 | aws/serverless-application-model | https://api.github.com/repos/aws/serverless-application-model | closed | Can not link an existing API gateway as an event for a lambda resource using AWS SAM | type/feature maintainer/need-followup | <!-- Make sure we don't have an existing Issue that reports the bug you are seeing (both open and closed).
If you do find an existing Issue, re-open or add a comment to that Issue instead of creating a new one. -->
### Description:
<!-- Briefly describe the bug you are facing.-->
I am trying to create a SAM stack which creates a lambda. Now what I want is, to link an existing API gateway as a trigger for this lambda. So I tried to do this with the event :
```
Events:
MyAPI:
Type: Api
Properties:
Path: '/Prod/pushsfmessage'
Method : post
RestApiId: "l8lgze9uib"
```
Where "l8lgze9uib" is the API ID of the API I want to link. This gives me an error :
`"StatusReason": "Transform AWS::Serverless-2016-10-31 failed with: Invalid Serverless Application Specification document. Number of errors found: 1. Resource with id [<lambda-name>] is invalid. Event with id [MyAPI] is invalid. RestApiId must be a valid reference to an 'AWS::Serverless::Api' resource in same template.",
`
This means that I cannot use an API that doesn't exist in the template, which would mean that I am required to create the API gateway every time I deploy the stack; that doesn't solve my issue.
### Steps to reproduce:
<!-- Provide detailed steps to replicate the bug, including steps from third party tools (CDK, etc.) -->
Try to link any external API gateway as a trigger to a lambda resource.
### Observed result:
<!-- Please provide command output with `--debug` flag set.-->
Error in creating the lambda trigger.
### Expected result:
<!-- Describe what you expected.-->
No issue, and the trigger is created. Just like we can link existing DDB streams with lambdas.
| True | Can not link an existing API gateway as an event for a lambda resource using AWS SAM - <!-- Make sure we don't have an existing Issue that reports the bug you are seeing (both open and closed).
If you do find an existing Issue, re-open or add a comment to that Issue instead of creating a new one. -->
### Description:
<!-- Briefly describe the bug you are facing.-->
I am trying to create a SAM stack which creates a lambda. Now what I want is, to link an existing API gateway as a trigger for this lambda. So I tried to do this with the event :
```
Events:
MyAPI:
Type: Api
Properties:
Path: '/Prod/pushsfmessage'
Method : post
RestApiId: "l8lgze9uib"
```
Where "l8lgze9uib" is the API ID of the API I want to link. This gives me an error :
`"StatusReason": "Transform AWS::Serverless-2016-10-31 failed with: Invalid Serverless Application Specification document. Number of errors found: 1. Resource with id [<lambda-name>] is invalid. Event with id [MyAPI] is invalid. RestApiId must be a valid reference to an 'AWS::Serverless::Api' resource in same template.",
`
Which means that I can not use an API that doesn't exist in the template. Which would mean that I am required to create the API gateway every time I deploy the stack, which doesn't solve my issue.
### Steps to reproduce:
<!-- Provide detailed steps to replicate the bug, including steps from third party tools (CDK, etc.) -->
Try to link any external API gateway as a trigger to a lambda resource.
### Observed result:
<!-- Please provide command output with `--debug` flag set.-->
Error in creating the lambda trigger.
### Expected result:
<!-- Describe what you expected.-->
No issue, and the trigger is created. Just like we can link existing DDB streams with lambdas.
| main | can not link an existing api gateway as an event for a lambda resource using aws sam make sure we don t have an existing issue that reports the bug you are seeing both open and closed if you do find an existing issue re open or add a comment to that issue instead of creating a new one description i am trying to create a sam stack which creates a lambda now what i want is to link an existing api gateway as a trigger for this lambda so i tried to do this with the event events myapi type api properties path prod pushsfmessage method post restapiid where is the api id of the api i want to link this gives me an error statusreason transform aws serverless failed with invalid serverless application specification document number of errors found resource with id is invalid event with id is invalid restapiid must be a valid reference to an aws serverless api resource in same template which means that i can not use an api that doesn t exist in the template which would mean that i am required to create the api gateway every time i deploy the stack which doesn t solve my issue steps to reproduce try to link any external api gateway as a trigger to a lambda resource observed result error in creating the lambda trigger expected result no issue and the trigger is created just like we can link existing ddb streams with lambdas | 1 |
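The error quoted above says `RestApiId` must be a reference to an `AWS::Serverless::Api` resource defined in the same template, so a literal API id string is rejected. A small sketch that mimics that validation rule (my own reconstruction for illustration, not SAM's actual code):

```python
def rest_api_id_is_valid(rest_api_id, resources: dict) -> bool:
    """Mimic SAM's check: RestApiId must reference an in-template AWS::Serverless::Api."""
    if not isinstance(rest_api_id, dict) or set(rest_api_id) != {"Ref"}:
        return False  # a literal id like "l8lgze9uib" is rejected
    target = resources.get(rest_api_id["Ref"], {})
    return target.get("Type") == "AWS::Serverless::Api"

resources = {"MyApi": {"Type": "AWS::Serverless::Api"}}
print(rest_api_id_is_valid({"Ref": "MyApi"}, resources))  # True
print(rest_api_id_is_valid("l8lgze9uib", resources))      # False, as in the issue
```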
90,321 | 10,678,590,415 | IssuesEvent | 2019-10-21 17:35:02 | GabryLele98/TorVergHub | https://api.github.com/repos/GabryLele98/TorVergHub | opened | Dictionary | documentation | * Training Material / Didattic material:
Every form of media that provides information about a scholastic topic (e.g. exercisers, videolessons, books, e-books etc.). | 1.0 | Dictionary - * Training Material / Didattic material:
Every form of media that provides information about a scholastic topic (e.g. exercisers, videolessons, books, e-books etc.). | non_main | dictionary training material didattic material every form of media that provides information about a scholastic topic e g exercisers videolessons books e books etc | 0 |
452,900 | 13,061,038,205 | IssuesEvent | 2020-07-30 13:21:54 | fac20/Week-three-GMNO | https://api.github.com/repos/fac20/Week-three-GMNO | opened | On clicking the add button the input bar should be cleared for the next input | E1 enhancement priority: high | - [ ] Write tests for this as well | 1.0 | On clicking the add button the input bar should be cleared for the next input - - [ ] Write tests for this as well | non_main | on clicking the add button the input bar should be cleared for the next input write tests for this as well | 0 |
3,779 | 15,933,081,338 | IssuesEvent | 2021-04-14 06:56:21 | precice/tutorials | https://api.github.com/repos/precice/tutorials | closed | Port and test Code_Aster case | maintainability | In #151 we did not yet completely port and test the "flow over heated plate steady state" with OpenFOAM and Code_Aster.
Does Code_Aster actually support 2D simulations? We also still need to update the mesh names. | True | Port and test Code_Aster case - In #151 we did not yet completely port and test the "flow over heated plate steady state" with OpenFOAM and Code_Aster.
Does Code_Aster actually support 2D simulations? We also still need to update the mesh names. | main | port and test code aster case in we did not yet completely port and test the flow over heated plate steady state with openfoam and code aster does code aster actually support simulations we also still need to update the mesh names | 1 |
1,727 | 6,574,810,678 | IssuesEvent | 2017-09-11 14:09:45 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | Unexpected token \r\n when running script module on windows target with environment var set on playbook | affects_2.3 bug_report waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
commands/script.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.3.0 (devel fda933723c) last updated 2016/10/25 04:47:56 (GMT -200)
lib/ansible/modules/core: (detached HEAD c51ced56cc) last updated 2016/10/25 04:48:03 (GMT -200)
lib/ansible/modules/extras: (detached HEAD 8ffe314ea5) last updated 2016/10/25 04:48:03 (GMT -200)
config file =
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
Only windows connection settings:
ansible_port= 5986
ansible_connection= winrm
ansible_winrm_transport= ntlm
ansible_winrm_server_cert_validation= ignore
ansible_user= Administrator
ansible_password= <redacted>
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
Running from checkout on CentOS Linux release 7.2.1511 (Core), targeting a fully updated windows Standard Server 2016 host (Full Desktop Experience Installed)
##### SUMMARY
Running a PowerShell script via the script module throws an "Unexpected token \r\n" error when I'm also setting environment variables on the playbook.
Ansible 2.2-stable also throws the error.
On Ansible 2.1 it works fine, or at least it runs. Apparently I am not able to actually use the envvar inside the script, but that may be something wrong I am doing. (In my particular case I am using the playbook envvar for some local actions run after that, not for use inside the script.)
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
Stripped down playbook:
<!--- Paste example playbooks or commands between quotes below -->
```
---
- hosts: windows
gather_facts: false
environment:
TESTENV: blah
tasks:
- name: test script
script: files/test.ps1 creates="c:\\test.txt"
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
Script runs successfully, generating a file at c:\test.txt.
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
[root@ansiblelab-u01 ansible]# ansible-playbook -i inv test.yml -l vmwarelab -vvvv [4/1867]
No config file found; using defaults
Loading callback plugin default of type stdout, v2.0 from /root/src/ansible/lib/ansible/plugins/callback/__init__.py
PLAYBOOK: test.yml *************************************************************
1 plays in test.yml
PLAY [windows] *****************************************************************
TASK [test script] *************************************************************
task path: /root/ansible/test.yml:7
<my.ip.address> ESTABLISH WINRM CONNECTION FOR USER: Administrator on PORT 5986 TO my.ip.address
<my.ip.address> EXEC Set-StrictMode -Version Latest
(New-Item -Type Directory -Path $env:temp -Name "ansible-tmp-1477381702.07-161232195129995").FullName | Write-Host -Separator '';
<my.ip.address> EXEC Set-StrictMode -Version Latest
If (Test-Path "c:\test.txt")
{
$res = 0;
}
Else
{
$res = 1;
}
Write-Host "$res";
Exit $res;
<my.ip.address> PUT "/root/ansible/files/test.ps1" TO "C:\Users\Administrator\AppData\Local\Temp\ansible-tmp-1477381702.07-161232195129995\test.ps1"
<my.ip.address> EXEC $env:TESTENV='blah' 'C:\Users\Administrator\AppData\Local\Temp\ansible-tmp-1477381702.07-161232195129995\test.ps1'
<my.ip.address> EXEC Set-StrictMode -Version Latest
Remove-Item "C:\Users\Administrator\AppData\Local\Temp\ansible-tmp-1477381702.07-161232195129995" -Force -Recurse;
fatal: [my.ip.address]: FAILED! => {
"changed": true,
"failed": true,
"invocation": {
"module_args": {
"_raw_params": "files/test.ps1",
"creates": "c:\\test.txt"
},
"module_name": "script"
},
"rc": 1,
"stderr": "At line:1 char:21\r\n+ ... TENV='blah' 'C:\\Users\\Administrator\\AppData\\Local\\Temp\\ansible-tmp-14 ...\r\n+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~\r\nUnexpected token ''C:\\Users\\Administrator\\AppData\\Local\\Temp\\ansible-tmp-1477381702.07-161232195129995\\test.ps1'' in \r\nexpression or statement.\r\n+ CategoryInfo : ParserErro
r: (:) [], ParentContainsErrorRecordException\r\n+ FullyQualifiedErrorId : UnexpectedToken\r\n",
"stdout": "",
"stdout_lines": []
}
to retry, use: --limit @/root/ansible/test.retry
PLAY RECAP *********************************************************************
my.ip.address : ok=0 changed=0 unreachable=0 failed=1
```
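The failing EXEC line in the trace glues the environment assignment and the script path into a single statement, which is why PowerShell reports an unexpected token; a valid invocation needs a statement separator and the `&` call operator. A sketch of assembling such a command line in Python (an illustration of the parsing problem, not Ansible's actual fix; the quoting helper is simplified):

```python
def build_ps_command(env: dict, script_path: str) -> str:
    """Join env assignments and the script invocation into one valid PowerShell line."""
    def quote(value: str) -> str:
        # Simplified single-quote escaping: double any embedded single quotes.
        return "'" + value.replace("'", "''") + "'"
    assignments = [f"$env:{name}={quote(value)}" for name, value in env.items()]
    # '; ' separates statements; '&' invokes the quoted script path.
    return "; ".join(assignments + [f"& {quote(script_path)}"])

print(build_ps_command({"TESTENV": "blah"}, r"C:\Temp\test.ps1"))
# $env:TESTENV='blah'; & 'C:\Temp\test.ps1'
```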
| True | Unexpected token \r\n when running script module on windows target with environment var set on playbook - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
commands/script.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.3.0 (devel fda933723c) last updated 2016/10/25 04:47:56 (GMT -200)
lib/ansible/modules/core: (detached HEAD c51ced56cc) last updated 2016/10/25 04:48:03 (GMT -200)
lib/ansible/modules/extras: (detached HEAD 8ffe314ea5) last updated 2016/10/25 04:48:03 (GMT -200)
config file =
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
Only windows connection settings:
ansible_port= 5986
ansible_connection= winrm
ansible_winrm_transport= ntlm
ansible_winrm_server_cert_validation= ignore
ansible_user= Administrator
ansible_password= <redacted>
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
Running from checkout on CentOS Linux release 7.2.1511 (Core), targeting a fully updated windows Standard Server 2016 host (Full Desktop Experience Installed)
##### SUMMARY
Running a PowerShell script with the script module throws an error about "Unexpected token \r\n'" when I am also setting environment variables in the playbook.
Ansible 2.2-stable also throws the error.
On Ansible 2.1 it works fine, or at least it runs. Apparently I am not able to actually use the envvar inside the script, but that may be something I am doing wrong. (In my particular case I am using the playbook envvar for some local actions run after that, not for use inside the script.)
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
Stripped down playbook:
<!--- Paste example playbooks or commands between quotes below -->
```
---
- hosts: windows
gather_facts: false
environment:
TESTENV: blah
tasks:
- name: test script
script: files/test.ps1 creates="c:\\test.txt"
```
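The parser error in the output below comes from the generated remote command `$env:TESTENV='blah' 'C:\...\test.ps1'`: PowerShell does not accept a quoted script path placed directly after an environment-variable assignment, which is exactly the "Unexpected token" the parser reports. A minimal Python sketch of the difference between the broken and a valid command string — `build_exec_command` and the `; &` form are illustrative assumptions, not the actual Ansible WinRM plugin code:

```python
# Sketch: why the generated PowerShell line fails to parse, and one valid form.
# build_exec_command and the "; &" separator are illustrative assumptions,
# not the actual Ansible WinRM plugin implementation.

def build_exec_command(env, script_path, broken=True):
    assignments = " ".join(f"$env:{k}='{v}'" for k, v in env.items())
    if broken:
        # Behaviour reported above: the assignment and the quoted script path
        # are joined with a bare space -> PowerShell "Unexpected token" error.
        return f"{assignments} '{script_path}'"
    # One valid form: terminate the assignment with ';' and invoke the
    # quoted path with the call operator '&' so it is executed, not parsed
    # as a stray string token.
    return f"{assignments}; & '{script_path}'"

path = r"C:\Users\Administrator\AppData\Local\Temp\ansible-tmp-x\test.ps1"
bad = build_exec_command({"TESTENV": "blah"}, path)
good = build_exec_command({"TESTENV": "blah"}, path, broken=False)
print(bad)
print(good)
```

In PowerShell terms, the fixed string corresponds to `$env:TESTENV='blah'; & 'C:\...\test.ps1'`, where `&` is the call operator.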
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
Script runs successfully, generating a file at c:\test.txt.
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
[root@ansiblelab-u01 ansible]# ansible-playbook -i inv test.yml -l vmwarelab -vvvv [4/1867]
No config file found; using defaults
Loading callback plugin default of type stdout, v2.0 from /root/src/ansible/lib/ansible/plugins/callback/__init__.py
PLAYBOOK: test.yml *************************************************************
1 plays in test.yml
PLAY [windows] *****************************************************************
TASK [test script] *************************************************************
task path: /root/ansible/test.yml:7
<my.ip.address> ESTABLISH WINRM CONNECTION FOR USER: Administrator on PORT 5986 TO my.ip.address
<my.ip.address> EXEC Set-StrictMode -Version Latest
(New-Item -Type Directory -Path $env:temp -Name "ansible-tmp-1477381702.07-161232195129995").FullName | Write-Host -Separator '';
<my.ip.address> EXEC Set-StrictMode -Version Latest
If (Test-Path "c:\test.txt")
{
$res = 0;
}
Else
{
$res = 1;
}
Write-Host "$res";
Exit $res;
<my.ip.address> PUT "/root/ansible/files/test.ps1" TO "C:\Users\Administrator\AppData\Local\Temp\ansible-tmp-1477381702.07-161232195129995\test.ps1"
<my.ip.address> EXEC $env:TESTENV='blah' 'C:\Users\Administrator\AppData\Local\Temp\ansible-tmp-1477381702.07-161232195129995\test.ps1'
<my.ip.address> EXEC Set-StrictMode -Version Latest
Remove-Item "C:\Users\Administrator\AppData\Local\Temp\ansible-tmp-1477381702.07-161232195129995" -Force -Recurse;
fatal: [my.ip.address]: FAILED! => {
"changed": true,
"failed": true,
"invocation": {
"module_args": {
"_raw_params": "files/test.ps1",
"creates": "c:\\test.txt"
},
"module_name": "script"
},
"rc": 1,
"stderr": "At line:1 char:21\r\n+ ... TENV='blah' 'C:\\Users\\Administrator\\AppData\\Local\\Temp\\ansible-tmp-14 ...\r\n+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~\r\nUnexpected token ''C:\\Users\\Administrator\\AppData\\Local\\Temp\\ansible-tmp-1477381702.07-161232195129995\\test.ps1'' in \r\nexpression or statement.\r\n+ CategoryInfo : ParserErro
r: (:) [], ParentContainsErrorRecordException\r\n+ FullyQualifiedErrorId : UnexpectedToken\r\n",
"stdout": "",
"stdout_lines": []
}
to retry, use: --limit @/root/ansible/test.retry
PLAY RECAP *********************************************************************
my.ip.address : ok=0 changed=0 unreachable=0 failed=1
```
| main | unexpected token r n when running script module on windows target with environment var set on playbook issue type bug report component name commands script py ansible version ansible devel last updated gmt lib ansible modules core detached head last updated gmt lib ansible modules extras detached head last updated gmt config file configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables only windows connection settings ansible port ansible connection winrm ansible winrm transport ntlm ansible winrm server cert validation ignore ansible user administrator ansible password os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific running from checkout on centos linux release core targeting a fully updated windows standard server host full desktop experience installed summary running a powershell script using script modules throws me an error about unexpected token r n when i m also setting environment variables on the playbook ansible stable also throws the error on ansible it works fine or at least it runs apparently i am not being able to actually use the envvar inside the script but that may be something wrong i am doing in my particular case i am using the playbook envvar for some local actions run after that not for using inside the script steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used stripped down playbook hosts windows gather facts false environment testenv blah tasks name test script script files test creates c test txt expected results script run sucessfully generating a file at c test txt actual results ansible playbook i inv test yml l vmwarelab vvvv no config file found using defaults loading callback plugin default of type stdout from root src ansible lib ansible plugins 
callback init py playbook test yml plays in test yml play task task path root ansible test yml establish winrm connection for user administrator on port to my ip address exec set strictmode version latest new item type directory path env temp name ansible tmp fullname write host separator exec set strictmode version latest if test path c test txt res else res write host res exit res put root ansible files test to c users administrator appdata local temp ansible tmp test exec env testenv blah c users administrator appdata local temp ansible tmp test exec set strictmode version latest remove item c users administrator appdata local temp ansible tmp force recurse fatal failed changed true failed true invocation module args raw params files test creates c test txt module name script rc stderr at line char r n tenv blah c users administrator appdata local temp ansible tmp r n r nunexpected token c users administrator appdata local temp ansible tmp test in r nexpression or statement r n categoryinfo parsererro r parentcontainserrorrecordexception r n fullyqualifiederrorid unexpectedtoken r n stdout stdout lines to retry use limit root ansible test retry play recap my ip address ok changed unreachable failed | 1 |
2,202 | 7,764,006,353 | IssuesEvent | 2018-06-01 18:38:47 | tgstation/tgstation | https://api.github.com/repos/tgstation/tgstation | closed | The command report can (still) be used to meta the roundtype | Maintainability/Hinders improvements | Sample Command Report (Actual round type was ops)
Central Command Status Summary
--------------------------------------------------------------------------------
Central Command has intercepted and partially decoded a Syndicate transmission with vital information regarding their movements. The following report outlines the most likely threats to appear in your sector.
--------------------------------------------------------------------------------
The transmission mostly failed to mention your sector. It is possible that there is nothing in the Syndicate that could threaten your station during this shift.
--------------------------------------------------------------------------------
Infernal creatures have been seen nearby offering great boons in exchange for souls. This is considered theft against Nanotrasen, as all employment contracts contain a lien on the employee's soul. If anyone sells their soul in error, contact an attorney to overrule the sale. Be warned that if the devil purchases enough souls, a gateway to hell may open.
--------------------------------------------------------------------------------
One of Central Command's trading routes was recently disrupted by a raid carried out by the Gorlex Marauders. They seemed to only be after one ship - a highly-sensitive transport containing a nuclear fission explosive, although it is useless without the proper code and authorization disk. While the code was likely found in minutes, the only disk that can activate this explosive is on your station. Ensure that it is protected at all times, and remain alert for possible intruders.
--------------------------------------------------------------------------------
Reports of an ancient banana blight outbreak that turns humans into monkeys have been received in your quadrant. Any such infections may be treated with banana juice. If an outbreak occurs, ensure the station is quarantined to prevent a large-scale outbreak at CentCom.
so, tldr:
Possibility 1: Extended.
Possibility 2: Devil (not even in rotation)
Possibility 3: Nuke ops
Possibility 4: Monkey (also not in rotation)
So basically they have it narrowed down to two round types. This can happen with other modes- See IAA, Devil, meteor, and cult? It's cult!
Here are my recommended solutions:
1. Change it back to the way it was before: Utter gibberish, totally useless.
2. Make it list x amount of possible round types and then the real one. Not entirely useless since you can rule out stuff but still better.
3. Make it show possible round types- The gamemode doesn't have to be in there. | True | The command report can (still) be used to meta the roundtype - Sample Command Report (Actual round type was ops)
Central Command Status Summary
--------------------------------------------------------------------------------
Central Command has intercepted and partially decoded a Syndicate transmission with vital information regarding their movements. The following report outlines the most likely threats to appear in your sector.
--------------------------------------------------------------------------------
The transmission mostly failed to mention your sector. It is possible that there is nothing in the Syndicate that could threaten your station during this shift.
--------------------------------------------------------------------------------
Infernal creatures have been seen nearby offering great boons in exchange for souls. This is considered theft against Nanotrasen, as all employment contracts contain a lien on the employee's soul. If anyone sells their soul in error, contact an attorney to overrule the sale. Be warned that if the devil purchases enough souls, a gateway to hell may open.
--------------------------------------------------------------------------------
One of Central Command's trading routes was recently disrupted by a raid carried out by the Gorlex Marauders. They seemed to only be after one ship - a highly-sensitive transport containing a nuclear fission explosive, although it is useless without the proper code and authorization disk. While the code was likely found in minutes, the only disk that can activate this explosive is on your station. Ensure that it is protected at all times, and remain alert for possible intruders.
--------------------------------------------------------------------------------
Reports of an ancient banana blight outbreak that turns humans into monkeys have been received in your quadrant. Any such infections may be treated with banana juice. If an outbreak occurs, ensure the station is quarantined to prevent a large-scale outbreak at CentCom.
so, tldr:
Possibility 1: Extended.
Possibility 2: Devil (not even in rotation)
Possibility 3: Nuke ops
Possibility 4: Monkey (also not in rotation)
So basically they have it narrowed down to two round types. This can happen with other modes- See IAA, Devil, meteor, and cult? It's cult!
Here are my recommended solutions:
1. Change it back to the way it was before: Utter gibberish, totally useless.
2. Make it list x amount of possible round types and then the real one. Not entirely useless since you can rule out stuff but still better.
3. Make it show possible round types- The gamemode doesn't have to be in there. | main | the command report can still be used to meta the roundtype sample command report actual round type was ops central command status summary central command has intercepted and partially decoded a syndicate transmission with vital information regarding their movements the following report outlines the most likely threats to appear in your sector the transmission mostly failed to mention your sector it is possible that there is nothing in the syndicate that could threaten your station during this shift infernal creatures have been seen nearby offering great boons in exchange for souls this is considered theft against nanotrasen as all employment contracts contain a lien on the employee s soul if anyone sells their soul in error contact an attorney to overrule the sale be warned that if the devil purchases enough souls a gateway to hell may open one of central command s trading routes was recently disrupted by a raid carried out by the gorlex marauders they seemed to only be after one ship a highly sensitive transport containing a nuclear fission explosive although it is useless without the proper code and authorization disk while the code was likely found in minutes the only disk that can activate this explosive is on your station ensure that it is protected at all times and remain alert for possible intruders reports of an ancient banana blight outbreak that turn humans into monkeys has been reported in your quadrant any such infections may be treated with banana juice if an outbreak occurs ensure the station is quarantined to prevent a largescale outbreak at centcom so tldr possibility extended possibility devil not even in rotation possibility nuke ops possibility monkey also not in rotation so basically they have it narrowed down to two round types this can happen with other modes see iaa devil meteor and cult it s cult here s my recommended solutions change it back to the way 
it was before utter gibberish totally useless make it list x amount of possible round types and then the real one not entirely useless since you can rule out stuff but still better make it show possible round types the gamemode doesn t have to be in there | 1 |
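Suggestion 2 above — report a handful of possible round types with the real one mixed in — can be sketched in a few lines; the function name, mode pool, and decoy count below are invented for illustration and are not actual /tg/station code:

```python
# Sketch of suggestion 2: report the real round type plus N decoys drawn
# only from modes actually in rotation, shuffled so position leaks nothing.
# generate_threat_report, the rotation list, and decoys=3 are illustrative.
import random

def generate_threat_report(real_mode, rotation, decoys=3, rng=random):
    pool = [m for m in rotation if m != real_mode]
    listed = rng.sample(pool, min(decoys, len(pool))) + [real_mode]
    rng.shuffle(listed)  # the real mode must not always appear last
    return listed

rotation = ["traitor", "nuke ops", "cult", "revolution", "wizard", "extended"]
report = generate_threat_report("nuke ops", rotation)
print(report)
```

Because every decoy is sampled from the live rotation, a reader can no longer rule modes out the way the Devil/Monkey entries allow in the sample report above.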
269,078 | 28,959,996,031 | IssuesEvent | 2023-05-10 01:06:49 | dreamboy9/mongo | https://api.github.com/repos/dreamboy9/mongo | reopened | CVE-2022-0355 (High) detected in simple-get-3.1.0.tgz | Mend: dependency security vulnerability | ## CVE-2022-0355 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>simple-get-3.1.0.tgz</b></p></summary>
<p>Simplest way to make http get requests. Supports HTTPS, redirects, gzip/deflate, streams in < 100 lines.</p>
<p>Library home page: <a href="https://registry.npmjs.org/simple-get/-/simple-get-3.1.0.tgz">https://registry.npmjs.org/simple-get/-/simple-get-3.1.0.tgz</a></p>
<p>Path to dependency file: /buildscripts/libdeps/graph_visualizer_web_stack/package.json</p>
<p>Path to vulnerable library: /buildscripts/libdeps/graph_visualizer_web_stack/node_modules/simple-get/package.json</p>
<p>
Dependency Hierarchy:
- canvas-2.8.0.tgz (Root Library)
- :x: **simple-get-3.1.0.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
Exposure of Sensitive Information to an Unauthorized Actor in NPM simple-get prior to 4.0.1.
<p>Publish Date: 2022-01-26
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-0355>CVE-2022-0355</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0355">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0355</a></p>
<p>Release Date: 2022-01-26</p>
<p>Fix Resolution (simple-get): 3.1.1</p>
<p>Direct dependency fix Resolution (canvas): 2.9.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2022-0355 (High) detected in simple-get-3.1.0.tgz - ## CVE-2022-0355 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>simple-get-3.1.0.tgz</b></p></summary>
<p>Simplest way to make http get requests. Supports HTTPS, redirects, gzip/deflate, streams in < 100 lines.</p>
<p>Library home page: <a href="https://registry.npmjs.org/simple-get/-/simple-get-3.1.0.tgz">https://registry.npmjs.org/simple-get/-/simple-get-3.1.0.tgz</a></p>
<p>Path to dependency file: /buildscripts/libdeps/graph_visualizer_web_stack/package.json</p>
<p>Path to vulnerable library: /buildscripts/libdeps/graph_visualizer_web_stack/node_modules/simple-get/package.json</p>
<p>
Dependency Hierarchy:
- canvas-2.8.0.tgz (Root Library)
- :x: **simple-get-3.1.0.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
Exposure of Sensitive Information to an Unauthorized Actor in NPM simple-get prior to 4.0.1.
<p>Publish Date: 2022-01-26
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-0355>CVE-2022-0355</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0355">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0355</a></p>
<p>Release Date: 2022-01-26</p>
<p>Fix Resolution (simple-get): 3.1.1</p>
<p>Direct dependency fix Resolution (canvas): 2.9.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve high detected in simple get tgz cve high severity vulnerability vulnerable library simple get tgz simplest way to make http get requests supports https redirects gzip deflate streams in library home page a href path to dependency file buildscripts libdeps graph visualizer web stack package json path to vulnerable library buildscripts libdeps graph visualizer web stack node modules simple get package json dependency hierarchy canvas tgz root library x simple get tgz vulnerable library found in base branch master vulnerability details exposure of sensitive information to an unauthorized actor in npm simple get prior to publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution simple get direct dependency fix resolution canvas step up your open source security game with mend | 0 |
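Per the suggested fix above, simple-get is patched in 3.1.1 (pulled in transitively by canvas 2.9.0). A hedged sketch of the dotted-version comparison you might script against a lockfile to decide whether an installed copy is still affected — `is_vulnerable` and the hard-coded threshold are assumptions for illustration, not part of any Mend tooling:

```python
# Sketch: compare a dotted version against the fixed release to decide
# whether the installed simple-get is still affected by CVE-2022-0355.
# is_vulnerable and the hard-coded "3.1.1" threshold are illustrative only.

def parse_version(v):
    # "3.1.0" -> (3, 1, 0); tuples compare element-wise, left to right.
    return tuple(int(part) for part in v.split("."))

def is_vulnerable(installed, fixed="3.1.1"):
    return parse_version(installed) < parse_version(fixed)

print(is_vulnerable("3.1.0"))  # the version flagged above -> True
print(is_vulnerable("4.0.1"))  # a patched 4.x release -> False
```

Note this naive tuple compare handles only plain `X.Y.Z` strings; real npm ranges (prerelease tags, `^`/`~`) need a semver library.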
5,430 | 27,239,694,235 | IssuesEvent | 2023-02-21 19:15:16 | mozilla/foundation.mozilla.org | https://api.github.com/repos/mozilla/foundation.mozilla.org | closed | SPIKE | PNI Refactoring | engineering maintain | Let's identify where we can improve the readability and performance of PNI from a code standpoint. The goal of this ticket is to identify our largest problem areas concerning PNI. Later (not part of this ticket), we will take these problem areas and break them down into actionable tickets.
Schedule a meeting to talk about the approach concerning front-end vs. back-end functionality.
Timebox: 4 hours | True | SPIKE | PNI Refactoring - Let's identify where we can improve the readability and performance of PNI from a code standpoint. The goal of this ticket is to identify our largest problem areas concerning PNI. Later (not part of this ticket), we will take these problem areas and break them down into actionable tickets.
Schedule a meeting to talk about the approach concerning front-end vs. back-end functionality.
Timebox: 4 hours | main | spike pni refactoring let s identify where we can improve the readability and performance of pni from a code standpoint the goal of this ticket is to identify our largest problem areas concerning pni later not part of this ticket we will take these problem areas and break them down into actionable tickets schedule a meeting to talk about the approach concerning front end vs back end functionality timebox hours | 1 |
472,391 | 13,623,581,018 | IssuesEvent | 2020-09-24 06:36:28 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | m.bild.de - site is not usable | browser-focus-geckoview engine-gecko priority-important | <!-- @browser: Firefox Mobile 81.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 10; Mobile; rv:81.0) Gecko/81.0 Firefox/81.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/58713 -->
<!-- @extra_labels: browser-focus-geckoview -->
**URL**: https://m.bild.de/wa/ll/bild-de/privater-modus-unangemeldet-54578900.bildMobile.html
**Browser / Version**: Firefox Mobile 81.0
**Operating System**: Android
**Tested Another Browser**: Yes Chrome
**Problem type**: Site is not usable
**Description**: Page not loading correctly
**Steps to Reproduce**:
A message to disable the ad blocker appears, but I do not have an adblocker
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | m.bild.de - site is not usable - <!-- @browser: Firefox Mobile 81.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 10; Mobile; rv:81.0) Gecko/81.0 Firefox/81.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/58713 -->
<!-- @extra_labels: browser-focus-geckoview -->
**URL**: https://m.bild.de/wa/ll/bild-de/privater-modus-unangemeldet-54578900.bildMobile.html
**Browser / Version**: Firefox Mobile 81.0
**Operating System**: Android
**Tested Another Browser**: Yes Chrome
**Problem type**: Site is not usable
**Description**: Page not loading correctly
**Steps to Reproduce**:
Message to disable ad locker, but I have not an adblocker
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | non_main | m bild de site is not usable url browser version firefox mobile operating system android tested another browser yes chrome problem type site is not usable description page not loading correctly steps to reproduce message to disable ad locker but i have not an adblocker browser configuration none from with ❤️ | 0 |
410,002 | 27,761,911,099 | IssuesEvent | 2023-03-16 08:55:02 | idomand/word-game-3.0 | https://api.github.com/repos/idomand/word-game-3.0 | closed | update read me | documentation | add read.me with info:
1. link to site on Netlify
2. pictures of the app
3. how to use the app
| 1.0 | update read me - add read.me with info:
1. link to site on Netlify
2. pictures of the app
3. how to use the app
| non_main | update read me add read me with info link to site on netlify pictures of the app how to use the app | 0 |
753,454 | 26,347,243,206 | IssuesEvent | 2023-01-10 23:35:01 | DKFN/edr-issues | https://api.github.com/repos/DKFN/edr-issues | closed | Displayer player username or IA next to train | enhancement Priority fixed | This task has two parts:
- [ ] Implement SteamAPI (there is no need to call SimRail servers for that)
- [ ] Display the information
Here is an example of suggested result by Howky

Submitted by Howky | 1.0 | Displayer player username or IA next to train - This task has two parts:
- [ ] Implement SteamAPI (there is no need to call SimRail servers for that)
- [ ] Display the information
Here is an example of suggested result by Howky

Submitted by Howky | non_main | displayer player username or ia next to train this task has two parts implement steamapi there is no need to call simrail servers for that display the information here is an example of suggested result by howky submitted by howky | 0 |
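For the first checkbox, Steam's Web API resolves a SteamID to a display name via the documented `ISteamUser/GetPlayerSummaries/v2` endpoint, with no call to SimRail servers. A minimal sketch that only builds the request URL — the API key and SteamID are placeholders and the helper name is an assumption, not EDR code:

```python
# Sketch: build the Steam Web API request that resolves a SteamID to a
# persona (display) name, without calling SimRail servers.
# ISteamUser/GetPlayerSummaries/v2 is Steam's documented endpoint; the
# key/steamid values and the helper name are placeholders.
from urllib.parse import urlencode

API_HOST = "https://api.steampowered.com"

def player_summaries_url(api_key, steam_ids):
    query = urlencode({"key": api_key, "steamids": ",".join(steam_ids)})
    return f"{API_HOST}/ISteamUser/GetPlayerSummaries/v2/?{query}"

url = player_summaries_url("MY_KEY", ["76561197960287930"])
print(url)
# A real call would then read response["response"]["players"][0]["personaname"]
# and draw that string next to the train marker.
```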
2,952 | 10,602,526,139 | IssuesEvent | 2019-10-10 14:22:44 | lrusso96/simple-biblio | https://api.github.com/repos/lrusso96/simple-biblio | closed | Fix "method_lines" issue in Feedbooks class | maintainability | Method `parseBook` has 27 lines of code (exceeds 25 allowed). Consider refactoring.
https://codeclimate.com/github/lrusso96/simple-biblio/src/main/java/lrusso96/simplebiblio/core/providers/feedbooks/Feedbooks.java#issue_5d9f378f725a710001000024 | True | Fix "method_lines" issue in Feedbooks class - Method `parseBook` has 27 lines of code (exceeds 25 allowed). Consider refactoring.
https://codeclimate.com/github/lrusso96/simple-biblio/src/main/java/lrusso96/simplebiblio/core/providers/feedbooks/Feedbooks.java#issue_5d9f378f725a710001000024 | main | fix method lines issue in feedbooks class method parsebook has lines of code exceeds allowed consider refactoring | 1 |
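The refactor CodeClimate asks for here is a plain extract-method split. The original is Java, but the shape is language-agnostic — a hypothetical Python sketch with invented field names (the real `parseBook` in `Feedbooks.java` maps an OPDS entry instead):

```python
# Language-agnostic sketch of the extract-method refactor: one long parse
# routine split into small, named helpers. The entry structure and field
# names are invented for illustration; the real parseBook maps OPDS XML.

def _parse_title(entry):
    return entry.get("title", "").strip()

def _parse_authors(entry):
    return [a.strip() for a in entry.get("authors", []) if a.strip()]

def _parse_downloads(entry):
    return [d for d in entry.get("links", []) if d.get("rel") == "acquisition"]

def parse_book(entry):
    # The top-level method now reads like a table of contents, and each
    # helper stays well under the 25-line budget the linter enforces.
    return {
        "title": _parse_title(entry),
        "authors": _parse_authors(entry),
        "downloads": _parse_downloads(entry),
    }

book = parse_book({
    "title": "  Moby Dick ",
    "authors": ["Herman Melville"],
    "links": [{"rel": "acquisition", "href": "x.epub"}],
})
print(book["title"])
```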
1,912 | 6,577,573,416 | IssuesEvent | 2017-09-12 01:51:20 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | Create custom VM with vsphere_guest on specific folder and resource_pool | affects_2.3 cloud feature_idea vmware waiting_on_maintainer | ##### ISSUE TYPE
Feature Idea
##### COMPONENT NAME
vsphere_guest
##### ANSIBLE VERSION
N/A
##### SUMMARY
I need a way to create a custom virtual machine (VM) in a specific folder and resource pool. I have tried the examples from the official docs, but the VM is always created outside of any resource pool and in the cluster folder. I then reviewed the source code: when you use `state: powered_on` you can't specify the resource_pool option.
The resource_pool option is only available when you clone from a template.
| True | Create custom VM with vsphere_guest on specific folder and resource_pool - ##### ISSUE TYPE
Feature Idea
##### COMPONENT NAME
vsphere_guest
##### ANSIBLE VERSION
N/A
##### SUMMARY
I need a way to create a custom virtual machine (VM) in a specific folder and resource pool. I have tried the examples from the official docs, but the VM is always created outside of any resource pool and in the cluster folder. I then reviewed the source code: when you use `state: powered_on` you can't specify the resource_pool option.
The resource_pool option is only available when you clone from a template.
| main | create custom vm with vsphere guest on specific folder and resource pool issue type feature idea component name vsphere guest ansible version n a summary i need a way to create a custom virtual machine vm on specific folder and resource pool i m trying with the examples of official docs but the vm always was created outside of any resource pool and on cluster folder then i was reviewed the sourcecode and when you use state powered on you can t specify the option resource pool the option resource pool only was available when you use clone from template | 1 |
8,508 | 5,786,450,747 | IssuesEvent | 2017-05-01 10:50:04 | AdamsLair/duality | https://api.github.com/repos/AdamsLair/duality | opened | Introduce an AnglePropertyEditor | Editor Feature Usability | ### Summary
It's a somewhat common case to specify angles in the inspector. However, radian angles as used throughout Duality are not easily readable by humans. There should be a specialized property editor for angles that allows to switch between radians and degrees and defaults to degrees.
### Analysis
- Note that issue #528 is a prerequisite to this.
- The editor should feel like an "extended numeric editor", so nothing entirely new. Same float field, limits, dragdrop, etc.
- There should be a unit label (deg / rad) with a button area of some sort for switching units. | True | Introduce an AnglePropertyEditor - ### Summary
It's a somewhat common case to specify angles in the inspector. However, radian angles as used throughout Duality are not easily readable by humans. There should be a specialized property editor for angles that allows to switch between radians and degrees and defaults to degrees.
### Analysis
- Note that issue #528 is a prerequisite to this.
- The editor should feel like an "extended numeric editor", so nothing entirely new. Same float field, limits, dragdrop, etc.
- There should be a unit label (deg / rad) with a button area of some sort for switching units. | non_main | introduce an anglepropertyeditor summary it s a somewhat common case to specify angles in the inspector however radian angles as used throughout duality are not easily readable by humans there should be a specialized property editor for angles that allows to switch between radians and degrees and defaults to degrees analysis note that issue is a prerequisite to this the editor should feel like an extended numeric editor so nothing entirely new same float field limits dragdrop etc there should be a unit label deg rad with a button area of some sort for switching units | 0 |
3,435 | 13,210,344,752 | IssuesEvent | 2020-08-15 16:25:51 | ansible/ansible | https://api.github.com/repos/ansible/ansible | closed | LLDP Module Issue | affects_2.9 bot_closed bug collection collection:community.general module needs_collection_redirect needs_maintainer needs_triage net_tools python3 support:community | ##### SUMMARY
Run the following test code in ansible's documentation
- name: Gather information from lldp
lldp:
- name: Print each switch/port
debug:
msg: "{{ lldp[item]['chassis']['name'] }} / {{ lldp[item]['port']['ifname'] }}"
with_items: "{{ lldp.keys() }}"
# TASK: [Print each switch/port] ***********************************************************
# ok: [10.13.0.22] => (item=eth2) => {"item": "eth2", "msg": "switch1.example.com / Gi0/24"}
# ok: [10.13.0.22] => (item=eth1) => {"item": "eth1", "msg": "switch2.example.com / Gi0/3"}
# ok: [10.13.0.22] => (item=eth0) => {"item": "eth0", "msg": "switch3.example.com / Gi0/3"}
but fails with a return value
"msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute \"dict_keys(['p1p1', 'em1', 'p1p2', 'p3p1', 'p3p2'])\"\n\nThe error appears to be in '/home/admin/ansible-linux/roles/test_rename/tasks/main.yml': line 5, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Print each switch/port\n ^ here\n"
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
LLDP
##### ANSIBLE VERSION
ansible 2.9.2
config file = /home/feisa/ansible-linux/ansible.cfg
configured module search path = ['/home/feisa/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.6.8 (default, Aug 7 2019, 17:28:10) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
##### CONFIGURATION
ANSIBLE_SSH_ARGS(/home/admin/ansible-linux/ansible.cfg) = -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null
DEFAULT_INVENTORY_PLUGIN_PATH(/home/admin/ansible-linux/ansible.cfg) = ['/home/admin/ansible-linux/inventory/plugins']
DEFAULT_REMOTE_USER(/home/admin/ansible-linux/ansible.cfg) = root
DEFAULT_ROLES_PATH(/home/admin/ansible-linux/ansible.cfg) = ['/home/admin/ansible-linux/roles']
DEFAULT_SCP_IF_SSH(/home/admin/ansible-linux/ansible.cfg) = True
HOST_KEY_CHECKING(/home/admin/ansible-linux/ansible.cfg) = False
RETRY_FILES_ENABLED(/home/admin/ansible-linux/ansible.cfg) = False
RETRY_FILES_SAVE_PATH(/home/admin/ansible-linux/ansible.cfg) = /tmp
##### OS / ENVIRONMENT
Centos
##### STEPS TO REPRODUCE
Ran sample code from the documentation
```
# Retrieve switch/port information
- name: Gather information from lldp
lldp:
- name: Print each switch/port
debug:
msg: "{{ lldp[item]['chassis']['name'] }} / {{ lldp[item]['port']['ifname'] }}"
with_items: "{{ lldp.keys() }}"
##### EXPECTED RESULTS
# TASK: [Print each switch/port] ***********************************************************
# ok: [10.13.0.22] => (item=eth2) => {"item": "eth2", "msg": "switch1.example.com / Gi0/24"}
# ok: [10.13.0.22] => (item=eth1) => {"item": "eth1", "msg": "switch2.example.com / Gi0/3"}
# ok: [10.13.0.22] => (item=eth0) => {"item": "eth0", "msg": "switch3.example.com / Gi0/3"}
##### ACTUAL RESULTS
"msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute \"dict_keys(['p1p1', 'em1', 'p1p2', 'p3p1', 'p3p2'])\"\n\nThe error appears to be in '/home/admin/ansible-linux/roles/test_rename/tasks/main.yml': line 5, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Print each switch/port\n ^ here\n"
| True | LLDP Module Issue - ##### SUMMARY
Run the following test code in ansible's documentation
- name: Gather information from lldp
lldp:
- name: Print each switch/port
debug:
msg: "{{ lldp[item]['chassis']['name'] }} / {{ lldp[item]['port']['ifname'] }}"
with_items: "{{ lldp.keys() }}"
# TASK: [Print each switch/port] ***********************************************************
# ok: [10.13.0.22] => (item=eth2) => {"item": "eth2", "msg": "switch1.example.com / Gi0/24"}
# ok: [10.13.0.22] => (item=eth1) => {"item": "eth1", "msg": "switch2.example.com / Gi0/3"}
# ok: [10.13.0.22] => (item=eth0) => {"item": "eth0", "msg": "switch3.example.com / Gi0/3"}
but fails with a return value
"msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute \"dict_keys(['p1p1', 'em1', 'p1p2', 'p3p1', 'p3p2'])\"\n\nThe error appears to be in '/home/admin/ansible-linux/roles/test_rename/tasks/main.yml': line 5, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Print each switch/port\n ^ here\n"
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
LLDP
##### ANSIBLE VERSION
ansible 2.9.2
config file = /home/feisa/ansible-linux/ansible.cfg
configured module search path = ['/home/feisa/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.6.8 (default, Aug 7 2019, 17:28:10) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
##### CONFIGURATION
ANSIBLE_SSH_ARGS(/home/admin/ansible-linux/ansible.cfg) = -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null
DEFAULT_INVENTORY_PLUGIN_PATH(/home/admin/ansible-linux/ansible.cfg) = ['/home/admin/ansible-linux/inventory/plugins']
DEFAULT_REMOTE_USER(/home/admin/ansible-linux/ansible.cfg) = root
DEFAULT_ROLES_PATH(/home/admin/ansible-linux/ansible.cfg) = ['/home/admin/ansible-linux/roles']
DEFAULT_SCP_IF_SSH(/home/admin/ansible-linux/ansible.cfg) = True
HOST_KEY_CHECKING(/home/admin/ansible-linux/ansible.cfg) = False
RETRY_FILES_ENABLED(/home/admin/ansible-linux/ansible.cfg) = False
RETRY_FILES_SAVE_PATH(/home/admin/ansible-linux/ansible.cfg) = /tmp
##### OS / ENVIRONMENT
Centos
##### STEPS TO REPRODUCE
Ran sample code from the documentation
```
# Retrieve switch/port information
- name: Gather information from lldp
lldp:
- name: Print each switch/port
debug:
msg: "{{ lldp[item]['chassis']['name'] }} / {{ lldp[item]['port']['ifname'] }}"
with_items: "{{ lldp.keys() }}"
##### EXPECTED RESULTS
# TASK: [Print each switch/port] ***********************************************************
# ok: [10.13.0.22] => (item=eth2) => {"item": "eth2", "msg": "switch1.example.com / Gi0/24"}
# ok: [10.13.0.22] => (item=eth1) => {"item": "eth1", "msg": "switch2.example.com / Gi0/3"}
# ok: [10.13.0.22] => (item=eth0) => {"item": "eth0", "msg": "switch3.example.com / Gi0/3"}
##### ACTUAL RESULTS
"msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute \"dict_keys(['p1p1', 'em1', 'p1p2', 'p3p1', 'p3p2'])\"\n\nThe error appears to be in '/home/admin/ansible-linux/roles/test_rename/tasks/main.yml': line 5, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Print each switch/port\n ^ here\n"
| main | lldp module issue summary run the following test code in ansible s documentation name gather information from lldp lldp name print each switch port debug msg lldp lldp with items lldp keys task ok item item msg example com ok item item msg example com ok item item msg example com but fails with a return value msg the task includes an option with an undefined variable the error was dict object has no attribute dict keys n nthe error appears to be in home admin ansible linux roles test rename tasks main yml line column but may nbe elsewhere in the file depending on the exact syntax problem n nthe offending line appears to be n n n name print each switch port n here n issue type bug report component name lldp ansible version ansible config file home feisa ansible linux ansible cfg configured module search path ansible python module location usr local lib site packages ansible executable location usr local bin ansible python version default aug configuration ansible ssh args home admin ansible linux ansible cfg o controlmaster auto o controlpersist o stricthostkeychecking no o userknownhostsfile dev null default inventory plugin path home admin ansible linux ansible cfg default remote user home admin ansible linux ansible cfg root default roles path home admin ansible linux ansible cfg default scp if ssh home admin ansible linux ansible cfg true host key checking home admin ansible linux ansible cfg false retry files enabled home admin ansible linux ansible cfg false retry files save path home admin ansible linux ansible cfg tmp os environment centos steps to reproduce ran sample code from the documentation retrieve switch port information name gather information from lldp lldp name print each switch port debug msg lldp lldp with items lldp keys expected results task ok item item msg example com ok item item msg example com ok item item msg example com actual results msg the task includes an option with an undefined variable the error was dict object has no 
attribute dict keys n nthe error appears to be in home admin ansible linux roles test rename tasks main yml line column but may nbe elsewhere in the file depending on the exact syntax problem n nthe offending line appears to be n n n name print each switch port n here n | 1 |
403,897 | 27,440,589,342 | IssuesEvent | 2023-03-02 10:42:48 | mindsdb/mindsdb | https://api.github.com/repos/mindsdb/mindsdb | closed | [Docs] Create a doc page for the Google BigQuery data integration | help wanted good first issue documentation | ## Instructions :page_facing_up:
Please find the detailed instructions on how to create a doc page for the Google BigQuery data integration here: https://docs.mindsdb.com/contribute/data-integration-page
Make sure to check out the [README.md file](https://github.com/mindsdb/mindsdb/blob/staging/mindsdb/integrations/handlers/bigquery_handler/README.md).
## Hackathon Issue :loudspeaker:
MindsDB has organized a hackathon to let in more contributors to the in-database ML world.
Each hackathon issue is worth a certain amount of credits that will bring you prizes by the end of the MindsDB Hackathon.
Check out the [MindsDB Hackathon rules](https://mindsdb.com/mindsdb-hackathon)!
| 1.0 | [Docs] Create a doc page for the Google BigQuery data integration - ## Instructions :page_facing_up:
Please find the detailed instructions on how to create a doc page for the Google BigQuery data integration here: https://docs.mindsdb.com/contribute/data-integration-page
Make sure to check out the [README.md file](https://github.com/mindsdb/mindsdb/blob/staging/mindsdb/integrations/handlers/bigquery_handler/README.md).
## Hackathon Issue :loudspeaker:
MindsDB has organized a hackathon to let in more contributors to the in-database ML world.
Each hackathon issue is worth a certain amount of credits that will bring you prizes by the end of the MindsDB Hackathon.
Check out the [MindsDB Hackathon rules](https://mindsdb.com/mindsdb-hackathon)!
| non_main | create a doc page for the google bigquery data integration instructions page facing up please find the detailed instructions on how to create a doc page for the google bigquery data integration here make sure to check out the hackathon issue loudspeaker mindsdb has organized a hackathon to let in more contributors to the in database ml world each hackathon issue is worth a certain amount of credits that will bring you prizes by the end of the mindsdb hackathon check out the | 0 |
2,247 | 7,922,221,141 | IssuesEvent | 2018-07-05 10:04:00 | tgstation/tgstation | https://api.github.com/repos/tgstation/tgstation | opened | do_after does not check if target turf changed. | Maintainability/Hinders improvements | Only relevant when there's turf specific things after the check like in https://github.com/tgstation/tgstation/blob/3167e2be86c259291c089b0403ab7148534af7c0/code/game/turfs/simulated/wall/mineral_walls.dm#L141-L143 | True | do_after does not check if target turf changed. - Only relevant when there's turf specific things after the check like in https://github.com/tgstation/tgstation/blob/3167e2be86c259291c089b0403ab7148534af7c0/code/game/turfs/simulated/wall/mineral_walls.dm#L141-L143 | main | do after does not check if target turf changed only relevant when there s turf specific things after the check like in | 1 |
3,920 | 17,618,301,539 | IssuesEvent | 2021-08-18 12:35:27 | carbon-design-system/carbon | https://api.github.com/repos/carbon-design-system/carbon | closed | [feat] pass a ref to the checkbox label | type: enhancement 💡 status: needs triage 🕵️♀️ status: waiting for maintainer response 💬 | Hello,
I need to show a tooltip when hover the label of a checkbox. The ref is targeting the input that is hidden and I would like to target the label to get the event.
Or maybe there is another way to trigger the tooltip?
use case example :
```
const [hoverref, isHover] = useHover()
return (
<div>
<Tooltip open={isHover} showIcon={false}>
<p>Timestamp cannot be applied to a process filter</p>
</Tooltip>
<Checkbox
id="specific-timestamp"
ref={hoverref}
labelText="Select case starting from a specific timestamp"
onChange={handleShowTimeOptions}
/>
</div>
)
```
thanks for any help
| True | [feat] pass a ref to the checkbox label - Hello,
I need to show a tooltip when hover the label of a checkbox. The ref is targeting the input that is hidden and I would like to target the label to get the event.
Or maybe there is another way to trigger the tooltip?
use case example :
```
const [hoverref, isHover] = useHover()
return (
<div>
<Tooltip open={isHover} showIcon={false}>
<p>Timestamp cannot be applied to a process filter</p>
</Tooltip>
<Checkbox
id="specific-timestamp"
ref={hoverref}
labelText="Select case starting from a specific timestamp"
onChange={handleShowTimeOptions}
/>
</div>
)
```
thanks for any help
| main | pass a ref to the checkbox label hello i need to show a tooltip when hover the label of a checkbox the ref is targeting the input that is hidden and i would like to target the label to get the event or maybe there is another way to trigger the tooltip use case example const usehover return timestamp cannot be applied to a process filter checkbox id specific timestamp ref hoverref labeltext select case starting from a specific timestamp onchange handleshowtimeoptions thanks for any help | 1 |
1,562 | 6,572,254,900 | IssuesEvent | 2017-09-11 00:40:04 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | S3: Connecting to S3 fails when the AWS credentials are read from environment variables and contain slashes | affects_1.9 aws bug_report cloud waiting_on_maintainer | #### Issue Type:
Bug Report
#### Ansible Version:
ansible 1.9.4
boto 2.38.0
#### Environment:
ubuntu-vivid-64
#### Summary:
Connecting to S3 fails when the AWS credentials are read from environment variables and contain slashes
#### Steps To Reproduce:
Generate aws access keys with slashes ( may take a few tries...)
Set the access key and secret as environment variables, for instance by using
```
export AWS_ACCESS_KEY_ID=ASAMAASA/6GA
export AWS_SECRET_ACCESS_KEY=THATSMYKE/Y
```
Now run the s3 module for instance like this:
```
ansible master -m s3 -a "bucket=lab object=/test/test.txt src= /test/test.txt mode=put"
```
### Expected results
You should be able to put the item into s3 without any problems
### Actual results
The connection fails with
`Invalid header value 'AWS AKI_obfuscated_NA\\r:H_obfuscated_3Ass8='\n".``
Putting the same key and secret as variables on input works fine, though, so there seems to be some issue when reading from the environment.
Complete error message
```
"msg": "Traceback (most recent call last):\n File \"/home/vagrant/.ansible/tmp/ansible-tmp-1445852577.57-258812722150749/s3\", line 2320, in <module>\n main()\n File \"/home/vagrant/.ansible/tmp/ansible-tmp-1445852577.57-258812722150749/s3\", line 431, in main\n create_bucket(module, s3, bucket, location)\n File \"/home/vagrant/.ansible/tmp/ansible-tmp-1445852577.57-258812722150749/s3\", line 171, in create_bucket\n bucket = s3.create_bucket(bucket, location=location)\n File \"/home/vagrant/.local/lib/python2.7/site-packages/boto/s3/connection.py\", line 612, in create_bucket\n data=data)\n File \"/home/vagrant/.local/lib/python2.7/site-packages/boto/s3/connection.py\", line 664, in make_request\n retry_handler=retry_handler\n File \"/home/vagrant/.local/lib/python2.7/site-packages/boto/connection.py\", line 1071, in make_request\n retry_handler=retry_handler)\n File \"/home/vagrant/.local/lib/python2.7/site-packages/boto/connection.py\", line 943, in _mexe\n request.body, request.headers)\n File \"/usr/lib/python2.7/httplib.py\", line 1048, in request\n self._send_request(method, url, body, headers)\n File \"/usr/lib/python2.7/httplib.py\", line 1087, in _send_request\n self.putheader(hdr, value)\n File \"/usr/lib/python2.7/httplib.py\", line 1026, in putheader\n raise ValueError('Invalid header value %r' % (one_value,))\nValueError: Invalid header value 'AWS AKI_obfuscated_NA\\r:H_obfuscated_3Ass8='\n",
```
| True | S3: Connecting to S3 fails when the AWS credentials are read from environment variables and contain slashes - #### Issue Type:
Bug Report
#### Ansible Version:
ansible 1.9.4
boto 2.38.0
#### Environment:
ubuntu-vivid-64
#### Summary:
Connecting to S3 fails when the AWS credentials are read from environment variables and contain slashes
#### Steps To Reproduce:
Generate aws access keys with slashes ( may take a few tries...)
Set the access key and secret as environment variables, for instance by using
```
export AWS_ACCESS_KEY_ID=ASAMAASA/6GA
export AWS_SECRET_ACCESS_KEY=THATSMYKE/Y
```
Now run the s3 module for instance like this:
```
ansible master -m s3 -a "bucket=lab object=/test/test.txt src= /test/test.txt mode=put"
```
### Expected results
You should be able to put the item into s3 without any problems
### Actual results
The connection fails with
`Invalid header value 'AWS AKI_obfuscated_NA\\r:H_obfuscated_3Ass8='\n".``
Putting the same key and secret as variables on input works fine, though, so there seems to be some issue when reading from the environment.
Complete error message
```
"msg": "Traceback (most recent call last):\n File \"/home/vagrant/.ansible/tmp/ansible-tmp-1445852577.57-258812722150749/s3\", line 2320, in <module>\n main()\n File \"/home/vagrant/.ansible/tmp/ansible-tmp-1445852577.57-258812722150749/s3\", line 431, in main\n create_bucket(module, s3, bucket, location)\n File \"/home/vagrant/.ansible/tmp/ansible-tmp-1445852577.57-258812722150749/s3\", line 171, in create_bucket\n bucket = s3.create_bucket(bucket, location=location)\n File \"/home/vagrant/.local/lib/python2.7/site-packages/boto/s3/connection.py\", line 612, in create_bucket\n data=data)\n File \"/home/vagrant/.local/lib/python2.7/site-packages/boto/s3/connection.py\", line 664, in make_request\n retry_handler=retry_handler\n File \"/home/vagrant/.local/lib/python2.7/site-packages/boto/connection.py\", line 1071, in make_request\n retry_handler=retry_handler)\n File \"/home/vagrant/.local/lib/python2.7/site-packages/boto/connection.py\", line 943, in _mexe\n request.body, request.headers)\n File \"/usr/lib/python2.7/httplib.py\", line 1048, in request\n self._send_request(method, url, body, headers)\n File \"/usr/lib/python2.7/httplib.py\", line 1087, in _send_request\n self.putheader(hdr, value)\n File \"/usr/lib/python2.7/httplib.py\", line 1026, in putheader\n raise ValueError('Invalid header value %r' % (one_value,))\nValueError: Invalid header value 'AWS AKI_obfuscated_NA\\r:H_obfuscated_3Ass8='\n",
```
| main | connecting to fails when the aws credentials are read from environment variables and contain slashes issue type bug report ansible version ansible boto environment ubuntu vivid summary connecting to fails when the aws credentials are read from environment variables and contain slashes steps to reproduce generate aws access keys with slashes may take a few tries set the access key and secret as environment variables for instance by using export aws access key id asamaasa export aws secret access key thatsmyke y now run the module for instance like this ansible master m a bucket lab object test test txt src test test txt mode put expected results you should be able to put the item into without any problems actual results the connection fails with invalid header value aws aki obfuscated na r h obfuscated n putting the same key and secret as variables on input works fine though so there seems to be some issue when reading from the environment complete error message msg traceback most recent call last n file home vagrant ansible tmp ansible tmp line in n main n file home vagrant ansible tmp ansible tmp line in main n create bucket module bucket location n file home vagrant ansible tmp ansible tmp line in create bucket n bucket create bucket bucket location location n file home vagrant local lib site packages boto connection py line in create bucket n data data n file home vagrant local lib site packages boto connection py line in make request n retry handler retry handler n file home vagrant local lib site packages boto connection py line in make request n retry handler retry handler n file home vagrant local lib site packages boto connection py line in mexe n request body request headers n file usr lib httplib py line in request n self send request method url body headers n file usr lib httplib py line in send request n self putheader hdr value n file usr lib httplib py line in putheader n raise valueerror invalid header value r one value nvalueerror invalid 
header value aws aki obfuscated na r h obfuscated n | 1 |
8,033 | 7,191,924,453 | IssuesEvent | 2018-02-02 23:12:14 | dotnet/corefx | https://api.github.com/repos/dotnet/corefx | closed | Invalid setting csharp_new_line_within_query_expression_clauses in .editorconfig | area-Infrastructure enhancement up-for-grabs | I used .editorconfig from corefx in our project. Noticed that .editorconfig file has one invalid setting that Roslyn doesn't support csharp_new_line_within_query_expression_clauses.
But Roslyn does support csharp_new_line_between_query_expression_clauses and it's missing from .editorconfig.
Also noticed that at least following settings are missing as well
csharp_indent_case_contents_when_block
dotnet_style_prefer_inferred_tuple_names
dotnet_style_prefer_inferred_anonymous_type_member_names
dotnet_style_require_accessibility_modifiers
dotnet_style_prefer_auto_properties
csharp_style_deconstructed_variable_declaration
dotnet_style_prefer_is_null_check_over_reference_equality_method | 1.0 | Invalid setting csharp_new_line_within_query_expression_clauses in .editorconfig - I used .editorconfig from corefx in our project. Noticed that .editorconfig file has one invalid setting that Roslyn doesn't support csharp_new_line_within_query_expression_clauses.
But Roslyn does support csharp_new_line_between_query_expression_clauses and it's missing from .editorconfig.
Also noticed that at least following settings are missing as well
csharp_indent_case_contents_when_block
dotnet_style_prefer_inferred_tuple_names
dotnet_style_prefer_inferred_anonymous_type_member_names
dotnet_style_require_accessibility_modifiers
dotnet_style_prefer_auto_properties
csharp_style_deconstructed_variable_declaration
dotnet_style_prefer_is_null_check_over_reference_equality_method | non_main | invalid setting csharp new line within query expression clauses in editorconfig i used editorconfig from corefx in our project noticed that editorconfig file has one invalid setting that roslyn doesn t support csharp new line within query expression clauses but roslyn does support csharp new line between query expression clauses and it s missing from editorconfig also noticed that at least following settings are missing as well csharp indent case contents when block dotnet style prefer inferred tuple names dotnet style prefer inferred anonymous type member names dotnet style require accessibility modifiers dotnet style prefer auto properties csharp style deconstructed variable declaration dotnet style prefer is null check over reference equality method | 0 |
25,699 | 4,417,714,277 | IssuesEvent | 2016-08-15 07:25:20 | snowie2000/mactype | https://api.github.com/repos/snowie2000/mactype | closed | Letter 'g' and 'j' is cutout with Verdana 12px | auto-migrated Priority-Medium Type-Defect | ```
What steps will reproduce the problem?
1.
2.
3.
What is the expected output? What do you see instead?
Expected output is a normal letter 'g' and 'j' but instead the bottom is cutout
in the text.
What version of the product are you using? On what operating system?
MacType 1.2012.0406.0, LCD-style, Windows 7 Professional 64-bit. Firefox 11.0.
Same issue in Chrome 18+.
Please provide any additional information below.
This seems to happen with the font "Verdana" size 12px. I looked at many
different sites with the same font/fontsize and the output is the same.
It's not a big deal but maybe good to know if u are updating it for the future.
Posting a picture so you can see how it looks.
Keep up the good work!
```
Original issue reported on code.google.com by `end...@gmail.com` on 19 Apr 2012 at 1:14
Attachments:
* [Untitled.png](https://storage.googleapis.com/google-code-attachments/mactype/issue-7/comment-0/Untitled.png)
| 1.0 | Letter 'g' and 'j' is cutout with Verdana 12px - ```
What steps will reproduce the problem?
1.
2.
3.
What is the expected output? What do you see instead?
Expected output is a normal letter 'g' and 'j' but instead the bottom is cutout
in the text.
What version of the product are you using? On what operating system?
MacType 1.2012.0406.0, LCD-style, Windows 7 Professional 64-bit. Firefox 11.0.
Same issue in Chrome 18+.
Please provide any additional information below.
This seems to happen with the font "Verdana" size 12px. I looked at many
different sites with the same font/fontsize and the output is the same.
It's not a big deal but maybe good to know if u are updating it for the future.
Posting a picture so you can see how it looks.
Keep up the good work!
```
Original issue reported on code.google.com by `end...@gmail.com` on 19 Apr 2012 at 1:14
Attachments:
* [Untitled.png](https://storage.googleapis.com/google-code-attachments/mactype/issue-7/comment-0/Untitled.png)
| non_main | letter g and j is cutout with verdana what steps will reproduce the problem what is the expected output what do you see instead expected output is a normal letter g and j but instead the bottom is cutout in the text what version of the product are you using on what operating system mactype lcd style windows professional bit firefox same issue in chrome please provide any additional information below this seems to happen with the font verdana size i looked at many different sites with the same font fontsize and the output is the same it s not a big deal but maybe good to know if u are updating it for the future posting a picture so you can see how it looks keep up the good work original issue reported on code google com by end gmail com on apr at attachments | 0 |
4,366 | 22,113,311,332 | IssuesEvent | 2022-06-01 23:49:15 | mozilla/foundation.mozilla.org | https://api.github.com/repos/mozilla/foundation.mozilla.org | closed | Remove @set-text-size mixin | engineering frontend Maintain | All SASS files are using the `@set-text-size` mixin.
Since the move to Tailwind, we would need that mixin anymore.
Refactoring the sass files will be a lot easier if we do not have to think about how the mixin transforms even if it makes the sass files temporarily more dry code | True | Remove @set-text-size mixin - All SASS files are using the `@set-text-size` mixin.
Since the move to Tailwind, we would need that mixin anymore.
Refactoring the sass files will be a lot easier if we do not have to think about how the mixin transforms even if it makes the sass files temporarily more dry code | main | remove set text size mixin all sass files are using the set text size mixin since the move to tailwind we would need that mixin anymore refactoring the sass files will be a lot easier if we do not have to think about how the mixin transforms even if it makes the sass files temporarily more dry code | 1 |
87,466 | 10,546,546,821 | IssuesEvent | 2019-10-02 21:46:38 | somewhatabstract/checksync | https://api.github.com/repos/somewhatabstract/checksync | opened | Improve readme and help around tagging | documentation enhancement good first issue | We should have some more examples of how tags work, especially regarding multiple files targeting one another on the same content.
Regarding the readme, some images may help illustrate much more clearly. | 1.0 | Improve readme and help around tagging - We should have some more examples of how tags work, especially regarding multiple files targeting one another on the same content.
Regarding the readme, some images may help illustrate much more clearly. | non_main | improve readme and help around tagging we should have some more examples of how tags work especially regarding multiple files targeting one another on the same content regarding the readme some images may help illustrate much more clearly | 0 |
2,159 | 7,504,457,326 | IssuesEvent | 2018-04-10 03:44:43 | hnordt/reminders | https://api.github.com/repos/hnordt/reminders | closed | Fix "method_complexity" issue in src/_infinity/utils/processUpdate.js | maintainability | Function `processUpdate` has a Cognitive Complexity of 14 (exceeds 5 allowed). Consider refactoring.
https://codeclimate.com/github/hnordt/reminders/src/_infinity/utils/processUpdate.js#issue_5acc29fdc046c3000100007b | True | Fix "method_complexity" issue in src/_infinity/utils/processUpdate.js - Function `processUpdate` has a Cognitive Complexity of 14 (exceeds 5 allowed). Consider refactoring.
https://codeclimate.com/github/hnordt/reminders/src/_infinity/utils/processUpdate.js#issue_5acc29fdc046c3000100007b | main | fix method complexity issue in src infinity utils processupdate js function processupdate has a cognitive complexity of exceeds allowed consider refactoring | 1 |
6,548 | 7,687,153,274 | IssuesEvent | 2018-05-17 03:40:29 | GalateaEngine/Galatea | https://api.github.com/repos/GalateaEngine/Galatea | closed | Command line arguments to override config file parameters | emotion_classifier enhancement microservices | src/microservices/modules/emotion/
Currently, we read the config file for all parameters for the server
The usage of command line arguments to either override the current config file or override arguments.
ideally with Argsparse | 1.0 | Command line arguments to override config file parameters - src/microservices/modules/emotion/
Currently, we read the config file for all parameters for the server
The usage of command line arguments to either override the current config file or override arguments.
ideally with Argsparse | non_main | command line arguments to override config file parameters src microservices modules emotion currently we read the config file for all parameters for the server the usage of command line arguments to either override the current config file or override arguments ideally with argsparse | 0 |