Dataset columns (dtype and value range / string length / class count):

| column | dtype | range / classes |
|---|---|---|
| Unnamed: 0 | int64 | 1 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | string | 1 class |
| created_at | string | length 19 |
| repo | string | length 7 to 112 |
| repo_url | string | length 36 to 141 |
| action | string | 3 classes |
| title | string | length 3 to 438 |
| labels | string | length 4 to 308 |
| body | string | length 7 to 254k |
| index | string | 7 classes |
| text_combine | string | length 96 to 254k |
| label | string | 2 classes |
| text | string | length 96 to 246k |
| binary_label | int64 | 0 to 1 |
4,265
21,280,249,914
IssuesEvent
2022-04-14 00:29:34
aws/aws-lambda-builders
https://api.github.com/repos/aws/aws-lambda-builders
closed
JavaGradleWorkflow fails with Gradle <3.5; Cannot convert the provided notation to a File or URI
area/workflow/java_gradle maintainer/need-followup
**Description:** When I run `sam build` on my Java project with Gradle 3.4.1, it always fails. This appears to be due to https://github.com/awslabs/aws-lambda-builders/blob/develop/aws_lambda_builders/workflows/java_gradle/resources/lambda-build-init.gradle#L18 setting the project `buildDir` property to a `Path`, when it isn't prepared for it. Adding `.toFile()` to this line appears to fix the problem. It appears that Gradle 3.5+ supports resolving `Path` values. **Steps to reproduce the issue:** 1. `sam init --runtime java8 --dependency-manager gradle --name gradlebuildtest` 2. `cd gradlebuildtest/` 3. `(cd HelloWorldFunction && sdk use gradle 3.4.1 && gradle wrapper) && sam build` **Observed result:** ``` Using gradle version 3.4.1 in this shell. :wrapper BUILD SUCCESSFUL Total time: 1.31 secs Building resource 'HelloWorldFunction' Running JavaGradleWorkflow:GradleBuild Build Failed Error: JavaGradleWorkflow:GradleBuild - Gradle Failed: FAILURE: Build failed with an exception. * What went wrong: A problem occurred configuring root project 'HelloWorldFunction'. > Cannot convert the provided notation to a File or URI: /var/folders/p1/w3tg9xz54lj_4xgwj60tffb126v92f/T/tmptseldu67/4abbb2503507efca0fbeaf9d14459fc8cdd6af90/build. The following types/formats are supported: - A String or CharSequence path, for example 'src/main/java' or '/usr/include'. - A String or CharSequence URI, for example 'file:/usr/include'. - A File instance. - A URI or URL instance. * Try: Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. mac-dcarr:gradlebuildtest dcarr$ ``` **Expected result:** ``` mac-dcarr:gradlebuildtest dcarr$ (cd HelloWorldFunction && sdk use gradle 5.6.3 && gradle wrapper) && sam build Using gradle version 5.6.3 in this shell. 
BUILD SUCCESSFUL in 727ms 1 actionable task: 1 executed Building resource 'HelloWorldFunction' Running JavaGradleWorkflow:GradleBuild Running JavaGradleWorkflow:CopyArtifacts Build Succeeded Built Artifacts : .aws-sam/build Built Template : .aws-sam/build/template.yaml Commands you can use next ========================= [*] Invoke Function: sam local invoke [*] Package: sam package --s3-bucket <yourbucket> mac-dcarr:gradlebuildtest dcarr$ ``` **Additional environment details (Ex: Windows, Mac, Amazon Linux etc)** Scripts were run on Mac OS X 10.14.6, and use https://sdkman.io/ for Gradle version management.
True
JavaGradleWorkflow fails with Gradle <3.5; Cannot convert the provided notation to a File or URI - **Description:** When I run `sam build` on my Java project with Gradle 3.4.1, it always fails. This appears to be due to https://github.com/awslabs/aws-lambda-builders/blob/develop/aws_lambda_builders/workflows/java_gradle/resources/lambda-build-init.gradle#L18 setting the project `buildDir` property to a `Path`, when it isn't prepared for it. Adding `.toFile()` to this line appears to fix the problem. It appears that Gradle 3.5+ supports resolving `Path` values. **Steps to reproduce the issue:** 1. `sam init --runtime java8 --dependency-manager gradle --name gradlebuildtest` 2. `cd gradlebuildtest/` 3. `(cd HelloWorldFunction && sdk use gradle 3.4.1 && gradle wrapper) && sam build` **Observed result:** ``` Using gradle version 3.4.1 in this shell. :wrapper BUILD SUCCESSFUL Total time: 1.31 secs Building resource 'HelloWorldFunction' Running JavaGradleWorkflow:GradleBuild Build Failed Error: JavaGradleWorkflow:GradleBuild - Gradle Failed: FAILURE: Build failed with an exception. * What went wrong: A problem occurred configuring root project 'HelloWorldFunction'. > Cannot convert the provided notation to a File or URI: /var/folders/p1/w3tg9xz54lj_4xgwj60tffb126v92f/T/tmptseldu67/4abbb2503507efca0fbeaf9d14459fc8cdd6af90/build. The following types/formats are supported: - A String or CharSequence path, for example 'src/main/java' or '/usr/include'. - A String or CharSequence URI, for example 'file:/usr/include'. - A File instance. - A URI or URL instance. * Try: Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. mac-dcarr:gradlebuildtest dcarr$ ``` **Expected result:** ``` mac-dcarr:gradlebuildtest dcarr$ (cd HelloWorldFunction && sdk use gradle 5.6.3 && gradle wrapper) && sam build Using gradle version 5.6.3 in this shell. 
BUILD SUCCESSFUL in 727ms 1 actionable task: 1 executed Building resource 'HelloWorldFunction' Running JavaGradleWorkflow:GradleBuild Running JavaGradleWorkflow:CopyArtifacts Build Succeeded Built Artifacts : .aws-sam/build Built Template : .aws-sam/build/template.yaml Commands you can use next ========================= [*] Invoke Function: sam local invoke [*] Package: sam package --s3-bucket <yourbucket> mac-dcarr:gradlebuildtest dcarr$ ``` **Additional environment details (Ex: Windows, Mac, Amazon Linux etc)** Scripts were run on Mac OS X 10.14.6, and use https://sdkman.io/ for Gradle version management.
main
javagradleworkflow fails with gradle cannot convert the provided notation to a file or uri description when i run sam build on my java project with gradle it always fails this appears to be due to setting the project builddir property to a path when it isn t prepared for it adding tofile to this line appears to fix the problem it appears that gradle support resolving path values steps to reproduce the issue sam init runtime dependency manager gradle name gradlebuildtest cd gradlebuildtest cd helloworldfunction sdk use gradle gradle wrapper sam build observed result using gradle version in this shell wrapper build successful total time secs building resource helloworldfunction running javagradleworkflow gradlebuild build failed error javagradleworkflow gradlebuild gradle failed failure build failed with an exception what went wrong a problem occurred configuring root project helloworldfunction cannot convert the provided notation to a file or uri var folders t build the following types formats are supported a string or charsequence path for example src main java or usr include a string or charsequence uri for example file usr include a file instance a uri or url instance try run with stacktrace option to get the stack trace run with info or debug option to get more log output mac dcarr gradlebuildtest dcarr expected result mac dcarr gradlebuildtest dcarr cd helloworldfunction sdk use gradle gradle wrapper sam build using gradle version in this shell build successful in actionable task executed building resource helloworldfunction running javagradleworkflow gradlebuild running javagradleworkflow copyartifacts build succeeded built artifacts aws sam build built template aws sam build template yaml commands you can use next invoke function sam local invoke package sam package bucket mac dcarr gradlebuildtest dcarr additional environment details ex windows mac amazon linux etc scripts were run on mac os x and use for gradle version management
1
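The `.toFile()` fix described in the issue above can be sketched as a Gradle init-script fragment. This is a hypothetical illustration, not the actual contents of `lambda-build-init.gradle`; how the scratch `Path` is derived here is an assumption.

```groovy
import java.nio.file.Paths

// Hypothetical stand-in for the Path the init script computes; the real
// script receives a scratch directory from sam build.
def scratchDir = Paths.get(System.getProperty("java.io.tmpdir"), "lambda-build")

allprojects {
    // Gradle < 3.5 cannot coerce a java.nio.file.Path into a File, so
    // assigning the Path directly fails with "Cannot convert the provided
    // notation to a File or URI":
    //     buildDir = scratchDir          // breaks on Gradle 3.4.x
    // Converting explicitly works on both old and new Gradle versions:
    buildDir = scratchDir.toFile()
}
```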
4,612
23,879,130,572
IssuesEvent
2022-09-07 22:28:31
aws/aws-sam-cli
https://api.github.com/repos/aws/aws-sam-cli
closed
Feature request: AWS::LanguageExtensions support
type/feature maintainer/need-followup
### Describe your idea/feature/enhancement I wish SAM CLI would handle the new AWS::LanguageExtensions transform as specified in the documentation: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-languageextension-transform.html Example template: ```yml AWSTemplateFormatVersion: 2010-09-09 Transform: - AWS::LanguageExtensions - AWS::Serverless-2016-10-31 Parameters: Environment: Type: String Default: dev AllowedValues: - dev - prod Conditions: IsProd: !Equals [!Ref Environment, prod] Resources: Bucket: Type: AWS::S3::Bucket DeletionPolicy: !If [IsProd, Retain, Delete] UpdateReplacePolicy: !If [IsProd, Retain, Delete] ``` `AWS CLI`, version 2.7.7 succeeds with: ``` ❯ aws cloudformation deploy --stack-name test-language-extension --template-file sam/test-template.yml Waiting for changeset to be created.. Waiting for stack create/update to complete Successfully created/updated stack - test-language-extension ``` `SAM CLI`, version 1.55.0 fails with: ``` sam deploy --stack-name test-language-extension --template-file sam/test-template.yml Traceback (most recent call last): [... redacted stack trace ...] File "/usr/local/Cellar/aws-sam-cli/1.55.0/libexec/lib/python3.8/site-packages/samcli/lib/samlib/wrapper.py", line 70, in run_plugins raise InvalidSamDocumentException( samcli.commands.validate.lib.exceptions.InvalidSamDocumentException: [InvalidTemplateException('Every DeletionPolicy member must be a string.')] Every DeletionPolicy member must be a string. ``` ### Proposal Implement the same changes as `aws-cli` and `cfn-lint`. For reference, the releases/announcements from both tools: - https://github.com/aws-cloudformation/cfn-lint/compare/v0.62.0..v0.63.0 - https://github.com/aws/aws-cli/issues/3825#issuecomment-1231608258 Things to consider: The order of the transforms is important and using multiple transforms is currently broken in `cfn-lint` https://github.com/aws-cloudformation/cfn-lint/issues/2346 ### Additional Details No additional details
True
Feature request: AWS::LanguageExtensions support - ### Describe your idea/feature/enhancement I wish SAM CLI would handle the new AWS::LanguageExtensions transform as specified in the documentation: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-languageextension-transform.html Example template: ```yml AWSTemplateFormatVersion: 2010-09-09 Transform: - AWS::LanguageExtensions - AWS::Serverless-2016-10-31 Parameters: Environment: Type: String Default: dev AllowedValues: - dev - prod Conditions: IsProd: !Equals [!Ref Environment, prod] Resources: Bucket: Type: AWS::S3::Bucket DeletionPolicy: !If [IsProd, Retain, Delete] UpdateReplacePolicy: !If [IsProd, Retain, Delete] ``` `AWS CLI`, version 2.7.7 succeeds with: ``` ❯ aws cloudformation deploy --stack-name test-language-extension --template-file sam/test-template.yml Waiting for changeset to be created.. Waiting for stack create/update to complete Successfully created/updated stack - test-language-extension ``` `SAM CLI`, version 1.55.0 fails with: ``` sam deploy --stack-name test-language-extension --template-file sam/test-template.yml Traceback (most recent call last): [... redacted stack trace ...] File "/usr/local/Cellar/aws-sam-cli/1.55.0/libexec/lib/python3.8/site-packages/samcli/lib/samlib/wrapper.py", line 70, in run_plugins raise InvalidSamDocumentException( samcli.commands.validate.lib.exceptions.InvalidSamDocumentException: [InvalidTemplateException('Every DeletionPolicy member must be a string.')] Every DeletionPolicy member must be a string. 
``` ### Proposal Implement the same changes as `aws-cli` and `cfn-lint`. For reference, the releases/announcements from both tools: - https://github.com/aws-cloudformation/cfn-lint/compare/v0.62.0..v0.63.0 - https://github.com/aws/aws-cli/issues/3825#issuecomment-1231608258 Things to consider: The order of the transforms is important and using multiple transforms is currently broken in `cfn-lint` https://github.com/aws-cloudformation/cfn-lint/issues/2346 ### Additional Details No additional details
main
feature request aws languageextensions support describe your idea feature enhancement i wish sam cli would handle the new aws languageextensions transform as specified in the documentation example template yml awstemplateformatversion transform aws languageextensions aws serverless parameters environment type string default dev allowedvalues dev prod conditions isprod equals resources bucket type aws bucket deletionpolicy if updatereplacepolicy if aws cli version succeeds with ❯ aws cloudformation deploy stack name test language extension template file sam test template yml waiting for changeset to be created waiting for stack create update to complete successfully created updated stack test language extension sam cli version fail with sam deploy stack name test language extension template file sam test template yml traceback most recent call last file usr local cellar aws sam cli libexec lib site packages samcli lib samlib wrapper py line in run plugins raise invalidsamdocumentexception samcli commands validate lib exceptions invalidsamdocumentexception every deletionpolicy member must be a string proposal implement the same changes as aws cli and cfn lint for reference the releases announcements from both tools things to consider the order of the transforms is important and using multiple transforms is currently broken in cfn lint additional details no additional details
1
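The transform-ordering concern raised in the feature request above can be sketched in Python. This is a simplified model, not SAM CLI's actual code; the function names are hypothetical. A CloudFormation `Transform` section may be absent, a bare string, or an ordered list, and `AWS::LanguageExtensions` must be applied before the serverless transform resolves intrinsics like `!If`.

```python
def ordered_transforms(template: dict) -> list:
    """Normalize a template's Transform section to an ordered list.

    Transform may be missing, a single string, or a list of strings.
    Order matters: AWS::LanguageExtensions has to run first.
    """
    transform = template.get("Transform", [])
    if isinstance(transform, str):
        transform = [transform]
    return list(transform)


def needs_language_extensions(template: dict) -> bool:
    """True when the template opts in to AWS::LanguageExtensions."""
    return "AWS::LanguageExtensions" in ordered_transforms(template)


# The template from the issue above, reduced to its Transform section:
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Transform": ["AWS::LanguageExtensions", "AWS::Serverless-2016-10-31"],
}
print(needs_language_extensions(template))  # True
print(ordered_transforms(template).index("AWS::LanguageExtensions"))  # 0
```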
1,720
6,574,483,942
IssuesEvent
2017-09-11 13:03:43
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
Ansible apt ignores cache_valid_time value
affects_2.2 bug_report waiting_on_maintainer
<!--- Verify first that your issue/request is not already reported in GitHub --> ##### ISSUE TYPE <!--- Pick one below and delete the rest: --> - Bug Report ##### COMPONENT NAME <!--- Name of the plugin/module/task --> apt ##### ANSIBLE VERSION <!--- Paste verbatim output from “ansible --version” between quotes below --> ``` ansible 2.2.0.0 config file = /home/vagrant/my/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION <!--- Mention any settings you have changed/added/removed in ansible.cfg (or using the ANSIBLE_* environment variables). --> ansible.cfg: [defaults] hostfile = hosts ##### OS / ENVIRONMENT <!--- Mention the OS you are running Ansible from, and the OS you are managing, or say “N/A” for anything that is not platform-specific. --> OS you are running Ansible from: Ubuntu 16.04 OS you are managing: Ubuntu 16.04 ##### SUMMARY <!--- Explain the problem briefly --> After upgrading to Ansible 2.2 I always get changes in the apt module because it ignores the **cache_valid_time** value. ##### STEPS TO REPRODUCE <!--- For bugs, show exactly how to reproduce the problem. For new features, show how the feature would be used. --> <!--- Paste example playbooks or commands between quotes below --> ``` test.yml: --- - hosts: localvm become: yes tasks: - name: Only run "update_cache=yes" if the last one is more than 3600 seconds ago apt: update_cache: yes cache_valid_time: 3600 vagrant@ans-contrl:~/my$ ansible-playbook test.yml -vvv ``` <!--- You can also paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- What did you expect to happen when running the steps above? --> Update apt cache on first run, skip updating cache on second run. ##### ACTUAL RESULTS <!--- What actually happened? If possible run with extra verbosity (-vvvv) --> Always changes. 
<!--- Paste verbatim command output between quotes below --> ``` vagrant@ans-contrl:~/my$ ansible-playbook test.yml -vvv Using /home/vagrant/my/ansible.cfg as config file PLAYBOOK: test.yml ************************************************************* 1 plays in test.yml PLAY [localvm] ***************************************************************** TASK [setup] ******************************************************************* Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/system/setup.py <192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant <192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.60.4 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1478178800.59-26361197346329 `" && echo ansible-tmp-1478178800.59-26361197346329="` echo $HOME/.ansible/tmp/ansible-tmp-1478178800.59-26361197346329 `" ) && sleep 0'"'"'' <192.168.60.4> PUT /tmp/tmpGz1Eb9 TO /home/vagrant/.ansible/tmp/ansible-tmp-1478178800.59-26361197346329/setup.py <192.168.60.4> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r '[192.168.60.4]' <192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant <192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.60.4 
'/bin/sh -c '"'"'chmod u+x /home/vagrant/.ansible/tmp/ansible-tmp-1478178800.59-26361197346329/ /home/vagrant/.ansible/tmp/ansible-tmp-1478178800.59-26361197346329/setup.py && sleep 0'"'"'' <192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant <192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r -tt 192.168.60.4 '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-bblyfpmawwxwihkyhdzgsrwimfkjlzuk; /usr/bin/python /home/vagrant/.ansible/tmp/ansible-tmp-1478178800.59-26361197346329/setup.py; rm -rf "/home/vagrant/.ansible/tmp/ansible-tmp-1478178800.59-26361197346329/" > /dev/null 2>&1'"'"'"'"'"'"'"'"' && sleep 0'"'"'' ok: [192.168.60.4] TASK [Only run "update_cache=yes" if the last one is more than 3600 seconds ago] *** task path: /home/vagrant/my/test.yml:6 Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/packaging/os/apt.py <192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant <192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.60.4 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1478178801.29-209769775274469 `" && echo ansible-tmp-1478178801.29-209769775274469="` echo $HOME/.ansible/tmp/ansible-tmp-1478178801.29-209769775274469 `" ) && sleep 0'"'"'' <192.168.60.4> PUT /tmp/tmpb8HOiL TO /home/vagrant/.ansible/tmp/ansible-tmp-1478178801.29-209769775274469/apt.py <192.168.60.4> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o 
KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r '[192.168.60.4]' <192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant <192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.60.4 '/bin/sh -c '"'"'chmod u+x /home/vagrant/.ansible/tmp/ansible-tmp-1478178801.29-209769775274469/ /home/vagrant/.ansible/tmp/ansible-tmp-1478178801.29-209769775274469/apt.py && sleep 0'"'"'' <192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant <192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r -tt 192.168.60.4 '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-soyskgemfitdsrhonujvdopjieqzexmq; /usr/bin/python /home/vagrant/.ansible/tmp/ansible-tmp-1478178801.29-209769775274469/apt.py; rm -rf "/home/vagrant/.ansible/tmp/ansible-tmp-1478178801.29-209769775274469/" > /dev/null 2>&1'"'"'"'"'"'"'"'"' && sleep 0'"'"'' changed: [192.168.60.4] => { "cache_update_time": 1478170123, "cache_updated": true, "changed": true, "invocation": { "module_args": { "allow_unauthenticated": false, "autoremove": false, "cache_valid_time": 3600, "deb": null, "default_release": null, "dpkg_options": "force-confdef,force-confold", "force": false, "install_recommends": null, "only_upgrade": false, "package": null, "purge": false, "state": "present", 
"update_cache": true, "upgrade": null }, "module_name": "apt" } } PLAY RECAP ********************************************************************* 192.168.60.4 : ok=2 changed=1 unreachable=0 failed=0 vagrant@ans-contrl:~/my$ ansible-playbook test.yml -vvv Using /home/vagrant/my/ansible.cfg as config file PLAYBOOK: test.yml ************************************************************* 1 plays in test.yml PLAY [localvm] ***************************************************************** TASK [setup] ******************************************************************* Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/system/setup.py <192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant <192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.60.4 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1478178871.45-218992397586023 `" && echo ansible-tmp-1478178871.45-218992397586023="` echo $HOME/.ansible/tmp/ansible-tmp-1478178871.45-218992397586023 `" ) && sleep 0'"'"'' <192.168.60.4> PUT /tmp/tmpv9o0e3 TO /home/vagrant/.ansible/tmp/ansible-tmp-1478178871.45-218992397586023/setup.py <192.168.60.4> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r '[192.168.60.4]' <192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant <192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o 
PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.60.4 '/bin/sh -c '"'"'chmod u+x /home/vagrant/.ansible/tmp/ansible-tmp-1478178871.45-218992397586023/ /home/vagrant/.ansible/tmp/ansible-tmp-1478178871.45-218992397586023/setup.py && sleep 0'"'"'' <192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant <192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r -tt 192.168.60.4 '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-lwuttqhzswvnqlvkfcbraivcbuceisuz; /usr/bin/python /home/vagrant/.ansible/tmp/ansible-tmp-1478178871.45-218992397586023/setup.py; rm -rf "/home/vagrant/.ansible/tmp/ansible-tmp-1478178871.45-218992397586023/" > /dev/null 2>&1'"'"'"'"'"'"'"'"' && sleep 0'"'"'' ok: [192.168.60.4] TASK [Only run "update_cache=yes" if the last one is more than 3600 seconds ago] *** task path: /home/vagrant/my/test.yml:6 Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/packaging/os/apt.py <192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant <192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.60.4 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1478178872.37-148384000832646 `" && echo ansible-tmp-1478178872.37-148384000832646="` echo $HOME/.ansible/tmp/ansible-tmp-1478178872.37-148384000832646 `" ) && sleep 0'"'"'' <192.168.60.4> PUT /tmp/tmp3rCfzf TO 
/home/vagrant/.ansible/tmp/ansible-tmp-1478178872.37-148384000832646/apt.py <192.168.60.4> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r '[192.168.60.4]' <192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant <192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.60.4 '/bin/sh -c '"'"'chmod u+x /home/vagrant/.ansible/tmp/ansible-tmp-1478178872.37-148384000832646/ /home/vagrant/.ansible/tmp/ansible-tmp-1478178872.37-148384000832646/apt.py && sleep 0'"'"'' <192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant <192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r -tt 192.168.60.4 '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-xhjxsxornuzelhyvlsiksuindfcmjlpx; /usr/bin/python /home/vagrant/.ansible/tmp/ansible-tmp-1478178872.37-148384000832646/apt.py; rm -rf "/home/vagrant/.ansible/tmp/ansible-tmp-1478178872.37-148384000832646/" > /dev/null 2>&1'"'"'"'"'"'"'"'"' && sleep 0'"'"'' changed: [192.168.60.4] => { "cache_update_time": 1478170123, "cache_updated": true, "changed": true, "invocation": { "module_args": { "allow_unauthenticated": false, "autoremove": false, "cache_valid_time": 3600, "deb": null, "default_release": null, "dpkg_options": 
"force-confdef,force-confold", "force": false, "install_recommends": null, "only_upgrade": false, "package": null, "purge": false, "state": "present", "update_cache": true, "upgrade": null }, "module_name": "apt" } } PLAY RECAP ********************************************************************* 192.168.60.4 : ok=2 changed=1 unreachable=0 failed=0 ``` It seems **cache_update_time** wasn't updated.
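The behaviour the reporter expected can be modeled as a pure function. This is a simplified sketch, not the apt module's real implementation; the function name is hypothetical. The cache should only be refreshed when its last update is older than `cache_valid_time` seconds, which is why a stale `cache_update_time` (as in the log above, where it stays at 1478170123) makes every run report a change.

```python
def cache_needs_update(cache_update_time: int, now: int, cache_valid_time: int) -> bool:
    """Return True when the apt cache is stale and should be refreshed.

    cache_update_time -- epoch seconds of the last apt cache update
    now               -- current epoch seconds
    cache_valid_time  -- maximum allowed cache age in seconds
    """
    return (now - cache_update_time) > cache_valid_time


# Timestamps from the log above: the cache reports a last update at
# 1478170123, while the two playbook runs happen at ~1478178800 and
# ~1478178871. Because the recorded timestamp is never refreshed, the
# first check is True -- and so is every later one.
print(cache_needs_update(1478170123, 1478178800, 3600))  # True (8677s old)
print(cache_needs_update(1478178800, 1478178871, 3600))  # False (71s old)
```

Had the module refreshed `cache_update_time` after the first run, the second run would see a 71-second-old cache and skip the update, which is the expected "ok" result.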
True
Ansible apt ignores cache_valid_time value - <!--- Verify first that your issue/request is not already reported in GitHub --> ##### ISSUE TYPE <!--- Pick one below and delete the rest: --> - Bug Report ##### COMPONENT NAME <!--- Name of the plugin/module/task --> apt ##### ANSIBLE VERSION <!--- Paste verbatim output from “ansible --version” between quotes below --> ``` ansible 2.2.0.0 config file = /home/vagrant/my/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION <!--- Mention any settings you have changed/added/removed in ansible.cfg (or using the ANSIBLE_* environment variables). --> ansible.cfg: [defaults] hostfile = hosts ##### OS / ENVIRONMENT <!--- Mention the OS you are running Ansible from, and the OS you are managing, or say “N/A” for anything that is not platform-specific. --> OS you are running Ansible from: Ubuntu 16.04 OS you are managing: Ubuntu 16.04 ##### SUMMARY <!--- Explain the problem briefly --> After upgrading to Ansible 2.2 I always get changes in the apt module because it ignores the **cache_valid_time** value. ##### STEPS TO REPRODUCE <!--- For bugs, show exactly how to reproduce the problem. For new features, show how the feature would be used. --> <!--- Paste example playbooks or commands between quotes below --> ``` test.yml: --- - hosts: localvm become: yes tasks: - name: Only run "update_cache=yes" if the last one is more than 3600 seconds ago apt: update_cache: yes cache_valid_time: 3600 vagrant@ans-contrl:~/my$ ansible-playbook test.yml -vvv ``` <!--- You can also paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- What did you expect to happen when running the steps above? --> Update apt cache on first run, skip updating cache on second run. ##### ACTUAL RESULTS <!--- What actually happened? If possible run with extra verbosity (-vvvv) --> Always changes. 
<!--- Paste verbatim command output between quotes below --> ``` vagrant@ans-contrl:~/my$ ansible-playbook test.yml -vvv Using /home/vagrant/my/ansible.cfg as config file PLAYBOOK: test.yml ************************************************************* 1 plays in test.yml PLAY [localvm] ***************************************************************** TASK [setup] ******************************************************************* Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/system/setup.py <192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant <192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.60.4 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1478178800.59-26361197346329 `" && echo ansible-tmp-1478178800.59-26361197346329="` echo $HOME/.ansible/tmp/ansible-tmp-1478178800.59-26361197346329 `" ) && sleep 0'"'"'' <192.168.60.4> PUT /tmp/tmpGz1Eb9 TO /home/vagrant/.ansible/tmp/ansible-tmp-1478178800.59-26361197346329/setup.py <192.168.60.4> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r '[192.168.60.4]' <192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant <192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.60.4 
'/bin/sh -c '"'"'chmod u+x /home/vagrant/.ansible/tmp/ansible-tmp-1478178800.59-26361197346329/ /home/vagrant/.ansible/tmp/ansible-tmp-1478178800.59-26361197346329/setup.py && sleep 0'"'"'' <192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant <192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r -tt 192.168.60.4 '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-bblyfpmawwxwihkyhdzgsrwimfkjlzuk; /usr/bin/python /home/vagrant/.ansible/tmp/ansible-tmp-1478178800.59-26361197346329/setup.py; rm -rf "/home/vagrant/.ansible/tmp/ansible-tmp-1478178800.59-26361197346329/" > /dev/null 2>&1'"'"'"'"'"'"'"'"' && sleep 0'"'"'' ok: [192.168.60.4] TASK [Only run "update_cache=yes" if the last one is more than 3600 seconds ago] *** task path: /home/vagrant/my/test.yml:6 Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/packaging/os/apt.py <192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant <192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.60.4 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1478178801.29-209769775274469 `" && echo ansible-tmp-1478178801.29-209769775274469="` echo $HOME/.ansible/tmp/ansible-tmp-1478178801.29-209769775274469 `" ) && sleep 0'"'"'' <192.168.60.4> PUT /tmp/tmpb8HOiL TO /home/vagrant/.ansible/tmp/ansible-tmp-1478178801.29-209769775274469/apt.py <192.168.60.4> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o 
KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r '[192.168.60.4]' <192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant <192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.60.4 '/bin/sh -c '"'"'chmod u+x /home/vagrant/.ansible/tmp/ansible-tmp-1478178801.29-209769775274469/ /home/vagrant/.ansible/tmp/ansible-tmp-1478178801.29-209769775274469/apt.py && sleep 0'"'"'' <192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant <192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r -tt 192.168.60.4 '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-soyskgemfitdsrhonujvdopjieqzexmq; /usr/bin/python /home/vagrant/.ansible/tmp/ansible-tmp-1478178801.29-209769775274469/apt.py; rm -rf "/home/vagrant/.ansible/tmp/ansible-tmp-1478178801.29-209769775274469/" > /dev/null 2>&1'"'"'"'"'"'"'"'"' && sleep 0'"'"'' changed: [192.168.60.4] => { "cache_update_time": 1478170123, "cache_updated": true, "changed": true, "invocation": { "module_args": { "allow_unauthenticated": false, "autoremove": false, "cache_valid_time": 3600, "deb": null, "default_release": null, "dpkg_options": "force-confdef,force-confold", "force": false, "install_recommends": null, "only_upgrade": false, "package": null, "purge": false, "state": "present", 
"update_cache": true, "upgrade": null }, "module_name": "apt" } } PLAY RECAP ********************************************************************* 192.168.60.4 : ok=2 changed=1 unreachable=0 failed=0 vagrant@ans-contrl:~/my$ ansible-playbook test.yml -vvv Using /home/vagrant/my/ansible.cfg as config file PLAYBOOK: test.yml ************************************************************* 1 plays in test.yml PLAY [localvm] ***************************************************************** TASK [setup] ******************************************************************* Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/system/setup.py <192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant <192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.60.4 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1478178871.45-218992397586023 `" && echo ansible-tmp-1478178871.45-218992397586023="` echo $HOME/.ansible/tmp/ansible-tmp-1478178871.45-218992397586023 `" ) && sleep 0'"'"'' <192.168.60.4> PUT /tmp/tmpv9o0e3 TO /home/vagrant/.ansible/tmp/ansible-tmp-1478178871.45-218992397586023/setup.py <192.168.60.4> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r '[192.168.60.4]' <192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant <192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o 
PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.60.4 '/bin/sh -c '"'"'chmod u+x /home/vagrant/.ansible/tmp/ansible-tmp-1478178871.45-218992397586023/ /home/vagrant/.ansible/tmp/ansible-tmp-1478178871.45-218992397586023/setup.py && sleep 0'"'"'' <192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant <192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r -tt 192.168.60.4 '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-lwuttqhzswvnqlvkfcbraivcbuceisuz; /usr/bin/python /home/vagrant/.ansible/tmp/ansible-tmp-1478178871.45-218992397586023/setup.py; rm -rf "/home/vagrant/.ansible/tmp/ansible-tmp-1478178871.45-218992397586023/" > /dev/null 2>&1'"'"'"'"'"'"'"'"' && sleep 0'"'"'' ok: [192.168.60.4] TASK [Only run "update_cache=yes" if the last one is more than 3600 seconds ago] *** task path: /home/vagrant/my/test.yml:6 Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/packaging/os/apt.py <192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant <192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.60.4 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1478178872.37-148384000832646 `" && echo ansible-tmp-1478178872.37-148384000832646="` echo $HOME/.ansible/tmp/ansible-tmp-1478178872.37-148384000832646 `" ) && sleep 0'"'"'' <192.168.60.4> PUT /tmp/tmp3rCfzf TO 
/home/vagrant/.ansible/tmp/ansible-tmp-1478178872.37-148384000832646/apt.py <192.168.60.4> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r '[192.168.60.4]' <192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant <192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.60.4 '/bin/sh -c '"'"'chmod u+x /home/vagrant/.ansible/tmp/ansible-tmp-1478178872.37-148384000832646/ /home/vagrant/.ansible/tmp/ansible-tmp-1478178872.37-148384000832646/apt.py && sleep 0'"'"'' <192.168.60.4> ESTABLISH SSH CONNECTION FOR USER: vagrant <192.168.60.4> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r -tt 192.168.60.4 '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-xhjxsxornuzelhyvlsiksuindfcmjlpx; /usr/bin/python /home/vagrant/.ansible/tmp/ansible-tmp-1478178872.37-148384000832646/apt.py; rm -rf "/home/vagrant/.ansible/tmp/ansible-tmp-1478178872.37-148384000832646/" > /dev/null 2>&1'"'"'"'"'"'"'"'"' && sleep 0'"'"'' changed: [192.168.60.4] => { "cache_update_time": 1478170123, "cache_updated": true, "changed": true, "invocation": { "module_args": { "allow_unauthenticated": false, "autoremove": false, "cache_valid_time": 3600, "deb": null, "default_release": null, "dpkg_options": 
"force-confdef,force-confold", "force": false, "install_recommends": null, "only_upgrade": false, "package": null, "purge": false, "state": "present", "update_cache": true, "upgrade": null }, "module_name": "apt" } } PLAY RECAP ********************************************************************* 192.168.60.4 : ok=2 changed=1 unreachable=0 failed=0 ``` It seems **cache_update_time** didn't update.
main
ansible apt ignore cache valid time value issue type bug report component name apt ansible version ansible config file home vagrant my ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables ansible cfg hostfile hosts os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific os you are running ansible from ubuntu os you are managing ubuntu summary after upgradig to ansible i always get changes in apt module because it ignore cache valid time value steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used test yml hosts localvm become yes tasks name only run update cache yes if the last one is more than seconds ago apt update cache yes cache valid time vagrant ans contrl my ansible playbook test yml vvv expected results update apt cache on first run skip updating cache on second run actual results always changes vagrant ans contrl my ansible playbook test yml vvv using home vagrant my ansible cfg as config file playbook test yml plays in test yml play task using module file usr lib dist packages ansible modules core system setup py establish ssh connection for user vagrant ssh exec ssh c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user vagrant o connecttimeout o controlpath home vagrant ansible cp ansible ssh h p r bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to home vagrant ansible tmp ansible tmp setup py ssh exec sftp b c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o 
passwordauthentication no o user vagrant o connecttimeout o controlpath home vagrant ansible cp ansible ssh h p r establish ssh connection for user vagrant ssh exec ssh c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user vagrant o connecttimeout o controlpath home vagrant ansible cp ansible ssh h p r bin sh c chmod u x home vagrant ansible tmp ansible tmp home vagrant ansible tmp ansible tmp setup py sleep establish ssh connection for user vagrant ssh exec ssh c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user vagrant o connecttimeout o controlpath home vagrant ansible cp ansible ssh h p r tt bin sh c sudo h s n u root bin sh c echo become success bblyfpmawwxwihkyhdzgsrwimfkjlzuk usr bin python home vagrant ansible tmp ansible tmp setup py rm rf home vagrant ansible tmp ansible tmp dev null sleep ok task task path home vagrant my test yml using module file usr lib dist packages ansible modules core packaging os apt py establish ssh connection for user vagrant ssh exec ssh c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user vagrant o connecttimeout o controlpath home vagrant ansible cp ansible ssh h p r bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to home vagrant ansible tmp ansible tmp apt py ssh exec sftp b c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user vagrant o connecttimeout o controlpath home vagrant ansible cp ansible ssh h p r establish ssh connection for 
user vagrant ssh exec ssh c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user vagrant o connecttimeout o controlpath home vagrant ansible cp ansible ssh h p r bin sh c chmod u x home vagrant ansible tmp ansible tmp home vagrant ansible tmp ansible tmp apt py sleep establish ssh connection for user vagrant ssh exec ssh c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user vagrant o connecttimeout o controlpath home vagrant ansible cp ansible ssh h p r tt bin sh c sudo h s n u root bin sh c echo become success soyskgemfitdsrhonujvdopjieqzexmq usr bin python home vagrant ansible tmp ansible tmp apt py rm rf home vagrant ansible tmp ansible tmp dev null sleep changed cache update time cache updated true changed true invocation module args allow unauthenticated false autoremove false cache valid time deb null default release null dpkg options force confdef force confold force false install recommends null only upgrade false package null purge false state present update cache true upgrade null module name apt play recap ok changed unreachable failed vagrant ans contrl my ansible playbook test yml vvv using home vagrant my ansible cfg as config file playbook test yml plays in test yml play task using module file usr lib dist packages ansible modules core system setup py establish ssh connection for user vagrant ssh exec ssh c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user vagrant o connecttimeout o controlpath home vagrant ansible cp ansible ssh h p r bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to home 
vagrant ansible tmp ansible tmp setup py ssh exec sftp b c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user vagrant o connecttimeout o controlpath home vagrant ansible cp ansible ssh h p r establish ssh connection for user vagrant ssh exec ssh c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user vagrant o connecttimeout o controlpath home vagrant ansible cp ansible ssh h p r bin sh c chmod u x home vagrant ansible tmp ansible tmp home vagrant ansible tmp ansible tmp setup py sleep establish ssh connection for user vagrant ssh exec ssh c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user vagrant o connecttimeout o controlpath home vagrant ansible cp ansible ssh h p r tt bin sh c sudo h s n u root bin sh c echo become success lwuttqhzswvnqlvkfcbraivcbuceisuz usr bin python home vagrant ansible tmp ansible tmp setup py rm rf home vagrant ansible tmp ansible tmp dev null sleep ok task task path home vagrant my test yml using module file usr lib dist packages ansible modules core packaging os apt py establish ssh connection for user vagrant ssh exec ssh c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user vagrant o connecttimeout o controlpath home vagrant ansible cp ansible ssh h p r bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to home vagrant ansible tmp ansible tmp apt py ssh exec sftp b c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o 
preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user vagrant o connecttimeout o controlpath home vagrant ansible cp ansible ssh h p r establish ssh connection for user vagrant ssh exec ssh c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user vagrant o connecttimeout o controlpath home vagrant ansible cp ansible ssh h p r bin sh c chmod u x home vagrant ansible tmp ansible tmp home vagrant ansible tmp ansible tmp apt py sleep establish ssh connection for user vagrant ssh exec ssh c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user vagrant o connecttimeout o controlpath home vagrant ansible cp ansible ssh h p r tt bin sh c sudo h s n u root bin sh c echo become success xhjxsxornuzelhyvlsiksuindfcmjlpx usr bin python home vagrant ansible tmp ansible tmp apt py rm rf home vagrant ansible tmp ansible tmp dev null sleep changed cache update time cache updated true changed true invocation module args allow unauthenticated false autoremove false cache valid time deb null default release null dpkg options force confdef force confold force false install recommends null only upgrade false package null purge false state present update cache true upgrade null module name apt play recap ok changed unreachable failed it seems cache update time didn t updated
1
273,306
23,745,130,865
IssuesEvent
2022-08-31 15:21:02
Kong/gateway-operator
https://api.github.com/repos/Kong/gateway-operator
closed
make run target not working when running on GKE
bug area/tests priority/low
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior `make run` target seem to not work when running GKE ```console $ make run /Users/jarek.mroz@konghq.com/git/gateway-operator/bin/controller-gen rbac:roleName=manager-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases /Users/jarek.mroz@konghq.com/git/gateway-operator/bin/controller-gen object:headerFile="hack/boilerplate.go.txt" paths="./..." go fmt ./... go vet ./... /Users/jarek.mroz@konghq.com/git/gateway-operator/bin/kustomize build config/crd | kubectl apply -f - customresourcedefinition.apiextensions.k8s.io/controlplanes.gateway-operator.konghq.com created customresourcedefinition.apiextensions.k8s.io/dataplanes.gateway-operator.konghq.com created customresourcedefinition.apiextensions.k8s.io/gatewayconfigurations.gateway-operator.konghq.com created kubectl kustomize https://github.com/kubernetes-sigs/gateway-api.git/config/crd?ref=main | kubectl apply -f - customresourcedefinition.apiextensions.k8s.io/gatewayclasses.gateway.networking.k8s.io created customresourcedefinition.apiextensions.k8s.io/gateways.gateway.networking.k8s.io created customresourcedefinition.apiextensions.k8s.io/httproutes.gateway.networking.k8s.io created CONTROLLER_DEVELOPMENT_MODE=true go run ./main.go --no-leader-election INFO: development mode has been enabled INFO: leader election has been disabled 1.65732348903128e+09 ERROR Failed to get API Group-Resources {"error": "no Auth Provider found for name \"gcp\""} sigs.k8s.io/controller-runtime/pkg/cluster.New /Users/jarek.mroz@konghq.com/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.12.3/pkg/cluster/cluster.go:160 sigs.k8s.io/controller-runtime/pkg/manager.New /Users/jarek.mroz@konghq.com/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.12.3/pkg/manager/manager.go:322 github.com/kong/gateway-operator/internal/manager.Run /Users/jarek.mroz@konghq.com/git/gateway-operator/internal/manager/run.go:84 main.main 
/Users/jarek.mroz@konghq.com/git/gateway-operator/main.go:71 runtime.main /opt/homebrew/Cellar/go/1.18.3/libexec/src/runtime/proc.go:250 unable to start manager: no Auth Provider found for name "gcp" exit status 1 make: *** [run] Error 1 ``` ### Expected Behavior Should work. ### Steps To Reproduce ```markdown 1. Setup google cloud cluster 2. Run `make run` command ``` ### Kong Ingress Controller version _No response_ ### Kubernetes version _No response_ ### Anything else? _No response_
1.0
make run target not working when running on GKE - ### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior `make run` target seem to not work when running GKE ```console $ make run /Users/jarek.mroz@konghq.com/git/gateway-operator/bin/controller-gen rbac:roleName=manager-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases /Users/jarek.mroz@konghq.com/git/gateway-operator/bin/controller-gen object:headerFile="hack/boilerplate.go.txt" paths="./..." go fmt ./... go vet ./... /Users/jarek.mroz@konghq.com/git/gateway-operator/bin/kustomize build config/crd | kubectl apply -f - customresourcedefinition.apiextensions.k8s.io/controlplanes.gateway-operator.konghq.com created customresourcedefinition.apiextensions.k8s.io/dataplanes.gateway-operator.konghq.com created customresourcedefinition.apiextensions.k8s.io/gatewayconfigurations.gateway-operator.konghq.com created kubectl kustomize https://github.com/kubernetes-sigs/gateway-api.git/config/crd?ref=main | kubectl apply -f - customresourcedefinition.apiextensions.k8s.io/gatewayclasses.gateway.networking.k8s.io created customresourcedefinition.apiextensions.k8s.io/gateways.gateway.networking.k8s.io created customresourcedefinition.apiextensions.k8s.io/httproutes.gateway.networking.k8s.io created CONTROLLER_DEVELOPMENT_MODE=true go run ./main.go --no-leader-election INFO: development mode has been enabled INFO: leader election has been disabled 1.65732348903128e+09 ERROR Failed to get API Group-Resources {"error": "no Auth Provider found for name \"gcp\""} sigs.k8s.io/controller-runtime/pkg/cluster.New /Users/jarek.mroz@konghq.com/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.12.3/pkg/cluster/cluster.go:160 sigs.k8s.io/controller-runtime/pkg/manager.New /Users/jarek.mroz@konghq.com/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.12.3/pkg/manager/manager.go:322 github.com/kong/gateway-operator/internal/manager.Run 
/Users/jarek.mroz@konghq.com/git/gateway-operator/internal/manager/run.go:84 main.main /Users/jarek.mroz@konghq.com/git/gateway-operator/main.go:71 runtime.main /opt/homebrew/Cellar/go/1.18.3/libexec/src/runtime/proc.go:250 unable to start manager: no Auth Provider found for name "gcp" exit status 1 make: *** [run] Error 1 ``` ### Expected Behavior Should work. ### Steps To Reproduce ```markdown 1. Setup google cloud cluster 2. Run `make run` command ``` ### Kong Ingress Controller version _No response_ ### Kubernetes version _No response_ ### Anything else? _No response_
non_main
make run target not working when running on gke is there an existing issue for this i have searched the existing issues current behavior make run target seem to not work when running gke console make run users jarek mroz konghq com git gateway operator bin controller gen rbac rolename manager role crd webhook paths output crd artifacts config config crd bases users jarek mroz konghq com git gateway operator bin controller gen object headerfile hack boilerplate go txt paths go fmt go vet users jarek mroz konghq com git gateway operator bin kustomize build config crd kubectl apply f customresourcedefinition apiextensions io controlplanes gateway operator konghq com created customresourcedefinition apiextensions io dataplanes gateway operator konghq com created customresourcedefinition apiextensions io gatewayconfigurations gateway operator konghq com created kubectl kustomize kubectl apply f customresourcedefinition apiextensions io gatewayclasses gateway networking io created customresourcedefinition apiextensions io gateways gateway networking io created customresourcedefinition apiextensions io httproutes gateway networking io created controller development mode true go run main go no leader election info development mode has been enabled info leader election has been disabled error failed to get api group resources error no auth provider found for name gcp sigs io controller runtime pkg cluster new users jarek mroz konghq com go pkg mod sigs io controller runtime pkg cluster cluster go sigs io controller runtime pkg manager new users jarek mroz konghq com go pkg mod sigs io controller runtime pkg manager manager go github com kong gateway operator internal manager run users jarek mroz konghq com git gateway operator internal manager run go main main users jarek mroz konghq com git gateway operator main go runtime main opt homebrew cellar go libexec src runtime proc go unable to start manager no auth provider found for name gcp exit status make error expected 
behavior should work steps to reproduce markdown setup google cloud cluster run make run command kong ingress controller version no response kubernetes version no response anything else no response
0
3,911
17,466,074,614
IssuesEvent
2021-08-06 17:01:50
synthesized-io/fairlens
https://api.github.com/repos/synthesized-io/fairlens
closed
Updated README and documentation
category:repository-maintainance
What things do we have left to do here? Updated: - [x] Write short 2-3 tutorials based on either COMPAS, German Credit, Adult, or LSAC datasets. - [x] Include a fairness scorer use case in README. - [x] Polishing overview and quickstart. - [x] Include contribution guides in docs
True
Updated README and documentation - What things do we have left to do here? Updated: - [x] Write short 2-3 tutorials based on either COMPAS, German Credit, Adult, or LSAC datasets. - [x] Include a fairness scorer use case in README. - [x] Polishing overview and quickstart. - [x] Include contribution guides in docs
main
updated readme and documentation what things do we have left to do here updated write short tutorials based on either compas german credit adult or lsac datasets include a fairness scorer use case in readme polishing overview and quickstart include contribution guides in docs
1
49,228
13,445,714,683
IssuesEvent
2020-09-08 11:52:12
chaitanya00/aem-wknd
https://api.github.com/repos/chaitanya00/aem-wknd
opened
CVE-2018-16490 (High) detected in mpath-0.1.1.tgz
security vulnerability
## CVE-2018-16490 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mpath-0.1.1.tgz</b></p></summary> <p>{G,S}et object values using MongoDB path notation</p> <p>Library home page: <a href="https://registry.npmjs.org/mpath/-/mpath-0.1.1.tgz">https://registry.npmjs.org/mpath/-/mpath-0.1.1.tgz</a></p> <p>Path to dependency file: /tmp/ws-scm/aem-wknd/package.json</p> <p>Path to vulnerable library: /tmp/ws-scm/aem-wknd/node_modules/mpath/package.json</p> <p> Dependency Hierarchy: - mongoose-4.2.4.tgz (Root Library) - :x: **mpath-0.1.1.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/chaitanya00/aem-wknd/commit/3f4c2902a45eb04bc7915c408df14545aa90511c">3f4c2902a45eb04bc7915c408df14545aa90511c</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A prototype pollution vulnerability was found in module mpath <0.5.1 that allows an attacker to inject arbitrary properties onto Object.prototype. <p>Publish Date: 2019-02-01 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-16490>CVE-2018-16490</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: High - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://hackerone.com/reports/390860">https://hackerone.com/reports/390860</a></p> <p>Release Date: 2019-02-01</p> <p>Fix Resolution: 0.5.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
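To make the advisory's vulnerability class concrete: a MongoDB-style dotted-path setter that does not validate segments lets an attacker-controlled path write onto `Object.prototype`. The `unsafeSet` and `safeSet` functions below are hypothetical sketches in the spirit of mpath's API, not its real implementation, and the `FORBIDDEN` denylist is an assumption about how patched releases filter path parts:

```javascript
// Hypothetical dotted-path setter: walks the path, creating intermediate
// objects, then assigns the value at the leaf. With no segment validation,
// a path starting with __proto__ lands on Object.prototype (prototype pollution).
function unsafeSet(obj, path, value) {
  const parts = path.split('.');
  let cur = obj;
  for (let i = 0; i < parts.length - 1; i++) {
    if (typeof cur[parts[i]] !== 'object' || cur[parts[i]] === null) {
      cur[parts[i]] = {};
    }
    cur = cur[parts[i]];
  }
  cur[parts[parts.length - 1]] = value;
}

// Defensive variant: refuse reserved segments before walking the path.
// (Assumed mitigation style; the exact mpath 0.5.1 patch is not shown here.)
const FORBIDDEN = new Set(['__proto__', 'constructor', 'prototype']);

function safeSet(obj, path, value) {
  if (path.split('.').some((p) => FORBIDDEN.has(p))) {
    throw new Error('refusing reserved path segment in: ' + path);
  }
  unsafeSet(obj, path, value);
}
```

Rejecting `__proto__`, `constructor`, and `prototype` closes the traversal route into the prototype chain, which is the practical effect the upgrade provides for callers passing untrusted paths.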
True
CVE-2018-16490 (High) detected in mpath-0.1.1.tgz - ## CVE-2018-16490 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mpath-0.1.1.tgz</b></p></summary> <p>{G,S}et object values using MongoDB path notation</p> <p>Library home page: <a href="https://registry.npmjs.org/mpath/-/mpath-0.1.1.tgz">https://registry.npmjs.org/mpath/-/mpath-0.1.1.tgz</a></p> <p>Path to dependency file: /tmp/ws-scm/aem-wknd/package.json</p> <p>Path to vulnerable library: /tmp/ws-scm/aem-wknd/node_modules/mpath/package.json</p> <p> Dependency Hierarchy: - mongoose-4.2.4.tgz (Root Library) - :x: **mpath-0.1.1.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/chaitanya00/aem-wknd/commit/3f4c2902a45eb04bc7915c408df14545aa90511c">3f4c2902a45eb04bc7915c408df14545aa90511c</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A prototype pollution vulnerability was found in module mpath <0.5.1 that allows an attacker to inject arbitrary properties onto Object.prototype. <p>Publish Date: 2019-02-01 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-16490>CVE-2018-16490</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: High - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://hackerone.com/reports/390860">https://hackerone.com/reports/390860</a></p> <p>Release Date: 2019-02-01</p> <p>Fix Resolution: 0.5.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_main
cve high detected in mpath tgz cve high severity vulnerability vulnerable library mpath tgz g s et object values using mongodb path notation library home page a href path to dependency file tmp ws scm aem wknd package json path to vulnerable library tmp ws scm aem wknd node modules mpath package json dependency hierarchy mongoose tgz root library x mpath tgz vulnerable library found in head commit a href vulnerability details a prototype pollution vulnerability was found in module mpath that allows an attacker to inject arbitrary properties onto object prototype publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
313,959
26,965,363,115
IssuesEvent
2023-02-08 21:50:13
bcgov/zeva
https://api.github.com/repos/bcgov/zeva
closed
ZEVA - BCeID user unable to access their report once a reassessment is issued
Bug High Tested :)
**Describe the Bug** For a BCeID user, when they have been issued a reassessment by government they can no longer access their associated supplementary report anymore - the link to the page appears to be being changed. **Expected Behaviour** A BCEID ZEVA user should always be able to access a supplementary report that they have created. **Actual Behaviour** The BCeID ZEVA user cannot access their supplementary report after a reassessment has been issued. **Implications** The BCeID user is denied access to important information in the app that they should have access to. **Steps To Reproduce** Steps to reproduce the behaviour: User/Role: BCeID ZEVA User 1. After government has issued a reassessment to a supplier 2. Go to Compliance Reporting 3. Click on Reassessment and then click on the tab to view the supplementary report 4. See that you are unable to view the supplementary report. **Acceptance Criteria** Given that I am a BCeID user, when I am issued a reassessment, then I should still be able to view the supplementary report associated with the reassessment. **Development Checklist** (1) Maybe a frontend issue; see SupplementaryContainer.js first
1.0
ZEVA - BCeID user unable to access their report once a reassessment is issued - **Describe the Bug** For a BCeID user, when they have been issued a reassessment by government they can no longer access their associated supplementary report anymore - the link to the page appears to be being changed. **Expected Behaviour** A BCEID ZEVA user should always be able to access a supplementary report that they have created. **Actual Behaviour** The BCeID ZEVA user cannot access their supplementary report after a reassessment has been issued. **Implications** The BCeID user is denied access to important information in the app that they should have access to. **Steps To Reproduce** Steps to reproduce the behaviour: User/Role: BCeID ZEVA User 1. After government has issued a reassessment to a supplier 2. Go to Compliance Reporting 3. Click on Reassessment and then click on the tab to view the supplementary report 4. See that you are unable to view the supplementary report. **Acceptance Criteria** Given that I am a BCeID user, when I am issued a reassessment, then I should still be able to view the supplementary report associated with the reassessment. **Development Checklist** (1) Maybe a frontend issue; see SupplementaryContainer.js first
non_main
zeva bceid user unable to access their report once a reassessment is issued describe the bug for a bceid user when they have been issued a reassessment by government they can no longer access their associated supplementary report anymore the link to the page appears to be being changed expected behaviour a bceid zeva user should always be able to access a supplementary report that they have created actual behaviour the bceid zeva user cannot access their supplementary report after a reassessment has been issued implications the bceid user is denied access to important information in the app that they should have access to steps to reproduce steps to reproduce the behaviour user role bceid zeva user after government has issued a reassessment to a supplier go to compliance reporting click on reassessment and then click on the tab to view the supplementary report see that you are unable to view the supplementary report acceptance criteria given that i am a bceid user when i am issued a reassessment then i should still be able to view the supplementary report associated with the reassessment development checklist maybe a frontend issue see supplementarycontainer js first
0
542
3,956,186,647
IssuesEvent
2016-04-30 01:45:43
citp/coniks-ref-implementation
https://api.github.com/repos/citp/coniks-ref-implementation
closed
Refactor server
maintainability server
Modularize the server a bit more and update any remaining terminology from older versions of the paper.
True
Refactor server - Modularize the server a bit more and update any remaining terminology from older versions of the paper.
main
refactor server modularize the server a bit more and update any remaining terminology from older versions of the paper
1
5,213
26,464,341,680
IssuesEvent
2023-01-16 21:18:26
bazelbuild/intellij
https://api.github.com/repos/bazelbuild/intellij
closed
Flag --incompatible_disable_starlark_host_transitions will break IntelliJ Plugin Google in Bazel 7.0
type: bug product: IntelliJ topic: bazel awaiting-maintainer
Incompatible flag `--incompatible_disable_starlark_host_transitions` will be enabled by default in the next major release (Bazel 7.0), thus breaking IntelliJ Plugin Google. Please migrate to fix this and unblock the flip of this flag. The flag is documented here: [bazelbuild/bazel#17032](https://github.com/bazelbuild/bazel/issues/17032). Please check the following CI builds for build and test results: - [Ubuntu 18.04 OpenJDK 11](https://buildkite.com/bazel/bazelisk-plus-incompatible-flags/builds/1365#0185154a-ec33-4c2b-a275-f8aa54ada99f) - [Ubuntu 18.04 OpenJDK 11](https://buildkite.com/bazel/bazelisk-plus-incompatible-flags/builds/1365#0185154a-ec30-4823-b959-41c7eec9a0e9) - [Ubuntu 18.04 OpenJDK 11](https://buildkite.com/bazel/bazelisk-plus-incompatible-flags/builds/1365#0185154a-ec3c-49d1-b4af-3c286c89b3a4) - [Ubuntu 18.04 OpenJDK 11](https://buildkite.com/bazel/bazelisk-plus-incompatible-flags/builds/1365#0185154a-ec39-4a25-afea-07841244a923) - [Ubuntu 18.04 OpenJDK 11](https://buildkite.com/bazel/bazelisk-plus-incompatible-flags/builds/1365#0185154a-ec3f-4c95-8328-9b5d8ef7ddf7) Never heard of incompatible flags before? We have [documentation](https://docs.bazel.build/versions/master/backward-compatibility.html) that explains everything. If you have any questions, please file an issue in https://github.com/bazelbuild/continuous-integration.
True
Flag --incompatible_disable_starlark_host_transitions will break IntelliJ Plugin Google in Bazel 7.0 - Incompatible flag `--incompatible_disable_starlark_host_transitions` will be enabled by default in the next major release (Bazel 7.0), thus breaking IntelliJ Plugin Google. Please migrate to fix this and unblock the flip of this flag. The flag is documented here: [bazelbuild/bazel#17032](https://github.com/bazelbuild/bazel/issues/17032). Please check the following CI builds for build and test results: - [Ubuntu 18.04 OpenJDK 11](https://buildkite.com/bazel/bazelisk-plus-incompatible-flags/builds/1365#0185154a-ec33-4c2b-a275-f8aa54ada99f) - [Ubuntu 18.04 OpenJDK 11](https://buildkite.com/bazel/bazelisk-plus-incompatible-flags/builds/1365#0185154a-ec30-4823-b959-41c7eec9a0e9) - [Ubuntu 18.04 OpenJDK 11](https://buildkite.com/bazel/bazelisk-plus-incompatible-flags/builds/1365#0185154a-ec3c-49d1-b4af-3c286c89b3a4) - [Ubuntu 18.04 OpenJDK 11](https://buildkite.com/bazel/bazelisk-plus-incompatible-flags/builds/1365#0185154a-ec39-4a25-afea-07841244a923) - [Ubuntu 18.04 OpenJDK 11](https://buildkite.com/bazel/bazelisk-plus-incompatible-flags/builds/1365#0185154a-ec3f-4c95-8328-9b5d8ef7ddf7) Never heard of incompatible flags before? We have [documentation](https://docs.bazel.build/versions/master/backward-compatibility.html) that explains everything. If you have any questions, please file an issue in https://github.com/bazelbuild/continuous-integration.
main
flag incompatible disable starlark host transitions will break intellij plugin google in bazel incompatible flag incompatible disable starlark host transitions will be enabled by default in the next major release bazel thus breaking intellij plugin google please migrate to fix this and unblock the flip of this flag the flag is documented here please check the following ci builds for build and test results never heard of incompatible flags before we have that explains everything if you have any questions please file an issue in
1
733,425
25,305,604,245
IssuesEvent
2022-11-17 13:58:33
googleapis/java-spanner-jdbc
https://api.github.com/repos/googleapis/java-spanner-jdbc
closed
JVM crash on version 2.5.3 and 2.5.4
priority: p2 api: spanner
Hi guys, I raised https://github.com/googleapis/java-spanner-jdbc/issues/657 and you very promptly got onto the issue and upgraded dependencies to resolve the vulnerability. Now there's a very weird thing happening. Using google-cloud-spanner-jdbc 2.5.3 or 2.5.4 causes a big old JVM crash. 2.5.2 works fine. #### Environment details - Alpine 3.14.3 - Corretto JDK OpenJDK Runtime Environment Corretto-11.0.13.8.1 (build 11.0.13+8-LTS) - Running containerised in GCP Cloud Run - Error in google-cloud-spanner-jdbc 2.5.3 + 2.5.4 #### Stacktrace - (Condensed because it's huge. Extracted from err file dump) ``` "Internal exceptions (20 events):" "Event: 9.471 Thread 0x00003e3f01312800 Exception <a 'java/security/PrivilegedActionException'{0x00000007eb834d70}> (0x00000007eb834d70) thrown at [src/hotspot/share/prims/jvm.cpp, line 1304]" "Event: 9.471 Thread 0x00003e3f01312800 Exception <a 'sun/nio/fs/UnixException'{0x00000007eb835ee0}> (0x00000007eb835ee0) thrown at [src/hotspot/share/prims/jni.cpp, line 616]" "Event: 9.471 Thread 0x00003e3f01312800 Exception <a 'java/security/PrivilegedActionException'{0x00000007eb836ce8}> (0x00000007eb836ce8) thrown at [src/hotspot/share/prims/jvm.cpp, line 1304]" "Event: 9.476 Thread 0x00003e3f01312800 Exception <a 'java/lang/NoSuchMethodError'{0x00000007eb8b5508}: 'double java.lang.invoke.DirectMethodHandle$Holder.invokeStatic(java.lang.Object, java.lang.Object)'> (0x00000007eb8b5508) thrown at [src/hotspot/share/interpreter/linkResolver.cpp, line 772]" "Event: 9.478 Thread 0x00003e3f01312800 Exception <a 'java/lang/NoSuchMethodError'{0x00000007eb8bdfe0}: 'void java.lang.invoke.DirectMethodHandle$Holder.invokeStatic(java.lang.Object, java.lang.Object, double)'> (0x00000007eb8bdfe0) thrown at [src/hotspot/share/interpreter/linkResolver.cpp, line 772]" "Event: 9.479 Thread 0x00003e3f01312800 Exception <a 'java/lang/NoSuchMethodError'{0x00000007eb8c9058}: 'long java.lang.invoke.DirectMethodHandle$Holder.invokeVirtual(java.lang.Object, 
java.lang.Object)'> (0x00000007eb8c9058) thrown at [src/hotspot/share/interpreter/linkResolver.cpp, line 772]" "Event: 9.492 Thread 0x00003e3f01312800 Exception <a 'java/lang/NoSuchMethodError'{0x00000007eb6bcf88}: 'java.lang.Object java.lang.invoke.DirectMethodHandle$Holder.invokeSpecial(java.lang.Object, java.lang.Object, java.lang.Object, java.lang.Object, java.lang.Object, java.lang.Object, java.lang.Object, java.lang.Object, java.lang.Object)'> (0x00000007eb6bcf88) thrown at [src/hotspot/share/interpreter/linkResolver.cpp, line 772]" "Event: 9.492 Thread 0x00003e3f01312800 Exception <a 'java/lang/NoSuchMethodError'{0x00000007eb6c1428}: 'java.lang.Object java.lang.invoke.Invokers$Holder.linkToTargetMethod(java.lang.Object, java.lang.Object, java.lang.Object, java.lang.Object, java.lang.Object, java.lang.Object, java.lang.Object, java.lang.Object)'> (0x00000007eb6c1428) thrown at [src/hotspot/share/interpreter/linkResolver.cpp, line 772]" "Event: 9.510 Thread 0x00003e3f01312800 Exception <a 'java/lang/NoSuchMethodError'{0x00000007eb46abb0}: 'double java.lang.invoke.DirectMethodHandle$Holder.invokeInterface(java.lang.Object, java.lang.Object)'> (0x00000007eb46abb0) thrown at [src/hotspot/share/interpreter/linkResolver.cpp, line 772]" "Event: 9.510 Thread 0x00003e3f01312800 Exception <a 'java/lang/NoSuchMethodError'{0x00000007eb4709b8}: 'double java.lang.invoke.DirectMethodHandle$Holder.invokeSpecial(java.lang.Object, java.lang.Object, java.lang.Object)'> (0x00000007eb4709b8) thrown at [src/hotspot/share/interpreter/linkResolver.cpp, line 772]" "Event: 9.639 Thread 0x00003e3f01312800 Exception <a 'java/io/FileNotFoundException'{0x00000007ea3c07a0}> (0x00000007ea3c07a0) thrown at [src/hotspot/share/prims/jni.cpp, line 616]" "Event: 9.701 Thread 0x00003e3f01312800 Implicit null exception at 0x00003e3ef0329edd to 0x00003e3ef032a130" "Event: 10.237 Thread 0x00003e3edd81d800 Exception <a 'java/lang/NoSuchMethodError'{0x00000007ff321008}: 'long 
java.lang.invoke.DirectMethodHandle$Holder.invokeStatic(java.lang.Object, long, long)'> (0x00000007ff321008) thrown at [src/hotspot/share/interpreter/linkResolver.cpp, line 772]" "Event: 10.866 Thread 0x00003e3f01312800 Exception <a 'java/lang/ClassNotFoundException'{0x00000007f74d4550}: org/springframework/boot/loader/http/Handler> (0x00000007f74d4550) thrown at [src/hotspot/share/classfile/systemDictionary.cpp, line 231]" "Event: 11.209 Thread 0x00003e3f01312800 Implicit null exception at 0x00003e3ef02e0cec to 0x00003e3ef02e0d98" "Event: 11.373 Thread 0x00003e3f01312800 Exception <a 'java/lang/ClassNotFoundException'{0x00000007f3046220}: org/springframework/boot/loader/https/Handler> (0x00000007f3046220) thrown at [src/hotspot/share/classfile/systemDictionary.cpp, line 231]" "Event: 11.602 Thread 0x00003e3f01312800 Exception <a 'java/io/FileNotFoundException'{0x00000007f0cc2fa8}> (0x00000007f0cc2fa8) thrown at [src/hotspot/share/prims/jni.cpp, line 616]" "Event: 11.603 Thread 0x00003e3f01312800 Exception <a 'java/lang/ClassNotFoundException'{0x00000007f0cd03c8}: sun/misc/SharedSecrets> (0x00000007f0cd03c8) thrown at [src/hotspot/share/classfile/systemDictionary.cpp, line 231]" "Event: 11.742 Thread 0x00003e3f01312800 Exception <a 'java/lang/UnsatisfiedLinkError'{0x00000007ef9569b0}: 'int io.grpc.netty.shaded.io.netty.channel.epoll.Native.offsetofEpollData()'> (0x00000007ef9569b0) thrown at [src/hotspot/share/prims/nativeLookup.cpp, line 528]" "Event: 11.746 Thread 0x00003e3f01312800 Exception <a 'java/lang/reflect/InvocationTargetException'{0x00000007ef9c55a8}> (0x00000007ef9c55a8) thrown at [src/hotspot/share/runtime/reflection.cpp, line 1245]" [...] The crash happened outside the Java Virtual Machine in native code. [...] Uncaught signal: 6, pid=1, tid=2, fault_addr=0. [...] Container terminated on signal 6.
1.0
JVM crash on version 2.5.3 and 2.5.4 - Hi guys, I raised https://github.com/googleapis/java-spanner-jdbc/issues/657 and you very promptly got onto the issue and upgraded dependencies to resolve the vulnerability. Now there's a very weird thing happening. Using google-cloud-spanner-jdbc 2.5.3 or 2.5.4 causes a big old JVM crash. 2.5.2 works fine. #### Environment details - Alpine 3.14.3 - Corretto JDK OpenJDK Runtime Environment Corretto-11.0.13.8.1 (build 11.0.13+8-LTS) - Running containerised in GCP Cloud Run - Error in google-cloud-spanner-jdbc 2.5.3 + 2.5.4 #### Stacktrace - (Condensed because it's huge. Extracted from err file dump) ``` "Internal exceptions (20 events):" "Event: 9.471 Thread 0x00003e3f01312800 Exception <a 'java/security/PrivilegedActionException'{0x00000007eb834d70}> (0x00000007eb834d70) thrown at [src/hotspot/share/prims/jvm.cpp, line 1304]" "Event: 9.471 Thread 0x00003e3f01312800 Exception <a 'sun/nio/fs/UnixException'{0x00000007eb835ee0}> (0x00000007eb835ee0) thrown at [src/hotspot/share/prims/jni.cpp, line 616]" "Event: 9.471 Thread 0x00003e3f01312800 Exception <a 'java/security/PrivilegedActionException'{0x00000007eb836ce8}> (0x00000007eb836ce8) thrown at [src/hotspot/share/prims/jvm.cpp, line 1304]" "Event: 9.476 Thread 0x00003e3f01312800 Exception <a 'java/lang/NoSuchMethodError'{0x00000007eb8b5508}: 'double java.lang.invoke.DirectMethodHandle$Holder.invokeStatic(java.lang.Object, java.lang.Object)'> (0x00000007eb8b5508) thrown at [src/hotspot/share/interpreter/linkResolver.cpp, line 772]" "Event: 9.478 Thread 0x00003e3f01312800 Exception <a 'java/lang/NoSuchMethodError'{0x00000007eb8bdfe0}: 'void java.lang.invoke.DirectMethodHandle$Holder.invokeStatic(java.lang.Object, java.lang.Object, double)'> (0x00000007eb8bdfe0) thrown at [src/hotspot/share/interpreter/linkResolver.cpp, line 772]" "Event: 9.479 Thread 0x00003e3f01312800 Exception <a 'java/lang/NoSuchMethodError'{0x00000007eb8c9058}: 'long 
java.lang.invoke.DirectMethodHandle$Holder.invokeVirtual(java.lang.Object, java.lang.Object)'> (0x00000007eb8c9058) thrown at [src/hotspot/share/interpreter/linkResolver.cpp, line 772]" "Event: 9.492 Thread 0x00003e3f01312800 Exception <a 'java/lang/NoSuchMethodError'{0x00000007eb6bcf88}: 'java.lang.Object java.lang.invoke.DirectMethodHandle$Holder.invokeSpecial(java.lang.Object, java.lang.Object, java.lang.Object, java.lang.Object, java.lang.Object, java.lang.Object, java.lang.Object, java.lang.Object, java.lang.Object)'> (0x00000007eb6bcf88) thrown at [src/hotspot/share/interpreter/linkResolver.cpp, line 772]" "Event: 9.492 Thread 0x00003e3f01312800 Exception <a 'java/lang/NoSuchMethodError'{0x00000007eb6c1428}: 'java.lang.Object java.lang.invoke.Invokers$Holder.linkToTargetMethod(java.lang.Object, java.lang.Object, java.lang.Object, java.lang.Object, java.lang.Object, java.lang.Object, java.lang.Object, java.lang.Object)'> (0x00000007eb6c1428) thrown at [src/hotspot/share/interpreter/linkResolver.cpp, line 772]" "Event: 9.510 Thread 0x00003e3f01312800 Exception <a 'java/lang/NoSuchMethodError'{0x00000007eb46abb0}: 'double java.lang.invoke.DirectMethodHandle$Holder.invokeInterface(java.lang.Object, java.lang.Object)'> (0x00000007eb46abb0) thrown at [src/hotspot/share/interpreter/linkResolver.cpp, line 772]" "Event: 9.510 Thread 0x00003e3f01312800 Exception <a 'java/lang/NoSuchMethodError'{0x00000007eb4709b8}: 'double java.lang.invoke.DirectMethodHandle$Holder.invokeSpecial(java.lang.Object, java.lang.Object, java.lang.Object)'> (0x00000007eb4709b8) thrown at [src/hotspot/share/interpreter/linkResolver.cpp, line 772]" "Event: 9.639 Thread 0x00003e3f01312800 Exception <a 'java/io/FileNotFoundException'{0x00000007ea3c07a0}> (0x00000007ea3c07a0) thrown at [src/hotspot/share/prims/jni.cpp, line 616]" "Event: 9.701 Thread 0x00003e3f01312800 Implicit null exception at 0x00003e3ef0329edd to 0x00003e3ef032a130" "Event: 10.237 Thread 0x00003e3edd81d800 Exception <a 
'java/lang/NoSuchMethodError'{0x00000007ff321008}: 'long java.lang.invoke.DirectMethodHandle$Holder.invokeStatic(java.lang.Object, long, long)'> (0x00000007ff321008) thrown at [src/hotspot/share/interpreter/linkResolver.cpp, line 772]" "Event: 10.866 Thread 0x00003e3f01312800 Exception <a 'java/lang/ClassNotFoundException'{0x00000007f74d4550}: org/springframework/boot/loader/http/Handler> (0x00000007f74d4550) thrown at [src/hotspot/share/classfile/systemDictionary.cpp, line 231]" "Event: 11.209 Thread 0x00003e3f01312800 Implicit null exception at 0x00003e3ef02e0cec to 0x00003e3ef02e0d98" "Event: 11.373 Thread 0x00003e3f01312800 Exception <a 'java/lang/ClassNotFoundException'{0x00000007f3046220}: org/springframework/boot/loader/https/Handler> (0x00000007f3046220) thrown at [src/hotspot/share/classfile/systemDictionary.cpp, line 231]" "Event: 11.602 Thread 0x00003e3f01312800 Exception <a 'java/io/FileNotFoundException'{0x00000007f0cc2fa8}> (0x00000007f0cc2fa8) thrown at [src/hotspot/share/prims/jni.cpp, line 616]" "Event: 11.603 Thread 0x00003e3f01312800 Exception <a 'java/lang/ClassNotFoundException'{0x00000007f0cd03c8}: sun/misc/SharedSecrets> (0x00000007f0cd03c8) thrown at [src/hotspot/share/classfile/systemDictionary.cpp, line 231]" "Event: 11.742 Thread 0x00003e3f01312800 Exception <a 'java/lang/UnsatisfiedLinkError'{0x00000007ef9569b0}: 'int io.grpc.netty.shaded.io.netty.channel.epoll.Native.offsetofEpollData()'> (0x00000007ef9569b0) thrown at [src/hotspot/share/prims/nativeLookup.cpp, line 528]" "Event: 11.746 Thread 0x00003e3f01312800 Exception <a 'java/lang/reflect/InvocationTargetException'{0x00000007ef9c55a8}> (0x00000007ef9c55a8) thrown at [src/hotspot/share/runtime/reflection.cpp, line 1245]" [...] The crash happened outside the Java Virtual Machine in native code. [...] Uncaught signal: 6, pid=1, tid=2, fault_addr=0. [...] Container terminated on signal 6.
non_main
jvm crash on version and hi guys i raised and you very promptly got onto the issue and upgraded dependencies to resolve the vulnerability now there s a very weird thing happening using google cloud spanner jdbc or causes a big old jvm crash works fine environment details alpine corretto jdk openjdk runtime environment corretto build lts running containerised in gcp cloud run error in google cloud spanner jdbc stacktrace condensed because it s huge extracted from err file dump internal exceptions events event thread exception thrown at event thread exception thrown at event thread exception thrown at event thread exception thrown at event thread exception thrown at event thread exception thrown at event thread exception thrown at event thread exception thrown at event thread exception thrown at event thread exception thrown at event thread exception thrown at event thread implicit null exception at to event thread exception thrown at event thread exception thrown at event thread implicit null exception at to event thread exception thrown at event thread exception thrown at event thread exception thrown at event thread exception thrown at event thread exception thrown at the crash happened outside the java virtual machine in native code uncaught signal pid tid fault addr container terminated on signal
0
347,060
10,424,163,939
IssuesEvent
2019-09-16 13:06:35
AY1920S1-CS2103-T16-3/main
https://api.github.com/repos/AY1920S1-CS2103-T16-3/main
opened
Morph address book into base task manager
priority.High type.Task
* [ ] Rename classes, documentation, etc. * [ ] Edit methods to fit a task manager
1.0
Morph address book into base task manager - * [ ] Rename classes, documentation, etc. * [ ] Edit methods to fit a task manager
non_main
morph address book into base task manager rename classes documentation etc edit methods to fit a task manager
0
5,683
29,924,449,412
IssuesEvent
2023-06-22 03:22:52
spicetify/spicetify-themes
https://api.github.com/repos/spicetify/spicetify-themes
closed
[Flow] Theme overlapping and entirely incorrect
☠️ unmaintained
**Describe the bug** Flow theme does not fit the full screen and all buttons/screens are overlapping/incorrect **To Reproduce** Open Spotify with Flow theme **Expected behavior** Expected Spotify to be fullscreen without overlapping (Look like screenshots in read me) ![image](https://user-images.githubusercontent.com/112017221/186501328-7af5ffd3-41d5-46ad-b47e-c6b109458a4e.png) - OS: Windows 10 - Spotify version 1.1.91.824.g07f1e963 - Spicetify version 2.12.0 - Flow
True
[Flow] Theme overlapping and entirely incorrect - **Describe the bug** Flow theme does not fit the full screen and all buttons/screens are overlapping/incorrect **To Reproduce** Open Spotify with Flow theme **Expected behavior** Expected Spotify to be fullscreen without overlapping (Look like screenshots in read me) ![image](https://user-images.githubusercontent.com/112017221/186501328-7af5ffd3-41d5-46ad-b47e-c6b109458a4e.png) - OS: Windows 10 - Spotify version 1.1.91.824.g07f1e963 - Spicetify version 2.12.0 - Flow
main
theme overlapping and entirely incorrect describe the bug flow theme does not fit the full screen and all buttons screens are overlapping incorrect to reproduce open spotify with flow theme expected behavior expected spotify to be fullscreen without overlapping look like screenshots in read me os windows spotify version spicetify version flow
1
1,543
6,572,237,030
IssuesEvent
2017-09-11 00:26:27
ansible/ansible-modules-extras
https://api.github.com/repos/ansible/ansible-modules-extras
closed
Gitlab python binding
affects_2.3 feature_idea waiting_on_maintainer
##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME gitlab_user gitlab_project gitlab_group ##### ANSIBLE VERSION Latest ##### SUMMARY The gitlab_x modules depend on the pyapi-gitlab library. pyapi-gitlab is not actively being maintained (the current maintainer is looking for new maintainers), and there are lots and lots of missing features. I believe it would make sense to move to https://github.com/gpocentek/python-gitlab instead, or maybe even implement the parts of the api that are used natively in the gitlab_x modules.
True
Gitlab python binding - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME gitlab_user gitlab_project gitlab_group ##### ANSIBLE VERSION Latest ##### SUMMARY The gitlab_x modules depend on the pyapi-gitlab library. pyapi-gitlab is not actively being maintained (the current maintainer is looking for new maintainers), and there are lots and lots of missing features. I believe it would make sense to move to https://github.com/gpocentek/python-gitlab instead, or maybe even implement the parts of the api that are used natively in the gitlab_x modules.
main
gitlab python binding issue type feature idea component name gitlab user gitlab project gitlab group ansible version latest summary the gitlab x modules depend on the pyapi gitlab library pyapi gitlab is not actively being maintained the current maintainer is looking for new maintainers and there are lots and lots of missing features i believe it would make sense to move to instead or maybe even implement the parts of the api that are used natively in the gitlab x modules
1
51,587
7,717,527,209
IssuesEvent
2018-05-23 13:58:23
telerik/kendo-ui-core
https://api.github.com/repos/telerik/kendo-ui-core
opened
Add resources for filtering and sequential loading of items
C: TreeView Documentation Kendo2
1. Filtering with loadOnDemand true 2. Sequential expanding of items 3. Asynchronous loading of a dataItem
1.0
Add resources for filtering and sequential loading of items - 1. Filtering with loadOnDemand true 2. Sequential expanding of items 3. Asynchronous loading of a dataItem
non_main
add resources for filtering and sequential loading of items filtering with loadondemand true sequential expanding of items asynchronous loading of a dataitem
0
415,922
12,137,060,520
IssuesEvent
2020-04-23 15:12:36
olros/picturerama
https://api.github.com/repos/olros/picturerama
closed
Make the stage keep its size from the previous scene
priority
In GitLab by @martsha on Apr 7, 2020, 19:53 null
1.0
Make the stage keep its size from the previous scene - In GitLab by @martsha on Apr 7, 2020, 19:53 null
non_main
make the stage keep its size from the previous scene in gitlab by martsha on apr null
0
725
4,318,960,452
IssuesEvent
2016-07-24 11:06:54
gogits/gogs
https://api.github.com/repos/gogits/gogs
closed
500 error when creating a release with an invalid tag name
kind/bug status/assigned to maintainer status/needs feedback
- Gogs version: 0.9.13.0318 - Git version: 1.8.3.1 - Operating system: CentOS 7 - Database: MySQL (MariaDB) - Can you reproduce the bug at http://try.gogs.io: - [x] Yes (provide example URL): https://try.gogs.io/beta/test-repo/releases/new - [ ] No - [ ] Not relevant - Log: ``` 2016/05/09 12:02:14 [...ters/repo/release.go:195 NewReleasePost()] [E] CreateRelease: exit status 128 - fatal: '1.0 alpha' is not a valid tag name. ``` ## Description When creating a release and input an invalid tag name (e.g., `1.0 alpha`), a 500 server error will be shown. There should be a guide to what a valid tag name is and also a gentle error info after failing.
True
500 error when creating a release with an invalid tag name - - Gogs version: 0.9.13.0318 - Git version: 1.8.3.1 - Operating system: CentOS 7 - Database: MySQL (MariaDB) - Can you reproduce the bug at http://try.gogs.io: - [x] Yes (provide example URL): https://try.gogs.io/beta/test-repo/releases/new - [ ] No - [ ] Not relevant - Log: ``` 2016/05/09 12:02:14 [...ters/repo/release.go:195 NewReleasePost()] [E] CreateRelease: exit status 128 - fatal: '1.0 alpha' is not a valid tag name. ``` ## Description When creating a release and input an invalid tag name (e.g., `1.0 alpha`), a 500 server error will be shown. There should be a guide to what a valid tag name is and also a gentle error info after failing.
main
error when creating a release with an invalid tag name gogs version git version operating system centos database mysql mariadb can you reproduce the bug at yes provide example url no not relevant log createrelease exit status fatal alpha is not a valid tag name description when creating a release and input an invalid tag name e g alpha a server error will be shown there should be a guide to what a valid tag name is and also a gentle error info after failing
1
4,335
21,786,655,184
IssuesEvent
2022-05-14 08:29:54
Numble-challenge-Team/client
https://api.github.com/repos/Numble-challenge-Team/client
closed
eslint 규칙 적용
maintain eslint
### ISSUE - Type: chore - Page: - ### 변경 사항 - Icon, Layout, Navigation 컴포넌트 eslint 룰 적용 - pages index 파일 eslint 룰 적용 - my-video app 페이지 eslint 룰 적용
True
eslint 규칙 적용 - ### ISSUE - Type: chore - Page: - ### 변경 사항 - Icon, Layout, Navigation 컴포넌트 eslint 룰 적용 - pages index 파일 eslint 룰 적용 - my-video app 페이지 eslint 룰 적용
main
eslint 규칙 적용 issue type chore page 변경 사항 icon layout navigation 컴포넌트 eslint 룰 적용 pages index 파일 eslint 룰 적용 my video app 페이지 eslint 룰 적용
1
31,661
5,967,920,423
IssuesEvent
2017-05-30 16:58:40
10up/wp_mock
https://api.github.com/repos/10up/wp_mock
closed
Which branch should be used?
bug Documentation
The repository defaults to showing `dev` branch and it looks like that is the branch with active development. `master` hasn't been updated in a few years. Which branch do you recommend we install?
1.0
Which branch should be used? - The repository defaults to showing `dev` branch and it looks like that is the branch with active development. `master` hasn't been updated in a few years. Which branch do you recommend we install?
non_main
which branch should be used the repository defaults to showing dev branch and it looks like that is the branch with active development master hasn t been updated in a few years which branch do you recommend we install
0
254,237
8,071,701,650
IssuesEvent
2018-08-06 13:57:54
aiidateam/aiida_core
https://api.github.com/repos/aiidateam/aiida_core
opened
Global variable not defined in aiida.orm.data.remote._clean
priority/important type/bug
In `aiida.orm.data.remote._clean` at line 169, there is a call to the free function `clean_remote`, which is not defined.
1.0
Global variable not defined in aiida.orm.data.remote._clean - In `aiida.orm.data.remote._clean` at line 169, there is a call to the free function `clean_remote`, which is not defined.
non_main
global variable not defined in aiida orm data remote clean in aiida orm data remote clean at line there is a call to the free function clean remote which is not defined
0
381,739
26,466,991,409
IssuesEvent
2023-01-17 01:25:53
scprogramming/Olive
https://api.github.com/repos/scprogramming/Olive
closed
[Documentation] Login and Registration
In Progress High priority Documentation
I need to document these once I've got them up and running to my standards
1.0
[Documentation] Login and Registration - I need to document these once I've got them up and running to my standards
non_main
login and registration i need to document these once i ve got them up and running to my standards
0
541
3,955,463,610
IssuesEvent
2016-04-29 21:01:28
duckduckgo/zeroclickinfo-goodies
https://api.github.com/repos/duckduckgo/zeroclickinfo-goodies
closed
Improvement/suggestion for Curl cheat sheet
Maintainer Approved
This is for https://duck.co/ia/view/curl_cheat_sheet The page looks fine to me, I just have few small improvements/suggestions I guess it would be better to 1. Have a -v (verbose) and --connect-timeout (or -m/--max-time) since they're used frequently 2. Instead of having https://www.cheatography.com/ankushagarwal11/cheat-sheets/curl-cheat-sheet/ as the reference, would it be a good idea to have a link that has all options? Something like http://linux.about.com/od/commands/l/blcmdl1_curl.htm ?
True
Improvement/suggestion for Curl cheat sheet - This is for https://duck.co/ia/view/curl_cheat_sheet The page looks fine to me, I just have few small improvements/suggestions I guess it would be better to 1. Have a -v (verbose) and --connect-timeout (or -m/--max-time) since they're used frequently 2. Instead of having https://www.cheatography.com/ankushagarwal11/cheat-sheets/curl-cheat-sheet/ as the reference, would it be a good idea to have a link that has all options? Something like http://linux.about.com/od/commands/l/blcmdl1_curl.htm ?
main
improvement suggestion for curl cheat sheet this is for the page looks fine to me i just have few small improvements suggestions i guess it would be better to have a v verbose and connect timeout or m max time since they re used frequently instead of having as the reference would it be a good idea to have a link that has all options something like
1
1,310
5,557,785,615
IssuesEvent
2017-03-24 13:07:54
ansible/ansible-modules-extras
https://api.github.com/repos/ansible/ansible-modules-extras
closed
route53_zone does not support split horizon setup
affects_2.0 aws bug_report cloud waiting_on_maintainer
##### Issue Type: - Bug Report ##### Plugin Name: route53_zone ##### Ansible Version: ``` ansible 2.0.0.2 ``` ##### Ansible Configuration: N/A ##### Environment: Mac OS X against AWS api ##### Summary: Split horizon dns setup fails in ansible. ##### Steps To Reproduce: ``` - name: "register new private zone for {{ domain }}" route53_zone: vpc_id: "{{ vpc.vpc_id }}" vpc_region: "{{ ec2_region }}" zone: "{{ domain }}" state: present register: priv_zone_out - debug: var=priv_zone_out - name: "register new zone for {{ domain }}" route53_zone: zone: "{{ domain }}" state: present register: pub_zone_out - debug: var=pub_zone_out ``` <!-- You can also paste gist.github.com links for larger files. --> ##### Expected Results: I expect two zones in AWS one public and one private. ##### Actual Results: Ansible does not discern between public and private dns zones if they have the same name. It creates one first and next attempt it reuses the private one for the public one. ``` TASK [vpc : register new private zone for qa.tst.<****>] ***************** task path: /Users/olvesh/utvikling/vimond-ansible/_roles/vpc/tasks/eip.yml:39 ESTABLISH LOCAL CONNECTION FOR USER: olvesh 127.0.0.1 EXEC ( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1456736155.85-14866030392441 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1456736155.85-14866030392441 )" ) 127.0.0.1 PUT /var/folders/cw/pnp93xgs3zq16021bb6zc_680000gn/T/tmpTA3526 TO /Users/olvesh/.ansible/tmp/ansible-tmp-1456736155.85-14866030392441/route53_zone 127.0.0.1 EXEC LANG=no_NO.UTF-8 LC_ALL=no_NO.UTF-8 LC_MESSAGES=no_NO.UTF-8 /Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python /Users/olvesh/.ansible/tmp/ansible-tmp-1456736155.85-14866030392441/route53_zone; rm -rf "/Users/olvesh/.ansible/tmp/ansible-tmp-1456736155.85-14866030392441/" > /dev/null 2>&1 changed: [localhost] => {"changed": true, "invocation": {"module_args": {"aws_access_key": null, "aws_secret_key": null, 
"comment": "", "ec2_url": null, "profile": null, "region": null, "security_token": null, "state": "present", "validate_certs": true, "vpc_id": "vpc-59ec7d30", "vpc_region": "eu-central-1", "zone": "qa.tst.<****>"}, "module_name": "route53_zone"}, "set": {"comment": "", "name": "qa.tst.<****>.", "private_zone": true, "vpc_id": "vpc-59ec7d30", "vpc_region": "eu-central-1", "zone_id": "Z3IM3APUOSPX29"}} TASK [vpc : debug] ************************************************************* task path: /Users/olvesh/utvikling/vimond-ansible/_roles/vpc/tasks/eip.yml:47 ok: [localhost] => { "priv_zone_out": { "changed": true, "set": { "comment": "", "name": "qa.tst.<****>.", "private_zone": true, "vpc_id": "vpc-****", "vpc_region": "eu-central-1", "zone_id": "Z3IM3APUOSPX29" } } } TASK [vpc : register new zone for qa.tst.<****>] ************************* task path: /Users/olvesh/utvikling/vimond-ansible/_roles/vpc/tasks/eip.yml:49 ESTABLISH LOCAL CONNECTION FOR USER: olvesh 127.0.0.1 EXEC ( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1456736158.72-146896956005879 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1456736158.72-146896956005879 )" ) 127.0.0.1 PUT /var/folders/cw/pnp93xgs3zq16021bb6zc_680000gn/T/tmpwMfLtN TO /Users/olvesh/.ansible/tmp/ansible-tmp-1456736158.72-146896956005879/route53_zone 127.0.0.1 EXEC LANG=no_NO.UTF-8 LC_ALL=no_NO.UTF-8 LC_MESSAGES=no_NO.UTF-8 /Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python /Users/olvesh/.ansible/tmp/ansible-tmp-1456736158.72-146896956005879/route53_zone; rm -rf "/Users/olvesh/.ansible/tmp/ansible-tmp-1456736158.72-146896956005879/" > /dev/null 2>&1 ok: [localhost] => {"changed": false, "invocation": {"module_args": {"aws_access_key": null, "aws_secret_key": null, "comment": "", "ec2_url": null, "profile": null, "region": null, "security_token": null, "state": "present", "validate_certs": true, "vpc_id": null, "vpc_region": null, "zone": "qa.tst.<****>"}, 
"module_name": "route53_zone"}, "set": {"comment": "", "name": "qa.tst.<****>.", "private_zone": false, "vpc_id": null, "vpc_region": null, "zone_id": "Z3IM3APUOSPX29"}} TASK [vpc : debug] ************************************************************* task path: /Users/olvesh/utvikling/vimond-ansible/_roles/vpc/tasks/eip.yml:55 ok: [localhost] => { "pub_zone_out": { "changed": false, "set": { "comment": "", "name": "qa.tst.<****>.", "private_zone": false, "vpc_id": null, "vpc_region": null, "zone_id": "Z3IM3APUOSPX29" } } } ```
True
route53_zone does not support split horizon setup - ##### Issue Type: - Bug Report ##### Plugin Name: route53_zone ##### Ansible Version: ``` ansible 2.0.0.2 ``` ##### Ansible Configuration: N/A ##### Environment: Mac OS X against AWS api ##### Summary: Split horizon dns setup fails in ansible. ##### Steps To Reproduce: ``` - name: "register new private zone for {{ domain }}" route53_zone: vpc_id: "{{ vpc.vpc_id }}" vpc_region: "{{ ec2_region }}" zone: "{{ domain }}" state: present register: priv_zone_out - debug: var=priv_zone_out - name: "register new zone for {{ domain }}" route53_zone: zone: "{{ domain }}" state: present register: pub_zone_out - debug: var=pub_zone_out ``` <!-- You can also paste gist.github.com links for larger files. --> ##### Expected Results: I expect two zones in AWS one public and one private. ##### Actual Results: Ansible does not discern between public and private dns zones if they have the same name. It creates one first and next attempt it reuses the private one for the public one. 
``` TASK [vpc : register new private zone for qa.tst.<****>] ***************** task path: /Users/olvesh/utvikling/vimond-ansible/_roles/vpc/tasks/eip.yml:39 ESTABLISH LOCAL CONNECTION FOR USER: olvesh 127.0.0.1 EXEC ( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1456736155.85-14866030392441 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1456736155.85-14866030392441 )" ) 127.0.0.1 PUT /var/folders/cw/pnp93xgs3zq16021bb6zc_680000gn/T/tmpTA3526 TO /Users/olvesh/.ansible/tmp/ansible-tmp-1456736155.85-14866030392441/route53_zone 127.0.0.1 EXEC LANG=no_NO.UTF-8 LC_ALL=no_NO.UTF-8 LC_MESSAGES=no_NO.UTF-8 /Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python /Users/olvesh/.ansible/tmp/ansible-tmp-1456736155.85-14866030392441/route53_zone; rm -rf "/Users/olvesh/.ansible/tmp/ansible-tmp-1456736155.85-14866030392441/" > /dev/null 2>&1 changed: [localhost] => {"changed": true, "invocation": {"module_args": {"aws_access_key": null, "aws_secret_key": null, "comment": "", "ec2_url": null, "profile": null, "region": null, "security_token": null, "state": "present", "validate_certs": true, "vpc_id": "vpc-59ec7d30", "vpc_region": "eu-central-1", "zone": "qa.tst.<****>"}, "module_name": "route53_zone"}, "set": {"comment": "", "name": "qa.tst.<****>.", "private_zone": true, "vpc_id": "vpc-59ec7d30", "vpc_region": "eu-central-1", "zone_id": "Z3IM3APUOSPX29"}} TASK [vpc : debug] ************************************************************* task path: /Users/olvesh/utvikling/vimond-ansible/_roles/vpc/tasks/eip.yml:47 ok: [localhost] => { "priv_zone_out": { "changed": true, "set": { "comment": "", "name": "qa.tst.<****>.", "private_zone": true, "vpc_id": "vpc-****", "vpc_region": "eu-central-1", "zone_id": "Z3IM3APUOSPX29" } } } TASK [vpc : register new zone for qa.tst.<****>] ************************* task path: /Users/olvesh/utvikling/vimond-ansible/_roles/vpc/tasks/eip.yml:49 ESTABLISH LOCAL CONNECTION FOR USER: olvesh 
127.0.0.1 EXEC ( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1456736158.72-146896956005879 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1456736158.72-146896956005879 )" ) 127.0.0.1 PUT /var/folders/cw/pnp93xgs3zq16021bb6zc_680000gn/T/tmpwMfLtN TO /Users/olvesh/.ansible/tmp/ansible-tmp-1456736158.72-146896956005879/route53_zone 127.0.0.1 EXEC LANG=no_NO.UTF-8 LC_ALL=no_NO.UTF-8 LC_MESSAGES=no_NO.UTF-8 /Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python /Users/olvesh/.ansible/tmp/ansible-tmp-1456736158.72-146896956005879/route53_zone; rm -rf "/Users/olvesh/.ansible/tmp/ansible-tmp-1456736158.72-146896956005879/" > /dev/null 2>&1 ok: [localhost] => {"changed": false, "invocation": {"module_args": {"aws_access_key": null, "aws_secret_key": null, "comment": "", "ec2_url": null, "profile": null, "region": null, "security_token": null, "state": "present", "validate_certs": true, "vpc_id": null, "vpc_region": null, "zone": "qa.tst.<****>"}, "module_name": "route53_zone"}, "set": {"comment": "", "name": "qa.tst.<****>.", "private_zone": false, "vpc_id": null, "vpc_region": null, "zone_id": "Z3IM3APUOSPX29"}} TASK [vpc : debug] ************************************************************* task path: /Users/olvesh/utvikling/vimond-ansible/_roles/vpc/tasks/eip.yml:55 ok: [localhost] => { "pub_zone_out": { "changed": false, "set": { "comment": "", "name": "qa.tst.<****>.", "private_zone": false, "vpc_id": null, "vpc_region": null, "zone_id": "Z3IM3APUOSPX29" } } } ```
main
zone does not support split horizon setup issue type bug report plugin name zone ansible version ansible ansible configuration n a environment mac os x against aws api summary split horizon dns setup fails in ansible steps to reproduce name register new private zone for domain zone vpc id vpc vpc id vpc region region zone domain state present register priv zone out debug var priv zone out name register new zone for domain zone zone domain state present register pub zone out debug var pub zone out expected results i expect two zones in aws one public and one private actual results ansible does not discern between public and private dns zones if they have the same name it creates one first and next attempt it reuses the private one for the public one task task path users olvesh utvikling vimond ansible roles vpc tasks eip yml establish local connection for user olvesh exec umask mkdir p echo home ansible tmp ansible tmp echo echo home ansible tmp ansible tmp put var folders cw t to users olvesh ansible tmp ansible tmp zone exec lang no no utf lc all no no utf lc messages no no utf library frameworks python framework versions resources python app contents macos python users olvesh ansible tmp ansible tmp zone rm rf users olvesh ansible tmp ansible tmp dev null changed changed true invocation module args aws access key null aws secret key null comment url null profile null region null security token null state present validate certs true vpc id vpc vpc region eu central zone qa tst module name zone set comment name qa tst private zone true vpc id vpc vpc region eu central zone id task task path users olvesh utvikling vimond ansible roles vpc tasks eip yml ok priv zone out changed true set comment name qa tst private zone true vpc id vpc vpc region eu central zone id task task path users olvesh utvikling vimond ansible roles vpc tasks eip yml establish local connection for user olvesh exec umask mkdir p echo home ansible tmp ansible tmp echo echo home ansible tmp 
ansible tmp put var folders cw t tmpwmfltn to users olvesh ansible tmp ansible tmp zone exec lang no no utf lc all no no utf lc messages no no utf library frameworks python framework versions resources python app contents macos python users olvesh ansible tmp ansible tmp zone rm rf users olvesh ansible tmp ansible tmp dev null ok changed false invocation module args aws access key null aws secret key null comment url null profile null region null security token null state present validate certs true vpc id null vpc region null zone qa tst module name zone set comment name qa tst private zone false vpc id null vpc region null zone id task task path users olvesh utvikling vimond ansible roles vpc tasks eip yml ok pub zone out changed false set comment name qa tst private zone false vpc id null vpc region null zone id
1
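The root cause in the route53_zone report above is matching hosted zones by name alone. Route 53 exposes a `Config.PrivateZone` flag on each hosted zone, which is what disambiguates a split-horizon pair; the sketch below shows the lookup on hand-written dicts shaped like a `list_hosted_zones` response (the data is illustrative, not pulled from AWS):

```python
def find_zone(zones, name, private):
    """Return the first hosted zone matching both name AND visibility.

    Matching on name alone (the failure mode described above) conflates
    the public and private halves of a split-horizon setup; the
    PrivateZone flag in each zone's Config tells them apart.
    """
    # Route 53 stores zone names with a trailing dot.
    wanted = name if name.endswith(".") else name + "."
    for zone in zones:
        if zone["Name"] == wanted and zone["Config"]["PrivateZone"] == private:
            return zone
    return None

# Hand-written data shaped like a list_hosted_zones() response.
zones = [
    {"Id": "Z-PRIV", "Name": "qa.tst.example.com.",
     "Config": {"PrivateZone": True}},
    {"Id": "Z-PUB", "Name": "qa.tst.example.com.",
     "Config": {"PrivateZone": False}},
]
```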
25,412
12,241,330,620
IssuesEvent
2020-05-05 03:38:48
MicrosoftDocs/azure-docs
https://api.github.com/repos/MicrosoftDocs/azure-docs
closed
python message.dead_letter(description) does not add description to the dead lettered message
Pri2 cxp doc-enhancement service-bus-messaging/svc triaged
If you do: message.dead_letter(description="SETTING A reason") That description cannot be found in the dead letter properties: for message in messages: # pylint: disable=not-an-iterable print(message) print(message.header) print(message.properties) print(message.user_properties) print(message.annotations) --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 47bc6b40-39cd-eb95-1911-ddff96dda210 * Version Independent ID: d91a6110-e9c0-3a34-a321-6850778baaef * Content: [Quickstart: Use Azure Service Bus queues with Python](https://docs.microsoft.com/en-us/azure/service-bus-messaging/service-bus-python-how-to-use-queues) * Content Source: [articles/service-bus-messaging/service-bus-python-how-to-use-queues.md](https://github.com/Microsoft/azure-docs/blob/master/articles/service-bus-messaging/service-bus-python-how-to-use-queues.md) * Service: **service-bus-messaging** * GitHub Login: @axisc * Microsoft Alias: **aschhab**
1.0
python message.dead_letter(description) does not add description to the dead lettered message - If you do: message.dead_letter(description="SETTING A reason") That description cannot be found in the dead letter properties: for message in messages: # pylint: disable=not-an-iterable print(message) print(message.header) print(message.properties) print(message.user_properties) print(message.annotations) --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 47bc6b40-39cd-eb95-1911-ddff96dda210 * Version Independent ID: d91a6110-e9c0-3a34-a321-6850778baaef * Content: [Quickstart: Use Azure Service Bus queues with Python](https://docs.microsoft.com/en-us/azure/service-bus-messaging/service-bus-python-how-to-use-queues) * Content Source: [articles/service-bus-messaging/service-bus-python-how-to-use-queues.md](https://github.com/Microsoft/azure-docs/blob/master/articles/service-bus-messaging/service-bus-python-how-to-use-queues.md) * Service: **service-bus-messaging** * GitHub Login: @axisc * Microsoft Alias: **aschhab**
non_main
python message dead letter description does not add description to the dead lettered message if you do message dead letter description setting a reason that description cannot be found in the dead letter properties for message in messages pylint disable not an iterable print message print message header print message properties print message user properties print message annotations document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service service bus messaging github login axisc microsoft alias aschhab
0
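For context on the dead_letter record above: Service Bus conventionally records the dead-letter reason and description in per-message properties named `DeadLetterReason` and `DeadLetterErrorDescription`, not in the body or headers, and the key encoding (bytes vs. str) varies by SDK version — which is plausibly why the reporter could not find the description. The sketch below operates on a plain dict standing in for the SDK's property mapping; the key names are an assumption worth verifying against the SDK release in use:

```python
def dead_letter_info(properties):
    """Return (reason, description) from a message property mapping.

    DeadLetterReason / DeadLetterErrorDescription are where Service Bus
    conventionally records dead-letter metadata; older AMQP-based SDKs
    expose them as bytes keys, newer ones as str, so both are tried.
    'properties' is a plain dict here, standing in for
    message.user_properties / application properties.
    """
    def get(key):
        return properties.get(key.encode(), properties.get(key))
    return get("DeadLetterReason"), get("DeadLetterErrorDescription")

# Dict shaped like an older SDK's bytes-keyed property mapping.
props = {b"DeadLetterReason": b"SETTING A reason",
         b"DeadLetterErrorDescription": b"message expired"}
```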
1,655
6,573,991,771
IssuesEvent
2017-09-11 10:59:41
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
authorized_key: pull keys from git server before the module is copied to the target machine
affects_2.2 feature_idea waiting_on_maintainer
<!--- Verify first that your issue/request is not already reported in GitHub --> ##### ISSUE TYPE <!--- Pick one below and delete the rest: --> - Feature Idea ##### COMPONENT NAME <!--- Name of the plugin/module/task --> module: authorized_key ##### ANSIBLE VERSION <!--- Paste verbatim output from “ansible --version” between quotes below --> ``` ansible 2.2.0.0 config file = /home/username/.ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION <!--- Mention any settings you have changed/added/removed in ansible.cfg (or using the ANSIBLE_* environment variables). --> None which affect module behaviour. ##### OS / ENVIRONMENT <!--- Mention the OS you are running Ansible from, and the OS you are managing, or say “N/A” for anything that is not platform-specific. --> N/A ##### SUMMARY <!--- Explain the problem briefly --> In my company we are using a local git repository server (gitlab) and very few servers are able to access it. The majority of servers don't have network access to our local gitlab instance since we use it exclusively for ansible. So when i use the authorized_key module to deploy ssh keys and tell it to pull the keys from our gitlab instance (https://gitlab_server/{{ username }}.keys) the servers that can't access our gitlab instance cannot pull the keys. I understand that the module is copied to the target machine first and then executed, but it would be neat if there could be a way to get the keys from the git server before the module is copied to the target machine. sorry if this is to much to ask and i know there are other ways to deploy ssh keys, but i find the ability to provide the keys from URL very useful and it seems useless if target servers cannot access the git server to get the keys. ##### STEPS TO REPRODUCE <!--- For bugs, show exactly how to reproduce the problem. For new features, show how the feature would be used. --> Try to deploy the keys to a target that cannot access the git server. 
<!--- Paste example playbooks or commands between quotes below --> ``` - name: "Deploy public ssh key for username" authorized_key: user: "username" key: "https://gitlab_server/username.keys" exclusive: yes validate_certs: no state: present ``` <!--- You can also paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- What did you expect to happen when running the steps above? --> ``` changed: [ansible_host] ``` ##### ACTUAL RESULTS <!--- What actually happened? If possible run with extra verbosity (-vvvv) --> Because the target server cannot access the local git server the following error appears. ``` fatal: [ansible_host]: FAILED! => { "changed": false, "failed": true, "invocation": { "module_args": { "exclusive": true, "key": "https://gitlab_server/username.keys", "key_options": null, "manage_dir": true, "path": null, "state": "present", "unique": false, "user": "username", "validate_certs": false }, "module_name": "authorized_key" }, "msg": "Error getting key from: https://gitlab_server/username.keys" } ```
True
authorized_key: pull keys from git server before the module is copied to the target machine - <!--- Verify first that your issue/request is not already reported in GitHub --> ##### ISSUE TYPE <!--- Pick one below and delete the rest: --> - Feature Idea ##### COMPONENT NAME <!--- Name of the plugin/module/task --> module: authorized_key ##### ANSIBLE VERSION <!--- Paste verbatim output from “ansible --version” between quotes below --> ``` ansible 2.2.0.0 config file = /home/username/.ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION <!--- Mention any settings you have changed/added/removed in ansible.cfg (or using the ANSIBLE_* environment variables). --> None which affect module behaviour. ##### OS / ENVIRONMENT <!--- Mention the OS you are running Ansible from, and the OS you are managing, or say “N/A” for anything that is not platform-specific. --> N/A ##### SUMMARY <!--- Explain the problem briefly --> In my company we are using a local git repository server (gitlab) and very few servers are able to access it. The majority of servers don't have network access to our local gitlab instance since we use it exclusively for ansible. So when i use the authorized_key module to deploy ssh keys and tell it to pull the keys from our gitlab instance (https://gitlab_server/{{ username }}.keys) the servers that can't access our gitlab instance cannot pull the keys. I understand that the module is copied to the target machine first and then executed, but it would be neat if there could be a way to get the keys from the git server before the module is copied to the target machine. sorry if this is to much to ask and i know there are other ways to deploy ssh keys, but i find the ability to provide the keys from URL very useful and it seems useless if target servers cannot access the git server to get the keys. ##### STEPS TO REPRODUCE <!--- For bugs, show exactly how to reproduce the problem. 
For new features, show how the feature would be used. --> Try to deploy the keys to a target that cannot access the git server. <!--- Paste example playbooks or commands between quotes below --> ``` - name: "Deploy public ssh key for username" authorized_key: user: "username" key: "https://gitlab_server/username.keys" exclusive: yes validate_certs: no state: present ``` <!--- You can also paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- What did you expect to happen when running the steps above? --> ``` changed: [ansible_host] ``` ##### ACTUAL RESULTS <!--- What actually happened? If possible run with extra verbosity (-vvvv) --> Because the target server cannot access the local git server the following error appears. ``` fatal: [ansible_host]: FAILED! => { "changed": false, "failed": true, "invocation": { "module_args": { "exclusive": true, "key": "https://gitlab_server/username.keys", "key_options": null, "manage_dir": true, "path": null, "state": "present", "unique": false, "user": "username", "validate_certs": false }, "module_name": "authorized_key" }, "msg": "Error getting key from: https://gitlab_server/username.keys" } ```
main
authorized key pull keys from git server before the module is copied to the target machine issue type feature idea component name module authorized key ansible version ansible config file home username ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables none which affect module behaviour os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific n a summary in my company we are using a local git repository server gitlab and very few servers are able to access it the majority of servers don t have network access to our local gitlab instance since we use it exclusively for ansible so when i use the authorized key module to deploy ssh keys and tell it to pull the keys from our gitlab instance username keys the servers that can t access our gitlab instance cannot pull the keys i understand that the module is copied to the target machine first and then executed but it would be neat if there could be a way to get the keys from the git server before the module is copied to the target machine sorry if this is to much to ask and i know there are other ways to deploy ssh keys but i find the ability to provide the keys from url very useful and it seems useless if target servers cannot access the git server to get the keys steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used try to deploy the keys to a target that cannot access the git server name deploy public ssh key for username authorized key user username key exclusive yes validate certs no state present expected results changed actual results because the target server cannot access the local git server the following error appears fatal failed changed false failed true invocation module args exclusive true key key options null manage dir 
true path null state present unique false user username validate certs false module name authorized key msg error getting key from
1
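The authorized_key feature request above boils down to: resolve the keys URL on the control node, then hand the resulting *content* to the target, so the target never needs network access to the git server. A minimal sketch, using a local `file://` URL in place of the gitlab keys URL so the demo is network-free:

```python
from urllib.request import urlopen
import pathlib
import tempfile

def fetch_keys(url):
    """Download authorized_keys material on the machine running this
    code (the controller), so the content -- not the URL -- is what
    gets pushed to a target with no route to the git server."""
    with urlopen(url) as resp:
        return resp.read().decode("utf-8").strip()

# Demo: a local file:// URL stands in for https://gitlab_server/username.keys
tmp = pathlib.Path(tempfile.mkdtemp()) / "username.keys"
tmp.write_text("ssh-ed25519 AAAAC3Nza... username@example\n")
keys = fetch_keys(tmp.as_uri())
```

In Ansible terms this is what a lookup achieves, since lookup plugins are evaluated on the control node: pass the looked-up content to `key:` instead of the URL.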
7,114
6,776,390,782
IssuesEvent
2017-10-27 17:40:23
servo/servo
https://api.github.com/repos/servo/servo
closed
servo-mac9 and servo-mac5 take 14 minutes longer to compile than other mac builders
A-infrastructure
They both consistently report 37 minutes for ./mach build, while other macs like servo-mac3 report 23 minutes. This tends to push our end-to-end built times to >1 hour when these machines are selected. I sshed in and didn't see any renegade processes that have caused similar slowdowns in the past.
1.0
servo-mac9 and servo-mac5 take 14 minutes longer to compile than other mac builders - They both consistently report 37 minutes for ./mach build, while other macs like servo-mac3 report 23 minutes. This tends to push our end-to-end built times to >1 hour when these machines are selected. I sshed in and didn't see any renegade processes that have caused similar slowdowns in the past.
non_main
servo and servo take minutes longer to compile than other mac builders they both consistently report minutes for mach build while other macs like servo report minutes this tends to push our end to end built times to hour when these machines are selected i sshed in and didn t see any renegade processes that have caused similar slowdowns in the past
0
127,973
5,041,569,246
IssuesEvent
2016-12-19 10:47:58
restlet/restlet-framework-java
https://api.github.com/repos/restlet/restlet-framework-java
closed
[GWT] in IE 10, POST/PUT request without body are received on server side with an "undefined" body
Priority: high State: new Type: bug
It works fine with Chrome and Firefox
1.0
[GWT] in IE 10, POST/PUT request without body are received on server side with an "undefined" body - It works fine with Chrome and Firefox
non_main
in ie post put request without body are received on server side with an undefined body it works fine with chrome and firefox
0
496,047
14,293,017,046
IssuesEvent
2020-11-24 02:34:00
internetarchive/openlibrary
https://api.github.com/repos/internetarchive/openlibrary
closed
Unable to login successfully during the local development environment setup
Lead: @cdrini Priority: 1 Theme: Development Type: Bug
<!-- What problem are we solving? What does the experience look like today? What are the symptoms? --> At the time of login to OL interface an internal error is noticed. ### Evidence / Screenshot (if possible) ![Screenshot from 2020-11-13 12-09-28](https://user-images.githubusercontent.com/31753232/99041302-ab19d200-25b0-11eb-87db-c0959ca8a181.png) ![Screenshot from 2020-11-13 13-05-44](https://user-images.githubusercontent.com/31753232/99041456-f8963f00-25b0-11eb-824c-0cab16ef0ee0.png) ### Relevant url? <!-- `https://openlibrary.org/...` --> ### Steps to Reproduce <!-- What steps caused you to find the bug? --> 1. Run docker-compose up 2. Browse to localhost:8080 3. Try to Login <!-- What actually happened after these steps? What did you expect to happen? --> * Actual: Showed Internal Error * Expected: ### Details - **Logged in (Y/N)?** Y - **Browser type/version?** Firefox/Chromium - **Operating system?** Ubuntu 18.04 - **Environment (prod/dev/local)?** local <!-- If not sure, put prod --> ### Proposal & Constraints <!-- What is the proposed solution / implementation? Is there a precedent of this approach succeeding elsewhere? --> ### Related files <!-- Files related to this issue; this is super useful for new contributors who might want to help! If you're not sure, leave this blank; a maintainer will add them. --> ### Stakeholders <!-- @ tag stakeholders of this bug -->
1.0
Unable to login successfully during the local development environment setup - <!-- What problem are we solving? What does the experience look like today? What are the symptoms? --> At the time of login to OL interface an internal error is noticed. ### Evidence / Screenshot (if possible) ![Screenshot from 2020-11-13 12-09-28](https://user-images.githubusercontent.com/31753232/99041302-ab19d200-25b0-11eb-87db-c0959ca8a181.png) ![Screenshot from 2020-11-13 13-05-44](https://user-images.githubusercontent.com/31753232/99041456-f8963f00-25b0-11eb-824c-0cab16ef0ee0.png) ### Relevant url? <!-- `https://openlibrary.org/...` --> ### Steps to Reproduce <!-- What steps caused you to find the bug? --> 1. Run docker-compose up 2. Browse to localhost:8080 3. Try to Login <!-- What actually happened after these steps? What did you expect to happen? --> * Actual: Showed Internal Error * Expected: ### Details - **Logged in (Y/N)?** Y - **Browser type/version?** Firefox/Chromium - **Operating system?** Ubuntu 18.04 - **Environment (prod/dev/local)?** local <!-- If not sure, put prod --> ### Proposal & Constraints <!-- What is the proposed solution / implementation? Is there a precedent of this approach succeeding elsewhere? --> ### Related files <!-- Files related to this issue; this is super useful for new contributors who might want to help! If you're not sure, leave this blank; a maintainer will add them. --> ### Stakeholders <!-- @ tag stakeholders of this bug -->
non_main
unable to login successfully during the local development environment setup at the time of login to ol interface an internal error is noticed evidence screenshot if possible relevant url steps to reproduce run docker compose up browse to localhost try to login actual showed internal error expected details logged in y n y browser type version firefox chromium operating system ubuntu environment prod dev local local proposal constraints related files stakeholders
0
5,334
26,922,607,326
IssuesEvent
2023-02-07 11:33:30
dbt-labs/docs.getdbt.com
https://api.github.com/repos/dbt-labs/docs.getdbt.com
opened
`vars` in `dbt_project.yml` are not Jinja-rendered
content improvement maintainer request
### Contributions - [X] I have read the contribution docs, and understand what's expected of me. ### Link to the page on docs.getdbt.com requiring updates https://docs.getdbt.com/docs/build/project-variables#defining-variables-in-dbt_projectyml ### What part(s) of the page would you like to see updated? `vars` can take static input only. The `vars` dictionary in `dbt_project.yml` is not Jinja rendered. As such, you **cannot** have code like: ```yml vars: my_var: | {% if target.name == 'dev' %} something {% elif env_var('other_input') %} something_else {% endif %} ``` ### Additional information This is a frequently opened issue: - https://github.com/dbt-labs/dbt-core/issues/3105 - https://github.com/dbt-labs/dbt-core/issues/6382 - https://github.com/dbt-labs/dbt-core/issues/6880 Lengthier discussion: - https://github.com/dbt-labs/dbt-core/discussions/6170
True
`vars` in `dbt_project.yml` are not Jinja-rendered - ### Contributions - [X] I have read the contribution docs, and understand what's expected of me. ### Link to the page on docs.getdbt.com requiring updates https://docs.getdbt.com/docs/build/project-variables#defining-variables-in-dbt_projectyml ### What part(s) of the page would you like to see updated? `vars` can take static input only. The `vars` dictionary in `dbt_project.yml` is not Jinja rendered. As such, you **cannot** have code like: ```yml vars: my_var: | {% if target.name == 'dev' %} something {% elif env_var('other_input') %} something_else {% endif %} ``` ### Additional information This is a frequently opened issue: - https://github.com/dbt-labs/dbt-core/issues/3105 - https://github.com/dbt-labs/dbt-core/issues/6382 - https://github.com/dbt-labs/dbt-core/issues/6880 Lengthier discussion: - https://github.com/dbt-labs/dbt-core/discussions/6170
main
vars in dbt project yml are not jinja rendered contributions i have read the contribution docs and understand what s expected of me link to the page on docs getdbt com requiring updates what part s of the page would you like to see updated vars can take static input only the vars dictionary in dbt project yml is not jinja rendered as such you cannot have code like yml vars my var if target name dev something elif env var other input something else endif additional information this is a frequently opened issue lengthier discussion
1
3,540
13,932,592,824
IssuesEvent
2020-10-22 07:30:06
pace/bricks
https://api.github.com/repos/pace/bricks
closed
objstore: move healthcheck registration into client creation
EST::Hours S::In Progress T::Maintainance
# Motivation Do not register healthchecks if the package is simply being imported but the client not necessarily used, i.e., move them out of the `init()` method into the client creation.
True
objstore: move healthcheck registration into client creation - # Motivation Do not register healthchecks if the package is simply being imported but the client not necessarily used, i.e., move them out of the `init()` method into the client creation.
main
objstore move healthcheck registration into client creation motivation do not register healthchecks if the package is simply being imported but the client not necessarily used i e move them out of the init method into the client creation
1
1,797
6,575,903,229
IssuesEvent
2017-09-11 17:46:27
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
Add limit of number of backup files to file modules with backup option
affects_2.3 feature_idea waiting_on_maintainer
##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME File modules with `backup` option: `copy`, `template`, `lineinfile`, `ini_file`, `replace`. ##### SUMMARY If you are using `backup` option for a long time, large number of backup files is piled up in config directory: ``` service.conf service.conf.2016-03-09@12:20:22 service.conf.2016-03-15@18:17:20~ service.conf.2016-03-21@17:59:52~ service.conf.2016-03-24@19:19:26~ ... tons and tons and tons of backup files here ... ``` In my use case backup files are used to be able to quick-revert manually, if something got wrong. So old files are not interesting, they just become obsolete garbage. It would be very convenient to have options `backup_max_age` and `backup_max_files` , that will automatically clean up old backup files based on their age(in days) or total number. ##### STEPS TO REPRODUCE Something like that: ``` yaml - name: install service config template: src=service.cfg dest=/etc/service/service.cfg mode=0644 backup=yes backup_max_age=14 ``` If service.cfg was changed - creates new backup file and clears up backup files older than 2 weeks.
True
Add limit of number of backup files to file modules with backup option - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME File modules with `backup` option: `copy`, `template`, `lineinfile`, `ini_file`, `replace`. ##### SUMMARY If you are using `backup` option for a long time, large number of backup files is piled up in config directory: ``` service.conf service.conf.2016-03-09@12:20:22 service.conf.2016-03-15@18:17:20~ service.conf.2016-03-21@17:59:52~ service.conf.2016-03-24@19:19:26~ ... tons and tons and tons of backup files here ... ``` In my use case backup files are used to be able to quick-revert manually, if something got wrong. So old files are not interesting, they just become obsolete garbage. It would be very convenient to have options `backup_max_age` and `backup_max_files` , that will automatically clean up old backup files based on their age(in days) or total number. ##### STEPS TO REPRODUCE Something like that: ``` yaml - name: install service config template: src=service.cfg dest=/etc/service/service.cfg mode=0644 backup=yes backup_max_age=14 ``` If service.cfg was changed - creates new backup file and clears up backup files older than 2 weeks.
main
add limit of number of backup files to file modules with backup option issue type feature idea component name file modules with backup option copy template lineinfile ini file replace summary if you are using backup option for a long time large number of backup files is piled up in config directory service conf service conf service conf service conf service conf tons and tons and tons of backup files here in my use case backup files are used to be able to quick revert manually if something got wrong so old files are not interesting they just become obsolete garbage it would be very convenient to have options backup max age and backup max files that will automatically clean up old backup files based on their age in days or total number steps to reproduce something like that yaml name install service config template src service cfg dest etc service service cfg mode backup yes backup max age if service cfg was changed creates new backup file and clears up backup files older than weeks
1
83,329
24,041,192,440
IssuesEvent
2022-09-16 02:05:11
moclojer/moclojer
https://api.github.com/repos/moclojer/moclojer
closed
clojure devcontainer support
documentation docker build
Whats is [devcontainer](https://code.visualstudio.com/docs/remote/containers)? Way to leave the development environment inside the container, it is a specification that started in vscode and other editors support.
1.0
clojure devcontainer support - Whats is [devcontainer](https://code.visualstudio.com/docs/remote/containers)? Way to leave the development environment inside the container, it is a specification that started in vscode and other editors support.
non_main
clojure devcontainer support whats is way to leave the development environment inside the container it is a specification that started in vscode and other editors support
0
594,321
18,043,226,923
IssuesEvent
2021-09-18 12:15:27
UofA-SPEAR/software
https://api.github.com/repos/UofA-SPEAR/software
closed
Create SMACH state to drive to a GPS waypoint
good first issue priority 2
https://circ.cstag.ca/2021/rules/#autonomy-guidelines Some competition tasks will involve navigating to GPS waypoints. The rover should have a SMACH state where it navigates toward a given GPS waypoint and stops when it has reached that location (within a configurable margin of error).
1.0
Create SMACH state to drive to a GPS waypoint - https://circ.cstag.ca/2021/rules/#autonomy-guidelines Some competition tasks will involve navigating to GPS waypoints. The rover should have a SMACH state where it navigates toward a given GPS waypoint and stops when it has reached that location (within a configurable margin of error).
non_main
create smach state to drive to a gps waypoint some competition tasks will involve navigating to gps waypoints the rover should have a smach state where it navigates toward a given gps waypoint and stops when it has reached that location within a configurable margin of error
0
216,594
24,281,584,131
IssuesEvent
2022-09-28 17:54:33
liorzilberg/struts
https://api.github.com/repos/liorzilberg/struts
opened
CVE-2020-13959 (Medium) detected in velocity-tools-2.0.jar
security vulnerability
## CVE-2020-13959 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>velocity-tools-2.0.jar</b></p></summary> <p>VelocityTools is an integrated collection of Velocity subprojects with the common goal of creating tools and infrastructure to speed and ease development of both web and non-web applications using the Velocity template engine.</p> <p>Path to dependency file: /plugins/sitemesh/pom.xml</p> <p>Path to vulnerable library: /.m2/repository/org/apache/velocity/velocity-tools/2.0/velocity-tools-2.0.jar,/.m2/repository/org/apache/velocity/velocity-tools/2.0/velocity-tools-2.0.jar,/.m2/repository/org/apache/velocity/velocity-tools/2.0/velocity-tools-2.0.jar,/.m2/repository/org/apache/velocity/velocity-tools/2.0/velocity-tools-2.0.jar,/.m2/repository/org/apache/velocity/velocity-tools/2.0/velocity-tools-2.0.jar,/.m2/repository/org/apache/velocity/velocity-tools/2.0/velocity-tools-2.0.jar</p> <p> Dependency Hierarchy: - :x: **velocity-tools-2.0.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/liorzilberg/struts/commit/6950763af860884188f4080d19a18c5ede16cd74">6950763af860884188f4080d19a18c5ede16cd74</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The default error page for VelocityView in Apache Velocity Tools prior to 3.1 reflects back the vm file that was entered as part of the URL. An attacker can set an XSS payload file as this vm file in the URL which results in this payload being executed. XSS vulnerabilities allow attackers to execute arbitrary JavaScript in the context of the attacked website and the attacked user.
This can be abused to steal session cookies, perform requests in the name of the victim or for phishing attacks. <p>Publish Date: 2021-03-10 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-13959>CVE-2020-13959</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/advisories/GHSA-fh63-4r66-jc7v">https://github.com/advisories/GHSA-fh63-4r66-jc7v</a></p> <p>Release Date: 2021-03-10</p> <p>Fix Resolution: org.apache.velocity.tools:velocity-tools-view:3.1</p> </p> </details> <p></p> *** <!-- REMEDIATE-OPEN-PR-START --> - [ ] Check this box to open an automated fix PR <!-- REMEDIATE-OPEN-PR-END -->
True
CVE-2020-13959 (Medium) detected in velocity-tools-2.0.jar - ## CVE-2020-13959 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>velocity-tools-2.0.jar</b></p></summary> <p>VelocityTools is an integrated collection of Velocity subprojects with the common goal of creating tools and infrastructure to speed and ease development of both web and non-web applications using the Velocity template engine.</p> <p>Path to dependency file: /plugins/sitemesh/pom.xml</p> <p>Path to vulnerable library: /.m2/repository/org/apache/velocity/velocity-tools/2.0/velocity-tools-2.0.jar,/.m2/repository/org/apache/velocity/velocity-tools/2.0/velocity-tools-2.0.jar,/.m2/repository/org/apache/velocity/velocity-tools/2.0/velocity-tools-2.0.jar,/.m2/repository/org/apache/velocity/velocity-tools/2.0/velocity-tools-2.0.jar,/.m2/repository/org/apache/velocity/velocity-tools/2.0/velocity-tools-2.0.jar,/.m2/repository/org/apache/velocity/velocity-tools/2.0/velocity-tools-2.0.jar</p> <p> Dependency Hierarchy: - :x: **velocity-tools-2.0.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/liorzilberg/struts/commit/6950763af860884188f4080d19a18c5ede16cd74">6950763af860884188f4080d19a18c5ede16cd74</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The default error page for VelocityView in Apache Velocity Tools prior to 3.1 reflects back the vm file that was entered as part of the URL. An attacker can set an XSS payload file as this vm file in the URL which results in this payload being executed. XSS vulnerabilities allow attackers to execute arbitrary JavaScript in the context of the attacked website and the attacked user.
This can be abused to steal session cookies, perform requests in the name of the victim or for phishing attacks. <p>Publish Date: 2021-03-10 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-13959>CVE-2020-13959</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/advisories/GHSA-fh63-4r66-jc7v">https://github.com/advisories/GHSA-fh63-4r66-jc7v</a></p> <p>Release Date: 2021-03-10</p> <p>Fix Resolution: org.apache.velocity.tools:velocity-tools-view:3.1</p> </p> </details> <p></p> *** <!-- REMEDIATE-OPEN-PR-START --> - [ ] Check this box to open an automated fix PR <!-- REMEDIATE-OPEN-PR-END -->
non_main
cve medium detected in velocity tools jar cve medium severity vulnerability vulnerable library velocity tools jar velocitytools is an integrated collection of velocity subprojects with the common goal of creating tools and infrastructure to speed and ease development of both web and non web applications using the velocity template engine path to dependency file plugins sitemesh pom xml path to vulnerable library repository org apache velocity velocity tools velocity tools jar repository org apache velocity velocity tools velocity tools jar repository org apache velocity velocity tools velocity tools jar repository org apache velocity velocity tools velocity tools jar repository org apache velocity velocity tools velocity tools jar repository org apache velocity velocity tools velocity tools jar dependency hierarchy x velocity tools jar vulnerable library found in head commit a href found in base branch master vulnerability details the default error page for velocityview in apache velocity tools prior to reflects back the vm file that was entered as part of the url an attacker can set an xss payload file as this vm file in the url which results in this payload being executed xss vulnerabilities allow attackers to execute arbitrary javascript in the context of the attacked website and the attacked user this can be abused to steal session cookies perform requests in the name of the victim or for phishing attacks publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache velocity tools velocity tools view check this box to open an automated fix pr
0
776,768
27,264,635,511
IssuesEvent
2023-02-22 17:06:08
ascheid/itsg33-pbmm-issue-gen
https://api.github.com/repos/ascheid/itsg33-pbmm-issue-gen
opened
MA-3: Maintenance Tools
Priority: P3 ITSG-33 Suggested Assignment: IT Operations Group Class: Operational Control: MA-3
# Control Definition (A) The organization approves, controls, and monitors information system maintenance tools. # Class Operational # Supplemental Guidance This control addresses security-related issues associated with maintenance tools used specifically for diagnostic and repair actions on organizational information systems. Maintenance tools can include hardware, software, and firmware items. Maintenance tools are potential vehicles for transporting malicious code, either intentionally or unintentionally, into a facility and subsequently into organizational information systems. Maintenance tools can include, for example, hardware/software diagnostic test equipment and hardware/software packet sniffers. This control does not cover hardware/software components that may support information system maintenance, yet are a part of the system, such as the software implementing “ping,” “ls,” “ipconfig,” or the hardware and software implementing the monitoring port of an Ethernet switch. Related controls: MA-2, MA-5, MP-6 # Suggested Assignment IT Operations Group
1.0
MA-3: Maintenance Tools - # Control Definition (A) The organization approves, controls, and monitors information system maintenance tools. # Class Operational # Supplemental Guidance This control addresses security-related issues associated with maintenance tools used specifically for diagnostic and repair actions on organizational information systems. Maintenance tools can include hardware, software, and firmware items. Maintenance tools are potential vehicles for transporting malicious code, either intentionally or unintentionally, into a facility and subsequently into organizational information systems. Maintenance tools can include, for example, hardware/software diagnostic test equipment and hardware/software packet sniffers. This control does not cover hardware/software components that may support information system maintenance, yet are a part of the system, such as the software implementing “ping,” “ls,” “ipconfig,” or the hardware and software implementing the monitoring port of an Ethernet switch. Related controls: MA-2, MA-5, MP-6 # Suggested Assignment IT Operations Group
non_main
ma maintenance tools control definition a the organization approves controls and monitors information system maintenance tools class operational supplemental guidance this control addresses security related issues associated with maintenance tools used specifically for diagnostic and repair actions on organizational information systems maintenance tools can include hardware software and firmware items maintenance tools are potential vehicles for transporting malicious code either intentionally or unintentionally into a facility and subsequently into organizational information systems maintenance tools can include for example hardware software diagnostic test equipment and hardware software packet sniffers this control does not cover hardware software components that may support information system maintenance yet are a part of the system such as the software implementing “ping ” “ls ” “ipconfig ” or the hardware and software implementing the monitoring port of an ethernet switch related controls ma ma mp suggested assignment it operations group
0
5,803
30,743,622,135
IssuesEvent
2023-07-28 13:26:50
camunda/zeebe
https://api.github.com/repos/camunda/zeebe
opened
Reduce size of distribution
kind/toil area/maintainability
**Description** While I was working on the different Docker images, one thing notable is that the Zeebe layer/distribution is 174MB. Looking into it, it's almost entirely 3rd party dependencies. I've listed out the ones whose size are greater than 1MB: - rocksdbjni 59MB - grpc-xds 12MB - grpc-netty-shaded 9.9MB - scala-library 6MB - zstdjni 5.5MB - netty-tcnative-boringssl (1.1MB + 1.2MB + 1MB + 1.1MB + 1.0MB) 5.5MB - conscrypt 4.5MB - s3-2.20 3.3MB - guava-jre 3.0MB - proto-google-common-protos 2.0MB - reactor-core 1.8MB - log4j-core-2 1.8MB - spring-boot-autoconfigure 1.8MB - spring-core 1.8MB - spring-web 1.8MB - protobuf-java 1.7MB - jackson-databind 1.6MB - spring-boot 1.5MB - kotlin-stdlib 1.5MB - commons-compact 1.1MB By specifying an architecture during build, you could easily cut down the size for RocksDB and netty-tcnative, both of which pull in multiple pre-compiled binaries for each architecture: - rocksdbjni 59MB down to 19MB - netty-tcnative 5.5MB down to 1.2MB Then, since already include Netty everywhere, we don't need the `grpc-netty-shaded` dependency. We can simply use `grpc-netty` and our existing Netty dependency. That's another 9.9MB knocked off. It also seems possible we could exclude `conscrypt` if we're using `netty-tcnative`, so that would be another 4.5MB. But that would need to be verified. So opportunities to reduce from 174MB down to at least 115 MB. Possibly 103MB if we can also drop the requirement on `grpc-xds` (xDS being a service mesh protocol, a feature we're not really using for the gateway). No real urgency here, I think. The benefit is a smaller image - meaning faster to push and pull - and for the dependencies we drop, a slightly smaller CVE surface.
True
Reduce size of distribution - **Description** While I was working on the different Docker images, one thing notable is that the Zeebe layer/distribution is 174MB. Looking into it, it's almost entirely 3rd party dependencies. I've listed out the ones whose size are greater than 1MB: - rocksdbjni 59MB - grpc-xds 12MB - grpc-netty-shaded 9.9MB - scala-library 6MB - zstdjni 5.5MB - netty-tcnative-boringssl (1.1MB + 1.2MB + 1MB + 1.1MB + 1.0MB) 5.5MB - conscrypt 4.5MB - s3-2.20 3.3MB - guava-jre 3.0MB - proto-google-common-protos 2.0MB - reactor-core 1.8MB - log4j-core-2 1.8MB - spring-boot-autoconfigure 1.8MB - spring-core 1.8MB - spring-web 1.8MB - protobuf-java 1.7MB - jackson-databind 1.6MB - spring-boot 1.5MB - kotlin-stdlib 1.5MB - commons-compact 1.1MB By specifying an architecture during build, you could easily cut down the size for RocksDB and netty-tcnative, both of which pull in multiple pre-compiled binaries for each architecture: - rocksdbjni 59MB down to 19MB - netty-tcnative 5.5MB down to 1.2MB Then, since already include Netty everywhere, we don't need the `grpc-netty-shaded` dependency. We can simply use `grpc-netty` and our existing Netty dependency. That's another 9.9MB knocked off. It also seems possible we could exclude `conscrypt` if we're using `netty-tcnative`, so that would be another 4.5MB. But that would need to be verified. So opportunities to reduce from 174MB down to at least 115 MB. Possibly 103MB if we can also drop the requirement on `grpc-xds` (xDS being a service mesh protocol, a feature we're not really using for the gateway). No real urgency here, I think. The benefit is a smaller image - meaning faster to push and pull - and for the dependencies we drop, a slightly smaller CVE surface.
main
reduce size of distribution description while i was working on the different docker images one thing notable is that the zeebe layer distribution is looking into it it s almost entirely party dependencies i ve listed out the ones whose size are greater than rocksdbjni grpc xds grpc netty shaded scala library zstdjni netty tcnative boringssl conscrypt guava jre proto google common protos reactor core core spring boot autoconfigure spring core spring web protobuf java jackson databind spring boot kotlin stdlib commons compact by specifying an architecture during build you could easily cut down the size for rocksdb and netty tcnative both of which pull in multiple pre compiled binaries for each architecture rocksdbjni down to netty tcnative down to then since already include netty everywhere we don t need the grpc netty shaded dependency we can simply use grpc netty and our existing netty dependency that s another knocked off it also seems possible we could exclude conscrypt if we re using netty tcnative so that would be another but that would need to be verified so opportunities to reduce from down to at least mb possibly if we can also drop the requirement on grpc xds xds being a service mesh protocol a feature we re not really using for the gateway no real urgency here i think the benefit is a smaller image meaning faster to push and pull and for the dependencies we drop a slightly smaller cve surface
1
2,650
8,102,838,058
IssuesEvent
2018-08-13 04:48:56
openshiftio/openshift.io
https://api.github.com/repos/openshiftio/openshift.io
closed
Jenkins is becoming Idle for pipeline build in OSIO launcher flow.
SEV2-high area/architecture/build priority/P4 sprint/next team/build-cd type/bug
Due to this Jenkins issue, No build could not able to see the finish line. This is a critical issue from the build pipeline endpoint. Please check the below screenshot. ![jekins_idle](https://user-images.githubusercontent.com/11207106/38484498-754ac61e-3bf4-11e8-9dfd-f82601e17d9b.png)
1.0
Jenkins is becoming Idle for pipeline build in OSIO launcher flow. - Due to this Jenkins issue, No build could not able to see the finish line. This is a critical issue from the build pipeline endpoint. Please check the below screenshot. ![jekins_idle](https://user-images.githubusercontent.com/11207106/38484498-754ac61e-3bf4-11e8-9dfd-f82601e17d9b.png)
non_main
jenkins is becoming idle for pipeline build in osio launcher flow due to this jenkins issue no build could not able to see the finish line this is a critical issue from the build pipeline endpoint please check the below screenshot
0
2,208
7,802,987,465
IssuesEvent
2018-06-10 18:35:44
OpenLightingProject/ola
https://api.github.com/repos/OpenLightingProject/ola
closed
libftdi API update
Component-Plugin Language-C++ Maintainability OpSys-Linux
Hi, The Debian maintainer of libftdi filed a [bug](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=810374) against ola. I tried the simple fix ("s/libftdi-dev/libftdi1-dev/" over debian/control), but that results in no FTDI plugin being compiled. Someone will need to look at the changes that were made in the new FTDI library and update ola accordingly. Mean time, I'll have to still compile against the old library.
True
libftdi API update - Hi, The Debian maintainer of libftdi filed a [bug](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=810374) against ola. I tried the simple fix ("s/libftdi-dev/libftdi1-dev/" over debian/control), but that results in no FTDI plugin being compiled. Someone will need to look at the changes that were made in the new FTDI library and update ola accordingly. Mean time, I'll have to still compile against the old library.
main
libftdi api update hi the debian maintainer of libftdi filed a against ola i tried the simple fix s libftdi dev dev over debian control but that results in no ftdi plugin being compiled someone will need to look at the changes that were made in the new ftdi library and update ola accordingly mean time i ll have to still compile against the old library
1
371,185
10,962,670,445
IssuesEvent
2019-11-27 17:46:59
kubernetes/kubernetes
https://api.github.com/repos/kubernetes/kubernetes
closed
Kubectl version --server should return the server version
kind/feature priority/awaiting-more-evidence sig/cli
<!-- Please only use this template for submitting enhancement requests --> **What would you like to be added**: I would like for `kubectl version --server` to return the server version as `kubectl version --client` returns the client version. **Why is this needed**: It will make it easier to write automating scripts for checking the server version. It will maintain consistency as there already is a `--client` flag that returns the client version.
1.0
Kubectl version --server should return the server version - <!-- Please only use this template for submitting enhancement requests --> **What would you like to be added**: I would like for `kubectl version --server` to return the server version as `kubectl version --client` returns the client version. **Why is this needed**: It will make it easier to write automating scripts for checking the server version. It will maintain consistency as there already is a `--client` flag that returns the client version.
non_main
kubectl version server should return the server version what would you like to be added i would like for kubectl version server to return the server version as kubectl version client returns the client version why is this needed it will make it easier to write automating scripts for checking the server version it will maintain consistency as there already is a client flag that returns the client version
0
1,226
5,218,843,895
IssuesEvent
2017-01-26 17:27:01
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
apache2_module fails for PHP 5.6 even though it is already enabled
affects_2.2 bug_report waiting_on_maintainer
##### ISSUE TYPE - Bug Report ##### COMPONENT NAME apache2_module ##### ANSIBLE VERSION <!--- Paste verbatim output from “ansible --version” between quotes below --> ``` ansible 2.2.0.0 config file = /Users/nick/Workspace/-redacted-/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION hostfile & roles_path ##### OS / ENVIRONMENT Running Ansible on macOS Sierra, target server is Ubuntu Xenial ##### SUMMARY Enabling the Apache2 module "[php5.6](https://launchpad.net/~ondrej/+archive/ubuntu/php)" with apache2_module fails even though the module is already enabled. This is the same problem as #5559 and #4744 but with a different package. This module is called `php5.6` but identifies itself in `apache2ctl -M` as `php5_module`. ##### STEPS TO REPRODUCE ``` - name: Enable PHP 5.6 apache2_module: state=present name=php5.6 ``` ##### ACTUAL RESULTS ``` failed: [nicksherlock.com] (item=php5.6) => { "failed": true, "invocation": { "module_args": { "force": false, "name": "php5.6", "state": "present" }, "module_name": "apache2_module" }, "item": "php5.6", "msg": "Failed to set module php5.6 to enabled: Considering dependency mpm_prefork for php5.6:\nConsidering conflict mpm_event for mpm_prefork:\nConsidering conflict mpm_worker for mpm_prefork:\nModule mpm_prefork already enabled\nConsidering conflict php5 for php5.6:\nModule php5.6 already enabled\n", "rc": 0, "stderr": "", "stdout": "Considering dependency mpm_prefork for php5.6:\nConsidering conflict mpm_event for mpm_prefork:\nConsidering conflict mpm_worker for mpm_prefork:\nModule mpm_prefork already enabled\nConsidering conflict php5 for php5.6:\nModule php5.6 already enabled\n", "stdout_lines": [ "Considering dependency mpm_prefork for php5.6:", "Considering conflict mpm_event for mpm_prefork:", "Considering conflict mpm_worker for mpm_prefork:", "Module mpm_prefork already enabled", "Considering conflict php5 for php5.6:", "Module php5.6 already enabled" ] } ``` Running it manually
on the server gives: ``` # a2enmod php5.6 Considering dependency mpm_prefork for php5.6: Considering conflict mpm_event for mpm_prefork: Considering conflict mpm_worker for mpm_prefork: Module mpm_prefork already enabled Considering conflict php5 for php5.6: Module php5.6 already enabled # echo $? 0 ``` This is php5.6.load: ``` # Conflicts: php5 # Depends: mpm_prefork LoadModule php5_module /usr/lib/apache2/modules/libphp5.6.so ``` Note that manually running "a2enmod php5.6" on the server directly gives a 0 exit status to signal success, can't apache2_module just check that instead of doing parsing with a regex? What if I wanted several sets of conf files in `mods-available` for the same module? (e.g. php-prod.load, php-dev.load both loading the same module, but with different config) Wouldn't that make it impossible for Ansible to manage those with apache2_module? It just seems odd that Ansible requires that the module's binary name be the same as the name of its .load file.
True
apache2_module fails for PHP 5.6 even though it is already enabled - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME apache2_module ##### ANSIBLE VERSION <!--- Paste verbatim output from “ansible --version” between quotes below --> ``` ansible 2.2.0.0 config file = /Users/nick/Workspace/-redacted-/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION hostfile & roles_path ##### OS / ENVIRONMENT Running Ansible on macOS Sierra, target server is Ubuntu Xenial ##### SUMMARY Enabling the Apache2 module "[php5.6](https://launchpad.net/~ondrej/+archive/ubuntu/php)" with apache2_module fails even though the module is already enabled. This is the same problem as #5559 and #4744 but with a different package. This module is called `php5.6` but identifies itself in `apache2ctl -M` as `php5_module`. ##### STEPS TO REPRODUCE ``` - name: Enable PHP 5.6 apache2_module: state=present name=php5.6 ``` ##### ACTUAL RESULTS ``` failed: [nicksherlock.com] (item=php5.6) => { "failed": true, "invocation": { "module_args": { "force": false, "name": "php5.6", "state": "present" }, "module_name": "apache2_module" }, "item": "php5.6", "msg": "Failed to set module php5.6 to enabled: Considering dependency mpm_prefork for php5.6:\nConsidering conflict mpm_event for mpm_prefork:\nConsidering conflict mpm_worker for mpm_prefork:\nModule mpm_prefork already enabled\nConsidering conflict php5 for php5.6:\nModule php5.6 already enabled\n", "rc": 0, "stderr": "", "stdout": "Considering dependency mpm_prefork for php5.6:\nConsidering conflict mpm_event for mpm_prefork:\nConsidering conflict mpm_worker for mpm_prefork:\nModule mpm_prefork already enabled\nConsidering conflict php5 for php5.6:\nModule php5.6 already enabled\n", "stdout_lines": [ "Considering dependency mpm_prefork for php5.6:", "Considering conflict mpm_event for mpm_prefork:", "Considering conflict mpm_worker for mpm_prefork:", "Module mpm_prefork already enabled", "Considering conflict php5 for
php5.6:", "Module php5.6 already enabled" ] } ``` Running it manually on the server gives: ``` # a2enmod php5.6 Considering dependency mpm_prefork for php5.6: Considering conflict mpm_event for mpm_prefork: Considering conflict mpm_worker for mpm_prefork: Module mpm_prefork already enabled Considering conflict php5 for php5.6: Module php5.6 already enabled # echo $? 0 ``` This is php5.6.load: ``` # Conflicts: php5 # Depends: mpm_prefork LoadModule php5_module /usr/lib/apache2/modules/libphp5.6.so ``` Note that manually running "a2enmod php5.6" on the server directly gives a 0 exit status to signal success, can't apache2_module just check that instead of doing parsing with a regex? What if I wanted several sets of conf files in `mods-available` for the same module? (e.g. php-prod.load, php-dev.load both loading the same module, but with different config) Wouldn't that make it impossible for Ansible to manage those with apache2_module? It just seems odd that Ansible requires that the module's binary name be the same as the name of its .load file.
main
module fails for php even though it is already enabled issue type bug report component name module ansible version ansible config file users nick workspace redacted ansible cfg configured module search path default w o overrides configuration hostfile roles path os environment running ansible on macos sierra target server is ubuntu xenial summary enabling the module with module fails even though the module is already enabled this is the same problem as and but with a different package this module is called but identifies itself in m as module steps to reproduce name enable php module state present name actual results failed item failed true invocation module args force false name state present module name module item msg failed to set module to enabled considering dependency mpm prefork for nconsidering conflict mpm event for mpm prefork nconsidering conflict mpm worker for mpm prefork nmodule mpm prefork already enabled nconsidering conflict for nmodule already enabled n rc stderr stdout considering dependency mpm prefork for nconsidering conflict mpm event for mpm prefork nconsidering conflict mpm worker for mpm prefork nmodule mpm prefork already enabled nconsidering conflict for nmodule already enabled n stdout lines considering dependency mpm prefork for considering conflict mpm event for mpm prefork considering conflict mpm worker for mpm prefork module mpm prefork already enabled considering conflict for module already enabled running it manually on the server gives considering dependency mpm prefork for considering conflict mpm event for mpm prefork considering conflict mpm worker for mpm prefork module mpm prefork already enabled considering conflict for module already enabled echo this is load conflicts depends mpm prefork loadmodule module usr lib modules so note that manually running on the server directly gives a exit status to signal success can t module just check that instead of doing parsing with a regex what if i wanted several sets of conf files 
in mods available for the same module e g php prod load php dev load both loading the same module but with different config wouldn t that make it impossible for ansible to manage those with module it just seems odd that ansible requires that the module s binary name be the same as the name of its load file
1
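The apache2_module report above asks whether Ansible could simply trust `a2enmod`'s exit status instead of regex-matching module names against `apache2ctl -M` output. A minimal Python sketch of that idea (the `enable_module` helper and its `a2enmod` parameter are hypothetical illustrations, not Ansible's actual implementation):

```python
import subprocess

def enable_module(module: str, a2enmod: str = "a2enmod") -> bool:
    """Hypothetical helper: trust the tool's exit status instead of
    parsing its output. a2enmod exits 0 both when it enables a module
    and when the module is already enabled, so the return code alone
    distinguishes success from failure."""
    result = subprocess.run([a2enmod, module], capture_output=True, text=True)
    return result.returncode == 0
```

Checking only the exit code would sidestep the mismatch between the `.load` file name (`php5.6`) and the module's self-reported name (`php5_module`), since no output parsing is involved.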
2,639
8,960,177,921
IssuesEvent
2019-01-28 04:11:54
portage-brew/portage-brew-staging-and-evolution
https://api.github.com/repos/portage-brew/portage-brew-staging-and-evolution
closed
Compose a Formal Announcement for Upstream
Needs Discussion Needs Maintainer Feedback enhancement help wanted
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;We need to publicize ourselves somewhat better, but I'm having trouble thinking of something to use for that purpose that remains in the spirit of #11 at the moment due to…remaining distaste over how affairs were handled upstream that I'm still processing. I'm thus leaving this issue open for idea submissions, wording proposals, and rough drafts (either as comments here or PRs to close this issue.) CC: - @blogabe (Since I've seen you doing marketing and you might therefore have an opinion here.) - @portage-brew/maintainers in general.
True
Compose a Formal Announcement for Upstream - &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;We need to publicize ourselves somewhat better, but I'm having trouble thinking of something to use for that purpose that remains in the spirit of #11 at the moment due to…remaining distaste over how affairs were handled upstream that I'm still processing. I'm thus leaving this issue open for idea submissions, wording proposals, and rough drafts (either as comments here or PRs to close this issue.) CC: - @blogabe (Since I've seen you doing marketing and you might therefore have an opinion here.) - @portage-brew/maintainers in general.
main
compose a formal announcement for upstream nbsp nbsp nbsp nbsp nbsp we need to publicize ourselves somewhat better but i m having trouble thinking of something to use for that purpose that remains in the spirit of at the moment due to…remaining distaste over how affairs were handled upstream that i m still processing i m thus leaving this issue open for idea submissions wording proposals and rough drafts either as comments here or prs to close this issue cc blogabe since i ve seen you doing marketing and you might therefore have an opinion here portage brew maintainers in general
1
311,695
26,806,045,735
IssuesEvent
2023-02-01 18:22:13
art-here/art-here-backend
https://api.github.com/repos/art-here/art-here-backend
closed
Google login tests, implement additional member features
cleanup test feat
## 🤷 Features to implement Revise the Google login code. Write Google login test code. Implement additional member features. ## 🔨 Detailed tasks - [x] Frontend GitHub Actions complete - [x] Google login tests implemented - [x] Google login code revised ## 📄 Notes ## ⏰ Estimated duration 3 days
1.0
Google login tests, implement additional member features - ## 🤷 Features to implement Revise the Google login code. Write Google login test code. Implement additional member features. ## 🔨 Detailed tasks - [x] Frontend GitHub Actions complete - [x] Google login tests implemented - [x] Google login code revised ## 📄 Notes ## ⏰ Estimated duration 3 days
non_main
google login tests implement additional member features 🤷 features to implement revise the google login code write google login test code implement additional member features 🔨 detailed tasks frontend github actions complete google login tests implemented google login code revised 📄 notes ⏰ estimated duration
0
231,667
17,703,305,722
IssuesEvent
2021-08-25 02:44:05
fuchicorp/main
https://api.github.com/repos/fuchicorp/main
opened
Deploy ELK-Stack to bastion-host
Kubernetes Priority High Bastion Require Documentation elastic-search kibana basic
Hello ! We are going to deploy ELK-Stack to our bastion-host. Here's the Github repo with documentation to follow! https://github.com/fuchicorp/elk-stack
1.0
Deploy ELK-Stack to bastion-host - Hello ! We are going to deploy ELK-Stack to our bastion-host. Here's the Github repo with documentation to follow! https://github.com/fuchicorp/elk-stack
non_main
deploy elk stack to bastion host hello we are going to deploy elk stack to our bastion host here s the github repo with documentation to follow
0
1,816
6,577,318,165
IssuesEvent
2017-09-12 00:04:20
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
Ansible ec2_asg module expiring token for AWS with wait_timeout more than 20 mins
affects_2.3 aws bug_report cloud waiting_on_maintainer
##### ISSUE TYPE Bug Report ##### COMPONENT NAME ec2_asg ##### ANSIBLE VERSION N/A ##### SUMMARY I am using the Ansible ec2_asg module. I have my credentials file generated with TEMP AWS credentials in `~/.aws/credentials` every few minutes. I have a wait timeout of more than 15 mins, so in between that time the credentials are expiring and my Ansible playbook fails. That is because Ansible reads creds only once during start and then keeps using them. If I manually use connect_to_aws at every place in ec2_asg then it works fine. Is there any easy fix for that
True
Ansible ec2_asg module expiring token for AWS with wait_timeout more than 20 mins - ##### ISSUE TYPE Bug Report ##### COMPONENT NAME ec2_asg ##### ANSIBLE VERSION N/A ##### SUMMARY I am using the Ansible ec2_asg module. I have my credentials file generated with TEMP AWS credentials in `~/.aws/credentials` every few minutes. I have a wait timeout of more than 15 mins, so in between that time the credentials are expiring and my Ansible playbook fails. That is because Ansible reads creds only once during start and then keeps using them. If I manually use connect_to_aws at every place in ec2_asg then it works fine. Is there any easy fix for that
main
ansible asg module expiring token for aws with wait timeout more than mins issue type bug report component name asg ansible version n a summary i am using the ansible asg module i have my credentials file generated with temp aws credentials in aws credentials every few minutes i have a wait timeout of more than mins so in between that time the credentials are expiring and my ansible playbook fails that is because ansible reads creds only once during start and then keeps using them if i manually use connect to aws at every place in asg then it works fine is there any easy fix for that
1
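The rotating-credentials problem in the ec2_asg report above stems from reading `~/.aws/credentials` once at startup and caching it. A hedged Python sketch of the direction the reporter suggests — re-reading the file before each call so externally rotated temporary credentials are picked up (the `read_aws_credentials` helper is illustrative only, not the ec2_asg module's code):

```python
import configparser
from pathlib import Path

def read_aws_credentials(path: Path, profile: str = "default") -> dict:
    """Illustrative sketch, not the ec2_asg fix: parse the credentials
    file fresh on every call instead of caching it once at startup,
    so credentials rotated by an external process are always current."""
    config = configparser.ConfigParser()
    config.read(path)
    section = config[profile]
    return {
        "aws_access_key_id": section["aws_access_key_id"],
        "aws_secret_access_key": section["aws_secret_access_key"],
    }
```

Calling this before each AWS request (rather than once at connection time) is the same effect the reporter got by manually invoking connect_to_aws at every call site.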
4,830
24,898,554,336
IssuesEvent
2022-10-28 18:17:00
centerofci/mathesar
https://api.github.com/repos/centerofci/mathesar
closed
ForeignKey error
type: bug work: backend status: ready restricted: maintainers
I'm not sure what happened, but somehow I got Mathesar into a state where all API requests to fetch the list of tables for my Library schema returned the following error. This made Mathesar unusable. <details> <summary>Traceback</summary> ``` Environment: Request Method: GET Request URL: http://localhost:8000/api/db/v0/tables/?schema=3&limit=500 Django Version: 3.1.14 Python Version: 3.9.9 Installed Applications: ['django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'rest_framework', 'django_filters', 'django_property_filter', 'mathesar'] Installed Middleware: ['django.middleware.security.SecurityMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware'] Traceback (most recent call last): File "/usr/local/lib/python3.9/site-packages/django/core/handlers/exception.py", line 47, in inner response = get_response(request) File "/usr/local/lib/python3.9/site-packages/django/core/handlers/base.py", line 181, in _get_response response = wrapped_callback(request, *callback_args, **callback_kwargs) File "/usr/local/lib/python3.9/site-packages/django/views/decorators/csrf.py", line 54, in wrapped_view return view_func(*args, **kwargs) File "/usr/local/lib/python3.9/site-packages/rest_framework/viewsets.py", line 125, in view return self.dispatch(request, *args, **kwargs) File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 509, in dispatch response = self.handle_exception(exc) File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 466, in handle_exception response = exception_handler(exc, context) File "/code/mathesar/exception_handlers.py", line 55, in 
mathesar_exception_handler raise exc File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 506, in dispatch response = handler(request, *args, **kwargs) File "/usr/local/lib/python3.9/site-packages/rest_framework/mixins.py", line 40, in list page = self.paginate_queryset(queryset) File "/usr/local/lib/python3.9/site-packages/rest_framework/generics.py", line 171, in paginate_queryset return self.paginator.paginate_queryset(queryset, self.request, view=self) File "/usr/local/lib/python3.9/site-packages/rest_framework/pagination.py", line 395, in paginate_queryset return list(queryset[self.offset:self.offset + self.limit]) File "/usr/local/lib/python3.9/site-packages/django/db/models/query.py", line 269, in __len__ self._fetch_all() File "/code/mathesar/utils/prefetch.py", line 158, in _fetch_all prefetcher.fetch(obj_list, name, self.model, forwarders) File "/code/mathesar/utils/prefetch.py", line 270, in fetch related_data = self.filter(data_mapping.keys(), data_mapping.values()) File "/code/mathesar/models/base.py", line 267, in <lambda> filter=lambda oids, tables: reflect_tables_from_oids( File "/code/db/tables/operations/select.py", line 38, in reflect_tables_from_oids table_oids_to_sa_tables[table_oid] = reflect_table( File "/code/db/tables/operations/select.py", line 19, in reflect_table return Table(name, metadata, schema=schema, autoload_with=autoload_with, extend_existing=True) File "<string>", line 2, in __new__ <source code not available> File "/usr/local/lib/python3.9/site-packages/sqlalchemy/util/deprecations.py", line 298, in warned return fn(*args, **kwargs) File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/schema.py", line 586, in __new__ table._init_existing(*args, **kw) File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/schema.py", line 787, in _init_existing self._autoload( File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/schema.py", line 705, in _autoload conn_insp.reflect_table( File 
"/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/reflection.py", line 779, in reflect_table self._reflect_column( File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/reflection.py", line 903, in _reflect_column table.append_column(col, replace_existing=True) File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/schema.py", line 883, in append_column column._set_parent_with_dispatch( File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/base.py", line 1025, in _set_parent_with_dispatch self._set_parent(parent, **kw) File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/schema.py", line 1774, in _set_parent table.foreign_keys.remove(fk) Exception Type: KeyError at /api/db/v0/tables/ Exception Value: ForeignKey('Library Management.Publishers.id') ``` </details> Restarting Docker fixed the issue for me. CC @mathemancer
True
ForeignKey error - I'm not sure what happened, but somehow I got Mathesar into a state where all API requests to fetch the list of tables for my Library schema returned the following error. This made Mathesar unusable. <details> <summary>Traceback</summary> ``` Environment: Request Method: GET Request URL: http://localhost:8000/api/db/v0/tables/?schema=3&limit=500 Django Version: 3.1.14 Python Version: 3.9.9 Installed Applications: ['django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'rest_framework', 'django_filters', 'django_property_filter', 'mathesar'] Installed Middleware: ['django.middleware.security.SecurityMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware'] Traceback (most recent call last): File "/usr/local/lib/python3.9/site-packages/django/core/handlers/exception.py", line 47, in inner response = get_response(request) File "/usr/local/lib/python3.9/site-packages/django/core/handlers/base.py", line 181, in _get_response response = wrapped_callback(request, *callback_args, **callback_kwargs) File "/usr/local/lib/python3.9/site-packages/django/views/decorators/csrf.py", line 54, in wrapped_view return view_func(*args, **kwargs) File "/usr/local/lib/python3.9/site-packages/rest_framework/viewsets.py", line 125, in view return self.dispatch(request, *args, **kwargs) File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 509, in dispatch response = self.handle_exception(exc) File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 466, in handle_exception response = exception_handler(exc, context) File 
"/code/mathesar/exception_handlers.py", line 55, in mathesar_exception_handler raise exc File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 506, in dispatch response = handler(request, *args, **kwargs) File "/usr/local/lib/python3.9/site-packages/rest_framework/mixins.py", line 40, in list page = self.paginate_queryset(queryset) File "/usr/local/lib/python3.9/site-packages/rest_framework/generics.py", line 171, in paginate_queryset return self.paginator.paginate_queryset(queryset, self.request, view=self) File "/usr/local/lib/python3.9/site-packages/rest_framework/pagination.py", line 395, in paginate_queryset return list(queryset[self.offset:self.offset + self.limit]) File "/usr/local/lib/python3.9/site-packages/django/db/models/query.py", line 269, in __len__ self._fetch_all() File "/code/mathesar/utils/prefetch.py", line 158, in _fetch_all prefetcher.fetch(obj_list, name, self.model, forwarders) File "/code/mathesar/utils/prefetch.py", line 270, in fetch related_data = self.filter(data_mapping.keys(), data_mapping.values()) File "/code/mathesar/models/base.py", line 267, in <lambda> filter=lambda oids, tables: reflect_tables_from_oids( File "/code/db/tables/operations/select.py", line 38, in reflect_tables_from_oids table_oids_to_sa_tables[table_oid] = reflect_table( File "/code/db/tables/operations/select.py", line 19, in reflect_table return Table(name, metadata, schema=schema, autoload_with=autoload_with, extend_existing=True) File "<string>", line 2, in __new__ <source code not available> File "/usr/local/lib/python3.9/site-packages/sqlalchemy/util/deprecations.py", line 298, in warned return fn(*args, **kwargs) File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/schema.py", line 586, in __new__ table._init_existing(*args, **kw) File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/schema.py", line 787, in _init_existing self._autoload( File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/schema.py", line 705, in 
_autoload conn_insp.reflect_table( File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/reflection.py", line 779, in reflect_table self._reflect_column( File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/reflection.py", line 903, in _reflect_column table.append_column(col, replace_existing=True) File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/schema.py", line 883, in append_column column._set_parent_with_dispatch( File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/base.py", line 1025, in _set_parent_with_dispatch self._set_parent(parent, **kw) File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/schema.py", line 1774, in _set_parent table.foreign_keys.remove(fk) Exception Type: KeyError at /api/db/v0/tables/ Exception Value: ForeignKey('Library Management.Publishers.id') ``` </details> Restarting Docker fixed the issue for me. CC @mathemancer
main
foreignkey error i m not sure what happened but somehow i got mathesar into a state where all api requests to fetch the list of tables for my library schema returned the following error this made mathesar unusable traceback environment request method get request url django version python version installed applications django contrib admin django contrib auth django contrib contenttypes django contrib sessions django contrib messages django contrib staticfiles rest framework django filters django property filter mathesar installed middleware django middleware security securitymiddleware django contrib sessions middleware sessionmiddleware django middleware common commonmiddleware django middleware csrf csrfviewmiddleware django contrib auth middleware authenticationmiddleware django contrib messages middleware messagemiddleware django middleware clickjacking xframeoptionsmiddleware traceback most recent call last file usr local lib site packages django core handlers exception py line in inner response get response request file usr local lib site packages django core handlers base py line in get response response wrapped callback request callback args callback kwargs file usr local lib site packages django views decorators csrf py line in wrapped view return view func args kwargs file usr local lib site packages rest framework viewsets py line in view return self dispatch request args kwargs file usr local lib site packages rest framework views py line in dispatch response self handle exception exc file usr local lib site packages rest framework views py line in handle exception response exception handler exc context file code mathesar exception handlers py line in mathesar exception handler raise exc file usr local lib site packages rest framework views py line in dispatch response handler request args kwargs file usr local lib site packages rest framework mixins py line in list page self paginate queryset queryset file usr local lib site packages rest framework 
generics py line in paginate queryset return self paginator paginate queryset queryset self request view self file usr local lib site packages rest framework pagination py line in paginate queryset return list queryset file usr local lib site packages django db models query py line in len self fetch all file code mathesar utils prefetch py line in fetch all prefetcher fetch obj list name self model forwarders file code mathesar utils prefetch py line in fetch related data self filter data mapping keys data mapping values file code mathesar models base py line in filter lambda oids tables reflect tables from oids file code db tables operations select py line in reflect tables from oids table oids to sa tables reflect table file code db tables operations select py line in reflect table return table name metadata schema schema autoload with autoload with extend existing true file line in new file usr local lib site packages sqlalchemy util deprecations py line in warned return fn args kwargs file usr local lib site packages sqlalchemy sql schema py line in new table init existing args kw file usr local lib site packages sqlalchemy sql schema py line in init existing self autoload file usr local lib site packages sqlalchemy sql schema py line in autoload conn insp reflect table file usr local lib site packages sqlalchemy engine reflection py line in reflect table self reflect column file usr local lib site packages sqlalchemy engine reflection py line in reflect column table append column col replace existing true file usr local lib site packages sqlalchemy sql schema py line in append column column set parent with dispatch file usr local lib site packages sqlalchemy sql base py line in set parent with dispatch self set parent parent kw file usr local lib site packages sqlalchemy sql schema py line in set parent table foreign keys remove fk exception type keyerror at api db tables exception value foreignkey library management publishers id restarting docker fixed the 
issue for me cc mathemancer
1
568,373
16,977,591,510
IssuesEvent
2021-06-30 02:55:27
googleapis/nodejs-datastore
https://api.github.com/repos/googleapis/nodejs-datastore
closed
getProjectId never resolves
api: datastore priority: p2 type: bug
#### Environment details - OS: Ubuntu 20.04.2 LTS - Node.js version: v14.17.0 - npm version: 6.14.13 - `@google-cloud/datastore` version: 6.4.1 #### Steps to reproduce Run the following code: ```js const { Datastore } = require('@google-cloud/datastore'); const datastore = new Datastore(); datastore.getProjectId() .then(projectId => console.log(`projectId: ${projectId}`)) .catch(err => console.log(`err ${err}`)); ``` Expected output is `projectId: <my-project-id>` or `err <error>` but no output is shown. ### Workaround I had a quick look and it seems like `getProjectId` is promisified even though it already returns a promise. I added `getProjectId` to the promisify exclude list in `/node_modules/@google-cloud/datastore/build/src/index.js:1484`: ```js promisify_1.promisifyAll(Datastore, { exclude: [ 'double', 'isDouble', 'geoPoint', 'isGeoPoint', 'index', 'int', 'isInt', 'createQuery', 'key', 'isKey', 'keyFromLegacyUrlsafe', 'transaction', 'getProjectId' ], }); ``` Also found at: https://github.com/googleapis/nodejs-datastore/blob/ccc466a5fce8cb78a0980f961bfa473c56a12c38/src/index.ts#L1747-L1762 With `getProjectId` excluded, the output is `projectId: <my-project-id>` which is the expected output.
1.0
getProjectId never resolves - #### Environment details - OS: Ubuntu 20.04.2 LTS - Node.js version: v14.17.0 - npm version: 6.14.13 - `@google-cloud/datastore` version: 6.4.1 #### Steps to reproduce Run the following code: ```js const { Datastore } = require('@google-cloud/datastore'); const datastore = new Datastore(); datastore.getProjectId() .then(projectId => console.log(`projectId: ${projectId}`)) .catch(err => console.log(`err ${err}`)); ``` Expected output is `projectId: <my-project-id>` or `err <error>` but no output is shown. ### Workaround I had a quick look and it seems like `getProjectId` is promisified even though it already returns a promise. I added `getProjectId` to the promisify exclude list in `/node_modules/@google-cloud/datastore/build/src/index.js:1484`: ```js promisify_1.promisifyAll(Datastore, { exclude: [ 'double', 'isDouble', 'geoPoint', 'isGeoPoint', 'index', 'int', 'isInt', 'createQuery', 'key', 'isKey', 'keyFromLegacyUrlsafe', 'transaction', 'getProjectId' ], }); ``` Also found at: https://github.com/googleapis/nodejs-datastore/blob/ccc466a5fce8cb78a0980f961bfa473c56a12c38/src/index.ts#L1747-L1762 With `getProjectId` excluded, the output is `projectId: <my-project-id>` which is the expected output.
non_main
getprojectid never resolves environment details os ubuntu lts node js version npm version google cloud datastore version steps to reproduce run the following code js const datastore require google cloud datastore const datastore new datastore datastore getprojectid then projectid console log projectid projectid catch err console log err err expected output is projectid or err but no output is shown workaround i had a quick look and it seems like getprojectid is promisified even though it already returns a promise i added getprojectid to the promisify exclude list in node modules google cloud datastore build src index js js promisify promisifyall datastore exclude double isdouble geopoint isgeopoint index int isint createquery key iskey keyfromlegacyurlsafe transaction getprojectid also found at with getprojectid excluded the output is projectid which is the expected output
0
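The `getProjectId` hang above is a double-promisification bug: a callback-style wrapper only resolves when the wrapped function invokes the callback it was handed, and a method that already returns a promise never does. A toy Python analogue (not the real `@google-cloud/promisify` code) that reproduces the hang in miniature:

```python
def promisify(fn):
    """Toy analogue of a callback-style promisifier: the wrapper only
    'resolves' (stores a value) when fn invokes the callback it is given."""
    def wrapped(*args):
        result = {}

        def callback(err, value):
            result["value"] = value

        fn(*args, callback)
        # If fn never calls `callback` -- e.g. because it already
        # returns its result directly -- `result` stays empty forever,
        # mirroring a promise that never settles.
        return result
    return wrapped

# A genuine callback-style function settles as expected.
settled = promisify(lambda cb: cb(None, "ok"))()
# A function that ignores the callback (it is already async)
# never settles -- the getProjectId bug in miniature.
hung = promisify(lambda cb: "already-async")()
```

The reporter's workaround matches this picture: adding `getProjectId` to the promisify exclude list leaves the already-promise-returning method untouched.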
524,791
15,223,548,615
IssuesEvent
2021-02-18 02:57:11
LBL-EESA/TECA
https://api.github.com/repos/LBL-EESA/TECA
opened
normalize coordinates handling z axis
2_medium_priority
currently normalize_coordinates re-orders so that coordinate axes are in ascending order. however, with pressure levels the correct order is descending. descending order should be the default. or it may be possible to detect when the z axis is pressure levels, perhaps by units or names.
1.0
normalize coordinates handling z axis - currently normalize_coordinates re-orders so that coordinate axes are in ascending order. however, with pressure levels the correct order is descending. descending order should be the default. or it may be possible to detect when the z axis is pressure levels, perhaps by units or names.
non_main
normalize coordinates handling z axis currently normalize coordinates re orders so that coordinate axes are in ascending order however with pressure levels the correct order is descending descending order should be the default or it may be possible to detect when the z axis is pressure levels perhaps by units or names
0
83,444
10,352,661,570
IssuesEvent
2019-09-05 09:43:51
Royal-Navy/standards-toolkit
https://api.github.com/repos/Royal-Navy/standards-toolkit
closed
Docs Site sidebar redesign
Docs Site In Progress - Development Signed Off - Design enhancement
# Overview After conducting multiple User Research sessions on the Docs Site, a recurring issue has emerged with the usability of the sidebar. Currently, the active state uses a blue colour to signify the active page in sidebar, however we also use a slightly darker blue to highlight a parent link in the sidebar. This causes the parent link to look active, even when it isn't. The proposed solution: - Remove the blue highlight from the parent links to prevent confusion as to what is currently the active page. - Placing all sub links inside a collapsable menu underneath the Parent link. - Removing the background to reduce the overall impact of the sidebar <img width="983" alt="Screenshot 2019-08-22 at 12 10 57" src="https://user-images.githubusercontent.com/48090803/63510362-47504880-c4d6-11e9-8234-7c33f604fd45.png">
1.0
Docs Site sidebar redesign - # Overview After conducting multiple User Research sessions on the Docs Site, a recurring issue has emerged with the usability of the sidebar. Currently, the active state uses a blue colour to signify the active page in sidebar, however we also use a slightly darker blue to highlight a parent link in the sidebar. This causes the parent link to look active, even when it isn't. The proposed solution: - Remove the blue highlight from the parent links to prevent confusion as to what is currently the active page. - Placing all sub links inside a collapsable menu underneath the Parent link. - Removing the background to reduce the overall impact of the sidebar <img width="983" alt="Screenshot 2019-08-22 at 12 10 57" src="https://user-images.githubusercontent.com/48090803/63510362-47504880-c4d6-11e9-8234-7c33f604fd45.png">
non_main
docs site sidebar redesign overview after conducting multiple user research sessions on the docs site a recurring issue has emerged with the usability of the sidebar currently the active state uses a blue colour to signify the active page in sidebar however we also use a slightly darker blue to highlight a parent link in the sidebar this causes the parent link to look active even when it isn t the proposed solution remove the blue highlight from the parent links to prevent confusion as to what is currently the active page placing all sub links inside a collapsable menu underneath the parent link removing the background to reduce the overall impact of the sidebar img width alt screenshot at src
0
26,167
5,229,247,730
IssuesEvent
2017-01-29 00:50:14
golang/go
https://api.github.com/repos/golang/go
closed
database/sql: update 1.8 release notes
Documentation
### What version of Go are you using (`go version`)? 1.8rc3 ### What operating system and processor architecture are you using (`go env`)? N/A ### What did you do? https://beta.golang.org/doc/go1.8#database_sql ### What did you expect to see? Documentation mentioning `DB.BeginTx()`. ### What did you see instead? Documentation mentioning the non-existent `DB.BeginContext()`. Probably forgot to update documentation as part of #18284.
1.0
database/sql: update 1.8 release notes - ### What version of Go are you using (`go version`)? 1.8rc3 ### What operating system and processor architecture are you using (`go env`)? N/A ### What did you do? https://beta.golang.org/doc/go1.8#database_sql ### What did you expect to see? Documentation mentioning `DB.BeginTx()`. ### What did you see instead? Documentation mentioning the non-existent `DB.BeginContext()`. Probably forgot to update documentation as part of #18284.
non_main
database sql update release notes what version of go are you using go version what operating system and processor architecture are you using go env n a what did you do what did you expect to see documentation mentioning db begintx what did you see instead documentation mentioning the non existent db begincontext probably forgot to update documentation as part of
0
710
4,287,790,961
IssuesEvent
2016-07-17 00:54:01
gogits/gogs
https://api.github.com/repos/gogits/gogs
closed
Inactive, Deactivated user can still log in
kind/bug status/assigned to maintainer status/needs feedback
In admin interface for each user, there is a checkbox "This account is activated". Upon disabling, and saving the change, the user can still log in. Restarting gogs does not help. - Gogs version (or commit ref): 0.9.15.0323 - Database: - [ ] MySQL
True
Inactive, Deactivated user can still log in - In admin interface for each user, there is a checkbox "This account is activated". Upon disabling, and saving the change, the user can still log in. Restarting gogs does not help. - Gogs version (or commit ref): 0.9.15.0323 - Database: - [ ] MySQL
main
inactive deactivated user can still log in in admin interface for each user there is a checkbox this account is activated upon disabling and saving the change the user can still log in restarting gogs does not help gogs version or commit ref database mysql
1
37,450
10,011,698,614
IssuesEvent
2019-07-15 11:19:57
RiskyKen/Armourers-Workshop
https://api.github.com/repos/RiskyKen/Armourers-Workshop
closed
[1.12.2] Entering Dimensions unloads skins
bug next-build
Entering a different dimension will unload Armourer's Workshop skins until the player presses the open wardrobe key.
1.0
[1.12.2] Entering Dimensions unloads skins - Entering a different dimension will unload Armourer's Workshop skins until the player presses the open wardrobe key.
non_main
entering dimensions unloads skins entering a different dimension will unload armourer s workshop skins until the player presses the open wardrobe key
0
206,779
16,056,312,427
IssuesEvent
2021-04-23 05:55:59
JoshClose/CsvHelper
https://api.github.com/repos/JoshClose/CsvHelper
opened
How to add options for a build in converter
documentation
Hi, it would be good to have documentation on how to add options for a build converter. ByteArrayConverter has options, but I don't know where to specify them
1.0
How to add options for a build in converter - Hi, it would be good to have documentation on how to add options for a build converter. ByteArrayConverter has options, but I don't know where to specify them
non_main
how to add options for a build in converter hi it would be good to have documentation on how to add options for a build converter bytearrayconverter has options but i don t know where to specify them
0
12,769
3,289,013,330
IssuesEvent
2015-10-29 17:15:05
crowdsdom/php-sdk
https://api.github.com/repos/crowdsdom/php-sdk
closed
Travis DNS Resolve Issue
bug testing
``` cURL error 6: Couldn't resolve host 'account.crowdsdom.com' cURL error 6: Couldn't resolve host 'api.crowdsdom.com' ```
1.0
Travis DNS Resolve Issue - ``` cURL error 6: Couldn't resolve host 'account.crowdsdom.com' cURL error 6: Couldn't resolve host 'api.crowdsdom.com' ```
non_main
travis dns resolve issue curl error couldn t resolve host account crowdsdom com curl error couldn t resolve host api crowdsdom com
0
441,237
12,709,710,244
IssuesEvent
2020-06-23 12:48:53
wso2/micro-integrator
https://api.github.com/repos/wso2/micro-integrator
closed
Issue when shutdown a cluster node with Rabbitmq Inbound Endpoint
Priority/High Severity/Minor
**Description:** Deploy a RabbitMq Inbound EP in a cluster. Then shut down all the nodes one by one then following exception will be thrown ``` INFO {TasksDSComponent} - Shutting down coordinated task scheduler. [2020-06-12 12:07:22,237] INFO {InboundEndpoint} - Destroying Inbound Endpoint: MyRabbitInbound [2020-06-12 12:07:22,248] INFO {InboundOneTimeTriggerRequestProcessor} - Inbound endpoint MyRabbitInbound stopping. [2020-06-12 12:07:22,258] ERROR {NTaskTaskManager} - Cannot delete task [MyRabbitInbound-RABBITMQ--SYNAPSE_INBOUND_ENDPOINT::RABBITMQ--SYNAPSE_INBOUND_ENDPOINT::RABBITMQ--SYNAPSE_INBOUND_ENDPOINT]. Error: Error in deleting task with name: MyRabbitInbound-RABBITMQ--SYNAPSE_INBOUND_ENDPOINT org.wso2.micro.integrator.ntask.common.TaskException: Error in deleting task with name: MyRabbitInbound-RABBITMQ--SYNAPSE_INBOUND_ENDPOINT at org.wso2.micro.integrator.ntask.core.impl.AbstractQuartzTaskManager.deleteLocalTask(AbstractQuartzTaskManager.java:152) at org.wso2.micro.integrator.ntask.core.impl.standalone.ScheduledTaskManager.deleteTask(ScheduledTaskManager.java:218) at org.wso2.micro.integrator.mediation.ntask.NTaskTaskManager.delete(NTaskTaskManager.java:179) at org.apache.synapse.task.TaskScheduler.deleteTask(TaskScheduler.java:168) at org.apache.synapse.startup.quartz.StartUpController.destroy(StartUpController.java:80) at org.apache.synapse.startup.quartz.StartUpController.destroy(StartUpController.java:90) at org.wso2.carbon.inbound.endpoint.common.InboundOneTimeTriggerRequestProcessor.destroy(InboundOneTimeTriggerRequestProcessor.java:118) at org.wso2.carbon.inbound.endpoint.protocol.rabbitmq.RabbitMQListener.destroy(RabbitMQListener.java:71) at org.apache.synapse.inbound.InboundEndpoint.destroy(InboundEndpoint.java:156) at org.apache.synapse.config.SynapseConfiguration.destroy(SynapseConfiguration.java:1461) at org.apache.synapse.Axis2SynapseController.destroySynapseConfiguration(Axis2SynapseController.java:542) at 
org.apache.synapse.ServerManager.stop(ServerManager.java:283) at org.wso2.micro.integrator.initializer.ServiceBusInitializer.deactivate(ServiceBusInitializer.java:223) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.eclipse.equinox.internal.ds.model.ServiceComponent.deactivate(ServiceComponent.java:363) at org.eclipse.equinox.internal.ds.model.ServiceComponentProp.deactivate(ServiceComponentProp.java:161) at org.eclipse.equinox.internal.ds.model.ServiceComponentProp.dispose(ServiceComponentProp.java:387) at org.eclipse.equinox.internal.ds.model.ServiceComponentProp.dispose(ServiceComponentProp.java:102) at org.eclipse.equinox.internal.ds.InstanceProcess.disposeInstances(InstanceProcess.java:344) at org.eclipse.equinox.internal.ds.InstanceProcess.disposeInstances(InstanceProcess.java:306) at org.eclipse.equinox.internal.ds.Resolver.getEligible(Resolver.java:368) at org.eclipse.equinox.internal.ds.SCRManager.serviceChanged(SCRManager.java:222) at org.eclipse.osgi.internal.serviceregistry.FilteredServiceListener.serviceChanged(FilteredServiceListener.java:113) at org.eclipse.osgi.internal.framework.BundleContextImpl.dispatchEvent(BundleContextImpl.java:985) at org.eclipse.osgi.framework.eventmgr.EventManager.dispatchEvent(EventManager.java:234) at org.eclipse.osgi.framework.eventmgr.ListenerQueue.dispatchEventSynchronous(ListenerQueue.java:151) at org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.publishServiceEventPrivileged(ServiceRegistry.java:866) at org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.publishServiceEvent(ServiceRegistry.java:804) at 
org.eclipse.osgi.internal.serviceregistry.ServiceRegistrationImpl.unregister(ServiceRegistrationImpl.java:227) at org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.unregisterServices(ServiceRegistry.java:668) at org.eclipse.osgi.internal.framework.BundleContextImpl.close(BundleContextImpl.java:133) at org.eclipse.osgi.internal.framework.EquinoxBundle.stopWorker0(EquinoxBundle.java:1029) at org.eclipse.osgi.internal.framework.EquinoxBundle$EquinoxModule.stopWorker(EquinoxBundle.java:370) at org.eclipse.osgi.container.Module.doStop(Module.java:653) at org.eclipse.osgi.container.Module.stop(Module.java:515) at org.eclipse.osgi.container.ModuleContainer$ContainerStartLevel.decStartLevel(ModuleContainer.java:1861) at org.eclipse.osgi.container.ModuleContainer$ContainerStartLevel.doContainerStartLevel(ModuleContainer.java:1753) at org.eclipse.osgi.container.SystemModule.stopWorker(SystemModule.java:275) at org.eclipse.osgi.internal.framework.EquinoxBundle$SystemBundle$EquinoxSystemModule.stopWorker(EquinoxBundle.java:202) at org.eclipse.osgi.container.Module.doStop(Module.java:653) at org.eclipse.osgi.container.Module.stop(Module.java:515) at org.eclipse.osgi.container.SystemModule.stop(SystemModule.java:207) at org.eclipse.osgi.internal.framework.EquinoxBundle$SystemBundle$EquinoxSystemModule$1.run(EquinoxBundle.java:220) at java.base/java.lang.Thread.run(Thread.java:834) Caused by: org.quartz.SchedulerException: The Scheduler has been shutdown. at org.quartz.core.QuartzScheduler.validateState(QuartzScheduler.java:749) at org.quartz.core.QuartzScheduler.deleteJob(QuartzScheduler.java:936) at org.quartz.impl.StdScheduler.deleteJob(StdScheduler.java:292) at org.wso2.micro.integrator.ntask.core.impl.AbstractQuartzTaskManager.deleteLocalTask(AbstractQuartzTaskManager.java:142) ... 46 more [2020-06-12 12:07:22,263] INFO {SynapseTaskManager} - Shutting down the task manager ```
1.0
Issue when shutdown a cluster node with Rabbitmq Inbound Endpoint - **Description:** Deploy a RabbitMq Inbound EP in a cluster. Then shut down all the nodes one by one then following exception will be thrown ``` INFO {TasksDSComponent} - Shutting down coordinated task scheduler. [2020-06-12 12:07:22,237] INFO {InboundEndpoint} - Destroying Inbound Endpoint: MyRabbitInbound [2020-06-12 12:07:22,248] INFO {InboundOneTimeTriggerRequestProcessor} - Inbound endpoint MyRabbitInbound stopping. [2020-06-12 12:07:22,258] ERROR {NTaskTaskManager} - Cannot delete task [MyRabbitInbound-RABBITMQ--SYNAPSE_INBOUND_ENDPOINT::RABBITMQ--SYNAPSE_INBOUND_ENDPOINT::RABBITMQ--SYNAPSE_INBOUND_ENDPOINT]. Error: Error in deleting task with name: MyRabbitInbound-RABBITMQ--SYNAPSE_INBOUND_ENDPOINT org.wso2.micro.integrator.ntask.common.TaskException: Error in deleting task with name: MyRabbitInbound-RABBITMQ--SYNAPSE_INBOUND_ENDPOINT at org.wso2.micro.integrator.ntask.core.impl.AbstractQuartzTaskManager.deleteLocalTask(AbstractQuartzTaskManager.java:152) at org.wso2.micro.integrator.ntask.core.impl.standalone.ScheduledTaskManager.deleteTask(ScheduledTaskManager.java:218) at org.wso2.micro.integrator.mediation.ntask.NTaskTaskManager.delete(NTaskTaskManager.java:179) at org.apache.synapse.task.TaskScheduler.deleteTask(TaskScheduler.java:168) at org.apache.synapse.startup.quartz.StartUpController.destroy(StartUpController.java:80) at org.apache.synapse.startup.quartz.StartUpController.destroy(StartUpController.java:90) at org.wso2.carbon.inbound.endpoint.common.InboundOneTimeTriggerRequestProcessor.destroy(InboundOneTimeTriggerRequestProcessor.java:118) at org.wso2.carbon.inbound.endpoint.protocol.rabbitmq.RabbitMQListener.destroy(RabbitMQListener.java:71) at org.apache.synapse.inbound.InboundEndpoint.destroy(InboundEndpoint.java:156) at org.apache.synapse.config.SynapseConfiguration.destroy(SynapseConfiguration.java:1461) at 
org.apache.synapse.Axis2SynapseController.destroySynapseConfiguration(Axis2SynapseController.java:542) at org.apache.synapse.ServerManager.stop(ServerManager.java:283) at org.wso2.micro.integrator.initializer.ServiceBusInitializer.deactivate(ServiceBusInitializer.java:223) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.eclipse.equinox.internal.ds.model.ServiceComponent.deactivate(ServiceComponent.java:363) at org.eclipse.equinox.internal.ds.model.ServiceComponentProp.deactivate(ServiceComponentProp.java:161) at org.eclipse.equinox.internal.ds.model.ServiceComponentProp.dispose(ServiceComponentProp.java:387) at org.eclipse.equinox.internal.ds.model.ServiceComponentProp.dispose(ServiceComponentProp.java:102) at org.eclipse.equinox.internal.ds.InstanceProcess.disposeInstances(InstanceProcess.java:344) at org.eclipse.equinox.internal.ds.InstanceProcess.disposeInstances(InstanceProcess.java:306) at org.eclipse.equinox.internal.ds.Resolver.getEligible(Resolver.java:368) at org.eclipse.equinox.internal.ds.SCRManager.serviceChanged(SCRManager.java:222) at org.eclipse.osgi.internal.serviceregistry.FilteredServiceListener.serviceChanged(FilteredServiceListener.java:113) at org.eclipse.osgi.internal.framework.BundleContextImpl.dispatchEvent(BundleContextImpl.java:985) at org.eclipse.osgi.framework.eventmgr.EventManager.dispatchEvent(EventManager.java:234) at org.eclipse.osgi.framework.eventmgr.ListenerQueue.dispatchEventSynchronous(ListenerQueue.java:151) at org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.publishServiceEventPrivileged(ServiceRegistry.java:866) at 
org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.publishServiceEvent(ServiceRegistry.java:804) at org.eclipse.osgi.internal.serviceregistry.ServiceRegistrationImpl.unregister(ServiceRegistrationImpl.java:227) at org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.unregisterServices(ServiceRegistry.java:668) at org.eclipse.osgi.internal.framework.BundleContextImpl.close(BundleContextImpl.java:133) at org.eclipse.osgi.internal.framework.EquinoxBundle.stopWorker0(EquinoxBundle.java:1029) at org.eclipse.osgi.internal.framework.EquinoxBundle$EquinoxModule.stopWorker(EquinoxBundle.java:370) at org.eclipse.osgi.container.Module.doStop(Module.java:653) at org.eclipse.osgi.container.Module.stop(Module.java:515) at org.eclipse.osgi.container.ModuleContainer$ContainerStartLevel.decStartLevel(ModuleContainer.java:1861) at org.eclipse.osgi.container.ModuleContainer$ContainerStartLevel.doContainerStartLevel(ModuleContainer.java:1753) at org.eclipse.osgi.container.SystemModule.stopWorker(SystemModule.java:275) at org.eclipse.osgi.internal.framework.EquinoxBundle$SystemBundle$EquinoxSystemModule.stopWorker(EquinoxBundle.java:202) at org.eclipse.osgi.container.Module.doStop(Module.java:653) at org.eclipse.osgi.container.Module.stop(Module.java:515) at org.eclipse.osgi.container.SystemModule.stop(SystemModule.java:207) at org.eclipse.osgi.internal.framework.EquinoxBundle$SystemBundle$EquinoxSystemModule$1.run(EquinoxBundle.java:220) at java.base/java.lang.Thread.run(Thread.java:834) Caused by: org.quartz.SchedulerException: The Scheduler has been shutdown. at org.quartz.core.QuartzScheduler.validateState(QuartzScheduler.java:749) at org.quartz.core.QuartzScheduler.deleteJob(QuartzScheduler.java:936) at org.quartz.impl.StdScheduler.deleteJob(StdScheduler.java:292) at org.wso2.micro.integrator.ntask.core.impl.AbstractQuartzTaskManager.deleteLocalTask(AbstractQuartzTaskManager.java:142) ... 
46 more [2020-06-12 12:07:22,263] INFO {SynapseTaskManager} - Shutting down the task manager ```
non_main
issue when shutdown a cluster node with rabbitmq inbound endpoint description deploy a rabbitmq inbound ep in a cluster then shut down all the nodes one by one then following exception will be thrown info tasksdscomponent shutting down coordinated task scheduler info inboundendpoint destroying inbound endpoint myrabbitinbound info inboundonetimetriggerrequestprocessor inbound endpoint myrabbitinbound stopping error ntasktaskmanager cannot delete task error error in deleting task with name myrabbitinbound rabbitmq synapse inbound endpoint org micro integrator ntask common taskexception error in deleting task with name myrabbitinbound rabbitmq synapse inbound endpoint at org micro integrator ntask core impl abstractquartztaskmanager deletelocaltask abstractquartztaskmanager java at org micro integrator ntask core impl standalone scheduledtaskmanager deletetask scheduledtaskmanager java at org micro integrator mediation ntask ntasktaskmanager delete ntasktaskmanager java at org apache synapse task taskscheduler deletetask taskscheduler java at org apache synapse startup quartz startupcontroller destroy startupcontroller java at org apache synapse startup quartz startupcontroller destroy startupcontroller java at org carbon inbound endpoint common inboundonetimetriggerrequestprocessor destroy inboundonetimetriggerrequestprocessor java at org carbon inbound endpoint protocol rabbitmq rabbitmqlistener destroy rabbitmqlistener java at org apache synapse inbound inboundendpoint destroy inboundendpoint java at org apache synapse config synapseconfiguration destroy synapseconfiguration java at org apache synapse destroysynapseconfiguration java at org apache synapse servermanager stop servermanager java at org micro integrator initializer servicebusinitializer deactivate servicebusinitializer java at java base jdk internal reflect nativemethodaccessorimpl native method at java base jdk internal reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at java 
base jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java base java lang reflect method invoke method java at org eclipse equinox internal ds model servicecomponent deactivate servicecomponent java at org eclipse equinox internal ds model servicecomponentprop deactivate servicecomponentprop java at org eclipse equinox internal ds model servicecomponentprop dispose servicecomponentprop java at org eclipse equinox internal ds model servicecomponentprop dispose servicecomponentprop java at org eclipse equinox internal ds instanceprocess disposeinstances instanceprocess java at org eclipse equinox internal ds instanceprocess disposeinstances instanceprocess java at org eclipse equinox internal ds resolver geteligible resolver java at org eclipse equinox internal ds scrmanager servicechanged scrmanager java at org eclipse osgi internal serviceregistry filteredservicelistener servicechanged filteredservicelistener java at org eclipse osgi internal framework bundlecontextimpl dispatchevent bundlecontextimpl java at org eclipse osgi framework eventmgr eventmanager dispatchevent eventmanager java at org eclipse osgi framework eventmgr listenerqueue dispatcheventsynchronous listenerqueue java at org eclipse osgi internal serviceregistry serviceregistry publishserviceeventprivileged serviceregistry java at org eclipse osgi internal serviceregistry serviceregistry publishserviceevent serviceregistry java at org eclipse osgi internal serviceregistry serviceregistrationimpl unregister serviceregistrationimpl java at org eclipse osgi internal serviceregistry serviceregistry unregisterservices serviceregistry java at org eclipse osgi internal framework bundlecontextimpl close bundlecontextimpl java at org eclipse osgi internal framework equinoxbundle equinoxbundle java at org eclipse osgi internal framework equinoxbundle equinoxmodule stopworker equinoxbundle java at org eclipse osgi container module dostop module java at org eclipse 
osgi container module stop module java at org eclipse osgi container modulecontainer containerstartlevel decstartlevel modulecontainer java at org eclipse osgi container modulecontainer containerstartlevel docontainerstartlevel modulecontainer java at org eclipse osgi container systemmodule stopworker systemmodule java at org eclipse osgi internal framework equinoxbundle systembundle equinoxsystemmodule stopworker equinoxbundle java at org eclipse osgi container module dostop module java at org eclipse osgi container module stop module java at org eclipse osgi container systemmodule stop systemmodule java at org eclipse osgi internal framework equinoxbundle systembundle equinoxsystemmodule run equinoxbundle java at java base java lang thread run thread java caused by org quartz schedulerexception the scheduler has been shutdown at org quartz core quartzscheduler validatestate quartzscheduler java at org quartz core quartzscheduler deletejob quartzscheduler java at org quartz impl stdscheduler deletejob stdscheduler java at org micro integrator ntask core impl abstractquartztaskmanager deletelocaltask abstractquartztaskmanager java more info synapsetaskmanager shutting down the task manager
0
1,263
5,356,286,503
IssuesEvent
2017-02-20 15:17:05
Particular/ServicePulse
https://api.github.com/repos/Particular/ServicePulse
closed
Ability to bookmark an error message or error grouping
Impact: M Size: S State: In Progress Tag: Maintainer Prio Type: Feature
When the DevOps team captures the information from ServicePulse, sometimes they will need to work with their dev team to indicate the problem. It will be nice if DevOps can go to a certain error message or error group and bookmark the Url and simply send the Url to the dev team. Currently the user experience is that based on the information from DevOps, the dev will have to go to SP and scroll down to the exact message group. A bookmark function would make this easier. Requested by Customer
True
Ability to bookmark an error message or error grouping - When the DevOps team captures the information from ServicePulse, sometimes they will need to work with their dev team to indicate the problem. It will be nice if DevOps can go to a certain error message or error group and bookmark the Url and simply send the Url to the dev team. Currently the user experience is that based on the information from DevOps, the dev will have to go to SP and scroll down to the exact message group. A bookmark function would make this easier. Requested by Customer
main
ability to bookmark an error message or error grouping when the devops team captures the information from servicepulse sometimes they will need to work with their dev team to indicate the problem it will be nice if devops can go to a certain error message or error group and bookmark the url and simply send the url to the dev team currently the user experience is that based on the information from devops the dev will have to go to sp and scroll down to the exact message group a bookmark function would make this easier requested by customer
1
141,920
12,992,788,487
IssuesEvent
2020-07-23 07:38:22
RedHatInsights/insights-results-aggregator-utils
https://api.github.com/repos/RedHatInsights/insights-results-aggregator-utils
opened
Provide description and usage subsections for `anonymize.py` utility
documentation
Provide description and usage subsections for `anonymize.py` utility
1.0
Provide description and usage subsections for `anonymize.py` utility - Provide description and usage subsections for `anonymize.py` utility
non_main
provide description and usage subsections for anonymize py utility provide description and usage subsections for anonymize py utility
0
4,234
20,980,242,233
IssuesEvent
2022-03-28 19:09:18
precice/precice
https://api.github.com/repos/precice/precice
closed
Move all integration tests to new structure
maintainability
**Please describe the problem you are trying to solve.** As discussed in #1021, we want to move all integration tests in `precice/src/precice/tests` to the new structure proposed and implemented in #1148. This issue gives an overview over the progress. Please mention this issue in related PRs. **Describe the solution you propose.** See #1148 and the proposed structure for the integration tests. As a first step we only want to apply the new structure to all tests to make it easier to rearrange individual tests. See https://github.com/precice/precice/issues/1021#issuecomment-976698077.
True
Move all integration tests to new structure - **Please describe the problem you are trying to solve.** As discussed in #1021, we want to move all integration tests in `precice/src/precice/tests` to the new structure proposed and implemented in #1148. This issue gives an overview over the progress. Please mention this issue in related PRs. **Describe the solution you propose.** See #1148 and the proposed structure for the integration tests. As a first step we only want to apply the new structure to all tests to make it easier to rearrange individual tests. See https://github.com/precice/precice/issues/1021#issuecomment-976698077.
main
move all integration tests to new structure please describe the problem you are trying to solve as discussed in we want to move all integration tests in precice src precice tests to the new structure proposed and implemented in this issue gives an overview over the progress please mention this issue in related prs describe the solution you propose see and the proposed structure for the integration tests as a first step we only want to apply the new structure to all tests to make it easier to rearrange individual tests see
1
40,148
6,801,274,174
IssuesEvent
2017-11-02 16:19:25
loconomics/loconomics
https://api.github.com/repos/loconomics/loconomics
opened
Review and document booking communications
C: Documentation F: Booking
### Long Description Email communication templates for booking need a review to ensure we are using correctly the flags available (sendReviewToClient, paymentCollected, HIPAA) and the language on every case and to document that clearly since it is important to be able to review/fix them and translate/adapt to other languages/countries properly.
1.0
Review and document booking communications - ### Long Description Email communication templates for booking need a review to ensure we are using correctly the flags available (sendReviewToClient, paymentCollected, HIPAA) and the language on every case and to document that clearly since it is important to be able to review/fix them and translate/adapt to other languages/countries properly.
non_main
review and document booking communications long description email communication templates for booking need a review to ensure we are using correctly the flags available sendreviewtoclient paymentcollected hipaa and the language on every case and to document that clearly since it is important to be able to review fix them and translate adapt to other languages countries properly
0
59,373
7,239,226,137
IssuesEvent
2018-02-13 16:50:31
Opentrons/opentrons
https://api.github.com/repos/Opentrons/opentrons
opened
PD Deck Setup: Move Labware
feature protocol designer small
As a protocol designer, I would like to be able to move labware to different locations on the deck ## Acceptance Criteria: - User can click 'move' in the labware hover menu - User can then click 'move here' when hovering over empty deck slots ## Design: -- Move: https://projects.invisionapp.com/d/main#/console/12888101/270492887/preview -- Move here: https://projects.invisionapp.com/d/main#/console/12888101/270492889/preview
1.0
PD Deck Setup: Move Labware - As a protocol designer, I would like to be able to move labware to different locations on the deck ## Acceptance Criteria: - User can click 'move' in the labware hover menu - User can then click 'move here' when hovering over empty deck slots ## Design: -- Move: https://projects.invisionapp.com/d/main#/console/12888101/270492887/preview -- Move here: https://projects.invisionapp.com/d/main#/console/12888101/270492889/preview
non_main
pd deck setup move labware as a protocol designer i would like to be able to move labware to different locations on the deck acceptance criteria user can click move in the labware hover menu user can then click move here when hovering over empty deck slots design move move here
0
4,563
23,738,666,317
IssuesEvent
2022-08-31 10:22:20
camunda/zeebe
https://api.github.com/repos/camunda/zeebe
closed
Refactor BrokerInfo and TopologyManagerImpl to meet Sonar requirements
kind/toil scope/broker area/maintainability
**Description** In #5529 after adding a `partitionHealth`, I've added some tech debt that needs to be fixed. 1. Remove `@SuppressWarnings` that I added in `BrokerInfo` and `TopologyManagerImpl` 2. Refactor these classes to meet Sonar requirements. For some insights see: https://github.com/zeebe-io/zeebe/pull/5529#issuecomment-709984132
True
Refactor BrokerInfo and TopologyManagerImpl to meet Sonar requirements - **Description** In #5529 after adding a `partitionHealth`, I've added some tech debt that needs to be fixed. 1. Remove `@SuppressWarnings` that I added in `BrokerInfo` and `TopologyManagerImpl` 2. Refactor these classes to meet Sonar requirements. For some insights see: https://github.com/zeebe-io/zeebe/pull/5529#issuecomment-709984132
main
refactor brokerinfo and topologymanagerimpl to meet sonar requirements description in after adding a partitionhealth i ve added some tech debt that needs to be fixed remove suppresswarnings that i added in brokerinfo and topologymanagerimpl refactor these classes to meet sonar requirements for some insights see
1
704,260
24,190,727,016
IssuesEvent
2022-09-23 17:17:12
AY2223S1-CS2103T-W17-3/tp
https://api.github.com/repos/AY2223S1-CS2103T-W17-3/tp
opened
Add AboutUs Page
priority.High type.Admin
# AboutUs page: This page (in the /docs folder) is used for module admin purposes\ Please follow the format closely or else our scripts will not be able to give credit for your work. Add your own details. Include a suitable photo as described here. There is no need to mention the tutor/lecturer, but OK to do so too. The filename of the profile photo should be docs/images/github_username_in_lower_case.png Note the need for lower case ( why lowercase?) e.g. JohnDoe123 -> docs/images/johndoe123.png not docs/images/JohnDoe123.png. If your photo is in jpg format, name the file as .png anyway. Indicate the different roles played and responsibilities held by each team member. You can reassign these roles and responsibilities (as explained in Admin Project Scope) later in the project, if necessary.
1.0
Add AboutUs Page - # AboutUs page: This page (in the /docs folder) is used for module admin purposes\ Please follow the format closely or else our scripts will not be able to give credit for your work. Add your own details. Include a suitable photo as described here. There is no need to mention the tutor/lecturer, but OK to do so too. The filename of the profile photo should be docs/images/github_username_in_lower_case.png Note the need for lower case ( why lowercase?) e.g. JohnDoe123 -> docs/images/johndoe123.png not docs/images/JohnDoe123.png. If your photo is in jpg format, name the file as .png anyway. Indicate the different roles played and responsibilities held by each team member. You can reassign these roles and responsibilities (as explained in Admin Project Scope) later in the project, if necessary.
non_main
add aboutus page aboutus page this page in the docs folder is used for module admin purposes please follow the format closely or else our scripts will not be able to give credit for your work add your own details include a suitable photo as described here there is no need to mention the tutor lecturer but ok to do so too the filename of the profile photo should be docs images github username in lower case png note the need for lower case why lowercase e g docs images png not docs images png if your photo is in jpg format name the file as png anyway indicate the different roles played and responsibilities held by each team member you can reassign these roles and responsibilities as explained in admin project scope later in the project if necessary
0
616,431
19,302,428,356
IssuesEvent
2021-12-13 07:51:20
ESCOMP/CTSM
https://api.github.com/repos/ESCOMP/CTSM
closed
Need to update LILAC build process to use cmake macros instead of config_compilers.xml
priority: high
If I understand https://github.com/ESMCI/cime/pull/4093 correctly, the LILAC build on a user-defined machine may stop working out of the box once that CIME PR is merged (though it may continue to work for some time depending on how backwards compatibility is implemented). To keep things working, rather than filling out a config_compilers.xml template, we'll instead need to fill out a cmake macros file. Blocked: depends on https://github.com/ESMCI/cime/pull/4093
1.0
Need to update LILAC build process to use cmake macros instead of config_compilers.xml - If I understand https://github.com/ESMCI/cime/pull/4093 correctly, the LILAC build on a user-defined machine may stop working out of the box once that CIME PR is merged (though it may continue to work for some time depending on how backwards compatibility is implemented). To keep things working, rather than filling out a config_compilers.xml template, we'll instead need to fill out a cmake macros file. Blocked: depends on https://github.com/ESMCI/cime/pull/4093
non_main
need to update lilac build process to use cmake macros instead of config compilers xml if i understand correctly the lilac build on a user defined machine may stop working out of the box once that cime pr is merged though it may continue to work for some time depending on how backwards compatibility is implemented to keep things working rather than filling out a config compilers xml template we ll instead need to fill out a cmake macros file blocked depends on
0
138,240
20,377,732,504
IssuesEvent
2022-02-21 17:19:29
hackforla/CivicTechJobs
https://api.github.com/repos/hackforla/CivicTechJobs
opened
How to join page (UX)
role: UI/UX - Design size: 3pt
### Dependency _No response_ ### Overview As a UX Designer, we need to help new potential volunteers understand the steps of how to join Civic Tech Jobs. We will create an informative How to join page with illustrations and potentially a FAQ section at the bottom for new potential volunteers. ### Action Items - [ ] Ideate via wireframes - [ ] Discuss ideations among designers - [ ] Select and annotate wireframes - [ ] Present to team - [ ] Decide on hi-fi design - [ ] Add to prototype for testing ### Resources/Instructions [How to join page (Figma)](https://www.figma.com/file/G5bOqhud6azbxyR9El9Ygp/Civic-Tech-Jobs?node-id=2423%3A29534) [Resources](https://github.com/hackforla/CivicTechJobs/wiki/Resources)
1.0
How to join page (UX) - ### Dependency _No response_ ### Overview As a UX Designer, we need to help new potential volunteers understand the steps of how to join Civic Tech Jobs. We will create an informative How to join page with illustrations and potentially a FAQ section at the bottom for new potential volunteers. ### Action Items - [ ] Ideate via wireframes - [ ] Discuss ideations among designers - [ ] Select and annotate wireframes - [ ] Present to team - [ ] Decide on hi-fi design - [ ] Add to prototype for testing ### Resources/Instructions [How to join page (Figma)](https://www.figma.com/file/G5bOqhud6azbxyR9El9Ygp/Civic-Tech-Jobs?node-id=2423%3A29534) [Resources](https://github.com/hackforla/CivicTechJobs/wiki/Resources)
non_main
how to join page ux dependency no response overview as a ux designer we need to help new potential volunteers understand the steps of how to join civic tech jobs we will create an informative how to join page with illustrations and potentially a faq section at the bottom for new potential volunteers action items ideate via wireframes discuss ideations among designers select and annotate wireframes present to team decide on hi fi design add to prototype for testing resources instructions
0
870
4,536,511,414
IssuesEvent
2016-09-08 20:39:15
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
Resetting all locale environment variables causes checkout / update fail on repos that contain unicode filenames.
affects_2.1 bug_report waiting_on_maintainer
##### ISSUE TYPE - Bug Report ##### COMPONENT NAME subversion ##### ANSIBLE VERSION ``` ansible 2.1.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION default ##### OS / ENVIRONMENT Ubuntu 15.10 or 14.04 ##### SUMMARY Reseting all locale environment variables causes checkout / update failure on repos that contain unicode filenames. This bad behavior was introduced with commit 7020c8dcbeb6d08761c014730ee61558793ac00f fixing the issue "source_control/subversion.py needs to reset LC_MESSAGES #3255" ##### STEPS TO REPRODUCE ``` test.yml: --- - hosts: localhost tasks: - subversion: repo="https://subversion.assembla.com/svn/test-utf8-files/" dest="test-utf8-files" ansible-playbook test.yml ``` ##### EXPECTED RESULTS ``` $ ansible-playbook test.yml PLAY [localhost] *************************************************************** TASK [setup] ******************************************************************* ok: [localhost] TASK [subversion] ************************************************************** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=2 changed=1 unreachable=0 failed=0 ``` ##### ACTUAL RESULTS ``` $ ansible-playbook test.yml PLAY [localhost] *************************************************************** TASK [setup] ******************************************************************* ok: [localhost] TASK [subversion] ************************************************************** fatal: [localhost]: FAILED! 
=> {"changed": false, "cmd": "/usr/bin/svn --non-interactive --trust-server-cert --no-auth-cache checkout -r HEAD https://subversion.assembla.com/svn/test-utf8-files/ test-utf8-files", "failed": true, "msg": "svn: E155009: Failed to run the WC DB work queue associated with '/home/mullnerz/Test/ansible-subversion-test/test-utf8-files/branches', work item 1 (file-install 34 ?\\195?\\161rv?\\195?\\173zt?\\197?\\177r?\\197?\\145t?\\195?\\188k?\\195?\\182rf?\\195?\\186r?\\195?\\179g?\\195?\\169p.txt 1 0 1 1)\nsvn: E000022: Can't convert string from 'UTF-8' to native encoding:\nsvn: E000022: /home/mullnerz/Test/ansible-subversion-test/test-utf8-files/?\\195?\\161rv?\\195?\\173zt?\\197?\\177r?\\197?\\145t?\\195?\\188k?\\195?\\182rf?\\195?\\186r?\\195?\\179g?\\195?\\169p.txt", "rc": 1, "stderr": "svn: E155009: Failed to run the WC DB work queue associated with '/home/mullnerz/Test/ansible-subversion-test/test-utf8-files/branches', work item 1 (file-install 34 ?\\195?\\161rv?\\195?\\173zt?\\197?\\177r?\\197?\\145t?\\195?\\188k?\\195?\\182rf?\\195?\\186r?\\195?\\179g?\\195?\\169p.txt 1 0 1 1)\nsvn: E000022: Can't convert string from 'UTF-8' to native encoding:\nsvn: E000022: /home/mullnerz/Test/ansible-subversion-test/test-utf8-files/?\\195?\\161rv?\\195?\\173zt?\\197?\\177r?\\197?\\145t?\\195?\\188k?\\195?\\182rf?\\195?\\186r?\\195?\\179g?\\195?\\169p.txt\n", "stdout": "A test-utf8-files/?\\195?\\161rv?\\195?\\173zt?\\197?\\177r?\\197?\\145t?\\195?\\188k?\\195?\\182rf?\\195?\\186r?\\195?\\179g?\\195?\\169p.txt\nA test-utf8-files/branches\n", "stdout_lines": ["A test-utf8-files/?\\195?\\161rv?\\195?\\173zt?\\197?\\177r?\\197?\\145t?\\195?\\188k?\\195?\\182rf?\\195?\\186r?\\195?\\179g?\\195?\\169p.txt", "A test-utf8-files/branches"]} NO MORE HOSTS LEFT ************************************************************* to retry, use: --limit @test.retry PLAY RECAP ********************************************************************* localhost : ok=1 changed=0 unreachable=0 failed=1 
```
True
Reseting all locale environment variables causes checkout / update fail on repos that contain unicode filenames. - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME subversion ##### ANSIBLE VERSION ``` ansible 2.1.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION default ##### OS / ENVIRONMENT Ubuntu 15.10 or 14.04 ##### SUMMARY Reseting all locale environment variables causes checkout / update failure on repos that contain unicode filenames. This bad behavior was introduced with commit 7020c8dcbeb6d08761c014730ee61558793ac00f fixing the issue "source_control/subversion.py needs to reset LC_MESSAGES #3255" ##### STEPS TO REPRODUCE ``` test.yml: --- - hosts: localhost tasks: - subversion: repo="https://subversion.assembla.com/svn/test-utf8-files/" dest="test-utf8-files" ansible-playbook test.yml ``` ##### EXPECTED RESULTS ``` $ ansible-playbook test.yml PLAY [localhost] *************************************************************** TASK [setup] ******************************************************************* ok: [localhost] TASK [subversion] ************************************************************** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=2 changed=1 unreachable=0 failed=0 ``` ##### ACTUAL RESULTS ``` $ ansible-playbook test.yml PLAY [localhost] *************************************************************** TASK [setup] ******************************************************************* ok: [localhost] TASK [subversion] ************************************************************** fatal: [localhost]: FAILED! 
=> {"changed": false, "cmd": "/usr/bin/svn --non-interactive --trust-server-cert --no-auth-cache checkout -r HEAD https://subversion.assembla.com/svn/test-utf8-files/ test-utf8-files", "failed": true, "msg": "svn: E155009: Failed to run the WC DB work queue associated with '/home/mullnerz/Test/ansible-subversion-test/test-utf8-files/branches', work item 1 (file-install 34 ?\\195?\\161rv?\\195?\\173zt?\\197?\\177r?\\197?\\145t?\\195?\\188k?\\195?\\182rf?\\195?\\186r?\\195?\\179g?\\195?\\169p.txt 1 0 1 1)\nsvn: E000022: Can't convert string from 'UTF-8' to native encoding:\nsvn: E000022: /home/mullnerz/Test/ansible-subversion-test/test-utf8-files/?\\195?\\161rv?\\195?\\173zt?\\197?\\177r?\\197?\\145t?\\195?\\188k?\\195?\\182rf?\\195?\\186r?\\195?\\179g?\\195?\\169p.txt", "rc": 1, "stderr": "svn: E155009: Failed to run the WC DB work queue associated with '/home/mullnerz/Test/ansible-subversion-test/test-utf8-files/branches', work item 1 (file-install 34 ?\\195?\\161rv?\\195?\\173zt?\\197?\\177r?\\197?\\145t?\\195?\\188k?\\195?\\182rf?\\195?\\186r?\\195?\\179g?\\195?\\169p.txt 1 0 1 1)\nsvn: E000022: Can't convert string from 'UTF-8' to native encoding:\nsvn: E000022: /home/mullnerz/Test/ansible-subversion-test/test-utf8-files/?\\195?\\161rv?\\195?\\173zt?\\197?\\177r?\\197?\\145t?\\195?\\188k?\\195?\\182rf?\\195?\\186r?\\195?\\179g?\\195?\\169p.txt\n", "stdout": "A test-utf8-files/?\\195?\\161rv?\\195?\\173zt?\\197?\\177r?\\197?\\145t?\\195?\\188k?\\195?\\182rf?\\195?\\186r?\\195?\\179g?\\195?\\169p.txt\nA test-utf8-files/branches\n", "stdout_lines": ["A test-utf8-files/?\\195?\\161rv?\\195?\\173zt?\\197?\\177r?\\197?\\145t?\\195?\\188k?\\195?\\182rf?\\195?\\186r?\\195?\\179g?\\195?\\169p.txt", "A test-utf8-files/branches"]} NO MORE HOSTS LEFT ************************************************************* to retry, use: --limit @test.retry PLAY RECAP ********************************************************************* localhost : ok=1 changed=0 unreachable=0 failed=1 
```
main
reseting all locale environment variables causes checkout update fail on repos that contain unicode filenames issue type bug report component name subversion ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration default os environment ubuntu or summary reseting all locale environment variables causes checkout update failure on repos that contain unicode filenames this bad behavior was introduced with commit fixing the issue source control subversion py needs to reset lc messages steps to reproduce test yml hosts localhost tasks subversion repo dest test files ansible playbook test yml expected results ansible playbook test yml play task ok task changed play recap localhost ok changed unreachable failed actual results ansible playbook test yml play task ok task fatal failed changed false cmd usr bin svn non interactive trust server cert no auth cache checkout r head test files failed true msg svn failed to run the wc db work queue associated with home mullnerz test ansible subversion test test files branches work item file install txt nsvn can t convert string from utf to native encoding nsvn home mullnerz test ansible subversion test test files txt rc stderr svn failed to run the wc db work queue associated with home mullnerz test ansible subversion test test files branches work item file install txt nsvn can t convert string from utf to native encoding nsvn home mullnerz test ansible subversion test test files txt n stdout a test files txt na test files branches n stdout lines no more hosts left to retry use limit test retry play recap localhost ok changed unreachable failed
1
254
3,005,497,340
IssuesEvent
2015-07-26 23:38:27
DotNetAnalyzers/StyleCopAnalyzers
https://api.github.com/repos/DotNetAnalyzers/StyleCopAnalyzers
closed
Proposal: Permanently disable rule SA1409
maintainability
Rule SA1409 (RemoveUnnecessaryCode) has two characteristics which make it a particularly poor rule for StyleCopAnalyzers: * It is poorly defined (what is or is not "necessary"?) * It is largely a semantic rule as opposed to a syntax-based style rule I propose rule SA1409 be permanently disabled, and allow other tools which focus on semantics (e.g. a future "FxCopAnalyzers") to pick up this diagnostic at a later time.
True
Proposal: Permanently disable rule SA1409 - Rule SA1409 (RemoveUnnecessaryCode) has two characteristics which make it a particularly poor rule for StyleCopAnalyzers: * It is poorly defined (what is or is not "necessary"?) * It is largely a semantic rule as opposed to a syntax-based style rule I propose rule SA1409 be permanently disabled, and allow other tools which focus on semantics (e.g. a future "FxCopAnalyzers") to pick up this diagnostic at a later time.
main
proposal permanently disable rule rule removeunnecessarycode has two characteristics which make it a particularly poor rule for stylecopanalyzers it is poorly defined what is or is not necessary it is largely a semantic rule as opposed to a syntax based style rule i propose rule be permanently disabled and allow other tools which focus on semantics e g a future fxcopanalyzers to pick up this diagnostic at a later time
1
231,739
7,642,765,065
IssuesEvent
2018-05-08 10:19:28
dwyl/focus-hub
https://api.github.com/repos/dwyl/focus-hub
closed
Print, Laminate and Affix House Rules to Ensure they are Constantly Visible
T25m enhancement help wanted priority-3 va-task
@markwilliamfirth did a great job putting together the FocusHub [/house-rules.md](https://github.com/dwyl/focushub/blob/faaefa1b19478d26a34eb34b4f4c3428e2829d4b/house-rules.md) 🎉 Sadly there are people who will not _read_ them on GitHub ... (IKR!) # Tasks/Todo: + [ ] Print 4 A4 Signs with the "House Rules" (_using the FocusHub slide template_) + [ ] Laminate the signs (_laminator is in the "closet"_) + [ ] Affix the signs to the wall using white-tac in the following locations: + [ ] Bottom of stairs (_so people are reminded to be considerate when they walk in and up/down the stairs_) + [ ] Top of Stairs next to the light switch + [ ] Next to the Kettle + [ ] Next to the "Lunch Table"
1.0
Print, Laminate and Affix House Rules to Ensure they are Constantly Visible - @markwilliamfirth did a great job putting together the FocuHub [/house-rules.md](https://github.com/dwyl/focushub/blob/faaefa1b19478d26a34eb34b4f4c3428e2829d4b/house-rules.md) 🎉 Sadly there are people who will not _read_ them on GitHub ... (IKR!) # Tasks/Todo: + [ ] Print 4 A4 Signs with the "House Rules" (_using the FocusHub slide template_) + [ ] Laminate the signs (_laminator is in the "closet"_) + [ ] Affix the signs to the wall using white-tac in the following locations: + [ ] Bottom of stairs (_so people are reminded to be considerate when they walk in and up/down the stairs_) + [ ] Top of Stairs next to the light switch + [ ] Next to the Kettle + [ ] Next to the "Lunch Table"
non_main
print laminate and affix house rules to ensure they are constantly visible markwilliamfirth did a great job putting together the focuhub 🎉 sadly there are people who will not read them on github ikr tasks todo print signs with the house rules using the focushub slide template laminate the signs laminator is in the closet affix the signs to the wall using white tac in the following locations bottom of stairs so people are reminded to be considerate when they walk in and up down the stairs top of stairs next to the light switch next to the kettle next to the lunch table
0
401,450
11,790,348,406
IssuesEvent
2020-03-17 18:47:54
adobe/brackets
https://api.github.com/repos/adobe/brackets
closed
Missing proxy-usage in "Check for Updates" and "Contributors"
low priority
The existing `proxy`-setting will not handle the "Check for Updates"-request and results in this error: ``` GET http://dev.brackets.io/updates/stable/de-DE.json?_=1399616429308 407 (Proxy Authentication Required) ``` The same happens for the contributors when opening the about-screen: ``` GET https://api.github.com/repos/adobe/brackets/contributors 407 (Proxy Authentication Required) ``` To clarify: - I **did** add the `proxy`-setting. - I **did** specify a username and a password. - Downloading extension **works** behind the proxy. - I'm using **Build 0.38.0-12606** - _and yes, I searched for issues about this._
1.0
Missing proxy-usage in "Check for Updates" and "Contributors" - The existing `proxy`-setting will not handle the "Check for Updates"-request and results in this error: ``` GET http://dev.brackets.io/updates/stable/de-DE.json?_=1399616429308 407 (Proxy Authentication Required) ``` The same happends for the contributors when openign the about-screen: ``` GET https://api.github.com/repos/adobe/brackets/contributors 407 (Proxy Authentication Required) ``` To clarify: - I **did** add the `proxy`-setting. - I **did** specify a username and a passwort. - Downloading extension **works** behind the proxy. - I'm using **Build 0.38.0-12606** - _and yes, I searched for issues about this._
non_main
missing proxy usage in check for updates and contributors the existing proxy setting will not handle the check for updates request and results in this error get proxy authentication required the same happends for the contributors when openign the about screen get proxy authentication required to clarify i did add the proxy setting i did specify a username and a passwort downloading extension works behind the proxy i m using build and yes i searched for issues about this
0
5,113
26,038,399,893
IssuesEvent
2022-12-22 08:09:56
camunda/zeebe
https://api.github.com/repos/camunda/zeebe
closed
Refactor the DeploymentDistributorImpl
kind/toil scope/broker area/maintainability
Currently the DeploymentDistributorImpl has a few problems. Namely, it might fail if a retry happens after another push is made (https://github.com/zeebe-io/zeebe/issues/2735) but also related logic is split in different places, which might make this component hard to understand. For instance, there are two handlers that handle responses. One that handles a response sent after the CREATE is "written" and the second that handles a response after the CREATE is processed (this is necessary because when the first response happens, the event wasn't written to persistent storage and might be lost). For these reasons, the distributor should be refactored.
True
Refactor the DeploymentDistributorImpl - Currently the DeploymentDistributorImpl has a few problems. Namely, it might fail if a retry happens after another push is made (https://github.com/zeebe-io/zeebe/issues/2735) but also related logic is split is different places which might make this component hard to understand. For instance, there are two handlers that handle responses. One that handles a response sent after the CREATE is "written" and the second that handles a response after the CREATE is processed (this is necessary because when the first response happens, the event wasn't written to persistent storage and might be lost). For these reasons, the distributor should be refactored
main
refactor the deploymentdistributorimpl currently the deploymentdistributorimpl has a few problems namely it might fail if a retry happens after another push is made but also related logic is split is different places which might make this component hard to understand for instance there are two handlers that handle responses one that handles a response sent after the create is written and the second that handles a response after the create is processed this is necessary because when the first response happens the event wasn t written to persistent storage and might be lost for these reasons the distributor should be refactored
1
301,828
22,776,965,454
IssuesEvent
2022-07-08 15:19:52
hashgraph/guardian
https://api.github.com/repos/hashgraph/guardian
closed
Need better logs for running in production
documentation technical task community
### Problem description - There are few console logs in services - logger service stores data in mongodb ### Requirements - In production environment, we need lots of logs to troubleshoot the incident - production environment is deployed on AWS/GCP/Azure need the standard logs that work with their logs system such as Cloudwatch/GCP logs ### Definition of done - Add more application logs with the level config (debug, info, error) - Using some well-known nodejs logging frameworks such as pino/winston to output the console log so the logs can be shipped to the cloud logging frameworks ### Acceptance criteria What are the criteria by which we determine if the issue has been resolved?
1.0
Need better logs for running in production - ### Problem description - There are few console log in services - logger service store data in mongodb ### Requirements - In production environment, we need lots of logs to troubleshoot the incident - production environment is deployed on AWS/GCP/Azure need the standard logs that works with their logs system such as Cloudwatch/GCP logs ### Definition of done - Add more application logs with the level config (debug, info, error) - Using some well know nodejs logs frameworks such as pino/winston to output the console log so the logs can be ship to the cloud logs frameworks ### Acceptance criteria What are the criteria by which we determine if the issue has been resolved?
non_main
need better logs for running in production problem description there are few console log in services logger service store data in mongodb requirements in production environment we need lots of logs to troubleshoot the incident production environment is deployed on aws gcp azure need the standard logs that works with their logs system such as cloudwatch gcp logs definition of done add more application logs with the level config debug info error using some well know nodejs logs frameworks such as pino winston to output the console log so the logs can be ship to the cloud logs frameworks acceptance criteria what are the criteria by which we determine if the issue has been resolved
0
1,839
6,577,373,851
IssuesEvent
2017-09-12 00:27:48
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
Pulling repository when using net option in docker module with nonexistent network
affects_2.0 bug_report cloud docker waiting_on_maintainer
<!--- Verify first that your issue/request is not already reported in GitHub --> ##### ISSUE TYPE <!--- Pick one below and delete the rest: --> - Bug Report ##### COMPONENT NAME _docker ##### ANSIBLE VERSION ``` ansible 2.0.2.0 ``` ##### CONFIGURATION <!--- Mention any settings you have changed/added/removed in ansible.cfg (or using the ANSIBLE_* environment variables). --> ##### OS / ENVIRONMENT <!--- Mention the OS you are running Ansible from, and the OS you are managing, or say “N/A” for anything that is not platform-specific. --> ##### SUMMARY <!--- Explain the problem briefly --> Ansible is pulling repository when using net option in docker module, even if the option pull: missing is set and the image is already on the system. In our case there was also an error message regarding the repo pull (private repo): unauthorized: authentication required This occurs when the network is not created before using the net option. ##### STEPS TO REPRODUCE <!--- For bugs, show exactly how to reproduce the problem. For new features, show how the feature would be used. --> ``` - name: some container docker: name: "some container" image: myhost/image pull: missing restart_policy: always env: ES_HEAP_SIZE: 4g memory_limit: '8192MB' net: 'mynetwork' state: started ``` ##### EXPECTED RESULTS There should be an error message stating that the network does not exist. It's very hard for users to identify the problem, as there is no correct error message and the message regarding image pull is misleading. After creating the network with docker network create mynetwork the container is started without pull. ##### ACTUAL RESULTS <!--- What actually happened? If possible run with high verbosity (-vvvv) --> ``` ```
True
Pulling repository when using net option in docker module with nonexistent network - <!--- Verify first that your issue/request is not already reported in GitHub --> ##### ISSUE TYPE <!--- Pick one below and delete the rest: --> - Bug Report ##### COMPONENT NAME _docker ##### ANSIBLE VERSION ``` ansible 2.0.2.0 ``` ##### CONFIGURATION <!--- Mention any settings you have changed/added/removed in ansible.cfg (or using the ANSIBLE_* environment variables). --> ##### OS / ENVIRONMENT <!--- Mention the OS you are running Ansible from, and the OS you are managing, or say “N/A” for anything that is not platform-specific. --> ##### SUMMARY <!--- Explain the problem briefly --> Ansible is pulling repository when using net option in docker module, even if the option pull: missing is set and the image is allready on the system. In our case there was also an error message regarding the repo pull (private repo): unauthorized: authentication required This occurs when the network is not created before using the net option. ##### STEPS TO REPRODUCE <!--- For bugs, show exactly how to reproduce the problem. For new features, show how the feature would be used. --> ``` - name: some container docker: name: "some container" image: myhost/image pull: missing restart_policy: always env: ES_HEAP_SIZE: 4g memory_limit: '8192MB' net: 'mynetwork' state: started ``` ##### EXPECTED RESULTS There should be an error message stating, that the network is not existing. Its very hard for users to identify the problem, as there is no correct error message and the message regarding image pull is misleading. After creating the network with docker network create mynetwork the container is started without pull. ##### ACTUAL RESULTS <!--- What actually happened? If possible run with high verbosity (-vvvv) --> ``` ```
main
pulling repository when using net option in docker module with nonexistent network issue type bug report component name docker ansible version ansible configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific summary ansible is pulling repository when using net option in docker module even if the option pull missing is set and the image is allready on the system in our case there was also an error message regarding the repo pull private repo unauthorized authentication required this occurs when the network is not created before using the net option steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used name some container docker name some container image myhost image pull missing restart policy always env es heap size memory limit net mynetwork state started expected results there should be an error message stating that the network is not existing its very hard for users to identify the problem as there is no correct error message and the message regarding image pull is misleading after creating the network with docker network create mynetwork the container is started without pull actual results
1
1,795
6,575,902,165
IssuesEvent
2017-09-11 17:46:13
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
shell - not reading .login/.bash_login or .profile/.bash_profile or .bashrc
affects_2.1 bug_report waiting_on_maintainer
##### ISSUE TYPE - Bug Report ##### COMPONENT NAME shell ##### ANSIBLE VERSION ``` ansible 2.1.1.0 config file = /home/harald/vagrantstuff/node/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ansible.cfg only added this line: inventory = ./hosts Otherwise it's untouched. ##### OS / ENVIRONMENT Linux (Ubuntu on host, Debian on node to be configured) ##### SUMMARY Commands via "shell" do not read the .profile, .bashrc, .login, .bash_profile of the user of who runs ansible. ##### STEPS TO REPRODUCE 1. Have set up ssh passwordless authentication between Ansible host and worker node (=node3) 2. User has /bin/bash as default shell on node 3. On node have "export LAST=bashrc" resp. "export LAST=bash_profile" in ~/.bashrc resp. ~/.bash_profile (LAST would contain the last profile file used) 4. "ssh node3" with subsequent echo $LAST will show bash_profile 5. ansible node3 -m shell -a 'echo $LAST' will show nothing ``` harald@giga:~/vagrantstuff/node/ansible$ ssh node3 ---------------------------------------------------------------- Debian GNU/Linux 8.5 (jessie) built 2016-08-28 ---------------------------------------------------------------- Last login: Thu Sep 22 10:07:18 2016 from giga.lan harald@node3:~$ echo "SHELL=$SHELL LAST=$LAST" SHELL=/bin/bash LAST=bash_profile harald@node3:~$ grep harald /etc/passwd harald:x:2000:100:Harald Kubota:/home/harald:/bin/bash harald@node3:~$ egrep 'LAST=|export LAST' .profile .bashrc .login .bash_login .bash_profile .profile:LAST=profile .profile:export LAST .bashrc:LAST=bashrc .bashrc:export LAST .login:LAST=login .login:export LAST .bash_login:LAST=bash_login .bash_login:export LAST .bash_profile:LAST=bash_profile .bash_profile:export LAST harald@node3:~$ exit logout Connection to node3 closed. 
harald@giga:~/vagrantstuff/node/ansible$ ansible node3 -m shell -a 'echo SHELL=$SHELL LAST=$LAST' node3 | SUCCESS | rc=0 >> SHELL=/bin/bash LAST= ``` Using a playbook reveals the same: the .profile or .bash_profile since /bin/bash is my shell, is not used. ``` - hosts: node3 gather_facts: true tasks: - name: Testing to run node shell: echo "SHELL=$SHELL LAST=$LAST" #environment: # PATH: "/home/harald/node:{{ ansible_env.PATH }}" args: executable: /bin/bash ``` ##### EXPECTED RESULTS The profile files should be used. Manual setting PATH environment variable in playbooks works, but this is highly inconvenient if this is needed for every shell statement. ``` harald@giga:~/vagrantstuff/node/ansible$ ansible node3 -m shell -a 'echo SHELL=$SHELL LAST=$LAST' node3 | SUCCESS | rc=0 >> SHELL=/bin/bash LAST=bash_profile ``` ##### ACTUAL RESULTS ``` harald@giga:~/vagrantstuff/node/ansible$ ansible node3 -m shell -a 'echo SHELL=$SHELL LAST=$LAST' node3 | SUCCESS | rc=0 >> SHELL=/bin/bash LAST= ``` or via ansible-playbook: ``` harald@giga:~/vagrantstuff/node/ansible$ ansible-playbook -v play.yml Using /home/harald/vagrantstuff/node/ansible/ansible.cfg as config file PLAY [node3] ******************************************************************* TASK [setup] ******************************************************************* ok: [node3] TASK [Testing to run node] ***************************************************** changed: [node3] => {"changed": true, "cmd": "echo \"SHELL=$SHELL LAST=$LAST\"", "delta": "0:00:00.008589", "end": "2016-09-22 10:20:48.320586", "rc": 0, "start": "2016-09-22 10:20:48.311997", "stderr": "", "stdout": "SHELL=/bin/bash LAST=", "stdout_lines": ["SHELL=/bin/bash LAST="], "warnings": []} PLAY RECAP ********************************************************************* node3 : ok=2 changed=1 unreachable=0 failed=0 ``` LAST is supposed to show bash_profile just like the interactive login did.
True
shell - not reading .login/.bash_login or .profile/.bash_profile or .bashrc - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME shell ##### ANSIBLE VERSION ``` ansible 2.1.1.0 config file = /home/harald/vagrantstuff/node/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ansible.cfg only added this line: inventory = ./hosts Otherwise it's untouched. ##### OS / ENVIRONMENT Linux (Ubuntu on host, Debian on node to be configured) ##### SUMMARY Commands via "shell" do not read the .profile, .bashrc, .login, .bash_profile of the user of who runs ansible. ##### STEPS TO REPRODUCE 1. Have set up ssh passwordless authentication between Ansible host and worker node (=node3) 2. User has /bin/bash as default shell on node 3. On node have "export LAST=bashrc" resp. "export LAST=bash_profile" in ~/.bashrc resp. ~/.bash_profile (LAST would contain the last profile file used) 4. "ssh node3" with subsequent echo $LAST will show bash_profile 5. ansible node3 -m shell -a 'echo $LAST' will show nothing ``` harald@giga:~/vagrantstuff/node/ansible$ ssh node3 ---------------------------------------------------------------- Debian GNU/Linux 8.5 (jessie) built 2016-08-28 ---------------------------------------------------------------- Last login: Thu Sep 22 10:07:18 2016 from giga.lan harald@node3:~$ echo "SHELL=$SHELL LAST=$LAST" SHELL=/bin/bash LAST=bash_profile harald@node3:~$ grep harald /etc/passwd harald:x:2000:100:Harald Kubota:/home/harald:/bin/bash harald@node3:~$ egrep 'LAST=|export LAST' .profile .bashrc .login .bash_login .bash_profile .profile:LAST=profile .profile:export LAST .bashrc:LAST=bashrc .bashrc:export LAST .login:LAST=login .login:export LAST .bash_login:LAST=bash_login .bash_login:export LAST .bash_profile:LAST=bash_profile .bash_profile:export LAST harald@node3:~$ exit logout Connection to node3 closed. 
harald@giga:~/vagrantstuff/node/ansible$ ansible node3 -m shell -a 'echo SHELL=$SHELL LAST=$LAST' node3 | SUCCESS | rc=0 >> SHELL=/bin/bash LAST= ``` Using a playbook reveals the same: the .profile or .bash_profile since /bin/bash is my shell, is not used. ``` - hosts: node3 gather_facts: true tasks: - name: Testing to run node shell: echo "SHELL=$SHELL LAST=$LAST" #environment: # PATH: "/home/harald/node:{{ ansible_env.PATH }}" args: executable: /bin/bash ``` ##### EXPECTED RESULTS The profile files should be used. Manual setting PATH environment variable in playbooks works, but this is highly inconvenient if this is needed for every shell statement. ``` harald@giga:~/vagrantstuff/node/ansible$ ansible node3 -m shell -a 'echo SHELL=$SHELL LAST=$LAST' node3 | SUCCESS | rc=0 >> SHELL=/bin/bash LAST=bash_profile ``` ##### ACTUAL RESULTS ``` harald@giga:~/vagrantstuff/node/ansible$ ansible node3 -m shell -a 'echo SHELL=$SHELL LAST=$LAST' node3 | SUCCESS | rc=0 >> SHELL=/bin/bash LAST= ``` or via ansible-playbook: ``` harald@giga:~/vagrantstuff/node/ansible$ ansible-playbook -v play.yml Using /home/harald/vagrantstuff/node/ansible/ansible.cfg as config file PLAY [node3] ******************************************************************* TASK [setup] ******************************************************************* ok: [node3] TASK [Testing to run node] ***************************************************** changed: [node3] => {"changed": true, "cmd": "echo \"SHELL=$SHELL LAST=$LAST\"", "delta": "0:00:00.008589", "end": "2016-09-22 10:20:48.320586", "rc": 0, "start": "2016-09-22 10:20:48.311997", "stderr": "", "stdout": "SHELL=/bin/bash LAST=", "stdout_lines": ["SHELL=/bin/bash LAST="], "warnings": []} PLAY RECAP ********************************************************************* node3 : ok=2 changed=1 unreachable=0 failed=0 ``` LAST is supposed to show bash_profile just like the interactive login did.
main
shell not reading login bash login or profile bash profile or bashrc issue type bug report component name shell ansible version ansible config file home harald vagrantstuff node ansible ansible cfg configured module search path default w o overrides configuration ansible cfg only added this line inventory hosts otherwise it s untouched os environment linux ubuntu on host debian on node to be configured summary commands via shell do not read the profile bashrc login bash profile of the user of who runs ansible steps to reproduce have set up ssh passwordless authentication between ansible host and worker node user has bin bash as default shell on node on node have export last bashrc resp export last bash profile in bashrc resp bash profile last would contain the last profile file used ssh with subsequent echo last will show bash profile ansible m shell a echo last will show nothing harald giga vagrantstuff node ansible ssh debian gnu linux jessie built last login thu sep from giga lan harald echo shell shell last last shell bin bash last bash profile harald grep harald etc passwd harald x harald kubota home harald bin bash harald egrep last export last profile bashrc login bash login bash profile profile last profile profile export last bashrc last bashrc bashrc export last login last login login export last bash login last bash login bash login export last bash profile last bash profile bash profile export last harald exit logout connection to closed harald giga vagrantstuff node ansible ansible m shell a echo shell shell last last success rc shell bin bash last using a playbook reveals the same the profile or bash profile since bin bash is my shell is not used hosts gather facts true tasks name testing to run node shell echo shell shell last last environment path home harald node ansible env path args executable bin bash expected results the profile files should be used manual setting path environment variable in playbooks works but this is highly inconvenient if 
this is needed for every shell statement harald giga vagrantstuff node ansible ansible m shell a echo shell shell last last success rc shell bin bash last bash profile actual results harald giga vagrantstuff node ansible ansible m shell a echo shell shell last last success rc shell bin bash last or via ansible playbook harald giga vagrantstuff node ansible ansible playbook v play yml using home harald vagrantstuff node ansible ansible cfg as config file play task ok task changed changed true cmd echo shell shell last last delta end rc start stderr stdout shell bin bash last stdout lines warnings play recap ok changed unreachable failed last is supposed to show bash profile just like the interactive login did
1
4,183
20,206,559,797
IssuesEvent
2022-02-11 21:09:40
aws/serverless-application-model
https://api.github.com/repos/aws/serverless-application-model
closed
Lambda Authorizer httpApi openapi.yaml not created
type/bug maintainer/need-followup
### Description: Good morning, i am trying to implement a lambda authorizer for an httpapi in apigateway. All my resources are created using cloudformation using a template.yaml file and an openapi.yaml file where the routes are defined. The securitySchemes are also defined in the openapi.yaml file using the x-amazon-apigateway-authorizer tag. My problem is that the authorizer is not created in api gateway when I deploy the stack. ### Steps to reproduce: This is my template.yaml file ``` AWSTemplateFormatVersion: "2010-09-09" Transform: AWS::Serverless-2016-10-31 Description: Resources: # API ---------------------------------------------------------------------------------------------------------------- APILogRole: Type: AWS::IAM::Role Properties: Description: 'IAM role for API Gateway Logging' AssumeRolePolicyDocument: Statement: - Effect: Allow Principal: Service: [apigateway.amazonaws.com] Action: sts:AssumeRole ManagedPolicyArns: - 'arn:aws:iam::aws:policy/service-role/AmazonAPIGatewayPushToCloudWatchLogs' APIGatewayAccount: Type: AWS::ApiGateway::Account Properties: CloudWatchRoleArn: !GetAtt APILogRole.Arn BackendHttpAPI: Type: AWS::Serverless::HttpApi DependsOn: - AuthorizerLambda - AuthorizerInvokationRole Properties: StageName: !Ref Stage DefinitionBody: Fn::Transform: Name: AWS::Include Parameters: Location: openapi.yaml # Authorizer AuthorizerLambda: Type: AWS::Serverless::Function Properties: Description: 'Authorizer for AudioMatch API' CodeUri: ./authorizer Handler: handler.handler Runtime: python3.7 Timeout: 10 MemorySize: 128 Layers: - !Ref LibrariesLayer VpcConfig: SecurityGroupIds: - !Ref NoIngressSecurityGroup SubnetIds: - !Ref PrivateSubnetA - !Ref PrivateSubnetB AuthorizerInvokationRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Action: - sts:AssumeRole Effect: Allow Principal: Service: - apigateway.amazonaws.com LambdaInvocationPolicy: Type: AWS::IAM::Policy DependsOn: 
[AuthorizerInvokationRole] Properties: PolicyName: LambdaInvocationPolicy PolicyDocument: Version: "2012-10-17" Statement: - Effect: Allow Action: lambda:InvokeFunction Resource: !GetAtt AuthorizerLambda.Arn Roles: - !Ref AuthorizerInvokationRole LibrariesLayer: Type: AWS::Serverless::LayerVersion Metadata: BuildMethod: python3.7 Properties: Description: 'Dependencies for Lambda functions' RetentionPolicy: Delete ContentUri: libs/ CompatibleRuntimes: - python3.7 ``` This is the openapi.yaml file ``` openapi: 3.0.1 components: securitySchemes: LambdaAuth: type: "http" scheme: "bearer" bearerFormat: "JWT" x-amazon-apigateway-authorizer: type: "request" identitySource: "$request.header.Authorization" authorizerCredentials: Fn::GetAtt: [AuthorizerInvokationRole,Arn] authorizerUri: Fn::Join: - "" - - "arn:aws:apigateway:" - Ref: AWS::Region - ":lambda:path/2015-03-31/functions/" - Fn::GetAtt: [AuthorizerLambda,Arn] - "/invocations" authorizerPayloadFormatVersion: "2.0" enableSimpleResponses: true authorizerResultTtlInSeconds: 3600 ``` This is the command used to build and deploy the stack: ` sam build --use-container && sam deploy ` ### Observed result: Both `sam build` and `sam deploy` compile successfully ### Expected result: LambdaAuth should be visible under Authorization->Manage authorizers in the API Gateway GUI ### Additional environment details (Ex: Windows, Mac, Amazon Linux etc) 1. OS: macOS Big Sur 2. If using SAM CLI, `sam --version`: SAM CLI, version 1.37.0 3. AWS region: eu-central-1
True
Lambda Authorizer httpApi openapi.yaml not created - ### Description: Good morning, i am trying to implement a lambda authorizer for an httpapi in apigateway. All my resources are created using cloudformation using a template.yaml file and an openapi.yaml file where the routes are defined. The securitySchemes are also defined in the openapi.yaml file using the x-amazon-apigateway-authorizer tag. My problem is that the authorizer is not created in api gateway when I deploy the stack. ### Steps to reproduce: This is my template.yaml file ``` AWSTemplateFormatVersion: "2010-09-09" Transform: AWS::Serverless-2016-10-31 Description: Resources: # API ---------------------------------------------------------------------------------------------------------------- APILogRole: Type: AWS::IAM::Role Properties: Description: 'IAM role for API Gateway Logging' AssumeRolePolicyDocument: Statement: - Effect: Allow Principal: Service: [apigateway.amazonaws.com] Action: sts:AssumeRole ManagedPolicyArns: - 'arn:aws:iam::aws:policy/service-role/AmazonAPIGatewayPushToCloudWatchLogs' APIGatewayAccount: Type: AWS::ApiGateway::Account Properties: CloudWatchRoleArn: !GetAtt APILogRole.Arn BackendHttpAPI: Type: AWS::Serverless::HttpApi DependsOn: - AuthorizerLambda - AuthorizerInvokationRole Properties: StageName: !Ref Stage DefinitionBody: Fn::Transform: Name: AWS::Include Parameters: Location: openapi.yaml # Authorizer AuthorizerLambda: Type: AWS::Serverless::Function Properties: Description: 'Authorizer for AudioMatch API' CodeUri: ./authorizer Handler: handler.handler Runtime: python3.7 Timeout: 10 MemorySize: 128 Layers: - !Ref LibrariesLayer VpcConfig: SecurityGroupIds: - !Ref NoIngressSecurityGroup SubnetIds: - !Ref PrivateSubnetA - !Ref PrivateSubnetB AuthorizerInvokationRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Action: - sts:AssumeRole Effect: Allow Principal: Service: - apigateway.amazonaws.com LambdaInvocationPolicy: Type: 
AWS::IAM::Policy DependsOn: [AuthorizerInvokationRole] Properties: PolicyName: LambdaInvocationPolicy PolicyDocument: Version: "2012-10-17" Statement: - Effect: Allow Action: lambda:InvokeFunction Resource: !GetAtt AuthorizerLambda.Arn Roles: - !Ref AuthorizerInvokationRole LibrariesLayer: Type: AWS::Serverless::LayerVersion Metadata: BuildMethod: python3.7 Properties: Description: 'Dependencies for Lambda functions' RetentionPolicy: Delete ContentUri: libs/ CompatibleRuntimes: - python3.7 ``` This is the openapi.yaml file ``` openapi: 3.0.1 components: securitySchemes: LambdaAuth: type: "http" scheme: "bearer" bearerFormat: "JWT" x-amazon-apigateway-authorizer: type: "request" identitySource: "$request.header.Authorization" authorizerCredentials: Fn::GetAtt: [AuthorizerInvokationRole,Arn] authorizerUri: Fn::Join: - "" - - "arn:aws:apigateway:" - Ref: AWS::Region - ":lambda:path/2015-03-31/functions/" - Fn::GetAtt: [AuthorizerLambda,Arn] - "/invocations" authorizerPayloadFormatVersion: "2.0" enableSimpleResponses: true authorizerResultTtlInSeconds: 3600 ``` This is the command used to build and deploy the stack: ` sam build --use-container && sam deploy ` ### Observed result: Both `sam build` and `sam deploy` compile successfully ### Expected result: LambdaAuth should be visible under Authorization->Manage authorizers in the API Gateway GUI ### Additional environment details (Ex: Windows, Mac, Amazon Linux etc) 1. OS: macOS Big Sur 2. If using SAM CLI, `sam --version`: SAM CLI, version 1.37.0 3. AWS region: eu-central-1
main
lambda authorizer httpapi openapi yaml not created description good morning i am trying to implement a lambda authorizer for an httpapi in apigateway all my resources are created using cloudformation using a template yaml file and an openapi yaml file where the routes are defined the securityschemes are also defined in the openapi yaml file using the x amazon apigateway authorizer tag my problem is that the authorizer is not created in api gateway when i deploy the stack steps to reproduce this is my template yaml file awstemplateformatversion transform aws serverless description resources api apilogrole type aws iam role properties description iam role for api gateway logging assumerolepolicydocument statement effect allow principal service action sts assumerole managedpolicyarns arn aws iam aws policy service role amazonapigatewaypushtocloudwatchlogs apigatewayaccount type aws apigateway account properties cloudwatchrolearn getatt apilogrole arn backendhttpapi type aws serverless httpapi dependson authorizerlambda authorizerinvokationrole properties stagename ref stage definitionbody fn transform name aws include parameters location openapi yaml authorizer authorizerlambda type aws serverless function properties description authorizer for audiomatch api codeuri authorizer handler handler handler runtime timeout memorysize layers ref librarieslayer vpcconfig securitygroupids ref noingresssecuritygroup subnetids ref privatesubneta ref privatesubnetb authorizerinvokationrole type aws iam role properties assumerolepolicydocument version statement action sts assumerole effect allow principal service apigateway amazonaws com lambdainvocationpolicy type aws iam policy dependson properties policyname lambdainvocationpolicy policydocument version statement effect allow action lambda invokefunction resource getatt authorizerlambda arn roles ref authorizerinvokationrole librarieslayer type aws serverless layerversion metadata buildmethod properties description dependencies 
for lambda functions retentionpolicy delete contenturi libs compatibleruntimes this is the openapi yaml file openapi components securityschemes lambdaauth type http scheme bearer bearerformat jwt x amazon apigateway authorizer type request identitysource request header authorization authorizercredentials fn getatt authorizeruri fn join arn aws apigateway ref aws region lambda path functions fn getatt invocations authorizerpayloadformatversion enablesimpleresponses true authorizerresultttlinseconds this is the command used to build and deploy the stack sam build use container sam deploy observed result both sam build and sam deploy compile successfully expected result lambdaauth should be visible under authorization manage authorizers in the api gateway gui additional environment details ex windows mac amazon linux etc os macos big sur if using sam cli sam version sam cli version aws region eu central
1
4,364
22,104,707,026
IssuesEvent
2022-06-01 16:06:08
Lissy93/dashy
https://api.github.com/repos/Lissy93/dashy
closed
[QUESTION] Deployment from source autostart. 2.0.8
🤷‍♂️ Question 👤 Awaiting Maintainer Response
### Question `yarn start # Start the app` After a deployment from source, can I auto-start Dashy? ### Category Setup and Deployment ### Please tick the boxes - [X] You are using a [supported](https://github.com/Lissy93/dashy/blob/master/.github/SECURITY.md#supported-versions) version of Dashy (check the first two digits of the version number) - [X] You've checked that this [question hasn't already been raised](https://github.com/Lissy93/dashy/issues?q=is%3Aissue) - [X] You've checked the [docs](https://github.com/Lissy93/dashy/tree/master/docs#readme) and [troubleshooting](https://github.com/Lissy93/dashy/blob/master/docs/troubleshooting.md#troubleshooting) guide - [X] You agree to the [code of conduct](https://github.com/Lissy93/dashy/blob/master/.github/CODE_OF_CONDUCT.md#contributor-covenant-code-of-conduct)
True
[QUESTION] Deployment from source autostart. 2.0.8 - ### Question `yarn start # Start the app` After a deployment from source, can I auto-start Dashy? ### Category Setup and Deployment ### Please tick the boxes - [X] You are using a [supported](https://github.com/Lissy93/dashy/blob/master/.github/SECURITY.md#supported-versions) version of Dashy (check the first two digits of the version number) - [X] You've checked that this [question hasn't already been raised](https://github.com/Lissy93/dashy/issues?q=is%3Aissue) - [X] You've checked the [docs](https://github.com/Lissy93/dashy/tree/master/docs#readme) and [troubleshooting](https://github.com/Lissy93/dashy/blob/master/docs/troubleshooting.md#troubleshooting) guide - [X] You agree to the [code of conduct](https://github.com/Lissy93/dashy/blob/master/.github/CODE_OF_CONDUCT.md#contributor-covenant-code-of-conduct)
main
deployment from source autostart question yarn start start the app after a deployment from source can i auto start dashy category setup and deployment please tick the boxes you are using a version of dashy check the first two digits of the version number you ve checked that this you ve checked the and guide you agree to the
1
2,018
6,757,619,503
IssuesEvent
2017-10-24 11:31:09
Kristinita/Erics-Green-Room
https://api.github.com/repos/Kristinita/Erics-Green-Room
closed
[Feature request] Increase the font size
css need-maintainer
### 1. Request #### 1. Preferred It would be nice if users could set a default font size that is comfortable for them. #### 2. Alternative Increase the default font size [**from 12px**](https://ux.stackexchange.com/a/18885/91045) to 16px for all users. ### 2. Rationale At the moment I have to increase the font size every time I visit the Alpha hub. This takes time. 1. Small font sizes [**can increase eye strain**](http://www.eyemagazine.com/opinion/article/eye-strain), which can negatively affect eye health, 1. A size of 16px is [**considered**](https://ux.stackexchange.com/a/746/91045) the most preferable. Thank you.
True
[Feature request] Increase the font size - ### 1. Request #### 1. Preferred It would be nice if users could set a default font size that is comfortable for them. #### 2. Alternative Increase the default font size [**from 12px**](https://ux.stackexchange.com/a/18885/91045) to 16px for all users. ### 2. Rationale At the moment I have to increase the font size every time I visit the Alpha hub. This takes time. 1. Small font sizes [**can increase eye strain**](http://www.eyemagazine.com/opinion/article/eye-strain), which can negatively affect eye health, 1. A size of 16px is [**considered**](https://ux.stackexchange.com/a/746/91045) the most preferable. Thank you.
main
increase the font size request preferred it would be nice if users could set a default font size that is comfortable for them alternative increase the default font size to for all users rationale at the moment i have to increase the font size every time i visit the alpha hub this takes time small font sizes which can negatively affect eye health a size of is considered the most preferable thank you
1
214,356
7,269,233,472
IssuesEvent
2018-02-20 13:01:11
webcompat/web-bugs
https://api.github.com/repos/webcompat/web-bugs
closed
mobile.twitter.com - site is not usable
browser-firefox-mobile priority-critical
<!-- @browser: Firefox Mobile 60.0 --> <!-- @ua_header: Mozilla/5.0 (Android 7.1.2; Mobile; rv:60.0) Gecko/60.0 Firefox/60.0 --> <!-- @reported_with: mobile-reporter --> **URL**: https://mobile.twitter.com/karonmoser/status/965327686246764545 **Browser / Version**: Firefox Mobile 60.0 **Operating System**: Android 7.1.2 **Tested Another Browser**: Unknown **Problem type**: Site is not usable **Description**: it doesn't show the tweet **Steps to Reproduce**: It worked after using the desktop mode then changing to mobile mode _From [webcompat.com](https://webcompat.com/) with ❤️_
1.0
mobile.twitter.com - site is not usable - <!-- @browser: Firefox Mobile 60.0 --> <!-- @ua_header: Mozilla/5.0 (Android 7.1.2; Mobile; rv:60.0) Gecko/60.0 Firefox/60.0 --> <!-- @reported_with: mobile-reporter --> **URL**: https://mobile.twitter.com/karonmoser/status/965327686246764545 **Browser / Version**: Firefox Mobile 60.0 **Operating System**: Android 7.1.2 **Tested Another Browser**: Unknown **Problem type**: Site is not usable **Description**: it doesn't show the tweet **Steps to Reproduce**: It worked after using the desktop mode then changing to mobile mode _From [webcompat.com](https://webcompat.com/) with ❤️_
non_main
mobile twitter com site is not usable url browser version firefox mobile operating system android tested another browser unknown problem type site is not usable description it doesn t show the tweet steps to reproduce it worked after using the desktop mode then changing to mobile mode from with ❤️
0
1,946
6,625,477,672
IssuesEvent
2017-09-22 15:34:01
Kristinita/SashaMiscellaneous
https://api.github.com/repos/Kristinita/SashaMiscellaneous
closed
[Bug] English localization
need-maintainer
### 1. Summary On Windows 10 EN, question marks appear in the interface instead of Cyrillic characters. ### 2. Expected behavior To see how the program's interface looks under different localizations, I use [**Locale Emulator**](https://github.com/xupefei/Locale-Emulator). I open `ViDi-DC.exe` under these localizations: `Russian (Russia)`: ![RU](http://i.imgur.com/1J32tMo.png) Under `Japanese (Japan)` and `Chinese`, the Cyrillic characters of the interface are also displayed correctly. ### 3. Actual behavior `English (United States)`: ![EN](http://i.imgur.com/Uk9Oajt.png) The same behavior occurs for `German (Germany)`. ### 4. Settings An English Windows 10 was installed from the original image, [**as described here**](http://pcportal.org.ru/forum/60-6346-1). To make the interface display correctly for non-Unicode programs, I configured it as described [**here**](http://www.digitalcitizen.life/changing-display-language-used-non-unicode-programs?page=1): Current system locale → `Russian (Russia)`. After this setting, the mojibake problem in the interface disappeared for all other programs, but it remains in ViDi-DC. ### 5. Steps to reproduce I downloaded ViDi-DC from the official site and configured nothing (except the avatar) → I get the actual behavior. I rebooted into Windows 10 Education 32-bit RU → running `ViDi-DC.exe` under different localizations, I get the same behavior as in sections 2 and 3. Together with the above, this suggests that the problem is in ViDi-DC rather than in my personal settings. ### 6. Environment **Operating system **and** version:** Windows 10 Enterprise LTSB 64-bit EN **ViDi-DC:** 0.0.0.1 Thank you.
True
[Bug] English localization - ### 1. Summary On Windows 10 EN, question marks appear in the interface instead of Cyrillic characters. ### 2. Expected behavior To see how the program's interface looks under different localizations, I use [**Locale Emulator**](https://github.com/xupefei/Locale-Emulator). I open `ViDi-DC.exe` under these localizations: `Russian (Russia)`: ![RU](http://i.imgur.com/1J32tMo.png) Under `Japanese (Japan)` and `Chinese`, the Cyrillic characters of the interface are also displayed correctly. ### 3. Actual behavior `English (United States)`: ![EN](http://i.imgur.com/Uk9Oajt.png) The same behavior occurs for `German (Germany)`. ### 4. Settings An English Windows 10 was installed from the original image, [**as described here**](http://pcportal.org.ru/forum/60-6346-1). To make the interface display correctly for non-Unicode programs, I configured it as described [**here**](http://www.digitalcitizen.life/changing-display-language-used-non-unicode-programs?page=1): Current system locale → `Russian (Russia)`. After this setting, the mojibake problem in the interface disappeared for all other programs, but it remains in ViDi-DC. ### 5. Steps to reproduce I downloaded ViDi-DC from the official site and configured nothing (except the avatar) → I get the actual behavior. I rebooted into Windows 10 Education 32-bit RU → running `ViDi-DC.exe` under different localizations, I get the same behavior as in sections 2 and 3. Together with the above, this suggests that the problem is in ViDi-DC rather than in my personal settings. ### 6. Environment **Operating system **and** version:** Windows 10 Enterprise LTSB 64-bit EN **ViDi-DC:** 0.0.0.1 Thank you.
main
english localization summary on windows en question marks appear in the interface instead of cyrillic characters expected behavior to see how the program s interface looks under different localizations i use i open vidi dc exe under these localizations russian russia under japanese japan and chinese the cyrillic characters of the interface are also displayed correctly actual behavior english united states the same behavior occurs for german germany settings an english windows was installed from the original image to make the interface display correctly for non unicode programs i configured it as described current system locale → russian russia after this setting the mojibake problem in the interface disappeared for all other programs but it remains in vidi dc steps to reproduce i downloaded vidi dc from the official site and configured nothing except the avatar → i get the actual behavior i rebooted into windows education bit ru → running vidi dc exe under different localizations i get the same behavior as in sections and together with the above this suggests that the problem is in vidi dc rather than in my personal settings environment operating system and version windows enterprise ltsb bit en vidi dc thank you
1
3,084
11,713,794,937
IssuesEvent
2020-03-09 11:00:11
pace/bricks
https://api.github.com/repos/pace/bricks
opened
json api type errors are not helpful
T::Maintainance
### Problem The `github.com/google/jsonapi` unmarshal func does type check the request but doesn't inform about the particular issue. If the caller is using a string instead of a number, the generic error `Invalid type provided` is returned. ### Solution The library needs to be changed to report the issue.
True
json api type errors are not helpful - ### Problem The `github.com/google/jsonapi` unmarshal func does type check the request but doesn't inform about the particular issue. If the caller is using a string instead of a number, the generic error `Invalid type provided` is returned. ### Solution The library needs to be changed to report the issue.
main
json api type errors are not helpful problem the github com google jsonapi unmarshal func does type check the request but doesn t inform about the particular issue if the caller is using a string instead of a number the generic error invalid type provided is returned solution the library needs to be changed to report the issue
1
245,855
26,569,472,667
IssuesEvent
2023-01-21 01:07:24
nidhi7598/linux-3.0.35_CVE-2022-45934
https://api.github.com/repos/nidhi7598/linux-3.0.35_CVE-2022-45934
opened
CVE-2021-28972 (Medium) detected in linux-stable-rtv3.8.6
security vulnerability
## CVE-2021-28972 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv3.8.6</b></p></summary> <p> <p>Julia Cartwright's fork of linux-stable-rt.git</p> <p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (3)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/pci/hotplug/rpadlpar_sysfs.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/pci/hotplug/rpadlpar_sysfs.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/pci/hotplug/rpadlpar_sysfs.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In drivers/pci/hotplug/rpadlpar_sysfs.c in the Linux kernel through 5.11.8, the RPA PCI Hotplug driver has a user-tolerable buffer overflow when writing a new device name to the driver from userspace, allowing userspace to write data to the kernel stack frame directly. This occurs because add_slot_store and remove_slot_store mishandle drc_name '\0' termination, aka CID-cc7a0bb058b8. 
<p>Publish Date: 2021-03-22 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-28972>CVE-2021-28972</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.7</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: High - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-28972">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-28972</a></p> <p>Release Date: 2021-03-22</p> <p>Fix Resolution: v4.4.263, v4.9.263, v4.14.227, v4.19.183, v5.4.108, v5.10.26, v5.11.9, v5.12-rc4</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2021-28972 (Medium) detected in linux-stable-rtv3.8.6 - ## CVE-2021-28972 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv3.8.6</b></p></summary> <p> <p>Julia Cartwright's fork of linux-stable-rt.git</p> <p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (3)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/pci/hotplug/rpadlpar_sysfs.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/pci/hotplug/rpadlpar_sysfs.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/pci/hotplug/rpadlpar_sysfs.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In drivers/pci/hotplug/rpadlpar_sysfs.c in the Linux kernel through 5.11.8, the RPA PCI Hotplug driver has a user-tolerable buffer overflow when writing a new device name to the driver from userspace, allowing userspace to write data to the kernel stack frame directly. This occurs because add_slot_store and remove_slot_store mishandle drc_name '\0' termination, aka CID-cc7a0bb058b8. 
<p>Publish Date: 2021-03-22 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-28972>CVE-2021-28972</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.7</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: High - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-28972">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-28972</a></p> <p>Release Date: 2021-03-22</p> <p>Fix Resolution: v4.4.263, v4.9.263, v4.14.227, v4.19.183, v5.4.108, v5.10.26, v5.11.9, v5.12-rc4</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_main
cve medium detected in linux stable cve medium severity vulnerability vulnerable library linux stable julia cartwright s fork of linux stable rt git library home page a href found in base branch master vulnerable source files drivers pci hotplug rpadlpar sysfs c drivers pci hotplug rpadlpar sysfs c drivers pci hotplug rpadlpar sysfs c vulnerability details in drivers pci hotplug rpadlpar sysfs c in the linux kernel through the rpa pci hotplug driver has a user tolerable buffer overflow when writing a new device name to the driver from userspace allowing userspace to write data to the kernel stack frame directly this occurs because add slot store and remove slot store mishandle drc name termination aka cid publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required high user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
0
78,643
27,659,955,722
IssuesEvent
2023-03-12 12:08:44
primefaces/primefaces
https://api.github.com/repos/primefaces/primefaces
closed
DataTable RowExpansion: missing headerText
:lady_beetle: defect :bangbang: needs-triage
### Describe the bug When creating a sub `DataTable` under `RowExpansion` headerText is missing if the parent DataTable has `reflow="true"` parameter. When set `reflow="false"`, headerText appears. ### Reproducer You can find the output as screen shot and sample codes inside issureport.zip [issuereport.zip](https://github.com/primefaces/primefaces/files/10949170/issuereport.zip) <img width="656" alt="Screenshot 2023-03-11 at 19 17 30" src="https://user-images.githubusercontent.com/9448030/224505521-0bbb243f-fd94-4d24-ad1a-276d4fdaf9df.png"> ### Expected behavior headerText must be shown ### PrimeFaces edition None ### PrimeFaces version 13.0.0-SNAPSHOT ### Theme _No response_ ### JSF implementation Mojarra ### JSF version 2.3.14 ### Java version 11 ### Browser(s) Safari
1.0
DataTable RowExpansion: missing headerText - ### Describe the bug When creating a sub `DataTable` under `RowExpansion` headerText is missing if the parent DataTable has `reflow="true"` parameter. When set `reflow="false"`, headerText appears. ### Reproducer You can find the output as screen shot and sample codes inside issureport.zip [issuereport.zip](https://github.com/primefaces/primefaces/files/10949170/issuereport.zip) <img width="656" alt="Screenshot 2023-03-11 at 19 17 30" src="https://user-images.githubusercontent.com/9448030/224505521-0bbb243f-fd94-4d24-ad1a-276d4fdaf9df.png"> ### Expected behavior headerText must be shown ### PrimeFaces edition None ### PrimeFaces version 13.0.0-SNAPSHOT ### Theme _No response_ ### JSF implementation Mojarra ### JSF version 2.3.14 ### Java version 11 ### Browser(s) Safari
non_main
datatable rowexpansion missing headertext describe the bug when creating a sub datatable under rowexpansion headertext is missing if the parent datatable has reflow true parameter when set reflow false headertext appears reproducer you can find the output as screen shot and sample codes inside issureport zip img width alt screenshot at src expected behavior headertext must be shown primefaces edition none primefaces version snapshot theme no response jsf implementation mojarra jsf version java version browser s safari
0
111,458
11,733,160,759
IssuesEvent
2020-03-11 06:16:45
MarkBind/markbind
https://api.github.com/repos/MarkBind/markbind
closed
How to use multiple features for code blocks
a-Documentation 📝 c.Bug 🐛 p.Medium
<!-- Before opening a new issue, please search existing issues: https://github.com/MarkBind/markbind/issues --> **Is your request related to a problem?** Nope. It is a request for documentation. <!-- Provide a clear and concise description of what the problem is. Ex. I have an issue when [...] --> There is no example shown regarding how to use multiple features with code blocks. **Describe the solution you'd like** <!-- Provide a clear and concise description of what you want to happen. --> Give an example using multiple feature of code blocks so that it becomes easier for the user to see and understand. **Describe alternatives you've considered** <!-- Let us know about other solutions you've tried or researched. --> N.A. **Additional context** <!-- Is there anything else you can add about the proposal? You might want to link to related issues here if you haven't already. --> N.A.
1.0
How to use multiple features for code blocks - <!-- Before opening a new issue, please search existing issues: https://github.com/MarkBind/markbind/issues --> **Is your request related to a problem?** Nope. It is a request for documentation. <!-- Provide a clear and concise description of what the problem is. Ex. I have an issue when [...] --> There is no example shown regarding how to use multiple features with code blocks. **Describe the solution you'd like** <!-- Provide a clear and concise description of what you want to happen. --> Give an example using multiple feature of code blocks so that it becomes easier for the user to see and understand. **Describe alternatives you've considered** <!-- Let us know about other solutions you've tried or researched. --> N.A. **Additional context** <!-- Is there anything else you can add about the proposal? You might want to link to related issues here if you haven't already. --> N.A.
non_main
how to use multiple features for code blocks before opening a new issue please search existing issues is your request related to a problem nope it is a request for documentation provide a clear and concise description of what the problem is ex i have an issue when there is no example shown regarding how to use multiple features with code blocks describe the solution you d like provide a clear and concise description of what you want to happen give an example using multiple feature of code blocks so that it becomes easier for the user to see and understand describe alternatives you ve considered let us know about other solutions you ve tried or researched n a additional context is there anything else you can add about the proposal you might want to link to related issues here if you haven t already n a
0
1,960
6,688,361,153
IssuesEvent
2017-10-08 13:54:48
cannawen/metric_units_reddit_bot
https://api.github.com/repos/cannawen/metric_units_reddit_bot
closed
Feedback request - Process discussion
discussion maintainer approved
EDIT: [CONTRIBUTE.md](https://github.com/cannawen/metric_units_reddit_bot/blob/master/CONTRIBUTING.md) document created, but feel free to keep discussing process-related improvements that can be made --- Hey everyone! I am having some problems with our current process, where Alice volunteers to fix an issue, but then Bob makes a PR for it (either not noticing the "already-assigned" tag, or they started working on it without commenting on the issue). There are also times I don't know a story is being worked on, and suddenly a PR appears. It's awesome that unexpected features are "magically" getting done, but also a bit scary While it's amazing to have so much interest, I feel like this lack of transparency is not sustainable because there may be duplicated efforts (and that would 100% suck). I am thinking of making a CONTRIBUTING.md document that states if you start working on something, you must make a github issue to let everyone else know. Some concerns/questions I have: 1) Should PRs without open github issues be rejected? Perhaps if there is no conflict, the PR can be accepted with a warning for next time? 2) If someone opens a PR for an issue they were not assigned, should the PR be rejected? What if the original assignee does not finish the issue? 3) How long should an issue be assigned to a person for? Should we enforce a "deadline" for when a feature must be done? I am opening this issue for discussion, has anyone else run into this kind of problem before? Can anyone think of a good solution? Any thoughts or feedback are welcome.
True
Feedback request - Process discussion - EDIT: [CONTRIBUTE.md](https://github.com/cannawen/metric_units_reddit_bot/blob/master/CONTRIBUTING.md) document created, but feel free to keep discussing process-related improvements that can be made --- Hey everyone! I am having some problems with our current process, where Alice volunteers to fix an issue, but then Bob makes a PR for it (either not noticing the "already-assigned" tag, or they started working on it without commenting on the issue). There are also times I don't know a story is being worked on, and suddenly a PR appears. It's awesome that unexpected features are "magically" getting done, but also a bit scary While it's amazing to have so much interest, I feel like this lack of transparency is not sustainable because there may be duplicated efforts (and that would 100% suck). I am thinking of making a CONTRIBUTING.md document that states if you start working on something, you must make a github issue to let everyone else know. Some concerns/questions I have: 1) Should PRs without open github issues be rejected? Perhaps if there is no conflict, the PR can be accepted with a warning for next time? 2) If someone opens a PR for an issue they were not assigned, should the PR be rejected? What if the original assignee does not finish the issue? 3) How long should an issue be assigned to a person for? Should we enforce a "deadline" for when a feature must be done? I am opening this issue for discussion, has anyone else run into this kind of problem before? Can anyone think of a good solution? Any thoughts or feedback are welcome.
main
feedback request process discussion edit document created but feel free to keep discussing process related improvements that can be made hey everyone i am having some problems with our current process where alice volunteers to fix an issue but then bob makes a pr for it either not noticing the already assigned tag or they started working on it without commenting on the issue there are also times i don t know a story is being worked on and suddenly a pr appears it s awesome that unexpected features are magically getting done but also a bit scary while it s amazing to have so much interest i feel like this lack of transparency is not sustainable because there may be duplicated efforts and that would suck i am thinking of making a contributing md document that states if you start working on something you must make a github issue to let everyone else know some concerns questions i have should prs without open github issues be rejected perhaps if there is no conflict the pr can be accepted with a warning for next time if someone opens a pr for an issue they were not assigned should the pr be rejected what if the original assignee does not finish the issue how long should an issue be assigned to a person for should we enforce a deadline for when a feature must be done i am opening this issue for discussion has anyone else run into this kind of problem before can anyone think of a good solution any thoughts or feedback are welcome
1
31,664
5,968,101,632
IssuesEvent
2017-05-30 17:23:04
geodynamics/aspect
https://api.github.com/repos/geodynamics/aspect
closed
Remove doc/modules/todo.h
documentation starter project
This file was last touched in 2014, and many of the items (although not all) are already addressed. Someone should go through the list of items, delete those that are fixed, and convert the others into github issues. Then remove the file.
1.0
Remove doc/modules/todo.h - This file was last touched in 2014, and many of the items (although not all) are already addressed. Someone should go through the list of items, delete those that are fixed, and convert the others into github issues. Then remove the file.
non_main
remove doc modules todo h this file was last touched in and many of the items although not all are already addressed someone should go through the list of items delete those that are fixed and convert the others into github issues then remove the file
0
26,315
12,399,036,371
IssuesEvent
2020-05-21 03:49:29
vmware/singleton
https://api.github.com/repos/vmware/singleton
closed
[REQUIREMENT] Extract pattern data about compact number formats
area/service kind/feature priority/high
**Is your feature request related to a problem? Please describe.** To support the compact representation of a number: like 987654321 => 988M The reference link: https://www.unicode.org/reports/tr35/tr35-numbers.html#Compact_Number_Formats eg: locale: 'en-GB' formatOptions:{ notation: "compact" , compactDisplay: "short" }) input:(987654321)); output:( 988M **Describe the solution you'd like** The backend service can provide the corresponding pattern data regarding the 'decimalFormats-[numberSystem]' For the first version, the number system is the value of 'defaultNumberingSystem' in raw pattern data: https://github.com/unicode-cldr/cldr-numbers-full/blob/master/main/en/numbers.json **Describe alternatives you've considered** A clear and concise description of any alternative solution or feature you've considered. **Additional context** CLDR data: https://github.com/unicode-cldr/cldr-numbers-full/blob/master/main/en/numbers.json
1.0
[REQUIREMENT] Extract pattern data about compact number formats - **Is your feature request related to a problem? Please describe.** To support the compact representation of a number: like 987654321 => 988M The reference link: https://www.unicode.org/reports/tr35/tr35-numbers.html#Compact_Number_Formats eg: locale: 'en-GB' formatOptions:{ notation: "compact" , compactDisplay: "short" }) input:(987654321)); output:( 988M **Describe the solution you'd like** The backend service can provide the corresponding pattern data regarding the 'decimalFormats-[numberSystem]' For the first version, the number system is the value of 'defaultNumberingSystem' in raw pattern data: https://github.com/unicode-cldr/cldr-numbers-full/blob/master/main/en/numbers.json **Describe alternatives you've considered** A clear and concise description of any alternative solution or feature you've considered. **Additional context** CLDR data: https://github.com/unicode-cldr/cldr-numbers-full/blob/master/main/en/numbers.json
non_main
extract pattern data about compact number formats is your feature request related to a problem please describe to support the compact representation of a number like the reference link eg locale en gb formatoptions notation compact compactdisplay short input output describe the solution you d like the backend service can provide the corresponding pattern data regarding the decimalformats for the first version the number system is the value of defaultnumberingsystem in raw pattern data describe alternatives you ve considered a clear and concise description of any alternative solution or feature you ve considered additional context cldr data
0
454,164
13,095,836,775
IssuesEvent
2020-08-03 14:44:30
FAIRsharing/fairsharing.github.io
https://api.github.com/repos/FAIRsharing/fairsharing.github.io
closed
improving login popup style
High priority enhancement
- [x] changing the background - [x] changing buttons color - [x] center Login title
1.0
improving login popup style - - [x] changing the background - [x] changing buttons color - [x] center Login title
non_main
improving login popup style changing the background changing buttons color center login title
0
347,390
31,161,527,712
IssuesEvent
2023-08-16 16:20:20
azurenoops/ref-scca-enclave-landing-zone-starter
https://api.github.com/repos/azurenoops/ref-scca-enclave-landing-zone-starter
opened
TEST CASE - Deploy LZ Starter to MAG, Single Subscription, using Terraform CLI, Remote State Storage
test case
This issue is a test case for landing zone starter deployment. ### This issue is for a: (mark with an `x`) ``` - [ ] bug report -> please search issues before submitting - [X] test case - [ ] feature request - [ ] documentation issue or request - [ ] regression (a behavior that used to work and stopped in a new release) ``` ### Test steps 0. (Optionally) Create a branch (or Fork) for testing 1. Clone repository to your local computer (or into a Codespace) 1. `cd <cloned-dir>/infrastructure/terraform` 1. `cp ../../tfvars/parameters.tfvars .` 1. `terraform version` (should be version >= 1.4.6) 1. [Authenticate Terraform with Azure](https://learn.microsoft.com/en-us/azure/developer/terraform/authenticate-to-azure?tabs=bash). Note that authenticating with Azure Government requires the use of the ARM_ENVIRONMENT environment variable and `az cloud set` 1. Configure Terraform for [Remote state storage](https://learn.microsoft.com/en-us/azure/developer/terraform/store-state-in-azure-storage?tabs=azure-cli). A helper script `az-remote-backend.sh` is described in [this doc](https://github.com/azurenoops/ref-scca-enclave-landing-zone-starter/blob/main/docs/00-Remote-State-Storage.md), but YMMV. 3. `terraform init` 4. `terraform plan -out test.plan` 5. `terraform apply -f test.plan` ### Other helpful details
1.0
TEST CASE - Deploy LZ Starter to MAG, Single Subscription, using Terraform CLI, Remote State Storage - This issue is a test case for landing zone starter deployment. ### This issue is for a: (mark with an `x`) ``` - [ ] bug report -> please search issues before submitting - [X] test case - [ ] feature request - [ ] documentation issue or request - [ ] regression (a behavior that used to work and stopped in a new release) ``` ### Test steps 0. (Optionally) Create a branch (or Fork) for testing 1. Clone repository to your local computer (or into a Codespace) 1. `cd <cloned-dir>/infrastructure/terraform` 1. `cp ../../tfvars/parameters.tfvars .` 1. `terraform version` (should be version >= 1.4.6) 1. [Authenticate Terraform with Azure](https://learn.microsoft.com/en-us/azure/developer/terraform/authenticate-to-azure?tabs=bash). Note that authenticating with Azure Government requires the use of the ARM_ENVIRONMENT environment variable and `az cloud set` 1. Configure Terraform for [Remote state storage](https://learn.microsoft.com/en-us/azure/developer/terraform/store-state-in-azure-storage?tabs=azure-cli). A helper script `az-remote-backend.sh` is described in [this doc](https://github.com/azurenoops/ref-scca-enclave-landing-zone-starter/blob/main/docs/00-Remote-State-Storage.md), but YMMV. 3. `terraform init` 4. `terraform plan -out test.plan` 5. `terraform apply -f test.plan` ### Other helpful details
non_main
test case deploy lz starter to mag single subscription using terraform cli remote state storage this issue is a test case for landing zone starter deployment this issue is for a mark with an x bug report please search issues before submitting test case feature request documentation issue or request regression a behavior that used to work and stopped in a new release test steps optionally create a branch or fork for testing clone repository to your local computer or into a codespace cd infrastructure terraform cp tfvars parameters tfvars terraform version should be version note that authenticating with azure government requires the use of the arm environment environment variable and az cloud set configure terraform for a helper script az remote backend sh is described in but ymmv terraform init terraform plan out test plan terraform apply f test plan other helpful details
0
1,629
6,572,656,290
IssuesEvent
2017-09-11 04:08:01
ansible/ansible-modules-extras
https://api.github.com/repos/ansible/ansible-modules-extras
closed
maven_artifact fails if repository_url not specified
affects_2.2 bug_report waiting_on_maintainer
##### ISSUE TYPE - Bug Report ##### COMPONENT NAME maven_artifact ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION no ##### OS / ENVIRONMENT N/A ##### SUMMARY `repository_url` is supposed to default to `http://repo1.maven.org/maven2` if not specified, however the change to support S3 URLs parsed the URL parameter (which defaults to `None`) before it can be changed to the default value. ##### STEPS TO REPRODUCE ``` --- - hosts: localhost connection: local tasks: - maven_artifact: group_id=log4j artifact_id=log4j version=1.2.17 dest=/tmp/log4j-1.2.17.jar ``` ##### EXPECTED RESULTS File is downloaded . ##### ACTUAL RESULTS Error occurs: ``` fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "msg": "url parsing went wrong 'NoneType' object has no attribute 'find'"} ```
True
maven_artifact fails if repository_url not specified - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME maven_artifact ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION no ##### OS / ENVIRONMENT N/A ##### SUMMARY `repository_url` is supposed to default to `http://repo1.maven.org/maven2` if not specified, however the change to support S3 URLs parsed the URL parameter (which defaults to `None`) before it can be changed to the default value. ##### STEPS TO REPRODUCE ``` --- - hosts: localhost connection: local tasks: - maven_artifact: group_id=log4j artifact_id=log4j version=1.2.17 dest=/tmp/log4j-1.2.17.jar ``` ##### EXPECTED RESULTS File is downloaded . ##### ACTUAL RESULTS Error occurs: ``` fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "msg": "url parsing went wrong 'NoneType' object has no attribute 'find'"} ```
main
maven artifact fails if repository url not specified issue type bug report component name maven artifact ansible version ansible config file configured module search path default w o overrides configuration no os environment n a summary repository url is supposed to default to if not specified however the change to support urls parsed the url parameter which defaults to none before it can be changed to the default value steps to reproduce hosts localhost connection local tasks maven artifact group id artifact id version dest tmp jar expected results file is downloaded actual results error occurs fatal failed changed false failed true msg url parsing went wrong nonetype object has no attribute find
1
248,251
18,858,053,230
IssuesEvent
2021-11-12 09:19:57
cookiedan42/pe
https://api.github.com/repos/cookiedan42/pe
opened
[UG] Creating your first order screenshots
type.DocumentationBug severity.Medium
Wall of text with no screenshots ![image.png](https://raw.githubusercontent.com/cookiedan42/pe/main/files/3cc966f4-0460-409d-a883-c613769aac6e.png) This is a problem for sections that ask the reader to notice something ![image.png](https://raw.githubusercontent.com/cookiedan42/pe/main/files/85a30136-b3f6-4017-a5a2-b7ec376ab03f.png) ![image.png](https://raw.githubusercontent.com/cookiedan42/pe/main/files/ece61f66-a9f3-42c5-8f3d-5f76a5a8d335.png) but a new user would not be likely to know where to look or what specific changes to notice <!--session: 1636703198206-186e467b-e921-4ac2-801b-2f2ffdd7e511--> <!--Version: Web v3.4.1-->
1.0
[UG] Creating your first order screenshots - Wall of text with no screenshots ![image.png](https://raw.githubusercontent.com/cookiedan42/pe/main/files/3cc966f4-0460-409d-a883-c613769aac6e.png) This is a problem for sections that ask the reader to notice something ![image.png](https://raw.githubusercontent.com/cookiedan42/pe/main/files/85a30136-b3f6-4017-a5a2-b7ec376ab03f.png) ![image.png](https://raw.githubusercontent.com/cookiedan42/pe/main/files/ece61f66-a9f3-42c5-8f3d-5f76a5a8d335.png) but a new user would not be likely to know where to look or what specific changes to notice <!--session: 1636703198206-186e467b-e921-4ac2-801b-2f2ffdd7e511--> <!--Version: Web v3.4.1-->
non_main
creating your first order screenshots wall of text with no screenshots this is a problem for sections that ask the reader to notice something but a new user would not be likely to know where to look or what specific changes to notice
0
63,066
17,365,970,475
IssuesEvent
2021-07-30 07:19:44
vector-im/element-web
https://api.github.com/repos/vector-im/element-web
closed
no UI when switching room whilst setting up a voice/video call
A-VoIP P1 S-Tolerable T-Defect
If you set up a voice/video call and whilst it's ringing switch to a different room, there is zero UI to tell you what's going on. We should at least show the 'ongoing call' UI at the top of LeftPanel
1.0
no UI when switching room whilst setting up a voice/video call - If you set up a voice/video call and whilst it's ringing switch to a different room, there is zero UI to tell you what's going on. We should at least show the 'ongoing call' UI at the top of LeftPanel
non_main
no ui when switching room whilst setting up a voice video call if you set up a voice video call and whilst it s ringing switch to a different room there is zero ui to tell you what s going on we should at least show the ongoing call ui at the top of leftpanel
0
584,916
17,466,951,462
IssuesEvent
2021-08-06 18:19:22
azuline/repertoire
https://api.github.com/repos/azuline/repertoire
opened
polish configuration page interface
enhancement frontend priority-high
its really a rough MVP right now, needs to be polished to be at least not shit
1.0
polish configuration page interface - its really a rough MVP right now, needs to be polished to be at least not shit
non_main
polish configuration page interface its really a rough mvp right now needs to be polished to be at least not shit
0
2,397
8,514,698,855
IssuesEvent
2018-10-31 19:19:22
TravisSpark/spark-website
https://api.github.com/repos/TravisSpark/spark-website
closed
Remove Coding Event
maintainence
### Checklist - [x] Searched for, and did not find, duplicate [issue](https://github.com/TravisSpark/spark-website/issues) - [x] Indicated whether the issue is a bug or a feature - [x] Focused on one specific bug/feature - [x] Gave a concise and relevant name - [x] Created clear and concise description - [x] Outlined which components are affected - [x] Assigned issue to project, appropriate contributors, and relevant labels <!-- Edit as Appropriate --> ### Issue Type: Feature <!-- Pick one --> ### Description 26 Oct Coding event is over. Should be removed from events page and contact form. ### Affected Components - Contact.md - Contact.yml - Events.yml - Google Form
True
Remove Coding Event - ### Checklist - [x] Searched for, and did not find, duplicate [issue](https://github.com/TravisSpark/spark-website/issues) - [x] Indicated whether the issue is a bug or a feature - [x] Focused on one specific bug/feature - [x] Gave a concise and relevant name - [x] Created clear and concise description - [x] Outlined which components are affected - [x] Assigned issue to project, appropriate contributors, and relevant labels <!-- Edit as Appropriate --> ### Issue Type: Feature <!-- Pick one --> ### Description 26 Oct Coding event is over. Should be removed from events page and contact form. ### Affected Components - Contact.md - Contact.yml - Events.yml - Google Form
main
remove coding event checklist searched for and did not find duplicate indicated whether the issue is a bug or a feature focused on one specific bug feature gave a concise and relevant name created clear and concise description outlined which components are affected assigned issue to project appropriate contributors and relevant labels issue type feature description oct coding event is over should be removed from events page and contact form affected components contact md contact yml events yml google form
1
5,726
30,276,877,437
IssuesEvent
2023-07-07 20:36:15
bazelbuild/intellij
https://api.github.com/repos/bazelbuild/intellij
closed
IntelliJ Plugin Aspect tests are failing with Bazel@HEAD on CI
type: bug product: IntelliJ topic: bazel awaiting-maintainer
https://buildkite.com/bazel/bazel-at-head-plus-downstream/builds/3149#01892e1b-b74b-41a1-834a-6a032d8314fe Platform : Ubuntu Aspect tests 18.04, 20.04 Logs : ``` //aspect/testing/tests/src/com/google/idea/blaze/aspect/java/javabinary:JavaBinaryTest FAILED //aspect/testing/tests/src/com/google/idea/blaze/aspect/java/javatest:JavaTestTest FAILED ``` Steps : ``` git clone -v https://github.com/bazelbuild/intellij.git git reset 023a786e711d47d1ce873ed0ad53b4ff25089ad5 --hard export USE_BAZEL_VERSION=fcfefc15b17dd70ab249e3d8d09d1ccc5da7d347 bazel test --define=ij_product=intellij-latest --test_output=errors --notrim_test_configuration -- //aspect/testing/... ``` Culprit :https://github.com/bazelbuild/bazel/commit/67c31eedb189d0b320bd4b2a011f25e9c57548c6 CC Green team @Wyverald
True
IntelliJ Plugin Aspect tests are failing with Bazel@HEAD on CI - https://buildkite.com/bazel/bazel-at-head-plus-downstream/builds/3149#01892e1b-b74b-41a1-834a-6a032d8314fe Platform : Ubuntu Aspect tests 18.04, 20.04 Logs : ``` //aspect/testing/tests/src/com/google/idea/blaze/aspect/java/javabinary:JavaBinaryTest FAILED //aspect/testing/tests/src/com/google/idea/blaze/aspect/java/javatest:JavaTestTest FAILED ``` Steps : ``` git clone -v https://github.com/bazelbuild/intellij.git git reset 023a786e711d47d1ce873ed0ad53b4ff25089ad5 --hard export USE_BAZEL_VERSION=fcfefc15b17dd70ab249e3d8d09d1ccc5da7d347 bazel test --define=ij_product=intellij-latest --test_output=errors --notrim_test_configuration -- //aspect/testing/... ``` Culprit :https://github.com/bazelbuild/bazel/commit/67c31eedb189d0b320bd4b2a011f25e9c57548c6 CC Green team @Wyverald
main
intellij plugin aspect tests are failing with bazel head on ci platform ubuntu aspect tests logs aspect testing tests src com google idea blaze aspect java javabinary javabinarytest failed aspect testing tests src com google idea blaze aspect java javatest javatesttest failed steps git clone v git reset hard export use bazel version bazel test define ij product intellij latest test output errors notrim test configuration aspect testing culprit cc green team wyverald
1
129,894
5,105,440,556
IssuesEvent
2017-01-05 07:25:41
HuskieRobotics/roborioExpansion
https://api.github.com/repos/HuskieRobotics/roborioExpansion
closed
LCD Newline characters
bug High-Priority labview roboRIO
To go to the beginning of the next line, the Parallax LCD expects the bytes `\r\n` (0x13 0x10). However, a new line on linux systems (like the roboRIO) only has the `\n` part, which means that data written to the LCD isn't formatted as expected. The cursor will move to the next line, but will not reset to the beginning of the line. Possible solution: Automatically replace the default newline character with an explicit `\r\n`. @gcschmit Do you think we should allow this to be turned off with a boolean input? The default would obviously be to replace the character.
1.0
LCD Newline characters - To go to the beginning of the next line, the Parallax LCD expects the bytes `\r\n` (0x13 0x10). However, a new line on linux systems (like the roboRIO) only has the `\n` part, which means that data written to the LCD isn't formatted as expected. The cursor will move to the next line, but will not reset to the beginning of the line. Possible solution: Automatically replace the default newline character with an explicit `\r\n`. @gcschmit Do you think we should allow this to be turned off with a boolean input? The default would obviously be to replace the character.
non_main
lcd newline characters to go to the beginning of the next line the parallax lcd expects the bytes r n however a new line on linux systems like the roborio only has the n part which means that data written to the lcd isn t formatted as expected the cursor will move to the next line but will not reset to the beginning of the line possible solution automatically replace the default newline character with an explicit r n gcschmit do you think we should allow this to be turned off with a boolean input the default would obviously be to replace the character
0
300,519
25,973,803,340
IssuesEvent
2022-12-19 13:23:01
ubtue/DatenProbleme
https://api.github.com/repos/ubtue/DatenProbleme
closed
ISSN 2166-8094 | Journal of Theoretical and Philosophical Criminology | neuer Translator
ready for testing Zotero_SEMI-AUTO
#### URL http://www.jtpcrim.org/archives.htm #### Import-Translator Einzel- und Mehrfachimport: keiner ### Problembeschreibung Ich habe zwar wenig Hoffnung, da die Seite einen nicht einmal nach Heften filtern lässt, aber: Ist es irgendwie möglich hier einen Translator zu entwickeln?
1.0
ISSN 2166-8094 | Journal of Theoretical and Philosophical Criminology | neuer Translator - #### URL http://www.jtpcrim.org/archives.htm #### Import-Translator Einzel- und Mehrfachimport: keiner ### Problembeschreibung Ich habe zwar wenig Hoffnung, da die Seite einen nicht einmal nach Heften filtern lässt, aber: Ist es irgendwie möglich hier einen Translator zu entwickeln?
non_main
issn journal of theoretical and philosophical criminology neuer translator url import translator einzel und mehrfachimport keiner problembeschreibung ich habe zwar wenig hoffnung da die seite einen nicht einmal nach heften filtern lässt aber ist es irgendwie möglich hier einen translator zu entwickeln
0
4,449
23,142,417,349
IssuesEvent
2022-07-28 19:54:41
tethysplatform/tethys
https://api.github.com/repos/tethysplatform/tethys
closed
Publish tethys_dataset_services on conda-forge
maintain dependencies
- [ ] Add recipes for dependencies that need to be added to conda forge See: https://dev.azure.com/conda-forge/feedstock-builds/_build/results?buildId=502541&view=logs&j=6f142865-96c3-535c-b7ea-873d86b887bd&t=22b0682d-ab9e-55d7-9c79-49f3c3ba4823
True
Publish tethys_dataset_services on conda-forge - - [ ] Add recipes for dependencies that need to be added to conda forge See: https://dev.azure.com/conda-forge/feedstock-builds/_build/results?buildId=502541&view=logs&j=6f142865-96c3-535c-b7ea-873d86b887bd&t=22b0682d-ab9e-55d7-9c79-49f3c3ba4823
main
publish tethys dataset services on conda forge add recipes for dependencies that need to be added to conda forge see
1
1,408
6,084,589,358
IssuesEvent
2017-06-17 05:28:56
WhitestormJS/whitestorm.js
https://api.github.com/repos/WhitestormJS/whitestorm.js
opened
Markdown guide for examples
EXAMPLES MAINTAINANCE REFACTORING
I'd like to see this feature in docs/examples. Some "complex" or popular examples should have alike description tab that explains: - How to build such app step-by-step - Which modules were used (that are not in whs build) - Some tips that you should note while making a similar thing ###### Version: - [x] v2.x.x - [ ] v1.x.x ###### Issue type: - [ ] Bug - [x] Proposal/Enhancement - [ ] Question - [ ] Discussion ------ <details> <summary> <b>Tested on: </b> </summary> ###### Desktop - [ ] Chrome - [ ] Chrome Canary - [ ] Chrome dev-channel - [ ] Firefox - [ ] Opera - [ ] Microsoft IE - [ ] Microsoft Edge ###### Android - [ ] Chrome - [ ] Firefox - [ ] Opera ###### IOS - [ ] Chrome - [ ] Firefox - [ ] Opera </details>
True
Markdown guide for examples - I'd like to see this feature in docs/examples. Some "complex" or popular examples should have alike description tab that explains: - How to build such app step-by-step - Which modules were used (that are not in whs build) - Some tips that you should note while making a similar thing ###### Version: - [x] v2.x.x - [ ] v1.x.x ###### Issue type: - [ ] Bug - [x] Proposal/Enhancement - [ ] Question - [ ] Discussion ------ <details> <summary> <b>Tested on: </b> </summary> ###### Desktop - [ ] Chrome - [ ] Chrome Canary - [ ] Chrome dev-channel - [ ] Firefox - [ ] Opera - [ ] Microsoft IE - [ ] Microsoft Edge ###### Android - [ ] Chrome - [ ] Firefox - [ ] Opera ###### IOS - [ ] Chrome - [ ] Firefox - [ ] Opera </details>
main
markdown guide for examples i d like to see this feature in docs examples some complex or popular examples should have alike description tab that explains how to build such app step by step which modules were used that are not in whs build some tips that you should note while making a similar thing version x x x x issue type bug proposal enhancement question discussion tested on desktop chrome chrome canary chrome dev channel firefox opera microsoft ie microsoft edge android chrome firefox opera ios chrome firefox opera
1
2,556
8,698,140,356
IssuesEvent
2018-12-04 22:22:19
Homebrew/homebrew-core
https://api.github.com/repos/Homebrew/homebrew-core
closed
Discussion: Ensure that projects are rebuild if upstream decides to retag
checksum mismatch maintainer feedback
[Retags](#18048), [unfortunately](#15663), [are](#15059) [a](#13792) [thing](#34771) and happen from time to time. For those unaware of the issue: Tags of projects tracked by Homebrew sometimes are pushed to their upstream repository, removed shortly after, and are then re-pushed in a slightly different form (e.g. to include supposedly important fixes). Most likely a lot of those "emergency" retags go without notice, yet it happens that Homebrew maintainers are quick enough to spot the initial tag, issuing a new "Homebrew release" of that project. As Homebrew formulae include the checksum of the project's release tarball and this checksum changes after a retag, installations from source via Homebrew will yield a `checksum mismatch` error. The checksum of those formula will eventually be fixed, making source-builds succeed again. It so happens that Homebrew end users can be quick enough to build the initial release tag (by running `brew upgrade` before the retag was issued). Those users will continue to use a build made using the outdated release if no [revision bump](https://github.com/Homebrew/brew/blob/c89f6c8f8c60c52a9278ccb035dadf73d999df13/docs/Formula-Cookbook.md#formulae-revisions) was made. Revisions of Homebrew formulae are only issued if the retag appears to be changed in a significant way (e.g. if changes to the source code are made). Yet source code changes are not the _only_ reason why a rebuild might be required. Take #25845 as an example where `libjpeg` retagged their most recent `v9c` release, [apparently](https://github.com/Homebrew/homebrew-core/issues/25845#issuecomment-376645802) only removing a seemingly unimportant, empty file (`jpeg-9c/.directory`) which couldn't possibly result in a different build, therefore an increase of revision [was not made](#25918). Well, this empty file could actually cause a completely different build if the build process somehow depends on it. How was it verified that this is not the case? Did the `libjpeg` authors bless the retag? A revision bump is available in #34767. Here is a list of some other checksum fixes which did not receive a revision bump: - #18048 – the checksums of 94 formulas were fixed. Most of the mismatches are likely caused by [GitHub applying a bugfix to their internal Git version](https://github.com/libgit2/libgit2/issues/4343#issuecomment-328631745). It remains unknown whether some of those projects retagged a release before the issue was introduced by GitHub. There's no easy way to find out if the new tarballs ship with an important fix. - #23852, the initial release [misses a part of the changelog](https://github.com/Homebrew/homebrew-core/pull/23665#issuecomment-363899728). (How) was it verified that this changelog file doesn't affect the build process? A revision bump is available in #34766. - #31767 – no information is available on what actually changed inside the release tarball. A revision bump is available in #34768. - #15663 – project maintainer assures that [no code changes have been made to the modified tar.gz files](https://github.com/Exiv2/exiv2/issues/19#issuecomment-315563564), yet this doesn't imply that the build itself will not be a different. - #31790 – although the project's author confirmed that no code but only a `NEWS` file was changed, he didn't assert that this file is not used inside the build process, possibly causing a different build to be made. A revision bump is available in #34769. - #32042 – some Windows build scripts were altered. Does this affect builds on other platforms? If not, how was this verified? A revision bump is available in #34770. - #32043 – changes to the README. While this file most likely is not used inside the build process, how was this _verified_? A revision bump is available in #34771. - #31948 – apparently a patch's checksum changed, although the patch itself wasn't. Does this possibly affect the build process in any way? A revision bump is available in #34773. - #32060 – no information available on what changed. A revision bump is available in #34772. Humans, by their very nature, make mistakes. As it is easy to _not_ rely on humans to decide whether forcing a rebuild is sensible or not and most formulae don't take hours to build, I ask to simply _always_ enforce rebuilds (this includes bottles as well) by revision bumping formulas after any of their included checksums change for whatever reason.
True
Discussion: Ensure that projects are rebuild if upstream decides to retag - [Retags](#18048), [unfortunately](#15663), [are](#15059) [a](#13792) [thing](#34771) and happen from time to time. For those unaware of the issue: Tags of projects tracked by Homebrew sometimes are pushed to their upstream repository, removed shortly after, and are then re-pushed in a slightly different form (e.g. to include supposedly important fixes). Most likely a lot of those "emergency" retags go without notice, yet it happens that Homebrew maintainers are quick enough to spot the initial tag, issuing a new "Homebrew release" of that project. As Homebrew formulae include the checksum of the project's release tarball and this checksum changes after a retag, installations from source via Homebrew will yield a `checksum mismatch` error. The checksum of those formula will eventually be fixed, making source-builds succeed again. It so happens that Homebrew end users can be quick enough to build the initial release tag (by running `brew upgrade` before the retag was issued). Those users will continue to use a build made using the outdated release if no [revision bump](https://github.com/Homebrew/brew/blob/c89f6c8f8c60c52a9278ccb035dadf73d999df13/docs/Formula-Cookbook.md#formulae-revisions) was made. Revisions of Homebrew formulae are only issued if the retag appears to be changed in a significant way (e.g. if changes to the source code are made). Yet source code changes are not the _only_ reason why a rebuild might be required. Take #25845 as an example where `libjpeg` retagged their most recent `v9c` release, [apparently](https://github.com/Homebrew/homebrew-core/issues/25845#issuecomment-376645802) only removing a seemingly unimportant, empty file (`jpeg-9c/.directory`) which couldn't possibly result in a different build, therefore an increase of revision [was not made](#25918). Well, this empty file could actually cause a completely different build if the build process somehow depends on it. How was it verified that this is not the case? Did the `libjpeg` authors bless the retag? A revision bump is available in #34767. Here is a list of some other checksum fixes which did not receive a revision bump: - #18048 – the checksums of 94 formulas were fixed. Most of the mismatches are likely caused by [GitHub applying a bugfix to their internal Git version](https://github.com/libgit2/libgit2/issues/4343#issuecomment-328631745). It remains unknown whether some of those projects retagged a release before the issue was introduced by GitHub. There's no easy way to find out if the new tarballs ship with an important fix. - #23852, the initial release [misses a part of the changelog](https://github.com/Homebrew/homebrew-core/pull/23665#issuecomment-363899728). (How) was it verified that this changelog file doesn't affect the build process? A revision bump is available in #34766. - #31767 – no information is available on what actually changed inside the release tarball. A revision bump is available in #34768. - #15663 – project maintainer assures that [no code changes have been made to the modified tar.gz files](https://github.com/Exiv2/exiv2/issues/19#issuecomment-315563564), yet this doesn't imply that the build itself will not be a different. - #31790 – although the project's author confirmed that no code but only a `NEWS` file was changed, he didn't assert that this file is not used inside the build process, possibly causing a different build to be made. A revision bump is available in #34769. - #32042 – some Windows build scripts were altered. Does this affect builds on other platforms? If not, how was this verified? A revision bump is available in #34770. - #32043 – changes to the README. While this file most likely is not used inside the build process, how was this _verified_? A revision bump is available in #34771. - #31948 – apparently a patch's checksum changed, although the patch itself wasn't. Does this possibly affect the build process in any way? A revision bump is available in #34773. - #32060 – no information available on what changed. A revision bump is available in #34772. Humans, by their very nature, make mistakes. As it is easy to _not_ rely on humans to decide whether forcing a rebuild is sensible or not and most formulae don't take hours to build, I ask to simply _always_ enforce rebuilds (this includes bottles as well) by revision bumping formulas after any of their included checksums change for whatever reason.
main
discussion ensure that projects are rebuild if upstream decides to retag and happen from time to time for those unaware of the issue tags of projects tracked by homebrew sometimes are pushed to their upstream repository removed shortly after and are then re pushed in a slightly different form e g to include supposedly important fixes most likely a lot of those emergency retags go without notice yet it happens that homebrew maintainers are quick enough to spot the initial tag issuing a new homebrew release of that project as homebrew formulae include the checksum of the project s release tarball and this checksum changes after a retag installations from source via homebrew will yield a checksum mismatch error the checksum of those formula will eventually be fixed making source builds succeed again it so happens that homebrew end users can be quick enough to build the initial release tag by running brew upgrade before the retag was issued those users will continue to use a build made using the outdated release if no was made revisions of homebrew formulae are only issued if the retag appears to be changed in a significant way e g if changes to the source code are made yet source code changes are not the only reason why a rebuild might be required take as an example where libjpeg retagged their most recent release only removing a seemingly unimportant empty file jpeg directory which couldn t possibly result in a different build therefore an increase of revision well this empty file could actually cause a completely different build if the build process somehow depends on it how was it verified that this is not the case did the libjpeg authors bless the retag a revision bump is available in here is a list of some other checksum fixes which did not receive a revision bump – the checksums of formulas were fixed most of the mismatches are likely caused by it remains unknown whether some of those projects retagged a release before the issue was introduced by github there s no easy way to find out if the new tarballs ship with an important fix the initial release how was it verified that this changelog file doesn t affect the build process a revision bump is available in – no information is available on what actually changed inside the release tarball a revision bump is available in – project maintainer assures that yet this doesn t imply that the build itself will not be a different – although the project s author confirmed that no code but only a news file was changed he didn t assert that this file is not used inside the build process possibly causing a different build to be made a revision bump is available in – some windows build scripts were altered does this affect builds on other platforms if not how was this verified a revision bump is available in – changes to the readme while this file most likely is not used inside the build process how was this verified a revision bump is available in – apparently a patch s checksum changed although the patch itself wasn t does this possibly affect the build process in any way a revision bump is available in – no information available on what changed a revision bump is available in humans by their very nature make mistakes as it is easy to not rely on humans to decide whether forcing a rebuild is sensible or not and most formulae don t take hours to build i ask to simply always enforce rebuilds this includes bottles as well by revision bumping formulas after any of their included checksums change for whatever reason
1
313,647
23,486,633,458
IssuesEvent
2022-08-17 14:53:45
conda/conda
https://api.github.com/repos/conda/conda
opened
Multi-user installs
epic source::anaconda type::documentation tag::multi-user
### Checklist - [X] I added a descriptive title - [X] I searched open reports and couldn't find a duplicate ### Summary Multi-user installs are confusing because our docs are lacking! We should help our users by defining a set of best-practices for these kinds of installs. ### Linked Issues & PRs - [ ] https://github.com/conda/conda/issues/11687 - [ ] https://github.com/conda/conda/issues/11087
1.0
Multi-user installs - ### Checklist - [X] I added a descriptive title - [X] I searched open reports and couldn't find a duplicate ### Summary Multi-user installs are confusing because our docs are lacking! We should help our users by defining a set of best-practices for these kinds of installs. ### Linked Issues & PRs - [ ] https://github.com/conda/conda/issues/11687 - [ ] https://github.com/conda/conda/issues/11087
non_main
multi user installs checklist i added a descriptive title i searched open reports and couldn t find a duplicate summary multi user installs are confusing because our docs are lacking we should help our users by defining a set of best practices for these kinds of installs linked issues prs
0
350,722
10,500,843,331
IssuesEvent
2019-09-26 11:27:52
JuPedSim/jpscore
https://api.github.com/repos/JuPedSim/jpscore
closed
prepare commit: Astyle
Priority: Critical Type: Enhancement
In Gitlab by @chraibi on Jul 14, 2015, 15:32 [[origin](https://gitlab.version.fz-juelich.de/jupedsim/jpscore/issues/118)] run astyle-script to format code according to a predefined style. See for example [prepare-commit.sh](https://github.com/qgis/QGIS/blob/master/scripts/prepare-commit.sh)
1.0
prepare commit: Astyle - In Gitlab by @chraibi on Jul 14, 2015, 15:32 [[origin](https://gitlab.version.fz-juelich.de/jupedsim/jpscore/issues/118)] run astyle-script to format code according to a predefined style. See for example [prepare-commit.sh](https://github.com/qgis/QGIS/blob/master/scripts/prepare-commit.sh)
non_main
prepare commit astyle in gitlab by chraibi on jul run astyle script to format code according to a predefined style see for example
0
1,884
6,577,516,579
IssuesEvent
2017-09-12 01:27:43
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
ec2_metric_alarm always shows changes
affects_2.0 aws bug_report cloud waiting_on_maintainer
##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2_metric_alarm ##### ANSIBLE VERSION ``` ansible 2.0.1.0 ``` ##### CONFIGURATION ##### OS / ENVIRONMENT OSX El Capitan ##### SUMMARY When running this module with the same inputs, ansible reports changes. This module should be idempotent. ##### STEPS TO REPRODUCE https://gist.github.com/dmcnaught/e06f2230c0cbcbdf6329 gist also shows Alarm history from cloudwatch on 2 consecutive runs (seems to show just yaml order differences - could that be the problem? ##### EXPECTED RESULTS No changes on subsequent ansible runs ##### ACTUAL RESULTS Changes reported: ``` TASK [kube-up-mods : Configure Metric Alarms and link to Scaling Policies] ***** changed: [localhost] => (item={u'threshold': 50.0, u'comparison': u'>=', u'alarm_actions': [u'arn:aws:autoscaling:us-east-1:735056214483:scalingPolicy:6da1ab4d-fca4-4f0b-9c53-ca9b582ba5da:autoScalingGroupName/k8s-hermes-120-minion-group-us-east-1c:policyName/k8s-hermes-120-minion Increase Group Size'], u'name': u'k8s-hermes-120-minion-group-us-east-1c-ScaleUp'}) changed: [localhost] => (item={u'threshold': 20.0, u'comparison': u'<=', u'alarm_actions': [u'arn:aws:autoscaling:us-east-1:735056214483:scalingPolicy:6ffb0797-d089-4f2a-a1b7-16da6bb42de1:autoScalingGroupName/k8s-hermes-120-minion-group-us-east-1c:policyName/k8s-hermes-120-minion Decrease Group Size'], u'name': u'k8s-hermes-120-minion-group-us-east-1c-ScaleDown'}) ```
True
ec2_metric_alarm always shows changes - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2_metric_alarm ##### ANSIBLE VERSION ``` ansible 2.0.1.0 ``` ##### CONFIGURATION ##### OS / ENVIRONMENT OSX El Capitan ##### SUMMARY When running this module with the same inputs, ansible reports changes. This module should be idempotent. ##### STEPS TO REPRODUCE https://gist.github.com/dmcnaught/e06f2230c0cbcbdf6329 gist also shows Alarm history from cloudwatch on 2 consecutive runs (seems to show just yaml order differences - could that be the problem? ##### EXPECTED RESULTS No changes on subsequent ansible runs ##### ACTUAL RESULTS Changes reported: ``` TASK [kube-up-mods : Configure Metric Alarms and link to Scaling Policies] ***** changed: [localhost] => (item={u'threshold': 50.0, u'comparison': u'>=', u'alarm_actions': [u'arn:aws:autoscaling:us-east-1:735056214483:scalingPolicy:6da1ab4d-fca4-4f0b-9c53-ca9b582ba5da:autoScalingGroupName/k8s-hermes-120-minion-group-us-east-1c:policyName/k8s-hermes-120-minion Increase Group Size'], u'name': u'k8s-hermes-120-minion-group-us-east-1c-ScaleUp'}) changed: [localhost] => (item={u'threshold': 20.0, u'comparison': u'<=', u'alarm_actions': [u'arn:aws:autoscaling:us-east-1:735056214483:scalingPolicy:6ffb0797-d089-4f2a-a1b7-16da6bb42de1:autoScalingGroupName/k8s-hermes-120-minion-group-us-east-1c:policyName/k8s-hermes-120-minion Decrease Group Size'], u'name': u'k8s-hermes-120-minion-group-us-east-1c-ScaleDown'}) ```
main
metric alarm always shows changes issue type bug report component name metric alarm ansible version ansible configuration os environment osx el capitan summary when running this module with the same inputs ansible reports changes this module should be idempotent steps to reproduce gist also shows alarm history from cloudwatch on consecutive runs seems to show just yaml order differences could that be the problem expected results no changes on subsequent ansible runs actual results changes reported task changed item u threshold u comparison u u alarm actions u name u hermes minion group us east scaleup changed item u threshold u comparison u u alarm actions u name u hermes minion group us east scaledown
1
282
3,052,318,348
IssuesEvent
2015-08-12 14:10:00
daemonraco/toobasic
https://api.github.com/repos/daemonraco/toobasic
closed
DBSpecs More Than One Data Specification
bug Database Structure Maintainer
## Error When more than one database structure specification provide entries for the same table and connection, only the last one survives. ## Extra Change: ```php // // Guessing table identifier and creating a pull for its // entries. $tKey = sha1("{$aux->connection}-{$aux->table}"); $this->_specs->data[$tKey] = array(); ``` For: ```php // // Guessing table identifier and creating a pull for its // entries. $tKey = sha1("{$aux->connection}-{$aux->table}"); if(!isset($this->_specs->data[$tKey])) { $this->_specs->data[$tKey] = array(); } ``` At __ROOTDIR/includes/managers/DBStructureManager.php__.
True
DBSpecs More Than One Data Specification - ## Error When more than one database structure specification provide entries for the same table and connection, only the last one survives. ## Extra Change: ```php // // Guessing table identifier and creating a pull for its // entries. $tKey = sha1("{$aux->connection}-{$aux->table}"); $this->_specs->data[$tKey] = array(); ``` For: ```php // // Guessing table identifier and creating a pull for its // entries. $tKey = sha1("{$aux->connection}-{$aux->table}"); if(!isset($this->_specs->data[$tKey])) { $this->_specs->data[$tKey] = array(); } ``` At __ROOTDIR/includes/managers/DBStructureManager.php__.
main
dbspecs more than one data specification error when more than one database structure specification provide entries for the same table and connection only the last one survives extra change php guessing table identifier and creating a pull for its entries tkey aux connection aux table this specs data array for php guessing table identifier and creating a pull for its entries tkey aux connection aux table if isset this specs data this specs data array at rootdir includes managers dbstructuremanager php
1
491,989
14,174,864,805
IssuesEvent
2020-11-12 20:36:36
eclipse-ee4j/openmq
https://api.github.com/repos/eclipse-ee4j/openmq
closed
Document new MQ ConnectionFactory property imqSocketConnectTimeout
Component: doc ERR: Assignee Priority: Major Type: Sub-task
Please update the MQ admin guide to reflect the changes described in #87. Please see that issue for details. #### Affected Versions [4.5.1, 5.0]
1.0
Document new MQ ConnectionFactory property imqSocketConnectTimeout - Please update the MQ admin guide to reflect the changes described in #87. Please see that issue for details. #### Affected Versions [4.5.1, 5.0]
non_main
document new mq connectionfactory property imqsocketconnecttimeout please update the mq admin guide to reflect the changes described in please see that issue for details affected versions
0