column        dtype          stats
------        -----          -----
Unnamed: 0    int64          1 to 832k
id            float64        2.49B to 32.1B
type          stringclasses  1 value
created_at    stringlengths  19 to 19
repo          stringlengths  7 to 112
repo_url      stringlengths  36 to 141
action        stringclasses  3 values
title         stringlengths  3 to 438
labels        stringlengths  4 to 308
body          stringlengths  7 to 254k
index         stringclasses  7 values
text_combine  stringlengths  96 to 254k
label         stringclasses  2 values
text          stringlengths  96 to 246k
binary_label  int64          0 to 1
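The listing above is a per-column summary (name, dtype, and min/max or class-count statistics) of a GitHub IssuesEvent dataset. A minimal sketch of reproducing such a summary with pandas, assuming the rows live in a hypothetical issues.csv (the dump does not name its source file):

```python
import pandas as pd

# Hypothetical filename: the dump does not say where the rows are stored.
df = pd.read_csv("issues.csv")

# Reproduce the column summary above: dtype plus min/max for numeric
# columns; distinct-value counts and length ranges for string columns.
for col in df.columns:
    s = df[col]
    if pd.api.types.is_numeric_dtype(s):
        print(f"{col}\t{s.dtype}\t{s.min()} to {s.max()}")
    else:
        lengths = s.astype(str).str.len()
        print(f"{col}\tstring\t{s.nunique()} values, "
              f"len {lengths.min()} to {lengths.max()}")
```

Sample rows follow, one field per line, labeled with the column names from the schema.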

Unnamed: 0: 190
id: 2,810,555,581
type: IssuesEvent
created_at: 2015-05-17 00:21:36
repo: jenkinsci/slack-plugin
repo_url: https://api.github.com/repos/jenkinsci/slack-plugin
action: opened
title: Releasing Milestone 1.8
labels: maintainer communication
body:
I think a release is overdue. I'm currently working on a release as well as documenting how I do a release. Pull request to follow for release preparation. This pull request will be closed when the release is completed. This covers all pull requests merged since PR #46.
label: True
text_combine:
Releasing Milestone 1.8 - I think a release is overdue. I'm currently working on a release as well as documenting how I do a release. Pull request to follow for release preparation. This pull request will be closed when the release is completed. This covers all pull requests merged since PR #46.
index: main
text:
releasing milestone i think a release is overdue i m currently working on a release as well as documenting how i do a release pull request to follow for release preparation this pull request will be closed when the release is completed this covers all pull requests merged since pr
binary_label: 1
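Comparing text_combine and text in the row above suggests how the text column was derived: lowercase the combined title and body, strip digits and punctuation, and collapse whitespace (rows further down, where URLs and markdown links vanish entirely, suggest links are dropped first). The preprocessing is not documented in the dump, so this sketch is only an inference that matches the samples:

```python
import re

def derive_text(text_combine: str) -> str:
    """Approximate the `text` column from `text_combine`.

    Inferred from the sample rows, not from any documented pipeline:
    lowercase, replace every non-letter run with a space, collapse
    whitespace. Rows containing URLs suggest links are stripped
    before this step.
    """
    lowered = text_combine.lower()
    letters_only = re.sub(r"[^a-z]+", " ", lowered)
    return re.sub(r"\s+", " ", letters_only).strip()

sample = ("Releasing Milestone 1.8 - I think a release is overdue. "
          "I'm currently working on a release as well as documenting "
          "how I do a release.")
print(derive_text(sample))
# releasing milestone i think a release is overdue i m currently
# working on a release as well as documenting how i do a release
```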

Unnamed: 0: 2,404
id: 8,528,807,827
type: IssuesEvent
created_at: 2018-11-03 03:47:15
repo: TabbycatDebate/tabbycat
repo_url: https://api.github.com/repos/TabbycatDebate/tabbycat
action: opened
title: Separate feedback questions for different adj types
labels: awaiting maintainer enhancement
body:
As discussed, this separation might be quite light and not involve fundamental changes to the model. Instead, differentiation could be largely handled on the front-end.
label: True
text_combine:
Separate feedback questions for different adj types - As discussed, this separation might be quite light and not involve fundamental changes to the model. Instead, differentiation could be largely handled on the front-end.
index: main
text:
separate feedback questions for different adj types as discussed this separation might be quite light and not involve fundamental changes to the model instead differentiation could be largely handled on the front end
binary_label: 1

Unnamed: 0: 5,167
id: 26,287,624,089
type: IssuesEvent
created_at: 2023-01-08 01:49:26
repo: NIAEFEUP/website-niaefeup-backend
repo_url: https://api.github.com/repos/NIAEFEUP/website-niaefeup-backend
action: closed
title: dto: transition to JMapper API
labels: maintainability
body:
As discussed in #20, JMapper should be investigated as to if it is a good option to replace our current Dto code. The objectives for the new implementation would be: fewer reflective operations and a cleaner implementation. Depending on how JMapper generates mappers, a caching strategy should be considered.
label: True
text_combine:
dto: transition to JMapper API - As discussed in #20, JMapper should be investigated as to if it is a good option to replace our current Dto code. The objectives for the new implementation would be: fewer reflective operations and a cleaner implementation. Depending on how JMapper generates mappers, a caching strategy should be considered.
index: main
text:
dto transition to jmapper api as discussed in jmapper should be investigated as to if it is a good option to replace our current dto code the objectives for the new implementation would be fewer reflective operations and a cleaner implementation depending on how jmapper generates mappers a caching strategy should be considered
binary_label: 1

Unnamed: 0: 61,089
id: 14,614,361,109
type: IssuesEvent
created_at: 2020-12-22 09:46:40
repo: elastic/kibana
repo_url: https://api.github.com/repos/elastic/kibana
action: opened
title: [Security Solution] Rebranded of "Elastic Endpoint Security" under security services.
labels: Team: SecuritySolution bug
body:
**Describe the bug** Rebranded of "Elastic Endpoint Security" under security services **Build Details:** ``` Platform: Staging Version: 8.0-SNAPSHOT Commit: a76c666a6153b062d2f9d97b5c7e87620339a181 Build number: 39157 Artifact: https://artifacts-api.elastic.co/v1/search/8.0-SNAPSHOT ``` **Browser Details** All **Preconditions** 1. Cloud environment on staging should exist. 2. Endpoint should be deployed with Security Integration installed. **Steps to Reproduce** 1. Navigate to Kibana URL on Browser. 2. Click on the "Administration" tab under Security from the left navigation bar. 3. Go to the policy and enable the Register as Antivirus toggle. 4. Observe that instead of Endpoint Security, 'Elastic Endpoint Security' is displaying for the security service. ``` Command used: WMIC /Node:localhost /Namespace:\\root\SecurityCenter2 Path AntiVirusProduct Get displayName /Format:List ``` **Test data** N/A **Impacted Test case(s)** N/A **Actual Result** Rebranded of "Elastic Endpoint Security" under security services is not done. **Expected Result** Rebranded of "Elastic Endpoint Security" under security services should be done. **What's Working** N/A **What's not Working** N/A **Screenshots** ![Bug1,2](https://user-images.githubusercontent.com/60252716/102874216-2bf9a280-4468-11eb-855d-6fdf96fc865c.PNG) **Logs** N/A
label: True
text_combine:
[Security Solution] Rebranded of "Elastic Endpoint Security" under security services. - **Describe the bug** Rebranded of "Elastic Endpoint Security" under security services **Build Details:** ``` Platform: Staging Version: 8.0-SNAPSHOT Commit: a76c666a6153b062d2f9d97b5c7e87620339a181 Build number: 39157 Artifact: https://artifacts-api.elastic.co/v1/search/8.0-SNAPSHOT ``` **Browser Details** All **Preconditions** 1. Cloud environment on staging should exist. 2. Endpoint should be deployed with Security Integration installed. **Steps to Reproduce** 1. Navigate to Kibana URL on Browser. 2. Click on the "Administration" tab under Security from the left navigation bar. 3. Go to the policy and enable the Register as Antivirus toggle. 4. Observe that instead of Endpoint Security, 'Elastic Endpoint Security' is displaying for the security service. ``` Command used: WMIC /Node:localhost /Namespace:\\root\SecurityCenter2 Path AntiVirusProduct Get displayName /Format:List ``` **Test data** N/A **Impacted Test case(s)** N/A **Actual Result** Rebranded of "Elastic Endpoint Security" under security services is not done. **Expected Result** Rebranded of "Elastic Endpoint Security" under security services should be done. **What's Working** N/A **What's not Working** N/A **Screenshots** ![Bug1,2](https://user-images.githubusercontent.com/60252716/102874216-2bf9a280-4468-11eb-855d-6fdf96fc865c.PNG) **Logs** N/A
index: non_main
text:
rebranded of elastic endpoint security under security services describe the bug rebranded of elastic endpoint security under security services build details platform staging version snapshot commit build number artifact browser details all preconditions cloud environment on staging should exist endpoint should be deployed with security integration installed steps to reproduce navigate to kibana url on browser click on the administration tab under security from the left navigation bar go to the policy and enable the register as antivirus toggle observe that instead of endpoint security elastic endpoint security is displaying for the security service command used wmic node localhost namespace root path antivirusproduct get displayname format list test data n a impacted test case s n a actual result rebranded of elastic endpoint security under security services is not done expected result rebranded of elastic endpoint security under security services should be done what s working n a what s not working n a screenshots logs n a
binary_label: 0

Unnamed: 0: 242,514
id: 18,667,429,511
type: IssuesEvent
created_at: 2021-10-30 03:43:13
repo: eraware/dnn-elements
repo_url: https://api.github.com/repos/eraware/dnn-elements
action: closed
title: Add a demo/sample project
labels: documentation
body:
[enhancement request] For folks (like me) trying to understand a new project and see the effect, it would be nice to at least have screen shots of these components to get a feel for what the base UI/appearance is. Ideally there would be a sample/test application to see/exercise all these components with a minimum of effort for lazy folks (like me) browsing around. I understand having a running demo might require hosting fees, but maybe you could have a demo (or one for each component?) run on https://www.typescriptlang.org/play or one of the similar online environments. Love this idea for sure. I may even try to add some of my own?!
label: 1.0
text_combine:
Add a demo/sample project - [enhancement request] For folks (like me) trying to understand a new project and see the effect, it would be nice to at least have screen shots of these components to get a feel for what the base UI/appearance is. Ideally there would be a sample/test application to see/exercise all these components with a minimum of effort for lazy folks (like me) browsing around. I understand having a running demo might require hosting fees, but maybe you could have a demo (or one for each component?) run on https://www.typescriptlang.org/play or one of the similar online environments. Love this idea for sure. I may even try to add some of my own?!
index: non_main
text:
add a demo sample project for folks like me trying to understand a new project and see the effect it would be nice to at least have screen shots of these components to get a feel for what the base ui appearance is ideally there would be a sample test application to see exercise all these components with a minimum of effort for lazy folks like me browsing around i understand having a running demo might require hosting fees but maybe you could have a demo or one for each component run on or one of the similar online environments love this idea for sure i may even try to add some of my own
binary_label: 0

Unnamed: 0: 3,457
id: 13,224,627,048
type: IssuesEvent
created_at: 2020-08-17 19:30:54
repo: carbon-design-system/carbon
repo_url: https://api.github.com/repos/carbon-design-system/carbon
action: closed
title: Tooltip is not showing text when in modal
labels: status: needs triage 🕵️‍♀️ status: waiting for maintainer response 💬 type: bug 🐛
body:
## Tooltip is not showing text when in modal ## What package(s) are you using? <!-- Add an x in one of the options below, for example: - [x] package name --> - [x] `carbon-components` - [x] `carbon-components-react` ## Detailed description > Describe in detail the issue you're having. When I'm using a tooltip in a modal, I noticed that the text does not show when I click on the tooltip icon as shown below: ![Screen Shot 2020-07-31 at 4 45 14 PM](https://media.github.ibm.com/user/149566/files/51690100-d34d-11ea-8b01-3162b76fb8dd) Tooltip shows up fine though when it's not in a modal. I'm thinking that it's possibly a z-index issue. > Is this issue related to a specific component? `Tooltip` > What did you expect to happen? What happened instead? What would you like to > see changed? Tooltip text should show when being clicked in modal. > What browser are you working in? Google Chrome > What version of the Carbon Design System are you using? "carbon-components": "10.16.0", "carbon-components-react": "7.16.0", > What offering/product do you work on? Any pressing ship or release dates we > should be aware of? `IBM Cloud Catalog`
label: True
text_combine:
Tooltip is not showing text when in modal - ## Tooltip is not showing text when in modal ## What package(s) are you using? <!-- Add an x in one of the options below, for example: - [x] package name --> - [x] `carbon-components` - [x] `carbon-components-react` ## Detailed description > Describe in detail the issue you're having. When I'm using a tooltip in a modal, I noticed that the text does not show when I click on the tooltip icon as shown below: ![Screen Shot 2020-07-31 at 4 45 14 PM](https://media.github.ibm.com/user/149566/files/51690100-d34d-11ea-8b01-3162b76fb8dd) Tooltip shows up fine though when it's not in a modal. I'm thinking that it's possibly a z-index issue. > Is this issue related to a specific component? `Tooltip` > What did you expect to happen? What happened instead? What would you like to > see changed? Tooltip text should show when being clicked in modal. > What browser are you working in? Google Chrome > What version of the Carbon Design System are you using? "carbon-components": "10.16.0", "carbon-components-react": "7.16.0", > What offering/product do you work on? Any pressing ship or release dates we > should be aware of? `IBM Cloud Catalog`
index: main
text:
tooltip is not showing text when in modal tooltip is not showing text when in modal what package s are you using add an x in one of the options below for example package name carbon components carbon components react detailed description describe in detail the issue you re having when i m using a tooltip in a modal i noticed that the text does not show when i click on the tooltip icon as shown below tooltip shows up fine though when it s not in a modal i m thinking that it s possibly a z index issue is this issue related to a specific component tooltip what did you expect to happen what happened instead what would you like to see changed tooltip text should show when being clicked in modal what browser are you working in google chrome what version of the carbon design system are you using carbon components carbon components react what offering product do you work on any pressing ship or release dates we should be aware of ibm cloud catalog
binary_label: 1

Unnamed: 0: 3,774
id: 15,864,476,235
type: IssuesEvent
created_at: 2021-04-08 13:50:14
repo: heroku/heroku-buildpack-python
repo_url: https://api.github.com/repos/heroku/heroku-buildpack-python
action: closed
title: Add caching to identify new deployments vs existing applications
labels: maintainability-issue
body:
Currently there's no defined way to tell new apps apart from existing ones. Caching this info would enable a lot of refactoring
label: True
text_combine:
Add caching to identify new deployments vs existing applications - Currently there's no defined way to tell new apps apart from existing ones. Caching this info would enable a lot of refactoring
index: main
text:
add caching to identify new deployments vs existing applications currently there s no defined way to tell new apps apart from existing ones caching this info would enable a lot of refactoring
binary_label: 1

Unnamed: 0: 1,375
id: 5,954,778,306
type: IssuesEvent
created_at: 2017-05-27 21:15:58
repo: caskroom/homebrew-cask
repo_url: https://api.github.com/repos/caskroom/homebrew-cask
action: closed
title: Outdated cask: Paste
labels: awaiting maintainer feedback
body:
Running `brew cask install paste` fails due to `curl` receiving a 404 Not Found when attempting to fetch the download URL in [the cask](https://github.com/caskroom/homebrew-cask/blob/master/Casks/paste.rb). I even tried to use earlier incarnations of the cask that previously worked for me and they now do not. I guess for some reason it just disappeared from the [HockeyApp](https://www.hockeyapp.net/) API. I have no idea why that would happen. Not really brew's fault, I know. But am I correct in my understanding that the HockeyApp account being used is something you all control as infrastructure? Never used it before, but that's what I gather from their docs and I see it used in multiple casks. The newest version of [Paste](http://pasteapp.me/) is "Version 2.2.1 (34)". #### Debugging info <details><summary>Output of the command with `--verbose --debug`</summary> ```sh sholladay$ brew cask install paste --verbose --debug ==> Hbc::Installer#install ==> Printing caveats ==> Hbc::Installer#fetch ==> Downloading ==> Downloading https://rink.hockeyapp.net/api/2/apps/ee24d1a939cd4ff8b2861eb8c788a995/app_versions/6?format=zip ==> Calling curl with args ["https://rink.hockeyapp.net/api/2/apps/ee24d1a939cd4ff8b2861eb8c788a995/app_versions/6?format=zip", "-C", "0", "-o", "#<Pathname:/Users/sholladay/Library/Caches/Homebrew/Cask/paste--2.2.1,6.incomplete>"] /usr/bin/curl --remote-time --location --user-agent Homebrew/1.2.1-92-ge931fee7 (Macintosh; Intel Mac OS X 10.12.4) curl/7.51.0 --fail https://rink.hockeyapp.net/api/2/apps/ee24d1a939cd4ff8b2861eb8c788a995/app_versions/6?format=zip -C 0 -o /Users/sholladay/Library/Caches/Homebrew/Cask/paste--2.2.1,6.incomplete % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 curl: (22) The requested URL returned error: 404 Not Found Error: Download failed on Cask 'paste' with message: Download failed: https://rink.hockeyapp.net/api/2/apps/ee24d1a939cd4ff8b2861eb8c788a995/app_versions/6?format=zip The incomplete download is cached at /Users/sholladay/Library/Caches/Homebrew/Cask/paste--2.2.1,6.incomplete Error: nothing to install/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli/install.rb:17:in `run' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli/abstract_command.rb:34:in `run' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:98:in `run_command' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:149:in `run' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:132:in `run' /usr/local/Homebrew/Library/Homebrew/cmd/cask.rb:8:in `cask' /usr/local/Homebrew/Library/Homebrew/brew.rb:93:in `<main>' Error: Kernel.exit /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:154:in `exit' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:154:in `rescue in run' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:140:in `run' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:132:in `run' /usr/local/Homebrew/Library/Homebrew/cmd/cask.rb:8:in `cask' /usr/local/Homebrew/Library/Homebrew/brew.rb:93:in `<main>' ``` </details> <details><summary>Output of `brew doctor`</summary> ``` Your system is ready to brew. 
``` </details> <details><summary>Output of `brew cask doctor`</summary> ```sh ==> Homebrew-Cask Version Homebrew-Cask 1.2.1 caskroom/homebrew-cask (git revision 33118; last commit 2017-05-24) ==> Homebrew-Cask Install Location <NONE> ==> Homebrew-Cask Staging Location /usr/local/Caskroom ==> Homebrew-Cask Cached Downloads ~/Library/Caches/Homebrew/Cask (13 files, 663.4MB) ==> Homebrew-Cask Taps: /usr/local/Homebrew/Library/Taps/caskroom/homebrew-cask (3617 casks) /usr/local/Homebrew/Library/Taps/homebrew/homebrew-core (0 casks) /usr/local/Homebrew/Library/Taps/homebrew/homebrew-services (0 casks) ==> Contents of $LOAD_PATH /usr/local/Homebrew/Library/Homebrew/cask/lib /usr/local/Homebrew/Library/Homebrew /Library/Ruby/Site/2.0.0 /Library/Ruby/Site/2.0.0/x86_64-darwin16 /Library/Ruby/Site/2.0.0/universal-darwin16 /Library/Ruby/Site /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/vendor_ruby/2.0.0 /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/vendor_ruby/2.0.0/x86_64-darwin16 /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/vendor_ruby/2.0.0/universal-darwin16 /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/vendor_ruby /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0 /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/x86_64-darwin16 /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/universal-darwin16 ==> Environment Variables LANG="en_US.UTF-8" PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/Homebrew/Library/Taps/homebrew/homebrew-services/cmd:/usr/local/Homebrew/Library/Homebrew/shims/scm" SHELL="/usr/local/bin/bash" ``` </details>
label: True
text_combine:
Outdated cask: Paste - Running `brew cask install paste` fails due to `curl` receiving a 404 Not Found when attempting to fetch the download URL in [the cask](https://github.com/caskroom/homebrew-cask/blob/master/Casks/paste.rb). I even tried to use earlier incarnations of the cask that previously worked for me and they now do not. I guess for some reason it just disappeared from the [HockeyApp](https://www.hockeyapp.net/) API. I have no idea why that would happen. Not really brew's fault, I know. But am I correct in my understanding that the HockeyApp account being used is something you all control as infrastructure? Never used it before, but that's what I gather from their docs and I see it used in multiple casks. The newest version of [Paste](http://pasteapp.me/) is "Version 2.2.1 (34)". #### Debugging info <details><summary>Output of the command with `--verbose --debug`</summary> ```sh sholladay$ brew cask install paste --verbose --debug ==> Hbc::Installer#install ==> Printing caveats ==> Hbc::Installer#fetch ==> Downloading ==> Downloading https://rink.hockeyapp.net/api/2/apps/ee24d1a939cd4ff8b2861eb8c788a995/app_versions/6?format=zip ==> Calling curl with args ["https://rink.hockeyapp.net/api/2/apps/ee24d1a939cd4ff8b2861eb8c788a995/app_versions/6?format=zip", "-C", "0", "-o", "#<Pathname:/Users/sholladay/Library/Caches/Homebrew/Cask/paste--2.2.1,6.incomplete>"] /usr/bin/curl --remote-time --location --user-agent Homebrew/1.2.1-92-ge931fee7 (Macintosh; Intel Mac OS X 10.12.4) curl/7.51.0 --fail https://rink.hockeyapp.net/api/2/apps/ee24d1a939cd4ff8b2861eb8c788a995/app_versions/6?format=zip -C 0 -o /Users/sholladay/Library/Caches/Homebrew/Cask/paste--2.2.1,6.incomplete % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 curl: (22) The requested URL returned error: 404 Not Found Error: Download failed on Cask 'paste' with message: Download failed: https://rink.hockeyapp.net/api/2/apps/ee24d1a939cd4ff8b2861eb8c788a995/app_versions/6?format=zip The incomplete download is cached at /Users/sholladay/Library/Caches/Homebrew/Cask/paste--2.2.1,6.incomplete Error: nothing to install/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli/install.rb:17:in `run' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli/abstract_command.rb:34:in `run' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:98:in `run_command' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:149:in `run' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:132:in `run' /usr/local/Homebrew/Library/Homebrew/cmd/cask.rb:8:in `cask' /usr/local/Homebrew/Library/Homebrew/brew.rb:93:in `<main>' Error: Kernel.exit /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:154:in `exit' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:154:in `rescue in run' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:140:in `run' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:132:in `run' /usr/local/Homebrew/Library/Homebrew/cmd/cask.rb:8:in `cask' /usr/local/Homebrew/Library/Homebrew/brew.rb:93:in `<main>' ``` </details> <details><summary>Output of `brew doctor`</summary> ``` Your system is ready to brew. 
``` </details> <details><summary>Output of `brew cask doctor`</summary> ```sh ==> Homebrew-Cask Version Homebrew-Cask 1.2.1 caskroom/homebrew-cask (git revision 33118; last commit 2017-05-24) ==> Homebrew-Cask Install Location <NONE> ==> Homebrew-Cask Staging Location /usr/local/Caskroom ==> Homebrew-Cask Cached Downloads ~/Library/Caches/Homebrew/Cask (13 files, 663.4MB) ==> Homebrew-Cask Taps: /usr/local/Homebrew/Library/Taps/caskroom/homebrew-cask (3617 casks) /usr/local/Homebrew/Library/Taps/homebrew/homebrew-core (0 casks) /usr/local/Homebrew/Library/Taps/homebrew/homebrew-services (0 casks) ==> Contents of $LOAD_PATH /usr/local/Homebrew/Library/Homebrew/cask/lib /usr/local/Homebrew/Library/Homebrew /Library/Ruby/Site/2.0.0 /Library/Ruby/Site/2.0.0/x86_64-darwin16 /Library/Ruby/Site/2.0.0/universal-darwin16 /Library/Ruby/Site /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/vendor_ruby/2.0.0 /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/vendor_ruby/2.0.0/x86_64-darwin16 /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/vendor_ruby/2.0.0/universal-darwin16 /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/vendor_ruby /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0 /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/x86_64-darwin16 /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/universal-darwin16 ==> Environment Variables LANG="en_US.UTF-8" PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/Homebrew/Library/Taps/homebrew/homebrew-services/cmd:/usr/local/Homebrew/Library/Homebrew/shims/scm" SHELL="/usr/local/bin/bash" ``` </details>
index: main
text:
outdated cask paste running brew cask install paste fails due to curl receiving a not found when attempting to fetch the download url in i even tried to use earlier incarnations of the cask that previously worked for me and they now do not i guess for some reason it just disappeared from the api i have no idea why that would happen not really brew s fault i know but am i correct in my understanding that the hockeyapp account being used is something you all control as infrastructure never used it before but that s what i gather from their docs and i see it used in multiple casks the newest version of is version debugging info output of the command with verbose debug sh sholladay brew cask install paste verbose debug hbc installer install printing caveats hbc installer fetch downloading downloading calling curl with args usr bin curl remote time location user agent homebrew macintosh intel mac os x curl fail c o users sholladay library caches homebrew cask paste incomplete total received xferd average speed time time time current dload upload total spent left speed curl the requested url returned error not found error download failed on cask paste with message download failed the incomplete download is cached at users sholladay library caches homebrew cask paste incomplete error nothing to install usr local homebrew library homebrew cask lib hbc cli install rb in run usr local homebrew library homebrew cask lib hbc cli abstract command rb in run usr local homebrew library homebrew cask lib hbc cli rb in run command usr local homebrew library homebrew cask lib hbc cli rb in run usr local homebrew library homebrew cask lib hbc cli rb in run usr local homebrew library homebrew cmd cask rb in cask usr local homebrew library homebrew brew rb in error kernel exit usr local homebrew library homebrew cask lib hbc cli rb in exit usr local homebrew library homebrew cask lib hbc cli rb in rescue in run usr local homebrew library homebrew cask lib hbc cli rb in run usr local homebrew library homebrew cask lib hbc cli rb in run usr local homebrew library homebrew cmd cask rb in cask usr local homebrew library homebrew brew rb in output of brew doctor your system is ready to brew output of brew cask doctor sh homebrew cask version homebrew cask caskroom homebrew cask git revision last commit homebrew cask install location homebrew cask staging location usr local caskroom homebrew cask cached downloads library caches homebrew cask files homebrew cask taps usr local homebrew library taps caskroom homebrew cask casks usr local homebrew library taps homebrew homebrew core casks usr local homebrew library taps homebrew homebrew services casks contents of load path usr local homebrew library homebrew cask lib usr local homebrew library homebrew library ruby site library ruby site library ruby site universal library ruby site system library frameworks ruby framework versions usr lib ruby vendor ruby system library frameworks ruby framework versions usr lib ruby vendor ruby system library frameworks ruby framework versions usr lib ruby vendor ruby universal system library frameworks ruby framework versions usr lib ruby vendor ruby system library frameworks ruby framework versions usr lib ruby system library frameworks ruby framework versions usr lib ruby system library frameworks ruby framework versions usr lib ruby universal environment variables lang en us utf path usr local sbin usr local bin usr sbin usr bin sbin bin usr local homebrew library taps homebrew homebrew services cmd usr local homebrew library 
homebrew shims scm shell usr local bin bash
binary_label: 1

Unnamed: 0: 470,542
id: 13,540,153,650
type: IssuesEvent
created_at: 2020-09-16 14:19:53
repo: ballerina-platform/ballerina-lang
repo_url: https://api.github.com/repos/ballerina-platform/ballerina-lang
action: closed
title: Add completion support for Object constructor expression
labels: Area/LanguageServer Points/1 Priority/High SwanLakeDump Team/Tooling Type/Task
body:
**Description:** $title, as per related to #25643
label: 1.0
text_combine:
Add completion support for Object constructor expression - **Description:** $title, as per related to #25643
index: non_main
text:
add completion support for object constructor expression description title as per related to
binary_label: 0

Unnamed: 0: 1,246
id: 5,308,978,080
type: IssuesEvent
created_at: 2017-02-12 04:03:07
repo: ansible/ansible-modules-extras
repo_url: https://api.github.com/repos/ansible/ansible-modules-extras
action: closed
title: vmware_guest.py OS customization
labels: affects_2.2 cloud feature_idea vmware waiting_on_maintainer
body:
##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME vmware_guest.py ##### ANSIBLE VERSION ``` ansible 2.2.0 ``` ##### CONFIGURATION Default configuration ##### OS / ENVIRONMENT N/A ##### SUMMARY This module should support ip address, subnet, gateway and dns settings. Network settings should be allowed to set per interface. ``` def deploy_template(self, poweron=False, wait_for_ip=False): # FIXME: # - clusters # - multiple datacenters # - resource pools # - multiple templates by the same name # - use disk config from template by default # - static IPs ``` ##### STEPS TO REPRODUCE N/A ##### EXPECTED RESULTS N/A ##### ACTUAL RESULTS ``` N/A ```
label: True
text_combine:
vmware_guest.py OS customization - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME vmware_guest.py ##### ANSIBLE VERSION ``` ansible 2.2.0 ``` ##### CONFIGURATION Default configuration ##### OS / ENVIRONMENT N/A ##### SUMMARY This module should support ip address, subnet, gateway and dns settings. Network settings should be allowed to set per interface. ``` def deploy_template(self, poweron=False, wait_for_ip=False): # FIXME: # - clusters # - multiple datacenters # - resource pools # - multiple templates by the same name # - use disk config from template by default # - static IPs ``` ##### STEPS TO REPRODUCE N/A ##### EXPECTED RESULTS N/A ##### ACTUAL RESULTS ``` N/A ```
index: main
text:
vmware guest py os customization issue type feature idea component name vmware guest py ansible version ansible configuration default configuration os environment n a summary this module should support ip address subnet gateway and dns settings network settings should be allowed to set per interface def deploy template self poweron false wait for ip false fixme clusters multiple datacenters resource pools multiple templates by the same name use disk config from template by default static ips steps to reproduce n a expected results n a actual results n a
binary_label: 1

Unnamed: 0: 5,781
id: 30,635,346,749
type: IssuesEvent
created_at: 2023-07-24 17:19:25
repo: microsoft/mu_basecore
repo_url: https://api.github.com/repos/microsoft/mu_basecore
action: closed
title: [Bug]: SMMUv3 IORT HTTU Override flag needs to be expanded
labels: state:needs-triage type:bug state:needs-maintainer-feedback urgency:low
body:
### Is there an existing issue for this? - [X] I have searched existing issues ### Current Behavior The flag for HTTU override in an SMMUv3 node in the IORT table is currently define in MU_BASECORE/MdePkg/Include/IndustryStandard/IoRemappingTable.h:63 as: ``` #define EFI_ACPI_IORT_SMMUv3_FLAG_HTTU_OVERRIDE BIT1 ``` ### Expected Behavior The length of this field is actually 2 bits. Possible values are: 0b0000: Hardware update of the Access flag and dirty state are not supported. 0b0001: Support for hardware update of the Access flag for Block and Page descriptors. 0b0010: As 0b0001, and adds support for hardware update of the Access flag for Block and Page descriptors. Hardware update of dirty state is supported. For more info see: - Arm System Memory Management Unit Architecture Specification - Hardware Updates to Access Flag and Dirty State in the Armv8.1-A architecture ### Steps To Reproduce N/A ### Build Environment ```markdown N/A ``` ### Version Information ```text v2023020002.1.4 ``` ### Urgency Low ### Are you going to fix this? I will fix it ### Do you need maintainer feedback? Maintainer feedback requested ### Anything else? _No response_
label: True
text_combine:
[Bug]: SMMUv3 IORT HTTU Override flag needs to be expanded - ### Is there an existing issue for this? - [X] I have searched existing issues ### Current Behavior The flag for HTTU override in an SMMUv3 node in the IORT table is currently define in MU_BASECORE/MdePkg/Include/IndustryStandard/IoRemappingTable.h:63 as: ``` #define EFI_ACPI_IORT_SMMUv3_FLAG_HTTU_OVERRIDE BIT1 ``` ### Expected Behavior The length of this field is actually 2 bits. Possible values are: 0b0000: Hardware update of the Access flag and dirty state are not supported. 0b0001: Support for hardware update of the Access flag for Block and Page descriptors. 0b0010: As 0b0001, and adds support for hardware update of the Access flag for Block and Page descriptors. Hardware update of dirty state is supported. For more info see: - Arm System Memory Management Unit Architecture Specification - Hardware Updates to Access Flag and Dirty State in the Armv8.1-A architecture ### Steps To Reproduce N/A ### Build Environment ```markdown N/A ``` ### Version Information ```text v2023020002.1.4 ``` ### Urgency Low ### Are you going to fix this? I will fix it ### Do you need maintainer feedback? Maintainer feedback requested ### Anything else? _No response_
index: main
text:
iort httu override flag needs to be expanded is there an existing issue for this i have searched existing issues current behavior the flag for httu override in an node in the iort table is currently define in mu basecore mdepkg include industrystandard ioremappingtable h as define efi acpi iort flag httu override expected behavior the length of this field is actually bits possible values are hardware update of the access flag and dirty state are not supported support for hardware update of the access flag for block and page descriptors as and adds support for hardware update of the access flag for block and page descriptors hardware update of dirty state is supported for more info see arm system memory management unit architecture specification hardware updates to access flag and dirty state in the a architecture steps to reproduce n a build environment markdown n a version information text urgency low are you going to fix this i will fix it do you need maintainer feedback maintainer feedback requested anything else no response
binary_label: 1

Unnamed: 0: 4,623
id: 23,930,781,364
type: IssuesEvent
created_at: 2022-09-10 14:07:21
repo: cncf/glossary
repo_url: https://api.github.com/repos/cncf/glossary
action: closed
title: The custom search uses old indexes
labels: maintainers
body:
Hi, I found that several search index data were old and didn't reflect the actual path format. ref) https://github.com/cncf/glossary/pull/922 For example, currently `Shift Left` page is on `/shift-left`, but the search result links to `/shift_left`.
label: True
text_combine:
The custom search uses old indexes - Hi, I found that several search index data were old and didn't reflect the actual path format. ref) https://github.com/cncf/glossary/pull/922 For example, currently `Shift Left` page is on `/shift-left`, but the search result links to `/shift_left`.
index: main
text:
the custom search uses old indexes hi i found that several search index data were old and didn t reflect the actual path format ref for example currently shift left page is on shift left but the search result links to shift left
binary_label: 1

Unnamed: 0: 1,856
id: 6,577,402,365
type: IssuesEvent
created_at: 2017-09-12 00:39:53
repo: ansible/ansible-modules-core
repo_url: https://api.github.com/repos/ansible/ansible-modules-core
action: closed
title: os_router: HA interfaces break os_router module.
labels: affects_2.0 bug_report cloud openstack waiting_on_maintainer
body:
##### ISSUE TYPE - Bug Report ##### COMPONENT NAME os_router.py ##### ANSIBLE VERSION ``` ansible 2.0.1.0 ``` ##### OS / ENVIRONMENT NA ##### SUMMARY The HA ports cause issues when deleting a router through this module. This means that any router updates or deletions through this module will fail. Currently, for updates, the code retrieves all internal interfaces of a router(including the HA ports), then tries to delete them. See: https://github.com/ansible/ansible-modules-core/blob/devel/cloud/openstack/os_router.py#L330 The principle is the same for deletion. However, neutron does not allow these interfaces to be deleted and will throw an error on any such attempt. ##### STEPS TO REPRODUCE 1. Create a router using the os_router module on an environment running Neutron L3HA using the VRRP protocol(i'm unsure about DVR). 2. Update it's configurations 3. Re-run the playbooks. They will fail when trying to delete the HA ports. ##### EXPECTED RESULTS The playbooks will fail to run.
label: True
text_combine:
os_router: HA interfaces break os_router module. - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME os_router.py ##### ANSIBLE VERSION ``` ansible 2.0.1.0 ``` ##### OS / ENVIRONMENT NA ##### SUMMARY The HA ports cause issues when deleting a router through this module. This means that any router updates or deletions through this module will fail. Currently, for updates, the code retrieves all internal interfaces of a router(including the HA ports), then tries to delete them. See: https://github.com/ansible/ansible-modules-core/blob/devel/cloud/openstack/os_router.py#L330 The principle is the same for deletion. However, neutron does not allow these interfaces to be deleted and will throw an error on any such attempt. ##### STEPS TO REPRODUCE 1. Create a router using the os_router module on an environment running Neutron L3HA using the VRRP protocol(i'm unsure about DVR). 2. Update it's configurations 3. Re-run the playbooks. They will fail when trying to delete the HA ports. ##### EXPECTED RESULTS The playbooks will fail to run.
index: main
text:
os router ha interfaces break os router module issue type bug report component name os router py ansible version ansible os environment na summary the ha ports cause issues when deleting a router through this module this means that any router updates or deletions through this module will fail currently for updates the code retrieves all internal interfaces of a router including the ha ports then tries to delete them see the principle is the same for deletion however neutron does not allow these interfaces to be deleted and will throw an error on any such attempt steps to reproduce create a router using the os router module on an environment running neutron using the vrrp protocol i m unsure about dvr update it s configurations re run the playbooks they will fail when trying to delete the ha ports expected results the playbooks will fail to run
binary_label: 1

Unnamed: 0: 4,823
id: 24,857,939,086
type: IssuesEvent
created_at: 2022-10-27 05:15:40
repo: aws/aws-sam-cli
repo_url: https://api.github.com/repos/aws/aws-sam-cli
action: closed
title: Bug: sam invoke local - Docker 404 Client Error: Not Found
labels: type/question blocked/more-info-needed maintainer/need-followup
body:
### Description: I'm having a hard time trying to locally invoking lambda code with hello-world templates ### Steps to reproduce: `sam init -r python3.8` `1` `1` `N` `sam-app-py38` `cd sam-app-py38` `sam build --debug` 2022-10-19 19:05:12,888 | Telemetry endpoint configured to be https://aws-serverless-tools-telemetry.us-west-2.amazonaws.com/metrics 2022-10-19 19:05:12,888 | Using config file: samconfig.toml, config environment: default 2022-10-19 19:05:12,888 | Expand command line arguments to: 2022-10-19 19:05:12,888 | --template_file=/PATH_TO_SAM_APP/template.yml --build_dir=.aws-sam/build --cache_dir=.aws-sam/cache 2022-10-19 19:05:13,127 | 'build' command is called 2022-10-19 19:05:13,127 | Template is not provided in context, skip adding project type metric 2022-10-19 19:05:13,154 | Sending Telemetry: {'metrics': [{'commandRun': {'requestId': '5dab619d-ddc3-435c-a390-8deddd4ff231', 'installationId': '8ada282a-332d-4b64-98e1-4f7fb559111a', 'sessionId': '109eed1c-a0cf-4170-a22a-0b43307498a9', 'executionEnvironment': 'CLI', 'ci': False, 'pyversion': '3.8.15', 'samcliVersion': '1.60.0', 'awsProfileProvided': False, 'debugFlagProvided': True, 'region': '', 'commandName': 'sam build', 'metricSpecificAttributes': {'gitOrigin': None, 'projectName': '894b4894943a1373a55cb1b7c0a8aa33530234fd6f4de1e562bc37c62c949767', 'initialCommit': None}, 'duration': 265, 'exitReason': 'TemplateNotFoundException', 'exitCode': 1}}]} 2022-10-19 19:05:13,853 | HTTPSConnectionPool(host='aws-serverless-tools-telemetry.us-west-2.amazonaws.com', port=443): Read timed out. (read timeout=0.1) Error: Template file not found at /PATH_TO_SAM_APP/template.yml afrigerio@Alessandros-MacBook-Pro lambdas % cd sam-app-py38 afrigerio@Alessandros-MacBook-Pro sam-app-py38 % sam build --debug 2022-10-19 19:06:07,009 | Telemetry endpoint configured to be https://aws-serverless-tools-telemetry.us-west-2.amazonaws.com/metrics 2022-10-19 19:06:07,009 | Using config file: samconfig.toml, config environment: default 2022-10-19 19:06:07,009 | Expand command line arguments to: 2022-10-19 19:06:07,009 | --template_file=/PATH_TO_SAM_APP/sam-app-py38/template.yaml --build_dir=.aws-sam/build --cache_dir=.aws-sam/cache 2022-10-19 19:06:07,144 | 'build' command is called 2022-10-19 19:06:07,147 | No Parameters detected in the template 2022-10-19 19:06:07,158 | There is no customer defined id or cdk path defined for resource HelloWorldFunction, so we will use the resource logical id as the resource id 2022-10-19 19:06:07,158 | There is no customer defined id or cdk path defined for resource ServerlessRestApi, so we will use the resource logical id as the resource id 2022-10-19 19:06:07,158 | 0 stacks found in the template 2022-10-19 19:06:07,158 | No Parameters detected in the template 2022-10-19 19:06:07,163 | There is no customer defined id or cdk path defined for resource HelloWorldFunction, so we will use the resource logical id as the resource id 2022-10-19 19:06:07,164 | There is no customer defined id or cdk path defined for resource ServerlessRestApi, so we will use the resource logical id as the resource id 2022-10-19 19:06:07,164 | 2 resources found in the stack 2022-10-19 19:06:07,164 | Found Serverless function with name='HelloWorldFunction' and CodeUri='hello_world/' 2022-10-19 19:06:07,164 | --base-dir is not presented, adjusting uri hello_world/ relative to /PATH_TO_SAM_APP/sam-app-py38/template.yaml 2022-10-19 19:06:07,166 | Your template contains a resource with logical ID "ServerlessRestApi", which is a reserved logical ID in 
AWS SAM. It could result in unexpected behaviors and is not recommended. 2022-10-19 19:06:07,166 | 2 resources found in the stack 2022-10-19 19:06:07,166 | Found Serverless function with name='HelloWorldFunction' and CodeUri='hello_world/' 2022-10-19 19:06:07,167 | Instantiating build definitions 2022-10-19 19:06:07,167 | No previous build graph found, generating new one 2022-10-19 19:06:07,167 | Unique function build definition found, adding as new (Function Build Definition: BuildDefinition(python3.8, /PATH_TO_SAM_APP/sam-app-py38/hello_world, Zip, , 4a7df146-0be6-426a-b355-2e43bc6fce03, {}, {}, x86_64, []), Function: Function(function_id='HelloWorldFunction', name='HelloWorldFunction', functionname='HelloWorldFunction', runtime='python3.8', memory=None, timeout=3, handler='app.lambda_handler', imageuri=None, packagetype='Zip', imageconfig=None, codeuri='/PATH_TO_SAM_APP/sam-app-py38/hello_world', environment=None, rolearn=None, layers=[], events={'HelloWorld': {'Type': 'Api', 'Properties': {'Path': '/hello', 'Method': 'get', 'RestApiId': 'ServerlessRestApi'}}}, metadata={'SamResourceId': 'HelloWorldFunction'}, inlinecode=None, codesign_config_arn=None, architectures=['x86_64'], function_url_config=None, stack_path='')) 2022-10-19 19:06:07,168 | Building codeuri: /PATH_TO_SAM_APP/sam-app-py38/hello_world runtime: python3.8 metadata: {} architecture: x86_64 functions: HelloWorldFunction 2022-10-19 19:06:07,168 | Building to following folder /PATH_TO_SAM_APP/sam-app-py38/.aws-sam/build/HelloWorldFunction 2022-10-19 19:06:07,168 | Loading workflow module 'aws_lambda_builders.workflows' 2022-10-19 19:06:07,172 | Registering workflow 'PythonPipBuilder' with capability 'Capability(language='python', dependency_manager='pip', application_framework=None)' 2022-10-19 19:06:07,173 | Registering workflow 'NodejsNpmBuilder' with capability 'Capability(language='nodejs', dependency_manager='npm', application_framework=None)' 2022-10-19 19:06:07,174 | Registering workflow 'RubyBundlerBuilder' with capability 'Capability(language='ruby', dependency_manager='bundler', application_framework=None)' 2022-10-19 19:06:07,176 | Registering workflow 'GoModulesBuilder' with capability 'Capability(language='go', dependency_manager='modules', application_framework=None)' 2022-10-19 19:06:07,178 | Registering workflow 'JavaGradleWorkflow' with capability 'Capability(language='java', dependency_manager='gradle', application_framework=None)' 2022-10-19 19:06:07,179 | Registering workflow 'JavaMavenWorkflow' with capability 'Capability(language='java', dependency_manager='maven', application_framework=None)' 2022-10-19 19:06:07,181 | Registering workflow 'DotnetCliPackageBuilder' with capability 'Capability(language='dotnet', dependency_manager='cli-package', application_framework=None)' 2022-10-19 19:06:07,182 | Registering workflow 'CustomMakeBuilder' with capability 'Capability(language='provided', dependency_manager=None, application_framework=None)' 2022-10-19 19:06:07,183 | Registering workflow 'NodejsNpmEsbuildBuilder' with capability 'Capability(language='nodejs', dependency_manager='npm-esbuild', application_framework=None)' 2022-10-19 19:06:07,183 | Found workflow 'PythonPipBuilder' to support capabilities 'Capability(language='python', dependency_manager='pip', application_framework=None)' 2022-10-19 19:06:07,198 | Running workflow 'PythonPipBuilder' 2022-10-19 19:06:07,199 | Running PythonPipBuilder:ResolveDependencies 2022-10-19 19:06:07,216 | calling pip download -r 
/PATH_TO_SAM_APP/sam-app-py38/hello_world/requirements.txt --dest /var/folders/7s/02wlz_dx27q5f_hjqbrsjdsr0000gn/T/tmpm3ffmr1d --exists-action i 2022-10-19 19:06:07,782 | Full dependency closure: {urllib3==1.26.12(wheel), requests==2.28.1(wheel), idna==3.4(wheel), certifi==2022.9.24(wheel), charset-normalizer==2.1.1(wheel)} 2022-10-19 19:06:07,783 | initial compatible: {urllib3==1.26.12(wheel), requests==2.28.1(wheel), idna==3.4(wheel), certifi==2022.9.24(wheel), charset-normalizer==2.1.1(wheel)} 2022-10-19 19:06:07,783 | initial incompatible: set() 2022-10-19 19:06:07,783 | Downloading missing wheels: set() 2022-10-19 19:06:07,783 | compatible wheels after second download pass: {urllib3==1.26.12(wheel), requests==2.28.1(wheel), idna==3.4(wheel), certifi==2022.9.24(wheel), charset-normalizer==2.1.1(wheel)} 2022-10-19 19:06:07,783 | Build missing wheels from sdists (C compiling True): set() 2022-10-19 19:06:07,783 | compatible after building wheels (no C compiling): {urllib3==1.26.12(wheel), requests==2.28.1(wheel), idna==3.4(wheel), certifi==2022.9.24(wheel), charset-normalizer==2.1.1(wheel)} 2022-10-19 19:06:07,783 | Build missing wheels from sdists (C compiling False): set() 2022-10-19 19:06:07,783 | compatible after building wheels (C compiling): {urllib3==1.26.12(wheel), requests==2.28.1(wheel), idna==3.4(wheel), certifi==2022.9.24(wheel), charset-normalizer==2.1.1(wheel)} 2022-10-19 19:06:07,783 | Final compatible: {idna==3.4(wheel), urllib3==1.26.12(wheel), requests==2.28.1(wheel), certifi==2022.9.24(wheel), charset-normalizer==2.1.1(wheel)} 2022-10-19 19:06:07,783 | Final incompatible: set() 2022-10-19 19:06:07,783 | Final missing wheels: set() 2022-10-19 19:06:07,797 | PythonPipBuilder:ResolveDependencies succeeded 2022-10-19 19:06:07,797 | Running PythonPipBuilder:CopySource 2022-10-19 19:06:07,798 | Copying source file (/PATH_TO_SAM_APP/sam-app-py38/hello_world/requirements.txt) to destination (/PATH_TO_SAM_APP/sam-app-py38/.aws-sam/build/HelloWorldFunction/requirements.txt) 2022-10-19 19:06:07,798 | Copying source file (/PATH_TO_SAM_APP/sam-app-py38/hello_world/__init__.py) to destination (/PATH_TO_SAM_APP/sam-app-py38/.aws-sam/build/HelloWorldFunction/__init__.py) 2022-10-19 19:06:07,798 | Copying source file (/PATH_TO_SAM_APP/sam-app-py38/hello_world/app.py) to destination (/PATH_TO_SAM_APP/sam-app-py38/.aws-sam/build/HelloWorldFunction/app.py) 2022-10-19 19:06:07,798 | PythonPipBuilder:CopySource succeeded 2022-10-19 19:06:07,799 | There is no customer defined id or cdk path defined for resource HelloWorldFunction, so we will use the resource logical id as the resource id 2022-10-19 19:06:07,799 | 2 resources found in the stack 2022-10-19 19:06:07,799 | Found Serverless function with name='HelloWorldFunction' and CodeUri='hello_world/' Build Succeeded Built Artifacts : .aws-sam/build Built Template : .aws-sam/build/template.yaml Commands you can use next [*] Validate SAM template: sam validate [*] Invoke Function: sam local invoke [*] Test Function in the Cloud: sam sync --stack-name {stack-name} --watch [*] Deploy: sam deploy --guided 2022-10-19 19:06:07,826 | Telemetry endpoint configured to be https://aws-serverless-tools-telemetry.us-west-2.amazonaws.com/metrics 2022-10-19 19:06:07,826 | Unable to find Click Context for getting session_id. 
2022-10-19 19:06:07,827 | Sending Telemetry: {'metrics': [{'commandRun': {'requestId': '7ac7183d-5134-44d2-a310-1ee8d30ece10', 'installationId': '8ada282a-332d-4b64-98e1-4f7fb559111a', 'sessionId': 'b549e766-17a5-4641-b0de-d53095b082cb', 'executionEnvironment': 'CLI', 'ci': False, 'pyversion': '3.8.15', 'samcliVersion': '1.60.0', 'awsProfileProvided': False, 'debugFlagProvided': True, 'region': '', 'commandName': 'sam build', 'metricSpecificAttributes': {'projectType': 'CFN', 'gitOrigin': None, 'projectName': '0fdcacdf0a5a5c982672247f3c17ee797bd2e5c80105099d05b5329699391655', 'initialCommit': None}, 'duration': 816, 'exitReason': 'success', 'exitCode': 0}}]} 2022-10-19 19:06:07,827 | Sending Telemetry: {'metrics': [{'events': {'requestId': 'dd4677f9-d7d4-4ebb-9214-aa1aebc9712c', 'installationId': '8ada282a-332d-4b64-98e1-4f7fb559111a', 'sessionId': 'b549e766-17a5-4641-b0de-d53095b082cb', 'executionEnvironment': 'CLI', 'ci': False, 'pyversion': '3.8.15', 'samcliVersion': '1.60.0', 'metricSpecificAttributes': {'events': [{'event_name': 'BuildWorkflowUsed', 'event_value': 'python-pip', 'thread_id': 4376790400, 'time_stamp': '2022-10-19 17:06:07.168'}, {'event_name': 'BuildFunctionRuntime', 'event_value': 'python3.8', 'thread_id': 4376790400, 'time_stamp': '2022-10-19 17:06:07.801'}]}}}]} 2022-10-19 19:06:08,521 | HTTPSConnectionPool(host='aws-serverless-tools-telemetry.us-west-2.amazonaws.com', port=443): Read timed out. (read timeout=0.1) 2022-10-19 19:06:08,522 | HTTPSConnectionPool(host='aws-serverless-tools-telemetry.us-west-2.amazonaws.com', port=443): Read timed out. (read timeout=0.1) `sam local invoke --debug` 2022-10-19 19:15:55,258 | Telemetry endpoint configured to be https://aws-serverless-tools-telemetry.us-west-2.amazonaws.com/metrics 2022-10-19 19:15:55,258 | Using config file: samconfig.toml, config environment: default 2022-10-19 19:15:55,258 | Expand command line arguments to: 2022-10-19 19:15:55,258 | --template_file=/PATH_TO_SAM_APP/sam-app-py38/.aws-sam/build/template.yaml --event=events/event.json --no_event --layer_cache_basedir=/Users/afrigerio/.aws-sam/layers-pkg --container_host=localhost --container_host_interface=127.0.0.1 2022-10-19 19:15:55,258 | local invoke command is called 2022-10-19 19:15:55,261 | No Parameters detected in the template 2022-10-19 19:15:55,270 | Sam customer defined id is more priority than other IDs. Customer defined id for resource HelloWorldFunction is HelloWorldFunction 2022-10-19 19:15:55,270 | There is no customer defined id or cdk path defined for resource ServerlessRestApi, so we will use the resource logical id as the resource id 2022-10-19 19:15:55,270 | 0 stacks found in the template 2022-10-19 19:15:55,270 | No Parameters detected in the template 2022-10-19 19:15:55,275 | Sam customer defined id is more priority than other IDs. 
Customer defined id for resource HelloWorldFunction is HelloWorldFunction
2022-10-19 19:15:55,275 | There is no customer defined id or cdk path defined for resource ServerlessRestApi, so we will use the resource logical id as the resource id
2022-10-19 19:15:55,276 | 2 resources found in the stack
2022-10-19 19:15:55,276 | Found Serverless function with name='HelloWorldFunction' and CodeUri='HelloWorldFunction'
2022-10-19 19:15:55,276 | --base-dir is not presented, adjusting uri HelloWorldFunction relative to /PATH_TO_SAM_APP/sam-app-py38/.aws-sam/build/template.yaml
2022-10-19 19:15:55,299 | Found one Lambda function with name 'HelloWorldFunction'
2022-10-19 19:15:55,299 | Invoking app.lambda_handler (python3.8)
2022-10-19 19:15:55,299 | No environment variables found for function 'HelloWorldFunction'
2022-10-19 19:15:55,299 | Loading AWS credentials from session with profile 'None'
2022-10-19 19:15:55,304 | Resolving code path. Cwd=/PATH_TO_SAM_APP/sam-app-py38/.aws-sam/build, CodeUri=/PATH_TO_SAM_APP/sam-app-py38/.aws-sam/build/HelloWorldFunction
2022-10-19 19:15:55,304 | Resolved absolute path to code is /PATH_TO_SAM_APP/sam-app-py38/.aws-sam/build/HelloWorldFunction
2022-10-19 19:15:55,305 | Code /PATH_TO_SAM_APP/sam-app-py38/.aws-sam/build/HelloWorldFunction is not a zip/jar file
2022-10-19 19:15:55,307 | Cleaning all decompressed code dirs
2022-10-19 19:15:55,333 | Sending Telemetry: {'metrics': [{'commandRun': {'requestId': '94dcccda-e42d-4b06-b446-ad4bcc6fe6c3', 'installationId': '8ada282a-332d-4b64-98e1-4f7fb559111a', 'sessionId': 'b2d0e00e-d549-46e3-bab5-0cd6a21e9883', 'executionEnvironment': 'CLI', 'ci': False, 'pyversion': '3.8.15', 'samcliVersion': '1.60.0', 'awsProfileProvided': False, 'debugFlagProvided': True, 'region': '', 'commandName': 'sam local invoke', 'metricSpecificAttributes': {'projectType': 'CFN', 'gitOrigin': None, 'projectName': '0fdcacdf0a5a5c982672247f3c17ee797bd2e5c80105099d05b5329699391655', 'initialCommit': None}, 'duration': 75, 'exitReason': 'NotFound', 'exitCode': 255}}]}
2022-10-19 19:15:56,064 | HTTPSConnectionPool(host='aws-serverless-tools-telemetry.us-west-2.amazonaws.com', port=443): Read timed out. (read timeout=0.1)

Traceback (most recent call last):
  File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/docker/api/client.py", line 261, in _raise_for_status
    response.raise_for_status()
  File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/requests/models.py", line 943, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: http+docker://localhost/v1.35/images/public.ecr.aws/sam/emulation-python3.8:rapid-1.60.0-x86_64/json

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/homebrew/bin/sam", line 8, in <module>
    sys.exit(cli())
  File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/click/core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/click/core.py", line 782, in main
    rv = self.invoke(ctx)
  File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/click/core.py", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/click/core.py", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/click/core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/click/core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/click/decorators.py", line 73, in new_func
    return ctx.invoke(f, obj, *args, **kwargs)
  File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/click/core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/samcli/lib/telemetry/metric.py", line 176, in wrapped
    raise exception # pylint: disable=raising-bad-type
  File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/samcli/lib/telemetry/metric.py", line 126, in wrapped
    return_value = func(*args, **kwargs)
  File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/samcli/lib/utils/version_checker.py", line 41, in wrapped
    actual_result = func(*args, **kwargs)
  File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/samcli/cli/main.py", line 86, in wrapper
    return func(*args, **kwargs)
  File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/samcli/commands/local/invoke/cli.py", line 85, in cli
    do_cli(
  File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/samcli/commands/local/invoke/cli.py", line 182, in do_cli
    context.local_lambda_runner.invoke(
  File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/samcli/commands/local/lib/local_lambda.py", line 137, in invoke
    self.local_runtime.invoke(
  File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/samcli/lib/telemetry/metric.py", line 240, in wrapped_func
    return_value = func(*args, **kwargs)
  File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/samcli/local/lambdafn/runtime.py", line 177, in invoke
    container = self.create(function_config, debug_context, container_host, container_host_interface)
  File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/samcli/local/lambdafn/runtime.py", line 73, in create
    container = LambdaContainer(
  File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/samcli/local/docker/lambda_container.py", line 93, in __init__
    image = LambdaContainer._get_image(
  File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/samcli/local/docker/lambda_container.py", line 236, in _get_image
    return lambda_image.build(runtime, packagetype, image, layers, architecture, function_name=function_name)
  File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/samcli/local/docker/lambda_image.py", line 145, in build
    self.docker_client.images.get(image_tag)
  File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/docker/models/images.py", line 316, in get
    return self.prepare_model(self.client.api.inspect_image(name))
  File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/docker/utils/decorators.py", line 19, in wrapped
    return f(self, resource_id, *args, **kwargs)
  File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/docker/api/image.py", line 245, in inspect_image
    return self._result(
  File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/docker/api/client.py", line 267, in _result
    self._raise_for_status(response)
  File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/docker/api/client.py", line 263, in _raise_for_status
    raise create_api_error_from_http_exception(e)
  File "/opt/homebrew/Cellar/aws-sam-cli/1.60.0/libexec/lib/python3.8/site-packages/docker/errors.py", line 31, in create_api_error_from_http_exception
    raise cls(e, response=response, explanation=explanation)
docker.errors.NotFound: 404 Client Error: Not Found ("id not found")

### Expected result:
I was expecting the output of the Lambda execution. I get similar output if I try `sam local start-api` and then `curl http://127.0.0.1:3000/hello`.

### Additional environment details (Ex: Windows, Mac, Amazon Linux etc)
1. OS: macOS Monterey (M1)
2. `sam --version`: SAM CLI, version 1.60.0
3. AWS region: eu-central-1
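The traceback bottoms out in SAM CLI asking the Docker daemon for `public.ecr.aws/sam/emulation-python3.8:rapid-1.60.0-x86_64` and getting a 404, i.e. the image is neither cached locally nor available at that moment. A minimal diagnostic sketch, assuming Docker Desktop is the daemon SAM CLI talks to; the image names come from the traceback, everything else is my assumption (the `rapid-*` tag is normally assembled locally by SAM CLI on top of the public base image, so only the base tag can be pulled directly):

```bash
# Diagnostic sketch, not part of the original report.

# 1. Is the Docker daemon SAM CLI talks to actually reachable?
docker info

# 2. Is any python3.8 emulation image cached locally?
docker images public.ecr.aws/sam/emulation-python3.8

# 3. Pull the public base tag manually; SAM CLI derives its rapid-* tag from it.
docker pull public.ecr.aws/sam/emulation-python3.8:latest

# Unrelated: silence the telemetry read-timeout noise in the debug logs.
export SAM_CLI_TELEMETRY=0
```

If the daemon is up and the base image pulls cleanly, the 404 points at SAM CLI failing to build or tag its local `rapid-*` image rather than at Docker itself.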
True
main
1
1,315
5,621,885,193
IssuesEvent
2017-04-04 11:15:57
caskroom/homebrew-cask
https://api.github.com/repos/caskroom/homebrew-cask
closed
Bug report: dwarf-fortress installation fails
awaiting maintainer feedback
#### General troubleshooting steps

- [x] I have checked the instructions for [reporting bugs](https://github.com/caskroom/homebrew-cask#reporting-bugs) (or [making requests](https://github.com/caskroom/homebrew-cask#requests)) before opening the issue.
- [x] None of the templates was appropriate for my issue, or I’m not sure.
- [x] I ran `brew update-reset && brew update` and retried my command.
- [x] I ran `brew doctor`, fixed as many issues as possible and retried my command.
- [x] I understand that [if I ignore these instructions, my issue may be closed without review](https://github.com/caskroom/homebrew-cask/blob/master/doc/faq/closing_issues_without_review.md).

#### Description of issue

I had the old Homebrew version of dwarf-fortress installed (not the cask) and tried to 'upgrade' to the cask version; I got no errors on upgrading, but it would not run (it complained about SDL errors). So I completely removed it and tried a fresh install from the cask instead (and got this error). It seems to be trying to back up my saves even though I don't have any, since the game isn't installed yet...

#### Output of your command with `--verbose --debug`

```
==> Hbc::Installer#install
==> Printing caveats
==> Caveats
During uninstall, your save data will be copied to /tmp/dwarf-fortress-save
==> Hbc::Installer#fetch
==> Satisfying dependencies
==> Installing Cask dependencies: sdl-framework, sdl-ttf-framework
sdl-framework ... already installed
sdl-ttf-framework ... already installed
complete
==> Downloading
==> Downloading http://www.bay12games.com/dwarves/df_43_05_osx.tar.bz2
Already downloaded: /Users/sandlst/Library/Caches/Homebrew/Cask/dwarf-fortress--0.43.05.tar.bz2
==> Downloaded to -> /Users/sandlst/Library/Caches/Homebrew/Cask/dwarf-fortress--0.43.05.tar.bz2
==> Verifying download
==> Determining which verifications to run for Cask dwarf-fortress
==> Checking for verification class Hbc::Verify::Checksum
==> 1 verifications defined Hbc::Verify::Checksum
==> Running verification of class Hbc::Verify::Checksum
==> Verifying checksum for Cask dwarf-fortress
==> SHA256 checksums match
==> Hbc::Installer#stage
==> Extracting primary container
==> Determining which containers to use based on filetype
==> Checking container class Hbc::Container::Pkg
==> Checking container class Hbc::Container::Ttf
==> Checking container class Hbc::Container::Otf
==> Checking container class Hbc::Container::Air
==> Checking container class Hbc::Container::Cab
==> Checking container class Hbc::Container::Dmg
==> Executing: ["/usr/bin/hdiutil", "imageinfo", "/Users/sandlst/Library/Caches/Homebrew/Cask/dwarf-fortress--0.43.05.tar.bz2"]
==> Checking container class Hbc::Container::SevenZip
==> Checking container class Hbc::Container::Sit
==> Checking container class Hbc::Container::Rar
==> Checking container class Hbc::Container::Zip
==> Checking container class Hbc::Container::Xar
==> Checking container class Hbc::Container::Tar
==> Using container class Hbc::Container::Tar for /Users/sandlst/Library/Caches/Homebrew/Cask/dwarf-fortress--0.43.05.tar.bz2
==> Executing: ["/usr/bin/tar", "-x", "-f", "/Users/sandlst/Library/Caches/Homebrew/Cask/dwarf-fortress--0.43.05.tar.bz2", "-C", "/var/folders/9k/trvx5bm93xg9cb06227fbnxh0000gn/T/d20170403-9557-ol3u5w"]
==> Executing: ["/usr/bin/ditto", "--", "/var/folders/9k/trvx5bm93xg9cb06227fbnxh0000gn/T/d20170403-9557-ol3u5w", "/usr/local/Caskroom/dwarf-fortress/0.43.05"]
==> Creating metadata directory /usr/local/Caskroom/dwarf-fortress/.metadata/0.43.05/20170403153417.301
==> Creating metadata subdirectory /usr/local/Caskroom/dwarf-fortress/.metadata/0.43.05/20170403153417.301/Casks
==> Installing artifacts
==> Determining which artifacts are present in Cask dwarf-fortress
==> 3 artifact/s defined #<Hbc::Artifact::PreflightBlock:0x007fb0a5289350> #<Hbc::Artifact::Binary:0x007fb0a5289210> #<Hbc::Artifact::PostflightBlock:0x007fb0a52890d0>
==> Installing artifact of class Hbc::Artifact::PreflightBlock
==> Installing artifact of class Hbc::Artifact::Binary
==> Linking Binary 'df.wrapper.sh' to '/usr/local/bin/dwarf-fortress'.
==> Executing: ["/bin/ln", "-h", "-f", "-s", "--", "/usr/local/Caskroom/dwarf-fortress/0.43.05/df_osx/df.wrapper.sh", "/usr/local/bin/dwarf-fortress"]
==> Adding com.apple.metadata:kMDItemAlternateNames metadata
==> Executing: ["/usr/bin/xattr", "-p", "com.apple.metadata:kMDItemAlternateNames", "/usr/local/Caskroom/dwarf-fortress/0.43.05/df_osx/df.wrapper.sh"]
==> Existing metadata is: ''
==> Executing: ["/bin/chmod", "--", "u+rw", "/usr/local/Caskroom/dwarf-fortress/0.43.05/df_osx/df.wrapper.sh", "/usr/local/Caskroom/dwarf-fortress/0.43.05/df_osx/df.wrapper.sh"]
==> Executing: ["/usr/bin/xattr", "-w", "com.apple.metadata:kMDItemAlternateNames", "(\\\"dwarf-fortress\\\")", "/usr/local/Caskroom/dwarf-fortress/0.43.05/df_osx/df.wrapper.sh"]
==> Installing artifact of class Hbc::Artifact::PostflightBlock
==> Reverting installation of artifact of class Hbc::Artifact::Binary
==> Unlinking Binary '/usr/local/bin/dwarf-fortress'.
==> Reverting installation of artifact of class Hbc::Artifact::PreflightBlock
==> Executing: ["/bin/cp", "-rf", "/usr/local/Caskroom/dwarf-fortress/0.43.05/df_osx/data/save", "/tmp/dwarf-fortress-save/"]
==> cp: directory /tmp/dwarf-fortress-save does not exist
==> Purging files for version 0.43.05 of Cask dwarf-fortress
Error: private method `load' called for Hbc:Module
Follow the instructions here: https://github.com/caskroom/homebrew-cask#reporting-bugs
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cask_loader.rb:39:in `block (4 levels) in load'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cask_loader.rb:36:in `each'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cask_loader.rb:36:in `block (3 levels) in load'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cask_loader.rb:32:in `chdir'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cask_loader.rb:32:in `block (2 levels) in load'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/artifact/abstract_flight_block.rb:35:in `instance_eval'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/artifact/abstract_flight_block.rb:35:in `block in abstract_phase'
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/set.rb:232:in `each_key'
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/set.rb:232:in `each'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/artifact/abstract_flight_block.rb:34:in `abstract_phase'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/artifact/abstract_flight_block.rb:24:in `install_phase'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/installer.rb:144:in `block in install_artifacts'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/installer.rb:141:in `each'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/installer.rb:141:in `install_artifacts'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/installer.rb:87:in `install'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli/install.rb:25:in `block in install_casks'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli/install.rb:19:in `each'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli/install.rb:19:in `install_casks'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli/install.rb:10:in `run'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:115:in `run_command'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:158:in `process'
/usr/local/Homebrew/Library/Homebrew/cmd/cask.rb:8:in `cask'
/usr/local/Homebrew/Library/Homebrew/brew.rb:91:in `<main>'
Error: Kernel.exit
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:169:in `exit'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:169:in `rescue in process'
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:149:in `process'
/usr/local/Homebrew/Library/Homebrew/cmd/cask.rb:8:in `cask'
/usr/local/Homebrew/Library/Homebrew/brew.rb:91:in `<main>'
```

#### Output of `brew cask doctor`

```
==> Homebrew-Cask Version
Homebrew-Cask 1.1.12-2-g5ab904ed
caskroom/homebrew-cask (git revision 7ccc4; last commit 2017-04-03)
==> Homebrew-Cask Install Location
<NONE>
==> Homebrew-Cask Staging Location
/usr/local/Caskroom
==> Homebrew-Cask Cached Downloads
~/Library/Caches/Homebrew/Cask (2 files, 155.1MB)
==> Homebrew-Cask Taps:
/usr/local/Homebrew/Library/Taps/caskroom/homebrew-cask (3630 casks)
/usr/local/Homebrew/Library/Taps/cartr/homebrew-qt4 (0 casks)
/usr/local/Homebrew/Library/Taps/goldcaddy77/homebrew-firefox (16 casks)
/usr/local/Homebrew/Library/Taps/homebrew/homebrew-bundle (0 casks)
/usr/local/Homebrew/Library/Taps/homebrew/homebrew-command-not-found (0 casks)
/usr/local/Homebrew/Library/Taps/homebrew/homebrew-core (0 casks)
/usr/local/Homebrew/Library/Taps/homebrew/homebrew-dupes (0 casks)
/usr/local/Homebrew/Library/Taps/homebrew/homebrew-fuse (0 casks)
/usr/local/Homebrew/Library/Taps/homebrew/homebrew-games (0 casks)
/usr/local/Homebrew/Library/Taps/homebrew/homebrew-versions (0 casks)
==> Contents of $LOAD_PATH
/usr/local/Homebrew/Library/Homebrew/cask/lib
/usr/local/Homebrew/Library/Homebrew
/Library/Ruby/Site/2.0.0
/Library/Ruby/Site/2.0.0/x86_64-darwin16
/Library/Ruby/Site/2.0.0/universal-darwin16
/Library/Ruby/Site
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/vendor_ruby/2.0.0
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/vendor_ruby/2.0.0/x86_64-darwin16
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/vendor_ruby/2.0.0/universal-darwin16
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/vendor_ruby
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/x86_64-darwin16
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/universal-darwin16
==> Environment Variables
LANG="en_US.UTF-8"
PATH="~/Qt5.5.1/5.5/clang_64/bin:/usr/local/sbin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/opt/X11/bin:/usr/local/share/dotnet:/Library/Frameworks/Mono.framework/Versions/Current/Commands:~/.bin:/usr/local/Homebrew/Library/Taps/homebrew/homebrew-bundle/cmd:/usr/local/Homebrew/Library/Taps/homebrew/homebrew-command-not-found/cmd:/usr/local/Homebrew/Library/Homebrew/shims/scm"
SHELL="/usr/local/bin/zsh"
```
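Two separate failures are visible in the output above: the install is aborted by the *private method load* error raised from `hbc/cask_loader.rb` inside the postflight block, and the subsequent revert of the preflight block trips over `cp` because `/tmp/dwarf-fortress-save` does not exist (`cp -rf` will not create the target directory). A hedged sketch of what one might try before re-running the install; the `mkdir` step is an assumption on my part, not documented cask behaviour:

```bash
# Hedged workaround sketch, not from the original report.

# Pre-create the backup directory the preflight/uninstall block copies into,
# so at least the revert's cp step can complete (assumption: it must exist).
mkdir -p /tmp/dwarf-fortress-save

# The load error points at the cask machinery itself (hbc/cask_loader.rb),
# so refreshing Homebrew and the cask tap before retrying seems worth a try.
brew update
brew cask install dwarf-fortress
```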
True
main
bug report dwarf fortress installation fails general troubleshooting steps i have checked the instructions for or before opening the issue none of the templates was appropriate for my issue or i’m not sure i ran brew update reset brew update and retried my command i ran brew doctor fixed as many issues as possible and retried my command i understand that description of issue i had the old home brew version of dwarf fortress installed not cask and tried to upgrade to cask version i got no errors on upgrading but it would not run complained about sdl errors so i completely removed it and tried to install fresh from cask instead and got this error it seems to be trying to back up my saves even though i don t have any since the game isn t installed yet output of your command with verbose debug hbc installer install printing caveats caveats during uninstall your save data will be copied to tmp dwarf fortress save hbc installer fetch satisfying dependencies installing cask dependencies sdl framework sdl ttf framework sdl framework already installed sdl ttf framework already installed complete downloading downloading already downloaded users sandlst library caches homebrew cask dwarf fortress tar downloaded to users sandlst library caches homebrew cask dwarf fortress tar verifying download determining which verifications to run for cask dwarf fortress checking for verification class hbc verify checksum verifications defined hbc verify checksum running verification of class hbc verify checksum verifying checksum for cask dwarf fortress checksums match hbc installer stage extracting primary container determining which containers to use based on filetype checking container class hbc container pkg checking container class hbc container ttf checking container class hbc container otf checking container class hbc container air checking container class hbc container cab checking container class hbc container dmg executing checking container class hbc container sevenzip checking container class hbc container sit checking container class hbc container rar checking container class hbc container zip checking container class hbc container xar checking container class hbc container tar using container class hbc container tar for users sandlst library caches homebrew cask dwarf fortress tar executing executing creating metadata directory usr local caskroom dwarf fortress metadata creating metadata subdirectory usr local caskroom dwarf fortress metadata casks installing artifacts determining which artifacts are present in cask dwarf fortress artifact s defined installing artifact of class hbc artifact preflightblock installing artifact of class hbc artifact binary linking binary df wrapper sh to usr local bin dwarf fortress executing adding com apple metadata kmditemalternatenames metadata executing existing metadata is executing executing installing artifact of class hbc artifact postflightblock reverting installation of artifact of class hbc artifact binary unlinking binary usr local bin dwarf fortress reverting installation of artifact of class hbc artifact preflightblock executing cp directory tmp dwarf fortress save does not exist purging files for version of cask dwarf fortress error private method load called for hbc module follow the instructions here usr local homebrew library homebrew cask lib hbc cask loader rb in block levels in load usr local homebrew library homebrew cask lib hbc cask loader rb in each usr local homebrew library homebrew cask lib hbc cask loader rb in block levels in load usr local 
homebrew library homebrew cask lib hbc cask loader rb in chdir usr local homebrew library homebrew cask lib hbc cask loader rb in block levels in load usr local homebrew library homebrew cask lib hbc artifact abstract flight block rb in instance eval usr local homebrew library homebrew cask lib hbc artifact abstract flight block rb in block in abstract phase system library frameworks ruby framework versions usr lib ruby set rb in each key system library frameworks ruby framework versions usr lib ruby set rb in each usr local homebrew library homebrew cask lib hbc artifact abstract flight block rb in abstract phase usr local homebrew library homebrew cask lib hbc artifact abstract flight block rb in install phase usr local homebrew library homebrew cask lib hbc installer rb in block in install artifacts usr local homebrew library homebrew cask lib hbc installer rb in each usr local homebrew library homebrew cask lib hbc installer rb in install artifacts usr local homebrew library homebrew cask lib hbc installer rb in install usr local homebrew library homebrew cask lib hbc cli install rb in block in install casks usr local homebrew library homebrew cask lib hbc cli install rb in each usr local homebrew library homebrew cask lib hbc cli install rb in install casks usr local homebrew library homebrew cask lib hbc cli install rb in run usr local homebrew library homebrew cask lib hbc cli rb in run command usr local homebrew library homebrew cask lib hbc cli rb in process usr local homebrew library homebrew cmd cask rb in cask usr local homebrew library homebrew brew rb in error kernel exit usr local homebrew library homebrew cask lib hbc cli rb in exit usr local homebrew library homebrew cask lib hbc cli rb in rescue in process usr local homebrew library homebrew cask lib hbc cli rb in process usr local homebrew library homebrew cmd cask rb in cask usr local homebrew library homebrew brew rb in output of brew cask doctor homebrew cask version homebrew cask caskroom homebrew cask git revision last commit homebrew cask install location homebrew cask staging location usr local caskroom homebrew cask cached downloads library caches homebrew cask files homebrew cask taps usr local homebrew library taps caskroom homebrew cask casks usr local homebrew library taps cartr homebrew casks usr local homebrew library taps homebrew firefox casks usr local homebrew library taps homebrew homebrew bundle casks usr local homebrew library taps homebrew homebrew command not found casks usr local homebrew library taps homebrew homebrew core casks usr local homebrew library taps homebrew homebrew dupes casks usr local homebrew library taps homebrew homebrew fuse casks usr local homebrew library taps homebrew homebrew games casks usr local homebrew library taps homebrew homebrew versions casks contents of load path usr local homebrew library homebrew cask lib usr local homebrew library homebrew library ruby site library ruby site library ruby site universal library ruby site system library frameworks ruby framework versions usr lib ruby vendor ruby system library frameworks ruby framework versions usr lib ruby vendor ruby system library frameworks ruby framework versions usr lib ruby vendor ruby universal system library frameworks ruby framework versions usr lib ruby vendor ruby system library frameworks ruby framework versions usr lib ruby system library frameworks ruby framework versions usr lib ruby system library frameworks ruby framework versions usr lib ruby universal environment variables lang en us utf path 
clang bin usr local sbin usr local bin usr bin bin usr sbin sbin opt bin usr local share dotnet library frameworks mono framework versions current commands bin usr local homebrew library taps homebrew homebrew bundle cmd usr local homebrew library taps homebrew homebrew command not found cmd usr local homebrew library homebrew shims scm shell usr local bin zsh
1
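The verbose output in the record above fails at the uninstall preflight: `cp -rf ... /tmp/dwarf-fortress-save/` aborts because the destination directory was never created on a fresh install. A minimal Python sketch of the missing create-before-copy step (the actual cask preflight is Ruby; the paths are taken from the log):

```python
import shutil
from pathlib import Path

def backup_saves(src: str, dest: str) -> None:
    """Copy the save directory, creating the destination first."""
    src_path = Path(src)
    if not src_path.is_dir():
        return  # fresh install: nothing to back up
    dest_path = Path(dest)
    dest_path.mkdir(parents=True, exist_ok=True)  # the step the plain `cp` skipped
    shutil.copytree(src_path, dest_path / src_path.name, dirs_exist_ok=True)

# Paths taken from the verbose output above (dirs_exist_ok needs Python 3.8+).
backup_saves(
    "/usr/local/Caskroom/dwarf-fortress/0.43.05/df_osx/data/save",
    "/tmp/dwarf-fortress-save",
)
```

The equivalent guard in the cask's preflight block (`mkdir -p` before `cp`) would avoid the error when there are no saves to back up.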
18,399
12,878,771,150
IssuesEvent
2020-07-11 18:19:23
lookit/lookit-api
https://api.github.com/repos/lookit/lookit-api
closed
Fine-tune permissions for Lookit access by researchers
researcher support researcher usability scaling
*Pain point*: Onboarding new researchers to use the staging and/or production Lookit sites requires a substantial number of manual steps and some awkward workarounds. For instance, researchers have to try to log in, let Kim know they've done so, and then Kim grants access to the site (she is not automatically notified as an admin that someone has requested access). Kim also grants them access to existing example studies manually. There is no way for closely associated researchers (i.e., those in the same lab) to automatically get access to all of their lab's studies, and no way to associate studies with a particular research group beyond the PI contact info provided. There is no distinction between access to study details (which we might want to share broadly, e.g. to allow easy replication) and access to study data.

*Planned functionality/changes*:

* Access: Automatically give researcher access to Lookit upon authentication via OSF. Researchers will require lab permissions to create new studies or access existing studies.
* Add a study-level setting "share study protocol on Lookit" which makes study details (not data!) accessible by any Lookit researcher (no lab affiliation required), i.e. grants preview access below.
* Create finer-grained permissions for individual study access, and apply to views & API. Better names for the groups are welcome...

| | Preview | Design | Analysis | Submission-processor | Researcher | Manager | Admin |
|---|---|---|---|---|---|---|---|
| Read study details (protocol, etc) | x | x | x | x | x | x | x |
| Write study details | | x | | | | x | x |
| Change study status (incl. write changes that would reject study) | | | | x | x | x | x |
| Manage study researchers (grant/change permissions) | | | | | | x | x |
| View/download study data | | | x | | x | | x |
| Code consent and create/edit feedback | | | | x | x | | x |
| Change study lab | | | | | | | x |

* Create new Lab model to encompass the following functionality:
  * Joining a lab:
    * Any researcher can see a list of all labs and click to request to join one.
    * A request to join triggers an email to lab managers (see below).
    * A lab manager/admin can see a list of researchers who have requested to join their lab and assign permissions (as on 'Manage Researchers' for MIT admins currently). There must always be at least one lab admin.
    * (We can create a "practice" lab we administer that people can join to try it out until their lab is approved.)
  * Editing/managing a lab:
    * Some basic information about each lab is saved. To include: name (e.g. "MIT Early Childhood Cognition Lab"), PI name(s), contact email, contact phone, lab website, parent-facing description of lab, IRB contact info, PDF upload for access agreement
    * Lab manager can edit & save info, any lab member can view info
    * Not otherwise used for now, but will eventually allow displaying lab info to participants along with studies (current/past)
  * Creating a lab:
    * Any researcher can click (on same page as list of all labs) to request a new lab.
    * This takes them to a form where they need to fill out some information about the lab, see above.
    * A Lookit admin is notified when a request for a new lab is submitted.
    * A Lookit admin can view pending labs, edit information, and create the Lab. (It may be that the lab gets actually created, but some "approved" flag is initially false and unapproved labs are basically treated as nonexistent for all other purposes.)
    * The requester is automatically added to and made an admin of the approved lab.
* Lab permissions:

| | Lab researcher | Lab read | Lab admin |
|---|---|---|---|
| Create studies associated with this Lab, and can be added manually to any of this Lab's studies | x | x | x |
| Preview role for all Lab studies | | x | x |
| Manager role for all Lab studies | | | x |
| Manage permissions for Lab (add new researchers, etc.) | | | x |
| Edit lab metadata (description, website, etc.) | | | x |

* Changes to study model & views:
  * A study must be associated with exactly 1 lab. When the study is created, this must be a lab the creator is a member of. Provide dropdown menu on study create?
  * All study researchers must be members of the study's lab. Limit searches when adding researchers in "manage researchers" section to the study lab's members.
  * Study lab can be changed, but only by an admin to another lab they are a member of. Probably put in new small form near 'manage researchers'.
  * Note that when cloning a study, we will need to change the lab of the study to a lab the cloning user is a member of; possibly prompt for "clone into..."

Note that new Lookit-wide permissions will be expected to eventually replace the "MIT Org Read" and "MIT Org Admin," (see #459) but for now (given the staff of two...) we will just rely on superuser perms.
True
Fine-tune permissions for Lookit access by researchers - *Pain point*: Onboarding new researchers to use the staging and/or production Lookit sites requires a substantial number of manual steps and some awkward workarounds. For instance, researchers have to try to log in, let Kim know they've done so, and then Kim grants access to the site (she is not automatically notified as an admin that someone has requested access). Kim also grants them access to existing example studies manually. There is no way for closely associated researchers (i.e., those in the same lab) to automatically get access to all of their lab's studies, and no way to associate studies with a particular research group beyond the PI contact info provided. There is no distinction between access to study details (which we might want to share broadly, e.g. to allow easy replication) and access to study data. *Planned functionality/changes*: * Access: Automatically give researcher access to Lookit upon authentication via OSF. Researchers will require lab permissions to create new studies or access existing studies. * Add a study-level setting "share study protocol on Lookit" which makes study details (not data!) accessible by any Lookit researcher (no lab affiliation required), i.e. grants preview access below. * Create finer-grained permissions for individual study access, and apply to views & API. Better names for the groups are welcome... | | Preview | Design | Analysis | Submission-processor | Researcher | Manager | Admin |---|---|---|---|---|---|---|---| | Read study details (protocol, etc) | x | x| x |x | x| x| x| | Write study details | | x | | | | x|x| | Change study status (incl. write changes that would reject study) | | | | x | x| x| x| | Manage study researchers (grant/change permissions) | | | | | | x|x| | View/download study data | | | x | | x| |x| | Code consent and create/edit feedback | | | |x | x| |x| | Change study lab | | | | | | |x| * Create new Lab model to encompass the following functionality: * Joining a lab: * Any researcher can see a list of all labs and click to request to join one. * A request to join triggers an email to lab managers (see below). * A lab manager/admin can see a list of researchers who have requested to join their lab and assign permissions (as on 'Manage Researchers' for MIT admins currently.) There must always be at least one lab admin. * (We can create a "practice" lab we administer that people can join to try it out until their lab is approved.) * Editing/managing a lab: * Some basic information about each lab is saved. To include: name (e.g. "MIT Early Childhood Cognition Lab"), PI name(s), contact email, contact phone, lab website, parent-facing description of lab, IRB contact info, PDF upload for access agreement * Lab manager can edit & save info, any lab member can view info * Not otherwise used for now, but will eventually allow displaying lab info to participants along with studies (current/past) * Creating a lab: * Any researcher can click (on same page as list of all labs) to request a new lab. * This takes them to a form where they need to fill out some information about the lab, see above. * A Lookit admin is notified when a request for a new lab is submitted. * A Lookit admin can view pending labs, edit information, and create the Lab. (It may be that the lab gets actually created, but some "approved" flag is in initially false and unapproved labs are basically treated as nonexistent for all other purposes.) 
* The requester is automatically added to and made an admin of the approved lab. * Lab permissions: | | Lab researcher | Lab read | Lab admin | |---|---|---|---| | Create studies associated this Lab, and can be added manually to any of this Lab's studies | x | x | x | | Preview role for all Lab studies | | x | x | | Manager role for all Lab studies | | | x | | Manage permissions for Lab (add new researchers, etc.) | | | x | | Edit lab metadata (description, website, etc.) | | | x | * Changes to study model & views: * A study must be associated with exactly 1 lab. When the study is created, this must be a lab the creator is a member of. Provide dropdown menu on study create? * All study researchers must be members of the study's lab. Limit searches when adding researchers in "manage researchers" section to the study lab's members. * Study lab can be changed, but only by an admin to another lab they are a member of. Probably put in new small form near 'manage researchers'. * Note that when cloning a study, we will need to change the lab of the study to a lab the cloning user is a member of; possibly prompt for "clone into..." Note that new Lookit-wide permissions will be expected to eventually replace the "MIT Org Read" and "MIT Org Admin," (see #459) but for now (given the staff of two...) we will just rely on superuser perms.
non_main
fine tune permissions for lookit access by researchers pain point onboarding new researchers to use the staging and or production lookit sites requires a substantial number of manual steps and some awkward workarounds for instance researchers have to try to log in let kim know they ve done so and then kim grants access to the site she is not automatically notified as an admin that someone has requested access kim also grants them access to existing example studies manually there is no way for closely associated researchers i e those in the same lab to automatically get access to all of their lab s studies and no way to associate studies with a particular research group beyond the pi contact info provided there is no distinction between access to study details which we might want to share broadly e g to allow easy replication and access to study data planned functionality changes access automatically give researcher access to lookit upon authentication via osf researchers will require lab permissions to create new studies or access existing studies add a study level setting share study protocol on lookit which makes study details not data accessible by any lookit researcher no lab affiliation required i e grants preview access below create finer grained permissions for individual study access and apply to views api better names for the groups are welcome preview design analysis submission processor researcher manager admin read study details protocol etc x x x x x x x write study details x x x change study status incl write changes that would reject study x x x x manage study researchers grant change permissions x x view download study data x x x code consent and create edit feedback x x x change study lab x create new lab model to encompass the following functionality joining a lab any researcher can see a list of all labs and click to request to join one a request to join triggers an email to lab managers see below a lab manager admin can see a list of researchers who have requested to join their lab and assign permissions as on manage researchers for mit admins currently there must always be at least one lab admin we can create a practice lab we administer that people can join to try it out until their lab is approved editing managing a lab some basic information about each lab is saved to include name e g mit early childhood cognition lab pi name s contact email contact phone lab website parent facing description of lab irb contact info pdf upload for access agreement lab manager can edit save info any lab member can view info not otherwise used for now but will eventually allow displaying lab info to participants along with studies current past creating a lab any researcher can click on same page as list of all labs to request a new lab this takes them to a form where they need to fill out some information about the lab see above a lookit admin is notified when a request for a new lab is submitted a lookit admin can view pending labs edit information and create the lab it may be that the lab gets actually created but some approved flag is in initially false and unapproved labs are basically treated as nonexistent for all other purposes the requester is automatically added to and made an admin of the approved lab lab permissions lab researcher lab read lab admin create studies associated this lab and can be added manually to any of this lab s studies x x x preview role for all lab studies x x manager role for all lab studies x manage permissions for lab add new researchers etc x edit 
lab metadata description website etc x changes to study model views a study must be associated with exactly lab when the study is created this must be a lab the creator is a member of provide dropdown menu on study create all study researchers must be members of the study s lab limit searches when adding researchers in manage researchers section to the study lab s members study lab can be changed but only by an admin to another lab they are a member of probably put in new small form near manage researchers note that when cloning a study we will need to change the lab of the study to a lab the cloning user is a member of possibly prompt for clone into note that new lookit wide permissions will be expected to eventually replace the mit org read and mit org admin see but for now given the staff of two we will just rely on superuser perms
0
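The study-level permission matrix in the record above, restated as a small executable Python sketch; the group and permission identifiers here are illustrative stand-ins, not the project's actual names:

```python
# Each study group maps to the set of study permissions it grants,
# mirroring the first table in the record above.
STUDY_PERMISSIONS = {
    "preview":              {"read_details"},
    "design":               {"read_details", "write_details"},
    "analysis":             {"read_details", "view_data"},
    "submission_processor": {"read_details", "change_status", "code_consent"},
    "researcher":           {"read_details", "change_status", "view_data",
                             "code_consent"},
    "manager":              {"read_details", "write_details", "change_status",
                             "manage_researchers"},
    "admin":                {"read_details", "write_details", "change_status",
                             "manage_researchers", "view_data", "code_consent",
                             "change_lab"},
}

def has_study_permission(group: str, permission: str) -> bool:
    """Return True if the given study group grants the permission."""
    return permission in STUDY_PERMISSIONS.get(group, set())

assert has_study_permission("analysis", "view_data")
assert not has_study_permission("design", "view_data")
```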
83,298
3,633,372,411
IssuesEvent
2016-02-11 14:22:12
oriel-hub/OKHub-Content-WordPress
https://api.github.com/repos/oriel-hub/OKHub-Content-WordPress
opened
Widget - document links to data.okhub.org item url rather than source or actual url
bug Priority 1
Content items from OKhub (output in the WP widget) link to the item page on data.okhub.org. The links should instead go to one of: a) website_url, b) urls, or c) a modal pop-up like on the HTML widget (http://data.okhub.org/apps/widget/)
1.0
Widget - document links to data.okhub.org item url rather than source or actual url - Content items from OKhub (output in the WP widget) link to the item on data.okhub.org. The links should go either to: a) website_url b) urls c) a modal pop up like on the HTML widget (http://data.okhub.org/apps/widget/)
non_main
widget document links to data okhub org item url rather than source or actual url content items from okhub output in the wp widget link to the item on data okhub org the links should go either to a website url b urls c a modal pop up like on the html widget
0
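A sketch of the link selection the record above asks for, written in Python for illustration (the widget itself is WordPress/PHP); the field names website_url, urls and metadata_url are assumptions based on the issue text:

```python
from typing import Optional

def pick_item_link(item: dict) -> Optional[str]:
    """Pick the widget's link target with graceful fallbacks."""
    if item.get("website_url"):        # (a) the source website, preferred
        return item["website_url"]
    urls = item.get("urls") or []
    if urls:                           # (b) first document URL
        return urls[0]
    return item.get("metadata_url")    # fallback: the data.okhub.org item page
```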
355,076
10,576,194,590
IssuesEvent
2019-10-07 17:19:32
predictive-technology-laboratory/sensus
https://api.github.com/repos/predictive-technology-laboratory/sensus
closed
An issue with snoozing
High priority bug
Looks like the probe properties (e.g. alert when backgrounded) will be overwritten to defaults after the snooze period ends.
1.0
An issue with snoozing - Looks like the probes properties (e.g. alert when backgrounded) will be overwritten to default after the snooze period ends.
non_main
an issue with snoozing looks like the probes properties e g alert when backgrounded will be overwritten to default after the snooze period ends
0
256,995
8,131,760,792
IssuesEvent
2018-08-18 01:53:24
wesnoth/wesnoth
https://api.github.com/repos/wesnoth/wesnoth
opened
[message] could cause OOS if option visibility differs per-client
Bug Low-priority MP WML
Thanks to @sevu for thinking of this. The `[message]` code first filters the options list for those options that should be displayed (depending on the `[show_if]` conditions). It then synchronizes your choice based on the index of the chosen option in the filtered list. If one of the `[show_if]` conditions depended on something that is not the same on all clients, such as a persistent global variable, the filtered list would be different on each client which could lead to an out-of-sync error. To fix this I think we'd need some sort of two-way mapping between the filtered and unfiltered option indices - after the choice is returned, map it to the unfiltered index, synchronize that, then map it back after the choice is synchronized.
1.0
[message] could cause OOS if option visibility differs per-client - Thanks to @sevu for thinking of this. The `[message]` code first filters the options list for those options that should be displayed (depending on the `[show_if]` conditions). It then synchronizes your choice based on the index of the chosen option in the filtered list. If one of the `[show_if]` conditions depended on something that is not the same on all clients, such as a persistent global variable, the filtered list would be different on each client which could lead to an out-of-sync error. To fix this I think we'd need some sort of two-way mapping between the filtered and unfiltered option indices - after the choice is returned, map it to the unfiltered index, synchronize that, then map it back after the choice is synchronized.
non_main
could cause oos if option visibility differs per client thanks to sevu for thinking of this the code first filters the options list for those options that should be displayed depending on the conditions it then synchronizes your choice based on the index of the chosen option in the filtered list if one of the conditions depended on something that is not the same on all clients such as a persistent global variable the filtered list would be different on each client which could lead to an out of sync error to fix this i think we d need some sort of two way mapping between the filtered and unfiltered option indices after the choice is returned map it to the unfiltered index synchronize that then map it back after the choice is synchronized
0
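The two-way filtered/unfiltered index mapping proposed in the record above, sketched in Python; `ask_local_player` and `broadcast` are hypothetical stand-ins for the engine's dialog and synchronization calls:

```python
# Two-way index mapping: each client filters the option list locally, but the
# choice is synchronized as an index into the UNFILTERED list, so clients whose
# [show_if] results differ still agree on which option was picked.
def filter_options(options, shown):
    """Return (visible options, map from filtered index -> unfiltered index)."""
    visible, index_map = [], []
    for i, opt in enumerate(options):
        if shown(opt):
            visible.append(opt)
            index_map.append(i)
    return visible, index_map

def choose_synchronized(options, shown, ask_local_player, broadcast):
    visible, index_map = filter_options(options, shown)
    filtered_choice = ask_local_player(visible)     # index into the filtered list
    unfiltered_choice = index_map[filtered_choice]  # map out before syncing
    synced = broadcast(unfiltered_choice)           # same value on every client
    # Map back to a local filtered index where possible; an option hidden on
    # this client has no filtered index, so return the stable unfiltered one.
    return index_map.index(synced) if synced in index_map else synced
```

Synchronizing the unfiltered index is the key point: it is the only value guaranteed to mean the same thing on every client.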
532
8,375,384,891
IssuesEvent
2018-10-05 16:14:04
Cha-OS/colabo
https://api.github.com/repos/Cha-OS/colabo
opened
ColaboFlow robustness
reliability
+ initial feedback on availability of workers/consumers/services
+ pinging/heartbeat of availability
True
ColaboFlow robustness - + initial feedback on availability of workers/consumers/services + pinging/heartbeat of availability
non_main
colaboflow robustness initial feedback on availability of workers consumers services pinging heartbeat of availability
0
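A minimal heartbeat sketch for the availability/pinging idea in the record above, assuming a simple shared registry; all names are illustrative, and a real system would use a message bus or database rather than an in-process dict:

```python
import threading
import time

# Shared registry of "last seen alive" timestamps, guarded by a lock.
registry = {}
lock = threading.Lock()

def heartbeat(worker_id, interval=5.0):
    """Run in a daemon thread inside each worker: periodically report liveness."""
    while True:
        with lock:
            registry[worker_id] = time.monotonic()
        time.sleep(interval)

def available(worker_id, timeout=15.0):
    """Consumers call this for initial feedback on whether a worker is up."""
    with lock:
        last = registry.get(worker_id)
    return last is not None and time.monotonic() - last < timeout
```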
60
2,508,334,480
IssuesEvent
2015-01-13 01:09:40
Whonix/Whonix
https://api.github.com/repos/Whonix/Whonix
closed
mail OTF for security review
security wait
When Whonix 9 has been released, notify OTF and ask for the security review they offered.
True
mail OTF for security review - When Whonix 9 has been released, notify OTF and ask for the security review they offered.
non_main
mail otf for security review when whonix has been released notify otf and ask for the security review they offered
0
2,494
8,655,457,917
IssuesEvent
2018-11-27 16:00:18
codestation/qcma
https://api.github.com/repos/codestation/qcma
closed
adding to Debian archive
unmaintained
Hi, You have taken so many efforts to build a nice .deb; why not ask to add your package to the Debian archive? You may think of the ITP (Intent To Package) process as a bit convoluted; I can do most of the steps for you if you like, and upload to mentors.debian.org, where a Debian Developer will review it once again and push it into the archive.
True
adding to Debian archive - Hi, You have taken so many efforts to build a nice .deb ; why not ask to add your package to the Debian archive ? You may think of the ITP (Intent To Package) process as a bit circimonvoluted; I can do most steps for you if you like, and upload to mentors.debian.org where a Debian Developper will review it once again and push it in the archive.
main
adding to debian archive hi you have taken so many efforts to build a nice deb why not ask to add your package to the debian archive you may think of the itp intent to package process as a bit circimonvoluted i can do most steps for you if you like and upload to mentors debian org where a debian developper will review it once again and push it in the archive
1
123,584
16,508,715,984
IssuesEvent
2021-05-25 23:22:30
microsoft/TypeScript
https://api.github.com/repos/microsoft/TypeScript
closed
Design Meeting Notes, 1/7/2021
Design Notes
# Abstract Constructor Types

https://github.com/microsoft/TypeScript/pull/36392

* \[\[Reiteration of rules of abstract constructor, see the PR]]
* Plan to add `abstract` constructor type support to the `InstanceType` helper type in a follow-up release.
* Who wants this?
* API Extractor is one example
* What are they doing without it?
* Hacks like `Function & { prototype: T }`.
* How does the "...any" signature work? Does it preserve `abstract`?
* When it comes to `abstract class`, there are some intractable problems.
* Classes become nominal as soon as you add `private`/`protected`.
* That conflicts with our strategy of displaying things structurally.
* Unique symbol has similar issues.
* Note: the quick info display is very misleading.
* Whole experience around mixin factories feels a little half-baked, but `abstract`.
* Arguable point of view.
* `typeof class { ... }` would ideally work better.
* What about `abstract` on instance sides? Seems odd that you need to declare a class to get an `abstract` instance side.
* We've never been 100% with classes to instances.

# Preserved Origin Types for Unions and Intersections

https://github.com/microsoft/TypeScript/pull/42149

* Very common issue where people have an alias like

```ts
type T = "a" | "b" | "c";

// displays as `"a" | "b" | "c" | "d"`
let x: T | "d";
```

* Oops!
* This is because we flatten union types out to "normalize" them.
* This means that there's no `"a" | "b" | "c"` union whose type can be tied back to `T`.
* This PR ensures that we track the "origin" node before the types get normalized/flattened so that we can display these correctly.
* Also other work: when we end up normalizing intersections of union types, we'll preserve the original structure of the written type for display.
* All this means that we can do a little extra work to display things better, more consistently, avoid unrelated aliasing.
* This manifests itself in the language service; hover over an alias before a direct usage of a union, and vice versa, and you'll get consistent results.

```ts
type T = "a" | "b" | "c";

// won't display as 'T', regardless of whether
// you hover over T first.
let x: "a" | "b" | "c";
```

* So we'll do better at displaying the type that you wrote!
* We also do this for `keyof` types now as well; we'll preserve `keyof`s.
* Super useful for code which maps over every possible property, where you really DON'T want to have a union of 1000 literals.
* Is there a cost to this?
* Yes, we do a bit of extra work here.
* This is a little bit unfortunate because you can't really expand these out.
* We do something with literal types and fresh types - is there a similar approach we could use here?
* As of 3 days ago, we started caching relations between unions, and we have other heuristics - so that's part of why the perf isn't too bad.
* Does it make sense to have a first-class alias type internally?
* Would generalize the aliasing logic farther than unions and `keyof`.
* Can imagine some editor UI to give the alternative representation to avoid opaque type display.
* Conclusion: looks good - let's get it in.
1.0
Design Meeting Notes, 1/7/2021 - # Abstract Constructor Types https://github.com/microsoft/TypeScript/pull/36392 * \[\[Reiteration of rules of abstract constructor, see the PR]] * Plan to support `abstract` constructor types to the `InstanceType` helper type in a follow-up release. * Who wants this? * API Extractor is one example * What are they doing without it? * Hacks like `Function & { prototype: T }`. * How does the "...any" signature work? Does it preserve `abstract`? * When it comes to `abstract class`, there are some intractable problems. * Classes become nominal as soon as you add `private`/`protected`. * That conflicts with our strategy of displaying things structurally. * Unique symbol has similar issues. * Note: the quick info display is very misleading. * Whole experience around mixin factories feels a little half-baked, but `abstract`. * Arguable point of view. * `typeof class { ... }` would ideally work better. * What about `abstract` on instance sides? Seems odd that you need to declare a class to get an `abstract` instance side. * We've never been 100% with classes to instances. # Preserved Origin Types for Unions and Intersections https://github.com/microsoft/TypeScript/pull/42149 * Very common issue where people have an alias like ```ts type T = "a" | "b" | "c"; // displays as `"a" | "b" | "c" | "d"` let x: T | "d"; ``` * Oops! * This is because we flatten union types out to "normalize" them. * This means that that there's no `"a" | "b" | "c"` union whose type can be tied back to `T`. * This PR ensures that we track the "origin" node before the types get normalized/flattened so that we can display these correctly. * Also other work: when we end up normalizing intersections of union types, we'll preserve the original structure of the written type for display. * All this means that we can do a little extra work to display things better, more consistently, avoid unrelated aliasing. * This manifests itself in the language service; hover over an alias before a direct usage of a union, and vice versa, and you'll get consistent results. ```ts type T = "a" | "b" | "c"; // won't display as 'T', regardless of whether // you hover over T first. let x: "a" | "b" | "c"; ``` * So we'll do better at displaying the type that you wrote! * We also do this for `keyof` types now as well; we'll preserve `keyof`s. * Super useful for code which maps over every possible property, where you really DON'T want to have a union of 1000 literals. * Is there a cost to this? * Yes, we do we a bit of work being done. * This is a little bit unfortunate because you can't really expand these out. * We do something with literal types and fresh types - is there a similar approach we could use here? * As of 3 days ago, we started caching relations between unions, and we have other heuristics - so that's part of why the perf isn't too bad. * Does it make sense to have a first-class alias type internally? * Would generalize the aliasing logic farther than unions and `keyof`. * Can imagine some editor UI to give the alternative representation to avoid opaque type display. * Conclusion: looks good - let's get it in.
non_main
design meeting notes abstract constructor types plan to support abstract constructor types to the instancetype helper type in a follow up release who wants this api extractor is one example what are they doing without it hacks like function prototype t how does the any signature work does it preserve abstract when it comes to abstract class there are some intractable problems classes become nominal as soon as you add private protected that conflicts with our strategy of displaying things structurally unique symbol has similar issues note the quick info display is very misleading whole experience around mixin factories feels a little half baked but abstract arguable point of view typeof class would ideally work better what about abstract on instance sides seems odd that you need to declare a class to get an abstract instance side we ve never been with classes to instances preserved origin types for unions and intersections very common issue where people have an alias like ts type t a b c displays as a b c d let x t d oops this is because we flatten union types out to normalize them this means that that there s no a b c union whose type can be tied back to t this pr ensures that we track the origin node before the types get normalized flattened so that we can display these correctly also other work when we end up normalizing intersections of union types we ll preserve the original structure of the written type for display all this means that we can do a little extra work to display things better more consistently avoid unrelated aliasing this manifests itself in the language service hover over an alias before a direct usage of a union and vice versa and you ll get consistent results ts type t a b c won t display as t regardless of whether you hover over t first let x a b c so we ll do better at displaying the type that you wrote we also do this for keyof types now as well we ll preserve keyof s super useful for code which maps over every possible property where you really don t want to have a union of literals is there a cost to this yes we do we a bit of work being done this is a little bit unfortunate because you can t really expand these out we do something with literal types and fresh types is there a similar approach we could use here as of days ago we started caching relations between unions and we have other heuristics so that s part of why the perf isn t too bad does it make sense to have a first class alias type internally would generalize the aliasing logic farther than unions and keyof can imagine some editor ui to give the alternative representation to avoid opaque type display conclusion looks good let s get it in
0
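The "preserved origin" idea from the notes above, restated generically in Python: normalization flattens a union into a member set, but the form as written is carried alongside so display can prefer it. Class and field names are illustrative, not the compiler's internals:

```python
from __future__ import annotations
from dataclasses import dataclass

@dataclass(frozen=True)
class UnionType:
    members: frozenset           # normalized, flattened member set
    origin: str | None = None    # the type as the user wrote it, e.g. 'T | "d"'

    def display(self) -> str:
        if self.origin is not None:
            return self.origin                   # prefer what was written
        return " | ".join(sorted(self.members))  # fall back to the flat form

T = UnionType(frozenset({'"a"', '"b"', '"c"'}), origin="T")
x = UnionType(T.members | {'"d"'}, origin='T | "d"')
print(x.display())  # T | "d"   rather than   "a" | "b" | "c" | "d"
```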
16
2,515,195,346
IssuesEvent
2015-01-15 17:01:06
simplesamlphp/simplesamlphp
https://api.github.com/repos/simplesamlphp/simplesamlphp
opened
Extract the oauth module out of the repository
enhancement low maintainability
It should get its own repository and allow installation through composer. For other modules depending on this one, add a composer dependency on the module.
True
Extract the oauth module out of the repository - It should get its own repository and allow installation through composer. For other modules depending on this one, add a composer dependency on the module.
main
extract the oauth module out of the repository it should get its own repository and allow installation through composer for other modules depending on this one add a composer dependency on the module
1
4,639
24,024,636,243
IssuesEvent
2022-09-15 10:26:13
centerofci/mathesar
https://api.github.com/repos/centerofci/mathesar
closed
Filters not applied when calculating count of items within group
type: bug work: backend status: ready restricted: maintainers
## Reproduce

1. Go to the Library Management schema.
1. Load the Table Page for the Publications table.
1. Group by "Publication Year".
1. Observe the first group, for year 1900, to contain 10 records and to display a "Count" of 10. Good.
1. Add a filter condition requiring Title to contain the string "To".
1. Observe the first group, for year 1900, to contain 2 records.
1. Expect "Count" to display 2.
1. Observe "Count" displays 10.

![image](https://user-images.githubusercontent.com/42411/188676046-46fe3997-5c5e-433c-9f5e-f0f73bf36bb8.png)
True
Filters not applied when calculating count of items within group - ## Reproduce 1. Go to the Library Management schema. 1. Load the Table Page for the Publications table. 1. Group by "Publication Year". 1. Observe the first group, for year 1900, to contain 10 records and to display a "Count" of 10. Good. 1. Add a filter condition requiring Title to contain the string "To". 1. Observe the first group, for year 1900, to contain 2 records. 1. Expect "Count" to display 2. 1. Observe "Count" displays 10. ![image](https://user-images.githubusercontent.com/42411/188676046-46fe3997-5c5e-433c-9f5e-f0f73bf36bb8.png)
main
filters not applied when calculating count of items within group reproduce go to the library management schema load the table page for the publications table group by publication year observe the first group for year to contain records and to display a count of good add a filter condition requiring title to contain the string to observe the first group for year to contain records expect count to display observe count displays
1
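The expected behavior from the reproduction above, sketched in plain Python: the per-group count must be computed after the filter is applied, not from the unfiltered table. The rows here are made up for illustration, not the actual Library Management data:

```python
from collections import Counter

publications = [
    {"title": "To the Lighthouse", "year": 1900},
    {"title": "Tom Sawyer Abroad", "year": 1900},
    {"title": "Dracula", "year": 1900},
]

filtered = [r for r in publications if "To" in r["title"]]  # filter first
counts = Counter(r["year"] for r in filtered)               # then group/count
print(counts[1900])  # 2 -- the reported bug counts before filtering, giving 3
```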
20,214
10,679,056,095
IssuesEvent
2019-10-21 18:30:27
flutter/flutter
https://api.github.com/repos/flutter/flutter
closed
Automatic shader-compile jank removal using SkSL precompile
customer: dream (g3) engine severe: customer blocker severe: performance
Original title: "Shader compilations during image scrolling on ToT" See https://b.corp.google.com/issues/140174804 for discussion. Conversation should occur there.
True
Automatic shader-compile jank removal using SkSL precompile - Original title: "Shader compilations during image scrolling on ToT" See https://b.corp.google.com/issues/140174804 for discussion. Conversation should occur there.
non_main
automatic shader compile jank removal using sksl precompile original title shader compilations during image scrolling on tot see for discussion conversation should occur there
0
1,931
6,609,868,493
IssuesEvent
2017-09-19 15:48:21
Kristinita/Erics-Green-Room
https://api.github.com/repos/Kristinita/Erics-Green-Room
closed
[Feature request] Show unanswered questions after the game
need-maintainer
### 1. Request

It would be nice to show, after the game, the questions that went unanswered, together with their answers.

### 2. Rationale

After a game I always look over the questions I could not answer and try to memorize the answers, so that I can play more successfully next time. Right now I have to search for them through the whole chat, which, if the pack is large, can take quite a while.

### 3. Desired behavior

The last question of the pack finishes → the unanswered questions and their answers are shown. Example:

```markdown
[11:02:23] <GREEN> The words are over. Thanks, everyone.
[10:53:12] <GREEN> Question #15 of 67:
--------------------------------------------------------
Guernsey
--------------------------------------------------------
[10:53:26] <GREEN> Correct answer: "Saint Peter Port"
[10:53:49] <GREEN> Question #18 of 67:
--------------------------------------------------------
Norfolk Island
--------------------------------------------------------
[10:54:03] <GREEN> Correct answer: "Kingston"
[10:54:39] <GREEN> Question #22 of 67:
--------------------------------------------------------
Grenada
--------------------------------------------------------
[10:54:53] <GREEN> Correct answer: "St. George's"
```

Thanks.
True
[Feature request] Show unanswered questions after the game - ### 1. Request It would be nice to show, after the game, the questions that went unanswered, together with their answers. ### 2. Rationale After a game I always look over the questions I could not answer and try to memorize the answers, so that I can play more successfully next time. Right now I have to search for them through the whole chat, which, if the pack is large, can take quite a while. ### 3. Desired behavior The last question of the pack finishes → the unanswered questions and their answers are shown. Example: ```markdown [11:02:23] <GREEN> The words are over. Thanks, everyone. [10:53:12] <GREEN> Question #15 of 67: -------------------------------------------------------- Guernsey -------------------------------------------------------- [10:53:26] <GREEN> Correct answer: "Saint Peter Port" [10:53:49] <GREEN> Question #18 of 67: -------------------------------------------------------- Norfolk Island -------------------------------------------------------- [10:54:03] <GREEN> Correct answer: "Kingston" [10:54:39] <GREEN> Question #22 of 67: -------------------------------------------------------- Grenada -------------------------------------------------------- [10:54:53] <GREEN> Correct answer: "St. George's" ``` Thanks.
main
show unanswered questions after the game request it would be nice to show after the game the questions that went unanswered together with their answers rationale after a game i always look over the questions i could not answer and try to memorize the answers so that i can play more successfully next time right now i have to search for them through the whole chat which if the pack is large can take quite a while desired behavior the last question of the pack finishes → the unanswered questions and their answers are shown example markdown the words are over thanks everyone question of guernsey correct answer saint peter port question of norfolk island correct answer kingston question of grenada correct answer st george s thanks
1
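A sketch of the requested behavior in Python: remember the questions nobody answered during the pack and replay them with their answers at the end. Function and variable names are illustrative, not the bot's actual API:

```python
def run_pack(questions, play_round):
    """questions: list of (question, answer); play_round returns True if answered."""
    unanswered = []
    for number, (question, answer) in enumerate(questions, start=1):
        if not play_round(question, answer):
            unanswered.append((number, question, answer))
    print("The words are over. Thanks, everyone.")
    # Replay everything nobody got, so players can learn the answers in one place.
    for number, question, answer in unanswered:
        print(f"Question #{number} of {len(questions)}:")
        print(question)
        print(f'Correct answer: "{answer}"')
```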
1,220
5,203,333,663
IssuesEvent
2017-01-24 12:29:32
ansible/ansible-modules-extras
https://api.github.com/repos/ansible/ansible-modules-extras
closed
ecs_taskdefinition bug when looping with_items
affects_2.1 aws bug_report cloud waiting_on_maintainer
<!--- Verify first that your issue/request is not already reported in GitHub -->

##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
 - Bug Report

##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
ecs_taskdefinition.py

##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
2.1.0.0
```

##### CONFIGURATION
<!--- Mention any settings you have changed/added/removed in ansible.cfg (or using the ANSIBLE_* environment variables). -->

##### OS / ENVIRONMENT
<!--- Mention the OS you are running Ansible from, and the OS you are managing, or say “N/A” for anything that is not platform-specific. -->
Running on: OS X El Capitan
Managing: aws ecs

##### SUMMARY
<!--- Explain the problem briefly -->
When using module ecs_taskdefinition in an ansible loop (with_items), all item<dict> values are converted to <string>. This causes the ECS task definition validator to break the module on submit.

```
The error was: Invalid type for parameter containerDefinitions[0].cpu, value: 128, type: <type 'str'>, valid types: <type 'int'>, <type 'long'>
```

##### STEPS TO REPRODUCE
<!--- For bugs, show exactly how to reproduce the problem. For new features, show how the feature would be used. -->
<!--- Paste example playbooks or commands between quotes below -->
```
---
- name: ECS tasks
  hosts: localhost
  gather_facts: no
  vars:
    ecs_cluster_name: "ecsAnsibleTest"
    ecs_syslog_server: "tcp://syslogserver.ecample.com:514"
    tasks:
      - name: "some-task"
        revision: 1
        state: present
        cont_name: "contname"
        cont_cpu: 128
        cont_memory: 256
        cont_image: "someorg/someimage:latest"
        cont_command:
          - "/some/command"
        cont_bind_port: 9999
        cont_env:
          - name: "APPLICATION_SECRET"
            value: "!!!somesecret!!!"
  tasks:
    - name: Manage ECS tasks
      local_action:
        module: ecs_taskdefinition
        family: "{{ item.name }}"
        revision: "{{ item.revision }}"
        state: "{{ item.state }}"
        containers:
          - name: "{{ item.cont_name }}"
            command: "{{ item.cont_command }}"
            cpu: "{{ item.cont_cpu | int }}"
            memory: "{{ item.cont_memory | default(128) }}"
            image: "{{ item.cont_image }}"
            essential: true
            portMappings:
              - containerPort: "{{ item.cont_port | default(8888) }}"
                hostPort: "{{ item.cont_bind_port }}"
            logConfiguration:
              logDriver: "syslog"
              options:
                syslog-address: "{{ ecs_syslog_server }}"
            environment: "{{ item.cont_env }}"
      register: manage_tasks_output
      with_items: "{{ tasks }}"
    - debug:
        var: "{{ manage_tasks_output }}"
        verbosity: 4
      with_items:
        - "{{ manage_tasks_output }}"
```
<!--- You can also paste gist.github.com links for larger files -->

##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
I expect it to work :)

##### ACTUAL RESULTS
<!--- What actually happened? If possible run with high verbosity (-vvvv) -->
```
Invalid type for parameter containerDefinitions[0].memory, value: 256, type: <type 'str'>, valid types: <type 'int'>, <type 'long'>
Invalid type for parameter containerDefinitions[0].portMappings[0].containerPort, value: 8888, type: <type 'str'>, valid types: <type 'int'>, <type 'long'>
Invalid type for parameter containerDefinitions[0].portMappings[0].hostPort, value: 9999, type: <type 'str'>, valid types: <type 'int'>, <type 'long'>
Invalid type for parameter containerDefinitions[0].cpu, value: 128, type: <type 'str'>, valid types: <type 'int'>, <type 'long'>
```
<!--- Paste verbatim command output between quotes below -->

In the meantime I solved it with this patch. However, I don't PR it because it solves just my specific issue and the fix is kinda ugly. Probably there is a much better way to fix this.

```
--- ecs_taskdefinition.py	2016-07-31 20:50:33.000000000 +0200
+++ /usr/local/Cellar/ansible/2.1.0.0/libexec/lib/python2.7/site-packages/ansible/modules/extras/cloud/amazon/ecs_taskdefinition.py	2016-07-31 20:51:22.000000000 +0200
@@ -192,6 +192,19 @@
     volumes = module.params['volumes']
     if volumes is None:
         volumes = []
+
+    def setIntTypesInContainers(c):
+        ck = c.keys()
+        if 'cpu' in ck: c['cpu'] = int(c['cpu'])
+        if 'memory' in ck: c['memory'] = int(c['memory'])
+        if 'portMappings' in ck:
+            for m in c['portMappings']:
+                mk = m.keys()
+                if 'containerPort' in mk: m['containerPort'] = int(m['containerPort'])
+                if 'hostPort' in mk: m['hostPort'] = int(m['hostPort'])
+
+    map(setIntTypesInContainers, module.params['containers'])
+
     results['taskdefinition'] = task_mgr.register_task(module.params['family'], module.params['containers'], volumes)
     results['changed'] = True
```
True
ecs_taskdefinition bug when looping with_items - <!--- Verify first that your issue/request is not already reported in GitHub --> ##### ISSUE TYPE <!--- Pick one below and delete the rest: --> - Bug Report ##### COMPONENT NAME <!--- Name of the plugin/module/task --> ecs_taskdefinition.py ##### ANSIBLE VERSION <!--- Paste verbatim output from “ansible --version” between quotes below --> ``` 2.1.0.0 ``` ##### CONFIGURATION <!--- Mention any settings you have changed/added/removed in ansible.cfg (or using the ANSIBLE_* environment variables). --> ##### OS / ENVIRONMENT <!--- Mention the OS you are running Ansible from, and the OS you are managing, or say “N/A” for anything that is not platform-specific. --> Running on: OS X El Capitan Managing: aws esc ##### SUMMARY <!--- Explain the problem briefly --> When using module ecs_taskdefinition in an ansible loop (with_items), all item<dict> values are converted to <string>. This causes ecs task definition validator to break the module on submit. ``` The error was: Invalid type for parameter containerDefinitions[0].cpu, value : 128, type: <type 'str'>, valid types: <type 'int'>, <type 'long'> ``` ##### STEPS TO REPRODUCE <!--- For bugs, show exactly how to reproduce the problem. For new features, show how the feature would be used. --> <!--- Paste example playbooks or commands between quotes below --> ``` --- - name: ECS tasks hosts: localhost gather_facts: no vars: ecs_cluster_name: "ecsAnsibleTest" ecs_syslog_server: "tcp://syslogserver.ecample.com:514" tasks: - name: "some-task" revision: 1 state: present cont_name: "contname" cont_cpu: 128 cont_memory: 256 cont_image: "someorg/someimage:latest" cont_command: - "/some/command" cont_bind_port: 9999 cont_env: - name: "APPLICATION_SECRET" value: "!!!somesecret!!!" tasks: - name: Manage ECS tasks local_action: module: ecs_taskdefinition family: "{{ item.name }}" revision: "{{ item.revision }}" state: "{{ item.state }}" containers: - name: "{{ item.cont_name }}" command: "{{ item.cont_command }}" cpu: "{{ item.cont_cpu | int }}" memory: "{{ item.cont_memory | default(128) }}" image: "{{ item.cont_image }}" essential: true portMappings: - containerPort: "{{ item.cont_port | default(8888) }}" hostPort: "{{ item.cont_bind_port }}" logConfiguration: logDriver: "syslog" options: syslog-address: "{{ ecs_syslog_server }}" environment: "{{ item.cont_env }}" register: manage_tasks_output with_items: "{{ tasks }}" - debug: var: "{{ manage_tasks_output }}" verbosity: 4 with_items: - "{{ manage_tasks_output }}" ``` <!--- You can also paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- What did you expect to happen when running the steps above? --> I expect it to work :) ##### ACTUAL RESULTS <!--- What actually happened? If possible run with high verbosity (-vvvv) --> ``` Invalid type for parameter containerDefinitions[0].memory, value: 256, type: <type 'str'>, valid types: <type 'int'>, <type 'long'> Invalid type for parameter containerDefinitions[0].portMappings[0].containerPort, value: 8888, type: <type 'str'>, valid types: <type 'int'>, <type 'long'> Invalid type for parameter containerDefinitions[0].portMappings[0].hostPort, value: 9999, type: <type 'str'>, valid types: <type 'int'>, <type 'long'> Invalid type for parameter containerDefinitions[0].cpu, value: 128, type: <type 'str'>, valid types: <type 'int'>, <type 'long'> ``` <!--- Paste verbatim command output between quotes below --> In the meantime i solved it with this patch. 
However, I don't PR it because it solves just my specific issue and the fix is kinda ugly. Probably there is a much better way to fix this. ``` --- ecs_taskdefinition.py 2016-07-31 20:50:33.000000000 +0200 +++ /usr/local/Cellar/ansible/2.1.0.0/libexec/lib/python2.7/site-packages/ansible/modules/extras/cloud/amazon/ecs_taskdefinition.py 2016-07-31 20:51:22.000000000 +0200 @@ -192,6 +192,19 @@ volumes = module.params['volumes'] if volumes is None: volumes = [] + + def setIntTypesInContainers(c): + ck = c.keys() + if 'cpu' in ck: c['cpu'] = int(c['cpu']) + if 'memory' in ck: c['memory'] = int(c['memory']) + if 'portMappings' in ck: + for m in c['portMappings']: + mk = m.keys() + if 'containerPort' in mk: m['containerPort'] = int(m['containerPort']) + if 'hostPort' in mk: m['hostPort'] = int(m['hostPort']) + + map(setIntTypesInContainers, module.params['containers']) + results['taskdefinition'] = task_mgr.register_task(module.params['family'], module.params['containers'], volumes) results['changed'] = True ```
main
ecs taskdefinition bug when looping with items issue type bug report component name ecs taskdefinition py ansible version configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific running on os x el capitan managing aws esc summary when using module ecs taskdefinition in an ansible loop with items all item values are converted to this causes ecs task definition validator to break the module on submit the error was invalid type for parameter containerdefinitions cpu value type valid types steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used name ecs tasks hosts localhost gather facts no vars ecs cluster name ecsansibletest ecs syslog server tcp syslogserver ecample com tasks name some task revision state present cont name contname cont cpu cont memory cont image someorg someimage latest cont command some command cont bind port cont env name application secret value somesecret tasks name manage ecs tasks local action module ecs taskdefinition family item name revision item revision state item state containers name item cont name command item cont command cpu item cont cpu int memory item cont memory default image item cont image essential true portmappings containerport item cont port default hostport item cont bind port logconfiguration logdriver syslog options syslog address ecs syslog server environment item cont env register manage tasks output with items tasks debug var manage tasks output verbosity with items manage tasks output expected results i expect it to work actual results invalid type for parameter containerdefinitions memory value type valid types invalid type for parameter containerdefinitions portmappings containerport value type valid types invalid type for parameter containerdefinitions portmappings hostport value type valid types invalid type for parameter containerdefinitions cpu value type valid types in the meantime i solved it with this patch however i don t pr it because it solves just my specific issue and the fix is kinda ugly probably there is a much better way to fix this ecs taskdefinition py usr local cellar ansible libexec lib site packages ansible modules extras cloud amazon ecs taskdefinition py volumes module params if volumes is none volumes def setinttypesincontainers c ck c keys if cpu in ck c int c if memory in ck c int c if portmappings in ck for m in c mk m keys if containerport in mk m int m if hostport in mk m int m map setinttypesincontainers module params results task mgr register task module params module params volumes results true
1
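The workaround patch in the record above, restated as a standalone Python 3 sketch of the same coercion idea (the original runs inside the module under Python 2, hence its bare map() call): with_items templating turns every scalar into a string, so the known integer fields must be coerced back before boto validates the task definition.

```python
INT_FIELDS = ("cpu", "memory")
PORT_FIELDS = ("containerPort", "hostPort")

def coerce_container_ints(container: dict) -> dict:
    """Restore integer types that Ansible templating stringified."""
    for key in INT_FIELDS:
        if key in container:
            container[key] = int(container[key])
    for mapping in container.get("portMappings", []):
        for key in PORT_FIELDS:
            if key in mapping:
                mapping[key] = int(mapping[key])
    return container

containers = [{"cpu": "128", "memory": "256",
               "portMappings": [{"containerPort": "8888", "hostPort": "9999"}]}]
containers = [coerce_container_ints(c) for c in containers]
assert containers[0]["cpu"] == 128  # int again, as boto expects
```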
169,650
26,836,482,362
IssuesEvent
2023-02-02 19:57:49
cov-lineages/pango-designation
https://api.github.com/repos/cov-lineages/pango-designation
closed
Big sublineage of BN.1.3 defined by Orf7b:C41W emerged in Vietnam, 977 sequences as of 2023/02/02
designated BA.2.75
I randomly came across this sublineage while checking airport surveillance; it caught my attention because its main mutation Orf7b:C41W (T27878G) sounded new to me and the collection dates were all quite recent. Digging a bit more, I found that this sublineage of BN.1.3 is circulating with significant prevalence in South Korea, hovering around 2% of samples in the last few weeks, while growing to 0.5% of cases in Japan. So I decided to check growth advantages in South Korea, and it has a slight but solid advantage [versus parental BN.1.3](https://cov-spectrum.org/explore/South%20Korea/AllSamples/Past3M/variants?nextcladePangoLineage=BN.1.3*&aaMutations1=Orf7b%3AC41W&nextcladePangoLineage1=BN.1.3*&analysisMode=CompareToBaseline&)

<img width="923" alt="Schermata 2023-01-04 alle 22 21 26" src="https://user-images.githubusercontent.com/87669813/210652178-944b28b0-e5ec-4dc9-846d-401c51764242.png">

It seems ahead of the BQ.1 clan and just ahead of the leading group with CH.1.1, even if it is still far from XBB.1.5. I would like to highlight that a sublineage of this one gained a further Orf7b:W41L mutation with G27877T (15 sequences) (@thomaspeacock, could you check if I got it right, please?). Considering those sequences too, the advantage seems slightly bigger.

**Defining mutations:** BN.1.3 + ORF1a:M3627I (G11146T) > ORF1a:H110Y (C593T) > C7390A > Orf7b:C41W (T27878G)

**Usher Tree:** I would like to highlight the big saltation branch at the top of the UShER tree, currently circulating in Japan and internationally, and the little cluster with S:E471Q in the bottom part of the tree:
https://nextstrain.org/fetch/genome.ucsc.edu/trash/ct/subtreeAuspice1_genome_2143_5e6d10.json?c=country&label=id:node_7869439

<img width="1164" alt="Schermata 2023-01-04 alle 22 30 32" src="https://user-images.githubusercontent.com/87669813/210653565-1c1a4890-8ffa-41f2-b77a-d7e6e0f5f91d.png">

GISAID query: NS7b_C41W,E_T11A,NS3_T229I finds 455 sequences:

<details>
<summary>Expand for EPI_ISLs</summary>
EPI_ISL_15280956, EPI_ISL_15341181, EPI_ISL_15609500, EPI_ISL_15609618, EPI_ISL_15609627, EPI_ISL_15641407, EPI_ISL_15653379, EPI_ISL_15671987, EPI_ISL_15672526, EPI_ISL_15694051, EPI_ISL_15695703, EPI_ISL_15695743, EPI_ISL_15712256, EPI_ISL_15732187, EPI_ISL_15732258, EPI_ISL_15732878, EPI_ISL_15733027, EPI_ISL_15736596, EPI_ISL_15755897, EPI_ISL_15756084, EPI_ISL_15756220, EPI_ISL_15757122, EPI_ISL_15757130, EPI_ISL_15757137, EPI_ISL_15783588, EPI_ISL_15783784, EPI_ISL_15784352, EPI_ISL_15794137, EPI_ISL_15794722, EPI_ISL_15804436, EPI_ISL_15811198-15811199, EPI_ISL_15811297, EPI_ISL_15811301, EPI_ISL_15820479, EPI_ISL_15831920, EPI_ISL_15838570, EPI_ISL_15838573, EPI_ISL_15842274, EPI_ISL_15848957, EPI_ISL_15849081, EPI_ISL_15850241, EPI_ISL_15850249, EPI_ISL_15875604, EPI_ISL_15887437, EPI_ISL_15887456, EPI_ISL_15896856, EPI_ISL_15896972, EPI_ISL_15897014, EPI_ISL_15905785, EPI_ISL_15905920, EPI_ISL_15906229, EPI_ISL_15906345, EPI_ISL_15906718, EPI_ISL_15907094, EPI_ISL_15907356, EPI_ISL_15907753, EPI_ISL_15907982, EPI_ISL_15910734, EPI_ISL_15916557, EPI_ISL_15917361, EPI_ISL_15923945, EPI_ISL_15938456, EPI_ISL_15943762, EPI_ISL_15944351, EPI_ISL_15944372, EPI_ISL_15944966, EPI_ISL_15946986, EPI_ISL_15950714, EPI_ISL_15957977, EPI_ISL_15961223, EPI_ISL_15978568, EPI_ISL_15979026, EPI_ISL_15979247, EPI_ISL_15979270, EPI_ISL_15979469, EPI_ISL_15979472, EPI_ISL_15979493, EPI_ISL_15979638, EPI_ISL_15979692, EPI_ISL_15979967, EPI_ISL_15980180, EPI_ISL_15981420, EPI_ISL_15981556-15981557, EPI_ISL_15981660, EPI_ISL_15987669, EPI_ISL_15987677,
EPI_ISL_15997256, EPI_ISL_15997337, EPI_ISL_15999784, EPI_ISL_15999963, EPI_ISL_16004094, EPI_ISL_16004167, EPI_ISL_16004234, EPI_ISL_16004466, EPI_ISL_16010751, EPI_ISL_16011832, EPI_ISL_16026624, EPI_ISL_16026626, EPI_ISL_16035681-16035682, EPI_ISL_16036315, EPI_ISL_16036943, EPI_ISL_16040533, EPI_ISL_16048728, EPI_ISL_16051039, EPI_ISL_16051737, EPI_ISL_16051833, EPI_ISL_16058743, EPI_ISL_16058908, EPI_ISL_16059055, EPI_ISL_16060198, EPI_ISL_16060276, EPI_ISL_16060522, EPI_ISL_16060639, EPI_ISL_16060762, EPI_ISL_16067524, EPI_ISL_16069571, EPI_ISL_16073621, EPI_ISL_16073623, EPI_ISL_16077651, EPI_ISL_16077750, EPI_ISL_16077768, EPI_ISL_16077785, EPI_ISL_16077845, EPI_ISL_16078921, EPI_ISL_16086653, EPI_ISL_16091723, EPI_ISL_16091955, EPI_ISL_16092606, EPI_ISL_16093839, EPI_ISL_16096196, EPI_ISL_16099089, EPI_ISL_16105767, EPI_ISL_16107792, EPI_ISL_16109839, EPI_ISL_16109932, EPI_ISL_16110011, EPI_ISL_16111044, EPI_ISL_16112219, EPI_ISL_16112822, EPI_ISL_16114521, EPI_ISL_16115590, EPI_ISL_16115713, EPI_ISL_16120239, EPI_ISL_16120430, EPI_ISL_16120697, EPI_ISL_16125129, EPI_ISL_16125177, EPI_ISL_16125267, EPI_ISL_16128398, EPI_ISL_16128861, EPI_ISL_16129606, EPI_ISL_16129612, EPI_ISL_16130481, EPI_ISL_16131284, EPI_ISL_16132657, EPI_ISL_16133910, EPI_ISL_16133933, EPI_ISL_16135111, EPI_ISL_16135153, EPI_ISL_16135252, EPI_ISL_16135307, EPI_ISL_16135374, EPI_ISL_16135413, EPI_ISL_16135502, EPI_ISL_16135634, EPI_ISL_16135699, EPI_ISL_16135915, EPI_ISL_16135938, EPI_ISL_16135969-16135970, EPI_ISL_16136052, EPI_ISL_16136136, EPI_ISL_16136186, EPI_ISL_16136340, EPI_ISL_16136364, EPI_ISL_16136380, EPI_ISL_16136385, EPI_ISL_16136415, EPI_ISL_16136907, EPI_ISL_16137116, EPI_ISL_16137122, EPI_ISL_16137170, EPI_ISL_16137232-16137239, EPI_ISL_16137285, EPI_ISL_16137341, EPI_ISL_16137385, EPI_ISL_16137401, EPI_ISL_16137403, EPI_ISL_16138779, EPI_ISL_16141406, EPI_ISL_16143511, EPI_ISL_16144052, EPI_ISL_16144120, EPI_ISL_16151787, EPI_ISL_16153376, EPI_ISL_16153523, EPI_ISL_16158537-16158538, EPI_ISL_16161745, EPI_ISL_16161853, EPI_ISL_16165348, EPI_ISL_16165767, EPI_ISL_16167531, EPI_ISL_16169389, EPI_ISL_16173920, EPI_ISL_16173943, EPI_ISL_16174192-16174193, EPI_ISL_16174392, EPI_ISL_16174737, EPI_ISL_16175239, EPI_ISL_16180229, EPI_ISL_16186420, EPI_ISL_16186725, EPI_ISL_16186792, EPI_ISL_16186972, EPI_ISL_16187002, EPI_ISL_16187010, EPI_ISL_16187096, EPI_ISL_16189318, EPI_ISL_16190550, EPI_ISL_16191270, EPI_ISL_16191736, EPI_ISL_16192615, EPI_ISL_16192777, EPI_ISL_16192782, EPI_ISL_16192831, EPI_ISL_16192840, EPI_ISL_16192986, EPI_ISL_16193424, EPI_ISL_16194038, EPI_ISL_16196484, EPI_ISL_16201160, EPI_ISL_16202003, EPI_ISL_16206214, EPI_ISL_16209640, EPI_ISL_16210070, EPI_ISL_16210745, EPI_ISL_16216251, EPI_ISL_16217368, EPI_ISL_16217414, EPI_ISL_16217594, EPI_ISL_16220991, EPI_ISL_16222658, EPI_ISL_16225868, EPI_ISL_16225875, EPI_ISL_16225883, EPI_ISL_16229254, EPI_ISL_16234714, EPI_ISL_16238369, EPI_ISL_16245772, EPI_ISL_16245903, EPI_ISL_16245913, EPI_ISL_16246649, EPI_ISL_16246833, EPI_ISL_16250577, EPI_ISL_16254458, EPI_ISL_16255347, EPI_ISL_16255553, EPI_ISL_16256015, EPI_ISL_16256804, EPI_ISL_16257458, EPI_ISL_16259454, EPI_ISL_16264666, EPI_ISL_16264821, EPI_ISL_16265428, EPI_ISL_16265768, EPI_ISL_16270893, EPI_ISL_16270953, EPI_ISL_16271226, EPI_ISL_16271442, EPI_ISL_16271506, EPI_ISL_16272796, EPI_ISL_16273338, EPI_ISL_16273357, EPI_ISL_16273613, EPI_ISL_16273908, EPI_ISL_16273922, EPI_ISL_16274070, EPI_ISL_16278531, EPI_ISL_16279286, EPI_ISL_16279297, EPI_ISL_16279305, EPI_ISL_16279329, 
EPI_ISL_16279348, EPI_ISL_16279384, EPI_ISL_16279840, EPI_ISL_16279926, EPI_ISL_16280690, EPI_ISL_16284170, EPI_ISL_16284183, EPI_ISL_16284429, EPI_ISL_16291136, EPI_ISL_16291409, EPI_ISL_16291532, EPI_ISL_16291628, EPI_ISL_16291731, EPI_ISL_16292222, EPI_ISL_16292303, EPI_ISL_16292367, EPI_ISL_16292466, EPI_ISL_16292548, EPI_ISL_16292607, EPI_ISL_16292632, EPI_ISL_16292639, EPI_ISL_16292858, EPI_ISL_16294641, EPI_ISL_16298933, EPI_ISL_16299067, EPI_ISL_16299085, EPI_ISL_16299236, EPI_ISL_16299254, EPI_ISL_16301094, EPI_ISL_16301844, EPI_ISL_16301941, EPI_ISL_16303190, EPI_ISL_16305383, EPI_ISL_16305415, EPI_ISL_16307771, EPI_ISL_16308392, EPI_ISL_16311860, EPI_ISL_16313345, EPI_ISL_16314029, EPI_ISL_16315608, EPI_ISL_16319235, EPI_ISL_16319254, EPI_ISL_16320079, EPI_ISL_16322030, EPI_ISL_16322041-16322042, EPI_ISL_16322296, EPI_ISL_16322310-16322311, EPI_ISL_16323732-16323733, EPI_ISL_16323735, EPI_ISL_16323739, EPI_ISL_16323921, EPI_ISL_16324973, EPI_ISL_16329712, EPI_ISL_16329836, EPI_ISL_16330482, EPI_ISL_16330500, EPI_ISL_16333508, EPI_ISL_16333879, EPI_ISL_16333989, EPI_ISL_16336344, EPI_ISL_16336371, EPI_ISL_16336453, EPI_ISL_16336470, EPI_ISL_16336518-16336519, EPI_ISL_16336523, EPI_ISL_16336621, EPI_ISL_16336644, EPI_ISL_16336689, EPI_ISL_16336772, EPI_ISL_16336812, EPI_ISL_16337870, EPI_ISL_16337981, EPI_ISL_16338035-16338036, EPI_ISL_16338045, EPI_ISL_16338095, EPI_ISL_16338198, EPI_ISL_16338219, EPI_ISL_16338250, EPI_ISL_16339128, EPI_ISL_16339162, EPI_ISL_16339200, EPI_ISL_16339279, EPI_ISL_16339343, EPI_ISL_16339365-16339366, EPI_ISL_16339457, EPI_ISL_16339481, EPI_ISL_16339589, EPI_ISL_16339618, EPI_ISL_16339630, EPI_ISL_16339704, EPI_ISL_16339744, EPI_ISL_16339924, EPI_ISL_16339950, EPI_ISL_16339961, EPI_ISL_16340001, EPI_ISL_16340064, EPI_ISL_16340222, EPI_ISL_16340262, EPI_ISL_16340269, EPI_ISL_16340297, EPI_ISL_16340377, EPI_ISL_16340418, EPI_ISL_16340496, EPI_ISL_16340605, EPI_ISL_16340611, EPI_ISL_16340729, EPI_ISL_16340741, EPI_ISL_16340805, EPI_ISL_16340910, EPI_ISL_16340994, EPI_ISL_16341100, EPI_ISL_16341144, EPI_ISL_16341264, EPI_ISL_16341339, EPI_ISL_16341457, EPI_ISL_16341485, EPI_ISL_16341488, EPI_ISL_16341510, EPI_ISL_16341556, EPI_ISL_16341562, EPI_ISL_16342044, EPI_ISL_16342072, EPI_ISL_16342366, EPI_ISL_16342375, EPI_ISL_16342388, EPI_ISL_16342419, EPI_ISL_16342421, EPI_ISL_16342428, EPI_ISL_16342726, EPI_ISL_16344676, EPI_ISL_16347015, EPI_ISL_16347277, EPI_ISL_16352264, EPI_ISL_16353616, EPI_ISL_16353868, EPI_ISL_16359036, EPI_ISL_16359298, EPI_ISL_16359359, EPI_ISL_16359374, EPI_ISL_16359582, EPI_ISL_16360513, EPI_ISL_16360528, EPI_ISL_16360530, EPI_ISL_16360544, EPI_ISL_16364226, EPI_ISL_16370161, EPI_ISL_16370659, EPI_ISL_16372196, EPI_ISL_16374747, EPI_ISL_16376673, EPI_ISL_16377095, EPI_ISL_16377108, EPI_ISL_16377955, EPI_ISL_16378818, EPI_ISL_16379434 </details>
1.0
Big sublineage of BN.1.3 defined by Orf7b:C41W emerged in Vietnam, 977 sequences as of 2023/02/02 - I randomly came across this sublineage while checking airport surveillance; it caught my attention because its main mutation Orf7b:C41W (T27878G) sounded new to me and the collection dates were all quite recent. Digging a bit more, I found that this sublineage of BN.1.3 is circulating with a significant prevalence in South Korea, hovering around 2% of samples in the last few weeks, while growing to 0.5% of cases in Japan. So I decided to check growth advantages in South Korea, and it has a slight but solid advantage [versus parental BN.1.3](https://cov-spectrum.org/explore/South%20Korea/AllSamples/Past3M/variants?nextcladePangoLineage=BN.1.3*&aaMutations1=Orf7b%3AC41W&nextcladePangoLineage1=BN.1.3*&analysisMode=CompareToBaseline&) <img width="923" alt="Schermata 2023-01-04 alle 22 21 26" src="https://user-images.githubusercontent.com/87669813/210652178-944b28b0-e5ec-4dc9-846d-401c51764242.png"> It seems ahead of the BQ.1 clan and just ahead of the leading group with CH.1.1, even if it is still far from XBB.1.5. I would like to highlight that a sublineage of this one gained a further Orf7b:W41L mutation with G27877T (15 sequences) (@thomaspeacock, could you check if I got it right, please?). Considering those sequences too, the advantage seems slightly bigger. **Defining mutations:** BN.1.3 + ORF1a:M3627I (G11146T) > ORF1a:H110Y (C593T) > C7390A > Orf7b:C41W (T27878G) **UShER tree:** I would like to highlight the big saltation branch at the top of the UShER tree, currently circulating in Japan and internationally, and the little cluster with S:E471Q in the bottom part of the tree: https://nextstrain.org/fetch/genome.ucsc.edu/trash/ct/subtreeAuspice1_genome_2143_5e6d10.json?c=country&label=id:node_7869439 <img width="1164" alt="Schermata 2023-01-04 alle 22 30 32" src="https://user-images.githubusercontent.com/87669813/210653565-1c1a4890-8ffa-41f2-b77a-d7e6e0f5f91d.png"> GISAID query: NS7b_C41W,E_T11A,NS3_T229I finds 455 sequences: <Details> <summary>Expand for EPI_ISLs</summary> EPI_ISL_15280956, EPI_ISL_15341181, EPI_ISL_15609500, EPI_ISL_15609618, EPI_ISL_15609627, EPI_ISL_15641407, EPI_ISL_15653379, EPI_ISL_15671987, EPI_ISL_15672526, EPI_ISL_15694051, EPI_ISL_15695703, EPI_ISL_15695743, EPI_ISL_15712256, EPI_ISL_15732187, EPI_ISL_15732258, EPI_ISL_15732878, EPI_ISL_15733027, EPI_ISL_15736596, EPI_ISL_15755897, EPI_ISL_15756084, EPI_ISL_15756220, EPI_ISL_15757122, EPI_ISL_15757130, EPI_ISL_15757137, EPI_ISL_15783588, EPI_ISL_15783784, EPI_ISL_15784352, EPI_ISL_15794137, EPI_ISL_15794722, EPI_ISL_15804436, EPI_ISL_15811198-15811199, EPI_ISL_15811297, EPI_ISL_15811301, EPI_ISL_15820479, EPI_ISL_15831920, EPI_ISL_15838570, EPI_ISL_15838573, EPI_ISL_15842274, EPI_ISL_15848957, EPI_ISL_15849081, EPI_ISL_15850241, EPI_ISL_15850249, EPI_ISL_15875604, EPI_ISL_15887437, EPI_ISL_15887456, EPI_ISL_15896856, EPI_ISL_15896972, EPI_ISL_15897014, EPI_ISL_15905785, EPI_ISL_15905920, EPI_ISL_15906229, EPI_ISL_15906345, EPI_ISL_15906718, EPI_ISL_15907094, EPI_ISL_15907356, EPI_ISL_15907753, EPI_ISL_15907982, EPI_ISL_15910734, EPI_ISL_15916557, EPI_ISL_15917361, EPI_ISL_15923945, EPI_ISL_15938456, EPI_ISL_15943762, EPI_ISL_15944351, EPI_ISL_15944372, EPI_ISL_15944966, EPI_ISL_15946986, EPI_ISL_15950714, EPI_ISL_15957977, EPI_ISL_15961223, EPI_ISL_15978568, EPI_ISL_15979026, EPI_ISL_15979247, EPI_ISL_15979270, EPI_ISL_15979469, EPI_ISL_15979472, EPI_ISL_15979493, EPI_ISL_15979638, EPI_ISL_15979692, EPI_ISL_15979967, EPI_ISL_15980180, 
EPI_ISL_15981420, EPI_ISL_15981556-15981557, EPI_ISL_15981660, EPI_ISL_15987669, EPI_ISL_15987677, EPI_ISL_15997256, EPI_ISL_15997337, EPI_ISL_15999784, EPI_ISL_15999963, EPI_ISL_16004094, EPI_ISL_16004167, EPI_ISL_16004234, EPI_ISL_16004466, EPI_ISL_16010751, EPI_ISL_16011832, EPI_ISL_16026624, EPI_ISL_16026626, EPI_ISL_16035681-16035682, EPI_ISL_16036315, EPI_ISL_16036943, EPI_ISL_16040533, EPI_ISL_16048728, EPI_ISL_16051039, EPI_ISL_16051737, EPI_ISL_16051833, EPI_ISL_16058743, EPI_ISL_16058908, EPI_ISL_16059055, EPI_ISL_16060198, EPI_ISL_16060276, EPI_ISL_16060522, EPI_ISL_16060639, EPI_ISL_16060762, EPI_ISL_16067524, EPI_ISL_16069571, EPI_ISL_16073621, EPI_ISL_16073623, EPI_ISL_16077651, EPI_ISL_16077750, EPI_ISL_16077768, EPI_ISL_16077785, EPI_ISL_16077845, EPI_ISL_16078921, EPI_ISL_16086653, EPI_ISL_16091723, EPI_ISL_16091955, EPI_ISL_16092606, EPI_ISL_16093839, EPI_ISL_16096196, EPI_ISL_16099089, EPI_ISL_16105767, EPI_ISL_16107792, EPI_ISL_16109839, EPI_ISL_16109932, EPI_ISL_16110011, EPI_ISL_16111044, EPI_ISL_16112219, EPI_ISL_16112822, EPI_ISL_16114521, EPI_ISL_16115590, EPI_ISL_16115713, EPI_ISL_16120239, EPI_ISL_16120430, EPI_ISL_16120697, EPI_ISL_16125129, EPI_ISL_16125177, EPI_ISL_16125267, EPI_ISL_16128398, EPI_ISL_16128861, EPI_ISL_16129606, EPI_ISL_16129612, EPI_ISL_16130481, EPI_ISL_16131284, EPI_ISL_16132657, EPI_ISL_16133910, EPI_ISL_16133933, EPI_ISL_16135111, EPI_ISL_16135153, EPI_ISL_16135252, EPI_ISL_16135307, EPI_ISL_16135374, EPI_ISL_16135413, EPI_ISL_16135502, EPI_ISL_16135634, EPI_ISL_16135699, EPI_ISL_16135915, EPI_ISL_16135938, EPI_ISL_16135969-16135970, EPI_ISL_16136052, EPI_ISL_16136136, EPI_ISL_16136186, EPI_ISL_16136340, EPI_ISL_16136364, EPI_ISL_16136380, EPI_ISL_16136385, EPI_ISL_16136415, EPI_ISL_16136907, EPI_ISL_16137116, EPI_ISL_16137122, EPI_ISL_16137170, EPI_ISL_16137232-16137239, EPI_ISL_16137285, EPI_ISL_16137341, EPI_ISL_16137385, EPI_ISL_16137401, EPI_ISL_16137403, EPI_ISL_16138779, EPI_ISL_16141406, EPI_ISL_16143511, EPI_ISL_16144052, EPI_ISL_16144120, EPI_ISL_16151787, EPI_ISL_16153376, EPI_ISL_16153523, EPI_ISL_16158537-16158538, EPI_ISL_16161745, EPI_ISL_16161853, EPI_ISL_16165348, EPI_ISL_16165767, EPI_ISL_16167531, EPI_ISL_16169389, EPI_ISL_16173920, EPI_ISL_16173943, EPI_ISL_16174192-16174193, EPI_ISL_16174392, EPI_ISL_16174737, EPI_ISL_16175239, EPI_ISL_16180229, EPI_ISL_16186420, EPI_ISL_16186725, EPI_ISL_16186792, EPI_ISL_16186972, EPI_ISL_16187002, EPI_ISL_16187010, EPI_ISL_16187096, EPI_ISL_16189318, EPI_ISL_16190550, EPI_ISL_16191270, EPI_ISL_16191736, EPI_ISL_16192615, EPI_ISL_16192777, EPI_ISL_16192782, EPI_ISL_16192831, EPI_ISL_16192840, EPI_ISL_16192986, EPI_ISL_16193424, EPI_ISL_16194038, EPI_ISL_16196484, EPI_ISL_16201160, EPI_ISL_16202003, EPI_ISL_16206214, EPI_ISL_16209640, EPI_ISL_16210070, EPI_ISL_16210745, EPI_ISL_16216251, EPI_ISL_16217368, EPI_ISL_16217414, EPI_ISL_16217594, EPI_ISL_16220991, EPI_ISL_16222658, EPI_ISL_16225868, EPI_ISL_16225875, EPI_ISL_16225883, EPI_ISL_16229254, EPI_ISL_16234714, EPI_ISL_16238369, EPI_ISL_16245772, EPI_ISL_16245903, EPI_ISL_16245913, EPI_ISL_16246649, EPI_ISL_16246833, EPI_ISL_16250577, EPI_ISL_16254458, EPI_ISL_16255347, EPI_ISL_16255553, EPI_ISL_16256015, EPI_ISL_16256804, EPI_ISL_16257458, EPI_ISL_16259454, EPI_ISL_16264666, EPI_ISL_16264821, EPI_ISL_16265428, EPI_ISL_16265768, EPI_ISL_16270893, EPI_ISL_16270953, EPI_ISL_16271226, EPI_ISL_16271442, EPI_ISL_16271506, EPI_ISL_16272796, EPI_ISL_16273338, EPI_ISL_16273357, EPI_ISL_16273613, EPI_ISL_16273908, EPI_ISL_16273922, 
EPI_ISL_16274070, EPI_ISL_16278531, EPI_ISL_16279286, EPI_ISL_16279297, EPI_ISL_16279305, EPI_ISL_16279329, EPI_ISL_16279348, EPI_ISL_16279384, EPI_ISL_16279840, EPI_ISL_16279926, EPI_ISL_16280690, EPI_ISL_16284170, EPI_ISL_16284183, EPI_ISL_16284429, EPI_ISL_16291136, EPI_ISL_16291409, EPI_ISL_16291532, EPI_ISL_16291628, EPI_ISL_16291731, EPI_ISL_16292222, EPI_ISL_16292303, EPI_ISL_16292367, EPI_ISL_16292466, EPI_ISL_16292548, EPI_ISL_16292607, EPI_ISL_16292632, EPI_ISL_16292639, EPI_ISL_16292858, EPI_ISL_16294641, EPI_ISL_16298933, EPI_ISL_16299067, EPI_ISL_16299085, EPI_ISL_16299236, EPI_ISL_16299254, EPI_ISL_16301094, EPI_ISL_16301844, EPI_ISL_16301941, EPI_ISL_16303190, EPI_ISL_16305383, EPI_ISL_16305415, EPI_ISL_16307771, EPI_ISL_16308392, EPI_ISL_16311860, EPI_ISL_16313345, EPI_ISL_16314029, EPI_ISL_16315608, EPI_ISL_16319235, EPI_ISL_16319254, EPI_ISL_16320079, EPI_ISL_16322030, EPI_ISL_16322041-16322042, EPI_ISL_16322296, EPI_ISL_16322310-16322311, EPI_ISL_16323732-16323733, EPI_ISL_16323735, EPI_ISL_16323739, EPI_ISL_16323921, EPI_ISL_16324973, EPI_ISL_16329712, EPI_ISL_16329836, EPI_ISL_16330482, EPI_ISL_16330500, EPI_ISL_16333508, EPI_ISL_16333879, EPI_ISL_16333989, EPI_ISL_16336344, EPI_ISL_16336371, EPI_ISL_16336453, EPI_ISL_16336470, EPI_ISL_16336518-16336519, EPI_ISL_16336523, EPI_ISL_16336621, EPI_ISL_16336644, EPI_ISL_16336689, EPI_ISL_16336772, EPI_ISL_16336812, EPI_ISL_16337870, EPI_ISL_16337981, EPI_ISL_16338035-16338036, EPI_ISL_16338045, EPI_ISL_16338095, EPI_ISL_16338198, EPI_ISL_16338219, EPI_ISL_16338250, EPI_ISL_16339128, EPI_ISL_16339162, EPI_ISL_16339200, EPI_ISL_16339279, EPI_ISL_16339343, EPI_ISL_16339365-16339366, EPI_ISL_16339457, EPI_ISL_16339481, EPI_ISL_16339589, EPI_ISL_16339618, EPI_ISL_16339630, EPI_ISL_16339704, EPI_ISL_16339744, EPI_ISL_16339924, EPI_ISL_16339950, EPI_ISL_16339961, EPI_ISL_16340001, EPI_ISL_16340064, EPI_ISL_16340222, EPI_ISL_16340262, EPI_ISL_16340269, EPI_ISL_16340297, EPI_ISL_16340377, EPI_ISL_16340418, EPI_ISL_16340496, EPI_ISL_16340605, EPI_ISL_16340611, EPI_ISL_16340729, EPI_ISL_16340741, EPI_ISL_16340805, EPI_ISL_16340910, EPI_ISL_16340994, EPI_ISL_16341100, EPI_ISL_16341144, EPI_ISL_16341264, EPI_ISL_16341339, EPI_ISL_16341457, EPI_ISL_16341485, EPI_ISL_16341488, EPI_ISL_16341510, EPI_ISL_16341556, EPI_ISL_16341562, EPI_ISL_16342044, EPI_ISL_16342072, EPI_ISL_16342366, EPI_ISL_16342375, EPI_ISL_16342388, EPI_ISL_16342419, EPI_ISL_16342421, EPI_ISL_16342428, EPI_ISL_16342726, EPI_ISL_16344676, EPI_ISL_16347015, EPI_ISL_16347277, EPI_ISL_16352264, EPI_ISL_16353616, EPI_ISL_16353868, EPI_ISL_16359036, EPI_ISL_16359298, EPI_ISL_16359359, EPI_ISL_16359374, EPI_ISL_16359582, EPI_ISL_16360513, EPI_ISL_16360528, EPI_ISL_16360530, EPI_ISL_16360544, EPI_ISL_16364226, EPI_ISL_16370161, EPI_ISL_16370659, EPI_ISL_16372196, EPI_ISL_16374747, EPI_ISL_16376673, EPI_ISL_16377095, EPI_ISL_16377108, EPI_ISL_16377955, EPI_ISL_16378818, EPI_ISL_16379434 </details>
non_main
big sublineage of bn defined by emerged in as i randomly met this sublineage while checking airport surveillance it caught my attention cause its main mutation sounded new to me and collection dates were all quite recent digging a bit more i found that this sublineage of bn is circulating with a significant prevalence in south korea hanging around of samples in the last few weeks while growing to of cases in japan so i decided to check growth advantages in south korea and it has a slight but solid advantage img width alt schermata alle src it seems ahead of bq clan while just ahead the leading group with ch even if it is still far from xbb i would like to highlight that a sublineage of this one gained a further mutation with sequences thomaspeacock could you check if i got it right please considering those sequences too the advantage seems slighty bigger defining mutations bn usher tree i would like to highlight the big saltation branch on the top of the usher tree currently circulating in japan and internationally and the little cluster with s in the bottom part of the tree img width alt schermata alle src gisaid query e finds sequences expand for epi isls epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl 
epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl
0
358,752
10,632,056,510
IssuesEvent
2019-10-15 09:35:48
qlcchain/go-qlc
https://api.github.com/repos/qlcchain/go-qlc
closed
implement rpc pub/sub
Priority: High Type: Enhancement
### Description of the issue implement rpc pub/sub ### Issue-Type - [ ] bug report - [x] feature request - [ ] Documentation improvement
1.0
implement rpc pub/sub - ### Description of the issue implement rpc pub/sub ### Issue-Type - [ ] bug report - [x] feature request - [ ] Documentation improvement
non_main
implement rpc pub sub description of the issue implement rpc pub sub issue type bug report feature request documentation improvement
0
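The go-qlc issue above asks for RPC pub/sub without specifying a design, so the following is only a minimal, self-contained Go sketch of the subscription pattern such a feature typically builds on. All names here (`Broker`, `Subscribe`, `Publish`, the "newBlock" topic) are hypothetical illustrations, not go-qlc API.

```go
package main

import (
	"fmt"
	"sync"
)

// Broker is a hypothetical in-process pub/sub hub of the kind an RPC
// layer can expose as subscribe/unsubscribe/notify endpoints.
type Broker struct {
	mu     sync.Mutex
	nextID int
	subs   map[string]map[int]chan interface{} // topic -> subscriber ID -> channel
}

func NewBroker() *Broker {
	return &Broker{subs: make(map[string]map[int]chan interface{})}
}

// Subscribe registers interest in a topic and returns the event channel
// plus a cancel function that removes the subscription.
func (b *Broker) Subscribe(topic string) (<-chan interface{}, func()) {
	b.mu.Lock()
	defer b.mu.Unlock()
	if b.subs[topic] == nil {
		b.subs[topic] = make(map[int]chan interface{})
	}
	id := b.nextID
	b.nextID++
	ch := make(chan interface{}, 16) // buffered so slow readers don't block Publish
	b.subs[topic][id] = ch
	cancel := func() {
		b.mu.Lock()
		defer b.mu.Unlock()
		if c, ok := b.subs[topic][id]; ok {
			delete(b.subs[topic], id)
			close(c)
		}
	}
	return ch, cancel
}

// Publish fans an event out to every subscriber of the topic,
// dropping it for subscribers whose buffer is full.
func (b *Broker) Publish(topic string, event interface{}) {
	b.mu.Lock()
	defer b.mu.Unlock()
	for _, ch := range b.subs[topic] {
		select {
		case ch <- event:
		default: // drop rather than stall the publisher
		}
	}
}

func main() {
	b := NewBroker()
	events, cancel := b.Subscribe("newBlock")
	defer cancel()
	b.Publish("newBlock", "block #42")
	fmt.Println(<-events) // block #42
}
```

An RPC front end would then map a subscribe call to `Subscribe`, stream the channel's events to the client, and call the cancel function when the connection drops.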
5,439
27,246,428,407
IssuesEvent
2023-02-22 02:38:19
VA-Explorer/va_explorer
https://api.github.com/repos/VA-Explorer/va_explorer
opened
Transition Field Workers to Data Viewers and convert Field Worker to Dashboard Access
Type: Maintainance Domain: Security Domain: Frontend
**What is the expected state?** As an admin I have clearly defined roles to choose from for my users, each with a unique set of permissions targeted towards specific use cases. One of these roles is only able to view the dashboard. **What is the actual state?** Currently, 2 possible user roles are essentially the same following #261: Field Worker and Data Viewer. As an admin, I'm not sure why I would choose one over the other, as they have the same permissions. Likewise, no user role exists with Dashboard-only access. **Relevant context** Dashboard-only access was an end-user request; altering Field Worker and Data Viewer is necessary following #261.
True
Transition Field Workers to Data Viewers and convert Field Worker to Dashboard Access - **What is the expected state?** As an admin I have clearly defined roles to choose from for my users, each with a unique set of permissions targeted towards specific use cases. One of these roles is only able to view the dashboard. **What is the actual state?** Currently, 2 possible user roles are essentially the same following #261: Field Worker and Data Viewer. As an admin, I'm not sure why I would choose one over the other, as they have the same permissions. Likewise, no user role exists with Dashboard-only access. **Relevant context** Dashboard-only access was an end-user request; altering Field Worker and Data Viewer is necessary following #261.
main
transition field workers to data viewers and convert field worker to dashboard access what is the expected state as an admin i have clearly defined roles to choose from for my users each with unique sets of permissions targeted towards specific use cases one of these roles is only able to view the dashboard what is the actual state currently possible user roles are essentially the same following field worker and data viewer as an admin i m not sure why i would choose one over the other as the have the same permissions likewise there is no user that exists with dashboard only access relevant context dashboard only access was an end user request altering field worker and data viewer is necessary following
1
2,865
10,271,528,735
IssuesEvent
2019-08-23 16:19:34
arcticicestudio/arctic
https://api.github.com/repos/arcticicestudio/arctic
opened
GitHub code owners
context-workflow scope-maintainability scope-quality scope-stability type-task
<p align="center"><img src="https://user-images.githubusercontent.com/7836623/63598769-85c23200-c5c0-11e9-967e-c8b3e5b43458.png" width="18%" /></p> The project should adopt GitHub's [code owners][intro] feature. This will allow defining matching patterns for project paths to automatically add all required reviewers from the core team and contributors to new PRs. See [GitHub Help][help] for more details. <p> <figure> <div align="center"> <img src="https://user-images.githubusercontent.com/7836623/63598793-91adf400-c5c0-11e9-99f8-2feaeaf57bd3.png" /> <figcaption>Sidebar for <em>code owner</em> PR review requests and review stats</figcaption> </div> </figure> </p> <p> <figure> <div align="center"> <img src="https://user-images.githubusercontent.com/7836623/63599279-8effce80-c5c1-11e9-9b87-1e1c276f7c6d.png" /> <figcaption>Branch protection configuration to enable required <em>code owner</em> review approvals</figcaption> </div> </figure> </p> <p> <figure> <div align="center"> <img src="https://user-images.githubusercontent.com/2513/27803610-544ba222-5ff8-11e7-9313-e4062315fb0c.png" /> <figcaption>PR status checks when required <em>code owner</em> review is pending</figcaption> </div> </figure> </p> [help]: https://help.github.com/articles/about-codeowners [intro]: https://github.com/blog/2392-introducing-code-owners
True
GitHub code owners - <p align="center"><img src="https://user-images.githubusercontent.com/7836623/63598769-85c23200-c5c0-11e9-967e-c8b3e5b43458.png" width="18%" /></p> The project should adopt GitHub's [code owners][intro] feature. This will allow defining matching patterns for project paths to automatically add all required reviewers from the core team and contributors to new PRs. See [GitHub Help][help] for more details. <p> <figure> <div align="center"> <img src="https://user-images.githubusercontent.com/7836623/63598793-91adf400-c5c0-11e9-99f8-2feaeaf57bd3.png" /> <figcaption>Sidebar for <em>code owner</em> PR review requests and review stats</figcaption> </div> </figure> </p> <p> <figure> <div align="center"> <img src="https://user-images.githubusercontent.com/7836623/63599279-8effce80-c5c1-11e9-9b87-1e1c276f7c6d.png" /> <figcaption>Branch protection configuration to enable required <em>code owner</em> review approvals</figcaption> </div> </figure> </p> <p> <figure> <div align="center"> <img src="https://user-images.githubusercontent.com/2513/27803610-544ba222-5ff8-11e7-9313-e4062315fb0c.png" /> <figcaption>PR status checks when required <em>code owner</em> review is pending</figcaption> </div> </figure> </p> [help]: https://help.github.com/articles/about-codeowners [intro]: https://github.com/blog/2392-introducing-code-owners
main
github code owners the project should adapt to github s feature this will allow to define matching pattern for project paths to automatically add all required reviewers of the core team and contributors to new prs see for more details sidebar for code owner pr review requests and review stats branch protection configuration to enable required code owner review approvals pr status checks when required code owner review is pending
1
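To make the matching patterns described in the issue above concrete, a `.github/CODEOWNERS` file could look like the sketch below. The paths and team names are invented for illustration and are not taken from the repository.

```
# Hypothetical .github/CODEOWNERS; syntax per GitHub Help.
# When several patterns match a file, the last one wins.

# Default reviewer for everything not matched below.
*            @arcticicestudio

# Documentation changes request a review from a docs team.
/docs/       @example-org/docs-team

# CI and workflow changes request a review from maintainers.
/.github/    @example-org/maintainers
```

Combined with the "Require review from Code Owners" branch-protection toggle shown in the screenshots, a PR touching a matched path cannot merge until the mapped owners approve.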
40,593
16,506,228,240
IssuesEvent
2021-05-25 19:41:33
cityofaustin/atd-data-tech
https://api.github.com/repos/cityofaustin/atd-data-tech
closed
Data Driven PHBs - Questions about Street Select PHB Data - Renee
Service: Geo Type: Data Workgroup: AMD
From Renee: > Good morning, I was able to import the data driven PHB map into Pro and am prepping for the D2, D3 meeting next week. While doing this I noticed that this segment scored pretty high in the ranking, but if you look closely you will see that there is already a PHB here. The location is S1st at King Edward. When you get a chance can you please take a look at this and see if we can do the 300' buffer from this site? Thanks so much. The PHB was turned on in 2018.
1.0
Data Driven PHBs - Questions about Street Select PHB Data - Renee - From Renee: > Good morning, I was able to import the data driven PHB map into Pro and am prepping for the D2, D3 meeting next week. While doing this I noticed that this segment scored pretty high in the ranking, but if you look closely you will see that there is already a PHB here. The location is S1st at King Edward. When you get a chance can you please take a look at this and see if we can do the 300' buffer from this site? Thanks so much. The PHB was turned on in 2018.
non_main
data driven phbs questions about street select phb data renee from renee good morning i was able to import the data driven phb map into pro and am prepping for the meeting next week while doing this i noticed that this segment scored pretty high in the ranking but if you look closely you will see that there is already a phb here the location is at king edward when you get a chance can you please take a look at this and see if we can do the buffer from this site thanks so much the phb was turned on in
0
2,753
9,852,680,544
IssuesEvent
2019-06-19 13:22:44
nventive/Uno
https://api.github.com/repos/nventive/Uno
closed
Unable to compile master on my machine using v16.0.4 (known good version of vswin)
area/vswin kind/contributor-experience kind/maintainer-experience kind/regression
# Current behavior - Unable to compile `master` on my machine using v16.0.4 (known good version of vswin) - Created virtual machine in Azure to repro. - It repros. ``` "N:\uno\dev\src\Uno.UI.sln" (default target) (1:2) -> "N:\uno\dev\src\Uno.UI.Tests\Uno.UI.Tests.csproj.metaproj" (default target) (42) -> "N:\uno\dev\src\Uno.UI.Tests\Uno.UI.Tests.csproj" (default target) (2:12) -> "N:\uno\dev\src\Uno.UI.Tests\Uno.UI.Tests.csproj" (Build target) (2:13) -> "N:\uno\dev\src\Uno.UI\Uno.UI.csproj" (default target) (16:58) -> "N:\uno\dev\src\Uno.UI.BindingHelper.Android\Uno.UI.BindingHelper.Android.csproj" (default target) (4:53) -> "N:\uno\dev\src\Uno.UI.BindingHelper.Android\Uno.UI.BindingHelper.Android.csproj" (Build target) (4:60) -> (_CompileUnoJava target) -> N:\uno\dev\src\Uno.UI.BindingHelper.Android\Uno.UI.BindingHelper.Android.csproj(107,3): error MSB3073: The command ""C:\Program Files\Android\jdk\microsoft_dist_openjdk_1.8.0.25\bin\javac.exe" -g -source 1.8 -d .\obj\Debug\MonoAndroid80\\unoclasses -target 1.8 -J-Dfile.encoding=UTF8 -classpath "%ProgramFiles(x86)%\Reference Assemblies\Microsoft\Framework\MonoAndroid\v8.0\mono.android.jar;obj\Debug\MonoAndroid80\\__library_projects__\Xamarin.Android.Support.Annotations\library_project_imports\support-annotations.jar" -bootclasspath "C:\Program Files (x86)\Android\android-sdk\platforms\android-26\android.jar" -encoding UTF-8 .\Uno\UI\*.java" exited with code 1. "N:\uno\dev\src\Uno.UI.sln" (default target) (1:2) -> "N:\uno\dev\src\Uno.UI.Tests\Uno.UI.Tests.csproj.metaproj" (default target) (42) -> "N:\uno\dev\src\Uno.UI.Tests\Uno.UI.Tests.csproj" (default target) (2:12) -> "N:\uno\dev\src\Uno.UI.Tests\Uno.UI.Tests.csproj" (Build target) (2:13) -> "N:\uno\dev\src\Uno.UI\Uno.UI.csproj" (default target) (16:58) -> "N:\uno\dev\src\Uno.UI.BindingHelper.Android\Uno.UI.BindingHelper.Android.csproj" (default target) (4:53) -> "N:\uno\dev\src\Uno.UI.BindingHelper.Android\Uno.UI.BindingHelper.Android.csproj" (Build target) (4:61) -> N:\uno\dev\src\Uno.UI.BindingHelper.Android\Uno.UI.BindingHelper.Android.csproj(107,3): error MSB3073: The command ""C:\Program Files\Android\jdk\microsoft_dist_openjdk_1.8.0.25\bin\javac.exe" -g -source 1.8 -d .\obj\Debug\MonoAndroid90\\unoclasses -target 1.8 -J-Dfile.encoding=UTF8 -classpath "%ProgramFiles(x86)%\Reference Assemblies\Microsoft\Framework\MonoAndroid\v9.0\mono.android.jar;obj\Debug\MonoAndroid90\\__library_projects__\Xamarin.Android.Support.Annotations\library_project_imports\support-annotations.jar" -bootclasspath "C:\Program Files (x86)\Android\android-sdk\platforms\android-28\android.jar" -encoding UTF-8 .\Uno\UI\*.java" exited with code 1. ``` ## How to reproduce it (as minimally and precisely as possible) Running `N:\uno\dev\src>msbuild Uno.UI.sln /r /verbosity:detailed /bl:red-build.binlog` results in a red build. Running `N:\uno\dev\src>msbuild Uno.UI.sln /r /t:restore /verbosity:detailed /bl:green-build.binlog` results in a green build. ## Environment Visual Studio - [ ] 2017 (version: ) - [x] 2019 (version: v16.0.4) - [ ] for Mac (version: ) ## Anything else we need to know? See https://drive.google.com/file/d/1g_NV-e4XzD15PSpT_jI5j9ogxMLKyRJr/view?usp=sharing for binlogs of a red build and a green build.
True
Unable to compile master on my machine using v16.0.4 (known good version of vswin) - # Current behavior - Unable to compile `master` on my machine using v16.0.4 (known good version of vswin) - Created virtual machine in Azure to repro. - It repros. ``` "N:\uno\dev\src\Uno.UI.sln" (default target) (1:2) -> "N:\uno\dev\src\Uno.UI.Tests\Uno.UI.Tests.csproj.metaproj" (default target) (42) -> "N:\uno\dev\src\Uno.UI.Tests\Uno.UI.Tests.csproj" (default target) (2:12) -> "N:\uno\dev\src\Uno.UI.Tests\Uno.UI.Tests.csproj" (Build target) (2:13) -> "N:\uno\dev\src\Uno.UI\Uno.UI.csproj" (default target) (16:58) -> "N:\uno\dev\src\Uno.UI.BindingHelper.Android\Uno.UI.BindingHelper.Android.csproj" (default target) (4:53) -> "N:\uno\dev\src\Uno.UI.BindingHelper.Android\Uno.UI.BindingHelper.Android.csproj" (Build target) (4:60) -> (_CompileUnoJava target) -> N:\uno\dev\src\Uno.UI.BindingHelper.Android\Uno.UI.BindingHelper.Android.csproj(107,3): error MSB3073: The command ""C:\Program Files\Android\jdk\microsoft_dist_openjdk_1.8.0.25\bin\javac.exe" -g -source 1.8 -d .\obj\Debug\MonoAndroid80\\unoclasses -target 1.8 -J-Dfile.encoding=UTF8 -classpath "%ProgramFiles(x86)%\Reference Assemblies\Microsoft\Framework\MonoAndroid\v8.0\mono.android.jar;obj\Debug\MonoAndroid80\\__library_projects__\Xamarin.Android.Support.Annotations\library_project_imports\support-annotations.jar" -bootclasspath "C:\Program Files (x86)\Android\android-sdk\platforms\android-26\android.jar" -encoding UTF-8 .\Uno\UI\*.java" exited with code 1. "N:\uno\dev\src\Uno.UI.sln" (default target) (1:2) -> "N:\uno\dev\src\Uno.UI.Tests\Uno.UI.Tests.csproj.metaproj" (default target) (42) -> "N:\uno\dev\src\Uno.UI.Tests\Uno.UI.Tests.csproj" (default target) (2:12) -> "N:\uno\dev\src\Uno.UI.Tests\Uno.UI.Tests.csproj" (Build target) (2:13) -> "N:\uno\dev\src\Uno.UI\Uno.UI.csproj" (default target) (16:58) -> "N:\uno\dev\src\Uno.UI.BindingHelper.Android\Uno.UI.BindingHelper.Android.csproj" (default target) (4:53) -> "N:\uno\dev\src\Uno.UI.BindingHelper.Android\Uno.UI.BindingHelper.Android.csproj" (Build target) (4:61) -> N:\uno\dev\src\Uno.UI.BindingHelper.Android\Uno.UI.BindingHelper.Android.csproj(107,3): error MSB3073: The command ""C:\Program Files\Android\jdk\microsoft_dist_openjdk_1.8.0.25\bin\javac.exe" -g -source 1.8 -d .\obj\Debug\MonoAndroid90\\unoclasses -target 1.8 -J-Dfile.encoding=UTF8 -classpath "%ProgramFiles(x86)%\Reference Assemblies\Microsoft\Framework\MonoAndroid\v9.0\mono.android.jar;obj\Debug\MonoAndroid90\\__library_projects__\Xamarin.Android.Support.Annotations\library_project_imports\support-annotations.jar" -bootclasspath "C:\Program Files (x86)\Android\android-sdk\platforms\android-28\android.jar" -encoding UTF-8 .\Uno\UI\*.java" exited with code 1. ``` ## How to reproduce it (as minimally and precisely as possible) Running `N:\uno\dev\src>msbuild Uno.UI.sln /r /verbosity:detailed /bl:red-build.binlog` results in a red build. Running `N:\uno\dev\src>msbuild Uno.UI.sln /r /t:restore /verbosity:detailed /bl:green-build.binlog` results in a green build. ## Environment Visual Studio - [ ] 2017 (version: ) - [x] 2019 (version: v16.0.4) - [ ] for Mac (version: ) ## Anything else we need to know? See https://drive.google.com/file/d/1g_NV-e4XzD15PSpT_jI5j9ogxMLKyRJr/view?usp=sharing for binlogs of a red build and a green build.
main
unable to compile master on my machine using known good version of vswin current behavior unable to compile master on my machine using known good version of vswin created virtual machine in azure to repro it repros n uno dev src uno ui sln default target n uno dev src uno ui tests uno ui tests csproj metaproj default target n uno dev src uno ui tests uno ui tests csproj default target n uno dev src uno ui tests uno ui tests csproj build target n uno dev src uno ui uno ui csproj default target n uno dev src uno ui bindinghelper android uno ui bindinghelper android csproj default target n uno dev src uno ui bindinghelper android uno ui bindinghelper android csproj build target compileunojava target n uno dev src uno ui bindinghelper android uno ui bindinghelper android csproj error the command c program files android jdk microsoft dist openjdk bin javac exe g source d obj debug unoclasses target j dfile encoding class path programfiles reference assemblies microsoft framework monoandroid mono android jar obj debug library projects xamarin android support annotations library project imports support annotations jar bootclasspath c program files android android sdk p latforms android android jar encoding utf uno ui java exited with code n uno dev src uno ui sln default target n uno dev src uno ui tests uno ui tests csproj metaproj default target n uno dev src uno ui tests uno ui tests csproj default target n uno dev src uno ui tests uno ui tests csproj build target n uno dev src uno ui uno ui csproj default target n uno dev src uno ui bindinghelper android uno ui bindinghelper android csproj default target n uno dev src uno ui bindinghelper android uno ui bindinghelper android csproj build target n uno dev src uno ui bindinghelper android uno ui bindinghelper android csproj error the command c program files android jdk microsoft dist openjdk bin javac exe g source d obj debug unoclasses target j dfile encoding class path programfiles reference assemblies microsoft framework monoandroid mono android jar obj debug library projects xamarin android support annotations library project imports support annotations jar bootclasspath c program files android android sdk p latforms android android jar encoding utf uno ui java exited with code how to reproduce it as minimally and precisely as possible running n uno dev src msbuild uno ui sln r verbosity detailed bl red build binlog results in a red build running n uno dev src msbuild uno ui sln r t restore verbosity detailed bl green build binlog results in a green build environment visual studio version version for mac version anything else we need to know see for binlogs of a red build and a green build
1
3,040
11,270,160,442
IssuesEvent
2020-01-14 10:20:35
RalfKoban/MiKo-Analyzers
https://api.github.com/repos/RalfKoban/MiKo-Analyzers
opened
Properties should not return Select/Where clauses
Area: analyzer Area: maintainability feature
Properties that return `IEnumerable` should not return the results of LINQ's `Select` or `Where` clauses. The reason is that the result is lazily evaluated later on (when iterating over it), which can lead to unexpected behavior. E.g.: ```C# public IEnumerable<int> MyProperty { get { return _myField.Where(_ => _ != 42); } } ``` Here the `Where` clause should be reported.
True
Properties should not return Select/Where clauses - Properties that return `IEnumerable` should not return the results of LINQ's `Select` or `Where` clauses. The reason is that the result is lazily evaluated later on (when iterating over it), which can lead to unexpected behavior. E.g.: ```C# public IEnumerable<int> MyProperty { get { return _myField.Where(_ => _ != 42); } } ``` Here the `Where` clause should be reported.
main
properties should not return select where clauses properties that return ienumerable should not return results of linq s select or where clauses the reason is that the results is lazily evaluated later on when iterating over the result thus probably leading to unexpected behavior e g c public ienumerable myproperty get return myfield where here the where clause should be reported
1
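The analyzer request above concerns C#/LINQ, but the underlying hazard (an accessor handing out a lazily evaluated view of mutable state) is language-independent. Below is a small Go transcription of the same pitfall and of the eager fix such an analyzer would push toward; the types are hypothetical and it assumes Go 1.23 for the `iter` and `slices.Collect` APIs.

```go
package main

import (
	"fmt"
	"iter"
	"slices"
)

type Widget struct {
	values []int
}

// LazyValues mirrors the flagged C# property: it returns a deferred
// filter over the field, so mutations made before iteration leak in.
func (w *Widget) LazyValues() iter.Seq[int] {
	return func(yield func(int) bool) {
		for _, v := range w.values {
			if v != 42 && !yield(v) {
				return
			}
		}
	}
}

// EagerValues materializes the result at call time, so later
// mutations of the field cannot change what the caller received.
func (w *Widget) EagerValues() []int {
	out := make([]int, 0, len(w.values))
	for _, v := range w.values {
		if v != 42 {
			out = append(out, v)
		}
	}
	return out
}

func main() {
	w := &Widget{values: []int{1, 2}}
	lazy := w.LazyValues()
	eager := w.EagerValues()

	w.values = append(w.values, 3) // mutate after the "results" were obtained

	fmt.Println(slices.Collect(lazy)) // [1 2 3]: the lazy view sees the mutation
	fmt.Println(eager)                // [1 2]:   the eager snapshot does not
}
```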
1,406
6,037,068,620
IssuesEvent
2017-06-09 17:44:07
backdrop-ops/contrib
https://api.github.com/repos/backdrop-ops/contrib
closed
Group application request
Maintainer application
I'm porting the Drupal 7 Usertabs module to Backdrop and maintaining it here: https://github.com/robertgarrigos/usertabs Thus this is my application to the contrib group. Thanks.
True
Group application request - I'm porting the Drupal 7 Usertabs module to Backdrop and maintaining it here: https://github.com/robertgarrigos/usertabs Thus this is my application to the contrib group. Thanks.
main
group application request i m porting the drupal usertabs module to backdrop and mainting it here thus thi is my application to the contrib group thanks
1
33,463
12,216,564,764
IssuesEvent
2020-05-01 15:24:13
bsbtd/Teste
https://api.github.com/repos/bsbtd/Teste
opened
CVE-2017-18214 (High) detected in moment-2.18.1.tgz
security vulnerability
## CVE-2017-18214 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>moment-2.18.1.tgz</b></p></summary> <p>Parse, validate, manipulate, and display dates</p> <p>Library home page: <a href="https://registry.npmjs.org/moment/-/moment-2.18.1.tgz">https://registry.npmjs.org/moment/-/moment-2.18.1.tgz</a></p> <p>Path to dependency file: /tmp/ws-scm/Teste/ionic-angular-twitter-pwa/server/package.json</p> <p>Path to vulnerable library: /tmp/ws-scm/Teste/ionic-angular-twitter-pwa/server/node_modules/firebase-admin/node_modules/moment/package.json</p> <p> Dependency Hierarchy: - firebase-admin-4.1.4.tgz (Root Library) - jsonwebtoken-7.1.9.tgz - joi-6.10.1.tgz - :x: **moment-2.18.1.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/bsbtd/Teste/commit/50a539d66e7a2f790cf8a8d8d1471993698c9adc">50a539d66e7a2f790cf8a8d8d1471993698c9adc</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The moment module before 2.19.3 for Node.js is prone to a regular expression denial of service via a crafted date string, a different vulnerability than CVE-2016-4055. <p>Publish Date: 2018-03-04 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-18214>CVE-2017-18214</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-18214">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-18214</a></p> <p>Release Date: 2018-03-04</p> <p>Fix Resolution: 2.19.3</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2017-18214 (High) detected in moment-2.18.1.tgz - ## CVE-2017-18214 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>moment-2.18.1.tgz</b></p></summary> <p>Parse, validate, manipulate, and display dates</p> <p>Library home page: <a href="https://registry.npmjs.org/moment/-/moment-2.18.1.tgz">https://registry.npmjs.org/moment/-/moment-2.18.1.tgz</a></p> <p>Path to dependency file: /tmp/ws-scm/Teste/ionic-angular-twitter-pwa/server/package.json</p> <p>Path to vulnerable library: /tmp/ws-scm/Teste/ionic-angular-twitter-pwa/server/node_modules/firebase-admin/node_modules/moment/package.json</p> <p> Dependency Hierarchy: - firebase-admin-4.1.4.tgz (Root Library) - jsonwebtoken-7.1.9.tgz - joi-6.10.1.tgz - :x: **moment-2.18.1.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/bsbtd/Teste/commit/50a539d66e7a2f790cf8a8d8d1471993698c9adc">50a539d66e7a2f790cf8a8d8d1471993698c9adc</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The moment module before 2.19.3 for Node.js is prone to a regular expression denial of service via a crafted date string, a different vulnerability than CVE-2016-4055. <p>Publish Date: 2018-03-04 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-18214>CVE-2017-18214</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-18214">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-18214</a></p> <p>Release Date: 2018-03-04</p> <p>Fix Resolution: 2.19.3</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_main
cve high detected in moment tgz cve high severity vulnerability vulnerable library moment tgz parse validate manipulate and display dates library home page a href path to dependency file tmp ws scm teste ionic angular twitter pwa server package json path to vulnerable library tmp ws scm teste ionic angular twitter pwa server node modules firebase admin node modules moment package json dependency hierarchy firebase admin tgz root library jsonwebtoken tgz joi tgz x moment tgz vulnerable library found in head commit a href vulnerability details the moment module before for node js is prone to a regular expression denial of service via a crafted date string a different vulnerability than cve publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
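Because the vulnerable moment copy above is a transitive dependency (pulled in via firebase-admin > jsonwebtoken > joi), bumping a direct dependency alone may not install the patched release. One option, assuming the project uses Yarn, is a `resolutions` override in `package.json` to force the version the advisory names:

```json
{
  "resolutions": {
    "moment": "2.19.3"
  }
}
```

Exact pinning mirrors the advisory's fix resolution; a range such as `^2.19.3` would also work once the lockfile is regenerated, and upgrading firebase-admin itself is the longer-term fix.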
18,020
24,032,777,902
IssuesEvent
2022-09-15 16:18:40
googleapis/java-apigee-registry
https://api.github.com/repos/googleapis/java-apigee-registry
opened
Your .repo-metadata.json file has a problem 🤒
type: process repo-metadata: lint
You have a problem with your .repo-metadata.json file: Result of scan 📈: * api_shortname 'apigee-registry' invalid in .repo-metadata.json ☝️ Once you address these problems, you can close this issue. ### Need help? * [Schema definition](https://github.com/googleapis/repo-automation-bots/blob/main/packages/repo-metadata-lint/src/repo-metadata-schema.json): lists valid options for each field. * [API index](https://github.com/googleapis/googleapis/blob/master/api-index-v1.json): for gRPC libraries **api_shortname** should match the subdomain of an API's **hostName**. * Reach out to **go/github-automation** if you have any questions.
1.0
Your .repo-metadata.json file has a problem 🤒 - You have a problem with your .repo-metadata.json file: Result of scan 📈: * api_shortname 'apigee-registry' invalid in .repo-metadata.json ☝️ Once you address these problems, you can close this issue. ### Need help? * [Schema definition](https://github.com/googleapis/repo-automation-bots/blob/main/packages/repo-metadata-lint/src/repo-metadata-schema.json): lists valid options for each field. * [API index](https://github.com/googleapis/googleapis/blob/master/api-index-v1.json): for gRPC libraries **api_shortname** should match the subdomain of an API's **hostName**. * Reach out to **go/github-automation** if you have any questions.
non_main
your repo metadata json file has a problem 🤒 you have a problem with your repo metadata json file result of scan 📈 api shortname apigee registry invalid in repo metadata json ☝️ once you address these problems you can close this issue need help lists valid options for each field for grpc libraries api shortname should match the subdomain of an api s hostname reach out to go github automation if you have any questions
0
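The lint failure above comes down to one key in `.repo-metadata.json`. Assuming the Apigee Registry API's hostName is `apigeeregistry.googleapis.com` (an assumption, not stated in the issue), the subdomain rule would make the corrected entry:

```json
{
  "api_shortname": "apigeeregistry"
}
```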
3,557
14,165,535,070
IssuesEvent
2020-11-12 07:25:14
casperstorm/ajour
https://api.github.com/repos/casperstorm/ajour
closed
Install addon from github repo
B - missing feature C - waiting on maintainer S - addons
**Is your feature request related to a problem? Please describe.** Sometimes addons stop being maintained by the original author, and people fork the repo and make fixes. It would be nice if we can install addons from these github repos and have Ajour manage them. **Describe the solution you'd like** Users click a button, then enter a URL to a git repo. Ajour clones the repo into the appropriate directory, and when there are new commits, pulls the repo to update the addon. I would be fine if this required people to have git installed on their computer first (for Windows). **Describe alternatives you've considered** Alternatively it could use the github API to download the source instead. This approach means that the user doesn't need to have git installed and Ajour doesn't have to use git apis. The advantage of using git is that it would work for *any* git repo, not just Github ones. **Additional context** This was prompted by [DialogKey](https://github.com/TonyRaccoon/wow-dialogkey) which hasn't been updated in a year. Fixes have been made in [other repos](https://github.com/Kayakflo/wow-dialogkey) but I do not want to manually install these if I don't have to. In general, I worry that the wow addon system is too centrally managed and being able to install from a git clone would decentralize it.
True
Install addon from github repo - **Is your feature request related to a problem? Please describe.** Sometimes addons stop being maintained by the original author, and people fork the repo and make fixes. It would be nice if we can install addons from these github repos and have Ajour manage them. **Describe the solution you'd like** Users click a button, then enter a URL to a git repo. Ajour clones the repo into the appropriate directory, and when there are new commits, pulls the repo to update the addon. I would be fine if this required people to have git installed on their computer first (for Windows). **Describe alternatives you've considered** Alternatively it could use the github API to download the source instead. This approach means that the user doesn't need to have git installed and Ajour doesn't have to use git apis. The advantage of using git is that it would work for *any* git repo, not just Github ones. **Additional context** This was prompted by [DialogKey](https://github.com/TonyRaccoon/wow-dialogkey) which hasn't been updated in a year. Fixes have been made in [other repos](https://github.com/Kayakflo/wow-dialogkey) but I do not want to manually install these if I don't have to. In general, I worry that the wow addon system is too centrally managed and being able to install from a git clone would decentralize it.
main
install addon from github repo is your feature request related to a problem please describe sometimes addons stop being maintained by the original author and people fork the repo and make fixes it would be nice if we can install addons from these github repos and have ajour manage them describe the solution you d like users click a button then enter a url to a git repo ajour clones the repo into the appropriate directory and when there are new commits pulls the repo to update the addon i would be fine if this required people to have git installed on their computer first for windows describe alternatives you ve considered alternatively it could use the github api to download the source instead this approach means that the user doesn t need to have git installed and ajour doesn t have to use git apis the advantage of using git is that it would work for any git repo not just github ones additional context this was prompted by which hasn t been updated in a year fixes have been made in but i do not want to manually install these if i don t have to in general i worry that the wow addon system is too centrally managed and being able to install from a git clone would decentralize it
1
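Below is a minimal sketch of the clone-then-pull flow the Ajour request above describes, shelling out to a git binary the user already has installed. Ajour itself is written in Rust; this Go version only illustrates the logic, and the directory and repository names are examples, not Ajour API.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

// syncAddon clones repoURL into addonsDir/name on first run and
// fast-forwards to the latest commits on every run after that.
func syncAddon(addonsDir, name, repoURL string) error {
	dest := filepath.Join(addonsDir, name)
	if _, err := os.Stat(filepath.Join(dest, ".git")); os.IsNotExist(err) {
		// First install: clone the repository (shallow is enough for updates).
		cmd := exec.Command("git", "clone", "--depth", "1", repoURL, dest)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		return cmd.Run()
	}
	// Already installed: pull new commits, refusing non-fast-forward merges.
	cmd := exec.Command("git", "-C", dest, "pull", "--ff-only")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	err := syncAddon("Interface/AddOns", "wow-dialogkey",
		"https://github.com/Kayakflo/wow-dialogkey")
	if err != nil {
		fmt.Fprintln(os.Stderr, "sync failed:", err)
	}
}
```

Using the system git keeps the manager agnostic to the hosting site, which matches the issue's point that this should work for any git repo, not just GitHub ones.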
422,744
28,456,310,797
IssuesEvent
2023-04-17 07:21:31
OpenDataGIS/ckanext-iepnb
https://api.github.com/repos/OpenDataGIS/ckanext-iepnb
closed
Update README.md
documentation
Be more specific in the documentation to specify exactly which lines need to be added to `schema.xml` in Solr. https://github.com/OpenDataGIS/ckanext-iepnb/blob/105c4667877169d65b73f2afe9ac6f740bceb662/README.md?plain=1#L56-L76 To: Modify the schema file on Solr (schema or managed schema) to add these fields (if they don't exist yet): ```xml <fields> ... <!-- IEPNB extra fields --> <field name="tag_uri" type="string" uninvertible="false" docValues="true" indexed="true" stored="true" multiValued="true"/> <field name="conforms_to" type="string" uninvertible="false" docValues="true" indexed="true" stored="true" multiValued="true"/> <field name="lineage_source" type="string" uninvertible="false" docValues="true" indexed="true" stored="true" multiValued="true"/> <field name="lineage_process_step" type="string" uninvertible="false" docValues="true" indexed="true" stored="true" multiValued="true"/> <field name="reference" type="string" uninvertible="false" docValues="true" indexed="true" stored="true" multiValued="true"/> <field name="theme" type="string" uninvertible="false" docValues="true" indexed="true" stored="true" multiValued="true"/> <field name="resource_relation" type="string" uninvertible="false" docValues="true" indexed="true" stored="true" multiValued="true"/> </fields> ``` >**Note** >Be sure to restart Solr after modifying the schema
1.0
Update README.md - Be more specific in the documentation to specify exactly which lines need to be added to `schema.xml` in Solr. https://github.com/OpenDataGIS/ckanext-iepnb/blob/105c4667877169d65b73f2afe9ac6f740bceb662/README.md?plain=1#L56-L76 To: Modify the schema file on Solr (schema or managed schema) to add these fields (if they don't exist yet): ```xml <fields> ... <!-- IEPNB extra fields --> <field name="tag_uri" type="string" uninvertible="false" docValues="true" indexed="true" stored="true" multiValued="true"/> <field name="conforms_to" type="string" uninvertible="false" docValues="true" indexed="true" stored="true" multiValued="true"/> <field name="lineage_source" type="string" uninvertible="false" docValues="true" indexed="true" stored="true" multiValued="true"/> <field name="lineage_process_step" type="string" uninvertible="false" docValues="true" indexed="true" stored="true" multiValued="true"/> <field name="reference" type="string" uninvertible="false" docValues="true" indexed="true" stored="true" multiValued="true"/> <field name="theme" type="string" uninvertible="false" docValues="true" indexed="true" stored="true" multiValued="true"/> <field name="resource_relation" type="string" uninvertible="false" docValues="true" indexed="true" stored="true" multiValued="true"/> </fields> ``` >**Note** >Be sure to restart Solr after modifying the schema
non_main
update readme md be more specific in the documentation to specify exactly which lines need to be added to schema xml in solr to modify the schema file on solr schema or managed schema to add this fields if they dont exist yet xml note be sure to restart solr after modify the schema
0
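The Solr change above can also be scripted instead of hand-editing the schema file. A minimal sketch using Solr's Schema API, which only works with a managed schema; the core name in the URL is an assumption, and the `uninvertible` attribute from the issue is omitted:

```python
import requests

SOLR_SCHEMA_URL = "http://localhost:8983/solr/ckan/schema"  # core name is an assumption

IEPNB_FIELDS = [
    "tag_uri", "conforms_to", "lineage_source", "lineage_process_step",
    "reference", "theme", "resource_relation",
]

def add_iepnb_fields() -> None:
    # One "add-field" command per field, matching the attributes in the issue.
    payload = {
        "add-field": [
            {"name": n, "type": "string", "docValues": True,
             "indexed": True, "stored": True, "multiValued": True}
            for n in IEPNB_FIELDS
        ]
    }
    requests.post(SOLR_SCHEMA_URL, json=payload, timeout=30).raise_for_status()
```

A restart (or core reload) is still advisable afterwards, as the record's note says.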
53,252
6,306,259,746
IssuesEvent
2017-07-21 20:31:16
geoffhumphrey/brewcompetitiononlineentry
https://api.github.com/repos/geoffhumphrey/brewcompetitiononlineentry
closed
Enhancement: Include Mini-BoS column in Participant Summary Report
enhancement in latest master commit v2.1.10
Enhancement Request to add a column to the Participant Summary Report to show if the entry went to the second round. I was able to use the minibos_check function to make the check and added the column. My column header is "2nd Round" and I put a "Y" or "N" in the entry row. Thanks.
1.0
Enhancement: Include Mini-BoS column in Participant Summary Report - Enhancement Request to add a column to the Participant Summary Report to show if the entry went to the second round. I was able to use the minibos_check function to make the check and added the column. My column header is "2nd Round" and I put a "Y" or "N" in the entry row. Thanks.
non_main
enhancement include mini bos column in participant summary report enhancement request to add a column to the participant summary report to show if the entry went to the second round i was able to use the minibos check function to make the check and added the column my column header is round and i put a y or n in the entry row thanks
0
1,106
4,981,802,962
IssuesEvent
2016-12-07 09:18:36
tgstation/tgstation
https://api.github.com/repos/tgstation/tgstation
closed
The number of maps is becoming a maintainability issue
Maintainability - Hinders improvements - Not a bug
> change a single item path > have to update a dozen maps or more > merge conflicts forever Really sad we lost the simple animal bots to this. We either need to cut it down to focus on a couple core maps or have some kind of priority merging for PRs that touch several maps.
True
The number of maps is becoming a maintainability issue - > change a single item path > have to update a dozen maps or more > merge conflicts forever Really sad we lost the simple animal bots to this. We either need to cut it down to focus on a couple core maps or have some kind of priority merging for PRs that touch several maps.
main
the number of maps is becoming a maintainability issue change a single item path have to update a dozen maps or more merge conflicts forever really sad we lost the simple animal bots to this we either need to cut it down to focus on a couple core maps or have some kind of priority merging for prs that touch several maps
1
806
4,425,654,897
IssuesEvent
2016-08-16 16:01:08
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
apt_key always fails to import a subkey
bug_report waiting_on_maintainer
<!--- Verify first that your issue/request is not already reported in GitHub --> ##### ISSUE TYPE <!--- Pick one below and delete the rest: --> - Bug Report ##### COMPONENT NAME <!--- Name of the plugin/module/task --> apt_key ##### ANSIBLE VERSION <!--- Paste verbatim output from “ansible --version” between quotes below --> ``` ansible 2.1.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION <!--- Mention any settings you have changed/added/removed in ansible.cfg (or using the ANSIBLE_* environment variables). --> ##### OS / ENVIRONMENT <!--- Mention the OS you are running Ansible from, and the OS you are managing, or say “N/A” for anything that is not platform-specific. --> host OS: Ubuntu 14.04 controlled nodes' OS: Ubuntu 14.04 ##### SUMMARY <!--- Explain the problem briefly --> Importing a (sign only) subkey with apt_key always fails; however, the actual keyring gets created and contains the correct (sub)key. ##### STEPS TO REPRODUCE <!--- For bugs, show exactly how to reproduce the problem. For new features, show how the feature would be used. --> <!--- Paste example playbooks or commands between quotes below --> ``` $ cat >> aptrepo.yml <<EOF --- - hosts: all become: True tasks: - name: import custom repo keyring apt_key: > id=A254F5F0 keyserver=keyserver.ubuntu.com EOF $ ansible-playbook aptrepo.yml ``` <!--- You can also paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- What did you expect to happen when running the steps above? --> The specified (sub)key gets successfully imported, ansible returns exit code 0 (success) ##### ACTUAL RESULTS <!--- What actually happened? If possible run with extra verbosity (-vvvv) --> <!--- Paste verbatim command output between quotes below --> ``` fatal: [saceph-osd2.maas]: FAILED! => {"changed": false, "failed": true, "id": "A254F5F0", "msg": "key does not seem to have been added"} fatal: [saceph-osd1.maas]: FAILED! => {"changed": false, "failed": true, "id": "A254F5F0", "msg": "key does not seem to have been added"} fatal: [saceph-osd3.maas]: FAILED! => {"changed": false, "failed": true, "id": "A254F5F0", "msg": "key does not seem to have been added"} However the key has been successfully imported: $ ansible all -m shell -a 'apt-key --keyring /etc/apt/trusted.gpg.d/ceph.gpg adv --list-public-keys | grep -e A254F5F0' saceph-osd2.maas | SUCCESS | rc=0 >> sub 4096R/A254F5F0 2016-07-29 [expires: 2026-07-27] saceph-osd1.maas | SUCCESS | rc=0 >> sub 4096R/A254F5F0 2016-07-29 [expires: 2026-07-27] saceph-osd3.maas | SUCCESS | rc=0 >> sub 4096R/A254F5F0 2016-07-29 [expires: 2026-07-27] ```
True
apt_key always fails to import a subkey - <!--- Verify first that your issue/request is not already reported in GitHub --> ##### ISSUE TYPE <!--- Pick one below and delete the rest: --> - Bug Report ##### COMPONENT NAME <!--- Name of the plugin/module/task --> apt_key ##### ANSIBLE VERSION <!--- Paste verbatim output from “ansible --version” between quotes below --> ``` ansible 2.1.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION <!--- Mention any settings you have changed/added/removed in ansible.cfg (or using the ANSIBLE_* environment variables). --> ##### OS / ENVIRONMENT <!--- Mention the OS you are running Ansible from, and the OS you are managing, or say “N/A” for anything that is not platform-specific. --> host OS: Ubuntu 14.04 controlled nodes' OS: Ubuntu 14.04 ##### SUMMARY <!--- Explain the problem briefly --> Importing a (sign only) subkey with apt_key always fails; however, the actual keyring gets created and contains the correct (sub)key. ##### STEPS TO REPRODUCE <!--- For bugs, show exactly how to reproduce the problem. For new features, show how the feature would be used. --> <!--- Paste example playbooks or commands between quotes below --> ``` $ cat >> aptrepo.yml <<EOF --- - hosts: all become: True tasks: - name: import custom repo keyring apt_key: > id=A254F5F0 keyserver=keyserver.ubuntu.com EOF $ ansible-playbook aptrepo.yml ``` <!--- You can also paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- What did you expect to happen when running the steps above? --> The specified (sub)key gets successfully imported, ansible returns exit code 0 (success) ##### ACTUAL RESULTS <!--- What actually happened? If possible run with extra verbosity (-vvvv) --> <!--- Paste verbatim command output between quotes below --> ``` fatal: [saceph-osd2.maas]: FAILED! => {"changed": false, "failed": true, "id": "A254F5F0", "msg": "key does not seem to have been added"} fatal: [saceph-osd1.maas]: FAILED! => {"changed": false, "failed": true, "id": "A254F5F0", "msg": "key does not seem to have been added"} fatal: [saceph-osd3.maas]: FAILED! => {"changed": false, "failed": true, "id": "A254F5F0", "msg": "key does not seem to have been added"} However the key has been successfully imported: $ ansible all -m shell -a 'apt-key --keyring /etc/apt/trusted.gpg.d/ceph.gpg adv --list-public-keys | grep -e A254F5F0' saceph-osd2.maas | SUCCESS | rc=0 >> sub 4096R/A254F5F0 2016-07-29 [expires: 2026-07-27] saceph-osd1.maas | SUCCESS | rc=0 >> sub 4096R/A254F5F0 2016-07-29 [expires: 2026-07-27] saceph-osd3.maas | SUCCESS | rc=0 >> sub 4096R/A254F5F0 2016-07-29 [expires: 2026-07-27] ```
main
apt key always fails to import a subkey issue type bug report component name apt key ansible version ansible configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific host os ubuntu controlled nodes os ubuntu summary importing a sign only subkey with apt key always fails however the actual keyring gets created and contains the correct sub key steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used cat aptrepo yml eof hosts all become true tasks name import custom repo keyring apt key id keyserver keyserver ubuntu com eof ansible playbook aptrepo yml expected results the specified sub key gets successfully imported ansible returns exit code success actual results fatal failed changed false failed true id msg key does not seem to have been added fatal failed changed false failed true id msg key does not seem to have been added fatal failed changed false failed true id msg key does not seem to have been added however the key has been successfully imported ansible all m shell a apt key keyring etc apt trusted gpg d ceph gpg adv list public keys grep e saceph maas success rc sub saceph maas success rc sub saceph maas success rc sub
1
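The manual verification at the end of the report (listing the keyring and grepping for the subkey id) is easy to automate. A minimal sketch, assuming `apt-key` is available on the managed host; the helper name is hypothetical:

```python
import subprocess

def key_in_keyring(key_id: str, keyring: str) -> bool:
    """Return True if key_id (including subkeys) appears in the apt keyring.

    Mirrors the manual check from the report.
    """
    result = subprocess.run(
        ["apt-key", "--keyring", keyring, "adv", "--list-public-keys"],
        capture_output=True, text=True, check=True,
    )
    return key_id in result.stdout

# usage, matching the report:
# key_in_keyring("A254F5F0", "/etc/apt/trusted.gpg.d/ceph.gpg")
```

A check like this, scanning subkeys as well as primary keys, is essentially what the module's "key does not seem to have been added" validation would need to do.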
5,864
31,801,255,545
IssuesEvent
2023-09-13 11:16:08
beyarkay/eskom-calendar
https://api.github.com/repos/beyarkay/eskom-calendar
opened
Missing area schedule
waiting-on-maintainer missing-area-schedule
**What area(s) couldn't you find on [eskomcalendar.co.za](https://eskomcalendar.co.za/ec)?** Buccleuch Outlying (16) **Where did you hear about [eskomcalendar.co.za](https://eskomcalendar.co.za/ec)?** A search on github for "loadshedding"
True
Missing area schedule - **What area(s) couldn't you find on [eskomcalendar.co.za](https://eskomcalendar.co.za/ec)?** Buccleuch Outlying (16) **Where did you hear about [eskomcalendar.co.za](https://eskomcalendar.co.za/ec)?** A search on github for "loadshedding"
main
missing area schedule what area s couldn t you find on buccleuch outlying where did you hear about a search on github for loadshedding
1
40,043
12,744,920,043
IssuesEvent
2020-06-26 13:23:22
RG4421/atlasdb
https://api.github.com/repos/RG4421/atlasdb
opened
CVE-2015-6420 (High) detected in commons-collections-3.2.1.jar
security vulnerability
## CVE-2015-6420 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-collections-3.2.1.jar</b></p></summary> <p>Types that extend and augment the Java Collections Framework.</p> <p>Path to dependency file: /tmp/ws-scm/atlasdb/leader-election-api/build.gradle</p> <p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/commons-collections/commons-collections/3.2.1/761ea405b9b37ced573d2df0d1e3a4e0f9edc668/commons-collections-3.2.1.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/commons-collections/commons-collections/3.2.1/761ea405b9b37ced573d2df0d1e3a4e0f9edc668/commons-collections-3.2.1.jar</p> <p> Dependency Hierarchy: - checkstyle-6.18.jar (Root Library) - commons-beanutils-1.9.2.jar - :x: **commons-collections-3.2.1.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/RG4421/atlasdb/commit/6c613675868440052ef3631d79eea71e4ab49c96">6c613675868440052ef3631d79eea71e4ab49c96</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Serialized-object interfaces in certain Cisco Collaboration and Social Media; Endpoint Clients and Client Software; Network Application, Service, and Acceleration; Network and Content Security Devices; Network Management and Provisioning; Routing and Switching - Enterprise and Service Provider; Unified Computing; Voice and Unified Communications Devices; Video, Streaming, TelePresence, and Transcoding Devices; Wireless; and Cisco Hosted Services products allow remote attackers to execute arbitrary commands via a crafted serialized Java object, related to the Apache Commons Collections (ACC) library. 
<p>Publish Date: 2015-12-15 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-6420>CVE-2015-6420</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics not available</p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-6420">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-6420</a></p> <p>Release Date: 2015-12-15</p> <p>Fix Resolution: org.apache.commons:commons-collections4:4.1</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"commons-collections","packageName":"commons-collections","packageVersion":"3.2.1","isTransitiveDependency":true,"dependencyTree":"com.puppycrawl.tools:checkstyle:6.18;commons-beanutils:commons-beanutils:1.9.2;commons-collections:commons-collections:3.2.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.apache.commons:commons-collections4:4.1"}],"vulnerabilityIdentifier":"CVE-2015-6420","vulnerabilityDetails":"Serialized-object interfaces in certain Cisco Collaboration and Social Media; Endpoint Clients and Client Software; Network Application, Service, and Acceleration; Network and Content Security Devices; Network Management and Provisioning; Routing and Switching - Enterprise and Service Provider; Unified Computing; Voice and Unified Communications Devices; Video, Streaming, TelePresence, and Transcoding Devices; Wireless; and Cisco Hosted Services products allow remote attackers to execute arbitrary commands via a crafted serialized Java object, related to the Apache Commons Collections (ACC) library.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-6420","cvss2Severity":"high","cvss2Score":"7.5","extraData":{}}</REMEDIATE> -->
True
CVE-2015-6420 (High) detected in commons-collections-3.2.1.jar - ## CVE-2015-6420 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-collections-3.2.1.jar</b></p></summary> <p>Types that extend and augment the Java Collections Framework.</p> <p>Path to dependency file: /tmp/ws-scm/atlasdb/leader-election-api/build.gradle</p> <p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/commons-collections/commons-collections/3.2.1/761ea405b9b37ced573d2df0d1e3a4e0f9edc668/commons-collections-3.2.1.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/commons-collections/commons-collections/3.2.1/761ea405b9b37ced573d2df0d1e3a4e0f9edc668/commons-collections-3.2.1.jar</p> <p> Dependency Hierarchy: - checkstyle-6.18.jar (Root Library) - commons-beanutils-1.9.2.jar - :x: **commons-collections-3.2.1.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/RG4421/atlasdb/commit/6c613675868440052ef3631d79eea71e4ab49c96">6c613675868440052ef3631d79eea71e4ab49c96</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Serialized-object interfaces in certain Cisco Collaboration and Social Media; Endpoint Clients and Client Software; Network Application, Service, and Acceleration; Network and Content Security Devices; Network Management and Provisioning; Routing and Switching - Enterprise and Service Provider; Unified Computing; Voice and Unified Communications Devices; Video, Streaming, TelePresence, and Transcoding Devices; Wireless; and Cisco Hosted Services products allow remote attackers to execute arbitrary commands via a crafted serialized Java object, related to the Apache Commons Collections (ACC) library. 
<p>Publish Date: 2015-12-15 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-6420>CVE-2015-6420</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics not available</p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-6420">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-6420</a></p> <p>Release Date: 2015-12-15</p> <p>Fix Resolution: org.apache.commons:commons-collections4:4.1</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"commons-collections","packageName":"commons-collections","packageVersion":"3.2.1","isTransitiveDependency":true,"dependencyTree":"com.puppycrawl.tools:checkstyle:6.18;commons-beanutils:commons-beanutils:1.9.2;commons-collections:commons-collections:3.2.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.apache.commons:commons-collections4:4.1"}],"vulnerabilityIdentifier":"CVE-2015-6420","vulnerabilityDetails":"Serialized-object interfaces in certain Cisco Collaboration and Social Media; Endpoint Clients and Client Software; Network Application, Service, and Acceleration; Network and Content Security Devices; Network Management and Provisioning; Routing and Switching - Enterprise and Service Provider; Unified Computing; Voice and Unified Communications Devices; Video, Streaming, TelePresence, and Transcoding Devices; Wireless; and Cisco Hosted Services products allow remote attackers to execute arbitrary commands via a crafted serialized Java object, related to the Apache Commons Collections (ACC) library.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-6420","cvss2Severity":"high","cvss2Score":"7.5","extraData":{}}</REMEDIATE> -->
non_main
cve high detected in commons collections jar cve high severity vulnerability vulnerable library commons collections jar types that extend and augment the java collections framework path to dependency file tmp ws scm atlasdb leader election api build gradle path to vulnerable library home wss scanner gradle caches modules files commons collections commons collections commons collections jar home wss scanner gradle caches modules files commons collections commons collections commons collections jar dependency hierarchy checkstyle jar root library commons beanutils jar x commons collections jar vulnerable library found in head commit a href vulnerability details serialized object interfaces in certain cisco collaboration and social media endpoint clients and client software network application service and acceleration network and content security devices network management and provisioning routing and switching enterprise and service provider unified computing voice and unified communications devices video streaming telepresence and transcoding devices wireless and cisco hosted services products allow remote attackers to execute arbitrary commands via a crafted serialized java object related to the apache commons collections acc library publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution org apache commons commons isopenpronvulnerability true ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails serialized object interfaces in certain cisco collaboration and social media endpoint clients and client software network application service and acceleration network and content security devices network management and provisioning routing and switching enterprise and service provider unified computing voice and unified communications devices video streaming telepresence and transcoding devices wireless and cisco hosted services products allow remote attackers to execute arbitrary commands via a crafted serialized java object related to the apache commons collections acc library vulnerabilityurl
0
399
3,442,636,459
IssuesEvent
2015-12-14 23:30:25
tethysplatform/tethys
https://api.github.com/repos/tethysplatform/tethys
closed
Upgrade Django prior to Release and check for bugs
maintain dependencies
This really needs to happen, because 1.7 has been deprecated now...
True
Upgrade Django prior to Release and check for bugs - This really needs to happen, because 1.7 has been deprecated now...
main
upgrade django prior to release and check for bugs this really needs to happen because has been deprecated now
1
45,290
13,109,051,537
IssuesEvent
2020-08-04 17:57:44
phunware/Specs
https://api.github.com/repos/phunware/Specs
opened
CVE-2014-10077 (High) detected in i18n-0.6.9.gem
security vulnerability
## CVE-2014-10077 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>i18n-0.6.9.gem</b></p></summary> <p>New wave Internationalization support for Ruby.</p> <p>Library home page: <a href="https://rubygems.org/gems/i18n-0.6.9.gem">https://rubygems.org/gems/i18n-0.6.9.gem</a></p> <p> Dependency Hierarchy: - activesupport-3.2.17.gem (Root Library) - :x: **i18n-0.6.9.gem** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/phunware/Specs/commit/4c3be2446b24f30f76a7c91b820ee65e33719cfe">4c3be2446b24f30f76a7c91b820ee65e33719cfe</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Hash#slice in lib/i18n/core_ext/hash.rb in the i18n gem before 0.8.0 for Ruby allows remote attackers to cause a denial of service (application crash) via a call in a situation where :some_key is present in keep_keys but not present in the hash. <p>Publish Date: 2018-11-06 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2014-10077>CVE-2014-10077</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2014-10077">https://nvd.nist.gov/vuln/detail/CVE-2014-10077</a></p> <p>Release Date: 2018-11-06</p> <p>Fix Resolution: 0.8.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2014-10077 (High) detected in i18n-0.6.9.gem - ## CVE-2014-10077 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>i18n-0.6.9.gem</b></p></summary> <p>New wave Internationalization support for Ruby.</p> <p>Library home page: <a href="https://rubygems.org/gems/i18n-0.6.9.gem">https://rubygems.org/gems/i18n-0.6.9.gem</a></p> <p> Dependency Hierarchy: - activesupport-3.2.17.gem (Root Library) - :x: **i18n-0.6.9.gem** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/phunware/Specs/commit/4c3be2446b24f30f76a7c91b820ee65e33719cfe">4c3be2446b24f30f76a7c91b820ee65e33719cfe</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Hash#slice in lib/i18n/core_ext/hash.rb in the i18n gem before 0.8.0 for Ruby allows remote attackers to cause a denial of service (application crash) via a call in a situation where :some_key is present in keep_keys but not present in the hash. <p>Publish Date: 2018-11-06 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2014-10077>CVE-2014-10077</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2014-10077">https://nvd.nist.gov/vuln/detail/CVE-2014-10077</a></p> <p>Release Date: 2018-11-06</p> <p>Fix Resolution: 0.8.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_main
cve high detected in gem cve high severity vulnerability vulnerable library gem new wave internationalization support for ruby library home page a href dependency hierarchy activesupport gem root library x gem vulnerable library found in head commit a href vulnerability details hash slice in lib core ext hash rb in the gem before for ruby allows remote attackers to cause a denial of service application crash via a call in a situation where some key is present in keep keys but not present in the hash publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
2,743
9,769,562,074
IssuesEvent
2019-06-06 08:52:08
zaproxy/zaproxy
https://api.github.com/repos/zaproxy/zaproxy
closed
Provide universal formatter / code formatting guidelines
Maintainability Type-Task
It would be great if the zap team could provide an eclipse code formatter file which can be imported to eclipse. So everyone has the same code style when doing Code -> Format in eclipse. No more whitespace issues in pull requests!
True
Provide universal formatter / code formatting guidelines - It would be great if the zap team could provide an eclipse code formatter file which can be imported to eclipse. So everyone has the same code style when doing Code -> Format in eclipse. No more whitespace issues in pull requests!
main
provide universal formatter code formatting guidelines it would be great if the zap team could provide an eclipse code formatter file which can be imported to eclipse so everyone has the same code style when doing code format in eclipse no more whitespace issues in pull requests
1
166,971
6,329,584,565
IssuesEvent
2017-07-26 03:38:47
openaddresses/openaddresses
https://api.github.com/repos/openaddresses/openaddresses
closed
Add addresses in Minot, ND
data-priority-4 ready for PR Reviewed
Status: - [ ] **Research sources** and propose best source (and it's data license) - [ ] **Review proposed source** - [ ] **Approve source** based on review comments - [ ] **Add** to OpenAddresses _NOTE: Proposed source notes should include basic column mapping to OpenAddress data model._
1.0
Add addresses in Minot, ND - Status: - [ ] **Research sources** and propose best source (and it's data license) - [ ] **Review proposed source** - [ ] **Approve source** based on review comments - [ ] **Add** to OpenAddresses _NOTE: Proposed source notes should include basic column mapping to OpenAddress data model._
non_main
add addresses in minot nd status research sources and propose best source and it s data license review proposed source approve source based on review comments add to openaddresses note proposed source notes should include basic column mapping to openaddress data model
0
1,632
6,572,657,093
IssuesEvent
2017-09-11 04:08:28
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
s3: verify checksum after get operation
affects_2.0 aws cloud feature_idea waiting_on_maintainer
##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME s3 ##### ANSIBLE VERSION ``` ansible --version ansible 2.0.1.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION no change ##### OS / ENVIRONMENT OS X host, ubuntu trusty VM in virtualbox ##### SUMMARY The S3 GET command should verify the checksum of a newly downloaded file to ensure it was successful. Currently, the there is no verification and the file is assumed to have been retrieved successfully. The existing `keysum()` routine in cloud/amazon/s3.py should be sufficient, and failures could be handled by the existing `retries` parameter. ##### STEPS TO REPRODUCE Not sure exactly how to reproduce, as the download failure does not happen every time. ##### EXPECTED RESULTS The s3 module downloads a file (~515MB) and reports success, but then a subsequent step that attempts to unarchive that file fails. I expect the download step to fail if it did not successfully download the uncorrupted file from S3. ##### OBSERVED RESULTS I verified manually that the unarchive operation failed and that the checksum for the downloaded file did not match the checksum on s3. The `overwrite` param is set to `overwrite=different`, I run `vagrant provision` and indeed see the file get re-downloaded. This time it has no trouble, and unarchiving is successful. (side note: the error for `unarchive` is misleading in this case - it says `"Failed to find handler for \"somefile.tar\". Make sure the required command to extract the file is installed."` The handler is installed, the extraction step is what failed)
True
s3: verify checksum after get operation - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME s3 ##### ANSIBLE VERSION ``` ansible --version ansible 2.0.1.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION no change ##### OS / ENVIRONMENT OS X host, ubuntu trusty VM in virtualbox ##### SUMMARY The S3 GET command should verify the checksum of a newly downloaded file to ensure it was successful. Currently, the there is no verification and the file is assumed to have been retrieved successfully. The existing `keysum()` routine in cloud/amazon/s3.py should be sufficient, and failures could be handled by the existing `retries` parameter. ##### STEPS TO REPRODUCE Not sure exactly how to reproduce, as the download failure does not happen every time. ##### EXPECTED RESULTS The s3 module downloads a file (~515MB) and reports success, but then a subsequent step that attempts to unarchive that file fails. I expect the download step to fail if it did not successfully download the uncorrupted file from S3. ##### OBSERVED RESULTS I verified manually that the unarchive operation failed and that the checksum for the downloaded file did not match the checksum on s3. The `overwrite` param is set to `overwrite=different`, I run `vagrant provision` and indeed see the file get re-downloaded. This time it has no trouble, and unarchiving is successful. (side note: the error for `unarchive` is misleading in this case - it says `"Failed to find handler for \"somefile.tar\". Make sure the required command to extract the file is installed."` The handler is installed, the extraction step is what failed)
main
verify checksum after get operation issue type feature idea component name ansible version ansible version ansible config file configured module search path default w o overrides configuration no change os environment os x host ubuntu trusty vm in virtualbox summary the get command should verify the checksum of a newly downloaded file to ensure it was successful currently the there is no verification and the file is assumed to have been retrieved successfully the existing keysum routine in cloud amazon py should be sufficient and failures could be handled by the existing retries parameter steps to reproduce not sure exactly how to reproduce as the download failure does not happen every time expected results the module downloads a file and reports success but then a subsequent step that attempts to unarchive that file fails i expect the download step to fail if it did not successfully download the uncorrupted file from observed results i verified manually that the unarchive operation failed and that the checksum for the downloaded file did not match the checksum on the overwrite param is set to overwrite different i run vagrant provision and indeed see the file get re downloaded this time it has no trouble and unarchiving is successful side note the error for unarchive is misleading in this case it says failed to find handler for somefile tar make sure the required command to extract the file is installed the handler is installed the extraction step is what failed
1
4,742
24,467,131,714
IssuesEvent
2022-10-07 15:59:11
FairRootGroup/FairRoot
https://api.github.com/repos/FairRootGroup/FairRoot
closed
Create a reset method for class FairRun
feature need-feedback:maintainer
**Motivation:** In the current algorithm, the base singleton class FairRun cannot be reinstantiated for another purpose. For example, suppose we want a program that runs the simulation using FairSimRun and then, once it's finished, runs an analysis using FairAnaRun. This is impossible because FairAnaRun cannot be instantiated since its base class FairRun has already been instantiated when running FairSimRun. Even if we delete FairRunSim at the end of the simulation, we would still get a segmentation violation because the FairRun class simply cannot be reinstantiated. **Describe the solution you'd like** Add a `ResetInstance()` method inside the class, like: ```cpp static void ResetInstance() { fInstance = 0; } ``` If this is implemented, the following code would work: ```cpp auto sim = new FairSimRun(); // simulation starts .... // simulation ends delete sim; FairRun::ResetInstance(); // analysis begins: auto ana = new FairAnaRun(); ``` It could also be done by resetting the instance of FairRun in its destructor: ```cpp FairRun::~FairRun() { LOG(debug) << "Enter Destructor of FairRun"; delete fTask; // There is another tasklist in MCApplication, // but this should be independent delete fRtdb; // who is responsible for the RuntimeDataBase delete fEvtHeader; fInstance = 0; // reset the pointer to 0; LOG(debug) << "Leave Destructor of FairRun"; } ```
True
Create a reset method for class FairRun - **Motivation:** In the current algorithm, the base singleton class FairRun cannot be reinstantiated for another purpose. For example, suppose we want a program that runs the simulation using FairSimRun and then, once it's finished, runs an analysis using FairAnaRun. This is impossible because FairAnaRun cannot be instantiated since its base class FairRun has already been instantiated when running FairSimRun. Even if we delete FairRunSim at the end of the simulation, we would still get a segmentation violation because the FairRun class simply cannot be reinstantiated. **Describe the solution you'd like** Add a `ResetInstance()` method inside the class, like: ```cpp static void ResetInstance() { fInstance = 0; } ``` If this is implemented, the following code would work: ```cpp auto sim = new FairSimRun(); // simulation starts .... // simulation ends delete sim; FairRun::ResetInstance(); // analysis begins: auto ana = new FairAnaRun(); ``` It could also be done by resetting the instance of FairRun in its destructor: ```cpp FairRun::~FairRun() { LOG(debug) << "Enter Destructor of FairRun"; delete fTask; // There is another tasklist in MCApplication, // but this should be independent delete fRtdb; // who is responsible for the RuntimeDataBase delete fEvtHeader; fInstance = 0; // reset the pointer to 0; LOG(debug) << "Leave Destructor of FairRun"; } ```
main
create a reset method for class fairrun motivation in the current algorithm the base singleton class fairrun cannot be reinstantiated for another purpose for example suppose we want a program that runs the simulation using fairsimrun and then once it s finished runs an analysis using fairanarun this is impossible because fairanarun cannot be instantiated since its base class fairrun has already been instantiated when running fairsimrun even if we delete fairrunsim at the end of the simulation we would still get a segmentation violation because the fairrun class simply cannot be reinstantiated describe the solution you d like add a resetinstance method inside the class like cpp static void resetinstance finstance if this is implemented the following code would work cpp auto sim new fairsimrun simulation starts simulation ends delete sim fairrun resetinstance analysis begins auto ana new fairanarun it could also be done by resetting the instance of fairrun in its destructor cpp fairrun fairrun log debug enter destructor of fairrun delete ftask there is another tasklist in mcapplication but this should be independent delete frtdb who is responsible for the runtimedatabase delete fevtheader finstance reset the pointer to log debug leave destructor of fairrun
1
2,764
9,872,959,538
IssuesEvent
2019-06-22 09:49:54
arcticicestudio/snowsaw
https://api.github.com/repos/arcticicestudio/snowsaw
opened
git-crypt
context-config context-workflow scope-maintainability scope-security type-feature
<p align="center"><img src="https://user-images.githubusercontent.com/7836623/52774423-6d83c000-303d-11e9-83cb-4704419c63c6.png" width="25%"/></p> Integrate [git-crypt][] into the repository to allow encrypting specific files using [GPG][]. git-crypt is a stable, production-proven concept that works safely and allows transparent encryption with Git. _snowsaw_ will use it to encrypt files containing sensitive data like deployment, API or any other kind of secret keys. Another way would be to use [Circle CI's environment variables features][cci-env] to make sensitive data available during build time, but using git-crypt ensures that all required project data is stored in the repository and tracked by Git without the need to manually configure CI/CD providers and servers. ### Integration Steps - [ ] **1** Add files to `.gitattributes` and configure `filter` and `diff` to use the `git-crypt` setup - [ ] **2** Initialize `git-crypt` for the repository: `git-crypt init` (default key) - [ ] **3** Add the GPG keys of all core team members and the Nord theme CI/CD virtual user: `git-crypt add-gpg-user --trusted --no-commit <ID>` (the `--no-commit` flag prevents automatic commit of generated files while `--trusted` assumes the GPG user IDs are trusted) - [ ] **4** Commit the newly generated `.git-crypt` folder - [ ] **5** Unlock the repository: `git-crypt unlock` - [ ] **6** Ensure all target files are tracked to be encrypted: `git-crypt status` - [ ] **7** Commit all encrypted target files - [ ] **8** Validate the encryption works by locking the repository again: `git-crypt lock` [cci-env]: https://circleci.com/docs/2.0/env-vars [git-crypt]: https://github.com/AGWA/git-crypt [gpg]: https://wiki.archlinux.org/index.php/GnuPG
True
git-crypt - <p align="center"><img src="https://user-images.githubusercontent.com/7836623/52774423-6d83c000-303d-11e9-83cb-4704419c63c6.png" width="25%"/></p> Integrate [git-crypt][] into the repository to allow encrypting specific files using [GPG][]. git-crypt is a stable, production-proven concept that works safely and allows transparent encryption with Git. _snowsaw_ will use it to encrypt files containing sensitive data like deployment, API or any other kind of secret keys. Another way would be to use [Circle CI's environment variables features][cci-env] to make sensitive data available during build time, but using git-crypt ensures that all required project data is stored in the repository and tracked by Git without the need to manually configure CI/CD providers and servers. ### Integration Steps - [ ] **1** Add files to `.gitattributes` and configure `filter` and `diff` to use the `git-crypt` setup - [ ] **2** Initialize `git-crypt` for the repository: `git-crypt init` (default key) - [ ] **3** Add the GPG keys of all core team members and the Nord theme CI/CD virtual user: `git-crypt add-gpg-user --trusted --no-commit <ID>` (the `--no-commit` flag prevents automatic commit of generated files while `--trusted` assumes the GPG user IDs are trusted) - [ ] **4** Commit the newly generated `.git-crypt` folder - [ ] **5** Unlock the repository: `git-crypt unlock` - [ ] **6** Ensure all target files are tracked to be encrypted: `git-crypt status` - [ ] **7** Commit all encrypted target files - [ ] **8** Validate the encryption works by locking the repository again: `git-crypt lock` [cci-env]: https://circleci.com/docs/2.0/env-vars [git-crypt]: https://github.com/AGWA/git-crypt [gpg]: https://wiki.archlinux.org/index.php/GnuPG
main
git crypt integrate into the repository to allow encrypting specific files using git crypt is a stable production proven concept that works safely and allows transparent encryption with git snowsaw will use it to encrypt files containing sensitive data like deployment api or any other kind of secret keys another way would be to use to make sensitive data available during build time but using git crypt ensures that all required project data is stored in the repository and tracked by git without the need to manually configure ci cd providers and servers integration steps add files to gitattributes and configure filter and diff to use the git crypt setup initialize git crypt for the repository git crypt init default key add the gpg keys of all core team members and the nord theme ci cd virtual user git crypt add gpg user trusted no commit the no commit flag prevents automatic commit of generated files while trusted assumes the gpg user ids are trusted commit the newly generated git crypt folder unlock the repository git crypt unlock ensure all target files are tracked to be encrypted git crypt status commit all encrypted target files validate the encryption works by locking the repository again git crypt lock
1
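Steps 2 and 3 of the checklist above are plain shell commands and can be wrapped for repeatable bootstrapping. A minimal sketch, assuming `git-crypt` is installed and run from the repository root; the wrapper name is hypothetical:

```python
import subprocess

def bootstrap_git_crypt(gpg_ids):
    """Run the init and add-gpg-user steps from the checklist."""
    subprocess.run(["git-crypt", "init"], check=True)  # step 2, default key
    for gpg_id in gpg_ids:  # step 3, one call per trusted key
        subprocess.run(
            ["git-crypt", "add-gpg-user", "--trusted", "--no-commit", gpg_id],
            check=True,
        )
```

The remaining steps (committing the generated `.git-crypt` folder, `unlock`, `status`, `lock`) stay manual since they involve review of what actually gets encrypted.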
5,066
25,945,181,193
IssuesEvent
2022-12-16 23:26:29
centerofci/mathesar
https://api.github.com/repos/centerofci/mathesar
closed
Make Links section within table explorer clickable
type: enhancement work: frontend status: ready restricted: maintainers
## Current behavior - The Table Inspector has a "Links" section which shows other tables ![image](https://user-images.githubusercontent.com/42411/202919488-903b16d5-d91e-46ba-8f20-f5a04b6d298b.png) - Clicking on these links does nothing ## Desired behavior - Each entry should be a hyperlink to the Table Page for that table. CC @rajatvijay
True
Make Links section within table explorer clickable - ## Current behavior - The Table Inspector has a "Links" section which shows other tables ![image](https://user-images.githubusercontent.com/42411/202919488-903b16d5-d91e-46ba-8f20-f5a04b6d298b.png) - Clicking on these links does nothing ## Desired behavior - Each entry should be a hyperlink to the Table Page for that table. CC @rajatvijay
main
make links section within table explorer clickable current behavior the table inspector has a links section which shows other tables clicking on these links does nothing desired behavior each entry should be a hyperlink to the table page for that table cc rajatvijay
1
3,544
13,962,766,867
IssuesEvent
2020-10-25 11:09:04
precice/tutorials
https://api.github.com/repos/precice/tutorials
opened
Add tutorial template
compatibility maintainability
It could be helpful to have a tutorial template to copy from for new tutorials. For contributors as they then have to do less. For maintainers as contributors are nudged towards consistency. Such a template could include: * a `README.md` with things a readme should / could include * an `INSTRUCTIONS.md` with all you have to do if you contribute a tutorial. The contributor should finally delete this file * a `precice-config.xml` with some different variants in there. Whatever you don't need, you can delete * an `Allclean` script, if there is no global one? * a `gitignore` if there is no global one * an `Allrun` script with different options in there again Could be placed directly on the top level. ``` * CHT * FSI * template ``` Opinions on this?
True
Add tutorial template - It could be helpful to have a tutorial template to copy from for new tutorials. For contributors as they then have to do less. For maintainers as contributors are nudged towards consistency. Such a template could include: * a `README.md` with things a readme should / could include * an `INSTRUCTIONS.md` with all you have to do if you contribute a tutorial. The contributor should finally delete this file * a `precice-config.xml` with some different variants in there. Whatever you don't need, you can delete * an `Allclean` script, if there is no global one? * a `gitignore` if there is no global one * an `Allrun` script with different options in there again Could be placed directly on the top level. ``` * CHT * FSI * template ``` Opinions on this?
main
add tutorial template it could be helpful to have a tutorial template to copy from for new tutorials for contributors as they then have to do less for maintainers as contributors are nudged towards consistency such a template could include a readme md with things a readme should could include an instructions md with all you have to do if you contribute a tutorial the contributor should finally delete this file a precice config xml with some different variants in there whatever you don t need you can delete an allclean script if there is no global one a gitignore if there is no global one an allrun script with different options in there again could be placed directly on the top level cht fsi template opinions on this
1
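The template the issue proposes is just a fixed set of files. A minimal scaffolding sketch, taking the file list from the issue; the helper name is hypothetical:

```python
from pathlib import Path

# file list taken from the issue; INSTRUCTIONS.md is deleted by the contributor
TEMPLATE_FILES = ["README.md", "INSTRUCTIONS.md", "precice-config.xml",
                  "Allclean", "Allrun", ".gitignore"]

def scaffold_tutorial(root: Path) -> None:
    """Create an empty tutorial skeleton at the given path."""
    root.mkdir(parents=True, exist_ok=True)
    for name in TEMPLATE_FILES:
        (root / name).touch(exist_ok=True)

# e.g. scaffold_tutorial(Path("template"))
```

In practice the checked-in `template/` directory alongside `CHT/` and `FSI/` would hold real example content rather than empty files.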
134,823
5,237,865,384
IssuesEvent
2017-01-31 01:34:19
graphcool/console
https://api.github.com/repos/graphcool/console
closed
Graphcool-themed GraphiQL
area/playground enhancement onhold priority/medium UI
The entire design of the console is amazing but when you click on the Playground, you're suddenly taken out of this integrated cohesive experience and thrown into an iOS 3 world. I wish you guys could fork GraphiQL and give it a design lift.
1.0
Graphcool-themed GraphiQL - The entire design of the console is amazing but when you click on the Playground, you're suddenly taken out of this integrated cohesive experience and thrown into an iOS 3 world. I wish you guys could fork GraphiQL and give it a design lift.
non_main
graphcool themed graphiql the entire design of the console is amazing but when you click on the playground you re suddenly taken out of this integrated cohesive experience and thrown into an ios world i wish you guys could fork graphiql and give it a design lift
0
280,840
21,315,207,136
IssuesEvent
2022-04-16 06:35:24
redpelican2108/pe
https://api.github.com/repos/redpelican2108/pe
opened
Date format not consistent
severity.VeryLow type.DocumentationBug
![image.png](https://raw.githubusercontent.com/redpelican2108/pe/main/files/b78286c6-fc3c-4730-9ce1-1ddbe890ebb7.png) Month in the format is a capital M, but year and date are in small letters. <!--session: 1650088094267-86dfb6d9-8554-40ab-b092-b978eb970c90--> <!--Version: Web v3.4.2-->
1.0
Date format not consistent - ![image.png](https://raw.githubusercontent.com/redpelican2108/pe/main/files/b78286c6-fc3c-4730-9ce1-1ddbe890ebb7.png) Month in the format is a capital M, but year and date are in small letters. <!--session: 1650088094267-86dfb6d9-8554-40ab-b092-b978eb970c90--> <!--Version: Web v3.4.2-->
non_main
date format not consistent month in the format is a capital m but year and date are in small letters
0
335,633
30,056,156,514
IssuesEvent
2023-06-28 06:58:52
vmiklos/osm-gimmisn
https://api.github.com/repos/vmiklos/osm-gimmisn
closed
Statistics on the cardinality of the invalid addr:city value list
enhancement needs testing confirmed
It would be good to have a T-14 or T-30 day (or weekly/monthly, whichever is most practical) chart that plots the cardinality of the addr:city values as a function of time
1.0
Statistics on the cardinality of the invalid addr:city value list - It would be good to have a T-14 or T-30 day (or weekly/monthly, whichever is most practical) chart that plots the cardinality of the addr:city values as a function of time
non_main
statistics on the cardinality of the invalid addr city value list it would be good to have a t or t day or weekly monthly whichever is most practical chart that plots the cardinality of the addr city values as a function of time
0
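The requested chart is a simple time series of list sizes over a trailing window. A minimal sketch with matplotlib, assuming the daily counts of invalid addr:city values are already available as parallel lists; the function name is hypothetical:

```python
import matplotlib.pyplot as plt

def plot_invalid_city_counts(dates, counts, window_days=30):
    """Plot the size of the invalid addr:city list over the last N days."""
    dates, counts = dates[-window_days:], counts[-window_days:]
    plt.plot(dates, counts, marker="o")
    plt.xlabel("date")
    plt.ylabel("invalid addr:city values")
    plt.title(f"Invalid addr:city list size, last {window_days} days")
    plt.tight_layout()
    plt.show()
```

Switching `window_days` between 14 and 30 covers the T-14/T-30 variants the issue mentions; weekly or monthly aggregation would just bucket the counts before plotting.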
29,297
8,318,244,153
IssuesEvent
2018-09-25 14:14:14
dart-lang/build
https://api.github.com/repos/dart-lang/build
closed
Change Builder.build to FutureOr<void>
P3 low next-breaking-release package: build type: enhancement type: question
Prepares for `FutureOr<void>`, and doesn't require adding `async` to your method body. ... though maybe all of this is not too important if post-Barback we can make methods sync?
1.0
Change Builder.build to FutureOr<void> - Prepares for `FutureOr<void>`, and doesn't require adding `async` to your method body. ... though maybe all of this is not too important if post-Barback we can make methods sync?
non_main
change builder build to futureor prepares for futureor and doesn t require adding async to your method body though maybe all of this is not too important if post barback we can make methods sync
0
72,468
9,594,208,006
IssuesEvent
2019-05-09 13:30:11
quarkusio/quarkus
https://api.github.com/repos/quarkusio/quarkus
closed
Container registry secrets link is broken
component:documentation
GETTING STARTED KNATIVE GUIDE[1] > Container registry secrets[2] link is broken. [1]https://quarkus.io/guides/getting-started-knative-guide [2]https://github.com/knative/docs/tree/master/serving/samples/build-private-repo-go#creating-a-dockerhub-push-credential
1.0
Container registry secrets link is broken - GETTING STARTED KNATIVE GUIDE[1] > Container registry secrets[2] link is broken. [1]https://quarkus.io/guides/getting-started-knative-guide [2]https://github.com/knative/docs/tree/master/serving/samples/build-private-repo-go#creating-a-dockerhub-push-credential
non_main
container registry secrets link is broken getting started knative guide container registry secrets link is broken
0
5,826
30,848,747,002
IssuesEvent
2023-08-02 15:16:48
jupyter-naas/awesome-notebooks
https://api.github.com/repos/jupyter-naas/awesome-notebooks
closed
LinkedIn - Send like to post
templates maintainer
This notebook will show how to send a like to a post on LinkedIn using Python and the LinkedIn API. It is useful for organizations to quickly interact with their followers.
True
LinkedIn - Send like to post - This notebook will show how to send a like to a post on LinkedIn using Python and the LinkedIn API. It is useful for organizations to quickly interact with their followers.
main
linkedin send like to post this notebook will show how to send a like to a post on linkedin using python and the linkedin api it is useful for organizations to quickly interact with their followers
1
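A minimal sketch of the notebook's core call, using `requests`. The endpoint, payload shape, and URNs below are assumptions based on LinkedIn's v2 Reactions API and should be checked against the current documentation before use:

```python
import requests

ACCESS_TOKEN = "..."                    # OAuth 2.0 token with the required scope (assumed)
ACTOR_URN = "urn:li:person:REPLACE_ME"  # hypothetical member URN

def send_like(post_urn: str) -> None:
    # Endpoint and payload shape are assumptions; verify against the
    # current LinkedIn Reactions API documentation.
    resp = requests.post(
        "https://api.linkedin.com/v2/reactions",
        params={"actor": ACTOR_URN},
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        json={"root": post_urn, "reactionType": "LIKE"},
    )
    resp.raise_for_status()

# e.g. send_like("urn:li:share:6999999999999999999")  # hypothetical post URN
```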
3,294
12,681,331,193
IssuesEvent
2020-06-19 15:12:49
chocolatey-community/chocolatey-package-requests
https://api.github.com/repos/chocolatey-community/chocolatey-package-requests
closed
RFP - Daemon Tools Lite
Status: Available For Maintainer(s)
## Checklist - [x] The package I am requesting does not already exist on https://chocolatey.org/packages; - [x] There is no open issue for this package; - [x] The issue title starts 'RFP - '; - [x] The download URL is public and not locked behind a paywall / login; ## Package Details Software project URL : https://www.daemon-tools.cc/products/dtLite Direct download URL for the software / installer : https://mirror07.daemon-tools.cc/getfile.php?p=http://na-us7.disc-tools.com/490c5175bff30bb11d317ca70e46a76f/DTLiteInstaller.exe Software summary / short description: DAEMON Tools Lite allows you to mount all known types of disc image files and emulates up to 4 DT + SCSI + HDD devices. It enables you to create images of your optical discs and access them via well-organized catalog. Made by Disc Soft Ltd.
True
RFP - Daemon Tools Lite - ## Checklist - [x] The package I am requesting does not already exist on https://chocolatey.org/packages; - [x] There is no open issue for this package; - [x] The issue title starts 'RFP - '; - [x] The download URL is public and not locked behind a paywall / login; ## Package Details Software project URL : https://www.daemon-tools.cc/products/dtLite Direct download URL for the software / installer : https://mirror07.daemon-tools.cc/getfile.php?p=http://na-us7.disc-tools.com/490c5175bff30bb11d317ca70e46a76f/DTLiteInstaller.exe Software summary / short description: DAEMON Tools Lite allows you to mount all known types of disc image files and emulates up to 4 DT + SCSI + HDD devices. It enables you to create images of your optical discs and access them via well-organized catalog. Made by Disc Soft Ltd.
main
rfp daemon tools lite checklist the package i am requesting does not already exist on there is no open issue for this package the issue title starts rfp the download url is public and not locked behind a paywall login package details software project url direct download url for the software installer software summary short description daemon tools lite allows you to mount all known types of disc image files and emulates up to dt scsi hdd devices it enables you to create images of your optical discs and access them via well organized catalog made by disc soft ltd
1
1,729
6,574,836,703
IssuesEvent
2017-09-11 14:14:29
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
group_by fails on integer values
affects_2.0 bug_report in progress waiting_on_maintainer
##### ISSUE TYPE - Bug Report ##### COMPONENT NAME group_by ##### ANSIBLE VERSION <!--- Paste verbatim output from “ansible --version” between quotes below --> ``` ansible 2.0.0.2 ``` ##### CONFIGURATION <!--- vanilla --> ##### OS / ENVIRONMENT <!--- n/a --> ##### SUMMARY if any key has a numeric value, the following is thrown: An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/ansible/executor/process/worker.py", line 114, in run self._shared_loader_obj, File "/usr/lib/python2.7/dist-packages/ansible/executor/task_executor.py", line 119, in run res = self._execute() File "/usr/lib/python2.7/dist-packages/ansible/executor/task_executor.py", line 401, in _execute result = self._handler.run(task_vars=variables) File "/usr/lib/python2.7/dist-packages/ansible/plugins/action/group_by.py", line 41, in run group_name = group_name.replace(' ','-') AttributeError: 'int' object has no attribute 'replace' ##### STEPS TO REPRODUCE inventory: ~~~ host1 ansible_ssh_host=127.0.0.1 groupkey=1 ~~~ ~~~ - hosts: all name: testing groups tasks: - group_by: key={{ groupkey }} - hosts: 1 tasks: - ping: ~~~ ##### EXPECTED RESULTS hosts in group ##### ACTUAL RESULTS ``` An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/ansible/executor/process/worker.py", line 114, in run self._shared_loader_obj, File "/usr/lib/python2.7/dist-packages/ansible/executor/task_executor.py", line 119, in run res = self._execute() File "/usr/lib/python2.7/dist-packages/ansible/executor/task_executor.py", line 401, in _execute result = self._handler.run(task_vars=variables) File "/usr/lib/python2.7/dist-packages/ansible/plugins/action/group_by.py", line 41, in run group_name = group_name.replace(' ','-') AttributeError: 'int' object has no attribute 'replace' ```
True
group_by fails on integer values - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME group_by ##### ANSIBLE VERSION <!--- Paste verbatim output from “ansible --version” between quotes below --> ``` ansible 2.0.0.2 ``` ##### CONFIGURATION <!--- vanilla --> ##### OS / ENVIRONMENT <!--- n/a --> ##### SUMMARY if any key has numeric value, following is thrown: An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/ansible/executor/process/worker.py", line 114, in run self._shared_loader_obj, File "/usr/lib/python2.7/dist-packages/ansible/executor/task_executor.py", line 119, in run res = self._execute() File "/usr/lib/python2.7/dist-packages/ansible/executor/task_executor.py", line 401, in _execute result = self._handler.run(task_vars=variables) File "/usr/lib/python2.7/dist-packages/ansible/plugins/action/group_by.py", line 41, in run group_name = group_name.replace(' ','-') AttributeError: 'int' object has no attribute 'replace' ##### STEPS TO REPRODUCE inventory: ~~~ host1 ansible_ssh_host=127.0.0.1 groupkey=1 ~~~ ~~~ - hosts: all name: testing groups tasks: - group_by: key={{ groupkey }} - hosts: 1 tasks: - ping: ~~~ ##### EXPECTED RESULTS hosts in group ##### ACTUAL RESULTS ``` An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/ansible/executor/process/worker.py", line 114, in run self._shared_loader_obj, File "/usr/lib/python2.7/dist-packages/ansible/executor/task_executor.py", line 119, in run res = self._execute() File "/usr/lib/python2.7/dist-packages/ansible/executor/task_executor.py", line 401, in _execute result = self._handler.run(task_vars=variables) File "/usr/lib/python2.7/dist-packages/ansible/plugins/action/group_by.py", line 41, in run group_name = group_name.replace(' ','-') AttributeError: 'int' object has no attribute 'replace' ```
main
group by fails on integer values issue type bug report component name group by ansible version ansible configuration vanilla os environment n a summary if any key has numeric value following is thrown an exception occurred during task execution the full traceback is traceback most recent call last file usr lib dist packages ansible executor process worker py line in run self shared loader obj file usr lib dist packages ansible executor task executor py line in run res self execute file usr lib dist packages ansible executor task executor py line in execute result self handler run task vars variables file usr lib dist packages ansible plugins action group by py line in run group name group name replace attributeerror int object has no attribute replace steps to reproduce inventory ansible ssh host groupkey hosts all name testing groups tasks group by key groupkey hosts tasks ping expected results hosts in group actual results an exception occurred during task execution the full traceback is traceback most recent call last file usr lib dist packages ansible executor process worker py line in run self shared loader obj file usr lib dist packages ansible executor task executor py line in run res self execute file usr lib dist packages ansible executor task executor py line in execute result self handler run task vars variables file usr lib dist packages ansible plugins action group by py line in run group name group name replace attributeerror int object has no attribute replace
1
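The traceback in the record above comes down to one call: `group_name.replace(' ','-')` assumes the rendered group key is a string, so an integer inventory value such as `groupkey=1` raises `AttributeError`. A minimal sketch of the defensive fix, using a hypothetical helper name (the real code lives in `ansible/plugins/action/group_by.py` and would use Ansible's own text helpers):

```python
# Hypothetical helper illustrating the fix: coerce the rendered key to text
# before calling str.replace(), so integer group keys no longer blow up.
def sanitize_group_name(group_name):
    # Ansible itself would use to_text(); plain str() keeps the sketch
    # dependency-free while showing the same idea.
    return str(group_name).replace(' ', '-')

assert sanitize_group_name(1) == '1'                        # integer key from groupkey=1
assert sanitize_group_name('web servers') == 'web-servers'  # original behaviour kept
```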
2,359
8,415,088,063
IssuesEvent
2018-10-13 10:47:00
chocolatey/chocolatey-package-requests
https://api.github.com/repos/chocolatey/chocolatey-package-requests
closed
RFP - Cutegram
Status: Available For Maintainer(s)
Cutegram is a free and open-source Telegram client for Linux, Windows, OS X and OpenBSD, focusing on user friendliness and compatibility with desktop environments. Cutegram uses Qt5, QML, libqtelegram, libappindication and AsemanQtTools technologies, plus the Faenza icon and Twitter emoji graphic sets. It's free and released under the GPLv3 license. http://aseman.co/en/products/cutegram/
True
RFP - Cutegram - Cutegram is a free and open-source Telegram client for Linux, Windows, OS X and OpenBSD, focusing on user friendliness and compatibility with desktop environments. Cutegram uses Qt5, QML, libqtelegram, libappindication and AsemanQtTools technologies, plus the Faenza icon and Twitter emoji graphic sets. It's free and released under the GPLv3 license. http://aseman.co/en/products/cutegram/
main
rfp cutegram cutegram is a free and open source telegram client for linux windows os x and openbsd focusing on user friendliness and compatibility with desktop environments cutegram uses qml libqtelegram libappindication and asemanqttools technologies plus the faenza icon and twitter emoji graphic sets it s free and released under license
1
4,754
24,510,457,313
IssuesEvent
2022-10-10 20:50:20
centerofci/mathesar
https://api.github.com/repos/centerofci/mathesar
closed
The columns endpoint results in a 500, possibly metadata related
type: bug work: backend status: ready restricted: maintainers
## Description Endpoint: `http://localhost:8000/api/db/v0/tables/<table_id>/columns/` ``` Environment: Request Method: GET Request URL: http://localhost:8000/api/db/v0/tables/5/columns/?limit=500 Django Version: 3.1.14 Python Version: 3.9.8 Installed Applications: ['django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'rest_framework', 'django_filters', 'django_property_filter', 'mathesar'] Installed Middleware: ['django.middleware.security.SecurityMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware'] Traceback (most recent call last): File "/code/mathesar/models/base.py", line 675, in __getattribute__ return super().__getattribute__(name) During handling of the above exception ('Column' object has no attribute 'primary_key'), another exception occurred: File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/base.py", line 1167, in __getattr__ return self._index[key] The above exception ('nspname') was the direct cause of the following exception: File "/code/mathesar/models/base.py", line 675, in __getattribute__ return super().__getattribute__(name) File "/code/mathesar/models/base.py", line 696, in _sa_column return self.table.sa_columns[self.name] File "/code/mathesar/models/base.py", line 362, in sa_columns return self._enriched_column_sa_table.columns File "/code/mathesar/models/base.py", line 350, in _enriched_column_sa_table table=self._sa_table, File "/code/mathesar/state/cached_property.py", line 62, in __get__ new_value = self.original_get_fn(instance) File "/code/mathesar/models/base.py", line 331, in _sa_table sa_table = reflect_table_from_oid( File "/code/db/tables/operations/select.py", line 23, in reflect_table_from_oid tables = reflect_tables_from_oids([oid], engine, metadata=metadata, connection_to_use=connection_to_use) File "/code/db/tables/operations/select.py", line 29, in reflect_tables_from_oids get_map_of_table_oid_to_schema_name_and_table_name( File "/code/db/tables/operations/select.py", line 59, in get_map_of_table_oid_to_schema_name_and_table_name select(pg_namespace.c.nspname, pg_class.c.relname, pg_class.c.oid) File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/base.py", line 1169, in __getattr__ util.raise_(AttributeError(key), replace_context=err) File "/usr/local/lib/python3.9/site-packages/sqlalchemy/util/compat.py", line 207, in raise_ raise exception During handling of the above exception (nspname), another exception occurred: File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/base.py", line 1167, in __getattr__ return self._index[key] The above exception ('nspname') was the direct cause of the following exception: File "/code/mathesar/models/base.py", line 675, in __getattribute__ return super().__getattribute__(name) File "/code/mathesar/models/base.py", line 696, in _sa_column return self.table.sa_columns[self.name] File "/code/mathesar/models/base.py", line 362, in sa_columns return self._enriched_column_sa_table.columns File "/code/mathesar/models/base.py", line 350, in _enriched_column_sa_table table=self._sa_table, File "/code/mathesar/state/cached_property.py", line 62, in __get__ new_value = self.original_get_fn(instance) File 
"/code/mathesar/models/base.py", line 331, in _sa_table sa_table = reflect_table_from_oid( File "/code/db/tables/operations/select.py", line 23, in reflect_table_from_oid tables = reflect_tables_from_oids([oid], engine, metadata=metadata, connection_to_use=connection_to_use) File "/code/db/tables/operations/select.py", line 29, in reflect_tables_from_oids get_map_of_table_oid_to_schema_name_and_table_name( File "/code/db/tables/operations/select.py", line 59, in get_map_of_table_oid_to_schema_name_and_table_name select(pg_namespace.c.nspname, pg_class.c.relname, pg_class.c.oid) File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/base.py", line 1169, in __getattr__ util.raise_(AttributeError(key), replace_context=err) File "/usr/local/lib/python3.9/site-packages/sqlalchemy/util/compat.py", line 207, in raise_ raise exception During handling of the above exception (nspname), another exception occurred: File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/elements.py", line 826, in __getattr__ return getattr(self.comparator, key) The above exception ('Comparator' object has no attribute '_sa_column') was the direct cause of the following exception: File "/usr/local/lib/python3.9/site-packages/django/core/handlers/exception.py", line 47, in inner response = get_response(request) File "/usr/local/lib/python3.9/site-packages/django/core/handlers/base.py", line 181, in _get_response response = wrapped_callback(request, *callback_args, **callback_kwargs) File "/usr/local/lib/python3.9/site-packages/django/views/decorators/csrf.py", line 54, in wrapped_view return view_func(*args, **kwargs) File "/usr/local/lib/python3.9/site-packages/rest_framework/viewsets.py", line 125, in view return self.dispatch(request, *args, **kwargs) File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 509, in dispatch response = self.handle_exception(exc) File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 466, in handle_exception response = exception_handler(exc, context) File "/code/mathesar/exception_handlers.py", line 55, in mathesar_exception_handler raise exc File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 506, in dispatch response = handler(request, *args, **kwargs) File "/usr/local/lib/python3.9/site-packages/rest_framework/mixins.py", line 38, in list queryset = self.filter_queryset(self.get_queryset()) File "/code/mathesar/api/db/viewsets/columns.py", line 33, in get_queryset queryset = Column.objects.filter(table=self.kwargs['table_pk']).order_by('attnum') File "/usr/local/lib/python3.9/site-packages/django/db/models/manager.py", line 85, in manager_method return getattr(self.get_queryset(), name)(*args, **kwargs) File "/code/mathesar/models/base.py", line 70, in get_queryset make_sure_initial_reflection_happened() File "/code/mathesar/state/base.py", line 8, in make_sure_initial_reflection_happened reset_reflection() File "/code/mathesar/state/base.py", line 27, in reset_reflection _trigger_django_model_reflection() File "/code/mathesar/state/base.py", line 31, in _trigger_django_model_reflection reflect_db_objects(metadata=get_cached_metadata()) File "/code/mathesar/state/django.py", line 44, in reflect_db_objects reflect_columns_from_tables(tables, metadata=metadata) File "/code/mathesar/state/django.py", line 123, in reflect_columns_from_tables models._compute_preview_template(table) File "/code/mathesar/models/base.py", line 859, in _compute_preview_template if column.primary_key: File 
"/code/mathesar/models/base.py", line 681, in __getattribute__ return getattr(self._sa_column, name) File "/code/mathesar/models/base.py", line 681, in __getattribute__ return getattr(self._sa_column, name) File "/code/mathesar/models/base.py", line 681, in __getattribute__ return getattr(self._sa_column, name) File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/elements.py", line 828, in __getattr__ util.raise_( File "/usr/local/lib/python3.9/site-packages/sqlalchemy/util/compat.py", line 207, in raise_ raise exception Exception Type: AttributeError at /api/db/v0/tables/5/columns/ Exception Value: Neither 'MathesarColumn' object nor 'Comparator' object has an attribute '_sa_column' ```
True
The columns endpoint results in a 500, possibly metadata related - ## Description Endpoint: `http://localhost:8000/api/db/v0/tables/<table_id>/columns/` ``` Environment: Request Method: GET Request URL: http://localhost:8000/api/db/v0/tables/5/columns/?limit=500 Django Version: 3.1.14 Python Version: 3.9.8 Installed Applications: ['django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'rest_framework', 'django_filters', 'django_property_filter', 'mathesar'] Installed Middleware: ['django.middleware.security.SecurityMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware'] Traceback (most recent call last): File "/code/mathesar/models/base.py", line 675, in __getattribute__ return super().__getattribute__(name) During handling of the above exception ('Column' object has no attribute 'primary_key'), another exception occurred: File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/base.py", line 1167, in __getattr__ return self._index[key] The above exception ('nspname') was the direct cause of the following exception: File "/code/mathesar/models/base.py", line 675, in __getattribute__ return super().__getattribute__(name) File "/code/mathesar/models/base.py", line 696, in _sa_column return self.table.sa_columns[self.name] File "/code/mathesar/models/base.py", line 362, in sa_columns return self._enriched_column_sa_table.columns File "/code/mathesar/models/base.py", line 350, in _enriched_column_sa_table table=self._sa_table, File "/code/mathesar/state/cached_property.py", line 62, in __get__ new_value = self.original_get_fn(instance) File "/code/mathesar/models/base.py", line 331, in _sa_table sa_table = reflect_table_from_oid( File "/code/db/tables/operations/select.py", line 23, in reflect_table_from_oid tables = reflect_tables_from_oids([oid], engine, metadata=metadata, connection_to_use=connection_to_use) File "/code/db/tables/operations/select.py", line 29, in reflect_tables_from_oids get_map_of_table_oid_to_schema_name_and_table_name( File "/code/db/tables/operations/select.py", line 59, in get_map_of_table_oid_to_schema_name_and_table_name select(pg_namespace.c.nspname, pg_class.c.relname, pg_class.c.oid) File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/base.py", line 1169, in __getattr__ util.raise_(AttributeError(key), replace_context=err) File "/usr/local/lib/python3.9/site-packages/sqlalchemy/util/compat.py", line 207, in raise_ raise exception During handling of the above exception (nspname), another exception occurred: File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/base.py", line 1167, in __getattr__ return self._index[key] The above exception ('nspname') was the direct cause of the following exception: File "/code/mathesar/models/base.py", line 675, in __getattribute__ return super().__getattribute__(name) File "/code/mathesar/models/base.py", line 696, in _sa_column return self.table.sa_columns[self.name] File "/code/mathesar/models/base.py", line 362, in sa_columns return self._enriched_column_sa_table.columns File "/code/mathesar/models/base.py", line 350, in _enriched_column_sa_table table=self._sa_table, File "/code/mathesar/state/cached_property.py", line 62, in 
__get__ new_value = self.original_get_fn(instance) File "/code/mathesar/models/base.py", line 331, in _sa_table sa_table = reflect_table_from_oid( File "/code/db/tables/operations/select.py", line 23, in reflect_table_from_oid tables = reflect_tables_from_oids([oid], engine, metadata=metadata, connection_to_use=connection_to_use) File "/code/db/tables/operations/select.py", line 29, in reflect_tables_from_oids get_map_of_table_oid_to_schema_name_and_table_name( File "/code/db/tables/operations/select.py", line 59, in get_map_of_table_oid_to_schema_name_and_table_name select(pg_namespace.c.nspname, pg_class.c.relname, pg_class.c.oid) File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/base.py", line 1169, in __getattr__ util.raise_(AttributeError(key), replace_context=err) File "/usr/local/lib/python3.9/site-packages/sqlalchemy/util/compat.py", line 207, in raise_ raise exception During handling of the above exception (nspname), another exception occurred: File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/elements.py", line 826, in __getattr__ return getattr(self.comparator, key) The above exception ('Comparator' object has no attribute '_sa_column') was the direct cause of the following exception: File "/usr/local/lib/python3.9/site-packages/django/core/handlers/exception.py", line 47, in inner response = get_response(request) File "/usr/local/lib/python3.9/site-packages/django/core/handlers/base.py", line 181, in _get_response response = wrapped_callback(request, *callback_args, **callback_kwargs) File "/usr/local/lib/python3.9/site-packages/django/views/decorators/csrf.py", line 54, in wrapped_view return view_func(*args, **kwargs) File "/usr/local/lib/python3.9/site-packages/rest_framework/viewsets.py", line 125, in view return self.dispatch(request, *args, **kwargs) File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 509, in dispatch response = self.handle_exception(exc) File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 466, in handle_exception response = exception_handler(exc, context) File "/code/mathesar/exception_handlers.py", line 55, in mathesar_exception_handler raise exc File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 506, in dispatch response = handler(request, *args, **kwargs) File "/usr/local/lib/python3.9/site-packages/rest_framework/mixins.py", line 38, in list queryset = self.filter_queryset(self.get_queryset()) File "/code/mathesar/api/db/viewsets/columns.py", line 33, in get_queryset queryset = Column.objects.filter(table=self.kwargs['table_pk']).order_by('attnum') File "/usr/local/lib/python3.9/site-packages/django/db/models/manager.py", line 85, in manager_method return getattr(self.get_queryset(), name)(*args, **kwargs) File "/code/mathesar/models/base.py", line 70, in get_queryset make_sure_initial_reflection_happened() File "/code/mathesar/state/base.py", line 8, in make_sure_initial_reflection_happened reset_reflection() File "/code/mathesar/state/base.py", line 27, in reset_reflection _trigger_django_model_reflection() File "/code/mathesar/state/base.py", line 31, in _trigger_django_model_reflection reflect_db_objects(metadata=get_cached_metadata()) File "/code/mathesar/state/django.py", line 44, in reflect_db_objects reflect_columns_from_tables(tables, metadata=metadata) File "/code/mathesar/state/django.py", line 123, in reflect_columns_from_tables models._compute_preview_template(table) File "/code/mathesar/models/base.py", line 859, in _compute_preview_template if 
column.primary_key: File "/code/mathesar/models/base.py", line 681, in __getattribute__ return getattr(self._sa_column, name) File "/code/mathesar/models/base.py", line 681, in __getattribute__ return getattr(self._sa_column, name) File "/code/mathesar/models/base.py", line 681, in __getattribute__ return getattr(self._sa_column, name) File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/elements.py", line 828, in __getattr__ util.raise_( File "/usr/local/lib/python3.9/site-packages/sqlalchemy/util/compat.py", line 207, in raise_ raise exception Exception Type: AttributeError at /api/db/v0/tables/5/columns/ Exception Value: Neither 'MathesarColumn' object nor 'Comparator' object has an attribute '_sa_column' ```
main
the columns endpoint results in a possibly metadata related description endpoint environment request method get request url django version python version installed applications django contrib admin django contrib auth django contrib contenttypes django contrib sessions django contrib messages django contrib staticfiles rest framework django filters django property filter mathesar installed middleware django middleware security securitymiddleware django contrib sessions middleware sessionmiddleware django middleware common commonmiddleware django middleware csrf csrfviewmiddleware django contrib auth middleware authenticationmiddleware django contrib messages middleware messagemiddleware django middleware clickjacking xframeoptionsmiddleware traceback most recent call last file code mathesar models base py line in getattribute return super getattribute name during handling of the above exception column object has no attribute primary key another exception occurred file usr local lib site packages sqlalchemy sql base py line in getattr return self index the above exception nspname was the direct cause of the following exception file code mathesar models base py line in getattribute return super getattribute name file code mathesar models base py line in sa column return self table sa columns file code mathesar models base py line in sa columns return self enriched column sa table columns file code mathesar models base py line in enriched column sa table table self sa table file code mathesar state cached property py line in get new value self original get fn instance file code mathesar models base py line in sa table sa table reflect table from oid file code db tables operations select py line in reflect table from oid tables reflect tables from oids engine metadata metadata connection to use connection to use file code db tables operations select py line in reflect tables from oids get map of table oid to schema name and table name file code db tables operations select py line in get map of table oid to schema name and table name select pg namespace c nspname pg class c relname pg class c oid file usr local lib site packages sqlalchemy sql base py line in getattr util raise attributeerror key replace context err file usr local lib site packages sqlalchemy util compat py line in raise raise exception during handling of the above exception nspname another exception occurred file usr local lib site packages sqlalchemy sql base py line in getattr return self index the above exception nspname was the direct cause of the following exception file code mathesar models base py line in getattribute return super getattribute name file code mathesar models base py line in sa column return self table sa columns file code mathesar models base py line in sa columns return self enriched column sa table columns file code mathesar models base py line in enriched column sa table table self sa table file code mathesar state cached property py line in get new value self original get fn instance file code mathesar models base py line in sa table sa table reflect table from oid file code db tables operations select py line in reflect table from oid tables reflect tables from oids engine metadata metadata connection to use connection to use file code db tables operations select py line in reflect tables from oids get map of table oid to schema name and table name file code db tables operations select py line in get map of table oid to schema name and table name select pg namespace c nspname pg class c relname pg 
class c oid file usr local lib site packages sqlalchemy sql base py line in getattr util raise attributeerror key replace context err file usr local lib site packages sqlalchemy util compat py line in raise raise exception during handling of the above exception nspname another exception occurred file usr local lib site packages sqlalchemy sql elements py line in getattr return getattr self comparator key the above exception comparator object has no attribute sa column was the direct cause of the following exception file usr local lib site packages django core handlers exception py line in inner response get response request file usr local lib site packages django core handlers base py line in get response response wrapped callback request callback args callback kwargs file usr local lib site packages django views decorators csrf py line in wrapped view return view func args kwargs file usr local lib site packages rest framework viewsets py line in view return self dispatch request args kwargs file usr local lib site packages rest framework views py line in dispatch response self handle exception exc file usr local lib site packages rest framework views py line in handle exception response exception handler exc context file code mathesar exception handlers py line in mathesar exception handler raise exc file usr local lib site packages rest framework views py line in dispatch response handler request args kwargs file usr local lib site packages rest framework mixins py line in list queryset self filter queryset self get queryset file code mathesar api db viewsets columns py line in get queryset queryset column objects filter table self kwargs order by attnum file usr local lib site packages django db models manager py line in manager method return getattr self get queryset name args kwargs file code mathesar models base py line in get queryset make sure initial reflection happened file code mathesar state base py line in make sure initial reflection happened reset reflection file code mathesar state base py line in reset reflection trigger django model reflection file code mathesar state base py line in trigger django model reflection reflect db objects metadata get cached metadata file code mathesar state django py line in reflect db objects reflect columns from tables tables metadata metadata file code mathesar state django py line in reflect columns from tables models compute preview template table file code mathesar models base py line in compute preview template if column primary key file code mathesar models base py line in getattribute return getattr self sa column name file code mathesar models base py line in getattribute return getattr self sa column name file code mathesar models base py line in getattribute return getattr self sa column name file usr local lib site packages sqlalchemy sql elements py line in getattr util raise file usr local lib site packages sqlalchemy util compat py line in raise raise exception exception type attributeerror at api db tables columns exception value neither mathesarcolumn object nor comparator object has an attribute sa column
1
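The long traceback above is easier to follow once the delegation pattern behind it is clear: the Django-side `Column` forwards unknown attributes to a lazily reflected SQLAlchemy column, so a failure during reflection resurfaces as a misleading `AttributeError` on the proxy itself. A minimal sketch of that pattern with hypothetical names, not the actual Mathesar code:

```python
# Sketch of an attribute proxy whose fallback path can itself fail. When
# reflect() raises (e.g. because cached metadata is stale), Python reports
# "During handling of the above exception, another exception occurred" --
# the exact shape of the failure in the traceback above.
class ColumnProxy:
    def __init__(self, reflect):
        # reflect() stands in for the database reflection step, which can fail.
        object.__setattr__(self, '_reflect', reflect)

    def __getattribute__(self, name):
        try:
            return object.__getattribute__(self, name)
        except AttributeError:
            # Fallback: delegate to the reflected column. Any error raised by
            # reflect() is chained onto the original missing-attribute error.
            reflect = object.__getattribute__(self, '_reflect')
            return getattr(reflect(), name)

def broken_reflection():
    raise AttributeError('nspname')  # stands in for the stale-metadata failure

proxy = ColumnProxy(broken_reflection)
# proxy.primary_key  # would raise a chained AttributeError hiding the real cause
```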
621
4,117,008,540
IssuesEvent
2016-06-08 04:42:31
Particular/ServiceControl
https://api.github.com/repos/Particular/ServiceControl
closed
License dialog is not refreshed when importing a new license file
Tag: Installer Tag: Maintainer Prio Type: Bug
Here are the steps to reproduce the issue. I have the following license entries in my Registry: HKLM -> Software -> ParticularSoftware -> License (using Particular's internal license file) HKLM -> Software -> ParticularSoftware -> TrialStart = 2015-09-30 HKCU -> Software -> ParticularSoftware -> License (again, using Particular's internal license file) HKCU -> Software -> ParticularSoftware -> TrialStart = 2015-09-30 I launch the Management Utility app. Open the license dialog. I can see that the effective license is shown correctly. I then update the license (using another valid license file) and import it. Here's what happens: - There's no indication that the license is updated. (Related to #670) - I checked the registry, the License key is actually updated But now when I close the dialog and re-open it, although the content of the license registry key has changed, in the dialog the old license information is shown.
True
License dialog is not refreshed when importing a new license file - Here are the steps to reproduce the issue. I have the following license entries in my Registry: HKLM -> Software -> ParticularSoftware -> License (using Particular's internal license file) HKLM -> Software -> ParticularSoftware -> TrialStart = 2015-09-30 HKCU -> Software -> ParticularSoftware -> License (again, using Particular's internal license file) HKCU -> Software -> ParticularSoftware -> TrialStart = 2015-09-30 I launch the Management Utility app. Open the license dialog. I can see that the effective license is shown correctly. I then update the license (using another valid license file) and import it. Here's what happens: - There's no indication that the license is updated. (Related to #670) - I checked the registry, the License key is actually updated But now when I close the dialog and re-open it, although the content of the license registry key has changed, in the dialog the old license information is shown.
main
license dialog is not refreshed when importing a new license file here are the steps to reproduce the issue i have the following license entries in my registry hklm software particularsoftware license using particular s internal license file hklm software particularsoftware trialstart hkcu software particularsoftware license again using particular s internal license file hkcu software particularsoftware trialstart i launch the management utility app open the license dialog i can see that the effective license is shown correctly i then update the license using another valid license file and import it here s what happens there s no indication that the license is updated related to i checked the registry the license key is actually updated but now when i close the dialog and re open it although the content of the license registry key has changed in the dialog the old license information is shown
1
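The fix implied by the report is to resolve the effective license from the registry on every dialog open instead of caching it at startup. The Management Utility is a .NET application, so the following Python `winreg` sketch is illustrative only; the HKCU-before-HKLM precedence is an assumption made for the example:

```python
import winreg  # Windows-only standard-library module

KEY_PATH = r"Software\ParticularSoftware"  # registry path taken from the report

def read_license(hive):
    """Return the License value under the given hive, or None if absent."""
    try:
        with winreg.OpenKey(hive, KEY_PATH) as key:
            value, _type = winreg.QueryValueEx(key, "License")
            return value
    except FileNotFoundError:
        return None

def effective_license():
    # Called on every dialog open: re-reading here, rather than once at
    # dialog construction, means a freshly imported license shows up
    # immediately instead of the stale cached value.
    return (read_license(winreg.HKEY_CURRENT_USER)
            or read_license(winreg.HKEY_LOCAL_MACHINE))
```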
193,623
22,216,231,503
IssuesEvent
2022-06-08 02:09:13
maddyCode23/linux-4.1.15
https://api.github.com/repos/maddyCode23/linux-4.1.15
reopened
CVE-2020-1749 (High) detected in linux-stable-rtv4.1.33, linux-stable-rtv4.1.33
security vulnerability
## CVE-2020-1749 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>linux-stable-rtv4.1.33</b>, <b>linux-stable-rtv4.1.33</b></p></summary> <p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A flaw was found in the Linux kernel's implementation of some networking protocols in IPsec, such as VXLAN and GENEVE tunnels over IPv6. When an encrypted tunnel is created between two hosts, the kernel isn't correctly routing tunneled data over the encrypted link; rather sending the data unencrypted. This would allow anyone in between the two endpoints to read the traffic unencrypted. The main threat from this vulnerability is to data confidentiality. <p>Publish Date: 2020-09-09 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-1749>CVE-2020-1749</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-1749">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-1749</a></p> <p>Release Date: 2020-09-09</p> <p>Fix Resolution: v5.5-rc1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2020-1749 (High) detected in linux-stable-rtv4.1.33, linux-stable-rtv4.1.33 - ## CVE-2020-1749 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>linux-stable-rtv4.1.33</b>, <b>linux-stable-rtv4.1.33</b></p></summary> <p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A flaw was found in the Linux kernel's implementation of some networking protocols in IPsec, such as VXLAN and GENEVE tunnels over IPv6. When an encrypted tunnel is created between two hosts, the kernel isn't correctly routing tunneled data over the encrypted link; rather sending the data unencrypted. This would allow anyone in between the two endpoints to read the traffic unencrypted. The main threat from this vulnerability is to data confidentiality. <p>Publish Date: 2020-09-09 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-1749>CVE-2020-1749</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-1749">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-1749</a></p> <p>Release Date: 2020-09-09</p> <p>Fix Resolution: v5.5-rc1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_main
cve high detected in linux stable linux stable cve high severity vulnerability vulnerable libraries linux stable linux stable vulnerability details a flaw was found in the linux kernel s implementation of some networking protocols in ipsec such as vxlan and geneve tunnels over when an encrypted tunnel is created between two hosts the kernel isn t correctly routing tunneled data over the encrypted link rather sending the data unencrypted this would allow anyone in between the two endpoints to read the traffic unencrypted the main threat from this vulnerability is to data confidentiality publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
0
271,590
20,681,452,262
IssuesEvent
2022-03-10 14:16:42
openBackhaul/ApplicationPattern
https://api.github.com/repos/openBackhaul/ApplicationPattern
closed
Add: Branch merging: think about/establish guideline / checklists
documentation enhancement
**Is your feature request related to a problem? Please describe.** Working with the GitFlow workflow could introduce merge conflicts due to the various branches being created and merged. **Describe the solution you'd like** We might need to establish guidelines or checklists for merging the branches to ensure we can properly identify and resolve any conflicts. **Additional context** Example: How do we make sure we don't get problems due to conflicts if we e.g. create new feature branches from different versions of the develop branch (e.g. if the new feature uses some underlying functions which have changed in the develop branch in between)? There needs to be a clear merging process. ![image](https://user-images.githubusercontent.com/57349523/157228633-f2c5353e-781e-444b-ad11-3a91726e3b52.png) **Solution and next step(s)** First check if we are likely to run into such conflicts; if yes, check if we already have ideas for how to handle those or what recommendations exist for that.
1.0
Add: Branch merging: think about/establish guideline / checklists - **Is your feature request related to a problem? Please describe.** Working with the GitFlow workflow could introduce merge conflicts due to the various branches being created and merged. **Describe the solution you'd like** We might need to establish guidelines or checklists for merging the branches to ensure we can properly identify and resolve any conflicts. **Additional context** Example: How do we make sure we don't get problems due to conflicts if we e.g. create new feature branches from different versions of the develop branch (e.g. if the new feature uses some underlying functions which have changed in the develop branch in between)? There needs to be a clear merging process. ![image](https://user-images.githubusercontent.com/57349523/157228633-f2c5353e-781e-444b-ad11-3a91726e3b52.png) **Solution and next step(s)** First check if we are likely to run into such conflicts; if yes, check if we already have ideas for how to handle those or what recommendations exist for that.
non_main
add branch merging think about establish guideline checklists is your feature request related to a problem please describe working with the gitflow workflow could introduce merge conflicts due to the various branches being created and merged describe the solution you d like we might need to establish guidelines or checklists for merging the branches to ensure we can properly identify and resolve any conflicts additional context example how do we make sure we don t get problems due to conflicts if we e g create new feature branches from different versions of the develop branch e g if the new feature uses some underlying functions which have changed in the develop branch in between there needs to be a clear merging process solution and next step s first check if we are likely to run into such conflicts if yes check if we already have ideas for how to handle those or what recommendations exist for that
0
569
4,045,215,816
IssuesEvent
2016-05-21 21:10:20
opencaching/opencaching-pl
https://api.github.com/repos/opencaching/opencaching-pl
opened
Unify log display
Type_Enhancement x_Maintainability
Currently there are 2 implementations for logs: - viewcache with dynamic loading of logs (via AJAX) - viewlogs with legacy code. This enhancement suggests that viewlogs use the same mechanism and code to load the logs (i.e., use a single code base to generate displayable log entries).
True
Unify log display - Currently there are 2 implementations for logs: - viewcache with dynamic loading of logs (via AJAX) - viewlogs with legacy code. This enhancement suggests that viewlogs use the same mechanism and code to load the logs (i.e., use a single code base to generate displayable log entries).
main
unify log display currently there are implementations for logs viewcache with dynamic loading of logs via ajax viewlogs with legacy code this enhancement suggests that viewlogs use the same mechanism and code to load the logs ie use a single code base to generate displayable log entries
1
18,397
10,959,057,342
IssuesEvent
2019-11-27 10:36:44
terraform-providers/terraform-provider-azurerm
https://api.github.com/repos/terraform-providers/terraform-provider-azurerm
closed
Missing azurerm_api_management_policy resource
duplicate new-resource service/api-management
<!--- Please note the following potential times when an issue might be in Terraform core: * [Configuration Language](https://www.terraform.io/docs/configuration/index.html) or resource ordering issues * [State](https://www.terraform.io/docs/state/index.html) and [State Backend](https://www.terraform.io/docs/backends/index.html) issues * [Provisioner](https://www.terraform.io/docs/provisioners/index.html) issues * [Registry](https://registry.terraform.io/) issues * Spans resources across multiple providers If you are running into one of these scenarios, we recommend opening an issue in the [Terraform core repository](https://github.com/hashicorp/terraform/) instead. ---> <!--- Please keep this note for the community ---> ### Community Note * Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request * Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request * If you are interested in working on this issue or have submitted a pull request, please leave a comment <!--- Thank you for keeping this note for the community ---> ### Terraform (and AzureRM Provider) Version * `azurerm 1.36.0` <!--- Please run `terraform -v` to show the Terraform core version and provider version(s). If you are not running the latest version of Terraform or the provider, please upgrade because your issue may have already been fixed. [Terraform documentation on provider versioning](https://www.terraform.io/docs/configuration/providers.html#provider-versions). ---> ### Affected Resource(s) <!--- Please list the affected resources and data sources. ---> * `azurerm_api_management` ### Debug Output ``` [error]Terraform command 'apply' failed with exit code '1'.: Error setting Policies for API Management Service "***" (Resource Group "*******"): apimanagement.PolicyClient#CreateOrUpdate: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="ValidationError" Message="One or more fields contain incorrect values:" Details=[{"code":"ValidationError","message":"Error in element 'PropertyName' on line 62, column 7: Cannot find a property 'PropertyName'","target":"PropertyName"},{"code":"ValidationError","message":"Error in element 'set-variable' on line 76, column 4: Cannot find a property 'PorpertyName'","target":"set-variable"}] ``` ### Expected It is necessary to have the global policy deployed *after* properties (aka NamedValues) since the global policy can (and probably will) reference a property value. ### Current behaviour * It sometimes deploys successfully, it sometimes fails to deploy in this order. * there is a *_policy resource for operation, product, and api, but not for global policy ### Suggested solution Create extra resource for *`azurerm_api_management_policy`* so it can be made dependent on `azurerm_api_management` similar to the current behavior of `azurerm_api_management_authorization_server`
1.0
Missing azurerm_api_management_policy resource - <!--- Please note the following potential times when an issue might be in Terraform core: * [Configuration Language](https://www.terraform.io/docs/configuration/index.html) or resource ordering issues * [State](https://www.terraform.io/docs/state/index.html) and [State Backend](https://www.terraform.io/docs/backends/index.html) issues * [Provisioner](https://www.terraform.io/docs/provisioners/index.html) issues * [Registry](https://registry.terraform.io/) issues * Spans resources across multiple providers If you are running into one of these scenarios, we recommend opening an issue in the [Terraform core repository](https://github.com/hashicorp/terraform/) instead. ---> <!--- Please keep this note for the community ---> ### Community Note * Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request * Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request * If you are interested in working on this issue or have submitted a pull request, please leave a comment <!--- Thank you for keeping this note for the community ---> ### Terraform (and AzureRM Provider) Version * `azurerm 1.36.0` <!--- Please run `terraform -v` to show the Terraform core version and provider version(s). If you are not running the latest version of Terraform or the provider, please upgrade because your issue may have already been fixed. [Terraform documentation on provider versioning](https://www.terraform.io/docs/configuration/providers.html#provider-versions). ---> ### Affected Resource(s) <!--- Please list the affected resources and data sources. ---> * `azurerm_api_management` ### Debug Output ``` [error]Terraform command 'apply' failed with exit code '1'.: Error setting Policies for API Management Service "***" (Resource Group "*******"): apimanagement.PolicyClient#CreateOrUpdate: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="ValidationError" Message="One or more fields contain incorrect values:" Details=[{"code":"ValidationError","message":"Error in element 'PropertyName' on line 62, column 7: Cannot find a property 'PropertyName'","target":"PropertyName"},{"code":"ValidationError","message":"Error in element 'set-variable' on line 76, column 4: Cannot find a property 'PorpertyName'","target":"set-variable"}] ``` ### Expected It is necessary to have the global policy deployed *after* properties (aka NamedValues) since the global policy can (and probably will) reference a property value. ### Current behaviour * It sometimes deploys successfully, it sometimes fails to deploy in this order. * there is a *_policy resource for operation, product, and api, but not for global policy ### Suggested solution Create extra resource for *`azurerm_api_management_policy`* so it can be made dependent on `azurerm_api_management` similar to the current behavior of `azurerm_api_management_authorization_server`
non_main
missing azurerm api management policy resource please note the following potential times when an issue might be in terraform core or resource ordering issues and issues issues issues spans resources across multiple providers if you are running into one of these scenarios we recommend opening an issue in the instead community note please vote on this issue by adding a 👍 to the original issue to help the community and maintainers prioritize this request please do not leave or me too comments they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment terraform and azurerm provider version azurerm affected resource s azurerm api management debug output terraform command apply failed with exit code error setting policies for api management service resource group apimanagement policyclient createorupdate failure responding to request statuscode original error autorest azure service returned an error status code validationerror message one or more fields contain incorrect values details expected it is necessary to have the global policy deployed after properties aka namedvalues since the global policy can and probably will reference a property value current behaviour it sometimes deploys successfully it sometimes fails to deploy in this order there is a policy resource for operation product and api but not for global policy suggested solution create extra resource for azurerm api management policy so it can be made dependent on azurerm api management similar to the current behavior of azurerm api management authorization server
0
264,544
20,024,506,426
IssuesEvent
2022-02-01 19:44:24
grafana/thema
https://api.github.com/repos/grafana/thema
opened
Put final state of tutorials into easily-fetched form
documentation enhancement good first issue
The tutorials are quite helpful, but they introduce too much code, in too many broken-up parts, to reasonably expect the user to copy/paste or otherwise recreate it all themselves. We need to provide a clonable repo, or some subdirectory within this repo, from which folks can fetch the final state.
1.0
Put final state of tutorials into easily-fetched form - The tutorials are quite helpful, but they introduce too much code, in too many broken-up parts, to reasonably expect the user to copy/paste or otherwise recreate it all themselves. We need to provide a clonable repo, or some subdirectory within this repo, from which folks can fetch the final state.
non_main
put final state of tutorials into easily fetched form the tutorials are quite helpful but they introduce too much code in too many broken up parts to reasonably expect the user to copy paste or otherwise recreate it all themselves we need to provide a clonable repo or some subdirectory within this repo from which folks can fetch the final state
0
2,742
9,746,139,912
IssuesEvent
2019-06-03 11:27:25
trump-fmi/area-simplification-client
https://api.github.com/repos/trump-fmi/area-simplification-client
closed
Convert label project to typescript
DONE maintainance
It looks like the preceding project ("TRUMP") intended to implement things in a class-based OOP manner, which is not really supported by vanilla JavaScript. For this reason, the code contains several different patterns that try to imitate a class system by exploiting JavaScript prototype objects. In order to make the code more readable and understandable and to get rid of all the confusing prototype stuff, it would make sense to convert the code to TypeScript, which then just generates the corresponding JavaScript on the basis of an easy-to-read Java-/C#-like syntax and a type-safety-aware compiler.
True
Convert label project to typescript - It looks like the preceding project ("TRUMP") intended to implement things in a class-based OOP manner, which is not really supported by vanilla JavaScript. For this reason, the code contains several different patterns that try to imitate a class system by exploiting JavaScript prototype objects. In order to make the code more readable and understandable and to get rid of all the confusing prototype stuff, it would make sense to convert the code to TypeScript, which then just generates the corresponding JavaScript on the basis of an easy-to-read Java-/C#-like syntax and a type-safety-aware compiler.
main
convert label project to typescript it looks like the preceding project trump intended to implement things in a class based oop manner which is not really supported by vanilla javascript for this reason the code contains several different patterns that try to imitate a class system by exploiting javascript prototype objects in order to make the code more readable and understandable and to get rid of all the confusing prototype stuff it would make sense to convert the code to typescript which then just generates the corresponding javascript on the basis of an easy to read java c like syntax and a type safety aware compiler
1
463,658
13,285,593,079
IssuesEvent
2020-08-24 08:22:43
wso2/kubernetes-pipeline
https://api.github.com/repos/wso2/kubernetes-pipeline
closed
[1.1.0] Micro integrator endpoint requirement at the docker build stage
Priority/High Type/Task
**Description:** It is required to have a Micro Integrator endpoint at the Docker image build stage. This is not possible with the current implementation. **Affected Product Version:** 1.1.0
1.0
[1.1.0] Micro integrator endpoint requirement at the docker build stage - **Description:** It is required to have a Micro Integrator endpoint at the Docker image build stage. This is not possible with the current implementation. **Affected Product Version:** 1.1.0
non_main
micro integrator endpoint requirement at the docker build stage description it is required to have a micro integrator endpoint at the docker image build stage as per the current implementation it is not possible affected product version
0
114,727
24,651,545,290
IssuesEvent
2022-10-17 19:06:02
WordPress/openverse-catalog
https://api.github.com/repos/WordPress/openverse-catalog
closed
Trigger TSV loading in ingestion workflow DAGs
🟩 priority: low ✨ goal: improvement 💻 aspect: code 🔧 tech: airflow
## Problem <!-- Describe a problem solved by this feature; or delete the section entirely. --> The various ingestion workflow DAGs (`europeana_ingestion_workflow`, `flickr_ingestion_workflow`, `wikimedia_ingestion_workflow`) are used to help update _old_ data for `dated` DAGs, which otherwise only consume data for the current date. They work by calculating a list of `reingestion_days`, generally weighted toward more recent data (so for example, we'll reingest every day in the past week, and then increasingly sparsely as we go further back in time), and then pulling data for those days. The DAGs all run what amounts to just the `pull_data` task for each of these days, meaning a tsv is generated but not loaded into the upstream db. That happens completely separately in the `tsv_to_postgres_loader` DAG. This is undesirable for reasons described in #357 (which resolved the issue for our standard provider DAGs). ## Description <!-- Describe the feature and how it solves the problem. --> If we could move the tsv loading steps into the ingestion DAGs, we could get rid of the `tsv_to_postgres_loader` once and for all 😄 A couple of options: * We could use the [TriggerDagOperator](https://www.astronomer.io/guides/cross-dag-dependencies/) to simply trigger the corresponding provider dag (ie `europeana_ingestion_workflow` triggers `europeana_workflow`) for each of the `reingestion_days`. * If we go that route, we'll need to take a close look at the `max_active_tasks` for these DAGs. Currently most `workflow` DAGs have `max_active_tasks=1`, while the `ingestion` DAGs bump this up to allow for parallelization. * Alternatively, we could keep the DAGs separate and refactor out the loading code so it is reusable. Then within the ingestion DAGs we could parallelize just the `pull_data` tasks and then run the `load_data` steps once for _all_ the consumed data * This will result in a much longer-running `load_data` task * Some refactoring may be required to ensure `load_data` can pull data from many upstream `pull_data` tasks. ## Implementation <!-- Replace the [ ] with [x] to check the box. --> - [ ] 🙋 I would be interested in implementing this feature.
1.0
Trigger TSV loading in ingestion workflow DAGs - ## Problem <!-- Describe a problem solved by this feature; or delete the section entirely. --> The various ingestion workflow DAGs (`europeana_ingestion_workflow`, `flickr_ingestion_workflow`, `wikimedia_ingestion_workflow`) are used to help update _old_ data for `dated` DAGs, which otherwise only consume data for the current date. They work by calculating a list of `reingestion_days`, generally weighted toward more recent data (so for example, we'll reingest every day in the past week, and then increasingly sparsely as we go further back in time), and then pulling data for those days. The DAGs all run what amounts to just the `pull_data` task for each of these days, meaning a tsv is generated but not loaded into the upstream db. That happens completely separately in the `tsv_to_postgres_loader` DAG. This is undesirable for reasons described in #357 (which resolved the issue for our standard provider DAGs). ## Description <!-- Describe the feature and how it solves the problem. --> If we could move the tsv loading steps into the ingestion DAGs, we could get rid of the `tsv_to_postgres_loader` once and for all 😄 A couple of options: * We could use the [TriggerDagOperator](https://www.astronomer.io/guides/cross-dag-dependencies/) to simply trigger the corresponding provider dag (ie `europeana_ingestion_workflow` triggers `europeana_workflow`) for each of the `reingestion_days`. * If we go that route, we'll need to take a close look at the `max_active_tasks` for these DAGs. Currently most `workflow` DAGs have `max_active_tasks=1`, while the `ingestion` DAGs bump this up to allow for parallelization. * Alternatively, we could keep the DAGs separate and refactor out the loading code so it is reusable. Then within the ingestion DAGs we could parallelize just the `pull_data` tasks and then run the `load_data` steps once for _all_ the consumed data * This will result in a much longer-running `load_data` task * Some refactoring may be required to ensure `load_data` can pull data from many upstream `pull_data` tasks. ## Implementation <!-- Replace the [ ] with [x] to check the box. --> - [ ] 🙋 I would be interested in implementing this feature.
non_main
trigger tsv loading in ingestion workflow dags problem the various ingestion workflow dags europeana ingestion workflow flickr ingestion workflow wikimedia ingestion workflow are used to help update old data for dated dags which otherwise only consume data for the current date they work by calculating a list of reingestion days generally weighted toward more recent data so for example we ll reingest every day in the past week and then increasingly sparsely as we go further back in time and then pulling data for those days the dags all run what amounts to just the pull data task for each of these days meaning a tsv is generated but not loaded into the upstream db that happens completely separately in the tsv to postgres loader dag this is undesirable for reasons described in which resolved the issue for our standard provider dags description if we could move the tsv loading steps into the ingestion dags we could get rid of the tsv to postgres loader once and for all 😄 a couple of options we could use the to simply trigger the corresponding provider dag ie europeana ingestion workflow triggers europeana workflow for each of the reingestion days if we go that route we ll need to take a close look at the max active tasks for these dags currently most workflow dags have max active tasks while the ingestion dags bump this up to allow for parallelization alternatively we could keep the dags separate and refactor out the loading code so it is reusable then within the ingestion dags we could parallelize just the pull data tasks and then run the load data steps once for all the consumed data this will result in a much longer running load data task some refactoring may be required to ensure load data can pull data from many upstream pull data tasks implementation 🙋 i would be interested in implementing this feature
0
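The TriggerDagRunOperator option in the record above is easy to picture concretely. Below is a minimal sketch, assuming Airflow 2.x; the DAG ids match the names mentioned in the issue, but the reingestion dates, the `conf` key, and the schedule are hypothetical placeholders rather than the Openverse catalog's actual code.

```python
# Illustrative sketch only: the task naming, conf key, and hard-coded
# reingestion_days are hypothetical stand-ins.
from datetime import datetime

from airflow import DAG
from airflow.operators.trigger_dagrun import TriggerDagRunOperator

with DAG(
    dag_id="europeana_ingestion_workflow",
    start_date=datetime(2022, 1, 1),
    schedule_interval="@daily",
    max_active_tasks=3,  # must be weighed against the provider DAG's max_active_tasks=1
    catchup=False,
) as dag:
    # Assume reingestion_days is computed elsewhere; hard-coded here for illustration.
    reingestion_days = ["2022-01-01", "2021-12-25", "2021-11-01"]

    for day in reingestion_days:
        TriggerDagRunOperator(
            task_id=f"trigger_europeana_workflow_{day}",
            trigger_dag_id="europeana_workflow",  # the corresponding provider DAG
            conf={"date": day},  # pass the target date through to the provider DAG
            wait_for_completion=True,  # respect the provider DAG's own concurrency limits
        )
```

With `wait_for_completion=True`, each trigger task blocks until the provider DAG run finishes, which sidesteps the `max_active_tasks=1` tension the issue raises, at the cost of longer-running ingestion DAGs.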
1,229
5,225,526,370
IssuesEvent
2017-01-27 18:33:37
tgstation/tgstation
https://api.github.com/repos/tgstation/tgstation
closed
tesla has an absurd bias to the east
Bug Maintainability/Hinders improvements
it gets stuck at the east of the station pretty much every time without fail
True
tesla has an absurd bias to the east - it gets stuck at the east of the station pretty much every time without fail
main
tesla has an absurd bias to the east it gets stuck at the east of the station pretty much every time without fail
1
4,618
2,737,057,041
IssuesEvent
2015-04-20 00:02:18
piwik/piwik
https://api.github.com/repos/piwik/piwik
closed
Marketplace Plugin blocks are too big in some cases
Bug c: Design / UI
Here is an example of a Marketplace page rendered: ![marketplace looks wrong](https://cloud.githubusercontent.com/assets/466765/7196075/cbba01ac-e51e-11e4-8c32-4faa18e021d9.png) Because of the PerformanceInfo plugin that has 2 yellow boxes and has a tall box, all other plugin boxes are tall as well We expect to see instead the plugin blocks less high such as in this example: ![normal](https://cloud.githubusercontent.com/assets/466765/7196103/44add0de-e51f-11e4-80fd-8de8c87b1030.png)
1.0
Marketplace Plugin blocks are too big in some cases - Here is an example of a Marketplace page rendered: ![marketplace looks wrong](https://cloud.githubusercontent.com/assets/466765/7196075/cbba01ac-e51e-11e4-8c32-4faa18e021d9.png) Because of the PerformanceInfo plugin that has 2 yellow boxes and has a tall box, all other plugin boxes are tall as well We expect to see instead the plugin blocks less high such as in this example: ![normal](https://cloud.githubusercontent.com/assets/466765/7196103/44add0de-e51f-11e4-80fd-8de8c87b1030.png)
non_main
marketplace plugin blocks are too big in some cases here is an example of a marketplace page rendered because of the performanceinfo plugin that has yellow boxes and has a tall box all other plugin boxes are tall as well we expect to see instead the plugin blocks less high such as in this example
0
226,181
17,315,290,416
IssuesEvent
2021-07-27 04:44:20
dev-protocol/docs.devprotocol.xyz
https://api.github.com/repos/dev-protocol/docs.devprotocol.xyz
opened
docs: Insert URL link to 14 main contracts
documentation enhancement
Much better if we put a URL to these main contracts; they don't know what it is... ![image](https://user-images.githubusercontent.com/73097560/127096561-53785912-762c-4727-b8f3-bc8811577868.png)
1.0
docs: Insert URL link to 14 main contracts - Much better if we put a URL to these main contracts; they don't know what it is... ![image](https://user-images.githubusercontent.com/73097560/127096561-53785912-762c-4727-b8f3-bc8811577868.png)
non_main
docs insert url link to main contracts much better if we put a url to these main contracts they don t know what it is
0
5,125
26,125,073,439
IssuesEvent
2022-12-28 17:20:03
hamcrest/JavaHamcrest
https://api.github.com/repos/hamcrest/JavaHamcrest
closed
Gather sources under a single root
maintainability
The source tree is organised into multiple source roots. This makes the build more complicated. Since 7.0 will be released as a single JAR (see #86), there is no need for different source roots.
True
Gather sources under a single root - The source tree is organised into multiple source roots. This makes the build more complicated. Since 7.0 will be released as a single JAR (see #86), there is no need for different source roots.
main
gather sources under a single root the source tree is organised into multiple source roots this makes the build more complicated since will be released as a single jar see there is no need for different source roots
1
386,786
11,450,514,432
IssuesEvent
2020-02-06 09:45:49
wso2/product-is
https://api.github.com/repos/wso2/product-is
closed
Issues with install Identity Server as a windows service -JDK 11
Priority/Highest Severity/Blocker Type/Bug
Affected - wso2is -5.10.0 Severity : High Priority : High Issue 1: #https://github.com/wso2/carbon-kernel/issues/2435 Issue 2: #https://github.com/wso2/carbon-kernel/issues/2434
1.0
Issues with install Identity Server as a windows service -JDK 11 - Affected - wso2is -5.10.0 Severity : High Priority : High Issue 1: #https://github.com/wso2/carbon-kernel/issues/2435 Issue 2: #https://github.com/wso2/carbon-kernel/issues/2434
non_main
issues with install identity server as a windows service jdk affected severity high priority high issue issue
0
111,434
24,128,018,334
IssuesEvent
2022-09-21 03:41:05
pingcap/tiflash
https://api.github.com/repos/pingcap/tiflash
closed
Fix clang-tidy error in `dbms/src/Storages/DeltaMerge/tests/gtest_dm_minmax_index.cpp`
type/enhancement good first issue type/code-quality-improvement
## Enhancement ``` clang-tidy -p=/home/finder/projects/tiflash/build/ /home/finder/projects/tiflash/dbms/src/Storages/DeltaMerge/tests/gtest_dm_minmax_index.cpp /home/finder/projects/tiflash/contrib/libcityhash/include/city.h:48:10: error: 'utility' file not found [clang-diagnostic-error] #include <utility> ^ /home/finder/projects/tiflash/dbms/src/Storages/DeltaMerge/tests/gtest_dm_minmax_index.cpp:44:5: error: use '= default' to define a trivial default constructor [modernize-use-equals-default,-warnings-as-errors] DMMinMaxIndexTest() {} ^ ~~ = default; /home/finder/projects/tiflash/dbms/src/Storages/DeltaMerge/tests/gtest_dm_minmax_index.cpp:47:17: error: invalid case style for method 'SetUpTestCase' [readability-identifier-naming,-warnings-as-errors] static void SetUpTestCase() {} ^~~~~~~~~~~~~ setUpTestCase /home/finder/projects/tiflash/dbms/src/Storages/DeltaMerge/tests/gtest_dm_minmax_index.cpp:49:10: error: method 'SetUp' can be made static [readability-convert-member-functions-to-static,-warnings-as-errors] void SetUp() override ^ static /home/finder/projects/tiflash/dbms/src/Storages/DeltaMerge/tests/gtest_dm_minmax_index.cpp:49:10: error: invalid case style for method 'SetUp' [readability-identifier-naming,-warnings-as-errors] void SetUp() override ^~~~~ setUp /home/finder/projects/tiflash/dbms/src/Storages/DeltaMerge/tests/gtest_dm_minmax_index.cpp:58:10: error: invalid case style for method 'TearDown' [readability-identifier-naming,-warnings-as-errors] void TearDown() override ^~~~~~~~ tearDown /home/finder/projects/tiflash/dbms/src/Storages/DeltaMerge/tests/gtest_dm_minmax_index.cpp:81:20: error: parameter 'test_case' is unused [misc-unused-parameters,-warnings-as-errors] const String & test_case, ^~~~~~~~~ /*test_case*/ /home/finder/projects/tiflash/dbms/src/Storages/DeltaMerge/tests/gtest_dm_minmax_index.cpp:83:20: error: parameter 'type' is unused [misc-unused-parameters,-warnings-as-errors] const String & type, ^~~~ /*type*/ /home/finder/projects/tiflash/dbms/src/Storages/DeltaMerge/tests/gtest_dm_minmax_index.cpp:84:21: error: parameter 'block_tuples' is unused [misc-unused-parameters,-warnings-as-errors] const CSVTuples block_tuples, ^~~~~~~~~~~~ /*block_tuples*/ /home/finder/projects/tiflash/dbms/src/Storages/DeltaMerge/tests/gtest_dm_minmax_index.cpp:85:27: error: parameter 'filter' is unused [misc-unused-parameters,-warnings-as-errors] const RSOperatorPtr & filter, ^~~~~~ /*filter*/ /home/finder/projects/tiflash/dbms/src/Storages/DeltaMerge/tests/gtest_dm_minmax_index.cpp:136:32: error: parameter 'test_case' is unused [misc-unused-parameters,-warnings-as-errors] bool checkMatch(const String & test_case, Context & context, const String & type, const String & value, const RSOperatorPtr & filter) ^~~~~~~~~ /*test_case*/ /home/finder/projects/tiflash/dbms/src/Storages/DeltaMerge/tests/gtest_dm_minmax_index.cpp:136:77: error: parameter 'type' is unused [misc-unused-parameters,-warnings-as-errors] bool checkMatch(const String & test_case, Context & context, const String & type, const String & value, const RSOperatorPtr & filter) ^~~~ /*type*/ /home/finder/projects/tiflash/dbms/src/Storages/DeltaMerge/tests/gtest_dm_minmax_index.cpp:136:98: error: parameter 'value' is unused [misc-unused-parameters,-warnings-as-errors] bool checkMatch(const String & test_case, Context & context, const String & type, const String & value, const RSOperatorPtr & filter) ^~~~~ /*value*/ /home/finder/projects/tiflash/dbms/src/Storages/DeltaMerge/tests/gtest_dm_minmax_index.cpp:136:127: error: 
parameter 'filter' is unused [misc-unused-parameters,-warnings-as-errors] bool checkMatch(const String & test_case, Context & context, const String & type, const String & value, const RSOperatorPtr & filter) ^~~~~~ /*filter*/ /home/finder/projects/tiflash/dbms/src/Storages/DeltaMerge/tests/gtest_dm_minmax_index.cpp:144:35: error: parameter 'test_case' is unused [misc-unused-parameters,-warnings-as-errors] bool checkDelMatch(const String & test_case, Context & context, const String & type, const String & value, const RSOperatorPtr & filter) ^~~~~~~~~ /*test_case*/ /home/finder/projects/tiflash/dbms/src/Storages/DeltaMerge/tests/gtest_dm_minmax_index.cpp:144:80: error: parameter 'type' is unused [misc-unused-parameters,-warnings-as-errors] bool checkDelMatch(const String & test_case, Context & context, const String & type, const String & value, const RSOperatorPtr & filter) ^~~~ /*type*/ /home/finder/projects/tiflash/dbms/src/Storages/DeltaMerge/tests/gtest_dm_minmax_index.cpp:144:101: error: parameter 'value' is unused [misc-unused-parameters,-warnings-as-errors] bool checkDelMatch(const String & test_case, Context & context, const String & type, const String & value, const RSOperatorPtr & filter) ^~~~~ /*value*/ /home/finder/projects/tiflash/dbms/src/Storages/DeltaMerge/tests/gtest_dm_minmax_index.cpp:144:130: error: parameter 'filter' is unused [misc-unused-parameters,-warnings-as-errors] bool checkDelMatch(const String & test_case, Context & context, const String & type, const String & value, const RSOperatorPtr & filter) ^~~~~~ /*filter*/ /home/finder/projects/tiflash/dbms/src/Storages/DeltaMerge/tests/gtest_dm_minmax_index.cpp:152:34: error: parameter 'test_case' is unused [misc-unused-parameters,-warnings-as-errors] bool checkPkMatch(const String & test_case, Context & context, const String & type, const String & pk_value, const RSOperatorPtr & filter, bool is_common_handle) ^~~~~~~~~ /*test_case*/ /home/finder/projects/tiflash/dbms/src/Storages/DeltaMerge/tests/gtest_dm_minmax_index.cpp:152:79: error: parameter 'type' is unused [misc-unused-parameters,-warnings-as-errors] bool checkPkMatch(const String & test_case, Context & context, const String & type, const String & pk_value, const RSOperatorPtr & filter, bool is_common_handle) ^~~~ /*type*/ /home/finder/projects/tiflash/dbms/src/Storages/DeltaMerge/tests/gtest_dm_minmax_index.cpp:152:100: error: parameter 'pk_value' is unused [misc-unused-parameters,-warnings-as-errors] bool checkPkMatch(const String & test_case, Context & context, const String & type, const String & pk_value, const RSOperatorPtr & filter, bool is_common_handle) ^~~~~~~~ /*pk_value*/ /home/finder/projects/tiflash/dbms/src/Storages/DeltaMerge/tests/gtest_dm_minmax_index.cpp:152:132: error: parameter 'filter' is unused [misc-unused-parameters,-warnings-as-errors] bool checkPkMatch(const String & test_case, Context & context, const String & type, const String & pk_value, const RSOperatorPtr & filter, bool is_common_handle) ^~~~~~ /*filter*/ 49991 warnings and 1 error generated. Error while processing /home/finder/projects/tiflash/dbms/src/Storages/DeltaMerge/tests/gtest_dm_minmax_index.cpp. 99986 warnings and 2 errors generated. Error while processing /home/finder/projects/tiflash/dbms/src/Storages/DeltaMerge/tests/gtest_dm_minmax_index.cpp. 149981 warnings and 3 errors generated. Error while processing /home/finder/projects/tiflash/dbms/src/Storages/DeltaMerge/tests/gtest_dm_minmax_index.cpp. Suppressed 150062 warnings (149918 in non-user code, 144 NOLINT). 
Use -header-filter=.* to display errors from all non-system headers. Use -system-headers to display errors from system headers as well. 21 warnings treated as errors ```
1.0
Fix clang-tidy error in `dbms/src/Storages/DeltaMerge/tests/gtest_dm_minmax_index.cpp` - ## Enhancement ``` clang-tidy -p=/home/finder/projects/tiflash/build/ /home/finder/projects/tiflash/dbms/src/Storages/DeltaMerge/tests/gtest_dm_minmax_index.cpp /home/finder/projects/tiflash/contrib/libcityhash/include/city.h:48:10: error: 'utility' file not found [clang-diagnostic-error] #include <utility> ^ /home/finder/projects/tiflash/dbms/src/Storages/DeltaMerge/tests/gtest_dm_minmax_index.cpp:44:5: error: use '= default' to define a trivial default constructor [modernize-use-equals-default,-warnings-as-errors] DMMinMaxIndexTest() {} ^ ~~ = default; /home/finder/projects/tiflash/dbms/src/Storages/DeltaMerge/tests/gtest_dm_minmax_index.cpp:47:17: error: invalid case style for method 'SetUpTestCase' [readability-identifier-naming,-warnings-as-errors] static void SetUpTestCase() {} ^~~~~~~~~~~~~ setUpTestCase /home/finder/projects/tiflash/dbms/src/Storages/DeltaMerge/tests/gtest_dm_minmax_index.cpp:49:10: error: method 'SetUp' can be made static [readability-convert-member-functions-to-static,-warnings-as-errors] void SetUp() override ^ static /home/finder/projects/tiflash/dbms/src/Storages/DeltaMerge/tests/gtest_dm_minmax_index.cpp:49:10: error: invalid case style for method 'SetUp' [readability-identifier-naming,-warnings-as-errors] void SetUp() override ^~~~~ setUp /home/finder/projects/tiflash/dbms/src/Storages/DeltaMerge/tests/gtest_dm_minmax_index.cpp:58:10: error: invalid case style for method 'TearDown' [readability-identifier-naming,-warnings-as-errors] void TearDown() override ^~~~~~~~ tearDown /home/finder/projects/tiflash/dbms/src/Storages/DeltaMerge/tests/gtest_dm_minmax_index.cpp:81:20: error: parameter 'test_case' is unused [misc-unused-parameters,-warnings-as-errors] const String & test_case, ^~~~~~~~~ /*test_case*/ /home/finder/projects/tiflash/dbms/src/Storages/DeltaMerge/tests/gtest_dm_minmax_index.cpp:83:20: error: parameter 'type' is unused [misc-unused-parameters,-warnings-as-errors] const String & type, ^~~~ /*type*/ /home/finder/projects/tiflash/dbms/src/Storages/DeltaMerge/tests/gtest_dm_minmax_index.cpp:84:21: error: parameter 'block_tuples' is unused [misc-unused-parameters,-warnings-as-errors] const CSVTuples block_tuples, ^~~~~~~~~~~~ /*block_tuples*/ /home/finder/projects/tiflash/dbms/src/Storages/DeltaMerge/tests/gtest_dm_minmax_index.cpp:85:27: error: parameter 'filter' is unused [misc-unused-parameters,-warnings-as-errors] const RSOperatorPtr & filter, ^~~~~~ /*filter*/ /home/finder/projects/tiflash/dbms/src/Storages/DeltaMerge/tests/gtest_dm_minmax_index.cpp:136:32: error: parameter 'test_case' is unused [misc-unused-parameters,-warnings-as-errors] bool checkMatch(const String & test_case, Context & context, const String & type, const String & value, const RSOperatorPtr & filter) ^~~~~~~~~ /*test_case*/ /home/finder/projects/tiflash/dbms/src/Storages/DeltaMerge/tests/gtest_dm_minmax_index.cpp:136:77: error: parameter 'type' is unused [misc-unused-parameters,-warnings-as-errors] bool checkMatch(const String & test_case, Context & context, const String & type, const String & value, const RSOperatorPtr & filter) ^~~~ /*type*/ /home/finder/projects/tiflash/dbms/src/Storages/DeltaMerge/tests/gtest_dm_minmax_index.cpp:136:98: error: parameter 'value' is unused [misc-unused-parameters,-warnings-as-errors] bool checkMatch(const String & test_case, Context & context, const String & type, const String & value, const RSOperatorPtr & filter) ^~~~~ /*value*/ 
/home/finder/projects/tiflash/dbms/src/Storages/DeltaMerge/tests/gtest_dm_minmax_index.cpp:136:127: error: parameter 'filter' is unused [misc-unused-parameters,-warnings-as-errors] bool checkMatch(const String & test_case, Context & context, const String & type, const String & value, const RSOperatorPtr & filter) ^~~~~~ /*filter*/ /home/finder/projects/tiflash/dbms/src/Storages/DeltaMerge/tests/gtest_dm_minmax_index.cpp:144:35: error: parameter 'test_case' is unused [misc-unused-parameters,-warnings-as-errors] bool checkDelMatch(const String & test_case, Context & context, const String & type, const String & value, const RSOperatorPtr & filter) ^~~~~~~~~ /*test_case*/ /home/finder/projects/tiflash/dbms/src/Storages/DeltaMerge/tests/gtest_dm_minmax_index.cpp:144:80: error: parameter 'type' is unused [misc-unused-parameters,-warnings-as-errors] bool checkDelMatch(const String & test_case, Context & context, const String & type, const String & value, const RSOperatorPtr & filter) ^~~~ /*type*/ /home/finder/projects/tiflash/dbms/src/Storages/DeltaMerge/tests/gtest_dm_minmax_index.cpp:144:101: error: parameter 'value' is unused [misc-unused-parameters,-warnings-as-errors] bool checkDelMatch(const String & test_case, Context & context, const String & type, const String & value, const RSOperatorPtr & filter) ^~~~~ /*value*/ /home/finder/projects/tiflash/dbms/src/Storages/DeltaMerge/tests/gtest_dm_minmax_index.cpp:144:130: error: parameter 'filter' is unused [misc-unused-parameters,-warnings-as-errors] bool checkDelMatch(const String & test_case, Context & context, const String & type, const String & value, const RSOperatorPtr & filter) ^~~~~~ /*filter*/ /home/finder/projects/tiflash/dbms/src/Storages/DeltaMerge/tests/gtest_dm_minmax_index.cpp:152:34: error: parameter 'test_case' is unused [misc-unused-parameters,-warnings-as-errors] bool checkPkMatch(const String & test_case, Context & context, const String & type, const String & pk_value, const RSOperatorPtr & filter, bool is_common_handle) ^~~~~~~~~ /*test_case*/ /home/finder/projects/tiflash/dbms/src/Storages/DeltaMerge/tests/gtest_dm_minmax_index.cpp:152:79: error: parameter 'type' is unused [misc-unused-parameters,-warnings-as-errors] bool checkPkMatch(const String & test_case, Context & context, const String & type, const String & pk_value, const RSOperatorPtr & filter, bool is_common_handle) ^~~~ /*type*/ /home/finder/projects/tiflash/dbms/src/Storages/DeltaMerge/tests/gtest_dm_minmax_index.cpp:152:100: error: parameter 'pk_value' is unused [misc-unused-parameters,-warnings-as-errors] bool checkPkMatch(const String & test_case, Context & context, const String & type, const String & pk_value, const RSOperatorPtr & filter, bool is_common_handle) ^~~~~~~~ /*pk_value*/ /home/finder/projects/tiflash/dbms/src/Storages/DeltaMerge/tests/gtest_dm_minmax_index.cpp:152:132: error: parameter 'filter' is unused [misc-unused-parameters,-warnings-as-errors] bool checkPkMatch(const String & test_case, Context & context, const String & type, const String & pk_value, const RSOperatorPtr & filter, bool is_common_handle) ^~~~~~ /*filter*/ 49991 warnings and 1 error generated. Error while processing /home/finder/projects/tiflash/dbms/src/Storages/DeltaMerge/tests/gtest_dm_minmax_index.cpp. 99986 warnings and 2 errors generated. Error while processing /home/finder/projects/tiflash/dbms/src/Storages/DeltaMerge/tests/gtest_dm_minmax_index.cpp. 149981 warnings and 3 errors generated. 
Error while processing /home/finder/projects/tiflash/dbms/src/Storages/DeltaMerge/tests/gtest_dm_minmax_index.cpp. Suppressed 150062 warnings (149918 in non-user code, 144 NOLINT). Use -header-filter=.* to display errors from all non-system headers. Use -system-headers to display errors from system headers as well. 21 warnings treated as errors ```
non_main
fix clang tidy error in dbms src storages deltamerge tests gtest dm minmax index cpp enhancement clang tidy p home finder projects tiflash build home finder projects tiflash dbms src storages deltamerge tests gtest dm minmax index cpp home finder projects tiflash contrib libcityhash include city h error utility file not found include home finder projects tiflash dbms src storages deltamerge tests gtest dm minmax index cpp error use default to define a trivial default constructor dmminmaxindextest default home finder projects tiflash dbms src storages deltamerge tests gtest dm minmax index cpp error invalid case style for method setuptestcase static void setuptestcase setuptestcase home finder projects tiflash dbms src storages deltamerge tests gtest dm minmax index cpp error method setup can be made static void setup override static home finder projects tiflash dbms src storages deltamerge tests gtest dm minmax index cpp error invalid case style for method setup void setup override setup home finder projects tiflash dbms src storages deltamerge tests gtest dm minmax index cpp error invalid case style for method teardown void teardown override teardown home finder projects tiflash dbms src storages deltamerge tests gtest dm minmax index cpp error parameter test case is unused const string test case test case home finder projects tiflash dbms src storages deltamerge tests gtest dm minmax index cpp error parameter type is unused const string type type home finder projects tiflash dbms src storages deltamerge tests gtest dm minmax index cpp error parameter block tuples is unused const csvtuples block tuples block tuples home finder projects tiflash dbms src storages deltamerge tests gtest dm minmax index cpp error parameter filter is unused const rsoperatorptr filter filter home finder projects tiflash dbms src storages deltamerge tests gtest dm minmax index cpp error parameter test case is unused bool checkmatch const string test case context context const string type const string value const rsoperatorptr filter test case home finder projects tiflash dbms src storages deltamerge tests gtest dm minmax index cpp error parameter type is unused bool checkmatch const string test case context context const string type const string value const rsoperatorptr filter type home finder projects tiflash dbms src storages deltamerge tests gtest dm minmax index cpp error parameter value is unused bool checkmatch const string test case context context const string type const string value const rsoperatorptr filter value home finder projects tiflash dbms src storages deltamerge tests gtest dm minmax index cpp error parameter filter is unused bool checkmatch const string test case context context const string type const string value const rsoperatorptr filter filter home finder projects tiflash dbms src storages deltamerge tests gtest dm minmax index cpp error parameter test case is unused bool checkdelmatch const string test case context context const string type const string value const rsoperatorptr filter test case home finder projects tiflash dbms src storages deltamerge tests gtest dm minmax index cpp error parameter type is unused bool checkdelmatch const string test case context context const string type const string value const rsoperatorptr filter type home finder projects tiflash dbms src storages deltamerge tests gtest dm minmax index cpp error parameter value is unused bool checkdelmatch const string test case context context const string type const string value const rsoperatorptr filter value 
home finder projects tiflash dbms src storages deltamerge tests gtest dm minmax index cpp error parameter filter is unused bool checkdelmatch const string test case context context const string type const string value const rsoperatorptr filter filter home finder projects tiflash dbms src storages deltamerge tests gtest dm minmax index cpp error parameter test case is unused bool checkpkmatch const string test case context context const string type const string pk value const rsoperatorptr filter bool is common handle test case home finder projects tiflash dbms src storages deltamerge tests gtest dm minmax index cpp error parameter type is unused bool checkpkmatch const string test case context context const string type const string pk value const rsoperatorptr filter bool is common handle type home finder projects tiflash dbms src storages deltamerge tests gtest dm minmax index cpp error parameter pk value is unused bool checkpkmatch const string test case context context const string type const string pk value const rsoperatorptr filter bool is common handle pk value home finder projects tiflash dbms src storages deltamerge tests gtest dm minmax index cpp error parameter filter is unused bool checkpkmatch const string test case context context const string type const string pk value const rsoperatorptr filter bool is common handle filter warnings and error generated error while processing home finder projects tiflash dbms src storages deltamerge tests gtest dm minmax index cpp warnings and errors generated error while processing home finder projects tiflash dbms src storages deltamerge tests gtest dm minmax index cpp warnings and errors generated error while processing home finder projects tiflash dbms src storages deltamerge tests gtest dm minmax index cpp suppressed warnings in non user code nolint use header filter to display errors from all non system headers use system headers to display errors from system headers as well warnings treated as errors
0
7,705
2,610,433,998
IssuesEvent
2015-02-26 20:22:11
chrsmith/scribefire-chrome
https://api.github.com/repos/chrsmith/scribefire-chrome
opened
can't download from web site
auto-migrated Priority-Medium Type-Defect
``` What's the problem? click on download and just get a page full of code What browser are you using? Safari 5.1 What Operating system are you using Mac OS 10.6.8 What version of ScribeFire are you running? no idea, it's on the google bar What Blog Type are you having this problem with? Please include version # if known or applicable I guess scribefire doesn't work for me now. Goodbye and thank you ``` ----- Original issue reported on code.google.com by `Andrew.s...@gmail.com` on 13 Jun 2014 at 9:57
1.0
can't download from web site - ``` What's the problem? click on download and just get a page full of code What browser are you using? Safari 5.1 What Operating system are you using Mac OS 10.6.8 What version of ScribeFire are you running? no idea, it's on the google bar What Blog Type are you having this problem with? Please include version # if known or applicable I guess scribefire doesn't work for me now. Goodbye and thank you ``` ----- Original issue reported on code.google.com by `Andrew.s...@gmail.com` on 13 Jun 2014 at 9:57
non_main
can t download from web site what s the problem click on download and just get a page full of code what browser are you using safari what operating system are you using mac os what version of scribefire are you running no idea it s on the google bar what blog type are you having this problem with please include version if known or applicable i guess scribefire doesn t work for me now goodbye and thank you original issue reported on code google com by andrew s gmail com on jun at
0
2,414
8,569,273,905
IssuesEvent
2018-11-11 08:45:30
diofant/diofant
https://api.github.com/repos/diofant/diofant
opened
Change used polynomial representation to sparse for Poly class
enhancement maintainability polys
Poly's are the only users of DMP in the codebase, so polyclasses.py could then be safely removed.
True
Change used polynomial representation to sparse for Poly class - Poly's are the only users of DMP in the codebase, so polyclasses.py could then be safely removed.
main
change used polynomial representation to sparse for poly class poly s are the only users of dmp in the codebase so polyclasses py could then be safely removed
1
4,740
2,871,982,018
IssuesEvent
2015-06-08 08:51:33
dotse/zonemaster-backend
https://api.github.com/repos/dotse/zonemaster-backend
closed
Method #5 (start_domain_test) can't be run using API.md
documentation Prio High
This document (https://github.com/dotse/zonemaster-backend/blob/master/docs/API.md) needs to have functioning examples (that are always updated if the API changes). When I try to run method "start_domain_test" it returns: {"error":{"message":"Failed to parse json","code":-32700}} Please add a working example to the document (if some API-calls need a previous call to work, like "get_test" etc, then please note that in the document and try to have an existing call to create a test exists before that).
1.0
Method #5 (start_domain_test) can't be run using API.md - This document (https://github.com/dotse/zonemaster-backend/blob/master/docs/API.md) needs to have functioning examples (that are always updated if the API changes). When I try to run method "start_domain_test" it returns: {"error":{"message":"Failed to parse json","code":-32700}} Please add a working example to the document (if some API-calls need a previous call to work, like "get_test" etc, then please note that in the document and try to have an existing call to create a test exists before that).
non_main
method start domain test can t be run using api md this document needs to have functioning examples that are always updated if the api changes when i try to run method start domain test it returns error message failed to parse json code please add a working example to the document if some api calls need a previous call to work like get test etc then please note that in the document and try to have an existing call to create a test exists before that
0
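Error code -32700 is the standard JSON-RPC "parse error", which typically means the request body was not valid JSON or was sent without a JSON content type. Below is a minimal sketch of a well-formed call, assuming the backend speaks standard JSON-RPC 2.0 over HTTP; the endpoint URL and the `params` shape are hypothetical guesses, not taken from API.md.

```python
# Sketch only: localhost:5000 and the "domain" parameter are assumptions.
import requests

payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "start_domain_test",
    "params": {"domain": "example.com"},  # hypothetical parameter shape
}

# requests' json= kwarg serializes the payload and sets Content-Type to
# application/json, which avoids the "Failed to parse json" class of errors.
resp = requests.post("http://localhost:5000", json=payload)
print(resp.json())  # a malformed request would come back as code -32700
```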
5,811
30,769,250,585
IssuesEvent
2023-07-30 17:44:40
MDAnalysis/mdanalysis
https://api.github.com/repos/MDAnalysis/mdanalysis
opened
MAINT, CI: Cirrus resource usage changes coming September 1st
maintainability Continuous Integration
This isn't particularly surprising for those of us who were around for a similar transition for Travis CI, but here is the relevant post: https://cirrus-ci.org/blog/2023/07/17/limiting-free-usage-of-cirrus-ci/ This is causing some upstream activity in NumPy/SciPy, mostly to try to use Cirrus more sparingly/efficiently, I guess we can try to follow what they do if it is practical. Of course, reshuffling CI services gets a bit exhausting!
True
MAINT, CI: Cirrus resource usage changes coming September 1st - This isn't particularly surprising for those of us who were around for a similar transition for Travis CI, but here is the relevant post: https://cirrus-ci.org/blog/2023/07/17/limiting-free-usage-of-cirrus-ci/ This is causing some upstream activity in NumPy/SciPy, mostly to try to use Cirrus more sparingly/efficiently, I guess we can try to follow what they do if it is practical. Of course, reshuffling CI services gets a bit exhausting!
main
maint ci cirrus resource usage changes coming september this isn t particularly surprising for those of us who were around for a similar transition for travis ci but here is the relevant post this is causing some upstream activity in numpy scipy mostly to try to use cirrus more sparingly efficiently i guess we can try to follow what they do if it is practical of course reshuffling ci services gets a bit exhausting
1
3,339
12,951,673,466
IssuesEvent
2020-07-19 17:38:58
MDAnalysis/mdanalysis
https://api.github.com/repos/MDAnalysis/mdanalysis
closed
MAINT: explicit dtype for ragged arrays
maintainability
Creating NumPy arrays from ragged data structures without explicit specification of `dtype=object` is deprecated following the acceptance of [NEP 34](https://numpy.org/neps/nep-0034-infer-dtype-is-object.html). We have a few cases emitting warnings from our full test suite--we could possibly clean up proactively using a similar approach to https://github.com/MDAnalysis/mdanalysis/pull/2834.
True
MAINT: explicit dtype for ragged arrays - Creating NumPy arrays from ragged data structures without explicit specification of `dtype=object` is deprecated following the acceptance of [NEP 34](https://numpy.org/neps/nep-0034-infer-dtype-is-object.html). We have a few cases emitting warnings from our full test suite--we could possibly clean up proactively using a similar approach to https://github.com/MDAnalysis/mdanalysis/pull/2834.
main
maint explicit dtype for ragged arrays creating numpy arrays from ragged data structures without explicit specification of dtype object is deprecated following the acceptance of we have a few cases emitting warnings from our full test suite we could possibly clean up proactively using a similar approach to
1
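A tiny sketch of the NEP 34 behavior the issue refers to; the version boundaries are approximate (NumPy first warned, and later releases raise outright):

```python
# Sketch of the NEP 34 deprecation; exact NumPy version thresholds are approximate.
import warnings
import numpy as np

ragged = [[1, 2, 3], [4, 5]]  # rows of unequal length

# Deprecated path: NumPy ~1.19 warns, newer releases raise instead.
with warnings.catch_warnings():
    warnings.simplefilter("error")
    try:
        np.array(ragged)
    except Exception as exc:
        print(type(exc).__name__, exc)

# Explicit and future-proof, which is what the cleanup in the issue amounts to:
arr = np.array(ragged, dtype=object)
print(arr.shape, arr.dtype)  # (2,) object
```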
932
4,643,956,196
IssuesEvent
2016-09-30 14:59:37
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
reopened
[2.1.2-RC5] Mount issue with nfs storage
affects_2.1 bug_report waiting_on_maintainer
##### ISSUE TYPE - Bug Report ##### COMPONENT NAME mount module ##### ANSIBLE VERSION 2.1.2-rc5 ``` ansible --version /root/.virtualenvs/ansible2/lib/python2.6/site-packages/cryptography/__init__.py:26: DeprecationWarning: Python 2.6 is no longer supported by the Python core team, please upgrade your Python. A future version of cryptography will drop support for Python 2.6 DeprecationWarning ansible 2.1.2.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION <!--- Mention any settings you have changed/added/removed in ansible.cfg (or using the ANSIBLE_* environment variables). --> ##### OS / ENVIRONMENT CentOS 6 ##### SUMMARY I have a similar issue with 2.1.2-RC5 - ```` TASK [drupal : Mount remote shared path] *************************************** fatal: [lx6997]: FAILED! => {"changed": false, "failed": true, "msg": "Error mounting /mnt/sites/www.company.pt: mount.nfs: an incorrect mount option was specified\n"} fatal: [lx1658]: FAILED! => {"changed": false, "failed": true, "msg": "Error mounting /mnt/sites/www.company.pt: mount.nfs: an incorrect mount option was specified\n"} ``` Task is: ``` - name: Mount remote shared path mount: > name={{ drupal_shared_path }} src={{ drupal_shared_source }} opts={{ drupal_shared_options }} fstype=nfs state=mounted ``` With ``{{ drupal_shared_options }}`` being: ``` drupal_shared_options: "tcp,timeo=600,noacl,rsize=65536,wsize=65536,bg,hard,noatime,nodiratime,actimeo=900,auto" ``` If I mount manually files, it works well - so it's not a regression and it worked with 2.1.1 ##### STEPS TO REPRODUCE ##### EXPECTED RESULTS NFS Folder being mounted correctly. ##### ACTUAL RESULTS NFS folder is mounted but at ansible level, it's considered as an error...
True
[2.1.2-RC5] Mount issue with nfs storage - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME mount module ##### ANSIBLE VERSION 2.1.2-rc5 ``` ansible --version /root/.virtualenvs/ansible2/lib/python2.6/site-packages/cryptography/__init__.py:26: DeprecationWarning: Python 2.6 is no longer supported by the Python core team, please upgrade your Python. A future version of cryptography will drop support for Python 2.6 DeprecationWarning ansible 2.1.2.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION <!--- Mention any settings you have changed/added/removed in ansible.cfg (or using the ANSIBLE_* environment variables). --> ##### OS / ENVIRONMENT CentOS 6 ##### SUMMARY I have a similar issue with 2.1.2-RC5 - ```` TASK [drupal : Mount remote shared path] *************************************** fatal: [lx6997]: FAILED! => {"changed": false, "failed": true, "msg": "Error mounting /mnt/sites/www.company.pt: mount.nfs: an incorrect mount option was specified\n"} fatal: [lx1658]: FAILED! => {"changed": false, "failed": true, "msg": "Error mounting /mnt/sites/www.company.pt: mount.nfs: an incorrect mount option was specified\n"} ``` Task is: ``` - name: Mount remote shared path mount: > name={{ drupal_shared_path }} src={{ drupal_shared_source }} opts={{ drupal_shared_options }} fstype=nfs state=mounted ``` With ``{{ drupal_shared_options }}`` being: ``` drupal_shared_options: "tcp,timeo=600,noacl,rsize=65536,wsize=65536,bg,hard,noatime,nodiratime,actimeo=900,auto" ``` If I mount manually files, it works well - so it's not a regression and it worked with 2.1.1 ##### STEPS TO REPRODUCE ##### EXPECTED RESULTS NFS Folder being mounted correctly. ##### ACTUAL RESULTS NFS folder is mounted but at ansible level, it's considered as an error...
main
mount issue with nfs storage issue type bug report component name mount module ansible version ansible version root virtualenvs lib site packages cryptography init py deprecationwarning python is no longer supported by the python core team please upgrade your python a future version of cryptography will drop support for python deprecationwarning ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment centos summary i have a similar issue with task fatal failed changed false failed true msg error mounting mnt sites mount nfs an incorrect mount option was specified n fatal failed changed false failed true msg error mounting mnt sites mount nfs an incorrect mount option was specified n task is name mount remote shared path mount name drupal shared path src drupal shared source opts drupal shared options fstype nfs state mounted with drupal shared options being drupal shared options tcp timeo noacl rsize wsize bg hard noatime nodiratime actimeo auto if i mount manually files it works well so it s not a regression and it worked with steps to reproduce expected results nfs folder being mounted correctly actual results nfs folder is mounted but at ansible level it s considered as an error
1
4,424
22,794,596,786
IssuesEvent
2022-07-10 14:20:43
Lissy93/dashy
https://api.github.com/repos/Lissy93/dashy
closed
[FEATURE_REQUEST] Sabnzbd widget
🦄 Feature Request 👤 Awaiting Maintainer Response
### Is your feature request related to a problem? If so, please describe. No problem, but I do miss a feature available in Heimdall. No urgency for you to reply; take your time. As an aside, I love the app. It's a little memory intensive bit it is incredibly flexible and looks to do everything I'm looking for. Icon handling is great. ### Describe the solution you'd like 1) Display current sizeleft and speed for live queue - This is available in Heimdall 2) Display most recent downloads from history (my example is showing most recent 3) - this would be an extension I believe this meets your requirements of: 1. Publicly accesable API - Sabnzbd API details are here: (https://sabnzbd.org/wiki/configuration/3.6/api) 2. CORS and HTTPS enabled - Actually I'm not familiar with CORS and I see no mention of it on the API page. I have HTTPS turned off on my Sabnzbd server but I could easily turn it on. 3. Free to use - I'm hosting it locally so it is free. 4. Allow for use in their TOS - I expect so (Heimdall is using it) 5. Would be useful for others - I expect so Sizeleft and speed are queried by this commend: `http://[ip:port]/sabnzbd/api?output=xml&apikey=[apikey]&mode=queue` with the following results: `<queue> <version>3.6.0</version> <paused>False</paused> <pause_int>0</pause_int> <paused_all>False</paused_all> <diskspace1>383.36</diskspace1> <diskspace2>4913.81</diskspace2> <diskspace1_norm>383.4 G</diskspace1_norm> <diskspace2_norm>4.8 T</diskspace2_norm> <diskspacetotal1>476.69</diskspacetotal1> <diskspacetotal2>14900.06</diskspacetotal2> <loadavg>0.79 | 0.42 | 0.32 | V=114M R=82M</loadavg> <speedlimit>50</speedlimit> <speedlimit_abs>26214400.0</speedlimit_abs> <have_warnings>0</have_warnings> <finishaction/> <quota>0 </quota> <have_quota>False</have_quota> <left_quota>0 </left_quota> <cache_art>0</cache_art> <cache_size>0 B</cache_size> <kbpersec>0.00</kbpersec> **<speed>0 </speed>** <mbleft>0.00</mbleft> <mb>0.00</mb> **<sizeleft>0 B</sizeleft>** <size>0 B</size> <noofslots_total>0</noofslots_total> <noofslots>0</noofslots> <start>0</start> <limit>0</limit> <finish>0</finish> <status>Idle</status> <timeleft>0:00:00</timeleft> <scripts/> <categories> <category>Default</category> <category>tv</category> <category>movies</category> <category>software</category> <category>audio</category> <category>readarr</category> </categories> <slots/> </queue>` History is available via this commend: `http://[ip:port]/sabnzbd/api?output=xml&apikey=[apikey]&mode=history&limit=3` with the following results: `<history> <total_size>5.7 T</total_size> <month_size>20.6 G</month_size> <week_size>13.6 G</week_size> <day_size>0 </day_size> <slots> <slot> <id>12</id> <completed>1643546003</completed> <name>Diners Drive-Ins and Dives S42E01 From Europe to Asia 720p WEBRip x264-KOMPOST</name> <nzb_name>Diners.Drive-Ins.and.Dives.S42E01.From.Europe.to.Asia.720p.WEBRip.x264-KOMPOST.nzb</nzb_name> <category>tv</category> <pp>D</pp> <script>None</script> <report/> <url>Diners.Drive-Ins.and.Dives.S42E01.From.Europe.to.Asia.720p.WEBRip.x264-KOMPOST.nzb</url> <status>Completed</status> <nzo_id>SABnzbd_nzo_xsjmambl</nzo_id> <storage>/Media/Downloads/SABnzbd/tv/Diners Drive-Ins and Dives S42E01 From Europe to Asia 720p WEBRip x264-KOMPOST/Diners.Drive-Ins.and.Dives.S42E01.From.Europe.to.Asia.720p.WEBRip.x264-KOMPOST.mkv</storage> <path>/incomplete-downloads/Sabnzbs-incomplete-downloads/Diners.Drive-Ins.and.Dives.S42E01.From.Europe.to.Asia.720p.WEBRip.x264-KOMPOST</path> <script_log/> <script_line/> 
<download_time>39</download_time> <postproc_time>2</postproc_time> <stage_log> <slot> <name>Source</name> <actions> <item>Diners.Drive-Ins.and.Dives.S42E01.From.Europe.to.Asia.720p.WEBRip.x264-KOMPOST.nzb</item> </actions> </slot> <slot> <name>Download</name> <actions> <item>Downloaded in 39 seconds at an average of 12.0 MB/s<br/>Age: 23h</item> </actions> </slot> <slot> <name>Servers</name> <actions> <item>Newsgroupdirect*Usenetexpress=470.5 MB</item> </actions> </slot> <slot> <name>Repair</name> <actions> <item>[f3a59282b6b2e7371b40ef91519b753a] Quick Check OK</item> </actions> </slot> <slot> <name>Unpack</name> <actions> <item>[f3a59282b6b2e7371b40ef91519b753a] Unpacked 1 files/folders in 2 seconds</item> </actions> </slot> </stage_log> <downloaded>493332395</downloaded> <completeness/> <fail_message/> <url_info/> <bytes>493332395</bytes> <meta/> <series>diners drive-ins and dives/42/1</series> <md5sum>900ba45a759377097519e9fd11722ef6</md5sum> <password/> <action_line/> <size>470.5 MB</size> <loaded>False</loaded> <retry>0</retry> </slot> <slot> <id>8</id> <completed>1639547751</completed> <name>Dogs 101 S03E04 720p HDTV x264-CBFM</name> <nzb_name>Dogs.101.S03E04.720p.HDTV.x264-CBFM.nzb</nzb_name> <category>tv</category> <pp>D</pp> <script>None</script> <report/> <url>Dogs.101.S03E04.720p.HDTV.x264-CBFM.nzb</url> <status>Completed</status> <nzo_id>SABnzbd_nzo_9hm_9e9y</nzo_id> <storage>/Media/Downloads/SABnzbd/tv/Dogs 101 S03E04 720p HDTV x264-CBFM</storage> <path>/incomplete-downloads/Sabnzbs-incomplete-downloads/Dogs.101.S03E04.720p.HDTV.x264-CBFM</path> <script_log/> <script_line/> <download_time>114</download_time> <postproc_time>21</postproc_time> <stage_log> <slot> <name>Source</name> <actions> <item>Dogs.101.S03E04.720p.HDTV.x264-CBFM.nzb</item> </actions> </slot> <slot> <name>Download</name> <actions> <item>Downloaded in 1 min 54 seconds at an average of 11.4 MB/s<br/>Age: 2140d</item> </actions> </slot> <slot> <name>Servers</name> <actions> <item>Newsgroupdirect*Usenetexpress=1.2 GB, Thecubenet*Usenetexpress=3 KB, Vipernews=3 KB, Maximumusenet*Omicron=114.7 MB</item> </actions> </slot> <slot> <name>Repair</name> <actions> <item>[dogs101.0304.720p-cbfm] Quick Check OK</item> </actions> </slot> <slot> <name>Unpack</name> <actions> <item>[dogs101.0304.720p-cbfm] Unpacked 1 files/folders in 21 seconds</item> </actions> </slot> </stage_log> <downloaded>1368672322</downloaded> <completeness/> <fail_message/> <url_info/> <bytes>1368672322</bytes> <meta/> <series>dogs 101/3/4</series> <md5sum>57fc20c4efbd669d2384ac6e28d3346e</md5sum> <password/> <action_line/> <size>1.3 GB</size> <loaded>False</loaded> <retry>0</retry> </slot> <slot> <id>7</id> <completed>1639547700</completed> <name>Dogs 101 S04E04 Grooming Special 1080i HDTV DD5 1 MPEG2-TrollHD</name> <nzb_name>Dogs 101 S04E04 Grooming Special 1080i HDTV DD5.1 MPEG2-TrollHD.nzb</nzb_name> <category>tv</category> <pp>D</pp> <script>None</script> <report/> <url>Dogs 101 S04E04 Grooming Special 1080i HDTV DD5.1 MPEG2-TrollHD.nzb</url> <status>Completed</status> <nzo_id>SABnzbd_nzo_hbw9kocp</nzo_id> <storage>/Media/Downloads/SABnzbd/tv/Dogs 101 S04E04 Grooming Special 1080i HDTV DD5 1 MPEG2-TrollHD/Dogs 101 S04E04 Grooming Special 1080i HDTV DD5.1 MPEG2-TrollHD.ts</storage> <path>/incomplete-downloads/Sabnzbs-incomplete-downloads/Dogs 101 S04E04 Grooming Special 1080i HDTV DD5.1 MPEG2-TrollHD</path> <script_log/> <script_line/> <download_time>403</download_time> <postproc_time>79</postproc_time> <stage_log> <slot> <name>Source</name> 
<actions> <item>Dogs 101 S04E04 Grooming Special 1080i HDTV DD5.1 MPEG2-TrollHD.nzb</item> </actions> </slot> <slot> <name>Download</name> <actions> <item>Downloaded in 6 mins 43 seconds at an average of 11.3 MB/s<br/>Age: 3738d</item> </actions> </slot> <slot> <name>Servers</name> <actions> <item>Newsgroupdirect*Usenetexpress=149 KB, Thecubenet*Usenetexpress=152 KB, Vipernews=152 KB, Maximumusenet*Omicron=4.4 GB</item> </actions> </slot> <slot> <name>Repair</name> <actions> <item>[Dogs 101 S04E04 Grooming Special 1080i HDTV DD5.1 MPEG2-TrollHD] Quick Check OK</item> </actions> </slot> <slot> <name>Unpack</name> <actions> <item>[Dogs 101 S04E04 Grooming Special 1080i HDTV DD5.1 MPEG2-TrollHD] Unpacked 1 files/folders in 1 min 19 seconds</item> </actions> </slot> </stage_log> <downloaded>4769620721</downloaded> <completeness/> <fail_message/> <url_info/> <bytes>4769620721</bytes> <meta/> <series>dogs 101/4/4</series> <md5sum>b89cf7f92794e65c7bc65487f002a1be</md5sum> <password/> <action_line/> <size>4.4 GB</size> <loaded>False</loaded> <retry>0</retry> </slot> </slots> <noofslots>7</noofslots> <last_history_update>115</last_history_update> <version>3.6.0</version> </history>` XML formatting was stripped as part of the copy/paste bu tI'm sure you get the idea. I'll attach file to make it easier. [mode=queue.txt](https://github.com/Lissy93/dashy/files/8883292/mode.queue.txt) [mode=history.txt](https://github.com/Lissy93/dashy/files/8883293/mode.history.txt) ### Priority Low (Nice-to-have) ### Is this something you would be keen to implement _No response_
True
[FEATURE_REQUEST] Sabnzbd widget - ### Is your feature request related to a problem? If so, please describe. No problem, but I do miss a feature available in Heimdall. No urgency for you to reply; take your time. As an aside, I love the app. It's a little memory intensive bit it is incredibly flexible and looks to do everything I'm looking for. Icon handling is great. ### Describe the solution you'd like 1) Display current sizeleft and speed for live queue - This is available in Heimdall 2) Display most recent downloads from history (my example is showing most recent 3) - this would be an extension I believe this meets your requirements of: 1. Publicly accesable API - Sabnzbd API details are here: (https://sabnzbd.org/wiki/configuration/3.6/api) 2. CORS and HTTPS enabled - Actually I'm not familiar with CORS and I see no mention of it on the API page. I have HTTPS turned off on my Sabnzbd server but I could easily turn it on. 3. Free to use - I'm hosting it locally so it is free. 4. Allow for use in their TOS - I expect so (Heimdall is using it) 5. Would be useful for others - I expect so Sizeleft and speed are queried by this commend: `http://[ip:port]/sabnzbd/api?output=xml&apikey=[apikey]&mode=queue` with the following results: `<queue> <version>3.6.0</version> <paused>False</paused> <pause_int>0</pause_int> <paused_all>False</paused_all> <diskspace1>383.36</diskspace1> <diskspace2>4913.81</diskspace2> <diskspace1_norm>383.4 G</diskspace1_norm> <diskspace2_norm>4.8 T</diskspace2_norm> <diskspacetotal1>476.69</diskspacetotal1> <diskspacetotal2>14900.06</diskspacetotal2> <loadavg>0.79 | 0.42 | 0.32 | V=114M R=82M</loadavg> <speedlimit>50</speedlimit> <speedlimit_abs>26214400.0</speedlimit_abs> <have_warnings>0</have_warnings> <finishaction/> <quota>0 </quota> <have_quota>False</have_quota> <left_quota>0 </left_quota> <cache_art>0</cache_art> <cache_size>0 B</cache_size> <kbpersec>0.00</kbpersec> **<speed>0 </speed>** <mbleft>0.00</mbleft> <mb>0.00</mb> **<sizeleft>0 B</sizeleft>** <size>0 B</size> <noofslots_total>0</noofslots_total> <noofslots>0</noofslots> <start>0</start> <limit>0</limit> <finish>0</finish> <status>Idle</status> <timeleft>0:00:00</timeleft> <scripts/> <categories> <category>Default</category> <category>tv</category> <category>movies</category> <category>software</category> <category>audio</category> <category>readarr</category> </categories> <slots/> </queue>` History is available via this commend: `http://[ip:port]/sabnzbd/api?output=xml&apikey=[apikey]&mode=history&limit=3` with the following results: `<history> <total_size>5.7 T</total_size> <month_size>20.6 G</month_size> <week_size>13.6 G</week_size> <day_size>0 </day_size> <slots> <slot> <id>12</id> <completed>1643546003</completed> <name>Diners Drive-Ins and Dives S42E01 From Europe to Asia 720p WEBRip x264-KOMPOST</name> <nzb_name>Diners.Drive-Ins.and.Dives.S42E01.From.Europe.to.Asia.720p.WEBRip.x264-KOMPOST.nzb</nzb_name> <category>tv</category> <pp>D</pp> <script>None</script> <report/> <url>Diners.Drive-Ins.and.Dives.S42E01.From.Europe.to.Asia.720p.WEBRip.x264-KOMPOST.nzb</url> <status>Completed</status> <nzo_id>SABnzbd_nzo_xsjmambl</nzo_id> <storage>/Media/Downloads/SABnzbd/tv/Diners Drive-Ins and Dives S42E01 From Europe to Asia 720p WEBRip x264-KOMPOST/Diners.Drive-Ins.and.Dives.S42E01.From.Europe.to.Asia.720p.WEBRip.x264-KOMPOST.mkv</storage> <path>/incomplete-downloads/Sabnzbs-incomplete-downloads/Diners.Drive-Ins.and.Dives.S42E01.From.Europe.to.Asia.720p.WEBRip.x264-KOMPOST</path> <script_log/> 
<script_line/> <download_time>39</download_time> <postproc_time>2</postproc_time> <stage_log> <slot> <name>Source</name> <actions> <item>Diners.Drive-Ins.and.Dives.S42E01.From.Europe.to.Asia.720p.WEBRip.x264-KOMPOST.nzb</item> </actions> </slot> <slot> <name>Download</name> <actions> <item>Downloaded in 39 seconds at an average of 12.0 MB/s<br/>Age: 23h</item> </actions> </slot> <slot> <name>Servers</name> <actions> <item>Newsgroupdirect*Usenetexpress=470.5 MB</item> </actions> </slot> <slot> <name>Repair</name> <actions> <item>[f3a59282b6b2e7371b40ef91519b753a] Quick Check OK</item> </actions> </slot> <slot> <name>Unpack</name> <actions> <item>[f3a59282b6b2e7371b40ef91519b753a] Unpacked 1 files/folders in 2 seconds</item> </actions> </slot> </stage_log> <downloaded>493332395</downloaded> <completeness/> <fail_message/> <url_info/> <bytes>493332395</bytes> <meta/> <series>diners drive-ins and dives/42/1</series> <md5sum>900ba45a759377097519e9fd11722ef6</md5sum> <password/> <action_line/> <size>470.5 MB</size> <loaded>False</loaded> <retry>0</retry> </slot> <slot> <id>8</id> <completed>1639547751</completed> <name>Dogs 101 S03E04 720p HDTV x264-CBFM</name> <nzb_name>Dogs.101.S03E04.720p.HDTV.x264-CBFM.nzb</nzb_name> <category>tv</category> <pp>D</pp> <script>None</script> <report/> <url>Dogs.101.S03E04.720p.HDTV.x264-CBFM.nzb</url> <status>Completed</status> <nzo_id>SABnzbd_nzo_9hm_9e9y</nzo_id> <storage>/Media/Downloads/SABnzbd/tv/Dogs 101 S03E04 720p HDTV x264-CBFM</storage> <path>/incomplete-downloads/Sabnzbs-incomplete-downloads/Dogs.101.S03E04.720p.HDTV.x264-CBFM</path> <script_log/> <script_line/> <download_time>114</download_time> <postproc_time>21</postproc_time> <stage_log> <slot> <name>Source</name> <actions> <item>Dogs.101.S03E04.720p.HDTV.x264-CBFM.nzb</item> </actions> </slot> <slot> <name>Download</name> <actions> <item>Downloaded in 1 min 54 seconds at an average of 11.4 MB/s<br/>Age: 2140d</item> </actions> </slot> <slot> <name>Servers</name> <actions> <item>Newsgroupdirect*Usenetexpress=1.2 GB, Thecubenet*Usenetexpress=3 KB, Vipernews=3 KB, Maximumusenet*Omicron=114.7 MB</item> </actions> </slot> <slot> <name>Repair</name> <actions> <item>[dogs101.0304.720p-cbfm] Quick Check OK</item> </actions> </slot> <slot> <name>Unpack</name> <actions> <item>[dogs101.0304.720p-cbfm] Unpacked 1 files/folders in 21 seconds</item> </actions> </slot> </stage_log> <downloaded>1368672322</downloaded> <completeness/> <fail_message/> <url_info/> <bytes>1368672322</bytes> <meta/> <series>dogs 101/3/4</series> <md5sum>57fc20c4efbd669d2384ac6e28d3346e</md5sum> <password/> <action_line/> <size>1.3 GB</size> <loaded>False</loaded> <retry>0</retry> </slot> <slot> <id>7</id> <completed>1639547700</completed> <name>Dogs 101 S04E04 Grooming Special 1080i HDTV DD5 1 MPEG2-TrollHD</name> <nzb_name>Dogs 101 S04E04 Grooming Special 1080i HDTV DD5.1 MPEG2-TrollHD.nzb</nzb_name> <category>tv</category> <pp>D</pp> <script>None</script> <report/> <url>Dogs 101 S04E04 Grooming Special 1080i HDTV DD5.1 MPEG2-TrollHD.nzb</url> <status>Completed</status> <nzo_id>SABnzbd_nzo_hbw9kocp</nzo_id> <storage>/Media/Downloads/SABnzbd/tv/Dogs 101 S04E04 Grooming Special 1080i HDTV DD5 1 MPEG2-TrollHD/Dogs 101 S04E04 Grooming Special 1080i HDTV DD5.1 MPEG2-TrollHD.ts</storage> <path>/incomplete-downloads/Sabnzbs-incomplete-downloads/Dogs 101 S04E04 Grooming Special 1080i HDTV DD5.1 MPEG2-TrollHD</path> <script_log/> <script_line/> <download_time>403</download_time> <postproc_time>79</postproc_time> <stage_log> <slot> 
<name>Source</name> <actions> <item>Dogs 101 S04E04 Grooming Special 1080i HDTV DD5.1 MPEG2-TrollHD.nzb</item> </actions> </slot> <slot> <name>Download</name> <actions> <item>Downloaded in 6 mins 43 seconds at an average of 11.3 MB/s<br/>Age: 3738d</item> </actions> </slot> <slot> <name>Servers</name> <actions> <item>Newsgroupdirect*Usenetexpress=149 KB, Thecubenet*Usenetexpress=152 KB, Vipernews=152 KB, Maximumusenet*Omicron=4.4 GB</item> </actions> </slot> <slot> <name>Repair</name> <actions> <item>[Dogs 101 S04E04 Grooming Special 1080i HDTV DD5.1 MPEG2-TrollHD] Quick Check OK</item> </actions> </slot> <slot> <name>Unpack</name> <actions> <item>[Dogs 101 S04E04 Grooming Special 1080i HDTV DD5.1 MPEG2-TrollHD] Unpacked 1 files/folders in 1 min 19 seconds</item> </actions> </slot> </stage_log> <downloaded>4769620721</downloaded> <completeness/> <fail_message/> <url_info/> <bytes>4769620721</bytes> <meta/> <series>dogs 101/4/4</series> <md5sum>b89cf7f92794e65c7bc65487f002a1be</md5sum> <password/> <action_line/> <size>4.4 GB</size> <loaded>False</loaded> <retry>0</retry> </slot> </slots> <noofslots>7</noofslots> <last_history_update>115</last_history_update> <version>3.6.0</version> </history>` XML formatting was stripped as part of the copy/paste bu tI'm sure you get the idea. I'll attach file to make it easier. [mode=queue.txt](https://github.com/Lissy93/dashy/files/8883292/mode.queue.txt) [mode=history.txt](https://github.com/Lissy93/dashy/files/8883293/mode.history.txt) ### Priority Low (Nice-to-have) ### Is this something you would be keen to implement _No response_
main
sabnzbd widget is your feature request related to a problem if so please describe no problem but i do miss a feature available in heimdall no urgency for you to reply take your time as an aside i love the app it s a little memory intensive bit it is incredibly flexible and looks to do everything i m looking for icon handling is great describe the solution you d like display current sizeleft and speed for live queue this is available in heimdall display most recent downloads from history my example is showing most recent this would be an extension i believe this meets your requirements of publicly accesable api sabnzbd api details are here cors and https enabled actually i m not familiar with cors and i see no mention of it on the api page i have https turned off on my sabnzbd server but i could easily turn it on free to use i m hosting it locally so it is free allow for use in their tos i expect so heimdall is using it would be useful for others i expect so sizeleft and speed are queried by this commend http sabnzbd api output xml apikey mode queue with the following results false false g t v r false b b b idle default tv movies software audio readarr history is available via this commend http sabnzbd api output xml apikey mode history limit with the following results t g g diners drive ins and dives from europe to asia webrip kompost diners drive ins and dives from europe to asia webrip kompost nzb tv d none diners drive ins and dives from europe to asia webrip kompost nzb completed sabnzbd nzo xsjmambl media downloads sabnzbd tv diners drive ins and dives from europe to asia webrip kompost diners drive ins and dives from europe to asia webrip kompost mkv incomplete downloads sabnzbs incomplete downloads diners drive ins and dives from europe to asia webrip kompost source diners drive ins and dives from europe to asia webrip kompost nzb download downloaded in seconds at an average of mb s age servers newsgroupdirect usenetexpress mb repair quick check ok unpack unpacked files folders in seconds diners drive ins and dives mb false dogs hdtv cbfm dogs hdtv cbfm nzb tv d none dogs hdtv cbfm nzb completed sabnzbd nzo media downloads sabnzbd tv dogs hdtv cbfm incomplete downloads sabnzbs incomplete downloads dogs hdtv cbfm source dogs hdtv cbfm nzb download downloaded in min seconds at an average of mb s age servers newsgroupdirect usenetexpress gb thecubenet usenetexpress kb vipernews kb maximumusenet omicron mb repair quick check ok unpack unpacked files folders in seconds dogs gb false dogs grooming special hdtv trollhd dogs grooming special hdtv trollhd nzb tv d none dogs grooming special hdtv trollhd nzb completed sabnzbd nzo media downloads sabnzbd tv dogs grooming special hdtv trollhd dogs grooming special hdtv trollhd ts incomplete downloads sabnzbs incomplete downloads dogs grooming special hdtv trollhd source dogs grooming special hdtv trollhd nzb download downloaded in mins seconds at an average of mb s age servers newsgroupdirect usenetexpress kb thecubenet usenetexpress kb vipernews kb maximumusenet omicron gb repair quick check ok unpack unpacked files folders in min seconds dogs gb false xml formatting was stripped as part of the copy paste bu ti m sure you get the idea i ll attach file to make it easier priority low nice to have is this something you would be keen to implement no response
1
1,140
4,998,881,477
IssuesEvent
2016-12-09 21:20:37
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
AWS modules should account for API throttling
affects_2.3 aws bug_report cloud waiting_on_maintainer
##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2_vpc (Applies to _any_ AWS module!) ##### ANSIBLE VERSION Any Ansible version ##### OS / ENVIRONMENT N/A ##### SUMMARY It seems that Ansible (or "Boto" at the bottom layer) doesn't account for Query API Request Rate throttling Amazon enforces. If you do frequent AWS API calls (like I do at the moment, as I frequently create and destroy a very complex environment consisting of many components, because this environment is still in development) this throttling can kick in, give a negative reply to your Ansible task, and the Playbook aborts at that point. Countermeasures: - [API retries](https://docs.aws.amazon.com/general/latest/gr/api-retries.html) - [Backoff background info](https://www.awsarchitectureblog.com/2015/03/backoff.html) - [Retry throttling](https://aws.amazon.com/blogs/developer/introducing-retry-throttling/) ##### STEPS TO REPRODUCE Run a Playbook that does many API calls, like create a VPC, many subnets inside the VPC, Security Groups, ELBs, Internet Gateways, NAT Gateways, Route Tables, EC2 instances, etc. Then destroy them with your "destroy" Playbook. Re-run "create" Playbook. Destroy. Then you're likely to see this rate limiting. ##### EXPECTED RESULTS AWS modules should not fail when the rate limiting is in effect, but should retry until the call succeeds. ##### ACTUAL RESULTS AWS modules fail when the rate limiting is in effect. This manifests as follows: ``` fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "msg": "An error occurred (RequestLimitExceeded) when calling the DescribeAddresses operation: Request limit exceeded.", "success": false} ```
True
AWS modules should account for API throttling - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2_vpc (Applies to _any_ AWS module!) ##### ANSIBLE VERSION Any Ansible version ##### OS / ENVIRONMENT N/A ##### SUMMARY It seems that Ansible (or "Boto" at the bottom layer) doesn't account for Query API Request Rate throttling Amazon enforces. If you do frequent AWS API calls (like I do at the moment, as I frequently create and destroy a very complex environment consisting of many components, because this environment is still in development) this throttling can kick in, give a negative reply to your Ansible task, and the Playbook aborts at that point. Countermeasures: - [API retries](https://docs.aws.amazon.com/general/latest/gr/api-retries.html) - [Backoff background info](https://www.awsarchitectureblog.com/2015/03/backoff.html) - [Retry throttling](https://aws.amazon.com/blogs/developer/introducing-retry-throttling/) ##### STEPS TO REPRODUCE Run a Playbook that does many API calls, like create a VPC, many subnets inside the VPC, Security Groups, ELBs, Internet Gateways, NAT Gateways, Route Tables, EC2 instances, etc. Then destroy them with your "destroy" Playbook. Re-run "create" Playbook. Destroy. Then you're likely to see this rate limiting. ##### EXPECTED RESULTS AWS modules should not fail when the rate limiting is in effect, but should retry until the call succeeds. ##### ACTUAL RESULTS AWS modules fail when the rate limiting is in effect. This manifests as follows: ``` fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "msg": "An error occurred (RequestLimitExceeded) when calling the DescribeAddresses operation: Request limit exceeded.", "success": false} ```
main
aws modules should account for api throttling issue type bug report component name vpc applies to any aws module ansible version any ansible version os environment n a summary it seems that ansible or boto at the bottom layer doesn t account for query api request rate throttling amazon enforces if you do frequent aws api calls like i do at the moment as i frequently create and destroy a very complex environment consisting of many components because this environment is still in development this throttling can kick in give a negative reply to your ansible task and the playbook aborts at that point countermeasures steps to reproduce run a playbook that does many api calls like create a vpc many subnets inside the vpc security groups elbs internet gateways nat gateways route tables instances etc then destroy them with your destroy playbook re run create playbook destroy then you re likely to see this rate limiting expected results aws modules should not fail when the rate limiting is in effect but should retry until the call succeeds actual results aws modules fail when the rate limiting is in effect this manifests as follows fatal failed changed false failed true msg an error occurred requestlimitexceeded when calling the describeaddresses operation request limit exceeded success false
1
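
The AWS throttling record above lists retries with exponential backoff as the countermeasure. A minimal sketch of what that looks like at the boto3/botocore layer the Ansible AWS modules sit on; this is illustrative, not the modules' actual fix, and `make_throttle_safe_client` is a hypothetical name:

```python
# Sketch: opt into botocore's client-side retry handling so throttled calls
# (e.g. the RequestLimitExceeded from DescribeAddresses in the report) are
# retried with exponential backoff instead of failing the playbook.
import boto3
from botocore.config import Config

RETRY_CONFIG = Config(
    retries={
        "max_attempts": 10,  # total attempts, including the first call
        "mode": "adaptive",  # exponential backoff plus client-side rate limiting
    }
)

def make_throttle_safe_client(service="ec2", region="us-east-1"):
    """Return a client whose API calls retry on throttling errors."""
    return boto3.client(service, region_name=region, config=RETRY_CONFIG)

if __name__ == "__main__":
    ec2 = make_throttle_safe_client()
    print(ec2.describe_addresses())
```

Ansible's AWS module utilities eventually grew a comparable retry decorator; the point of the sketch is only that the backoff belongs below the playbook, not in it.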
668
4,195,401,041
IssuesEvent
2016-06-25 18:17:49
duckduckgo/zeroclickinfo-spice
https://api.github.com/repos/duckduckgo/zeroclickinfo-spice
closed
Currency Conversion: not converting, not displaying conversion result
Internal Maintainer Input Requested
The Currency Conversion IA **has a defect**: I did a conversion and it didn't work (i.e., it didn't convert or display anything). See photos below ![selection_005](https://cloud.githubusercontent.com/assets/5355219/16330328/1182c20a-39f1-11e6-89fb-aaf5e14e36ef.png) ![selection_006](https://cloud.githubusercontent.com/assets/5355219/16330444/d951434c-39f1-11e6-90ce-944ca7ecad2f.png) ------ IA Page: http://duck.co/ia/view/currency [Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @MrChrisW
True
Currency Conversion: not converting, not displaying conversion result - The Currency Conversion IA **has a defect**, I did a conversion and it didn't work (ie. didn't convert or display anything). See photo below ![selection_005](https://cloud.githubusercontent.com/assets/5355219/16330328/1182c20a-39f1-11e6-89fb-aaf5e14e36ef.png) ![selection_006](https://cloud.githubusercontent.com/assets/5355219/16330444/d951434c-39f1-11e6-90ce-944ca7ecad2f.png) ------ IA Page: http://duck.co/ia/view/currency [Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @MrChrisW
main
currency conversion not converting not displaying conversion result the currency conversion ia has a defect i did a conversion and it didn t work ie didn t convert or display anything see photo below ia page mrchrisw
1
4,765
24,538,294,614
IssuesEvent
2022-10-11 23:32:03
centerofci/mathesar
https://api.github.com/repos/centerofci/mathesar
closed
Data Explorer frontend - Live demo readiness
work: frontend status: ready restricted: maintainers type: meta
The following checklist is for the live demo readiness of Data Explorer - [x] #1590 - [ ] #1805 - [ ] #1804 - [ ] #1803 - [ ] #1802 - [ ] #1801 - [x] Fix column type calculation bug for summarized columns - [ ] #1596 - [ ] #1800
True
Data Explorer frontend - Live demo readiness - The following checklist is for the live demo readiness of Data Explorer - [x] #1590 - [ ] #1805 - [ ] #1804 - [ ] #1803 - [ ] #1802 - [ ] #1801 - [x] Fix column type calculation bug for summarized columns - [ ] #1596 - [ ] #1800
main
data explorer frontend live demo readiness the following checklist is for the live demo readiness of data explorer fix column type calculation bug for summarized columns
1
20,887
14,224,767,933
IssuesEvent
2020-11-17 20:11:03
forem/forem
https://api.github.com/repos/forem/forem
opened
Preact Components not Unmounting due to InstantClick page lifecycle events.
Forem team area: frontend infrastructure
<!-- Before creating a bug report, try disabling browser extensions to see if the bug is still present. --> **Describe the bug** I know how to fix it @reobin. I will create an issue and put a PR up for it. We need to explicitly call `unmountComponentAtNode` in an `InstantClick.on('change', () => {})` event. This is something that normally we wouldn't have to do, but because we use InstantClick, it's necessary. It also explains why I saw components repeated in the Preact dev tools. I need to do it for pretty much all our components that get mounted via `render`. e.g. ```jsx import { h, render } from 'preact'; import { unmountComponentAtNode } from 'preact/compat'; import { Search } from '../Search'; import 'focus-visible'; document.addEventListener('DOMContentLoaded', () => { const root = document.getElementById('header-search'); render(<Search />, root); InstantClick.on('change', () => { unmountComponentAtNode(root); }); }); ``` **To Reproduce** You will not see this behaviour as it's something under the hood. It's come to light due to issue #11371 **Expected behavior** Preact components should be unmounted when an InstantClick event changes the page. **Screenshots** <!-- If applicable, add screenshots to help explain your problem. --> **Desktop (please complete the following information):** - OS, version: - Browser, version: **Smartphone (please complete the following information):** - Device: - OS, version: - Browser, version: **Additional context** <!-- Add any other context about the problem or helpful links here. -->
1.0
Preact Components not Unmounting due to InstantClick page lifecycle events. - <!-- Before creating a bug report, try disabling browser extensions to see if the bug is still present. --> **Describe the bug** I know how to fix it @reobin. I will create an issue and put a PR up for it. We need to explicitly call `unmountComponentAtNode` in an `InstantClick.on('change', () => {})` event. This is something that normally we wouldn't have to do, but because we use InstantClick, it's necessary. It also explains why I saw components repeated in the Preact dev tools. I need to do it for pretty much all out components that get mounted via `render`. e.g. ```jsx import { h, render } from 'preact'; import { unmountComponentAtNode } from 'preact/compat'; import { Search } from '../Search'; import 'focus-visible'; document.addEventListener('DOMContentLoaded', () => { const root = document.getElementById('header-search'); render(<Search />, root); InstantClick.on('change', () => { unmountComponentAtNode(root); }); }); ``` **To Reproduce** You will not see this behaviour as it's something under the hood. It's come to light due to issue #11371 **Expected behavior** Preact components should be unmounted when an InstantClick even changes the page. **Screenshots** <!-- If applicable, add screenshots to help explain your problem. --> **Desktop (please complete the following information):** - OS, version: - Browser, version: **Smartphone (please complete the following information):** - Device: - OS, version: - Browser, version: **Additional context** <!-- Add any other context about the problem or helpful links here. -->
non_main
preact components not unmounting due to instantclick page lifecycle events describe the bug i know how to fix it reobin i will create an issue and put a pr up for it we need to explicitly call unmountcomponentatnode in an instantclick on change event this is something that normally we wouldn t have to do but because we use instantclick it s necessary it also explains why i saw components repeated in the preact dev tools i need to do it for pretty much all out components that get mounted via render e g jsx import h render from preact import unmountcomponentatnode from preact compat import search from search import focus visible document addeventlistener domcontentloaded const root document getelementbyid header search render root instantclick on change unmountcomponentatnode root to reproduce you will not see this behaviour as it s something under the hood it s come to light due to issue expected behavior preact components should be unmounted when an instantclick even changes the page screenshots desktop please complete the following information os version browser version smartphone please complete the following information device os version browser version additional context
0
1,598
6,572,380,847
IssuesEvent
2017-09-11 01:52:24
ansible/ansible-modules-extras
https://api.github.com/repos/ansible/ansible-modules-extras
closed
asa_config: Clearer message when `authorize: yes` isn't set
affects_2.2 bug_report networking P1 waiting_on_maintainer
##### ISSUE TYPE - Bug Report ##### COMPONENT NAME asa_command ##### ANSIBLE VERSION ``` ansible 2.2.0 (devel a265d2d77d) last updated 2016/09/28 10:32:34 (GMT +100) lib/ansible/modules/core: (devel f3a2fb3dae) last updated 2016/09/28 10:32:38 (GMT +100) lib/ansible/modules/extras: (devel aeecd8b09e) last updated 2016/09/28 10:32:39 (GMT +100) ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY Seems like asa_config is fairly broken, and unable to parse the output of `show running-config`. ``` yaml File "/tmp/ansible_rrZjWn/ansible_modlib.zip/ansible/module_utils/asa.py", line 95, in get_config File "/tmp/ansible_rrZjWn/ansible_modlib.zip/ansible/module_utils/shell.py", line 255, in run_commands File "/tmp/ansible_rrZjWn/ansible_modlib.zip/ansible/module_utils/shell.py", line 252, in execute ansible.module_utils.network.NetworkError: matched error in response: show running-config ^ ERROR: % Invalid input detected at '^' marker. asav-1> ``` ##### STEPS TO REPRODUCE ``` yaml - name: setup asa_config: lines: ['hostname firewall'] provider: "{{ cli }}" ``` or even ``` yaml - name: setup asa_config: backup: true provider: "{{ cli }}" ``` ##### EXPECTED RESULTS ##### ACTUAL RESULTS ``` yaml TASK [test_asa_config : setup] ************************************************* task path: /home/johnb/git/ansible-inc/test-asa/roles/test_asa_config/tests/cli/toplevel_nonidempotent.yaml:4 Using module file /home/johnb/git/ansible-inc/ansible/lib/ansible/modules/extras/network/asa/asa_config.py <asa01> ESTABLISH LOCAL CONNECTION FOR USER: johnb <asa01> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1475061048.81-174384388459258 `" && echo ansible-tmp-1475061048.81-174384388459258="` echo $HOME/.ansible/tmp/ansible-tmp-1475061048.81-174384388459258 `" ) && sleep 0' <asa01> PUT /tmp/tmpL9fzWq TO /home/johnb/.ansible/tmp/ansible-tmp-1475061048.81-174384388459258/asa_config.py <asa01> EXEC /bin/sh -c 'chmod u+x /home/johnb/.ansible/tmp/ansible-tmp-1475061048.81-174384388459258/ /home/johnb/.ansible/tmp/ansible-tmp-1475061048.81-174384388459258/asa_config.py && sleep 0' <asa01> EXEC /bin/sh -c 'python /home/johnb/.ansible/tmp/ansible-tmp-1475061048.81-174384388459258/asa_config.py; rm -rf "/home/johnb/.ansible/tmp/ansible-tmp-1475061048.81-174384388459258/" > /dev/null 2>&1 && sleep 0' An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File "/tmp/ansible_rrZjWn/ansible_module_asa_config.py", line 327, in <module> main() File "/tmp/ansible_rrZjWn/ansible_module_asa_config.py", line 316, in main result['__backup__'] = module.config.get_config() File "/tmp/ansible_rrZjWn/ansible_modlib.zip/ansible/module_utils/netcfg.py", line 61, in get_config File "/tmp/ansible_rrZjWn/ansible_modlib.zip/ansible/module_utils/asa.py", line 95, in get_config File "/tmp/ansible_rrZjWn/ansible_modlib.zip/ansible/module_utils/shell.py", line 255, in run_commands File "/tmp/ansible_rrZjWn/ansible_modlib.zip/ansible/module_utils/shell.py", line 252, in execute ansible.module_utils.network.NetworkError: matched error in response: show running-config ^ ERROR: % Invalid input detected at '^' marker. asav-1> fatal: [asa01]: FAILED! 
=> { "changed": false, "failed": true, "invocation": { "module_args": { "backup": true, "provider": { "host": "asa01", "password": "cisco", "transport": "cli", "username": "cisco" } }, "module_name": "asa_config" }, "module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_rrZjWn/ansible_module_asa_config.py\", line 327, in <module>\n main()\n File \"/tmp/ansible_rrZjWn/ansible_module_asa_config.py\", line 316, in main\n result['__backup__'] = module.config.get_config()\n File \"/tmp/ansible_rrZjWn/ansible_modlib.zip/ansible/module_utils/netcfg.py\", line 61, in get_config\n File \"/tmp/ansible_rrZjWn/ansible_modlib.zip/ansible/module_utils/asa.py\", line 95, in get_config\n File \"/tmp/ansible_rrZjWn/ansible_modlib.zip/ansible/module_utils/shell.py\", line 255, in run_commands\n File \"/tmp/ansible_rrZjWn/ansible_modlib.zip/ansible/module_utils/shell.py\", line 252, in execute\nansible.module_utils.network.NetworkError: matched error in response: show running-config\r\n ^\r\nERROR: % Invalid input detected at '^' marker.\r\n\rasav-1> \n", "module_stdout": "", "msg": "MODULE FAILURE" ```
True
asa_config: Clearer message when `authorize: yes` isn't set - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME asa_command ##### ANSIBLE VERSION ``` ansible 2.2.0 (devel a265d2d77d) last updated 2016/09/28 10:32:34 (GMT +100) lib/ansible/modules/core: (devel f3a2fb3dae) last updated 2016/09/28 10:32:38 (GMT +100) lib/ansible/modules/extras: (devel aeecd8b09e) last updated 2016/09/28 10:32:39 (GMT +100) ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY Seems like asa_config is fairly broken, and unsable to parse the output of `show running-config`. ``` yaml File "/tmp/ansible_rrZjWn/ansible_modlib.zip/ansible/module_utils/asa.py", line 95, in get_config File "/tmp/ansible_rrZjWn/ansible_modlib.zip/ansible/module_utils/shell.py", line 255, in run_commands File "/tmp/ansible_rrZjWn/ansible_modlib.zip/ansible/module_utils/shell.py", line 252, in execute ansible.module_utils.network.NetworkError: matched error in response: show running-config ^ ERROR: % Invalid input detected at '^' marker. asav-1> ``` ##### STEPS TO REPRODUCE ``` yaml - name: setup asa_config: lines: ['hostname firewall'] provider: "{{ cli }}" ``` or even ``` yaml - name: setup asa_config: backup: true provider: "{{ cli }}" ``` ##### EXPECTED RESULTS ##### ACTUAL RESULTS ``` yaml TASK [test_asa_config : setup] ************************************************* task path: /home/johnb/git/ansible-inc/test-asa/roles/test_asa_config/tests/cli/toplevel_nonidempotent.yaml:4 Using module file /home/johnb/git/ansible-inc/ansible/lib/ansible/modules/extras/network/asa/asa_config.py <asa01> ESTABLISH LOCAL CONNECTION FOR USER: johnb <asa01> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1475061048.81-174384388459258 `" && echo ansible-tmp-1475061048.81-174384388459258="` echo $HOME/.ansible/tmp/ansible-tmp-1475061048.81-174384388459258 `" ) && sleep 0' <asa01> PUT /tmp/tmpL9fzWq TO /home/johnb/.ansible/tmp/ansible-tmp-1475061048.81-174384388459258/asa_config.py <asa01> EXEC /bin/sh -c 'chmod u+x /home/johnb/.ansible/tmp/ansible-tmp-1475061048.81-174384388459258/ /home/johnb/.ansible/tmp/ansible-tmp-1475061048.81-174384388459258/asa_config.py && sleep 0' <asa01> EXEC /bin/sh -c 'python /home/johnb/.ansible/tmp/ansible-tmp-1475061048.81-174384388459258/asa_config.py; rm -rf "/home/johnb/.ansible/tmp/ansible-tmp-1475061048.81-174384388459258/" > /dev/null 2>&1 && sleep 0' An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File "/tmp/ansible_rrZjWn/ansible_module_asa_config.py", line 327, in <module> main() File "/tmp/ansible_rrZjWn/ansible_module_asa_config.py", line 316, in main result['__backup__'] = module.config.get_config() File "/tmp/ansible_rrZjWn/ansible_modlib.zip/ansible/module_utils/netcfg.py", line 61, in get_config File "/tmp/ansible_rrZjWn/ansible_modlib.zip/ansible/module_utils/asa.py", line 95, in get_config File "/tmp/ansible_rrZjWn/ansible_modlib.zip/ansible/module_utils/shell.py", line 255, in run_commands File "/tmp/ansible_rrZjWn/ansible_modlib.zip/ansible/module_utils/shell.py", line 252, in execute ansible.module_utils.network.NetworkError: matched error in response: show running-config ^ ERROR: % Invalid input detected at '^' marker. asav-1> fatal: [asa01]: FAILED! 
=> { "changed": false, "failed": true, "invocation": { "module_args": { "backup": true, "provider": { "host": "asa01", "password": "cisco", "transport": "cli", "username": "cisco" } }, "module_name": "asa_config" }, "module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_rrZjWn/ansible_module_asa_config.py\", line 327, in <module>\n main()\n File \"/tmp/ansible_rrZjWn/ansible_module_asa_config.py\", line 316, in main\n result['__backup__'] = module.config.get_config()\n File \"/tmp/ansible_rrZjWn/ansible_modlib.zip/ansible/module_utils/netcfg.py\", line 61, in get_config\n File \"/tmp/ansible_rrZjWn/ansible_modlib.zip/ansible/module_utils/asa.py\", line 95, in get_config\n File \"/tmp/ansible_rrZjWn/ansible_modlib.zip/ansible/module_utils/shell.py\", line 255, in run_commands\n File \"/tmp/ansible_rrZjWn/ansible_modlib.zip/ansible/module_utils/shell.py\", line 252, in execute\nansible.module_utils.network.NetworkError: matched error in response: show running-config\r\n ^\r\nERROR: % Invalid input detected at '^' marker.\r\n\rasav-1> \n", "module_stdout": "", "msg": "MODULE FAILURE" ```
main
asa config clearer message when authorize yes isn t set issue type bug report component name asa command ansible version ansible devel last updated gmt lib ansible modules core devel last updated gmt lib ansible modules extras devel last updated gmt configuration os environment summary seems like asa config is fairly broken and unsable to parse the output of show running config yaml file tmp ansible rrzjwn ansible modlib zip ansible module utils asa py line in get config file tmp ansible rrzjwn ansible modlib zip ansible module utils shell py line in run commands file tmp ansible rrzjwn ansible modlib zip ansible module utils shell py line in execute ansible module utils network networkerror matched error in response show running config error invalid input detected at marker asav steps to reproduce yaml name setup asa config lines provider cli or even yaml name setup asa config backup true provider cli expected results actual results yaml task task path home johnb git ansible inc test asa roles test asa config tests cli toplevel nonidempotent yaml using module file home johnb git ansible inc ansible lib ansible modules extras network asa asa config py establish local connection for user johnb exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to home johnb ansible tmp ansible tmp asa config py exec bin sh c chmod u x home johnb ansible tmp ansible tmp home johnb ansible tmp ansible tmp asa config py sleep exec bin sh c python home johnb ansible tmp ansible tmp asa config py rm rf home johnb ansible tmp ansible tmp dev null sleep an exception occurred during task execution the full traceback is traceback most recent call last file tmp ansible rrzjwn ansible module asa config py line in main file tmp ansible rrzjwn ansible module asa config py line in main result module config get config file tmp ansible rrzjwn ansible modlib zip ansible module utils netcfg py line in get config file tmp ansible rrzjwn ansible modlib zip ansible module utils asa py line in get config file tmp ansible rrzjwn ansible modlib zip ansible module utils shell py line in run commands file tmp ansible rrzjwn ansible modlib zip ansible module utils shell py line in execute ansible module utils network networkerror matched error in response show running config error invalid input detected at marker asav fatal failed changed false failed true invocation module args backup true provider host password cisco transport cli username cisco module name asa config module stderr traceback most recent call last n file tmp ansible rrzjwn ansible module asa config py line in n main n file tmp ansible rrzjwn ansible module asa config py line in main n result module config get config n file tmp ansible rrzjwn ansible modlib zip ansible module utils netcfg py line in get config n file tmp ansible rrzjwn ansible modlib zip ansible module utils asa py line in get config n file tmp ansible rrzjwn ansible modlib zip ansible module utils shell py line in run commands n file tmp ansible rrzjwn ansible modlib zip ansible module utils shell py line in execute nansible module utils network networkerror matched error in response show running config r n r nerror invalid input detected at marker r n rasav n module stdout msg module failure
1
24,530
6,550,928,905
IssuesEvent
2017-09-05 13:06:48
jimmerioles/bitcoin-currency-converter-php
https://api.github.com/repos/jimmerioles/bitcoin-currency-converter-php
closed
Fix "Generic Files LineLength TooLong" issue in src/Provider/AbstractProvider.php
codeclimate
Line exceeds 120 characters; contains 124 characters https://codeclimate.com/github/jimmerioles/bitcoin-currency-converter-php/src/Provider/AbstractProvider.php#issue_59ae6f761dced6000100001f
1.0
Fix "Generic Files LineLength TooLong" issue in src/Provider/AbstractProvider.php - Line exceeds 120 characters; contains 124 characters https://codeclimate.com/github/jimmerioles/bitcoin-currency-converter-php/src/Provider/AbstractProvider.php#issue_59ae6f761dced6000100001f
non_main
fix generic files linelength toolong issue in src provider abstractprovider php line exceeds characters contains characters
0
875
4,540,580,053
IssuesEvent
2016-09-09 15:03:38
Particular/NServiceBus.RabbitMQ
https://api.github.com/repos/Particular/NServiceBus.RabbitMQ
closed
Support Rabbit.Client 4.0.2
Impact: S Size: S State: In Progress Tag: Maintainer Prio Type: Feature
Rabbit.Client 4.0.2 was released on 1st Sep (and 4.0.1 and 4.0.0 on 25th Aug and 19th Aug respectively). Our current dependency is `RabbitMQ.Client (>= 3.6.3 && < 3.7.0)` This should become `RabbitMQ.Client (>= 4.0.2 && < 4.1.0)` - [x] fix on release-3.4.0 - [x] fix on release-4.0.0
True
Support Rabbit.Client 4.0.2 - Rabbit.Client 4.0.2 was released on 1st Sep (and 4.0.1 and 4.0.0 on 25th Aug and 19th Aug respectively). Our current dependency is `RabbitMQ.Client (>= 3.6.3 && < 3.7.0)` This should become `RabbitMQ.Client (>= 4.0.2 && < 4.1.0)` - [x] fix on release-3.4.0 - [x] fix on release-4.0.0
main
support rabbit client rabbit client was released on sep and and on aug and aug respectively our current dependency is rabbitmq client this should become rabbitmq client fix on release fix on release
1
5,513
27,559,982,501
IssuesEvent
2023-03-07 21:06:40
MozillaFoundation/foundation.mozilla.org
https://api.github.com/repos/MozillaFoundation/foundation.mozilla.org
closed
Redirect deprecated category links into topic links
engineering feature request maintain
We'd like to automatically send users who enter through these old `/category/` links to their `/topic/` equivalent. We also need to maintain l10n functionality for routing users to the correct language. For instance: `https://foundation.mozilla.org/blog/category/moz-news-beat/` should route to `https://foundation.mozilla.org/en/blog/topic/moz-news-beat/` if the user should go to an `/en/` view ## Dev notes @tbrlpld identified that the work would need to happen in these places: https://github.com/mozilla/foundation.mozilla.org/blob/e03f22a51ae3bdc8faa75c7c9140030d25f429cf/network-api/networkapi/wagtailpages/pagemodels/blog/blog_index.py#L91 https://github.com/mozilla/foundation.mozilla.org/blob/e03f22a51ae3bdc8faa75c7c9140030d25f429cf/network-api/networkapi/wagtailpages/pagemodels/blog/blog_index.py#L348-L361 ## Acceptance Criteria - [x] As a site visitor, when I open a URL to filter for blog "categories" I should be redirected to the equivalent topic filter URL: - From: `https://foundation.mozilla.org/en/blog/category/moz-news-beat/` - To `https://foundation.mozilla.org/en/blog/topic/moz-news-beat/`. - [x] As a site visitor, when I open a blog category URL without a language code in it, I am redirected to the blog topic URL on the default locale: - From `https://foundation.mozilla.org/blog/category/moz-news-beat/` - To `https://foundation.mozilla.org/en/blog/topic/moz-news-beat/` - [x] As a site visitor, when I have visited the foundation site and changed my locale to something other than the default (e.g. `de`), redirects from URLs without language code will redirect to URLs for my previously selected locale. - From `https://foundation.mozilla.org/blog/category/moz-news-beat/` - To `https://foundation.mozilla.org/de/blog/topic/moz-news-beat/` - [x] If the origin URL contains a language code, the redirect target should keep the same locale as the origin. - [x] This should be a permanent redirect.
True
Redirect deprecated category links into topic links - We'd like to automatically send users who enter through these old `/category/` links to their `/topic/` equivalent. We also need to maintain l10n functionality for routing users to the correct language. For instance: `https://foundation.mozilla.org/blog/category/moz-news-beat/` should route to `https://foundation.mozilla.org/en/blog/topic/moz-news-beat/` if the user should go to an `/en/` view ## Dev notes @tbrlpld identified that the work would need to happen in these places: https://github.com/mozilla/foundation.mozilla.org/blob/e03f22a51ae3bdc8faa75c7c9140030d25f429cf/network-api/networkapi/wagtailpages/pagemodels/blog/blog_index.py#L91 https://github.com/mozilla/foundation.mozilla.org/blob/e03f22a51ae3bdc8faa75c7c9140030d25f429cf/network-api/networkapi/wagtailpages/pagemodels/blog/blog_index.py#L348-L361 ## Acceptance Criteria - [x] As a site visitor, when I open a URL to filter for blog "categories" I should be redirected to the equivalent topic filter URL: - From: `https://foundation.mozilla.org/en/blog/category/moz-news-beat/` - To `https://foundation.mozilla.org/en/blog/topic/moz-news-beat/`. - [x] As a site visitor, when I open a blog category URL without a language code in it, I am redirected to the blog topic URL on the default locale: - From `https://foundation.mozilla.org/blog/category/moz-news-beat/` - To `https://foundation.mozilla.org/en/blog/topic/moz-news-beat/` - [x] As a site visitor, when I have visited the foundation site and changed my locale to something other than the default (e.g. `de`), redirects from URLs without language code will redirect to URLs for my previously selected locale. - From `https://foundation.mozilla.org/blog/category/moz-news-beat/` - To `https://foundation.mozilla.org/de/blog/topic/moz-news-beat/` - [x] If the origin URL contains a language code, the redirect target should keep the same locale as the origin. - [x] This should be a permanent redirect.
main
redirect deprecated category links into topic links we d like to automatically send users who enter through these old category links to their topic equivalent we also need to maintain functionality for routing users to the correct language for instance should route to if the user should go to an en view dev notes tbrlpld identified that the work would need to happen in these places acceptance criteria as a site visitor when i open a url to filter for blog categories i should be redirected to the equivalent topic filter url from to as a site visitor when i open a blog category url without a language code in it i am redirected to the blog topic url on the default locale from to as a site visitor when i have visited the foundation site and changed my locale to something other than the default e g de redirects from urls without language code will redirect to urls for my previously selected locale from to if the origin url contains a language code the redirect target should keep the same locale as the origin this should be a permanent redirect
1
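
The Mozilla redirect record above points at Wagtail routes in `blog_index.py`. A minimal sketch of the permanent `/category/` to `/topic/` redirect, under stated assumptions: it uses Wagtail's `RoutablePageMixin`, the class name, route names, and regexes are illustrative rather than the foundation site's actual code, and locale handling rides on the page URL as the acceptance criteria describe.

```python
# Sketch: permanently redirect legacy /category/<slug>/ filter URLs to the
# /topic/<slug>/ equivalent. Names and regexes are illustrative.
from django.shortcuts import redirect
from wagtail.contrib.routable_page.models import RoutablePageMixin, route
from wagtail.models import Page  # wagtail.core.models on Wagtail < 3

class BlogIndexPage(RoutablePageMixin, Page):
    @route(r"^topic/(?P<topic_slug>[-\w]+)/$", name="blog-topic")
    def topic_listing(self, request, topic_slug):
        # Existing topic filtering would live here.
        return self.render(request, context_overrides={"topic": topic_slug})

    @route(r"^category/(?P<topic_slug>[-\w]+)/$", name="legacy-category")
    def legacy_category(self, request, topic_slug):
        # self.url already carries the locale prefix (/en/, /de/, ...), so
        # the redirect keeps the visitor on their current locale.
        target = self.url + self.reverse_subpage("blog-topic", args=(topic_slug,))
        return redirect(target, permanent=True)  # permanent, per the criteria
```

Routing a prefix-less URL to the default or previously selected locale is the job of Django's locale middleware upstream of this page, so it is deliberately out of scope in the sketch.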
3,735
15,618,881,540
IssuesEvent
2021-03-20 02:27:54
cloverhearts/quilljs-markdown
https://api.github.com/repos/cloverhearts/quilljs-markdown
closed
markdown link not working
BUG Saw with Maintainer WILL MAKE IT WORK IN PROGRESS
expected: when typing a link in markdown format `[link text](www.link.com)` the link should show in the editor. actual behavior: after typing the link in markdown format when you type a space or return the link text gets corrupted. <img width="208" alt="Screen Shot 2021-03-17 at 2 12 08 PM" src="https://user-images.githubusercontent.com/16245896/111539317-fc86c900-872a-11eb-9d9d-22ea3b329fe5.png"> <img width="460" alt="Screen Shot 2021-03-17 at 2 12 32 PM" src="https://user-images.githubusercontent.com/16245896/111539327-ff81b980-872a-11eb-9513-130b12c01e35.png">
True
markdown link not working - expected: when typing a link in markdown format `[link text](www.link.com)` the link should show in the editor. actual behavior: after typing the link in markdown format when you type a space or return the link text gets corrupted. <img width="208" alt="Screen Shot 2021-03-17 at 2 12 08 PM" src="https://user-images.githubusercontent.com/16245896/111539317-fc86c900-872a-11eb-9d9d-22ea3b329fe5.png"> <img width="460" alt="Screen Shot 2021-03-17 at 2 12 32 PM" src="https://user-images.githubusercontent.com/16245896/111539327-ff81b980-872a-11eb-9513-130b12c01e35.png">
main
markdown link not working expected when typing a link in markdown format the link should show in the editor actual behavior after typing the link in markdown format when you type a space or return the link text gets corrupted img width alt screen shot at pm src img width alt screen shot at pm src
1
1,356
5,843,699,195
IssuesEvent
2017-05-10 09:49:57
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
win_lineinfile idempotence broken
affects_2.1 bug_report waiting_on_maintainer windows
##### ISSUE TYPE <!--- Pick one below and delete the rest: --> - Bug Report ##### COMPONENT NAME <!--- Name of the plugin/module/task --> - win_lineinfile ##### ANSIBLE VERSION ``` ansible 2.1.0 (devel cb7b3b489d) last updated 2016/03/30 15:14:21 (GMT +200) lib/ansible/modules/core: (detached HEAD 0268864211) last updated 2016/03/30 15:14:39 (GMT +200) lib/ansible/modules/extras: (detached HEAD 6978984244) last updated 2016/03/30 15:14:39 (GMT +200) ``` ##### OS / ENVIRONMENT ubuntu 14 -> windows 2012R2 ##### SUMMARY backrefs option doesn't arrive well inside the module, causing it not to function properly and also breaking idempotence (line will be always inserted at the end of the file on every execution, regardless of backrefs=yes and regardless of regexp match) ##### STEPS TO REPRODUCE c:\test.txt containing ``` test1 test2 test3 ``` part of playbook ``` win_lineinfile: dest: c:\test.txt regexp: "test2" line: "this will be added over and over" backrefs: yes ``` ##### EXPECTED RESULTS <!--- What did you expect to happen when running the steps above? --> according to [this ](https://github.com/ansible/ansible/issues/4531)and similar to the behaviour on linux i was expecting to have idempotent behaviour (to change the regexp into line if regexp found and do nothing if not found), which works on linux. upon first execution, changed=1 upon second execution, changed=0 ``` test1 this will be added over and over test3 ``` ##### ACTUAL RESULTS <!--- What actually happened? If possible run with high verbosity (-vvvv) --> upon first execution, changed=1 upon second execution, changed=1 ``` test1 this will be added over and over test3 this will be added over and over ``` ##### WORKAROUND/HACK It seems that the problem comes from the backrefs variable arriving as True/False in the module, so this never gets triggered. ``` ElseIf ($backrefs -ne "no") { # No matches - no-op ``` and it forces it to go two elseifs down and add the line I used this bit of code at the beginning of the module - at params - since I desperately needed the idempotent behavior :) ``` #dirty hack $backrefs = Get-Attr $params "backrefs" "no"; if ( $backrefs -eq "True" ) { $backrefs = "yes" } else { $backrefs = "no" } ```
True
win_lineinfile idempotence broken - ##### ISSUE TYPE <!--- Pick one below and delete the rest: --> - Bug Report ##### COMPONENT NAME <!--- Name of the plugin/module/task --> - win_lineinfile ##### ANSIBLE VERSION ``` ansible 2.1.0 (devel cb7b3b489d) last updated 2016/03/30 15:14:21 (GMT +200) lib/ansible/modules/core: (detached HEAD 0268864211) last updated 2016/03/30 15:14:39 (GMT +200) lib/ansible/modules/extras: (detached HEAD 6978984244) last updated 2016/03/30 15:14:39 (GMT +200) ``` ##### OS / ENVIRONMENT ubuntu 14 -> windows 2012R2 ##### SUMMARY backrefs option doesn't arrive well inside the module, causing it not to function properly and also breaking idempotence (line will be always inserted at the end of the file on every execution, regardless of backrefs=yes and regardless of regexp match) ##### STEPS TO REPRODUCE c:\test.txt containing ``` test1 test2 test3 ``` part of playbook ``` win_lineinfile: dest: c:\test.txt regexp: "test2" line: "this will be added over and over" backrefs: yes ``` ##### EXPECTED RESULTS <!--- What did you expect to happen when running the steps above? --> according to [this ](https://github.com/ansible/ansible/issues/4531)and similar to the behaviour on linux i was expecting to have idempotent behaviour (to change the regexp into line if regexp found and do nothing if not found), which works on linux. upon first execution, changed=1 upon second execution, changed=0 ``` test1 this will be added over and over test3 ``` ##### ACTUAL RESULTS <!--- What actually happened? If possible run with high verbosity (-vvvv) --> upon first execution, changed=1 upon second execution, changed=1 ``` test1 this will be added over and over test3 this will be added over and over ``` ##### WORKAROUND/HACK It seems that the problem comes from the backrefs variable arriving as True/False in the module, so this never gets triggered. ``` ElseIf ($backrefs -ne "no") { # No matches - no-op ``` and it forces it to go two elseifs down and add the line I used this bit of code at the beginning of the module - at params - since I desperately needed the idempotent behavior :) ``` #dirty hack $backrefs = Get-Attr $params "backrefs" "no"; if ( $backrefs -eq "True" ) { $backrefs = "yes" } else { $backrefs = "no" } ```
main
win lineinfile idempotence broken issue type bug report component name win lineinfile ansible version ansible devel last updated gmt lib ansible modules core detached head last updated gmt lib ansible modules extras detached head last updated gmt os environment ubuntu windows summary backrefs option doesn t arrive well inside the module causing it not to function properly and also breaking idempotence line will be always inserted at the end of the file on every execution regardless of backrefs yes and regardless of regexp match steps to reproduce c test txt containing part of playbook win lineinfile dest c test txt regexp line this will be added over and over backrefs yes expected results according to similar to the behaviour on linux i was expecting to have idempotent behaviour to change the regexp into line if regexp found and do nothing if not found which works on linux upon first execution changed upon second execution changed this will be added over and over actual results upon first execution changed upon second execution changed this will be added over and over this will be added over and over workaround hack it seems that the problem comes from the backrefs variable arriving as true false in the module so this never gets triggered elseif backrefs ne no no matches no op and it forces it to go two elseifs down and add the line i used this bit of code at the beginning of the module at params since i desperately needed the idempotent behavior dirty hack backrefs get attr params backrefs no if backrefs eq true backrefs yes else backrefs no
1
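
The win_lineinfile record above patches the symptom in PowerShell: `backrefs` arrives as the strings "True"/"False" but is compared against "yes"/"no". The bug class is language-agnostic; a hedged Python sketch of tolerant boolean coercion, similar in spirit to what Ansible's module utilities do (function name illustrative, not the actual fix):

```python
# Sketch: tolerant boolean coercion so "True"/"true"/"yes"/"1" all behave
# the same way. Mirrors the idea behind the PowerShell hack above.
TRUTHY = frozenset({"yes", "on", "1", "true", "t", "y"})
FALSY = frozenset({"no", "off", "0", "false", "f", "n"})

def coerce_bool(value):
    """Accept real booleans or their common string spellings."""
    if isinstance(value, bool):
        return value
    text = str(value).strip().lower()
    if text in TRUTHY:
        return True
    if text in FALSY:
        return False
    raise ValueError(f"cannot interpret {value!r} as a boolean")

assert coerce_bool("True") is True   # the value the module actually received
assert coerce_bool("no") is False    # the value it was compared against
```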
270,893
29,145,870,759
IssuesEvent
2023-05-18 02:48:36
aayant-mend/WebGoat
https://api.github.com/repos/aayant-mend/WebGoat
opened
commons-text-1.9.jar: 1 vulnerabilities (highest severity is: 9.8)
Mend: dependency security vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-text-1.9.jar</b></p></summary> <p>Apache Commons Text is a library focused on algorithms working on strings.</p> <p>Library home page: <a href="https://commons.apache.org/proper/commons-text">https://commons.apache.org/proper/commons-text</a></p> <p>Path to dependency file: /pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/commons/commons-text/1.9/commons-text-1.9.jar</p> <p> <p>Found in HEAD commit: <a href="https://github.com/aayant-mend/WebGoat/commit/edae3edcc9d33118c8b81a9be8bb3d975504b0d1">edae3edcc9d33118c8b81a9be8bb3d975504b0d1</a></p></details> ## Vulnerabilities | CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (commons-text version) | Fix PR available | | ------------- | ------------- | ----- | ----- | ----- | ------------- | --- | | [CVE-2022-42889](https://www.mend.io/vulnerability-database/CVE-2022-42889) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> High | 9.8 | commons-text-1.9.jar | Direct | 1.10.0 | &#9989; | ## Details <details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> CVE-2022-42889</summary> ### Vulnerable Library - <b>commons-text-1.9.jar</b></p> <p>Apache Commons Text is a library focused on algorithms working on strings.</p> <p>Library home page: <a href="https://commons.apache.org/proper/commons-text">https://commons.apache.org/proper/commons-text</a></p> <p>Path to dependency file: /pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/commons/commons-text/1.9/commons-text-1.9.jar</p> <p> Dependency Hierarchy: - :x: **commons-text-1.9.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/aayant-mend/WebGoat/commit/edae3edcc9d33118c8b81a9be8bb3d975504b0d1">edae3edcc9d33118c8b81a9be8bb3d975504b0d1</a></p> <p>Found in base branch: <b>main</b></p> </p> <p></p> ### Vulnerability Details <p> Apache Commons Text performs variable interpolation, allowing properties to be dynamically evaluated and expanded. The standard format for interpolation is "${prefix:name}", where "prefix" is used to locate an instance of org.apache.commons.text.lookup.StringLookup that performs the interpolation. Starting with version 1.5 and continuing through 1.9, the set of default Lookup instances included interpolators that could result in arbitrary code execution or contact with remote servers. These lookups are: - "script" - execute expressions using the JVM script execution engine (javax.script) - "dns" - resolve dns records - "url" - load values from urls, including from remote servers Applications using the interpolation defaults in the affected versions may be vulnerable to remote code execution or unintentional contact with remote servers if untrusted configuration values are used. Users are recommended to upgrade to Apache Commons Text 1.10.0, which disables the problematic interpolators by default. 
<p>Publish Date: 2022-10-13 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-42889>CVE-2022-42889</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>9.8</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.openwall.com/lists/oss-security/2022/10/13/4">https://www.openwall.com/lists/oss-security/2022/10/13/4</a></p> <p>Release Date: 2022-10-13</p> <p>Fix Resolution (commons-text): org.apache.commons:commons-text:1.10.0</p> <p>Direct dependency fix Resolution (org.apache.commons:commons-text): 1.10.0</p> </p> <p></p> :rescue_worker_helmet: Automatic Remediation is available for this issue </details> *** <p>:rescue_worker_helmet: Automatic Remediation is available for this issue.</p>
True
commons-text-1.9.jar: 1 vulnerabilities (highest severity is: 9.8) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-text-1.9.jar</b></p></summary> <p>Apache Commons Text is a library focused on algorithms working on strings.</p> <p>Library home page: <a href="https://commons.apache.org/proper/commons-text">https://commons.apache.org/proper/commons-text</a></p> <p>Path to dependency file: /pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/commons/commons-text/1.9/commons-text-1.9.jar</p> <p> <p>Found in HEAD commit: <a href="https://github.com/aayant-mend/WebGoat/commit/edae3edcc9d33118c8b81a9be8bb3d975504b0d1">edae3edcc9d33118c8b81a9be8bb3d975504b0d1</a></p></details> ## Vulnerabilities | CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (commons-text version) | Fix PR available | | ------------- | ------------- | ----- | ----- | ----- | ------------- | --- | | [CVE-2022-42889](https://www.mend.io/vulnerability-database/CVE-2022-42889) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> High | 9.8 | commons-text-1.9.jar | Direct | 1.10.0 | &#9989; | ## Details <details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> CVE-2022-42889</summary> ### Vulnerable Library - <b>commons-text-1.9.jar</b></p> <p>Apache Commons Text is a library focused on algorithms working on strings.</p> <p>Library home page: <a href="https://commons.apache.org/proper/commons-text">https://commons.apache.org/proper/commons-text</a></p> <p>Path to dependency file: /pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/commons/commons-text/1.9/commons-text-1.9.jar</p> <p> Dependency Hierarchy: - :x: **commons-text-1.9.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/aayant-mend/WebGoat/commit/edae3edcc9d33118c8b81a9be8bb3d975504b0d1">edae3edcc9d33118c8b81a9be8bb3d975504b0d1</a></p> <p>Found in base branch: <b>main</b></p> </p> <p></p> ### Vulnerability Details <p> Apache Commons Text performs variable interpolation, allowing properties to be dynamically evaluated and expanded. The standard format for interpolation is "${prefix:name}", where "prefix" is used to locate an instance of org.apache.commons.text.lookup.StringLookup that performs the interpolation. Starting with version 1.5 and continuing through 1.9, the set of default Lookup instances included interpolators that could result in arbitrary code execution or contact with remote servers. These lookups are: - "script" - execute expressions using the JVM script execution engine (javax.script) - "dns" - resolve dns records - "url" - load values from urls, including from remote servers Applications using the interpolation defaults in the affected versions may be vulnerable to remote code execution or unintentional contact with remote servers if untrusted configuration values are used. Users are recommended to upgrade to Apache Commons Text 1.10.0, which disables the problematic interpolators by default. 
<p>Publish Date: 2022-10-13 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-42889>CVE-2022-42889</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>9.8</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.openwall.com/lists/oss-security/2022/10/13/4">https://www.openwall.com/lists/oss-security/2022/10/13/4</a></p> <p>Release Date: 2022-10-13</p> <p>Fix Resolution (commons-text): org.apache.commons:commons-text:1.10.0</p> <p>Direct dependency fix Resolution (org.apache.commons:commons-text): 1.10.0</p> </p> <p></p> :rescue_worker_helmet: Automatic Remediation is available for this issue </details> *** <p>:rescue_worker_helmet: Automatic Remediation is available for this issue.</p>
non_main
commons text jar vulnerabilities highest severity is vulnerable library commons text jar apache commons text is a library focused on algorithms working on strings library home page a href path to dependency file pom xml path to vulnerable library home wss scanner repository org apache commons commons text commons text jar found in head commit a href vulnerabilities cve severity cvss dependency type fixed in commons text version fix pr available high commons text jar direct details cve vulnerable library commons text jar apache commons text is a library focused on algorithms working on strings library home page a href path to dependency file pom xml path to vulnerable library home wss scanner repository org apache commons commons text commons text jar dependency hierarchy x commons text jar vulnerable library found in head commit a href found in base branch main vulnerability details apache commons text performs variable interpolation allowing properties to be dynamically evaluated and expanded the standard format for interpolation is prefix name where prefix is used to locate an instance of org apache commons text lookup stringlookup that performs the interpolation starting with version and continuing through the set of default lookup instances included interpolators that could result in arbitrary code execution or contact with remote servers these lookups are script execute expressions using the jvm script execution engine javax script dns resolve dns records url load values from urls including from remote servers applications using the interpolation defaults in the affected versions may be vulnerable to remote code execution or unintentional contact with remote servers if untrusted configuration values are used users are recommended to upgrade to apache commons text which disables the problematic interpolators by default publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution commons text org apache commons commons text direct dependency fix resolution org apache commons commons text rescue worker helmet automatic remediation is available for this issue rescue worker helmet automatic remediation is available for this issue
0
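
The commons-text record above gives the fix resolution as `org.apache.commons:commons-text:1.10.0`. For completeness, those coordinates as they would appear in a consuming project's `pom.xml` (the surrounding POM structure is assumed, not taken from WebGoat itself):

```xml
<!-- Pin commons-text at the patched release that disables the script/dns/url
     interpolators by default (CVE-2022-42889). -->
<dependency>
  <groupId>org.apache.commons</groupId>
  <artifactId>commons-text</artifactId>
  <version>1.10.0</version>
</dependency>
```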
82,977
10,317,169,768
IssuesEvent
2019-08-30 12:01:21
xvitaly/srcrepair
https://api.github.com/repos/xvitaly/srcrepair
closed
Add documentation for all forms
documentation
Describe whatever you want to be implemented in SRC Repair in future:
1.0
Add documentation for all forms - Describe whatever you want to be implemented in SRC Repair in future:
non_main
add documentation for all forms describe whatever you want to be implemented in src repair in future
0
4,852
24,984,828,307
IssuesEvent
2022-11-02 14:22:57
Lissy93/dashy
https://api.github.com/repos/Lissy93/dashy
closed
After the KeyCloak 16.11 authentication is successful, the interface will automatically refresh every 5s
🤷‍♂️ Question 👤 Awaiting Maintainer Response
### Question When integrated with Keycloak 16.11 authentication: after completing the Keycloak account authentication, the Dashy interface automatically refreshes every 5s. I don't know what's going on, and I'm surprised! Can someone please help me? ### Category Configuration ### Please tick the boxes - [X] You are using a [supported](https://github.com/Lissy93/dashy/blob/master/.github/SECURITY.md#supported-versions) version of Dashy (check the first two digits of the version number) - [X] You've checked that this [question hasn't already been raised](https://github.com/Lissy93/dashy/issues?q=is%3Aissue) - [X] You've checked the [docs](https://github.com/Lissy93/dashy/tree/master/docs#readme) and [troubleshooting](https://github.com/Lissy93/dashy/blob/master/docs/troubleshooting.md#troubleshooting) guide - [X] You agree to the [code of conduct](https://github.com/Lissy93/dashy/blob/master/.github/CODE_OF_CONDUCT.md#contributor-covenant-code-of-conduct)
True
After the KeyCloak 16.11 authentication is successful, the interface will automatically refresh every 5s - ### Question When docked with KeyCloak 16.11 authentication. After completing the Keycloak account authentication, the dashy interface will automatically refresh every 5s. I don't know what's going on, I'm surprised!!!! Can someone please help me?? ### Category Configuration ### Please tick the boxes - [X] You are using a [supported](https://github.com/Lissy93/dashy/blob/master/.github/SECURITY.md#supported-versions) version of Dashy (check the first two digits of the version number) - [X] You've checked that this [question hasn't already been raised](https://github.com/Lissy93/dashy/issues?q=is%3Aissue) - [X] You've checked the [docs](https://github.com/Lissy93/dashy/tree/master/docs#readme) and [troubleshooting](https://github.com/Lissy93/dashy/blob/master/docs/troubleshooting.md#troubleshooting) guide - [X] You agree to the [code of conduct](https://github.com/Lissy93/dashy/blob/master/.github/CODE_OF_CONDUCT.md#contributor-covenant-code-of-conduct)
main
after the keycloak authentication is successful the interface will automatically refresh every question when docked with keycloak authentication after completing the keycloak account authentication the dashy interface will automatically refresh every i don t know what s going on i m surprised!!!! can someone please help me?? category configuration please tick the boxes you are using a version of dashy check the first two digits of the version number you ve checked that this you ve checked the and guide you agree to the
1
34,223
12,258,179,941
IssuesEvent
2020-05-06 14:45:19
kids-first/kf-portal-ui
https://api.github.com/repos/kids-first/kf-portal-ui
closed
Arrange Component Upgrade
Security arranger
We currently have 2 high-risk vulnerabilities in the portal, injected by the arranger component. The first step will be to upgrade our version to see if it fixes the issues. Then, fix the arranger component if the warning is still present. ![image.png](https://images.zenhubusercontent.com/5e2b1d5d519c09500843aab3/25952e9a-793a-4287-b427-76be52379e90)
True
Arrange Component Upgrade - We have currently 2 high risk vulnerabilities in the portal injected by arranger component. First step will be to upgrade our version to see if it fix the issues Then, fix arranger component if the warning is still present ![image.png](https://images.zenhubusercontent.com/5e2b1d5d519c09500843aab3/25952e9a-793a-4287-b427-76be52379e90)
non_main
arrange component upgrade we have currently high risk vulnerabilities in the portal injected by arranger component first step will be to upgrade our version to see if it fix the issues then fix arranger component if the warning is still present
0