Unnamed: 0
int64
0
832k
id
float64
2.49B
32.1B
type
stringclasses
1 value
created_at
stringlengths
19
19
repo
stringlengths
4
112
repo_url
stringlengths
33
141
action
stringclasses
3 values
title
stringlengths
1
1.02k
labels
stringlengths
4
1.54k
body
stringlengths
1
262k
index
stringclasses
17 values
text_combine
stringlengths
95
262k
label
stringclasses
2 values
text
stringlengths
96
252k
binary_label
int64
0
1
150,804
5,791,273,615
IssuesEvent
2017-05-02 05:00:50
famuvie/breedR
https://api.github.com/repos/famuvie/breedR
closed
incorrect labelling of variance components in summary() for 3+ trait models fitted with AI-REML
bug priority:high
``` r Nobs <- 1e4 ## residual covariance matrix S_res <- matrix(c( 9, 3, -3, 3, 9, 9, -3, 9, 14 ), nrow = 3, ncol = 3) ## simulated residual-only dataset testdat <- data.frame(breedR.sample.ranef(3, S_res, Nobs, vname = 'e')) ## fitted model res <- remlf90( cbind(e_1, e_2, e_3) ~ 1, data = testdat, method = "ai" ) #> Using default initial variances given by default_initial_variance() #> See ?breedR.getOption. summary(res) #> Formula: cbind(e_1, e_2, e_3) ~ 0 + Intercept #> Data: testdat #> AIC BIC logLik #> 83966 84031 -41974 #> #> #> Variance components: #> Estimated variances S.E. #> Residual.e_1 9.021 0.12758 #> Residual.e_1_Residual.e_2 3.098 0.09474 #> Residual.e_2 -2.900 0.11476 #> Residual.e_1_Residual.e_3 8.886 0.12567 #> Residual.e_2_Residual.e_3 8.787 0.14095 #> Residual.e_3 13.666 0.19327 #> #> Fixed effects: #> value s.e. #> Intercept.e_1 -0.042498 0.0300 #> Intercept.e_2 0.020878 0.0298 #> Intercept.e_3 0.045872 0.0370 res$var[["Residual", "Estimated variances"]] #> e_1 e_2 e_3 #> e_1 9.0205 3.0978 -2.9004 #> e_2 3.0978 8.8856 8.7875 #> e_3 -2.9004 8.7875 13.6660 ``` Notice how the value summarized as `Residual.e_2` corresponds in fact to the residual covariance between `e_2` and `e_3`. In particular, it is negative!. The values are fine, but the labeling is incorrect.
1.0
incorrect labelling of variance components in summary() for 3+ trait models fitted with AI-REML - ``` r Nobs <- 1e4 ## residual covariance matrix S_res <- matrix(c( 9, 3, -3, 3, 9, 9, -3, 9, 14 ), nrow = 3, ncol = 3) ## simulated residual-only dataset testdat <- data.frame(breedR.sample.ranef(3, S_res, Nobs, vname = 'e')) ## fitted model res <- remlf90( cbind(e_1, e_2, e_3) ~ 1, data = testdat, method = "ai" ) #> Using default initial variances given by default_initial_variance() #> See ?breedR.getOption. summary(res) #> Formula: cbind(e_1, e_2, e_3) ~ 0 + Intercept #> Data: testdat #> AIC BIC logLik #> 83966 84031 -41974 #> #> #> Variance components: #> Estimated variances S.E. #> Residual.e_1 9.021 0.12758 #> Residual.e_1_Residual.e_2 3.098 0.09474 #> Residual.e_2 -2.900 0.11476 #> Residual.e_1_Residual.e_3 8.886 0.12567 #> Residual.e_2_Residual.e_3 8.787 0.14095 #> Residual.e_3 13.666 0.19327 #> #> Fixed effects: #> value s.e. #> Intercept.e_1 -0.042498 0.0300 #> Intercept.e_2 0.020878 0.0298 #> Intercept.e_3 0.045872 0.0370 res$var[["Residual", "Estimated variances"]] #> e_1 e_2 e_3 #> e_1 9.0205 3.0978 -2.9004 #> e_2 3.0978 8.8856 8.7875 #> e_3 -2.9004 8.7875 13.6660 ``` Notice how the value summarized as `Residual.e_2` corresponds in fact to the residual covariance between `e_2` and `e_3`. In particular, it is negative!. The values are fine, but the labeling is incorrect.
non_test
incorrect labelling of variance components in summary for trait models fitted with ai reml r nobs residual covariance matrix s res matrix c nrow ncol simulated residual only dataset testdat data frame breedr sample ranef s res nobs vname e fitted model res cbind e e e data testdat method ai using default initial variances given by default initial variance see breedr getoption summary res formula cbind e e e intercept data testdat aic bic loglik variance components estimated variances s e residual e residual e residual e residual e residual e residual e residual e residual e residual e fixed effects value s e intercept e intercept e intercept e res var e e e e e e notice how the value summarized as residual e corresponds in fact to the residual covariance between e and e in particular it is negative the values are fine but the labeling is incorrect
0
269,570
8,440,536,589
IssuesEvent
2018-10-18 07:36:51
handsontable/handsontable
https://api.github.com/repos/handsontable/handsontable
closed
Sort, remove a row and undo turns out an unexpected result
Plugin: column sorting Plugin: undo-redo Priority: high Status: Merged (ready for release) Status: Released Type: Bug
I haven't found this behaviour in these issues. The actions are: refresh, sort by _code_, select EUR from _country_, _remove row_ and _undo_. I've done the test in the [examples section](https://handsontable.com/examples.html?headers&context-menu&sorting): ![load-sort-remove-undo](https://cloud.githubusercontent.com/assets/243047/12786810/f0449a46-ca92-11e5-9fc1-fe3e42bacb71.gif) And after undo the order disappears and I can't find the row for the Euro. Browser: Google Chrome 48.0.2564.82 (64-bit) over Linux.
1.0
Sort, remove a row and undo turns out an unexpected result - I haven't found this behaviour in these issues. The actions are: refresh, sort by _code_, select EUR from _country_, _remove row_ and _undo_. I've done the test in the [examples section](https://handsontable.com/examples.html?headers&context-menu&sorting): ![load-sort-remove-undo](https://cloud.githubusercontent.com/assets/243047/12786810/f0449a46-ca92-11e5-9fc1-fe3e42bacb71.gif) And after undo the order disappears and I can't find the row for the Euro. Browser: Google Chrome 48.0.2564.82 (64-bit) over Linux.
non_test
sort remove a row and undo turns out an unexpected result i haven t found this behaviour in these issues the actions are refresh sort by code select eur from country remove row and undo i ve done the test in the and after undo the order disappears and i can t find the row for the euro browser google chrome bit over linux
0
260,754
27,784,710,862
IssuesEvent
2023-03-17 01:30:47
n-devs/Fiction
https://api.github.com/repos/n-devs/Fiction
opened
CVE-2021-3807 (High) detected in ansi-regex-4.0.0.tgz
Mend: dependency security vulnerability
## CVE-2021-3807 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ansi-regex-4.0.0.tgz</b></p></summary> <p>Regular expression for matching ANSI escape codes</p> <p>Library home page: <a href="https://registry.npmjs.org/ansi-regex/-/ansi-regex-4.0.0.tgz">https://registry.npmjs.org/ansi-regex/-/ansi-regex-4.0.0.tgz</a></p> <p>Path to dependency file: /Fiction/package.json</p> <p>Path to vulnerable library: /node_modules/inquirer/node_modules/ansi-regex/package.json</p> <p> Dependency Hierarchy: - react-scripts-2.1.1.tgz (Root Library) - eslint-5.6.0.tgz - inquirer-6.2.1.tgz - strip-ansi-5.0.0.tgz - :x: **ansi-regex-4.0.0.tgz** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> ansi-regex is vulnerable to Inefficient Regular Expression Complexity <p>Publish Date: 2021-09-17 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-3807>CVE-2021-3807</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://huntr.dev/bounties/5b3cf33b-ede0-4398-9974-800876dfd994/">https://huntr.dev/bounties/5b3cf33b-ede0-4398-9974-800876dfd994/</a></p> <p>Release Date: 2021-09-17</p> <p>Fix Resolution (ansi-regex): 4.1.1</p> <p>Direct dependency fix Resolution (react-scripts): 5.0.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2021-3807 (High) detected in ansi-regex-4.0.0.tgz - ## CVE-2021-3807 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ansi-regex-4.0.0.tgz</b></p></summary> <p>Regular expression for matching ANSI escape codes</p> <p>Library home page: <a href="https://registry.npmjs.org/ansi-regex/-/ansi-regex-4.0.0.tgz">https://registry.npmjs.org/ansi-regex/-/ansi-regex-4.0.0.tgz</a></p> <p>Path to dependency file: /Fiction/package.json</p> <p>Path to vulnerable library: /node_modules/inquirer/node_modules/ansi-regex/package.json</p> <p> Dependency Hierarchy: - react-scripts-2.1.1.tgz (Root Library) - eslint-5.6.0.tgz - inquirer-6.2.1.tgz - strip-ansi-5.0.0.tgz - :x: **ansi-regex-4.0.0.tgz** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> ansi-regex is vulnerable to Inefficient Regular Expression Complexity <p>Publish Date: 2021-09-17 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-3807>CVE-2021-3807</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://huntr.dev/bounties/5b3cf33b-ede0-4398-9974-800876dfd994/">https://huntr.dev/bounties/5b3cf33b-ede0-4398-9974-800876dfd994/</a></p> <p>Release Date: 2021-09-17</p> <p>Fix Resolution (ansi-regex): 4.1.1</p> <p>Direct dependency fix Resolution (react-scripts): 5.0.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_test
cve high detected in ansi regex tgz cve high severity vulnerability vulnerable library ansi regex tgz regular expression for matching ansi escape codes library home page a href path to dependency file fiction package json path to vulnerable library node modules inquirer node modules ansi regex package json dependency hierarchy react scripts tgz root library eslint tgz inquirer tgz strip ansi tgz x ansi regex tgz vulnerable library vulnerability details ansi regex is vulnerable to inefficient regular expression complexity publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution ansi regex direct dependency fix resolution react scripts step up your open source security game with mend
0
185,797
14,380,910,036
IssuesEvent
2020-12-02 04:01:06
elastic/kibana
https://api.github.com/repos/elastic/kibana
opened
Failing test: X-Pack Detection Engine API Integration Tests.x-pack/test/detection_engine_api_integration/security_and_spaces/tests/exception_operators_data_types/ip·ts - detection engine api security and spaces enabled Detection exceptions data types and operators Rule exception operators for data type ip "is in list" operator will return 2 results if we have a list that includes 2 ips
failed-test
A test failed on a tracked branch ``` Error: timed out waiting for function condition to be true within waitForRuleSuccess at /dev/shm/workspace/parallel/14/kibana/x-pack/test/detection_engine_api_integration/utils.ts:708:9 ``` First failure: [Jenkins Build](https://kibana-ci.elastic.co/job/elastic+kibana+7.x/9973/) <!-- kibanaCiData = {"failed-test":{"test.class":"X-Pack Detection Engine API Integration Tests.x-pack/test/detection_engine_api_integration/security_and_spaces/tests/exception_operators_data_types/ip·ts","test.name":"detection engine api security and spaces enabled Detection exceptions data types and operators Rule exception operators for data type ip \"is in list\" operator will return 2 results if we have a list that includes 2 ips","test.failCount":1}} -->
1.0
Failing test: X-Pack Detection Engine API Integration Tests.x-pack/test/detection_engine_api_integration/security_and_spaces/tests/exception_operators_data_types/ip·ts - detection engine api security and spaces enabled Detection exceptions data types and operators Rule exception operators for data type ip "is in list" operator will return 2 results if we have a list that includes 2 ips - A test failed on a tracked branch ``` Error: timed out waiting for function condition to be true within waitForRuleSuccess at /dev/shm/workspace/parallel/14/kibana/x-pack/test/detection_engine_api_integration/utils.ts:708:9 ``` First failure: [Jenkins Build](https://kibana-ci.elastic.co/job/elastic+kibana+7.x/9973/) <!-- kibanaCiData = {"failed-test":{"test.class":"X-Pack Detection Engine API Integration Tests.x-pack/test/detection_engine_api_integration/security_and_spaces/tests/exception_operators_data_types/ip·ts","test.name":"detection engine api security and spaces enabled Detection exceptions data types and operators Rule exception operators for data type ip \"is in list\" operator will return 2 results if we have a list that includes 2 ips","test.failCount":1}} -->
test
failing test x pack detection engine api integration tests x pack test detection engine api integration security and spaces tests exception operators data types ip·ts detection engine api security and spaces enabled detection exceptions data types and operators rule exception operators for data type ip is in list operator will return results if we have a list that includes ips a test failed on a tracked branch error timed out waiting for function condition to be true within waitforrulesuccess at dev shm workspace parallel kibana x pack test detection engine api integration utils ts first failure
1
96,967
3,979,617,460
IssuesEvent
2016-05-06 00:52:23
ParadiseSS13/Paradise
https://api.github.com/repos/ParadiseSS13/Paradise
closed
PDA Crew Manifest is broken.
Bug High Priority
**Problem Description**: PDA Crew Manifest is broken. Other people reported the same issue. **What did you expect to happen**: A list of the crew. **What happened instead**: Just loads up a tiny blue pixel. **Steps to reproduce the problem**: Clear BYOND cache. Join server. Join game. Open PDA, go to crew manifest. **Possibly related stuff (which gamemode was it? What were you doing at the time? Was anything else out of the ordinary happening?)**: Played two rounds, was broken throughout those two rounds. Other people reported the same issue.
1.0
PDA Crew Manifest is broken. - **Problem Description**: PDA Crew Manifest is broken. Other people reported the same issue. **What did you expect to happen**: A list of the crew. **What happened instead**: Just loads up a tiny blue pixel. **Steps to reproduce the problem**: Clear BYOND cache. Join server. Join game. Open PDA, go to crew manifest. **Possibly related stuff (which gamemode was it? What were you doing at the time? Was anything else out of the ordinary happening?)**: Played two rounds, was broken throughout those two rounds. Other people reported the same issue.
non_test
pda crew manifest is broken problem description pda crew manifest is broken other people reported the same issue what did you expect to happen a list of the crew what happened instead just loads up a tiny blue pixel steps to reproduce the problem clear byond cache join server join game open pda go to crew manifest possibly related stuff which gamemode was it what were you doing at the time was anything else out of the ordinary happening played two rounds was broken throughout those two rounds other people reported the same issue
0
148,620
19,534,415,756
IssuesEvent
2021-12-31 01:37:24
panasalap/linux-4.1.15
https://api.github.com/repos/panasalap/linux-4.1.15
opened
CVE-2017-14489 (Medium) detected in linux-stable-rtv4.1.33
security vulnerability
## CVE-2017-14489 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary> <p> <p>Julia Cartwright's fork of linux-stable-rt.git</p> <p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/scsi/scsi_transport_iscsi.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The iscsi_if_rx function in drivers/scsi/scsi_transport_iscsi.c in the Linux kernel through 4.13.2 allows local users to cause a denial of service (panic) by leveraging incorrect length validation. <p>Publish Date: 2017-09-15 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-14489>CVE-2017-14489</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-14489">http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-14489</a></p> <p>Release Date: 2017-09-15</p> <p>Fix Resolution: v4.14-rc3</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2017-14489 (Medium) detected in linux-stable-rtv4.1.33 - ## CVE-2017-14489 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary> <p> <p>Julia Cartwright's fork of linux-stable-rt.git</p> <p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/scsi/scsi_transport_iscsi.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The iscsi_if_rx function in drivers/scsi/scsi_transport_iscsi.c in the Linux kernel through 4.13.2 allows local users to cause a denial of service (panic) by leveraging incorrect length validation. <p>Publish Date: 2017-09-15 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-14489>CVE-2017-14489</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-14489">http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-14489</a></p> <p>Release Date: 2017-09-15</p> <p>Fix Resolution: v4.14-rc3</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_test
cve medium detected in linux stable cve medium severity vulnerability vulnerable library linux stable julia cartwright s fork of linux stable rt git library home page a href found in base branch master vulnerable source files drivers scsi scsi transport iscsi c vulnerability details the iscsi if rx function in drivers scsi scsi transport iscsi c in the linux kernel through allows local users to cause a denial of service panic by leveraging incorrect length validation publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
100,898
8,758,081,769
IssuesEvent
2018-12-15 00:18:22
brave/brave-browser
https://api.github.com/repos/brave/brave-browser
opened
need to put shields down to watch video on Hulu
QA/Test-Plan-Specified QA/Yes bug feature/shields/webcompat regression
<!-- Have you searched for similar issues? Before submitting this issue, please check the open issues and add a note before logging a new issue. PLEASE USE THE TEMPLATE BELOW TO PROVIDE INFORMATION ABOUT THE ISSUE. INSUFFICIENT INFO WILL GET THE ISSUE CLOSED. IT WILL ONLY BE REOPENED AFTER SUFFICIENT INFO IS PROVIDED--> ## Description With 0.57.18 you need to put shields down to watch a video on Hulu. Allowing Ads and Trackers is not enough. Note, In 0.56.15 you did not need to put shields down to watch a video on Hulu - I was able to view a video with standard shield configuration. ## Steps to Reproduce <!--Please add a series of steps to reproduce the issue--> 1. Install 0.57.18 2. Navigate to Hulu 3. Install Widevine 4. Login and try to play a video. ## Actual result: Unable to play video. Get message below until you put shields down. ![screen shot 2018-12-14 at 7 06 33 pm](https://user-images.githubusercontent.com/28145373/50036098-1b3e0880-ffd4-11e8-8344-da717c8e1de4.png) ## Expected result: No error message, able to view video. ## Reproduces how often: Easily ## Brave version (brave://version info) Brave | 0.57.18 Chromium: 71.0.3578.80 (Official Build) (64-bit) -- | -- Revision | 2ac50e7249fbd55e6f517a28131605c9fb9fe897-refs/branch-heads/3578@{#860} OS | Mac OS X ### Reproducible on current release: - Does it reproduce on brave-browser dev/beta builds? yes ### Website problems only: - Does the issue resolve itself when disabling Brave Shields? yes - Is the issue reproducible on the latest version of Chrome? Using Chrome 71.0.3578.98 and UBlock Origin extension, issue does not reproduce. ### Additional Information Issue does not reproduce on previous version, 0.56.15
1.0
need to put shields down to watch video on Hulu - <!-- Have you searched for similar issues? Before submitting this issue, please check the open issues and add a note before logging a new issue. PLEASE USE THE TEMPLATE BELOW TO PROVIDE INFORMATION ABOUT THE ISSUE. INSUFFICIENT INFO WILL GET THE ISSUE CLOSED. IT WILL ONLY BE REOPENED AFTER SUFFICIENT INFO IS PROVIDED--> ## Description With 0.57.18 you need to put shields down to watch a video on Hulu. Allowing Ads and Trackers is not enough. Note, In 0.56.15 you did not need to put shields down to watch a video on Hulu - I was able to view a video with standard shield configuration. ## Steps to Reproduce <!--Please add a series of steps to reproduce the issue--> 1. Install 0.57.18 2. Navigate to Hulu 3. Install Widevine 4. Login and try to play a video. ## Actual result: Unable to play video. Get message below until you put shields down. ![screen shot 2018-12-14 at 7 06 33 pm](https://user-images.githubusercontent.com/28145373/50036098-1b3e0880-ffd4-11e8-8344-da717c8e1de4.png) ## Expected result: No error message, able to view video. ## Reproduces how often: Easily ## Brave version (brave://version info) Brave | 0.57.18 Chromium: 71.0.3578.80 (Official Build) (64-bit) -- | -- Revision | 2ac50e7249fbd55e6f517a28131605c9fb9fe897-refs/branch-heads/3578@{#860} OS | Mac OS X ### Reproducible on current release: - Does it reproduce on brave-browser dev/beta builds? yes ### Website problems only: - Does the issue resolve itself when disabling Brave Shields? yes - Is the issue reproducible on the latest version of Chrome? Using Chrome 71.0.3578.98 and UBlock Origin extension, issue does not reproduce. ### Additional Information Issue does not reproduce on previous version, 0.56.15
test
need to put shields down to watch video on hulu have you searched for similar issues before submitting this issue please check the open issues and add a note before logging a new issue please use the template below to provide information about the issue insufficient info will get the issue closed it will only be reopened after sufficient info is provided description with you need to put shields down to watch a video on hulu allowing ads and trackers is not enough note in you did not need to put shields down to watch a video on hulu i was able to view a video with standard shield configuration steps to reproduce install navigate to hulu install widevine login and try to play a video actual result unable to play video get message below until you put shields down expected result no error message able to view video reproduces how often easily brave version brave version info brave chromium   official build   bit revision refs branch heads os mac os x reproducible on current release does it reproduce on brave browser dev beta builds yes website problems only does the issue resolve itself when disabling brave shields yes is the issue reproducible on the latest version of chrome using chrome and ublock origin extension issue does not reproduce additional information issue does not reproduce on previous version
1
86,439
8,036,769,699
IssuesEvent
2018-07-30 10:13:16
vmware/harbor
https://api.github.com/repos/vmware/harbor
closed
UI error when user clicks "scan" on details page of a newly pushed image.
area/ui kind/automation-found kind/bug need-test-case target/1.6.0
Build: v1.5.0-d65a7baf Seems the root cause is the status widget is not initialized and after clicking it the code tries to modify the widget. ![image](https://user-images.githubusercontent.com/2390463/42364743-dc17d45c-812e-11e8-8143-198201c0d017.png)
1.0
UI error when user clicks "scan" on details page of a newly pushed image. - Build: v1.5.0-d65a7baf Seems the root cause is the status widget is not initialized and after clicking it the code tries to modify the widget. ![image](https://user-images.githubusercontent.com/2390463/42364743-dc17d45c-812e-11e8-8143-198201c0d017.png)
test
ui error when user clicks scan on details page of a newly pushed image build seems the root cause is the status widget is not initialized and after clicking it the code tries to modify the widget
1
145,419
19,339,417,180
IssuesEvent
2021-12-15 01:29:16
hydrogen-dev/molecule-quickstart-app
https://api.github.com/repos/hydrogen-dev/molecule-quickstart-app
opened
CVE-2021-32640 (Medium) detected in ws-6.2.1.tgz, ws-5.2.2.tgz
security vulnerability
## CVE-2021-32640 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>ws-6.2.1.tgz</b>, <b>ws-5.2.2.tgz</b></p></summary> <p> <details><summary><b>ws-6.2.1.tgz</b></p></summary> <p>Simple to use, blazing fast and thoroughly tested websocket client and server for Node.js</p> <p>Library home page: <a href="https://registry.npmjs.org/ws/-/ws-6.2.1.tgz">https://registry.npmjs.org/ws/-/ws-6.2.1.tgz</a></p> <p>Path to dependency file: molecule-quickstart-app/package.json</p> <p>Path to vulnerable library: molecule-quickstart-app/node_modules/jest-environment-jsdom-fourteen/node_modules/ws/package.json</p> <p> Dependency Hierarchy: - react-scripts-3.0.1.tgz (Root Library) - jest-environment-jsdom-fourteen-0.1.0.tgz - jsdom-14.1.0.tgz - :x: **ws-6.2.1.tgz** (Vulnerable Library) </details> <details><summary><b>ws-5.2.2.tgz</b></p></summary> <p>Simple to use, blazing fast and thoroughly tested websocket client and server for Node.js</p> <p>Library home page: <a href="https://registry.npmjs.org/ws/-/ws-5.2.2.tgz">https://registry.npmjs.org/ws/-/ws-5.2.2.tgz</a></p> <p>Path to dependency file: molecule-quickstart-app/package.json</p> <p>Path to vulnerable library: molecule-quickstart-app/node_modules/ws/package.json</p> <p> Dependency Hierarchy: - react-scripts-3.0.1.tgz (Root Library) - jest-24.7.1.tgz - jest-cli-24.9.0.tgz - jest-config-24.9.0.tgz - jest-environment-jsdom-24.9.0.tgz - jsdom-11.12.0.tgz - :x: **ws-5.2.2.tgz** (Vulnerable Library) </details> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> ws is an open source WebSocket client and server library for Node.js. A specially crafted value of the `Sec-Websocket-Protocol` header can be used to significantly slow down a ws server. The vulnerability has been fixed in ws@7.4.6 (https://github.com/websockets/ws/commit/00c425ec77993773d823f018f64a5c44e17023ff). In vulnerable versions of ws, the issue can be mitigated by reducing the maximum allowed length of the request headers using the [`--max-http-header-size=size`](https://nodejs.org/api/cli.html#cli_max_http_header_size_size) and/or the [`maxHeaderSize`](https://nodejs.org/api/http.html#http_http_createserver_options_requestlistener) options. <p>Publish Date: 2021-05-25 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-32640>CVE-2021-32640</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/websockets/ws/security/advisories/GHSA-6fc8-4gx4-v693">https://github.com/websockets/ws/security/advisories/GHSA-6fc8-4gx4-v693</a></p> <p>Release Date: 2021-05-25</p> <p>Fix Resolution: 5.2.3,6.2.2,7.4.6</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"ws","packageVersion":"6.2.1","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"react-scripts:3.0.1;jest-environment-jsdom-fourteen:0.1.0;jsdom:14.1.0;ws:6.2.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"5.2.3,6.2.2,7.4.6","isBinary":false},{"packageType":"javascript/Node.js","packageName":"ws","packageVersion":"5.2.2","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"react-scripts:3.0.1;jest:24.7.1;jest-cli:24.9.0;jest-config:24.9.0;jest-environment-jsdom:24.9.0;jsdom:11.12.0;ws:5.2.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"5.2.3,6.2.2,7.4.6","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-32640","vulnerabilityDetails":"ws is an open source WebSocket client and server library for Node.js. A specially crafted value of the `Sec-Websocket-Protocol` header can be used to significantly slow down a ws server. The vulnerability has been fixed in ws@7.4.6 (https://github.com/websockets/ws/commit/00c425ec77993773d823f018f64a5c44e17023ff). In vulnerable versions of ws, the issue can be mitigated by reducing the maximum allowed length of the request headers using the [`--max-http-header-size\u003dsize`](https://nodejs.org/api/cli.html#cli_max_http_header_size_size) and/or the [`maxHeaderSize`](https://nodejs.org/api/http.html#http_http_createserver_options_requestlistener) options.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-32640","cvss3Severity":"medium","cvss3Score":"5.3","cvss3Metrics":{"A":"Low","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
True
CVE-2021-32640 (Medium) detected in ws-6.2.1.tgz, ws-5.2.2.tgz - ## CVE-2021-32640 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>ws-6.2.1.tgz</b>, <b>ws-5.2.2.tgz</b></p></summary> <p> <details><summary><b>ws-6.2.1.tgz</b></p></summary> <p>Simple to use, blazing fast and thoroughly tested websocket client and server for Node.js</p> <p>Library home page: <a href="https://registry.npmjs.org/ws/-/ws-6.2.1.tgz">https://registry.npmjs.org/ws/-/ws-6.2.1.tgz</a></p> <p>Path to dependency file: molecule-quickstart-app/package.json</p> <p>Path to vulnerable library: molecule-quickstart-app/node_modules/jest-environment-jsdom-fourteen/node_modules/ws/package.json</p> <p> Dependency Hierarchy: - react-scripts-3.0.1.tgz (Root Library) - jest-environment-jsdom-fourteen-0.1.0.tgz - jsdom-14.1.0.tgz - :x: **ws-6.2.1.tgz** (Vulnerable Library) </details> <details><summary><b>ws-5.2.2.tgz</b></p></summary> <p>Simple to use, blazing fast and thoroughly tested websocket client and server for Node.js</p> <p>Library home page: <a href="https://registry.npmjs.org/ws/-/ws-5.2.2.tgz">https://registry.npmjs.org/ws/-/ws-5.2.2.tgz</a></p> <p>Path to dependency file: molecule-quickstart-app/package.json</p> <p>Path to vulnerable library: molecule-quickstart-app/node_modules/ws/package.json</p> <p> Dependency Hierarchy: - react-scripts-3.0.1.tgz (Root Library) - jest-24.7.1.tgz - jest-cli-24.9.0.tgz - jest-config-24.9.0.tgz - jest-environment-jsdom-24.9.0.tgz - jsdom-11.12.0.tgz - :x: **ws-5.2.2.tgz** (Vulnerable Library) </details> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> ws is an open source WebSocket client and server library for Node.js. A specially crafted value of the `Sec-Websocket-Protocol` header can be used to significantly slow down a ws server. The vulnerability has been fixed in ws@7.4.6 (https://github.com/websockets/ws/commit/00c425ec77993773d823f018f64a5c44e17023ff). In vulnerable versions of ws, the issue can be mitigated by reducing the maximum allowed length of the request headers using the [`--max-http-header-size=size`](https://nodejs.org/api/cli.html#cli_max_http_header_size_size) and/or the [`maxHeaderSize`](https://nodejs.org/api/http.html#http_http_createserver_options_requestlistener) options. <p>Publish Date: 2021-05-25 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-32640>CVE-2021-32640</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/websockets/ws/security/advisories/GHSA-6fc8-4gx4-v693">https://github.com/websockets/ws/security/advisories/GHSA-6fc8-4gx4-v693</a></p> <p>Release Date: 2021-05-25</p> <p>Fix Resolution: 5.2.3,6.2.2,7.4.6</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"ws","packageVersion":"6.2.1","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"react-scripts:3.0.1;jest-environment-jsdom-fourteen:0.1.0;jsdom:14.1.0;ws:6.2.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"5.2.3,6.2.2,7.4.6","isBinary":false},{"packageType":"javascript/Node.js","packageName":"ws","packageVersion":"5.2.2","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"react-scripts:3.0.1;jest:24.7.1;jest-cli:24.9.0;jest-config:24.9.0;jest-environment-jsdom:24.9.0;jsdom:11.12.0;ws:5.2.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"5.2.3,6.2.2,7.4.6","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-32640","vulnerabilityDetails":"ws is an open source WebSocket client and server library for Node.js. A specially crafted value of the `Sec-Websocket-Protocol` header can be used to significantly slow down a ws server. The vulnerability has been fixed in ws@7.4.6 (https://github.com/websockets/ws/commit/00c425ec77993773d823f018f64a5c44e17023ff). In vulnerable versions of ws, the issue can be mitigated by reducing the maximum allowed length of the request headers using the [`--max-http-header-size\u003dsize`](https://nodejs.org/api/cli.html#cli_max_http_header_size_size) and/or the [`maxHeaderSize`](https://nodejs.org/api/http.html#http_http_createserver_options_requestlistener) options.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-32640","cvss3Severity":"medium","cvss3Score":"5.3","cvss3Metrics":{"A":"Low","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
non_test
cve medium detected in ws tgz ws tgz cve medium severity vulnerability vulnerable libraries ws tgz ws tgz ws tgz simple to use blazing fast and thoroughly tested websocket client and server for node js library home page a href path to dependency file molecule quickstart app package json path to vulnerable library molecule quickstart app node modules jest environment jsdom fourteen node modules ws package json dependency hierarchy react scripts tgz root library jest environment jsdom fourteen tgz jsdom tgz x ws tgz vulnerable library ws tgz simple to use blazing fast and thoroughly tested websocket client and server for node js library home page a href path to dependency file molecule quickstart app package json path to vulnerable library molecule quickstart app node modules ws package json dependency hierarchy react scripts tgz root library jest tgz jest cli tgz jest config tgz jest environment jsdom tgz jsdom tgz x ws tgz vulnerable library found in base branch master vulnerability details ws is an open source websocket client and server library for node js a specially crafted value of the sec websocket protocol header can be used to significantly slow down a ws server the vulnerability has been fixed in ws in vulnerable versions of ws the issue can be mitigated by reducing the maximum allowed length of the request headers using the and or the options publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree react scripts jest environment jsdom fourteen jsdom ws isminimumfixversionavailable true minimumfixversion isbinary false packagetype javascript node js packagename ws packageversion packagefilepaths istransitivedependency true dependencytree react scripts jest jest cli jest config jest environment jsdom jsdom ws isminimumfixversionavailable true minimumfixversion isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails ws is an open source websocket client and server library for node js a specially crafted value of the sec websocket protocol header can be used to significantly slow down a ws server the vulnerability has been fixed in ws in vulnerable versions of ws the issue can be mitigated by reducing the maximum allowed length of the request headers using the and or the options vulnerabilityurl
0
348,959
31,763,426,102
IssuesEvent
2023-09-12 07:15:24
brave/brave-browser
https://api.github.com/repos/brave/brave-browser
closed
Update l10n for 1.58.x (Chromium 117).
l10n QA/Yes release-notes/exclude QA/Test-Plan-Specified OS/Android OS/Desktop
Download the latest l10n from Transifex. Test plan: @brave/legacy_qa should run through a few locals just to make sure that nothing obvious has regressed. Shouldn't spend too much time on this though. **On Android:** please, verify that the string `Downloading wallet data file * X%` from https://github.com/brave/brave-core/pull/19086 looks correctly (specifically the percentage character).
1.0
Update l10n for 1.58.x (Chromium 117). - Download the latest l10n from Transifex. Test plan: @brave/legacy_qa should run through a few locals just to make sure that nothing obvious has regressed. Shouldn't spend too much time on this though. **On Android:** please, verify that the string `Downloading wallet data file * X%` from https://github.com/brave/brave-core/pull/19086 looks correctly (specifically the percentage character).
test
update for x chromium download the latest from transifex test plan brave legacy qa should run through a few locals just to make sure that nothing obvious has regressed shouldn t spend too much time on this though on android please verify that the string downloading wallet data file x from looks correctly specifically the percentage character
1
33,772
16,107,007,622
IssuesEvent
2021-04-27 16:02:21
dotnet/runtime
https://api.github.com/repos/dotnet/runtime
opened
Hysteresis effect on threadpool hill-climbing
tenet-performance
We have noticed a periodic pattern on the threadpool hill-climbing logic, which uses either `n-cores` or `n-cores + 20` with an hysteresis effect that switches every 3-4 weeks: ![image](https://user-images.githubusercontent.com/1165805/116272903-27a30600-a736-11eb-870a-ff005d43bc47.png) The main visible impact is on performance results, here is an example with JsonPlatform mean latency, but some scenarios are also impacted in throughput: ![image](https://user-images.githubusercontent.com/1165805/116273450-9b451300-a736-11eb-8bcc-e47ceb47b063.png) This happens independently of the runtime version, meaning that using an older runtime/aspnet/sdk doesn't change the "current" value of the TP threads. It is also independent of the hardware, and happens on all machines (Linux only) on the same day. These machines have auto-updates disabled. Here are ARM64 (32 cores), AMD (48 cores), INTEL (28 cores): ![image](https://user-images.githubusercontent.com/1165805/116273851-f840c900-a736-11eb-8e88-124e000e56b3.png) Disabling hill-climbing restores the better perf in this case, so it is believe that fixing this variation will actually have a negative impact on perf for these scenarios.
True
Hysteresis effect on threadpool hill-climbing - We have noticed a periodic pattern on the threadpool hill-climbing logic, which uses either `n-cores` or `n-cores + 20` with an hysteresis effect that switches every 3-4 weeks: ![image](https://user-images.githubusercontent.com/1165805/116272903-27a30600-a736-11eb-870a-ff005d43bc47.png) The main visible impact is on performance results, here is an example with JsonPlatform mean latency, but some scenarios are also impacted in throughput: ![image](https://user-images.githubusercontent.com/1165805/116273450-9b451300-a736-11eb-8bcc-e47ceb47b063.png) This happens independently of the runtime version, meaning that using an older runtime/aspnet/sdk doesn't change the "current" value of the TP threads. It is also independent of the hardware, and happens on all machines (Linux only) on the same day. These machines have auto-updates disabled. Here are ARM64 (32 cores), AMD (48 cores), INTEL (28 cores): ![image](https://user-images.githubusercontent.com/1165805/116273851-f840c900-a736-11eb-8e88-124e000e56b3.png) Disabling hill-climbing restores the better perf in this case, so it is believe that fixing this variation will actually have a negative impact on perf for these scenarios.
non_test
hysteresis effect on threadpool hill climbing we have noticed a periodic pattern on the threadpool hill climbing logic which uses either n cores or n cores with an hysteresis effect that switches every weeks the main visible impact is on performance results here is an example with jsonplatform mean latency but some scenarios are also impacted in throughput this happens independently of the runtime version meaning that using an older runtime aspnet sdk doesn t change the current value of the tp threads it is also independent of the hardware and happens on all machines linux only on the same day these machines have auto updates disabled here are cores amd cores intel cores disabling hill climbing restores the better perf in this case so it is believe that fixing this variation will actually have a negative impact on perf for these scenarios
0
453,600
13,085,221,826
IssuesEvent
2020-08-02 00:51:36
HealthHackAu2020/not_the_only_one
https://api.github.com/repos/HealthHackAu2020/not_the_only_one
closed
I think the "Share your Story" should be replaced on main page by the search bar
Health Hack Priority
I think that the search bar should be more prominent.
1.0
I think the "Share your Story" should be replaced on main page by the search bar - I think that the search bar should be more prominent.
non_test
i think the share your story should be replaced on main page by the search bar i think that the search bar should be more prominent
0
660,838
22,032,872,654
IssuesEvent
2022-05-28 05:27:14
PavlidisLab/Gemma
https://api.github.com/repos/PavlidisLab/Gemma
closed
Error when searching ontology terms
bug high priority
The Gemma interface has been lagging as of lately, particularly when searching ontology terms (for experimental tags, experimental factors, factor values). It will take several minutes to search and then fail altogether - this will happen several times before obtaining any results. <img width="296" alt="Screen Shot 2022-02-18 at 12 28 16 PM" src="https://user-images.githubusercontent.com/89932772/154756725-1ef94304-b8ff-45df-baec-821bb666a35f.png">
1.0
Error when searching ontology terms - The Gemma interface has been lagging as of lately, particularly when searching ontology terms (for experimental tags, experimental factors, factor values). It will take several minutes to search and then fail altogether - this will happen several times before obtaining any results. <img width="296" alt="Screen Shot 2022-02-18 at 12 28 16 PM" src="https://user-images.githubusercontent.com/89932772/154756725-1ef94304-b8ff-45df-baec-821bb666a35f.png">
non_test
error when searching ontology terms the gemma interface has been lagging as of lately particularly when searching ontology terms for experimental tags experimental factors factor values it will take several minutes to search and then fail altogether this will happen several times before obtaining any results img width alt screen shot at pm src
0
25,530
11,185,757,825
IssuesEvent
2020-01-01 05:46:02
EcommEasy/EcommEasy
https://api.github.com/repos/EcommEasy/EcommEasy
opened
CVE-2019-19919 (High) detected in handlebars-4.1.1.tgz
security vulnerability
## CVE-2019-19919 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.1.1.tgz</b></p></summary> <p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p> <p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.1.1.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.1.1.tgz</a></p> <p>Path to dependency file: /tmp/ws-scm/EcommEasy/package.json</p> <p>Path to vulnerable library: /EcommEasy/node_modules/handlebars/package.json</p> <p> Dependency Hierarchy: - :x: **handlebars-4.1.1.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/EcommEasy/EcommEasy/commit/363b3c5c1efcb2a7265f2d259bed12d00efb92c4">363b3c5c1efcb2a7265f2d259bed12d00efb92c4</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Versions of handlebars prior to 4.3.0 are vulnerable to Prototype Pollution leading to Remote Code Execution. Templates may alter an Object's __proto__ and __defineGetter__ properties, which may allow an attacker to execute arbitrary code through crafted payloads. <p>Publish Date: 2019-12-20 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-19919>CVE-2019-19919</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics not available</p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.npmjs.com/advisories/1164">https://www.npmjs.com/advisories/1164</a></p> <p>Release Date: 2019-12-20</p> <p>Fix Resolution: 4.3.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2019-19919 (High) detected in handlebars-4.1.1.tgz - ## CVE-2019-19919 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.1.1.tgz</b></p></summary> <p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p> <p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.1.1.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.1.1.tgz</a></p> <p>Path to dependency file: /tmp/ws-scm/EcommEasy/package.json</p> <p>Path to vulnerable library: /EcommEasy/node_modules/handlebars/package.json</p> <p> Dependency Hierarchy: - :x: **handlebars-4.1.1.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/EcommEasy/EcommEasy/commit/363b3c5c1efcb2a7265f2d259bed12d00efb92c4">363b3c5c1efcb2a7265f2d259bed12d00efb92c4</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Versions of handlebars prior to 4.3.0 are vulnerable to Prototype Pollution leading to Remote Code Execution. Templates may alter an Object's __proto__ and __defineGetter__ properties, which may allow an attacker to execute arbitrary code through crafted payloads. <p>Publish Date: 2019-12-20 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-19919>CVE-2019-19919</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics not available</p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.npmjs.com/advisories/1164">https://www.npmjs.com/advisories/1164</a></p> <p>Release Date: 2019-12-20</p> <p>Fix Resolution: 4.3.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_test
cve high detected in handlebars tgz cve high severity vulnerability vulnerable library handlebars tgz handlebars provides the power necessary to let you build semantic templates effectively with no frustration library home page a href path to dependency file tmp ws scm ecommeasy package json path to vulnerable library ecommeasy node modules handlebars package json dependency hierarchy x handlebars tgz vulnerable library found in head commit a href vulnerability details versions of handlebars prior to are vulnerable to prototype pollution leading to remote code execution templates may alter an object s proto and definegetter properties which may allow an attacker to execute arbitrary code through crafted payloads publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
55,070
6,425,244,995
IssuesEvent
2017-08-09 15:03:31
WordPress/gutenberg
https://api.github.com/repos/WordPress/gutenberg
opened
Uncertain behavior in userData reducer tests.
Unit Testing [Type] Bug [Type] Question
<!-- BEFORE POSTING YOUR ISSUE: - These comments won't show up when you submit the issue. - Try to add as much detail as possible. Be specific! - Please add the version of Gutenberg you are using in the description - If you're requesting a new feature, explain why you'd like it to be added. - Search this repository for the issue and whether it has been fixed or reported already. - Ensure you are using the latest code before logging bugs. - Disable all plugins to ensure it's not a plugin conflict issue. --> ## Issue Overview <!-- This is a brief overview of the issue. ---> One of the userData reducer tests in editor/test/state is not clearing out the global blockTypes correctly. It "should populate recently used blocks with the common category", which it does, just not the expected result in the test suite. You can see the failing tests in the #2299, and #2309. Not sure if the test is expecting the wrong thing, or if the data is not being cleared. This issue has deeper roots as it is tied to the global blockTypes state. ## Expected Behavior <!-- If you're describing a bug, tell us what should happen --> <!-- If you're suggesting a change/improvement, tell us how it should work --> According to the tests only two blockTypes should appear in the array, instead 8 do. It is unclear what the expected outcome of the test should be, but based on the test suite it looks as though the tests are not cleaning up themselves properly. ## Current Behavior <!-- If describing a bug, tell us what happens instead of the expected behavior --> <!-- If suggesting a change/improvement, explain the difference from current behavior --> ## Possible Solution <!-- Not obligatory, but suggest a fix/reason for the bug, --> <!-- or ideas how to implement the addition or change --> A possible solution might be clearing out all registered types on the after hook. A better long term solution is to introduce blockType registries and have independent fixtures be used for tests that need a certain set of blocks. This would make our test suite run slightly faster as well. ## Related Issues and/or PRs <!-- List related issues or PRs against other branches: --> #2309, #2299
1.0
Uncertain behavior in userData reducer tests. - <!-- BEFORE POSTING YOUR ISSUE: - These comments won't show up when you submit the issue. - Try to add as much detail as possible. Be specific! - Please add the version of Gutenberg you are using in the description - If you're requesting a new feature, explain why you'd like it to be added. - Search this repository for the issue and whether it has been fixed or reported already. - Ensure you are using the latest code before logging bugs. - Disable all plugins to ensure it's not a plugin conflict issue. --> ## Issue Overview <!-- This is a brief overview of the issue. ---> One of the userData reducer tests in editor/test/state is not clearing out the global blockTypes correctly. It "should populate recently used blocks with the common category", which it does, just not the expected result in the test suite. You can see the failing tests in the #2299, and #2309. Not sure if the test is expecting the wrong thing, or if the data is not being cleared. This issue has deeper roots as it is tied to the global blockTypes state. ## Expected Behavior <!-- If you're describing a bug, tell us what should happen --> <!-- If you're suggesting a change/improvement, tell us how it should work --> According to the tests only two blockTypes should appear in the array, instead 8 do. It is unclear what the expected outcome of the test should be, but based on the test suite it looks as though the tests are not cleaning up themselves properly. ## Current Behavior <!-- If describing a bug, tell us what happens instead of the expected behavior --> <!-- If suggesting a change/improvement, explain the difference from current behavior --> ## Possible Solution <!-- Not obligatory, but suggest a fix/reason for the bug, --> <!-- or ideas how to implement the addition or change --> A possible solution might be clearing out all registered types on the after hook. A better long term solution is to introduce blockType registries and have independent fixtures be used for tests that need a certain set of blocks. This would make our test suite run slightly faster as well. ## Related Issues and/or PRs <!-- List related issues or PRs against other branches: --> #2309, #2299
test
uncertain behavior in userdata reducer tests before posting your issue these comments won t show up when you submit the issue try to add as much detail as possible be specific please add the version of gutenberg you are using in the description if you re requesting a new feature explain why you d like it to be added search this repository for the issue and whether it has been fixed or reported already ensure you are using the latest code before logging bugs disable all plugins to ensure it s not a plugin conflict issue issue overview one of the userdata reducer tests in editor test state is not clearing out the global blocktypes correctly it should populate recently used blocks with the common category which it does just not the expected result in the test suite you can see the failing tests in the and not sure if the test is expecting the wrong thing or if the data is not being cleared this issue has deeper roots as it is tied to the global blocktypes state expected behavior according to the tests only two blocktypes should appear in the array instead do it is unclear what the expected outcome of the test should be but based on the test suite it looks as though the tests are not cleaning up themselves properly current behavior possible solution a possible solution might be clearing out all registered types on the after hook a better long term solution is to introduce blocktype registries and have independent fixtures be used for tests that need a certain set of blocks this would make our test suite run slightly faster as well related issues and or prs
1
294,787
22,162,725,390
IssuesEvent
2022-06-04 18:53:40
typescript-eslint/typescript-eslint
https://api.github.com/repos/typescript-eslint/typescript-eslint
closed
Docs: [no-extraneous-class] Explain why the rule is useful
package: eslint-plugin documentation accepting prs
### Before You File a Documentation Request Please Confirm You Have Done The Following... - [X] I have looked for existing [open or closed documentation requests](https://github.com/typescript-eslint/typescript-eslint/issues?q=is%3Aissue+label%3Adocumentation) that match my proposal. - [X] I have [read the FAQ](https://typescript-eslint.io/docs/linting/troubleshooting) and my problem is not listed. ### Suggested Changes Right now the rule just quotes TSLint's old docs as evidence: > Users who come from a Java-style OO language may wrap their utility functions in an extra class, instead of putting them at the top level. ...but it doesn't explain _why_ wrapping utility functions in an extra class is unnecessary or even bad. In summary: * Wrapper classes add extra runtime bloat and cognitive complexity to code without adding any structural improvements * Whatever would be put on them, such as utility functions, is already organized by virtue of the module it's in. * You can always `import * as ...` the module to get all of them in a single object. * IDEs can't provide as good autocompletions when you start typing the names of the helpers, since they're on a class instead of freely available to import * It's harder to statically analyze code for unused variables, etc. when they're all on the class (see: [ts-prune](https://github.com/nadeesha/ts-prune)). IME, this kind of class structure often comes up and is later regretted with teams that are used to adhering to OOP principles but then work in a runtime (e.g. Node) and project type (e.g. Express) that don't need them. They eventually get used to using ECMAScript modules as their form of organization, and find the extra classes to be unnecessary bloat. ### Affected URL(s) https://typescript-eslint.io/rules/no-extraneous-class
1.0
Docs: [no-extraneous-class] Explain why the rule is useful - ### Before You File a Documentation Request Please Confirm You Have Done The Following... - [X] I have looked for existing [open or closed documentation requests](https://github.com/typescript-eslint/typescript-eslint/issues?q=is%3Aissue+label%3Adocumentation) that match my proposal. - [X] I have [read the FAQ](https://typescript-eslint.io/docs/linting/troubleshooting) and my problem is not listed. ### Suggested Changes Right now the rule just quotes TSLint's old docs as evidence: > Users who come from a Java-style OO language may wrap their utility functions in an extra class, instead of putting them at the top level. ...but it doesn't explain _why_ wrapping utility functions in an extra class is unnecessary or even bad. In summary: * Wrapper classes add extra runtime bloat and cognitive complexity to code without adding any structural improvements * Whatever would be put on them, such as utility functions, is already organized by virtue of the module it's in. * You can always `import * as ...` the module to get all of them in a single object. * IDEs can't provide as good autocompletions when you start typing the names of the helpers, since they're on a class instead of freely available to import * It's harder to statically analyze code for unused variables, etc. when they're all on the class (see: [ts-prune](https://github.com/nadeesha/ts-prune)). IME, this kind of class structure often comes up and is later regretted with teams that are used to adhering to OOP principles but then work in a runtime (e.g. Node) and project type (e.g. Express) that don't need them. They eventually get used to using ECMAScript modules as their form of organization, and find the extra classes to be unnecessary bloat. ### Affected URL(s) https://typescript-eslint.io/rules/no-extraneous-class
non_test
docs explain why the rule is useful before you file a documentation request please confirm you have done the following i have looked for existing that match my proposal i have and my problem is not listed suggested changes right now the rule just quotes tslint s old docs as evidence users who come from a java style oo language may wrap their utility functions in an extra class instead of putting them at the top level but it doesn t explain why wrapping utility functions in an extra class is unnecessary or even bad in summary wrapper classes add extra runtime bloat and cognitive complexity to code without adding any structural improvements whatever would be put on them such as utility functions is already organized by virtue of the module it s in you can always import as the module to get all of them in a single object ides can t provide as good autocompletions when you start typing the names of the helpers since they re on a class instead of freely available to import it s harder to statically analyze code for unused variables etc when they re all on the class see ime this kind of class structure often comes up and is later regretted with teams that are used to adhering to oop principles but then work in a runtime e g node and project type e g express that don t need them they eventually get used to using ecmascript modules as their form of organization and find the extra classes to be unnecessary bloat affected url s
0
38,833
12,603,292,742
IssuesEvent
2020-06-11 13:15:39
jgeraigery/logstash
https://api.github.com/repos/jgeraigery/logstash
opened
CVE-2019-16942 (High) detected in jackson-databind-2.9.10.jar
security vulnerability
## CVE-2019-16942 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.10.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to vulnerable library: le/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.9.10/e201bb70b7469ba18dd58ed8268aa44e702fa2f0/jackson-databind-2.9.10.jar,le/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.9.10/e201bb70b7469ba18dd58ed8268aa44e702fa2f0/jackson-databind-2.9.10.jar</p> <p> Dependency Hierarchy: - :x: **jackson-databind-2.9.10.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/jgeraigery/logstash/commit/201cee856b2ad93e442e269232049cdde83045a3">201cee856b2ad93e442e269232049cdde83045a3</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.0.0 through 2.9.10. When Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint and the service has the commons-dbcp (1.4) jar in the classpath, and an attacker can find an RMI service endpoint to access, it is possible to make the service execute a malicious payload. This issue exists because of org.apache.commons.dbcp.datasources.SharedPoolDataSource and org.apache.commons.dbcp.datasources.PerUserPoolDataSource mishandling. <p>Publish Date: 2019-10-01 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-16942>CVE-2019-16942</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16942">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16942</a></p> <p>Release Date: 2019-10-01</p> <p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.6.7.3,2.7.9.7,2.8.11.5,2.9.10.1</p> </p> </details> <p></p> *** :rescue_worker_helmet: Automatic Remediation is available for this issue <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.10","isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.9.10","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.6.7.3,2.7.9.7,2.8.11.5,2.9.10.1"}],"vulnerabilityIdentifier":"CVE-2019-16942","vulnerabilityDetails":"A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.0.0 through 2.9.10. When Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint and the service has the commons-dbcp (1.4) jar in the classpath, and an attacker can find an RMI service endpoint to access, it is possible to make the service execute a malicious payload. This issue exists because of org.apache.commons.dbcp.datasources.SharedPoolDataSource and org.apache.commons.dbcp.datasources.PerUserPoolDataSource mishandling.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-16942","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
True
CVE-2019-16942 (High) detected in jackson-databind-2.9.10.jar - ## CVE-2019-16942 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.10.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to vulnerable library: le/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.9.10/e201bb70b7469ba18dd58ed8268aa44e702fa2f0/jackson-databind-2.9.10.jar,le/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.9.10/e201bb70b7469ba18dd58ed8268aa44e702fa2f0/jackson-databind-2.9.10.jar</p> <p> Dependency Hierarchy: - :x: **jackson-databind-2.9.10.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/jgeraigery/logstash/commit/201cee856b2ad93e442e269232049cdde83045a3">201cee856b2ad93e442e269232049cdde83045a3</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.0.0 through 2.9.10. When Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint and the service has the commons-dbcp (1.4) jar in the classpath, and an attacker can find an RMI service endpoint to access, it is possible to make the service execute a malicious payload. This issue exists because of org.apache.commons.dbcp.datasources.SharedPoolDataSource and org.apache.commons.dbcp.datasources.PerUserPoolDataSource mishandling. <p>Publish Date: 2019-10-01 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-16942>CVE-2019-16942</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16942">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16942</a></p> <p>Release Date: 2019-10-01</p> <p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.6.7.3,2.7.9.7,2.8.11.5,2.9.10.1</p> </p> </details> <p></p> *** :rescue_worker_helmet: Automatic Remediation is available for this issue <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.10","isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.9.10","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.6.7.3,2.7.9.7,2.8.11.5,2.9.10.1"}],"vulnerabilityIdentifier":"CVE-2019-16942","vulnerabilityDetails":"A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.0.0 through 2.9.10. When Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint and the service has the commons-dbcp (1.4) jar in the classpath, and an attacker can find an RMI service endpoint to access, it is possible to make the service execute a malicious payload. This issue exists because of org.apache.commons.dbcp.datasources.SharedPoolDataSource and org.apache.commons.dbcp.datasources.PerUserPoolDataSource mishandling.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-16942","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
non_test
cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to vulnerable library le caches modules files com fasterxml jackson core jackson databind jackson databind jar le caches modules files com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy x jackson databind jar vulnerable library found in head commit a href vulnerability details a polymorphic typing issue was discovered in fasterxml jackson databind through when default typing is enabled either globally or for a specific property for an externally exposed json endpoint and the service has the commons dbcp jar in the classpath and an attacker can find an rmi service endpoint to access it is possible to make the service execute a malicious payload this issue exists because of org apache commons dbcp datasources sharedpooldatasource and org apache commons dbcp datasources peruserpooldatasource mishandling publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind rescue worker helmet automatic remediation is available for this issue isopenpronvulnerability true ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails a polymorphic typing issue was discovered in fasterxml jackson databind through when default typing is enabled either globally or for a specific property for an externally exposed json endpoint and the service has the commons dbcp jar in the classpath and an attacker can find an rmi service endpoint to access it is possible to make the service execute a malicious payload this issue exists because of org apache commons dbcp datasources sharedpooldatasource and org apache commons dbcp datasources peruserpooldatasource mishandling vulnerabilityurl
0
324,902
24,023,304,069
IssuesEvent
2022-09-15 09:24:51
synthead/timex_datalink_client
https://api.github.com/repos/synthead/timex_datalink_client
closed
Update README.md for protocol 9 usage
documentation enhancement protocol 9
Now that protocol 9 is feature-complete, update `README.md` to include usage examples for protocol 9.
1.0
Update README.md for protocol 9 usage - Now that protocol 9 is feature-complete, update `README.md` to include usage examples for protocol 9.
non_test
update readme md for protocol usage now that protocol is feature complete update readme md to include usage examples for protocol
0
240,418
18,351,064,552
IssuesEvent
2021-10-08 12:36:49
opendevstack/ods-pipeline
https://api.github.com/repos/opendevstack/ods-pipeline
closed
Improve installation instruction
documentation
Make the git subtree commands copy/pastable by concatenating the lines with `&& \`. Prerequisites are missing that `helm-diff` and `helm-secrets` plugins are installed. User installation is missing: * information that webhook configuration requires the bitbucket webhook secret * note about `pipeline` serviceaccount permissions in deployment namespaces FYI @adyachok @ManuelFeller
1.0
Improve installation instruction - Make the git subtree commands copy/pastable by concatenating the lines with `&& \`. Prerequisites are missing that `helm-diff` and `helm-secrets` plugins are installed. User installation is missing: * information that webhook configuration requires the bitbucket webhook secret * note about `pipeline` serviceaccount permissions in deployment namespaces FYI @adyachok @ManuelFeller
non_test
improve installation instruction make the git subtree commands copy pastable by concatenating the lines with prerequisites are missing that helm diff and helm secrets plugins are installed user installation is missing information that webhook configuration requires the bitbucket webhook secret note about pipeline serviceaccount permissions in deployment namespaces fyi adyachok manuelfeller
0
325,490
24,051,692,642
IssuesEvent
2022-09-16 13:21:57
adonutwithsprinklez/CodeNameEmpty
https://api.github.com/repos/adonutwithsprinklez/CodeNameEmpty
opened
Rename Branches
Documentation
Due to the change in how branches/issues are worked for this project, the branch names no longer fit what they actually contain. The following should be how branches are handled in the future: "master" - main branch. Only "release" or "hotfix" pull requests should be allowed "development" - Main development branch. Should only allow pull requests from branches with tracked issues "hotfix" - A branch that will only exist if the "master" branch needs an immediate fix that cannot wait for a full "release" "{issue #}" - Branches with specific issues attached to them. Typically pulled into the "development" branch upon completion.
1.0
Rename Branches - Due to the change in how branches/issues are worked for this project, the branch names no longer fit what they actually contain. The following should be how branches are handled in the future: "master" - main branch. Only "release" or "hotfix" pull requests should be allowed "development" - Main development branch. Should only allow pull requests from branches with tracked issues "hotfix" - A branch that will only exist if the "master" branch needs an immediate fix that cannot wait for a full "release" "{issue #}" - Branches with specific issues attached to them. Typically pulled into the "development" branch upon completion.
non_test
rename branches due to the change in how branches issues are worked for this project the branch names no longer fit what they actually contain the following should be how branches are handled in the future master main branch only release or hotfix pull requests should be allowed development main development branch should only allow pull requests from branches with tracked issues hotfix a branch that will only exist if the master branch needs an immediate fix that cannot wait for a full release issue branches with specific issues attached to them typically pulled into the development branch upon completion
0
38,409
12,541,739,225
IssuesEvent
2020-06-05 12:54:34
jgeraigery/Spark-Twitter-Watson-Dashboard
https://api.github.com/repos/jgeraigery/Spark-Twitter-Watson-Dashboard
opened
CVE-2017-16137 (Medium) detected in debug-2.2.0.tgz, debug-2.0.0.tgz
security vulnerability
## CVE-2017-16137 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>debug-2.2.0.tgz</b>, <b>debug-2.0.0.tgz</b></p></summary> <p> <details><summary><b>debug-2.2.0.tgz</b></p></summary> <p>small debugging utility</p> <p>Library home page: <a href="https://registry.npmjs.org/debug/-/debug-2.2.0.tgz">https://registry.npmjs.org/debug/-/debug-2.2.0.tgz</a></p> <p>Path to dependency file: /tmp/ws-scm/Spark-Twitter-Watson-Dashboard/package.json</p> <p>Path to vulnerable library: /tmp/ws-scm/Spark-Twitter-Watson-Dashboard/node_modules/express-session/node_modules/debug/package.json</p> <p> Dependency Hierarchy: - express-session-1.11.3.tgz (Root Library) - :x: **debug-2.2.0.tgz** (Vulnerable Library) </details> <details><summary><b>debug-2.0.0.tgz</b></p></summary> <p>small debugging utility</p> <p>Library home page: <a href="https://registry.npmjs.org/debug/-/debug-2.0.0.tgz">https://registry.npmjs.org/debug/-/debug-2.0.0.tgz</a></p> <p>Path to dependency file: /tmp/ws-scm/Spark-Twitter-Watson-Dashboard/package.json</p> <p>Path to vulnerable library: /Spark-Twitter-Watson-Dashboard/node_modules/debug/package.json</p> <p> Dependency Hierarchy: - :x: **debug-2.0.0.tgz** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/jgeraigery/Spark-Twitter-Watson-Dashboard/commit/c04488721f52b3d658e3974d5410a6d3a0465bfe">c04488721f52b3d658e3974d5410a6d3a0465bfe</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The debug module is vulnerable to regular expression denial of service when untrusted user input is passed into the o formatter. It takes around 50k characters to block for 2 seconds making this a low severity issue. <p>Publish Date: 2018-06-07 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-16137>CVE-2017-16137</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-16137">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-16137</a></p> <p>Release Date: 2019-06-05</p> <p>Fix Resolution: 2.6.9,3.1.0</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"debug","packageVersion":"2.2.0","isTransitiveDependency":true,"dependencyTree":"express-session:1.11.3;debug:2.2.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.6.9,3.1.0"},{"packageType":"javascript/Node.js","packageName":"debug","packageVersion":"2.0.0","isTransitiveDependency":false,"dependencyTree":"debug:2.0.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.6.9,3.1.0"}],"vulnerabilityIdentifier":"CVE-2017-16137","vulnerabilityDetails":"The debug module is vulnerable to regular expression denial of service when untrusted user input is passed into the o formatter. It takes around 50k characters to block for 2 seconds making this a low severity issue.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-16137","cvss3Severity":"medium","cvss3Score":"5.3","cvss3Metrics":{"A":"Low","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
True
CVE-2017-16137 (Medium) detected in debug-2.2.0.tgz, debug-2.0.0.tgz - ## CVE-2017-16137 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>debug-2.2.0.tgz</b>, <b>debug-2.0.0.tgz</b></p></summary> <p> <details><summary><b>debug-2.2.0.tgz</b></p></summary> <p>small debugging utility</p> <p>Library home page: <a href="https://registry.npmjs.org/debug/-/debug-2.2.0.tgz">https://registry.npmjs.org/debug/-/debug-2.2.0.tgz</a></p> <p>Path to dependency file: /tmp/ws-scm/Spark-Twitter-Watson-Dashboard/package.json</p> <p>Path to vulnerable library: /tmp/ws-scm/Spark-Twitter-Watson-Dashboard/node_modules/express-session/node_modules/debug/package.json</p> <p> Dependency Hierarchy: - express-session-1.11.3.tgz (Root Library) - :x: **debug-2.2.0.tgz** (Vulnerable Library) </details> <details><summary><b>debug-2.0.0.tgz</b></p></summary> <p>small debugging utility</p> <p>Library home page: <a href="https://registry.npmjs.org/debug/-/debug-2.0.0.tgz">https://registry.npmjs.org/debug/-/debug-2.0.0.tgz</a></p> <p>Path to dependency file: /tmp/ws-scm/Spark-Twitter-Watson-Dashboard/package.json</p> <p>Path to vulnerable library: /Spark-Twitter-Watson-Dashboard/node_modules/debug/package.json</p> <p> Dependency Hierarchy: - :x: **debug-2.0.0.tgz** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/jgeraigery/Spark-Twitter-Watson-Dashboard/commit/c04488721f52b3d658e3974d5410a6d3a0465bfe">c04488721f52b3d658e3974d5410a6d3a0465bfe</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The debug module is vulnerable to regular expression denial of service when untrusted user input is passed into the o formatter. It takes around 50k characters to block for 2 seconds making this a low severity issue. <p>Publish Date: 2018-06-07 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-16137>CVE-2017-16137</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-16137">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-16137</a></p> <p>Release Date: 2019-06-05</p> <p>Fix Resolution: 2.6.9,3.1.0</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"debug","packageVersion":"2.2.0","isTransitiveDependency":true,"dependencyTree":"express-session:1.11.3;debug:2.2.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.6.9,3.1.0"},{"packageType":"javascript/Node.js","packageName":"debug","packageVersion":"2.0.0","isTransitiveDependency":false,"dependencyTree":"debug:2.0.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.6.9,3.1.0"}],"vulnerabilityIdentifier":"CVE-2017-16137","vulnerabilityDetails":"The debug module is vulnerable to regular expression denial of service when untrusted user input is passed into the o formatter. It takes around 50k characters to block for 2 seconds making this a low severity issue.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-16137","cvss3Severity":"medium","cvss3Score":"5.3","cvss3Metrics":{"A":"Low","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
non_test
cve medium detected in debug tgz debug tgz cve medium severity vulnerability vulnerable libraries debug tgz debug tgz debug tgz small debugging utility library home page a href path to dependency file tmp ws scm spark twitter watson dashboard package json path to vulnerable library tmp ws scm spark twitter watson dashboard node modules express session node modules debug package json dependency hierarchy express session tgz root library x debug tgz vulnerable library debug tgz small debugging utility library home page a href path to dependency file tmp ws scm spark twitter watson dashboard package json path to vulnerable library spark twitter watson dashboard node modules debug package json dependency hierarchy x debug tgz vulnerable library found in head commit a href vulnerability details the debug module is vulnerable to regular expression denial of service when untrusted user input is passed into the o formatter it takes around characters to block for seconds making this a low severity issue publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability true ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails the debug module is vulnerable to regular expression denial of service when untrusted user input is passed into the o formatter it takes around characters to block for seconds making this a low severity issue vulnerabilityurl
0
532,117
15,529,885,620
IssuesEvent
2021-03-13 16:56:01
joehot200/AntiAura
https://api.github.com/repos/joehot200/AntiAura
closed
11.84: Swimming through water with head poking flags Flight
bug high-priority
**Media:** https://youtu.be/nHHBQpOSNPE **AntiAura version:** 11.84 **Server version:** Paper version git-Paper-372 (MC: 1.15.2)
1.0
11.84: Swimming through water with head poking flags Flight - **Media:** https://youtu.be/nHHBQpOSNPE **AntiAura version:** 11.84 **Server version:** Paper version git-Paper-372 (MC: 1.15.2)
non_test
swimming through water with head poking flags flight media antiaura version server version paper version git paper mc
0
298,452
25,829,856,505
IssuesEvent
2022-12-12 15:23:19
HumanExposure/factotum
https://api.github.com/repos/HumanExposure/factotum
closed
Analyze entire test suite
testing
Currently, there are outdates tests in our test suite, and some may need updates. This is a ticket to test the entire test suite (every module) and analyze the results. To Do/Acceptance Criteria: - Run every test module (also MMDB) - Identify failing tests for each, and populate a table that includes (module, test name, failure point/assertion) information for each failure. If can be easily done, identify source of failure too.
1.0
Analyze entire test suite - Currently, there are outdates tests in our test suite, and some may need updates. This is a ticket to test the entire test suite (every module) and analyze the results. To Do/Acceptance Criteria: - Run every test module (also MMDB) - Identify failing tests for each, and populate a table that includes (module, test name, failure point/assertion) information for each failure. If can be easily done, identify source of failure too.
test
analyze entire test suite currently there are outdates tests in our test suite and some may need updates this is a ticket to test the entire test suite every module and analyze the results to do acceptance criteria run every test module also mmdb identify failing tests for each and populate a table that includes module test name failure point assertion information for each failure if can be easily done identify source of failure too
1
204,941
15,572,898,580
IssuesEvent
2021-03-17 07:48:46
DiSSCo/ELViS
https://api.github.com/repos/DiSSCo/ELViS
closed
TA request visible which is not assigned for my institution
bug resolved to test
#### Description In the list of requests assigned to my institution (NHMW), there is one TA request that is not assigned to NHMW, but for Senckenberg. Requests assigned to me: ![2021-02-18_ELViS_instmod_assignedrequests](https://user-images.githubusercontent.com/75616869/108408966-f41f9900-7225-11eb-80ba-48a71b1c2079.JPG) Details of TA request which is wrongly shown in my list: ![2021-02-18_ELViS_TA_JGrieb](https://user-images.githubusercontent.com/75616869/108409098-174a4880-7226-11eb-9498-27dd187c51b9.JPG)
1.0
TA request visible which is not assigned for my institution - #### Description In the list of requests assigned to my institution (NHMW), there is one TA request that is not assigned to NHMW, but for Senckenberg. Requests assigned to me: ![2021-02-18_ELViS_instmod_assignedrequests](https://user-images.githubusercontent.com/75616869/108408966-f41f9900-7225-11eb-80ba-48a71b1c2079.JPG) Details of TA request which is wrongly shown in my list: ![2021-02-18_ELViS_TA_JGrieb](https://user-images.githubusercontent.com/75616869/108409098-174a4880-7226-11eb-9498-27dd187c51b9.JPG)
test
ta request visible which is not assigned for my institution description in the list of requests assigned to my institution nhmw there is one ta request that is not assigned to nhmw but for senckenberg requests assigned to me details of ta request which is wrongly shown in my list
1
328,269
9,991,918,700
IssuesEvent
2019-07-11 12:19:54
ChainSafe/lodestar
https://api.github.com/repos/ChainSafe/lodestar
closed
Consolidate beacon node config
PR state: needs review priority: P4 nice-to-have type: refactor
- sort out config interfaces - separate config per each module - introduce config description - autogenerate config file and cli from config description
1.0
Consolidate beacon node config - - sort out config interfaces - separate config per each module - introduce config description - autogenerate config file and cli from config description
non_test
consolidate beacon node config sort out config interfaces separate config per each module introduce config description autogenerate config file and cli from config description
0
336,028
30,114,913,838
IssuesEvent
2023-06-30 10:39:30
gchq/sleeper
https://api.github.com/repos/gchq/sleeper
closed
Ingest task creation Lambda can time out in compaction performance test
bug system-test-module
The timeout we added for Lambda invocation doesn't seem to be working. We've seen a compaction performance test fail because the ingest task creator Lambda took very slightly longer than 30 seconds to run. There may be a separate timeout for the HTTP client as well as on the AWS Lambda client.
1.0
Ingest task creation Lambda can time out in compaction performance test - The timeout we added for Lambda invocation doesn't seem to be working. We've seen a compaction performance test fail because the ingest task creator Lambda took very slightly longer than 30 seconds to run. There may be a separate timeout for the HTTP client as well as on the AWS Lambda client.
test
ingest task creation lambda can time out in compaction performance test the timeout we added for lambda invocation doesn t seem to be working we ve seen a compaction performance test fail because the ingest task creator lambda took very slightly longer than seconds to run there may be a separate timeout for the http client as well as on the aws lambda client
1
217,305
16,849,449,650
IssuesEvent
2021-06-20 07:36:26
mgba-emu/mgba
https://api.github.com/repos/mgba-emu/mgba
closed
error mgba-qt
blocked:needs retest category:questionable
Hello, when I use the command: mgba-qt I have an error: `Qt: Session management error: None of the authentication protocols specified are supported libGL error: No matching fbConfigs or visuals found libGL error: failed to load driver: swrast libGL error: No matching fbConfigs or visuals found libGL error: failed to load driver: swrast mgba-qt: Couldn't find current GLX or EGL context. ` please help me
1.0
error mgba-qt - Hello, when I use the command: mgba-qt I have an error: `Qt: Session management error: None of the authentication protocols specified are supported libGL error: No matching fbConfigs or visuals found libGL error: failed to load driver: swrast libGL error: No matching fbConfigs or visuals found libGL error: failed to load driver: swrast mgba-qt: Couldn't find current GLX or EGL context. ` please help me
test
error mgba qt hello when i use the command mgba qt i have an error qt session management error none of the authentication protocols specified are supported libgl error no matching fbconfigs or visuals found libgl error failed to load driver swrast libgl error no matching fbconfigs or visuals found libgl error failed to load driver swrast mgba qt couldn t find current glx or egl context please help me
1
31,142
8,659,351,698
IssuesEvent
2018-11-28 05:40:57
ghdl/ghdl
https://api.github.com/repos/ghdl/ghdl
reopened
GHDL mingw64-llvm35 fails on AppVeyor
Backend: LLVM Build: MinGW (Makefile) CI: AppVeyor OS: Windows (MinGW)
**Description** The build for platform MinGW64 + LLVM 3.5 fails on AppVeyor. **Expected behavior** Do not fail :). **Context** ![image](https://user-images.githubusercontent.com/956109/48917277-3b235600-ee86-11e8-82c7-cc0f880f818c.png) - OS: Windows Server 2016 - Origin: Commit SHA: many latest commits **Additional context** MinGW64 has updated LLVM to another version, so LLVM 3.5 is not available anymore.
1.0
GHDL mingw64-llvm35 fails on AppVeyor - **Description** The build for platform MinGW64 + LLVM 3.5 fails on AppVeyor. **Expected behavior** Do not fail :). **Context** ![image](https://user-images.githubusercontent.com/956109/48917277-3b235600-ee86-11e8-82c7-cc0f880f818c.png) - OS: Windows Server 2016 - Origin: Commit SHA: many latest commits **Additional context** MinGW64 has updated LLVM to another version, so LLVM 3.5 is not available anymore.
non_test
ghdl fails on appveyor description the build for platform llvm fails on appveyor expected behavior do not fail context os windows server origin commit sha many latest commits additional context has updated llvm to another version so llvm is not available anymore
0
58,466
14,402,079,193
IssuesEvent
2020-12-03 14:30:12
hashicorp/packer
https://api.github.com/repos/hashicorp/packer
closed
packer-builder-qemu panic when disk_image=true and source image has no file extension
bug builder/qemu crash regression track-internal
#### Overview of the Issue `packer-builder-qemu` panic when `disk_image=true` and source image has no file extension Per the title. The input disk image is generated from a previous `packer` invocation. Packer does not add a disk format extension to its output file. However, as input, it appears a file extension is used as the basis for some feature detection. When there's no extension at all, it panics. #### Reproduction Steps Create a QEMU source with `disk_image = true` and an `iso_url` pointed at a file without an extension. ### Packer version ``` $ packer version Packer v1.6.4 ``` ### Operating system and Environment details MacOS 10.15.7 ### Log Fragments and crash.log files ``` ==> Some builds didn't complete successfully and had errors: --> qemu.cmvm: unexpected EOF ==> Builds finished but no artifacts were created. 2020/11/06 13:42:36 packer-builder-qemu plugin: panic: runtime error: slice bounds out of range [1:0] 2020/11/06 13:42:36 packer-builder-qemu plugin: 2020/11/06 13:42:36 packer-builder-qemu plugin: goroutine 41 [running]: 2020/11/06 13:42:36 packer-builder-qemu plugin: github.com/hashicorp/packer/builder/qemu.(*stepCopyDisk).Run(0xc000ce98c0, 0x8d1fea0, 0xc00066b340, 0x8d20aa0, 0xc000d635f0, 0x0) 2020/11/06 13:42:36 packer-builder-qemu plugin: github.com/hashicorp/packer/builder/qemu/step_copy_disk.go:36 +0x850 2020/11/06 13:42:36 packer-builder-qemu plugin: github.com/hashicorp/packer/common.askStep.Run(0x8ce14e0, 0xc000ce98c0, 0x8d3a380, 0xc000d63440, 0x8d1fea0, 0xc00066b340, 0x8d20aa0, 0xc000d635f0, 0x8838558) 2020/11/06 13:42:36 packer-builder-qemu plugin: github.com/hashicorp/packer/common/multistep_runner.go:109 +0x69 2020/11/06 13:42:36 packer-builder-qemu plugin: github.com/hashicorp/packer/helper/multistep.(*BasicRunner).Run(0xc000d63650, 0x8d1fea0, 0xc00066b340, 0x8d20aa0, 0xc000d635f0) 2020/11/06 13:42:36 packer-builder-qemu plugin: github.com/hashicorp/packer/helper/multistep/basic_runner.go:67 +0x21c 2020/11/06 13:42:36 packer-builder-qemu plugin: github.com/hashicorp/packer/builder/qemu.(*Builder).Run(0xc0005fdc00, 0x8d1fea0, 0xc00066b340, 0x8d3a380, 0xc000d63440, 0x8ca3200, 0xc000e67020, 0x43bcf65, 0xc0001421e0, 0xc0000aabd0, ...) 2020/11/06 13:42:36 packer-builder-qemu plugin: github.com/hashicorp/packer/builder/qemu/builder.go:154 +0x1142 2020/11/06 13:42:36 packer-builder-qemu plugin: github.com/hashicorp/packer/packer/rpc.(*BuilderServer).Run(0xc000708200, 0x1, 0xc000135dc0, 0x0, 0x0) 2020/11/06 13:42:36 packer-builder-qemu plugin: github.com/hashicorp/packer/packer/rpc/builder.go:117 +0x1c4 2020/11/06 13:42:36 packer-builder-qemu plugin: reflect.Value.call(0xc0005b0480, 0xc000624118, 0x13, 0x86f3ebd, 0x4, 0xc0000aaf08, 0x3, 0x3, 0x8838340, 0xc000089000, ...) 2020/11/06 13:42:36 packer-builder-qemu plugin: reflect/value.go:475 +0x8c7 2020/11/06 13:42:36 packer-builder-qemu plugin: reflect.Value.Call(0xc0005b0480, 0xc000624118, 0x13, 0xc000d0e708, 0x3, 0x3, 0xc00072d338, 0xc0006d6dd0, 0x40650a0) 2020/11/06 13:42:36 packer-builder-qemu plugin: reflect/value.go:336 +0xb9 2020/11/06 13:42:36 packer-builder-qemu plugin: net/rpc.(*service).call(0xc000708240, 0xc0001500a0, 0xc000580310, 0xc000580320, 0xc0003ec200, 0xc00000ec60, 0x78bbc00, 0xc000135aec, 0x18a, 0x77cb860, ...) 2020/11/06 13:42:36 packer-builder-qemu plugin: net/rpc/server.go:377 +0x189 2020/11/06 13:42:36 packer-builder-qemu plugin: created by net/rpc.(*Server).ServeCodec 2020/11/06 13:42:36 packer-builder-qemu plugin: net/rpc/server.go:474 +0x445 ```
1.0
packer-builder-qemu panic when disk_image=true and source image has no file extension - #### Overview of the Issue `packer-builder-qemu` panic when `disk_image=true` and source image has no file extension Per the title. The input disk image is generated from a previous `packer` invocation. Packer does not add a disk format extension to its output file. However, as input, it appears a file extension is used as the basis for some feature detection. When there's no extension at all, it panics. #### Reproduction Steps Create a QEMU source with `disk_image = true` and an `iso_url` pointed at a file without an extension. ### Packer version ``` $ packer version Packer v1.6.4 ``` ### Operating system and Environment details MacOS 10.15.7 ### Log Fragments and crash.log files ``` ==> Some builds didn't complete successfully and had errors: --> qemu.cmvm: unexpected EOF ==> Builds finished but no artifacts were created. 2020/11/06 13:42:36 packer-builder-qemu plugin: panic: runtime error: slice bounds out of range [1:0] 2020/11/06 13:42:36 packer-builder-qemu plugin: 2020/11/06 13:42:36 packer-builder-qemu plugin: goroutine 41 [running]: 2020/11/06 13:42:36 packer-builder-qemu plugin: github.com/hashicorp/packer/builder/qemu.(*stepCopyDisk).Run(0xc000ce98c0, 0x8d1fea0, 0xc00066b340, 0x8d20aa0, 0xc000d635f0, 0x0) 2020/11/06 13:42:36 packer-builder-qemu plugin: github.com/hashicorp/packer/builder/qemu/step_copy_disk.go:36 +0x850 2020/11/06 13:42:36 packer-builder-qemu plugin: github.com/hashicorp/packer/common.askStep.Run(0x8ce14e0, 0xc000ce98c0, 0x8d3a380, 0xc000d63440, 0x8d1fea0, 0xc00066b340, 0x8d20aa0, 0xc000d635f0, 0x8838558) 2020/11/06 13:42:36 packer-builder-qemu plugin: github.com/hashicorp/packer/common/multistep_runner.go:109 +0x69 2020/11/06 13:42:36 packer-builder-qemu plugin: github.com/hashicorp/packer/helper/multistep.(*BasicRunner).Run(0xc000d63650, 0x8d1fea0, 0xc00066b340, 0x8d20aa0, 0xc000d635f0) 2020/11/06 13:42:36 packer-builder-qemu plugin: github.com/hashicorp/packer/helper/multistep/basic_runner.go:67 +0x21c 2020/11/06 13:42:36 packer-builder-qemu plugin: github.com/hashicorp/packer/builder/qemu.(*Builder).Run(0xc0005fdc00, 0x8d1fea0, 0xc00066b340, 0x8d3a380, 0xc000d63440, 0x8ca3200, 0xc000e67020, 0x43bcf65, 0xc0001421e0, 0xc0000aabd0, ...) 2020/11/06 13:42:36 packer-builder-qemu plugin: github.com/hashicorp/packer/builder/qemu/builder.go:154 +0x1142 2020/11/06 13:42:36 packer-builder-qemu plugin: github.com/hashicorp/packer/packer/rpc.(*BuilderServer).Run(0xc000708200, 0x1, 0xc000135dc0, 0x0, 0x0) 2020/11/06 13:42:36 packer-builder-qemu plugin: github.com/hashicorp/packer/packer/rpc/builder.go:117 +0x1c4 2020/11/06 13:42:36 packer-builder-qemu plugin: reflect.Value.call(0xc0005b0480, 0xc000624118, 0x13, 0x86f3ebd, 0x4, 0xc0000aaf08, 0x3, 0x3, 0x8838340, 0xc000089000, ...) 2020/11/06 13:42:36 packer-builder-qemu plugin: reflect/value.go:475 +0x8c7 2020/11/06 13:42:36 packer-builder-qemu plugin: reflect.Value.Call(0xc0005b0480, 0xc000624118, 0x13, 0xc000d0e708, 0x3, 0x3, 0xc00072d338, 0xc0006d6dd0, 0x40650a0) 2020/11/06 13:42:36 packer-builder-qemu plugin: reflect/value.go:336 +0xb9 2020/11/06 13:42:36 packer-builder-qemu plugin: net/rpc.(*service).call(0xc000708240, 0xc0001500a0, 0xc000580310, 0xc000580320, 0xc0003ec200, 0xc00000ec60, 0x78bbc00, 0xc000135aec, 0x18a, 0x77cb860, ...) 2020/11/06 13:42:36 packer-builder-qemu plugin: net/rpc/server.go:377 +0x189 2020/11/06 13:42:36 packer-builder-qemu plugin: created by net/rpc.(*Server).ServeCodec 2020/11/06 13:42:36 packer-builder-qemu plugin: net/rpc/server.go:474 +0x445 ```
non_test
packer builder qemu panic when disk image true and source image has no file extension overview of the issue packer builder qemu panic when disk image true and source image has no file extension per the title the input disk image is generated from a previous packer invocation packer does not add a disk format extension to its output file however as input it appears a file extension is used as the basis for some feature detection when there s no extension at all it panics reproduction steps create a qemu source with disk image true and an iso url pointed at a file without an extension packer version packer version packer operating system and environment details macos log fragments and crash log files some builds didn t complete successfully and had errors qemu cmvm unexpected eof builds finished but no artifacts were created packer builder qemu plugin panic runtime error slice bounds out of range packer builder qemu plugin packer builder qemu plugin goroutine packer builder qemu plugin github com hashicorp packer builder qemu stepcopydisk run packer builder qemu plugin github com hashicorp packer builder qemu step copy disk go packer builder qemu plugin github com hashicorp packer common askstep run packer builder qemu plugin github com hashicorp packer common multistep runner go packer builder qemu plugin github com hashicorp packer helper multistep basicrunner run packer builder qemu plugin github com hashicorp packer helper multistep basic runner go packer builder qemu plugin github com hashicorp packer builder qemu builder run packer builder qemu plugin github com hashicorp packer builder qemu builder go packer builder qemu plugin github com hashicorp packer packer rpc builderserver run packer builder qemu plugin github com hashicorp packer packer rpc builder go packer builder qemu plugin reflect value call packer builder qemu plugin reflect value go packer builder qemu plugin reflect value call packer builder qemu plugin reflect value go packer builder qemu plugin net rpc service call packer builder qemu plugin net rpc server go packer builder qemu plugin created by net rpc server servecodec packer builder qemu plugin net rpc server go
0
189,702
15,193,414,978
IssuesEvent
2021-02-16 00:38:30
terminusdb/terminusdb
https://api.github.com/repos/terminusdb/terminusdb
closed
How-To: use RegEx in TerminusDB?
documentation
How do I use RegEx? Workarounds or complex answers are fine
1.0
How-To: use RegEx in TerminusDB? - How do I use RegEx? Workarounds or complex answers are fine
non_test
how to use regex in terminusdb how do i use regex workarounds or complex answers are fine
0
78,489
7,644,689,086
IssuesEvent
2018-05-08 16:10:58
CuBoulder/express
https://api.github.com/repos/CuBoulder/express
closed
Switch Logic From Import DB To Build Site
3.0:Alex:Testing
To properly use the commit flags, a developer has to know when they are writing code that affects site installs. With contrib module updates and the like, it would be better to only import the db when `!===build` is in a commit message. Also, the debugging printed lines should be cleaned up, and the scripts separated visually since Travis collapses the out of them to one line.
1.0
Switch Logic From Import DB To Build Site - To properly use the commit flags, a developer has to know when they are writing code that affects site installs. With contrib module updates and the like, it would be better to only import the db when `!===build` is in a commit message. Also, the debugging printed lines should be cleaned up, and the scripts separated visually since Travis collapses the out of them to one line.
test
switch logic from import db to build site to properly use the commit flags a developer has to know when they are writing code that affects site installs with contrib module updates and the like it would be better to only import the db when build is in a commit message also the debugging printed lines should be cleaned up and the scripts separated visually since travis collapses the out of them to one line
1
126,222
16,992,249,463
IssuesEvent
2021-06-30 22:30:23
Kaiserreich/Kaiserreich-4
https://api.github.com/repos/Kaiserreich/Kaiserreich-4
closed
South African People Union's Army focus tree locked
Bug Working as Designed
**Quick questions** OS: Window 10 HOI4 version: Collie 0.10.7 Kaiserreich version: 0.17.1a List any other mods used: PLPC, Toolbox 2021 Were you using Steam? -Yes Were you in multiplayer? - No Which expansions do you NOT have? All of them - except Death or Dishonor **Explanation of the issue:** SAF - Syndicalist events for British and French advisors set wrong flags, locking out the army branches **Steps to reproduce:** 1. Complete Focus Third Internationale Advisors, wait 7 days for the event saf.240 2. Choose either "Devil we know", or "Devil we don't" **Possible cause:** The event sets a different flag from the unlocking flags for the army branch **Screenshots:** <img width="960" alt="2021-06-30 (1)" src="https://user-images.githubusercontent.com/63805017/124024498-e5f03f00-d9b4-11eb-9749-eea16922d003.png"> <img width="960" alt="2021-06-30" src="https://user-images.githubusercontent.com/63805017/124024533-f30d2e00-d9b4-11eb-9a0e-28d574732eef.png">
1.0
South African People Union's Army focus tree locked - **Quick questions** OS: Window 10 HOI4 version: Collie 0.10.7 Kaiserreich version: 0.17.1a List any other mods used: PLPC, Toolbox 2021 Were you using Steam? -Yes Were you in multiplayer? - No Which expansions do you NOT have? All of them - except Death or Dishonor **Explanation of the issue:** SAF - Syndicalist events for British and French advisors set wrong flags, locking out the army branches **Steps to reproduce:** 1. Complete Focus Third Internationale Advisors, wait 7 days for the event saf.240 2. Choose either "Devil we know", or "Devil we don't" **Possible cause:** The event sets a different flag from the unlocking flags for the army branch **Screenshots:** <img width="960" alt="2021-06-30 (1)" src="https://user-images.githubusercontent.com/63805017/124024498-e5f03f00-d9b4-11eb-9749-eea16922d003.png"> <img width="960" alt="2021-06-30" src="https://user-images.githubusercontent.com/63805017/124024533-f30d2e00-d9b4-11eb-9a0e-28d574732eef.png">
non_test
south african people union s army focus tree locked quick questions os window version collie kaiserreich version list any other mods used plpc toolbox were you using steam yes were you in multiplayer no which expansions do you not have all of them except death or dishonor explanation of the issue saf syndicalist events for british and french advisors set wrong flags locking out the army branches steps to reproduce complete focus third internationale advisors wait days for the event saf choose either devil we know or devil we don t possible cause the event sets a different flag from the unlocking flags for the army branch screenshots img width alt src img width alt src
0
51,743
6,195,420,734
IssuesEvent
2017-07-05 12:35:25
Fourdee/DietPi
https://api.github.com/repos/Fourdee/DietPi
reopened
Image Request | Native PC x86_64: eg Intel NUC or Gigabyte Brix
Image Request Testing/testers required
First of all thank you for making DietPi, it is fantastic and makes life much easier dealing with single board computers. So much so, I'm wondering if I can use it with x86 barebones like https://www.gigabyte.com/Mini-PcBarebone Brix? I especially like the ability to just flash an image onto a disk and have it auto resize the partition and set everything else up automatically on first login.
2.0
Image Request | Native PC x86_64: eg Intel NUC or Gigabyte Brix - First of all thank you for making DietPi, it is fantastic and makes life much easier dealing with single board computers. So much so, I'm wondering if I can use it with x86 barebones like https://www.gigabyte.com/Mini-PcBarebone Brix? I especially like the ability to just flash an image onto a disk and have it auto resize the partition and set everything else up automatically on first login.
test
image request native pc eg intel nuc or gigabyte brix first of all thank you for making dietpi it is fantastic and makes life much easier dealing with single board computers so much so i m wondering if i can use it with barebones like brix i especially like the ability to just flash an image onto a disk and have it auto resize the partition and set everything else up automatically on first login
1
321,226
27,516,142,061
IssuesEvent
2023-03-06 12:07:12
PalisadoesFoundation/talawa-api
https://api.github.com/repos/PalisadoesFoundation/talawa-api
closed
Remove the conditional check `process.env !== "PRODUCTION"` for `requestContext.translate()` in code files.
discussion feature request parent wip test
**Is your feature request related to a problem? Please describe.** The whole idea of checking `process.env !== "PRODUCTION"` in places where `requestContext.translate()` function was being used was to make it not error out when testing the code that utilised that function. This was because `requestContext.translate()` would only properly work under the context of a running api server. But as the recent test changes have shown, `requestContext.translate()` can be mocked while testing a particular module/file and therefore the check `process.env !== "PRODUCTION"` no longer needs to exist. **Describe the solution you'd like** All the places in code where the check `process.env !== "PRODUCTION"` is made should be removed and `requestContext.translate()` should be mocked when testing the code utilising that function. This also means all the test code that doesn't mock `requestContext.translate()` should be removed/replaced with the one that does mock it. Two identical suites of tests running for the same logic should not exist when there's no need. **Describe alternatives you've considered** - **Approach to be followed (optional)** 1. Removed all instances of the check `process.env !== "PRODUCTION"` and conditional usage of `requestContext.translate()` function depending on that check from code files. 2. Migrate/refactor tests to test code files using the mocked version of `requestContext.translate()`. All test code depending on the check `process.env !== "PRODUCTION"` should be removed. **Additional context** I can't think of any reason for which this would affect talawa-api negatively. Leave your opinions/thoughts on this.
1.0
Remove the conditional check `process.env !== "PRODUCTION"` for `requestContext.translate()` in code files. - **Is your feature request related to a problem? Please describe.** The whole idea of checking `process.env !== "PRODUCTION"` in places where `requestContext.translate()` function was being used was to make it not error out when testing the code that utilised that function. This was because `requestContext.translate()` would only properly work under the context of a running api server. But as the recent test changes have shown, `requestContext.translate()` can be mocked while testing a particular module/file and therefore the check `process.env !== "PRODUCTION"` no longer needs to exist. **Describe the solution you'd like** All the places in code where the check `process.env !== "PRODUCTION"` is made should be removed and `requestContext.translate()` should be mocked when testing the code utilising that function. This also means all the test code that doesn't mock `requestContext.translate()` should be removed/replaced with the one that does mock it. Two identical suites of tests running for the same logic should not exist when there's no need. **Describe alternatives you've considered** - **Approach to be followed (optional)** 1. Removed all instances of the check `process.env !== "PRODUCTION"` and conditional usage of `requestContext.translate()` function depending on that check from code files. 2. Migrate/refactor tests to test code files using the mocked version of `requestContext.translate()`. All test code depending on the check `process.env !== "PRODUCTION"` should be removed. **Additional context** I can't think of any reason for which this would affect talawa-api negatively. Leave your opinions/thoughts on this.
test
remove the conditional check process env production for requestcontext translate in code files is your feature request related to a problem please describe the whole idea of checking process env production in places where requestcontext translate function was being used was to make it not error out when testing the code that utilised that function this was because requestcontext translate would only properly work under the context of a running api server but as the recent test changes have shown requestcontext translate can be mocked while testing a particular module file and therefore the check process env production no longer needs to exist describe the solution you d like all the places in code where the check process env production is made should be removed and requestcontext translate should be mocked when testing the code utilising that function this also means all the test code that doesn t mock requestcontext translate should be removed replaced with the one that does mock it two identical suites of tests running for the same logic should not exist when there s no need describe alternatives you ve considered approach to be followed optional removed all instances of the check process env production and conditional usage of requestcontext translate function depending on that check from code files migrate refactor tests to test code files using the mocked version of requestcontext translate all test code depending on the check process env production should be removed additional context i can t think of any reason for which this would affect talawa api negatively leave your opinions thoughts on this
1
217,451
16,855,779,716
IssuesEvent
2021-06-21 06:25:20
tikv/tikv
https://api.github.com/repos/tikv/tikv
closed
raftstore::test_region_change_observer::test_region_change_observer failed
component/test-bench
raftstore::test_region_change_observer::test_region_change_observer Latest failed builds: https://internal.pingcap.net/idc-jenkins/job/tikv_ghpr_test/20938/consoleFull
1.0
raftstore::test_region_change_observer::test_region_change_observer failed - raftstore::test_region_change_observer::test_region_change_observer Latest failed builds: https://internal.pingcap.net/idc-jenkins/job/tikv_ghpr_test/20938/consoleFull
test
raftstore test region change observer test region change observer failed raftstore test region change observer test region change observer latest failed builds
1
831,199
32,040,898,656
IssuesEvent
2023-09-22 19:13:58
HydrologicEngineeringCenter/HEC-FDA
https://api.github.com/repos/HydrologicEngineeringCenter/HEC-FDA
closed
Total Risk = Breach Risk + Non-Breach Risk
enhancement PRIORITY
For impact areas with levees, we need a way to calculate total risk, which is the sum of breach risk and non-breach risk, but the HEC-FDA methodology assumes that non-breach risk is zero. Why this matters: there are plenty of cases where interior drainage issues result in damage before the levee is overtopped. Take for example a likely flood scenario in Sacramento from the American River. I might have the details wrong but the premise is correct. The left bank of the river just north-east of midtown is expected to be the first place to overtop. The flooding would inundate south sac and pocket, while the pocket and south land park levees may still be in tact. In the current Freeport example, a levee is expected to be flanked before it is overtopped. When interior drainage issues have been serious, users find a way to use our square peg to fit their round hole. Instead, we need to do this calculation internally. This will involve accepting a second pair of stage-damage functions as part of the scenario compute, so that we can do breach-scenario and non-breach scenario. We'll need to know if the user plans to model non-breach risk (and make sure to communicate in the results). Interior-exterior functions are possible for both scenarios. Interior-exterior functions simplify the analysis by not requiring additional hydraulic and stage-damage computes (although some hydraulic analysis must occur to develop the interior-exterior function). The change to the computational code is straightforward. All existing code should stay the same. We add logic for the scenario where a user wants to model non-breach risk, and if so, apply total probability.
1.0
Total Risk = Breach Risk + Non-Breach Risk - For impact areas with levees, we need a way to calculate total risk, which is the sum of breach risk and non-breach risk, but the HEC-FDA methodology assumes that non-breach risk is zero. Why this matters: there are plenty of cases where interior drainage issues result in damage before the levee is overtopped. Take for example a likely flood scenario in Sacramento from the American River. I might have the details wrong but the premise is correct. The left bank of the river just north-east of midtown is expected to be the first place to overtop. The flooding would inundate south sac and pocket, while the pocket and south land park levees may still be in tact. In the current Freeport example, a levee is expected to be flanked before it is overtopped. When interior drainage issues have been serious, users find a way to use our square peg to fit their round hole. Instead, we need to do this calculation internally. This will involve accepting a second pair of stage-damage functions as part of the scenario compute, so that we can do breach-scenario and non-breach scenario. We'll need to know if the user plans to model non-breach risk (and make sure to communicate in the results). Interior-exterior functions are possible for both scenarios. Interior-exterior functions simplify the analysis by not requiring additional hydraulic and stage-damage computes (although some hydraulic analysis must occur to develop the interior-exterior function). The change to the computational code is straightforward. All existing code should stay the same. We add logic for the scenario where a user wants to model non-breach risk, and if so, apply total probability.
non_test
total risk breach risk non breach risk for impact areas with levees we need a way to calculate total risk which is the sum of breach risk and non breach risk but the hec fda methodology assumes that non breach risk is zero why this matters there are plenty of cases where interior drainage issues result in damage before the levee is overtopped take for example a likely flood scenario in sacramento from the american river i might have the details wrong but the premise is correct the left bank of the river just north east of midtown is expected to be the first place to overtop the flooding would inundate south sac and pocket while the pocket and south land park levees may still be in tact in the current freeport example a levee is expected to be flanked before it is overtopped when interior drainage issues have been serious users find a way to use our square peg to fit their round hole instead we need to do this calculation internally this will involve accepting a second pair of stage damage functions as part of the scenario compute so that we can do breach scenario and non breach scenario we ll need to know if the user plans to model non breach risk and make sure to communicate in the results interior exterior functions are possible for both scenarios interior exterior functions simplify the analysis by not requiring additional hydraulic and stage damage computes although some hydraulic analysis must occur to develop the interior exterior function the change to the computational code is straightforward all existing code should stay the same we add logic for the scenario where a user wants to model non breach risk and if so apply total probability
0
92,309
8,359,306,108
IssuesEvent
2018-10-03 07:46:29
GoogleContainerTools/skaffold
https://api.github.com/repos/GoogleContainerTools/skaffold
closed
Run container-structure-test after build phase
area/testing feature-request
We should make it possible to run [container-structure-test](https://github.com/GoogleContainerTools/container-structure-test) after an image is built. Thanks @jlandure for thew idea.
1.0
Run container-structure-test after build phase - We should make it possible to run [container-structure-test](https://github.com/GoogleContainerTools/container-structure-test) after an image is built. Thanks @jlandure for thew idea.
test
run container structure test after build phase we should make it possible to run after an image is built thanks jlandure for thew idea
1
311,968
9,541,131,222
IssuesEvent
2019-04-30 21:24:42
processing/p5.js-web-editor
https://api.github.com/repos/processing/p5.js-web-editor
closed
Focus on choosing option in find menu
priority:low type:bug
#### Nature of issue? - Found a bug #### Details about the bug: Clicking on search specifiers causes the editor lose focus from the search box. Most of the editors retain focus in the search box on clicking these specifiers (eg. VS Code). We can have that too. @catarak what are your views on it. ![Screenshot (5)](https://user-images.githubusercontent.com/37630020/56083802-8e06a580-5e47-11e9-8480-8125f6d48efd.png)
1.0
Focus on choosing option in find menu - #### Nature of issue? - Found a bug #### Details about the bug: Clicking on search specifiers causes the editor lose focus from the search box. Most of the editors retain focus in the search box on clicking these specifiers (eg. VS Code). We can have that too. @catarak what are your views on it. ![Screenshot (5)](https://user-images.githubusercontent.com/37630020/56083802-8e06a580-5e47-11e9-8480-8125f6d48efd.png)
non_test
focus on choosing option in find menu nature of issue found a bug details about the bug clicking on search specifiers causes the editor lose focus from the search box most of the editors retain focus in the search box on clicking these specifiers eg vs code we can have that too catarak what are your views on it
0
159,743
12,489,510,858
IssuesEvent
2020-05-31 19:11:01
google-research/evoflow
https://api.github.com/repos/google-research/evoflow
opened
Move release to be done when commit are tagged
type:testing
Currently we push the new version at each commit -- move the release to only specific tags and ensure we have also the release listed in github release
1.0
Move release to be done when commit are tagged - Currently we push the new version at each commit -- move the release to only specific tags and ensure we have also the release listed in github release
test
move release to be done when commit are tagged currently we push the new version at each commit move the release to only specific tags and ensure we have also the release listed in github release
1
448,376
12,949,180,960
IssuesEvent
2020-07-19 08:01:49
mayajs/maya
https://api.github.com/repos/mayajs/maya
closed
🐞 Invalid class names generated after `maya generate <type> <name>` command
Priority: Medium Type: Bug
## 📝 Describe the bug Invalid class names generated after `maya generate <type> <name>` command **To Reproduce** Steps to reproduce the behavior: 1. Create a maya project using the command: `maya new <project-name>` 2. Create a controller inside the project using the command: `maya g r <dash-seperated-route-name>` 3. View the generated controller and service 4. See error **Expected behavior** Generate class names with pascal casing ## 📷 Screenshots ![image](https://user-images.githubusercontent.com/62224113/77736319-08273e80-7047-11ea-9ac9-764b3620d4e5.png) ## ⚙️ Component maya/cli
1.0
🐞 Invalid class names generated after `maya generate <type> <name>` command - ## 📝 Describe the bug Invalid class names generated after `maya generate <type> <name>` command **To Reproduce** Steps to reproduce the behavior: 1. Create a maya project using the command: `maya new <project-name>` 2. Create a controller inside the project using the command: `maya g r <dash-seperated-route-name>` 3. View the generated controller and service 4. See error **Expected behavior** Generate class names with pascal casing ## 📷 Screenshots ![image](https://user-images.githubusercontent.com/62224113/77736319-08273e80-7047-11ea-9ac9-764b3620d4e5.png) ## ⚙️ Component maya/cli
non_test
🐞 invalid class names generated after maya generate command 📝 describe the bug invalid class names generated after maya generate command to reproduce steps to reproduce the behavior create a maya project using the command maya new create a controller inside the project using the command maya g r view the generated controller and service see error expected behavior generate class names with pascal casing 📷 screenshots ⚙️ component maya cli
0
396,850
27,138,764,271
IssuesEvent
2023-02-16 14:59:27
learntocloud/cloud-dictionary
https://api.github.com/repos/learntocloud/cloud-dictionary
closed
Submit a Cloud Definition
documentation good first issue
Do NOT copy/paste a definition from somewhere else. Read about the word you want to define and come up with your own definition. Copy/Paste submissions will be closed and not added. Fill out the JSON with your submission: ```json { "word": "Block Storage", "content": "Block storage is a technology that is used to store data files on cloud-based storage environments. It involves breaking up data into blocks and then storing those blocks as separate individual pieces, each with a unique identifier. This makes it very fast, efficient and reliable in data retention and transportation", "learn_more_URL":"https://www.ibm.com/topics/block-storage", "tag":"storage", "abbreviation": "", "author_name":"Ochu Williams", "author_link": "https://github.com/WilliamsOchu" } ``` Fill out the JSON below with the following. ### Word (REQUIRED) The word you are defining. Check [this URL](https://zealous-flower-0f27b070f.2.azurestaticapps.net/) for all words we currently have. #### Content (REQUIRED) The definition. No more than 3 sentences. ### learn more URL (REQUIRED) Website where people can visit to learn more about the word. ### tag (REQUIRED and select one) Tech category the word fits in. Options: - compute - security - service - general - analytics - developer tool - web - networking - database - storage - devops - ai/ml - identity - iot - monitoring - cost management - disaster recovery ### abbreviation (OPTIONAL) If the word is commonly abbreviated, please provide it. For example, command line interface is often abbreviated as CLI. ### author name (REQUIRED) Your name. ### author link (REQUIRED) The URL you want your name to link to.
1.0
Submit a Cloud Definition - Do NOT copy/paste a definition from somewhere else. Read about the word you want to define and come up with your own definition. Copy/Paste submissions will be closed and not added. Fill out the JSON with your submission: ```json { "word": "Block Storage", "content": "Block storage is a technology that is used to store data files on cloud-based storage environments. It involves breaking up data into blocks and then storing those blocks as separate individual pieces, each with a unique identifier. This makes it very fast, efficient and reliable in data retention and transportation", "learn_more_URL":"https://www.ibm.com/topics/block-storage", "tag":"storage", "abbreviation": "", "author_name":"Ochu Williams", "author_link": "https://github.com/WilliamsOchu" } ``` Fill out the JSON below with the following. ### Word (REQUIRED) The word you are defining. Check [this URL](https://zealous-flower-0f27b070f.2.azurestaticapps.net/) for all words we currently have. #### Content (REQUIRED) The definition. No more than 3 sentences. ### learn more URL (REQUIRED) Website where people can visit to learn more about the word. ### tag (REQUIRED and select one) Tech category the word fits in. Options: - compute - security - service - general - analytics - developer tool - web - networking - database - storage - devops - ai/ml - identity - iot - monitoring - cost management - disaster recovery ### abbreviation (OPTIONAL) If the word is commonly abbreviated, please provide it. For example, command line interface is often abbreviated as CLI. ### author name (REQUIRED) Your name. ### author link (REQUIRED) The URL you want your name to link to.
non_test
submit a cloud definition do not copy paste a definition from somewhere else read about the word you want to define and come up with your own definition copy paste submissions will be closed and not added fill out the json with your submission json word block storage content block storage is a technology that is used to store data files on cloud based storage environments it involves breaking up data into blocks and then storing those blocks as separate individual pieces each with a unique identifier this makes it very fast efficient and reliable in data retention and transportation learn more url tag storage abbreviation author name ochu williams author link fill out the json below with the following word required the word you are defining check for all words we currently have content required the definition no more than sentences learn more url required website where people can visit to learn more about the word tag required and select one tech category the word fits in options compute security service general analytics developer tool web networking database storage devops ai ml identity iot monitoring cost management disaster recovery abbreviation optional if the word is commonly abbreviated please provide it for example command line interface is often abbreviated as cli author name required your name author link required the url you want your name to link to
0
322,263
27,592,339,653
IssuesEvent
2023-03-09 01:57:13
ashthomasweb/codestasher
https://api.github.com/repos/ashthomasweb/codestasher
opened
Create mock firestore for testing
testing
Mock database calls to test user entry and update functions.
1.0
Create mock firestore for testing - Mock database calls to test user entry and update functions.
test
create mock firestore for testing mock database calls to test user entry and update functions
1
282,111
24,450,076,192
IssuesEvent
2022-10-06 21:50:09
quarkusio/quarkus
https://api.github.com/repos/quarkusio/quarkus
closed
Upgrade quarkus-arquillian to use JUnit 5
area/testing area/housekeeping
### Description The quarkus-arquillian extension is still using the old JUnit 4 framework. Most usages are under the tcks. It's getting outdated because It seems that MP is trying to move towards Junit, starting that after MP 6.0. Also, MP Telemetry tracing is based on JUnit 5. ### Implementation ideas _No response_
1.0
Upgrade quarkus-arquillian to use JUnit 5 - ### Description The quarkus-arquillian extension is still using the old JUnit 4 framework. Most usages are under the tcks. It's getting outdated because It seems that MP is trying to move towards Junit, starting that after MP 6.0. Also, MP Telemetry tracing is based on JUnit 5. ### Implementation ideas _No response_
test
upgrade quarkus arquillian to use junit description the quarkus arquillian extension is still using the old junit framework most usages are under the tcks it s getting outdated because it seems that mp is trying to move towards junit starting that after mp also mp telemetry tracing is based on junit implementation ideas no response
1
78,879
10,092,484,550
IssuesEvent
2019-07-26 16:48:40
RHUL-CS-Projects/graphfellow
https://api.github.com/repos/RHUL-CS-Projects/graphfellow
closed
make good regexp FA example
documentation enhancement
This was one of the original incentives for this project; need to add example code for populating the accumulated string (in an external DOM element). Had this working in the prototype.
1.0
make good regexp FA example - This was one of the original incentives for this project; need to add example code for populating the accumulated string (in an external DOM element). Had this working in the prototype.
non_test
make good regexp fa example this was one of the original incentives for this project need to add example code for populating the accumulated string in an external dom element had this working in the prototype
0
19,675
10,416,001,895
IssuesEvent
2019-09-14 09:29:47
tomkerkhove/promitor
https://api.github.com/repos/tomkerkhove/promitor
closed
Microsoft Security Advisory CVE-2018-8269: Denial of Service Vulnerability in OData
security
Microsoft Security Advisory CVE-2018-8269: Denial of Service Vulnerability in OData ## Vulnerability Information https://github.com/aspnet/Announcements/issues/385 ## Vulnerability Fix Upgrade to .NET Core 2.2.7
True
Microsoft Security Advisory CVE-2018-8269: Denial of Service Vulnerability in OData - Microsoft Security Advisory CVE-2018-8269: Denial of Service Vulnerability in OData ## Vulnerability Information https://github.com/aspnet/Announcements/issues/385 ## Vulnerability Fix Upgrade to .NET Core 2.2.7
non_test
microsoft security advisory cve denial of service vulnerability in odata microsoft security advisory cve denial of service vulnerability in odata vulnerability information vulnerability fix upgrade to net core
0
443,081
12,759,400,742
IssuesEvent
2020-06-29 05:43:43
wso2/product-apim
https://api.github.com/repos/wso2/product-apim
opened
Invalidate gateway cache when blocking subscriptions while using Opaque tokens
Priority/Normal Type/Improvement
### Describe your problem(s) When Opaque tokens are used to invoke APIs, they are cached in the Gateway. Therefore if a subscription is blocked from the Publisher portal, the blocking doesn't take effect immediately until the cache is expired. ### Describe your solution When JWT tokens are used and a subscription is blocked from the Publisher portal, the blocking takes effect immediately. It is suggested to implement this behavior for the Opaque tokens as well in away that gateway cache is invalidated when a subscription is blocked. #### Suggested Labels: 3.0.0 3.1.0
1.0
Invalidate gateway cache when blocking subscriptions while using Opaque tokens - ### Describe your problem(s) When Opaque tokens are used to invoke APIs, they are cached in the Gateway. Therefore if a subscription is blocked from the Publisher portal, the blocking doesn't take effect immediately until the cache is expired. ### Describe your solution When JWT tokens are used and a subscription is blocked from the Publisher portal, the blocking takes effect immediately. It is suggested to implement this behavior for the Opaque tokens as well in away that gateway cache is invalidated when a subscription is blocked. #### Suggested Labels: 3.0.0 3.1.0
non_test
invalidate gateway cache when blocking subscriptions while using opaque tokens describe your problem s when opaque tokens are used to invoke apis they are cached in the gateway therefore if a subscription is blocked from the publisher portal the blocking doesn t take effect immediately until the cache is expired describe your solution when jwt tokens are used and a subscription is blocked from the publisher portal the blocking takes effect immediately it is suggested to implement this behavior for the opaque tokens as well in away that gateway cache is invalidated when a subscription is blocked suggested labels
0
52,831
13,225,113,679
IssuesEvent
2020-08-17 20:31:05
icecube-trac/tix4
https://api.github.com/repos/icecube-trac/tix4
closed
sim-services/PropagatorServiceUtils::Propagate broken/non-functional (Trac #423)
Migrated from Trac combo reconstruction defect
I'd like to commit the following attached patch with the following commit message: PropagatorServiceUtils::Propagate replaces its input pointer and should thus get it passed by reference. Also the wrong MCTree was modified (the original instead of the output copy). <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/423">https://code.icecube.wisc.edu/projects/icecube/ticket/423</a>, reported by claudio.kopperand owned by olivas</em></summary> <p> ```json { "status": "closed", "changetime": "2012-10-31T17:33:36", "_ts": "1351704816000000", "description": "I'd like to commit the following attached patch with the following commit message:\n\nPropagatorServiceUtils::Propagate replaces its input pointer and should thus get it passed by reference. Also the wrong MCTree was modified (the original instead of the output copy).\n", "reporter": "claudio.kopper", "cc": "", "resolution": "fixed", "time": "2012-06-25T00:59:43", "component": "combo reconstruction", "summary": "sim-services/PropagatorServiceUtils::Propagate broken/non-functional", "priority": "normal", "keywords": "", "milestone": "", "owner": "olivas", "type": "defect" } ``` </p> </details>
1.0
sim-services/PropagatorServiceUtils::Propagate broken/non-functional (Trac #423) - I'd like to commit the following attached patch with the following commit message: PropagatorServiceUtils::Propagate replaces its input pointer and should thus get it passed by reference. Also the wrong MCTree was modified (the original instead of the output copy). <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/423">https://code.icecube.wisc.edu/projects/icecube/ticket/423</a>, reported by claudio.kopperand owned by olivas</em></summary> <p> ```json { "status": "closed", "changetime": "2012-10-31T17:33:36", "_ts": "1351704816000000", "description": "I'd like to commit the following attached patch with the following commit message:\n\nPropagatorServiceUtils::Propagate replaces its input pointer and should thus get it passed by reference. Also the wrong MCTree was modified (the original instead of the output copy).\n", "reporter": "claudio.kopper", "cc": "", "resolution": "fixed", "time": "2012-06-25T00:59:43", "component": "combo reconstruction", "summary": "sim-services/PropagatorServiceUtils::Propagate broken/non-functional", "priority": "normal", "keywords": "", "milestone": "", "owner": "olivas", "type": "defect" } ``` </p> </details>
non_test
sim services propagatorserviceutils propagate broken non functional trac i d like to commit the following attached patch with the following commit message propagatorserviceutils propagate replaces its input pointer and should thus get it passed by reference also the wrong mctree was modified the original instead of the output copy migrated from json status closed changetime ts description i d like to commit the following attached patch with the following commit message n npropagatorserviceutils propagate replaces its input pointer and should thus get it passed by reference also the wrong mctree was modified the original instead of the output copy n reporter claudio kopper cc resolution fixed time component combo reconstruction summary sim services propagatorserviceutils propagate broken non functional priority normal keywords milestone owner olivas type defect
0
151,894
5,829,551,969
IssuesEvent
2017-05-08 14:49:43
magnolo/newhere
https://api.github.com/repos/magnolo/newhere
opened
I forgot my password
bug front end high priority NGO
The I forgot my password link takes you to the following page where you enter your email address: ![screen shot 2017-05-08 at 16 47 27](https://cloud.githubusercontent.com/assets/22774962/25809695/1b171c68-340e-11e7-8c78-63753e0c0d33.png) Once completed, you see this pop-up: ![screen shot 2017-05-08 at 16 48 35](https://cloud.githubusercontent.com/assets/22774962/25809715/31ea598c-340e-11e7-95bb-e415bfb6beb1.png) Despite this, no email arrives and it is thus not possible to change your password.
1.0
I forgot my password - The I forgot my password link takes you to the following page where you enter your email address: ![screen shot 2017-05-08 at 16 47 27](https://cloud.githubusercontent.com/assets/22774962/25809695/1b171c68-340e-11e7-8c78-63753e0c0d33.png) Once completed, you see this pop-up: ![screen shot 2017-05-08 at 16 48 35](https://cloud.githubusercontent.com/assets/22774962/25809715/31ea598c-340e-11e7-95bb-e415bfb6beb1.png) Despite this, no email arrives and it is thus not possible to change your password.
non_test
i forgot my password the i forgot my password link takes you to the following page where you enter your email address once completed you see this pop up despite this no email arrives and it is thus not possible to change your password
0
158,864
13,750,392,558
IssuesEvent
2020-10-06 11:59:04
competitive-programming-tools/competitive-problems-tools
https://api.github.com/repos/competitive-programming-tools/competitive-problems-tools
opened
Realizar uma Análise de Discurso
documentation elicitação
## Realizar uma Análise de Discurso ### O que é: A análise de discurso, especificamente a área de análise de conversação, é uma técnica que consiste em uma ou mais reuniões de conversa livre entres os stakeholders, em que um dos participantes fica responsável por tentar identificar as ideias expostas, sejam elas novas funcionalidades, requisitos, melhorias, problemas, entre outras sugestões ### Description: <!-- Describe issue briefly, inserting information necessary for the understanding of the future developer. --> ### Acceptance criteria: <!-- Define acceptance criteria to verify the completeness of the issue --> - [ ] criteria 1 - [ ] criteria 2 ### Tasks: <!-- Checklist of actions that should possibly be taken to complete the issue. --> - [ ] Task 1 - [ ] Task 2
1.0
Realizar uma Análise de Discurso - ## Realizar uma Análise de Discurso ### O que é: A análise de discurso, especificamente a área de análise de conversação, é uma técnica que consiste em uma ou mais reuniões de conversa livre entres os stakeholders, em que um dos participantes fica responsável por tentar identificar as ideias expostas, sejam elas novas funcionalidades, requisitos, melhorias, problemas, entre outras sugestões ### Description: <!-- Describe issue briefly, inserting information necessary for the understanding of the future developer. --> ### Acceptance criteria: <!-- Define acceptance criteria to verify the completeness of the issue --> - [ ] criteria 1 - [ ] criteria 2 ### Tasks: <!-- Checklist of actions that should possibly be taken to complete the issue. --> - [ ] Task 1 - [ ] Task 2
non_test
realizar uma análise de discurso realizar uma análise de discurso o que é a análise de discurso especificamente a área de análise de conversação é uma técnica que consiste em uma ou mais reuniões de conversa livre entres os stakeholders em que um dos participantes fica responsável por tentar identificar as ideias expostas sejam elas novas funcionalidades requisitos melhorias problemas entre outras sugestões description acceptance criteria criteria criteria tasks task task
0
46,169
11,795,301,806
IssuesEvent
2020-03-18 08:41:14
ShaikASK/Testing
https://api.github.com/repos/ShaikASK/Testing
closed
New Hire :Edit work Flow : Review and certify : Hire status is displayed as 'Accepted' when a 'Read Only' document is added (all other documents are signed by candidate) to the candidate workflow in 'Completed' status
Defect P1 Release#7 Build#3
New Hire : Edit work Flow : Hire status is still displayed as accepted when all documents are approved from HR Review & Certify screen Steps To Replicate : 1.Launch the URL 2.Sign in as HR Admin user 3.Click on edit workflow option displayed against completed hire status 4.Navigated to edit workflow screen 5.Add Read Only document in newly added step and click on send button Experienced Behavior : Observed that hire status is still displayed as accepted when all documents are approved from HR Review & Certify screen
1.0
New Hire :Edit work Flow : Review and certify : Hire status is displayed as 'Accepted' when a 'Read Only' document is added (all other documents are signed by candidate) to the candidate workflow in 'Completed' status - New Hire : Edit work Flow : Hire status is still displayed as accepted when all documents are approved from HR Review & Certify screen Steps To Replicate : 1.Launch the URL 2.Sign in as HR Admin user 3.Click on edit workflow option displayed against completed hire status 4.Navigated to edit workflow screen 5.Add Read Only document in newly added step and click on send button Experienced Behavior : Observed that hire status is still displayed as accepted when all documents are approved from HR Review & Certify screen
non_test
new hire edit work flow review and certify hire status is displayed as accepted when a read only document is added all other documents are signed by candidate to the candidate workflow in completed status new hire edit work flow hire status is still displayed as accepted when all documents are approved from hr review certify screen steps to replicate launch the url sign in as hr admin user click on edit workflow option displayed against completed hire status navigated to edit workflow screen add read only document in newly added step and click on send button experienced behavior observed that hire status is still displayed as accepted when all documents are approved from hr review certify screen
0
74,679
15,365,217,422
IssuesEvent
2021-03-01 23:10:09
samq-wsdemo/SecurityShepherd
https://api.github.com/repos/samq-wsdemo/SecurityShepherd
opened
CVE-2020-2933 (Low) detected in mysql-connector-java-5.1.24.jar
security vulnerability
## CVE-2020-2933 - Low Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mysql-connector-java-5.1.24.jar</b></p></summary> <p>MySQL JDBC Type 4 driver</p> <p>Library home page: <a href="http://dev.mysql.com/doc/connector-j/en/">http://dev.mysql.com/doc/connector-j/en/</a></p> <p>Path to dependency file: SecurityShepherd/pom.xml</p> <p>Path to vulnerable library: canner/.m2/repository/mysql/mysql-connector-java/5.1.24/mysql-connector-java-5.1.24.jar,SecurityShepherd/target/owaspSecurityShepherd/WEB-INF/lib/mysql-connector-java-5.1.24.jar</p> <p> Dependency Hierarchy: - :x: **mysql-connector-java-5.1.24.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/samq-wsdemo/SecurityShepherd/commit/7fb988217aedaf44f75f5dfe1a668c123722e409">7fb988217aedaf44f75f5dfe1a668c123722e409</a></p> <p>Found in base branch: <b>dev</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Vulnerability in the MySQL Connectors product of Oracle MySQL (component: Connector/J). Supported versions that are affected are 5.1.48 and prior. Difficult to exploit vulnerability allows high privileged attacker with network access via multiple protocols to compromise MySQL Connectors. Successful attacks of this vulnerability can result in unauthorized ability to cause a partial denial of service (partial DOS) of MySQL Connectors. CVSS 3.0 Base Score 2.2 (Availability impacts). CVSS Vector: (CVSS:3.0/AV:N/AC:H/PR:H/UI:N/S:U/C:N/I:N/A:L). <p>Publish Date: 2020-04-15 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-2933>CVE-2020-2933</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>2.2</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: High - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://docs.oracle.com/javase/7/docs/api/javax/xml/XMLConstants.html#FEATURE_SECURE_PROCESSING">https://docs.oracle.com/javase/7/docs/api/javax/xml/XMLConstants.html#FEATURE_SECURE_PROCESSING</a></p> <p>Release Date: 2020-04-15</p> <p>Fix Resolution: mysql:mysql-connector-java:5.1.49</p> </p> </details> <p></p> *** <!-- REMEDIATE-OPEN-PR-START --> - [ ] Check this box to open an automated fix PR <!-- REMEDIATE-OPEN-PR-END --> <!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"mysql","packageName":"mysql-connector-java","packageVersion":"5.1.24","packageFilePaths":["/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"mysql:mysql-connector-java:5.1.24","isMinimumFixVersionAvailable":true,"minimumFixVersion":"mysql:mysql-connector-java:5.1.49"}],"baseBranches":["dev"],"vulnerabilityIdentifier":"CVE-2020-2933","vulnerabilityDetails":"Vulnerability in the MySQL Connectors product of Oracle MySQL (component: Connector/J). Supported versions that are affected are 5.1.48 and prior. Difficult to exploit vulnerability allows high privileged attacker with network access via multiple protocols to compromise MySQL Connectors. Successful attacks of this vulnerability can result in unauthorized ability to cause a partial denial of service (partial DOS) of MySQL Connectors. CVSS 3.0 Base Score 2.2 (Availability impacts). CVSS Vector: (CVSS:3.0/AV:N/AC:H/PR:H/UI:N/S:U/C:N/I:N/A:L).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-2933","cvss3Severity":"low","cvss3Score":"2.2","cvss3Metrics":{"A":"Low","AC":"High","PR":"High","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
True
CVE-2020-2933 (Low) detected in mysql-connector-java-5.1.24.jar - ## CVE-2020-2933 - Low Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mysql-connector-java-5.1.24.jar</b></p></summary> <p>MySQL JDBC Type 4 driver</p> <p>Library home page: <a href="http://dev.mysql.com/doc/connector-j/en/">http://dev.mysql.com/doc/connector-j/en/</a></p> <p>Path to dependency file: SecurityShepherd/pom.xml</p> <p>Path to vulnerable library: canner/.m2/repository/mysql/mysql-connector-java/5.1.24/mysql-connector-java-5.1.24.jar,SecurityShepherd/target/owaspSecurityShepherd/WEB-INF/lib/mysql-connector-java-5.1.24.jar</p> <p> Dependency Hierarchy: - :x: **mysql-connector-java-5.1.24.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/samq-wsdemo/SecurityShepherd/commit/7fb988217aedaf44f75f5dfe1a668c123722e409">7fb988217aedaf44f75f5dfe1a668c123722e409</a></p> <p>Found in base branch: <b>dev</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Vulnerability in the MySQL Connectors product of Oracle MySQL (component: Connector/J). Supported versions that are affected are 5.1.48 and prior. Difficult to exploit vulnerability allows high privileged attacker with network access via multiple protocols to compromise MySQL Connectors. Successful attacks of this vulnerability can result in unauthorized ability to cause a partial denial of service (partial DOS) of MySQL Connectors. CVSS 3.0 Base Score 2.2 (Availability impacts). CVSS Vector: (CVSS:3.0/AV:N/AC:H/PR:H/UI:N/S:U/C:N/I:N/A:L). <p>Publish Date: 2020-04-15 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-2933>CVE-2020-2933</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>2.2</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: High - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://docs.oracle.com/javase/7/docs/api/javax/xml/XMLConstants.html#FEATURE_SECURE_PROCESSING">https://docs.oracle.com/javase/7/docs/api/javax/xml/XMLConstants.html#FEATURE_SECURE_PROCESSING</a></p> <p>Release Date: 2020-04-15</p> <p>Fix Resolution: mysql:mysql-connector-java:5.1.49</p> </p> </details> <p></p> *** <!-- REMEDIATE-OPEN-PR-START --> - [ ] Check this box to open an automated fix PR <!-- REMEDIATE-OPEN-PR-END --> <!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"mysql","packageName":"mysql-connector-java","packageVersion":"5.1.24","packageFilePaths":["/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"mysql:mysql-connector-java:5.1.24","isMinimumFixVersionAvailable":true,"minimumFixVersion":"mysql:mysql-connector-java:5.1.49"}],"baseBranches":["dev"],"vulnerabilityIdentifier":"CVE-2020-2933","vulnerabilityDetails":"Vulnerability in the MySQL Connectors product of Oracle MySQL (component: Connector/J). Supported versions that are affected are 5.1.48 and prior. Difficult to exploit vulnerability allows high privileged attacker with network access via multiple protocols to compromise MySQL Connectors. Successful attacks of this vulnerability can result in unauthorized ability to cause a partial denial of service (partial DOS) of MySQL Connectors. CVSS 3.0 Base Score 2.2 (Availability impacts). CVSS Vector: (CVSS:3.0/AV:N/AC:H/PR:H/UI:N/S:U/C:N/I:N/A:L).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-2933","cvss3Severity":"low","cvss3Score":"2.2","cvss3Metrics":{"A":"Low","AC":"High","PR":"High","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
non_test
cve low detected in mysql connector java jar cve low severity vulnerability vulnerable library mysql connector java jar mysql jdbc type driver library home page a href path to dependency file securityshepherd pom xml path to vulnerable library canner repository mysql mysql connector java mysql connector java jar securityshepherd target owaspsecurityshepherd web inf lib mysql connector java jar dependency hierarchy x mysql connector java jar vulnerable library found in head commit a href found in base branch dev vulnerability details vulnerability in the mysql connectors product of oracle mysql component connector j supported versions that are affected are and prior difficult to exploit vulnerability allows high privileged attacker with network access via multiple protocols to compromise mysql connectors successful attacks of this vulnerability can result in unauthorized ability to cause a partial denial of service partial dos of mysql connectors cvss base score availability impacts cvss vector cvss av n ac h pr h ui n s u c n i n a l publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required high user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution mysql mysql connector java check this box to open an automated fix pr isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree mysql mysql connector java isminimumfixversionavailable true minimumfixversion mysql mysql connector java basebranches vulnerabilityidentifier cve vulnerabilitydetails vulnerability in the mysql connectors product of oracle mysql component connector j supported versions that are affected are and prior difficult to exploit vulnerability allows high privileged attacker with network access via multiple protocols to compromise mysql connectors successful attacks of this vulnerability can result in unauthorized ability to cause a partial denial of service partial dos of mysql connectors cvss base score availability impacts cvss vector cvss av n ac h pr h ui n s u c n i n a l vulnerabilityurl
0
41,436
5,356,411,524
IssuesEvent
2017-02-20 15:35:46
720kb/ndm
https://api.github.com/repos/720kb/ndm
closed
App updater is updating mac with the linux .zip (probably)
In Progress ... priority:1 Seems-fixed.... TODO: Test it Windows  Mac
If i update from mac i get the linux version installed as new version (can see it from the font that changes to the linux default font 🐣 )
1.0
App updater is updating mac with the linux .zip (probably) - If i update from mac i get the linux version installed as new version (can see it from the font that changes to the linux default font 🐣 )
test
app updater is updating mac with the linux zip probably if i update from mac i get the linux version installed as new version can see it from the font that changes to the linux default font 🐣
1
220,366
17,191,062,370
IssuesEvent
2021-07-16 11:01:42
momentum-mod/game
https://api.github.com/repos/momentum-mod/game
closed
Stickies bounce near portals
Blocked: Needs testing & verification Priority: Medium Type: Bug
**Describe the bug** Stickies bounce on surfaces near the edges of portals due to portals being models behind the scenes. Ideally they shouldn't have any collision with stickies or any weapon/entity, since they're meant to be invisible models.
1.0
Stickies bounce near portals - **Describe the bug** Stickies bounce on surfaces near the edges of portals due to portals being models behind the scenes. Ideally they shouldn't have any collision with stickies or any weapon/entity, since they're meant to be invisible models.
test
stickies bounce near portals describe the bug stickies bounce on surfaces near the edges of portals due to portals being models behind the scenes ideally they shouldn t have any collision with stickies or any weapon entity since they re meant to be invisible models
1
321,858
27,561,537,736
IssuesEvent
2023-03-07 22:31:20
golang/go
https://api.github.com/repos/golang/go
closed
cmd/go: TestCacheCoverage failures
Testing NeedsInvestigation GoCommand
``` #!watchflakes post <- pkg == "cmd/go" && test == "TestCacheCoverage" && `test timed out while running command` ``` Issue created automatically to collect these failures. Example ([log](https://build.golang.org/log/26188a34341357875931c56ab2d366ccb29ebc27)): --- FAIL: TestCacheCoverage (125.29s) go_test.go:2388: running testgo [test -cover -short strings] go_test.go:2388: standard output: go_test.go:2388: ok strings 5.489s coverage: 97.7% of statements go_test.go:2389: running testgo [test -cover -short math strings] exec.go:146: test timed out while running command: /tmp/buildlet/tmp/cmd-go-test-2779377144/tmpdir69193264/testbin/go test -cover -short math strings go_test.go:2389: standard error: go_test.go:2389: SIGQUIT: quit PC=0x7ff81f8423ea m=0 sigcode=0 ... panic: test timed out after 9m0s running tests: TestTestCache (2m0s) goroutine 11740 [running]: panic({0x1a97b40, 0xc0001c2240}) /tmp/buildlet/go/src/runtime/panic.go:987 +0x3bb fp=0xc0002b7f18 sp=0xc0002b7e58 pc=0x10888db testing.(*M).startAlarm.func1() /tmp/buildlet/go/src/testing/testing.go:2241 +0x219 fp=0xc0002b7fe0 sp=0xc0002b7f18 pc=0x11f90b9 runtime.goexit() /tmp/buildlet/go/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc0002b7fe8 sp=0xc0002b7fe0 pc=0x10c1321 created by time.goFunc /tmp/buildlet/go/src/time/sleep.go:176 +0x48 — [watchflakes](https://go.dev/wiki/Watchflakes)
1.0
cmd/go: TestCacheCoverage failures - ``` #!watchflakes post <- pkg == "cmd/go" && test == "TestCacheCoverage" && `test timed out while running command` ``` Issue created automatically to collect these failures. Example ([log](https://build.golang.org/log/26188a34341357875931c56ab2d366ccb29ebc27)): --- FAIL: TestCacheCoverage (125.29s) go_test.go:2388: running testgo [test -cover -short strings] go_test.go:2388: standard output: go_test.go:2388: ok strings 5.489s coverage: 97.7% of statements go_test.go:2389: running testgo [test -cover -short math strings] exec.go:146: test timed out while running command: /tmp/buildlet/tmp/cmd-go-test-2779377144/tmpdir69193264/testbin/go test -cover -short math strings go_test.go:2389: standard error: go_test.go:2389: SIGQUIT: quit PC=0x7ff81f8423ea m=0 sigcode=0 ... panic: test timed out after 9m0s running tests: TestTestCache (2m0s) goroutine 11740 [running]: panic({0x1a97b40, 0xc0001c2240}) /tmp/buildlet/go/src/runtime/panic.go:987 +0x3bb fp=0xc0002b7f18 sp=0xc0002b7e58 pc=0x10888db testing.(*M).startAlarm.func1() /tmp/buildlet/go/src/testing/testing.go:2241 +0x219 fp=0xc0002b7fe0 sp=0xc0002b7f18 pc=0x11f90b9 runtime.goexit() /tmp/buildlet/go/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc0002b7fe8 sp=0xc0002b7fe0 pc=0x10c1321 created by time.goFunc /tmp/buildlet/go/src/time/sleep.go:176 +0x48 — [watchflakes](https://go.dev/wiki/Watchflakes)
test
cmd go testcachecoverage failures watchflakes post pkg cmd go test testcachecoverage test timed out while running command issue created automatically to collect these failures example fail testcachecoverage go test go running testgo go test go standard output go test go ok strings coverage of statements go test go running testgo exec go test timed out while running command tmp buildlet tmp cmd go test testbin go test cover short math strings go test go standard error go test go sigquit quit pc m sigcode panic test timed out after running tests testtestcache goroutine panic tmp buildlet go src runtime panic go fp sp pc testing m startalarm tmp buildlet go src testing testing go fp sp pc runtime goexit tmp buildlet go src runtime asm s fp sp pc created by time gofunc tmp buildlet go src time sleep go —
1
142,384
5,474,664,724
IssuesEvent
2017-03-11 02:22:05
tgstation/tgstation
https://api.github.com/repos/tgstation/tgstation
closed
Infinite loop suspected--switching proc to background.
Priority: High Runtime
Loaded station in 14.4s! Infinite loop suspected--switching proc to background. If it is not an infinite loop, either do 'set background=1' or set world.loop_checks=0. proc name: get area turfs (/proc/get_area_turfs) source file: unsorted.dm,650 usr: (src) src: null call stack: get area turfs(/area/space (/area/space), 0, 0) process teleport locs() Mapping (/datum/controller/subsystem/mapping): Initialize(849594) Master (/datum/controller/master): Initialize(10, 0) Infinite loop suspected--switching proc to background. If it is not an infinite loop, either do 'set background=1' or set world.loop_checks=0.
1.0
Infinite loop suspected--switching proc to background. - Loaded station in 14.4s! Infinite loop suspected--switching proc to background. If it is not an infinite loop, either do 'set background=1' or set world.loop_checks=0. proc name: get area turfs (/proc/get_area_turfs) source file: unsorted.dm,650 usr: (src) src: null call stack: get area turfs(/area/space (/area/space), 0, 0) process teleport locs() Mapping (/datum/controller/subsystem/mapping): Initialize(849594) Master (/datum/controller/master): Initialize(10, 0) Infinite loop suspected--switching proc to background. If it is not an infinite loop, either do 'set background=1' or set world.loop_checks=0.
non_test
infinite loop suspected switching proc to background loaded station in infinite loop suspected switching proc to background if it is not an infinite loop either do set background or set world loop checks proc name get area turfs proc get area turfs source file unsorted dm usr src src null call stack get area turfs area space area space process teleport locs mapping datum controller subsystem mapping initialize master datum controller master initialize infinite loop suspected switching proc to background if it is not an infinite loop either do set background or set world loop checks
0
167,340
20,726,022,886
IssuesEvent
2022-03-14 02:03:27
peterwkc85/Java_Jackson
https://api.github.com/repos/peterwkc85/Java_Jackson
opened
CVE-2020-25649 (High) detected in jackson-databind-2.2.3.jar
security vulnerability
## CVE-2020-25649 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.2.3.jar</b></p></summary> <p>A high-performance JSON processor (parser, generator).</p> <p>Path to dependency file: /json-patch-master/json-patch-master/build.gradle</p> <p>Path to vulnerable library: /json-patch-master/json-patch-master/build.gradle</p> <p> Dependency Hierarchy: - jackson-coreutils-1.6.jar (Root Library) - :x: **jackson-databind-2.2.3.jar** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A flaw was found in FasterXML Jackson Databind, where it did not have entity expansion secured properly. This flaw allows vulnerability to XML external entity (XXE) attacks. The highest threat from this vulnerability is data integrity. <p>Publish Date: 2020-12-03 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-25649>CVE-2020-25649</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: High - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/FasterXML/jackson-databind/issues/2589">https://github.com/FasterXML/jackson-databind/issues/2589</a></p> <p>Release Date: 2020-12-03</p> <p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.6.7.4,2.9.10.7,2.10.5.1,2.11.0.rc1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2020-25649 (High) detected in jackson-databind-2.2.3.jar - ## CVE-2020-25649 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.2.3.jar</b></p></summary> <p>A high-performance JSON processor (parser, generator).</p> <p>Path to dependency file: /json-patch-master/json-patch-master/build.gradle</p> <p>Path to vulnerable library: /json-patch-master/json-patch-master/build.gradle</p> <p> Dependency Hierarchy: - jackson-coreutils-1.6.jar (Root Library) - :x: **jackson-databind-2.2.3.jar** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A flaw was found in FasterXML Jackson Databind, where it did not have entity expansion secured properly. This flaw allows vulnerability to XML external entity (XXE) attacks. The highest threat from this vulnerability is data integrity. <p>Publish Date: 2020-12-03 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-25649>CVE-2020-25649</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: High - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/FasterXML/jackson-databind/issues/2589">https://github.com/FasterXML/jackson-databind/issues/2589</a></p> <p>Release Date: 2020-12-03</p> <p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.6.7.4,2.9.10.7,2.10.5.1,2.11.0.rc1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_test
cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar a high performance json processor parser generator path to dependency file json patch master json patch master build gradle path to vulnerable library json patch master json patch master build gradle dependency hierarchy jackson coreutils jar root library x jackson databind jar vulnerable library vulnerability details a flaw was found in fasterxml jackson databind where it did not have entity expansion secured properly this flaw allows vulnerability to xml external entity xxe attacks the highest threat from this vulnerability is data integrity publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind step up your open source security game with whitesource
0
155,343
12,246,859,306
IssuesEvent
2020-05-05 15:03:15
fetchai/agents-aea
https://api.github.com/repos/fetchai/agents-aea
closed
Add check for consistent package versions
core package test
**Is your feature request related to a problem? Please describe.** Ensuring that package identifiers are not out of sync in the docs is tricky. **Describe the solution you'd like** Add a package version consistency checker. This should look at all occurences of: > add skill/add contract/add protocol/add connection/fetch PUBLIC_ID and then check if the PUBLIC_ID of that PACKAGE_TYPE is present in the packages dir (by checking config files)
1.0
Add check for consistent package versions - **Is your feature request related to a problem? Please describe.** Ensuring that package identifiers are not out of sync in the docs is tricky. **Describe the solution you'd like** Add a package version consistency checker. This should look at all occurences of: > add skill/add contract/add protocol/add connection/fetch PUBLIC_ID and then check if the PUBLIC_ID of that PACKAGE_TYPE is present in the packages dir (by checking config files)
test
add check for consistent package versions is your feature request related to a problem please describe ensuring that package identifiers are not out of sync in the docs is tricky describe the solution you d like add a package version consistency checker this should look at all occurences of add skill add contract add protocol add connection fetch public id and then check if the public id of that package type is present in the packages dir by checking config files
1
8,434
3,176,203,476
IssuesEvent
2015-09-24 07:27:18
commercialhaskell/stack
https://api.github.com/repos/commercialhaskell/stack
closed
capitalization: stack vs Stack
component: documentation
I know the command in terminal is just `stack`, but I find it *really* awkward that the word is not capitalized through the GUIDE.md. The word "stack" is a plain English word. Capitalization is almost always used to identify the difference between this special thing named "Stack" versus the general English noun. I really wish we could stick with capitalized Stack or use \`stack\` (with the markdown code markings) in places where we really want to keep the command as the lowercase code command.
1.0
capitalization: stack vs Stack - I know the command in terminal is just `stack`, but I find it *really* awkward that the word is not capitalized through the GUIDE.md. The word "stack" is a plain English word. Capitalization is almost always used to identify the difference between this special thing named "Stack" versus the general English noun. I really wish we could stick with capitalized Stack or use \`stack\` (with the markdown code markings) in places where we really want to keep the command as the lowercase code command.
non_test
capitalization stack vs stack i know the command in terminal is just stack but i find it really awkward that the word is not capitalized through the guide md the word stack is a plain english word capitalization is almost always used to identify the difference between this special thing named stack versus the general english noun i really wish we could stick with capitalized stack or use stack with the markdown code markings in places where we really want to keep the command as the lowercase code command
0
1,944
7,016,285,764
IssuesEvent
2017-12-21 02:37:53
littlevgl/lvgl
https://api.github.com/repos/littlevgl/lvgl
closed
Suggestion: Move application out of lvgl
architecture
It is a small suggestion to keep library clean and independent. Is it possible to move lv_app/lv_appx out of lvgl source as they are not part of library as such. May it can be moved to a new repo just like hal/misc etc. This way lvgl as a source will have no code that may or may not be usable to others. One can always close the lvapp repo if he/she needs.
1.0
Suggestion: Move application out of lvgl - It is a small suggestion to keep library clean and independent. Is it possible to move lv_app/lv_appx out of lvgl source as they are not part of library as such. May it can be moved to a new repo just like hal/misc etc. This way lvgl as a source will have no code that may or may not be usable to others. One can always close the lvapp repo if he/she needs.
non_test
suggestion move application out of lvgl it is a small suggestion to keep library clean and independent is it possible to move lv app lv appx out of lvgl source as they are not part of library as such may it can be moved to a new repo just like hal misc etc this way lvgl as a source will have no code that may or may not be usable to others one can always close the lvapp repo if he she needs
0
34,891
7,875,457,640
IssuesEvent
2018-06-25 20:29:58
Microsoft/PTVS
https://api.github.com/repos/Microsoft/PTVS
opened
Incorrect encoding error message when no BOM
area:Code Intelligence bug regression
Create a new Python project (with its single file). Add `# coding: ascii` Look at error messages: ``` Severity Description Project File Line Suppression State Error file has both Unicode marker and PEP-263 file encoding. You must use "utf-8" as the encoding name when a BOM is present. PythonApplication1 PythonApplication1.py 1 Error File is saved in encoding 'Unicode (UTF-8)' which does not match encoding 'ascii' specified in the coding comment PythonApplication1 PythonApplication1.py 1 ``` The second error is correct, but the first is not. There is no BOM in this file.
1.0
Incorrect encoding error message when no BOM - Create a new Python project (with its single file). Add `# coding: ascii` Look at error messages: ``` Severity Description Project File Line Suppression State Error file has both Unicode marker and PEP-263 file encoding. You must use "utf-8" as the encoding name when a BOM is present. PythonApplication1 PythonApplication1.py 1 Error File is saved in encoding 'Unicode (UTF-8)' which does not match encoding 'ascii' specified in the coding comment PythonApplication1 PythonApplication1.py 1 ``` The second error is correct, but the first is not. There is no BOM in this file.
non_test
incorrect encoding error message when no bom create a new python project with its single file add coding ascii look at error messages severity description project file line suppression state error file has both unicode marker and pep file encoding you must use utf as the encoding name when a bom is present py error file is saved in encoding unicode utf which does not match encoding ascii specified in the coding comment py the second error is correct but the first is not there is no bom in this file
0
329,908
28,313,670,975
IssuesEvent
2023-04-10 17:38:28
DickinsonCollege/FarmData2
https://api.github.com/repos/DickinsonCollege/FarmData2
closed
Refactoring Seeding Report Log Creations
testing refactoring
## Current Design ## Currently inside of the Cypress [file](https://github.com/DickinsonCollege/FarmData2/blob/main/farmdata2_modules/fd2_tabs/fd2_barn_kit/seedingReport.spec.js) for the Seeding Report's e2e test log creation is being done using a req: ![image](https://user-images.githubusercontent.com/31524934/181606573-386f044e-a376-4e0b-a4dc-7ba1473b81cf.png) even though the FarmOSAPI supports a function for creating logs [[here](https://github.com/DickinsonCollege/FarmData2/blob/main/farmdata2_modules/fd2_tabs/resources/FarmOSAPI.js#L500)]. ## Desired Design ## It would be best to be consistent and to use the resources available via the FarmOSAPI. We're using some of those to get and delete records, so we should use the pre-existing function to also create them. The desired design would be to remove those req and to simply use the createRecord() function.
1.0
Refactoring Seeding Report Log Creations - ## Current Design ## Currently inside of the Cypress [file](https://github.com/DickinsonCollege/FarmData2/blob/main/farmdata2_modules/fd2_tabs/fd2_barn_kit/seedingReport.spec.js) for the Seeding Report's e2e test log creation is being done using a req: ![image](https://user-images.githubusercontent.com/31524934/181606573-386f044e-a376-4e0b-a4dc-7ba1473b81cf.png) even though the FarmOSAPI supports a function for creating logs [[here](https://github.com/DickinsonCollege/FarmData2/blob/main/farmdata2_modules/fd2_tabs/resources/FarmOSAPI.js#L500)]. ## Desired Design ## It would be best to be consistent and to use the resources available via the FarmOSAPI. We're using some of those to get and delete records, so we should use the pre-existing function to also create them. The desired design would be to remove those req and to simply use the createRecord() function.
test
refactoring seeding report log creations current design currently inside of the cypress for the seeding report s test log creation is being done using a req even though the farmosapi supports a function for creating logs desired design it would be best to be consistent and to use the resources available via the farmosapi we re using some of those to get and delete records so we should use the pre existing function to also create them the desired design would be to remove those req and to simply use the createrecord function
1
63,613
15,683,267,589
IssuesEvent
2021-03-25 08:33:06
tensorflow/tensorflow
https://api.github.com/repos/tensorflow/tensorflow
closed
When ryzen build tf_to_gpu_binary.exe failed: error executing command error
TF 2.4 subtype:windows type:build/install
<em>Please make sure that this is a build/installation issue. As per our [GitHub Policy](https://github.com/tensorflow/tensorflow/blob/master/ISSUES.md), we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:build_template</em> **System information** - OS : Windows 10 20H2 - TensorFlow version: 2.4.1 - Python version: python=3.9 - Building anaconda env - Bazel version : using bazelisk - CUDA/cuDNN version: cuda 11.1, cuDNN 8.0.5 - GPU model and memory: RTX2080, 8G When i build tensorflow in 10900k build succeed. But error is occur when i try to build ryzen 5600x or ryzen3700x. > INFO: Found 1 target... INFO: Deleting stale sandbox base C:/users/user/_bazel_user/d3ty3xtx/sandbox ERROR: C:/users/user/documents/cpplibrarys/tensorflow/tensorflow/core/kernels/mlir_generated/BUILD:149:1: compile tensorflow/core/kernels/mlir_generated/abs_i64_kernel_cubin.sm_75.bin failed (Exit 1): tf_to_gpu_binary.exe failed: error executing command cd C:/users/user/_bazel_user/d3ty3xtx/execroot/org_tensorflow bazel-out/x64_windows-opt/bin/tensorflow/compiler/mlir/tools/kernel_gen/tf_to_gpu_binary.exe --same_shape=0,1 --unroll_factors=4 --tile_sizes=256 --arch=sm_75 --input=bazel-out/x64_windows-opt/bin/tensorflow/core/kernels/mlir_generated/abs_i64.mlir --output=bazel-out/x64_windows-opt/bin/tensorflow/core/kernels/mlir_generated/abs_i64_kernel_cubin.sm_75.bin Execution platform: @local_execution_config_platform//:platform 2021-01-26 17:38:55.379641: I tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:194] disabling MLIR crash reproducer, set env var `MLIR_CRASH_REPRODUCER_DIRECTORY` to enable. warning: Linking two modules of different data layouts: 'C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.1/nvvm/libdevice/libdevice.10.bc' is 'e-i64:64-v16:16-v32:32-n16:32:64' whereas 'acme' is 'e-i64:64-i128:128-v16:16-v32:32-n16:32:64' 2021-01-26 17:38:55.768832: I tensorflow/core/platform/windows/subprocess.cc:308] SubProcess ended with return code: 0 2021-01-26 17:38:55.845907: I tensorflow/core/platform/windows/subprocess.cc:308] SubProcess ended with return code: 4294967295 2021-01-26 17:38:55.847744: E tensorflow/compiler/mlir/tools/kernel_gen/tf_to_gpu_binary.cc:97] Internal: Lowering to LLVM IR failed. Target //tensorflow/tools/pip_package:build_pip_package failed to build ERROR: C:/users/user/documents/cpplibrarys/tensorflow/tensorflow/python/data/experimental/service/BUILD:11:1 compile tensorflow/core/kernels/mlir_generated/tanh_f32_kernel_cubin.sm_75.bin failed (Exit 1): tf_to_gpu_binary.exe failed: error executing command cd C:/users/user/_bazel_user/d3ty3xtx/execroot/org_tensorflow bazel-out/x64_windows-opt/bin/tensorflow/compiler/mlir/tools/kernel_gen/tf_to_gpu_binary.exe --same_shape=0,1 --unroll_factors=4 --tile_sizes=256 --arch=sm_75 --input=bazel-out/x64_windows-opt/bin/tensorflow/core/kernels/mlir_generated/tanh_f32.mlir --output=bazel-out/x64_windows-opt/bin/tensorflow/core/kernels/mlir_generated/tanh_f32_kernel_cubin.sm_75.bin Execution platform: @local_execution_config_platform//:platform INFO: Elapsed time: 69.840s, Critical Path: 24.63s INFO: 0 processes. FAILED: Build did NOT complete successfully
1.0
When ryzen build tf_to_gpu_binary.exe failed: error executing command error - <em>Please make sure that this is a build/installation issue. As per our [GitHub Policy](https://github.com/tensorflow/tensorflow/blob/master/ISSUES.md), we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:build_template</em> **System information** - OS : Windows 10 20H2 - TensorFlow version: 2.4.1 - Python version: python=3.9 - Building anaconda env - Bazel version : using bazelisk - CUDA/cuDNN version: cuda 11.1, cuDNN 8.0.5 - GPU model and memory: RTX2080, 8G When i build tensorflow in 10900k build succeed. But error is occur when i try to build ryzen 5600x or ryzen3700x. > INFO: Found 1 target... INFO: Deleting stale sandbox base C:/users/user/_bazel_user/d3ty3xtx/sandbox ERROR: C:/users/user/documents/cpplibrarys/tensorflow/tensorflow/core/kernels/mlir_generated/BUILD:149:1: compile tensorflow/core/kernels/mlir_generated/abs_i64_kernel_cubin.sm_75.bin failed (Exit 1): tf_to_gpu_binary.exe failed: error executing command cd C:/users/user/_bazel_user/d3ty3xtx/execroot/org_tensorflow bazel-out/x64_windows-opt/bin/tensorflow/compiler/mlir/tools/kernel_gen/tf_to_gpu_binary.exe --same_shape=0,1 --unroll_factors=4 --tile_sizes=256 --arch=sm_75 --input=bazel-out/x64_windows-opt/bin/tensorflow/core/kernels/mlir_generated/abs_i64.mlir --output=bazel-out/x64_windows-opt/bin/tensorflow/core/kernels/mlir_generated/abs_i64_kernel_cubin.sm_75.bin Execution platform: @local_execution_config_platform//:platform 2021-01-26 17:38:55.379641: I tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:194] disabling MLIR crash reproducer, set env var `MLIR_CRASH_REPRODUCER_DIRECTORY` to enable. warning: Linking two modules of different data layouts: 'C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.1/nvvm/libdevice/libdevice.10.bc' is 'e-i64:64-v16:16-v32:32-n16:32:64' whereas 'acme' is 'e-i64:64-i128:128-v16:16-v32:32-n16:32:64' 2021-01-26 17:38:55.768832: I tensorflow/core/platform/windows/subprocess.cc:308] SubProcess ended with return code: 0 2021-01-26 17:38:55.845907: I tensorflow/core/platform/windows/subprocess.cc:308] SubProcess ended with return code: 4294967295 2021-01-26 17:38:55.847744: E tensorflow/compiler/mlir/tools/kernel_gen/tf_to_gpu_binary.cc:97] Internal: Lowering to LLVM IR failed. Target //tensorflow/tools/pip_package:build_pip_package failed to build ERROR: C:/users/user/documents/cpplibrarys/tensorflow/tensorflow/python/data/experimental/service/BUILD:11:1 compile tensorflow/core/kernels/mlir_generated/tanh_f32_kernel_cubin.sm_75.bin failed (Exit 1): tf_to_gpu_binary.exe failed: error executing command cd C:/users/user/_bazel_user/d3ty3xtx/execroot/org_tensorflow bazel-out/x64_windows-opt/bin/tensorflow/compiler/mlir/tools/kernel_gen/tf_to_gpu_binary.exe --same_shape=0,1 --unroll_factors=4 --tile_sizes=256 --arch=sm_75 --input=bazel-out/x64_windows-opt/bin/tensorflow/core/kernels/mlir_generated/tanh_f32.mlir --output=bazel-out/x64_windows-opt/bin/tensorflow/core/kernels/mlir_generated/tanh_f32_kernel_cubin.sm_75.bin Execution platform: @local_execution_config_platform//:platform INFO: Elapsed time: 69.840s, Critical Path: 24.63s INFO: 0 processes. FAILED: Build did NOT complete successfully
non_test
when ryzen build tf to gpu binary exe failed error executing command error please make sure that this is a build installation issue as per our we only address code doc bugs performance issues feature requests and build installation issues on github tag build template system information os windows tensorflow version python version python building anaconda env bazel version using bazelisk cuda cudnn version cuda cudnn gpu model and memory when i build tensorflow in build succeed but error is occur when i try to build ryzen or info found target info deleting stale sandbox base c users user bazel user sandbox error c users user documents cpplibrarys tensorflow tensorflow core kernels mlir generated build compile tensorflow core kernels mlir generated abs kernel cubin sm bin failed exit tf to gpu binary exe failed error executing command cd c users user bazel user execroot org tensorflow bazel out windows opt bin tensorflow compiler mlir tools kernel gen tf to gpu binary exe same shape unroll factors tile sizes arch sm input bazel out windows opt bin tensorflow core kernels mlir generated abs mlir output bazel out windows opt bin tensorflow core kernels mlir generated abs kernel cubin sm bin execution platform local execution config platform platform i tensorflow compiler mlir tensorflow utils dump mlir util cc disabling mlir crash reproducer set env var mlir crash reproducer directory to enable warning linking two modules of different data layouts c program files nvidia gpu computing toolkit cuda nvvm libdevice libdevice bc is e whereas acme is e i tensorflow core platform windows subprocess cc subprocess ended with return code i tensorflow core platform windows subprocess cc subprocess ended with return code e tensorflow compiler mlir tools kernel gen tf to gpu binary cc internal lowering to llvm ir failed target tensorflow tools pip package build pip package failed to build error c users user documents cpplibrarys tensorflow tensorflow python data experimental service build compile tensorflow core kernels mlir generated tanh kernel cubin sm bin failed exit tf to gpu binary exe failed error executing command cd c users user bazel user execroot org tensorflow bazel out windows opt bin tensorflow compiler mlir tools kernel gen tf to gpu binary exe same shape unroll factors tile sizes arch sm input bazel out windows opt bin tensorflow core kernels mlir generated tanh mlir output bazel out windows opt bin tensorflow core kernels mlir generated tanh kernel cubin sm bin execution platform local execution config platform platform info elapsed time critical path info processes failed build did not complete successfully
0
72,949
13,940,238,962
IssuesEvent
2020-10-22 17:36:48
distributed-system-analysis/pbench
https://api.github.com/repos/distributed-system-analysis/pbench
closed
Tracking issue for backports of PRs from `master` for `b0.70`
Agent Code Infrastructure Server
Below is a list of all the PRs that we need to back-port to the `b0.70` branch. - [x] #1901 (via PR #1903) - [x] #1897 (via PR #1903) - [x] #1894 (via PR #1903) - [x] #1887 (via PR #1895)
1.0
Tracking issue for backports of PRs from `master` for `b0.70` - Below is a list of all the PRs that we need to back-port to the `b0.70` branch. - [x] #1901 (via PR #1903) - [x] #1897 (via PR #1903) - [x] #1894 (via PR #1903) - [x] #1887 (via PR #1895)
non_test
tracking issue for backports of prs from master for below is a list of all the prs that we need to back port to the branch via pr via pr via pr via pr
0
330,705
28,484,366,228
IssuesEvent
2023-04-18 06:43:21
microsoft/AzureStorageExplorer
https://api.github.com/repos/microsoft/AzureStorageExplorer
opened
An error dialog pops up when trying to import one large .csv file
🧪 testing :gear: tables :beetle: regression
**Storage Explorer Version**: 1.29.0-dev **Build Number**: 202300418.4 **Branch**: main **Platform/OS**: Windows 10/Linux Ubuntu 22.04 **Architecture**: ia32/x64 **How Found**: From running test case **Regression From**: Previous release (1.28.1) ## Steps to Reproduce ## 1. Expand one storage account -> Tables. 2. Create a table -> Click 'Import'. 3. Select one large .csv file (contains 200x1million entities) -> Click 'Open'. 4. Check no error dialog pops up. ## Expected Experience ## No error dialog pops up. ## Actual Experience ## An error dialog pops up. ![image](https://user-images.githubusercontent.com/41351993/232688977-5ba7b9c9-b805-4c16-9f80-89fdc95c717d.png) **Error info:** Dialog window failed to acknowledge (19805357618500). The renderer process may be frozen. ## Additional Context ## This issue doesn't reproduce on MacOS Ventura 13.3.1 (Apple M1 Pro).
1.0
An error dialog pops up when trying to import one large .csv file - **Storage Explorer Version**: 1.29.0-dev **Build Number**: 202300418.4 **Branch**: main **Platform/OS**: Windows 10/Linux Ubuntu 22.04 **Architecture**: ia32/x64 **How Found**: From running test case **Regression From**: Previous release (1.28.1) ## Steps to Reproduce ## 1. Expand one storage account -> Tables. 2. Create a table -> Click 'Import'. 3. Select one large .csv file (contains 200x1million entities) -> Click 'Open'. 4. Check no error dialog pops up. ## Expected Experience ## No error dialog pops up. ## Actual Experience ## An error dialog pops up. ![image](https://user-images.githubusercontent.com/41351993/232688977-5ba7b9c9-b805-4c16-9f80-89fdc95c717d.png) **Error info:** Dialog window failed to acknowledge (19805357618500). The renderer process may be frozen. ## Additional Context ## This issue doesn't reproduce on MacOS Ventura 13.3.1 (Apple M1 Pro).
test
an error dialog pops up when trying to import one large csv file storage explorer version dev build number branch main platform os windows linux ubuntu architecture how found from running test case regression from previous release steps to reproduce expand one storage account tables create a table click import select one large csv file contains entities click open check no error dialog pops up expected experience no error dialog pops up actual experience an error dialog pops up error info dialog window failed to acknowledge the renderer process may be frozen additional context this issue doesn t reproduce on macos ventura apple pro
1
112,840
14,292,110,609
IssuesEvent
2020-11-24 00:14:21
cammelworks/doubleEdged
https://api.github.com/repos/cammelworks/doubleEdged
closed
アカウント情報ページの修正
Design enhancement
![image](https://user-images.githubusercontent.com/40158101/99528211-4fe94480-29e1-11eb-845f-f5b14b178f1e.jpeg) - [ ] ~アイコン画像をホームにあるアカウントバーに表示~ - [ ] ~アイコン画像をタップして変更できるようにする~ - [x] 名前の横の「変更」ボタンを鉛筆マーク✏️にする - [x] クリスタルの個数を表示 ### Release #257
1.0
アカウント情報ページの修正 - ![image](https://user-images.githubusercontent.com/40158101/99528211-4fe94480-29e1-11eb-845f-f5b14b178f1e.jpeg) - [ ] ~アイコン画像をホームにあるアカウントバーに表示~ - [ ] ~アイコン画像をタップして変更できるようにする~ - [x] 名前の横の「変更」ボタンを鉛筆マーク✏️にする - [x] クリスタルの個数を表示 ### Release #257
non_test
アカウント情報ページの修正 アイコン画像をホームにあるアカウントバーに表示 アイコン画像をタップして変更できるようにする 名前の横の「変更」ボタンを鉛筆マーク✏️にする クリスタルの個数を表示 release
0
587,418
17,615,392,445
IssuesEvent
2021-08-18 09:05:02
google/android-fhir
https://api.github.com/repos/google/android-fhir
closed
Handle Questionnaire with item repeats to handle questions which can have more than one answer
enhancement help wanted high priority Q3 2021
[Questionnaire.item.repeats](https://www.hl7.org/fhir/questionnaire-definitions.html#Questionnaire.item.repeats) is used to indicate that a particular question / group of questions can have multiple answers. > When rendering the questionnaire, it is up to the rendering software whether to render the question text for each answer repetition (i.e. "repeat the question") or to simply allow entry/selection of multiple answers for the question (repeat the answers). This issue is linked to the following: - [ ] https://github.com/google/android-fhir/issues/606
1.0
Handle Questionnaire with item repeats to handle questions which can have more than one answer - [Questionnaire.item.repeats](https://www.hl7.org/fhir/questionnaire-definitions.html#Questionnaire.item.repeats) is used to indicate that a particular question / group of questions can have multiple answers. > When rendering the questionnaire, it is up to the rendering software whether to render the question text for each answer repetition (i.e. "repeat the question") or to simply allow entry/selection of multiple answers for the question (repeat the answers). This issue is linked to the following: - [ ] https://github.com/google/android-fhir/issues/606
non_test
handle questionnaire with item repeats to handle questions which can have more than one answer is used to indicate that a particular question group of questions can have multiple answers when rendering the questionnaire it is up to the rendering software whether to render the question text for each answer repetition i e repeat the question or to simply allow entry selection of multiple answers for the question repeat the answers this issue is linked to the following
0
244,610
18,763,272,536
IssuesEvent
2021-11-05 19:16:32
aws/aws-cdk
https://api.github.com/repos/aws/aws-cdk
closed
(aws-ecs): FargateService creates new SecurityGroup instead of using VPC default
bug p2 @aws-cdk/aws-ecs documentation
According to the [documentation of the FargateService](https://docs.aws.amazon.com/cdk/api/latest/docs/@aws-cdk_aws-ecs.FargateService.html#securitygroups) the default SecurityGroup of the VPC is being used when no group is defined. However it seems to be implemented differently if you look at: https://github.com/aws/aws-cdk/blob/daa5d66bce204df437bdc28844d7c5944710954b/packages/%40aws-cdk/aws-ecs/lib/base/base-service.ts#L804 A new SecurityGroup is created instead of using the default one of the VPC.
1.0
(aws-ecs): FargateService creates new SecurityGroup instead of using VPC default - According to the [documentation of the FargateService](https://docs.aws.amazon.com/cdk/api/latest/docs/@aws-cdk_aws-ecs.FargateService.html#securitygroups) the default SecurityGroup of the VPC is being used when no group is defined. However it seems to be implemented differently if you look at: https://github.com/aws/aws-cdk/blob/daa5d66bce204df437bdc28844d7c5944710954b/packages/%40aws-cdk/aws-ecs/lib/base/base-service.ts#L804 A new SecurityGroup is created instead of using the default one of the VPC.
non_test
aws ecs fargateservice creates new securitygroup instead of using vpc default according to the the default securitygroup of the vpc is being used when no group is defined however it seems to be implemented differently if you look at a new securitygroup is created instead of using the default one of the vpc
0
122,992
17,772,042,228
IssuesEvent
2021-08-30 14:41:25
kapseliboi/sagefy
https://api.github.com/repos/kapseliboi/sagefy
opened
CVE-2021-23364 (Medium) detected in browserslist-4.11.1.tgz
security vulnerability
## CVE-2021-23364 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>browserslist-4.11.1.tgz</b></p></summary> <p>Share target browsers between different front-end tools, like Autoprefixer, Stylelint and babel-env-preset</p> <p>Library home page: <a href="https://registry.npmjs.org/browserslist/-/browserslist-4.11.1.tgz">https://registry.npmjs.org/browserslist/-/browserslist-4.11.1.tgz</a></p> <p>Path to dependency file: sagefy/client/package.json</p> <p>Path to vulnerable library: sagefy/client/node_modules/browserslist/package.json,sagefy/server/node_modules/browserslist/package.json</p> <p> Dependency Hierarchy: - preset-env-7.9.0.tgz (Root Library) - :x: **browserslist-4.11.1.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/kapseliboi/sagefy/commit/85736dad6168b94cc8ec8bfe513dc6d3fd360e38">85736dad6168b94cc8ec8bfe513dc6d3fd360e38</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The package browserslist from 4.0.0 and before 4.16.5 are vulnerable to Regular Expression Denial of Service (ReDoS) during parsing of queries. <p>Publish Date: 2021-04-28 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23364>CVE-2021-23364</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23364">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23364</a></p> <p>Release Date: 2021-04-28</p> <p>Fix Resolution: browserslist - 4.16.5</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2021-23364 (Medium) detected in browserslist-4.11.1.tgz - ## CVE-2021-23364 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>browserslist-4.11.1.tgz</b></p></summary> <p>Share target browsers between different front-end tools, like Autoprefixer, Stylelint and babel-env-preset</p> <p>Library home page: <a href="https://registry.npmjs.org/browserslist/-/browserslist-4.11.1.tgz">https://registry.npmjs.org/browserslist/-/browserslist-4.11.1.tgz</a></p> <p>Path to dependency file: sagefy/client/package.json</p> <p>Path to vulnerable library: sagefy/client/node_modules/browserslist/package.json,sagefy/server/node_modules/browserslist/package.json</p> <p> Dependency Hierarchy: - preset-env-7.9.0.tgz (Root Library) - :x: **browserslist-4.11.1.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/kapseliboi/sagefy/commit/85736dad6168b94cc8ec8bfe513dc6d3fd360e38">85736dad6168b94cc8ec8bfe513dc6d3fd360e38</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The package browserslist from 4.0.0 and before 4.16.5 are vulnerable to Regular Expression Denial of Service (ReDoS) during parsing of queries. <p>Publish Date: 2021-04-28 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23364>CVE-2021-23364</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23364">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23364</a></p> <p>Release Date: 2021-04-28</p> <p>Fix Resolution: browserslist - 4.16.5</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_test
cve medium detected in browserslist tgz cve medium severity vulnerability vulnerable library browserslist tgz share target browsers between different front end tools like autoprefixer stylelint and babel env preset library home page a href path to dependency file sagefy client package json path to vulnerable library sagefy client node modules browserslist package json sagefy server node modules browserslist package json dependency hierarchy preset env tgz root library x browserslist tgz vulnerable library found in head commit a href found in base branch master vulnerability details the package browserslist from and before are vulnerable to regular expression denial of service redos during parsing of queries publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution browserslist step up your open source security game with whitesource
0
91,988
8,336,032,956
IssuesEvent
2018-09-28 06:06:59
nicupavel/openpanzer
https://api.github.com/repos/nicupavel/openpanzer
closed
A unit on the victory tile does not claim the tile
bug fixed-testing
Hi While playing the Student scenario from the "Das Reich" campaign, I faced the following weird bug. There is my unit on the victory tile, but the tile is not claimed i.e. it still belongs to the enemy. I am attaching the saved game illustrating it. There is a top most victory tile, a city called Valkenburg, in which there is my unit, but the tile still belongs to the enemy (according to the strategic map and the number of objectives left). [(Campaign) Student Turn 2 Sun Jan 01 2017 11_23_37 AM.json.zip](https://github.com/nicupavel/openpanzer/files/680685/Campaign.Student.Turn.2.Sun.Jan.01.2017.11_23_37.AM.json.zip) This bug can be reproduced by performing the following actions: - start the Student scenario; - there is a paratrooper unit over Valkenburg, so it is possible drop them directly into the city, whereupon the city will be captured (everything is fine); - after pressing "next turn", a weird thing happens - the unit in the city somehow changes its ownership and the game shows that the city is again captured by the enemy; - saving and loading the game restores unit ownership, but the city still belongs to the enemy; It seems that there is some bug in the game engine, which for some reason causes unit ownership change.
1.0
A unit on the victory tile does not claim the tile - Hi While playing the Student scenario from the "Das Reich" campaign, I faced the following weird bug. There is my unit on the victory tile, but the tile is not claimed i.e. it still belongs to the enemy. I am attaching the saved game illustrating it. There is a top most victory tile, a city called Valkenburg, in which there is my unit, but the tile still belongs to the enemy (according to the strategic map and the number of objectives left). [(Campaign) Student Turn 2 Sun Jan 01 2017 11_23_37 AM.json.zip](https://github.com/nicupavel/openpanzer/files/680685/Campaign.Student.Turn.2.Sun.Jan.01.2017.11_23_37.AM.json.zip) This bug can be reproduced by performing the following actions: - start the Student scenario; - there is a paratrooper unit over Valkenburg, so it is possible drop them directly into the city, whereupon the city will be captured (everything is fine); - after pressing "next turn", a weird thing happens - the unit in the city somehow changes its ownership and the game shows that the city is again captured by the enemy; - saving and loading the game restores unit ownership, but the city still belongs to the enemy; It seems that there is some bug in the game engine, which for some reason causes unit ownership change.
test
a unit on the victory tile does not claim the tile hi while playing the student scenario from the das reich campaign i faced the following weird bug there is my unit on the victory tile but the tile is not claimed i e it still belongs to the enemy i am attaching the saved game illustrating it there is a top most victory tile a city called valkenburg in which there is my unit but the tile still belongs to the enemy according to the strategic map and the number of objectives left this bug can be reproduced by performing the following actions start the student scenario there is a paratrooper unit over valkenburg so it is possible drop them directly into the city whereupon the city will be captured everything is fine after pressing next turn a weird thing happens the unit in the city somehow changes its ownership and the game shows that the city is again captured by the enemy saving and loading the game restores unit ownership but the city still belongs to the enemy it seems that there is some bug in the game engine which for some reason causes unit ownership change
1
330,200
28,358,403,022
IssuesEvent
2023-04-12 09:00:52
cockroachdb/cockroach
https://api.github.com/repos/cockroachdb/cockroach
opened
ccl/streamingccl/streamingest: TestRandomClientGeneration failed
C-test-failure O-robot branch-release-23.1.0
ccl/streamingccl/streamingest.TestRandomClientGeneration [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/9555706?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/9555706?buildTab=artifacts#/) on release-23.1.0 @ [f1921dbd499fd258a606c4e7180aff7b82b6f900](https://github.com/cockroachdb/cockroach/commits/f1921dbd499fd258a606c4e7180aff7b82b6f900): ``` === RUN TestRandomClientGeneration test_log_scope.go:161: test logs captured to: /artifacts/tmp/_tmp/90c1b75d835f45b8488807abb5b1092d/logTestRandomClientGeneration3975357437 test_log_scope.go:79: use -show-logs to present logs inline * * INFO: Running test with the default test tenant. If you are only seeing a test case failure when this message appears, there may be a problem with your test case running within tenants. * * * INFO: Running test with the default test tenant. If you are only seeing a test case failure when this message appears, there may be a problem with your test case running within tenants. * * * INFO: Running test with the default test tenant. If you are only seeing a test case failure when this message appears, there may be a problem with your test case running within tenants. * stream_ingestion_processor_test.go:404: Error Trace: github.com/cockroachdb/cockroach/pkg/ccl/streamingccl/streamingest/stream_ingestion_processor_test.go:404 github.com/cockroachdb/cockroach/pkg/ccl/streamingccl/streamingest/stream_ingestion_processor_test.go:552 Error: "0" is not greater than "0" Test: TestRandomClientGeneration panic.go:540: -- test log scope end -- test logs left over in: /artifacts/tmp/_tmp/90c1b75d835f45b8488807abb5b1092d/logTestRandomClientGeneration3975357437 --- FAIL: TestRandomClientGeneration (82.18s) ``` <p>Parameters: <code>TAGS=bazel,gss,race</code> </p> <details><summary>Help</summary> <p> See also: [How To Investigate a Go Test Failure \(internal\)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM) </p> </details> <details><summary>Same failure on other branches</summary> <p> - #101211 ccl/streamingccl/streamingest: TestRandomClientGeneration failed [C-test-failure O-robot T-disaster-recovery branch-release-23.1] </p> </details> /cc @cockroachdb/disaster-recovery <sub> [This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*TestRandomClientGeneration.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues) </sub>
1.0
ccl/streamingccl/streamingest: TestRandomClientGeneration failed - ccl/streamingccl/streamingest.TestRandomClientGeneration [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/9555706?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/9555706?buildTab=artifacts#/) on release-23.1.0 @ [f1921dbd499fd258a606c4e7180aff7b82b6f900](https://github.com/cockroachdb/cockroach/commits/f1921dbd499fd258a606c4e7180aff7b82b6f900): ``` === RUN TestRandomClientGeneration test_log_scope.go:161: test logs captured to: /artifacts/tmp/_tmp/90c1b75d835f45b8488807abb5b1092d/logTestRandomClientGeneration3975357437 test_log_scope.go:79: use -show-logs to present logs inline * * INFO: Running test with the default test tenant. If you are only seeing a test case failure when this message appears, there may be a problem with your test case running within tenants. * * * INFO: Running test with the default test tenant. If you are only seeing a test case failure when this message appears, there may be a problem with your test case running within tenants. * * * INFO: Running test with the default test tenant. If you are only seeing a test case failure when this message appears, there may be a problem with your test case running within tenants. * stream_ingestion_processor_test.go:404: Error Trace: github.com/cockroachdb/cockroach/pkg/ccl/streamingccl/streamingest/stream_ingestion_processor_test.go:404 github.com/cockroachdb/cockroach/pkg/ccl/streamingccl/streamingest/stream_ingestion_processor_test.go:552 Error: "0" is not greater than "0" Test: TestRandomClientGeneration panic.go:540: -- test log scope end -- test logs left over in: /artifacts/tmp/_tmp/90c1b75d835f45b8488807abb5b1092d/logTestRandomClientGeneration3975357437 --- FAIL: TestRandomClientGeneration (82.18s) ``` <p>Parameters: <code>TAGS=bazel,gss,race</code> </p> <details><summary>Help</summary> <p> See also: [How To Investigate a Go Test Failure \(internal\)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM) </p> </details> <details><summary>Same failure on other branches</summary> <p> - #101211 ccl/streamingccl/streamingest: TestRandomClientGeneration failed [C-test-failure O-robot T-disaster-recovery branch-release-23.1] </p> </details> /cc @cockroachdb/disaster-recovery <sub> [This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*TestRandomClientGeneration.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues) </sub>
test
ccl streamingccl streamingest testrandomclientgeneration failed ccl streamingccl streamingest testrandomclientgeneration with on release run testrandomclientgeneration test log scope go test logs captured to artifacts tmp tmp test log scope go use show logs to present logs inline info running test with the default test tenant if you are only seeing a test case failure when this message appears there may be a problem with your test case running within tenants info running test with the default test tenant if you are only seeing a test case failure when this message appears there may be a problem with your test case running within tenants info running test with the default test tenant if you are only seeing a test case failure when this message appears there may be a problem with your test case running within tenants stream ingestion processor test go error trace github com cockroachdb cockroach pkg ccl streamingccl streamingest stream ingestion processor test go github com cockroachdb cockroach pkg ccl streamingccl streamingest stream ingestion processor test go error is not greater than test testrandomclientgeneration panic go test log scope end test logs left over in artifacts tmp tmp fail testrandomclientgeneration parameters tags bazel gss race help see also same failure on other branches ccl streamingccl streamingest testrandomclientgeneration failed cc cockroachdb disaster recovery
1
313,292
26,915,295,705
IssuesEvent
2023-02-07 05:39:48
yugabyte/yugabyte-db
https://api.github.com/repos/yugabyte/yugabyte-db
opened
[YSQL][Unit test] Flaky test: org.yb.cql.TestSelect.testLargeParallelIn
area/ysql kind/failing-test status/awaiting-triage
### Description The test org.yb.cql.TestSelect.testLargeParallelIn is flaky and fails with Assertion Error in multiple builds and this is also reproducible on alma8 dev-server. It fails like 5/60 times with error: `java.lang.AssertionError: expected:<1> but was:<0>` Reproducible command: `./yb_build.sh --java-test 'org.yb.cql.TestSelect#testLargeParallelIn' -n 100` Local Logs: https://gist.github.com/rjalan-yb/5b180727cf9d7ede4b5f2df1644356d7 Complete logs Jenkins run: https://gist.github.com/rjalan-yb/08d2061dbe68838ff52bb3f105c97b4c
1.0
[YSQL][Unit test] Flaky test: org.yb.cql.TestSelect.testLargeParallelIn - ### Description The test org.yb.cql.TestSelect.testLargeParallelIn is flaky and fails with Assertion Error in multiple builds and this is also reproducible on alma8 dev-server. It fails like 5/60 times with error: `java.lang.AssertionError: expected:<1> but was:<0>` Reproducible command: `./yb_build.sh --java-test 'org.yb.cql.TestSelect#testLargeParallelIn' -n 100` Local Logs: https://gist.github.com/rjalan-yb/5b180727cf9d7ede4b5f2df1644356d7 Complete logs Jenkins run: https://gist.github.com/rjalan-yb/08d2061dbe68838ff52bb3f105c97b4c
test
flaky test org yb cql testselect testlargeparallelin description the test org yb cql testselect testlargeparallelin is flaky and fails with assertion error in multiple builds and this is also reproducible on dev server it fails like times with error java lang assertionerror expected but was reproducible command yb build sh java test org yb cql testselect testlargeparallelin n local logs complete logs jenkins run
1
258,307
22,301,658,700
IssuesEvent
2022-06-13 09:16:13
dusk-network/dusk-blockchain
https://api.github.com/repos/dusk-network/dusk-blockchain
closed
Tune block gas limit
mark:testnet
**Describe "Why" this is needed** At the moment the block gas limit is set with 1000B gas point. This means that it can handle up to ~10k simple transfer per block. This impossibilitate to easy test the Block limit reaching in a running cluster. **Describe alternatives you've considered** Preload the mempool with 10k transfers, but this is not sustainable to reproduce without restarting the cluster **Additional context** Up to 20transfer per block should be enough
1.0
Tune block gas limit - **Describe "Why" this is needed** At the moment the block gas limit is set with 1000B gas point. This means that it can handle up to ~10k simple transfer per block. This impossibilitate to easy test the Block limit reaching in a running cluster. **Describe alternatives you've considered** Preload the mempool with 10k transfers, but this is not sustainable to reproduce without restarting the cluster **Additional context** Up to 20transfer per block should be enough
test
tune block gas limit describe why this is needed at the moment the block gas limit is set with gas point this means that it can handle up to simple transfer per block this impossibilitate to easy test the block limit reaching in a running cluster describe alternatives you ve considered preload the mempool with transfers but this is not sustainable to reproduce without restarting the cluster additional context up to per block should be enough
1
348,486
31,622,113,920
IssuesEvent
2023-09-06 00:28:42
sayakongit/status-code-sangnet
https://api.github.com/repos/sayakongit/status-code-sangnet
opened
Unit test cases for backend
hacktoberfest unit-tests
### Description For the backend, we need to write unit test cases for each existing modules with the maximum code coverage. These may be related to API test or functional test. The test cases would be both positive and negative ensuring a better end user experience and robust usage.
1.0
Unit test cases for backend - ### Description For the backend, we need to write unit test cases for each existing modules with the maximum code coverage. These may be related to API test or functional test. The test cases would be both positive and negative ensuring a better end user experience and robust usage.
test
unit test cases for backend description for the backend we need to write unit test cases for each existing modules with the maximum code coverage these may be related to api test or functional test the test cases would be both positive and negative ensuring a better end user experience and robust usage
1
306,208
26,447,622,277
IssuesEvent
2023-01-16 08:47:56
anoma/namada
https://api.github.com/repos/anoma/namada
closed
Add a test that PrepareProposal returns the correct transactions in the response to Tendermint when proposing a block
ledger testing
(Relates to https://github.com/anoma/namada/pull/494 but is more general than that PR) When calling `PrepareProposal` with some txs from the mempool, we should check that the returned proposed block contains: - *all* `ProtocolTxType::VoteExtension` protocol transactions that were passed to `PrepareProposal` - appropriate vote extension digest transactions (i.e. `ProtocolTxType::EthereumEvents` and `ProtocolTxType::ValidatorSetUpdate`) based on the `ProtocolTxType::VoteExtension`s passed in - roughly half of any wrapper transactions that were passed to `PrepareProposal` - the rest of the returned transactions should be decrypted transactions This can be a test in Rust code.
1.0
Add a test that PrepareProposal returns the correct transactions in the response to Tendermint when proposing a block - (Relates to https://github.com/anoma/namada/pull/494 but is more general than that PR) When calling `PrepareProposal` with some txs from the mempool, we should check that the returned proposed block contains: - *all* `ProtocolTxType::VoteExtension` protocol transactions that were passed to `PrepareProposal` - appropriate vote extension digest transactions (i.e. `ProtocolTxType::EthereumEvents` and `ProtocolTxType::ValidatorSetUpdate`) based on the `ProtocolTxType::VoteExtension`s passed in - roughly half of any wrapper transactions that were passed to `PrepareProposal` - the rest of the returned transactions should be decrypted transactions This can be a test in Rust code.
test
add a test that prepareproposal returns the correct transactions in the response to tendermint when proposing a block relates to but is more general than that pr when calling prepareproposal with some txs from the mempool we should check that the returned proposed block contains all protocoltxtype voteextension protocol transactions that were passed to prepareproposal appropriate vote extension digest transactions i e protocoltxtype ethereumevents and protocoltxtype validatorsetupdate based on the protocoltxtype voteextension s passed in roughly half of any wrapper transactions that were passed to prepareproposal the rest of the returned transactions should be decrypted transactions this can be a test in rust code
1
368,355
10,878,023,842
IssuesEvent
2019-11-16 14:49:18
piperhaywood/notebook-ph
https://api.github.com/repos/piperhaywood/notebook-ph
closed
List formatting w/in header description is off
concern: front-end priority: low 🌤 type: bug 🐜
<img width="1440" alt="Screenshot 2019-11-12 at 10 06 57" src="https://user-images.githubusercontent.com/4711611/68662195-43abeb00-0534-11ea-8ada-f190b2f17bf8.png"> Definitely related to the Gutenberg refactor
1.0
List formatting w/in header description is off - <img width="1440" alt="Screenshot 2019-11-12 at 10 06 57" src="https://user-images.githubusercontent.com/4711611/68662195-43abeb00-0534-11ea-8ada-f190b2f17bf8.png"> Definitely related to the Gutenberg refactor
non_test
list formatting w in header description is off img width alt screenshot at src definitely related to the gutenberg refactor
0
30,209
14,475,933,227
IssuesEvent
2020-12-10 02:51:11
angular/angularfire
https://api.github.com/repos/angular/angularfire
closed
6.0RC1: canActivate seems to have performance issues
convert: general needs: investigation type: performance
Updated to `6.0.0-rc.1`, the issue with performance still seems to remain for me. Here are two preformance profiles I took, the fast one with `5.4.2` and the slow one on `6.0.0-rc.1` [profiles.zip](https://github.com/angular/angularfire/files/4164159/profiles.zip) _Originally posted by @wSedlacek in https://github.com/angular/angularfire/issues/2312#issuecomment-582806005_
True
6.0RC1: canActivate seems to have performance issues - Updated to `6.0.0-rc.1`, the issue with performance still seems to remain for me. Here are two preformance profiles I took, the fast one with `5.4.2` and the slow one on `6.0.0-rc.1` [profiles.zip](https://github.com/angular/angularfire/files/4164159/profiles.zip) _Originally posted by @wSedlacek in https://github.com/angular/angularfire/issues/2312#issuecomment-582806005_
non_test
canactivate seems to have performance issues updated to rc the issue with performance still seems to remain for me here are two preformance profiles i took the fast one with and the slow one on rc originally posted by wsedlacek in
0
286,752
24,781,933,429
IssuesEvent
2022-10-24 06:18:54
dimitri/pgloader
https://api.github.com/repos/dimitri/pgloader
closed
FATEL error on "SET SQL_MODE = "NO_AUTO_"
Needs more testing / information
Hello, I did a mysql dump normal way and try to convert it. But I get this error: ``` 2022-06-15T17:54:01.054000Z LOG pgloader version "3.6.1" KABOOM! FATAL error: At SET SQL_MODE = "NO_AUTO_ ^ (Line 10, Column 0, Position 218) In context KW-LOAD: While parsing KW-LOAD. Expected: the character Tab or the character Newline or the character Return or the character Space or the string "--" or the string "/*" or the string "load" An unhandled error condition has been signalled: At SET SQL_MODE = "NO_AUTO_ ^ (Line 10, Column 0, Position 218) In context KW-LOAD: While parsing KW-LOAD. Expected: the character Tab or the character Newline or the character Return or the character Space or the string "--" or the string "/*" or the string "load" What I am doing here? At SET SQL_MODE = "NO_AUTO_ ^ (Line 10, Column 0, Position 218) In context KW-LOAD: While parsing KW-LOAD. Expected: the character Tab or the character Newline or the character Return or the character Space or the string "--" or the string "/*" or the string "load" ```
1.0
FATEL error on "SET SQL_MODE = "NO_AUTO_" - Hello, I did a mysql dump normal way and try to convert it. But I get this error: ``` 2022-06-15T17:54:01.054000Z LOG pgloader version "3.6.1" KABOOM! FATAL error: At SET SQL_MODE = "NO_AUTO_ ^ (Line 10, Column 0, Position 218) In context KW-LOAD: While parsing KW-LOAD. Expected: the character Tab or the character Newline or the character Return or the character Space or the string "--" or the string "/*" or the string "load" An unhandled error condition has been signalled: At SET SQL_MODE = "NO_AUTO_ ^ (Line 10, Column 0, Position 218) In context KW-LOAD: While parsing KW-LOAD. Expected: the character Tab or the character Newline or the character Return or the character Space or the string "--" or the string "/*" or the string "load" What I am doing here? At SET SQL_MODE = "NO_AUTO_ ^ (Line 10, Column 0, Position 218) In context KW-LOAD: While parsing KW-LOAD. Expected: the character Tab or the character Newline or the character Return or the character Space or the string "--" or the string "/*" or the string "load" ```
test
fatel error on set sql mode no auto hello i did a mysql dump normal way and try to convert it but i get this error log pgloader version kaboom fatal error at set sql mode no auto line column position in context kw load while parsing kw load expected the character tab or the character newline or the character return or the character space or the string or the string or the string load an unhandled error condition has been signalled at set sql mode no auto line column position in context kw load while parsing kw load expected the character tab or the character newline or the character return or the character space or the string or the string or the string load what i am doing here at set sql mode no auto line column position in context kw load while parsing kw load expected the character tab or the character newline or the character return or the character space or the string or the string or the string load
1
21,033
3,868,830,934
IssuesEvent
2016-04-10 07:10:20
colinxfleming/dcaf_case_management
https://api.github.com/repos/colinxfleming/dcaf_case_management
closed
Unit test userstamp/history/timestamp for core models
minitest
Stemming from #198 we should make sure to userstamp/history/timestamp all this for the following controllers: - [ ] Patient - [ ] Pregnancy - [ ] Pledge - [ ] Call - [ ] Clinic - [x] Note We can be casual about it (just check `respond_to`s) but should have some representation of it in our controller and model tests, since we're relying on these for frontend functionality and not just auditing.
1.0
Unit test userstamp/history/timestamp for core models - Stemming from #198 we should make sure to userstamp/history/timestamp all this for the following controllers: - [ ] Patient - [ ] Pregnancy - [ ] Pledge - [ ] Call - [ ] Clinic - [x] Note We can be casual about it (just check `respond_to`s) but should have some representation of it in our controller and model tests, since we're relying on these for frontend functionality and not just auditing.
test
unit test userstamp history timestamp for core models stemming from we should make sure to userstamp history timestamp all this for the following controllers patient pregnancy pledge call clinic note we can be casual about it just check respond to s but should have some representation of it in our controller and model tests since we re relying on these for frontend functionality and not just auditing
1
60,112
25,003,145,604
IssuesEvent
2022-11-03 09:42:25
amplication/amplication
https://api.github.com/repos/amplication/amplication
closed
Selective code generation: add enable\disable app database
type: feature request @amplication/client @amplication/server @amplication/data-service-generator generated server
Update application database settings and add enable\disable toggle. the default value for applications is database enabled. when the database is disabled the application database-related code generation should not be part of the code generation. database-related code: * prisma client * database docker-compose * database environments ![image](https://user-images.githubusercontent.com/91742238/167671233-dfe457a5-1c4b-482d-aad2-137d3dcab54d.png) - [ ] POC - generate app with Amplication, disconnect the DB-related code and make it work. - [ ] Implement the changes on the code that depends on the DB + tests - [ ] Implement the the toggle logic result on the server + tests - [ ] implement the toggle UI and login on the client + tests - [ ] testing all the flow together
1.0
Selective code generation: add enable\disable app database - Update application database settings and add enable\disable toggle. the default value for applications is database enabled. when the database is disabled the application database-related code generation should not be part of the code generation. database-related code: * prisma client * database docker-compose * database environments ![image](https://user-images.githubusercontent.com/91742238/167671233-dfe457a5-1c4b-482d-aad2-137d3dcab54d.png) - [ ] POC - generate app with Amplication, disconnect the DB-related code and make it work. - [ ] Implement the changes on the code that depends on the DB + tests - [ ] Implement the the toggle logic result on the server + tests - [ ] implement the toggle UI and login on the client + tests - [ ] testing all the flow together
non_test
selective code generation add enable disable app database update application database settings and add enable disable toggle the default value for applications is database enabled when the database is disabled the application database related code generation should not be part of the code generation database related code prisma client database docker compose database environments poc generate app with amplication disconnect the db related code and make it work implement the changes on the code that depends on the db tests implement the the toggle logic result on the server tests implement the toggle ui and login on the client tests testing all the flow together
0
137,151
11,101,361,821
IssuesEvent
2019-12-16 21:16:18
dotnet/roslyn
https://api.github.com/repos/dotnet/roslyn
closed
Test plan for SkipLocalsInit attribute
Area-Compilers Test
First, I will start with a simple feature description. This is not meant to be a specification, just an English-language description of the feature and the intent. The feature is to add a new well-known attribute to the compiler, System.Runtime.CompilerServices.SkipLocalsInit that causes method bodies nested "inside" the scope of the attribute to elide the ".locals init" CIL directive that causes the CLR to zero-init local variables and space reserved using the "localloc" instruction. "Nested inside the scope of the attribute" is a concept defined as follows: 1. When the attribute is applied to a module, all emitted methods inside that module, including generated methods, will skip local initialization. 2. When the attribute is applied to a type (including interfaces), all methods inside that type, including generated methods, will skip local initialization. 3. When the attribute is applied to a method, the method and all methods generated by the compiler which contain user-influenced code inside that method (e.g., local functions, lambdas, async methods) will skip local initialization. Given that definition, the test plan follows. * [x] Specification checked in to `csharplang` and/or `roslyn`. * [x] Trivial case: method with SkipLocalsInit and >0 locals lists locals but no .locals init flag on emit * [x] Method with SkipLocalsInit and no locals, but stackalloc doesn't have .locals init flag * [x] SkipLocalsInit attribute can be mixed with other attributes * [x] Nested functions inside method with SkipLocalsInit inherit setting * [x] Lambdas * [x] Local functions * [x] Async state machines * [x] Iterator state machines * [x] Local functions + async & iterator * [x] Versions of the previous nested inside one-another * [x] SkipLocalsInit *doesn't* apply to constructor of DisplayClass for closures * [x] SkipLocalsInit on module applies to generated anonymous type * [x] Other members * [x] Attribute on declaration part of partial method, but not implementation part * [x] Same with types * [x] Properties * [x] Indexers * [x] Constructors * [x] Field initializers that declare locals, e.g. through out var * [x] Finalizers * [x] Static versions of previous * [x] Property-like event handlers * [x] SkipLocalsInit on type also applies to all previous members and nested functions/generated members * [x] Abstract classes/inheritance * [x] Also inherited to generated types * [x] Nested types * [x] SkipLocalsInit on modules * [x] Applies to nested versions of all previous * [x] Works with netmodule emit * [x] If assembly references module with attribute, SkipLocalsInit is not automatically applied to other modules * [x] Applies to default interface implementations * [ ] What to emit for EnC? * [x] ~~Scripting??~~ * [ ] Add supporting type https://github.com/dotnet/coreclr/pull/20093 * [x] Update Test Plan doc to mention SkipLocalsInit for future testing ## Productivity * [ ] The diagnostic for missing `unsafe` switch should trigger appropriate fixer to fix the project
1.0
Test plan for SkipLocalsInit attribute - First, I will start with a simple feature description. This is not meant to be a specification, just an English-language description of the feature and the intent. The feature is to add a new well-known attribute to the compiler, System.Runtime.CompilerServices.SkipLocalsInit that causes method bodies nested "inside" the scope of the attribute to elide the ".locals init" CIL directive that causes the CLR to zero-init local variables and space reserved using the "localloc" instruction. "Nested inside the scope of the attribute" is a concept defined as follows: 1. When the attribute is applied to a module, all emitted methods inside that module, including generated methods, will skip local initialization. 2. When the attribute is applied to a type (including interfaces), all methods inside that type, including generated methods, will skip local initialization. 3. When the attribute is applied to a method, the method and all methods generated by the compiler which contain user-influenced code inside that method (e.g., local functions, lambdas, async methods) will skip local initialization. Given that definition, the test plan follows. * [x] Specification checked in to `csharplang` and/or `roslyn`. * [x] Trivial case: method with SkipLocalsInit and >0 locals lists locals but no .locals init flag on emit * [x] Method with SkipLocalsInit and no locals, but stackalloc doesn't have .locals init flag * [x] SkipLocalsInit attribute can be mixed with other attributes * [x] Nested functions inside method with SkipLocalsInit inherit setting * [x] Lambdas * [x] Local functions * [x] Async state machines * [x] Iterator state machines * [x] Local functions + async & iterator * [x] Versions of the previous nested inside one-another * [x] SkipLocalsInit *doesn't* apply to constructor of DisplayClass for closures * [x] SkipLocalsInit on module applies to generated anonymous type * [x] Other members * [x] Attribute on declaration part of partial method, but not implementation part * [x] Same with types * [x] Properties * [x] Indexers * [x] Constructors * [x] Field initializers that declare locals, e.g. through out var * [x] Finalizers * [x] Static versions of previous * [x] Property-like event handlers * [x] SkipLocalsInit on type also applies to all previous members and nested functions/generated members * [x] Abstract classes/inheritance * [x] Also inherited to generated types * [x] Nested types * [x] SkipLocalsInit on modules * [x] Applies to nested versions of all previous * [x] Works with netmodule emit * [x] If assembly references module with attribute, SkipLocalsInit is not automatically applied to other modules * [x] Applies to default interface implementations * [ ] What to emit for EnC? * [x] ~~Scripting??~~ * [ ] Add supporting type https://github.com/dotnet/coreclr/pull/20093 * [x] Update Test Plan doc to mention SkipLocalsInit for future testing ## Productivity * [ ] The diagnostic for missing `unsafe` switch should trigger appropriate fixer to fix the project
test
test plan for skiplocalsinit attribute first i will start with a simple feature description this is not meant to be a specification just an english language description of the feature and the intent the feature is to add a new well known attribute to the compiler system runtime compilerservices skiplocalsinit that causes method bodies nested inside the scope of the attribute to elide the locals init cil directive that causes the clr to zero init local variables and space reserved using the localloc instruction nested inside the scope of the attribute is a concept defined as follows when the attribute is applied to a module all emitted methods inside that module including generated methods will skip local initialization when the attribute is applied to a type including interfaces all methods inside that type including generated methods will skip local initialization when the attribute is applied to a method the method and all methods generated by the compiler which contain user influenced code inside that method e g local functions lambdas async methods will skip local initialization given that definition the test plan follows specification checked in to csharplang and or roslyn trivial case method with skiplocalsinit and locals lists locals but no locals init flag on emit method with skiplocalsinit and no locals but stackalloc doesn t have locals init flag skiplocalsinit attribute can be mixed with other attributes nested functions inside method with skiplocalsinit inherit setting lambdas local functions async state machines iterator state machines local functions async iterator versions of the previous nested inside one another skiplocalsinit doesn t apply to constructor of displayclass for closures skiplocalsinit on module applies to generated anonymous type other members attribute on declaration part of partial method but not implementation part same with types properties indexers constructors field initializers that declare locals e g through out var finalizers static versions of previous property like event handlers skiplocalsinit on type also applies to all previous members and nested functions generated members abstract classes inheritance also inherited to generated types nested types skiplocalsinit on modules applies to nested versions of all previous works with netmodule emit if assembly references module with attribute skiplocalsinit is not automatically applied to other modules applies to default interface implementations what to emit for enc scripting add supporting type update test plan doc to mention skiplocalsinit for future testing productivity the diagnostic for missing unsafe switch should trigger appropriate fixer to fix the project
1
227,725
18,096,091,485
IssuesEvent
2021-09-22 09:09:58
NativeScript/NativeScript
https://api.github.com/repos/NativeScript/NativeScript
closed
[FormattedString] [IOS] Button height is not measured correctly when there are some span bounded from code behind
backlog ready for test bug severity: low os: ios
Reproduced with tns-core-modules@3.1.0 ``` <Button textWrap="true" backgroundColor="red" fontSize="60"> <button.formattedText> <formattedString> <formattedString.spans> <Span text="code behind" > </Span> <Span text="{{ text }}"> </Span> </formattedString.spans> </formattedString> </button.formattedText> </Button> ``` <bountysource-plugin> --- Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/46726549-formattedstring-ios-button-height-is-not-measured-correctly-when-there-are-some-span-bounded-from-code-behind?utm_campaign=plugin&utm_content=tracker%2F12908224&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F12908224&utm_medium=issues&utm_source=github). </bountysource-plugin>
1.0
[FormattedString] [IOS] Button height is not measured correctly when there are some span bounded from code behind - Reproduced with tns-core-modules@3.1.0 ``` <Button textWrap="true" backgroundColor="red" fontSize="60"> <button.formattedText> <formattedString> <formattedString.spans> <Span text="code behind" > </Span> <Span text="{{ text }}"> </Span> </formattedString.spans> </formattedString> </button.formattedText> </Button> ``` <bountysource-plugin> --- Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/46726549-formattedstring-ios-button-height-is-not-measured-correctly-when-there-are-some-span-bounded-from-code-behind?utm_campaign=plugin&utm_content=tracker%2F12908224&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F12908224&utm_medium=issues&utm_source=github). </bountysource-plugin>
test
button height is not measured correctly when there are some span bounded from code behind reproduced with tns core modules want to back this issue we accept bounties via
1
13,156
8,133,503,681
IssuesEvent
2018-08-19 02:50:17
TabbycatDebate/tabbycat
https://api.github.com/repos/TabbycatDebate/tabbycat
closed
Performance: implement NGINX
performance
These may deliver better performance than using Daphne and may be worth trialling.
True
Performance: implement NGINX - These may deliver better performance than using Daphne and may be worth trialling.
non_test
performance implement nginx these may deliver better performance than using daphne and may be worth trialling
0
386,449
11,439,280,673
IssuesEvent
2020-02-05 06:46:42
kubernetes/kubernetes
https://api.github.com/repos/kubernetes/kubernetes
closed
Ability to specify internal interface and external interface in Kubeadm
kind/feature priority/awaiting-more-evidence sig/cluster-lifecycle sig/node
It seems like by default kubeadm tries to figure out the internal-ip by looking at what the hostname resolves to in `/etc/hosts`. However, this is not ideal since it resolves to the public ip on some cloud providers. **What would you like to be added**: I'd like the ability in kubeadm/KubeletConfiguration to specify what it should use as the internal-ip and what it should use as the external-ip. **Why is this needed**: To have a proper overview in `kubectl get nodes`, as well as making sure that all internal comms use the internal ip address. Related: kubernetes/kubeadm#1987
1.0
Ability to specify internal interface and external interface in Kubeadm - It seems like by default kubeadm tries to figure out the internal-ip by looking at what the hostname resolves to in `/etc/hosts`. However, this is not ideal since it resolves to the public ip on some cloud providers. **What would you like to be added**: I'd like the ability in kubeadm/KubeletConfiguration to specify what it should use as the internal-ip and what it should use as the external-ip. **Why is this needed**: To have a proper overview in `kubectl get nodes`, as well as making sure that all internal comms use the internal ip address. Related: kubernetes/kubeadm#1987
non_test
ability to specify internal interface and external interface in kubeadm it seems like by default kubeadm tries to figure out the internal ip by looking at what the hostname resolves to in etc hosts however this is not ideal since it resolves to the public ip on some cloud providers what would you like to be added i d like the ability in kubeadm kubeletconfiguration to specify what it should use as the internal ip and what it should use as the external ip why is this needed to have a proper overview in kubectl get nodes as well as making sure that all internal comms use the internal ip address related kubernetes kubeadm
0
189,227
14,494,744,746
IssuesEvent
2020-12-11 10:12:32
FEUP-ESOF-2020-21/open-cx-t7g5-5-estrelinhas
https://api.github.com/repos/FEUP-ESOF-2020-21/open-cx-t7g5-5-estrelinhas
opened
Unit tests
testing
Use flutter's unit tests to test main app features. TOTEST: - [ ] Create profile page - [ ] Implement proof-of-concept tests - [ ] Test page drawing - [ ] Test input validation - [ ] Create conference page - [ ] Implement proof-of-concept tests - [ ] Test page drawing - [ ] Test input validation - [ ] Profile page? - [ ] Implement proof-of-concept tests - [ ] Test page drawing - [ ] Test likes
1.0
Unit tests - Use flutter's unit tests to test main app features. TOTEST: - [ ] Create profile page - [ ] Implement proof-of-concept tests - [ ] Test page drawing - [ ] Test input validation - [ ] Create conference page - [ ] Implement proof-of-concept tests - [ ] Test page drawing - [ ] Test input validation - [ ] Profile page? - [ ] Implement proof-of-concept tests - [ ] Test page drawing - [ ] Test likes
test
unit tests use flutter s unit tests to test main app features totest create profile page implement proof of concept tests test page drawing test input validation create conference page implement proof of concept tests test page drawing test input validation profile page implement proof of concept tests test page drawing test likes
1
378,538
26,325,292,982
IssuesEvent
2023-01-10 05:50:13
AndBobsYourUncle/stable-diffusion-discord-bot
https://api.github.com/repos/AndBobsYourUncle/stable-diffusion-discord-bot
closed
Documentation change and feature request
documentation
During the setup, I had difficulty with what GCC to install. I installed a 32-bit version first by accident. Also, I think I had to add the %PATH% environment variable by hand for GO and GCC (maybe not needed if the default location is used). The changes you made to the installation section in the readme should help a little bit but be thinking maybe there could be more specific instructions so more basic users don't get caught up with the same hangups I did and don't bug you for the small stuff. For example, the link you provided to Sourceforge is just the page for that package and not a link to the 64bit package that is needed. **Maybe something more like this for less advanced windows users?** Windows Installation -- ### Clone this repository Install GIT for windows (this should already be installed if you are installing on the same windows machine as your Automatic1111 install) https://git-scm.com/download/win Create or traverse to a folder you want the installation directory to be cloned to in windows explorer. In this example, I'm using a folder in the root of my local "C" drive "C:\tools". I open that folder in windows explorer and right-click in the empty space and choose "Git Bash here" from the contextual menu, this opens a git command window already in the directory. Clone the repository using this git command in the console window you just opened: `git clone https://github.com/AndBobsYourUncle/stable-diffusion-discord-bot.git` *Note: from now on if you want to update the code you can do the same process to open a git command window and use the git command: ` git pull` The source is in Go and will need to be compiled into an EXE using Go and GCC ### Install Go https://go.dev/doc/install By default, the installer will install Go to Program Files or Program Files (x86). You can change the location as needed. After installing, you will need to close and reopen any open command prompts so that changes to the environment made by the installer are reflected in the command prompt. *I've noticed that if you install Go to an alternate location it sets the %PATH% environment variable to be the programs file folder not the alternate folder. You can change it by going to "settings" in the start menu then into "System" then under "about" (very bottom) there is a link under "related Settings" called "Advanced System Settings". This will open the System Control Panel already on the tab with the Environment Variables editor button. Click it to edit the list. The top entries are for the "User" and the bottom are for the "System". You may need to edit both locations. The Variable for go is not in "Path" and is it's own entry called "GOPATH" these should be pointing to the go directory. In my example, I installed go to C:\tools\go instead of the default program files folder and had to change the GOPATH variable manually with that path. You will need 64bit GCC installed to compile the EXE https://sourceforge.net/projects/mingw-w64/files/mingw-w64/mingw-w64-release/mingw-w64-v10.0.0.zip/download You can check if it was installed properly with ` GCC --version` It should return something like this if installed correctly: ``` gcc.exe (GCC) 10.4.0 Copyright (C) 2020 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. ``` *The installer should add the location of GCC to your "Path" environmental variables but may not if you choose a non-default location, you can check there if it can't find GCC after installation. Also, don't forget to restart any command windows you had open before installation before using. Build the bot with go build Make sure you're in the project directory and build the bot with go build: `go build` If it's successful you will get no errors and return to a ready state. You will now have a stable_diffusion_bot.exe in your directory and are ready to move on to "usage". *If you make any changes to files or do a "git pull" and the code is updated then you will need to re-build the EXE to gain those new code changes. --- Ok I had mentioned this feature request in another thread but didn't want it to get lost. I was wondering if we can add "style" to the queue.go under newGeneration ``` styles: [ "string" ], ```
1.0
Documentation change and feature request - During the setup, I had difficulty with what GCC to install. I installed a 32-bit version first by accident. Also, I think I had to add the %PATH% environment variable by hand for GO and GCC (maybe not needed if the default location is used). The changes you made to the installation section in the readme should help a little bit but be thinking maybe there could be more specific instructions so more basic users don't get caught up with the same hangups I did and don't bug you for the small stuff. For example, the link you provided to Sourceforge is just the page for that package and not a link to the 64bit package that is needed. **Maybe something more like this for less advanced windows users?** Windows Installation -- ### Clone this repository Install GIT for windows (this should already be installed if you are installing on the same windows machine as your Automatic1111 install) https://git-scm.com/download/win Create or traverse to a folder you want the installation directory to be cloned to in windows explorer. In this example, I'm using a folder in the root of my local "C" drive "C:\tools". I open that folder in windows explorer and right-click in the empty space and choose "Git Bash here" from the contextual menu, this opens a git command window already in the directory. Clone the repository using this git command in the console window you just opened: `git clone https://github.com/AndBobsYourUncle/stable-diffusion-discord-bot.git` *Note: from now on if you want to update the code you can do the same process to open a git command window and use the git command: ` git pull` The source is in Go and will need to be compiled into an EXE using Go and GCC ### Install Go https://go.dev/doc/install By default, the installer will install Go to Program Files or Program Files (x86). You can change the location as needed. After installing, you will need to close and reopen any open command prompts so that changes to the environment made by the installer are reflected in the command prompt. *I've noticed that if you install Go to an alternate location it sets the %PATH% environment variable to be the programs file folder not the alternate folder. You can change it by going to "settings" in the start menu then into "System" then under "about" (very bottom) there is a link under "related Settings" called "Advanced System Settings". This will open the System Control Panel already on the tab with the Environment Variables editor button. Click it to edit the list. The top entries are for the "User" and the bottom are for the "System". You may need to edit both locations. The Variable for go is not in "Path" and is it's own entry called "GOPATH" these should be pointing to the go directory. In my example, I installed go to C:\tools\go instead of the default program files folder and had to change the GOPATH variable manually with that path. You will need 64bit GCC installed to compile the EXE https://sourceforge.net/projects/mingw-w64/files/mingw-w64/mingw-w64-release/mingw-w64-v10.0.0.zip/download You can check if it was installed properly with ` GCC --version` It should return something like this if installed correctly: ``` gcc.exe (GCC) 10.4.0 Copyright (C) 2020 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. ``` *The installer should add the location of GCC to your "Path" environmental variables but may not if you choose a non-default location, you can check there if it can't find GCC after installation. Also, don't forget to restart any command windows you had open before installation before using. Build the bot with go build Make sure you're in the project directory and build the bot with go build: `go build` If it's successful you will get no errors and return to a ready state. You will now have a stable_diffusion_bot.exe in your directory and are ready to move on to "usage". *If you make any changes to files or do a "git pull" and the code is updated then you will need to re-build the EXE to gain those new code changes. --- Ok I had mentioned this feature request in another thread but didn't want it to get lost. I was wondering if we can add "style" to the queue.go under newGeneration ``` styles: [ "string" ], ```
non_test
documentation change and feature request during the setup i had difficulty with what gcc to install i installed a bit version first by accident also i think i had to add the path environment variable by hand for go and gcc maybe not needed if the default location is used the changes you made to the installation section in the readme should help a little bit but be thinking maybe there could be more specific instructions so more basic users don t get caught up with the same hangups i did and don t bug you for the small stuff for example the link you provided to sourceforge is just the page for that package and not a link to the package that is needed maybe something more like this for less advanced windows users windows installation clone this repository install git for windows this should already be installed if you are installing on the same windows machine as your install create or traverse to a folder you want the installation directory to be cloned to in windows explorer in this example i m using a folder in the root of my local c drive c tools i open that folder in windows explorer and right click in the empty space and choose git bash here from the contextual menu this opens a git command window already in the directory clone the repository using this git command in the console window you just opened git clone note from now on if you want to update the code you can do the same process to open a git command window and use the git command git pull the source is in go and will need to be compiled into an exe using go and gcc install go by default the installer will install go to program files or program files you can change the location as needed after installing you will need to close and reopen any open command prompts so that changes to the environment made by the installer are reflected in the command prompt i ve noticed that if you install go to an alternate location it sets the path environment variable to be the programs file folder not the alternate folder you can change it by going to settings in the start menu then into system then under about very bottom there is a link under related settings called advanced system settings this will open the system control panel already on the tab with the environment variables editor button click it to edit the list the top entries are for the user and the bottom are for the system you may need to edit both locations the variable for go is not in path and is it s own entry called gopath these should be pointing to the go directory in my example i installed go to c tools go instead of the default program files folder and had to change the gopath variable manually with that path you will need gcc installed to compile the exe you can check if it was installed properly with gcc version it should return something like this if installed correctly gcc exe gcc copyright c free software foundation inc this is free software see the source for copying conditions there is no warranty not even for merchantability or fitness for a particular purpose the installer should add the location of gcc to your path environmental variables but may not if you choose a non default location you can check there if it can t find gcc after installation also don t forget to restart any command windows you had open before installation before using build the bot with go build make sure you re in the project directory and build the bot with go build go build if it s successful you will get no errors and return to a ready state you will now have a stable diffusion bot exe in your directory and are ready to move on to usage if you make any changes to files or do a git pull and the code is updated then you will need to re build the exe to gain those new code changes ok i had mentioned this feature request in another thread but didn t want it to get lost i was wondering if we can add style to the queue go under newgeneration styles string
0
433,633
30,341,843,678
IssuesEvent
2023-07-11 13:13:02
alphagov/govuk-frontend
https://api.github.com/repos/alphagov/govuk-frontend
closed
Remove polyfills for IE
documentation performance javascript
<!-- This is a template for any issues that aren’t bug reports or new feature requests. The headings in this section provide examples of the information you might want to include, but feel free to add/delete sections where appropriate. --> ## What At the moment, we ship polyfills to all browsers even when they're not needed. We think around 50% of the JavaScript we ship is polyfills (this might say more about the amount of JavaScript we have, rather than the size of the polyfills). Having to include polyfills also means we tend to restrict ourselves to features that we know are either 1) well supported, or 2) don't require large polyfills to work in older browsers. ## Why There is a performance impact if we're shipping unnecessary JavaScript to browsers that don't need it. Our current approach puts some restrictions on the JavaScript features we use. This isn't necessarily causing problems right now, but might do in future when we come to build more complex components. (also depends on https://github.com/alphagov/govuk-frontend/issues/2503 ) ## Who needs to work on this Developers ## Who needs to review this Developers ## Done when Depends on our [browser support proposal](https://github.com/alphagov/govuk-frontend/issues/2519) being approved by the community. If we go down the route of dropping IE support to 'functional': - [x] https://github.com/alphagov/govuk-frontend/pull/3570 - [x] https://github.com/alphagov/govuk-frontend/pull/3720 - [x] https://github.com/alphagov/govuk-frontend/pull/3723 - [x] Update/remove [polyfill documentation](https://github.com/alphagov/govuk-frontend/blob/main/docs/contributing/polyfilling.md) and pick up https://github.com/alphagov/govuk-frontend/issues/2159 as part of it (making sure it meets style guide etc) - [x] If we remove all polyfills and won't be adding any in future, we can close https://github.com/alphagov/govuk-frontend/issues/674
1.0
Remove polyfills for IE - <!-- This is a template for any issues that aren’t bug reports or new feature requests. The headings in this section provide examples of the information you might want to include, but feel free to add/delete sections where appropriate. --> ## What At the moment, we ship polyfills to all browsers even when they're not needed. We think around 50% of the JavaScript we ship is polyfills (this might say more about the amount of JavaScript we have, rather than the size of the polyfills). Having to include polyfills also means we tend to restrict ourselves to features that we know are either 1) well supported, or 2) don't require large polyfills to work in older browsers. ## Why There is a performance impact if we're shipping unnecessary JavaScript to browsers that don't need it. Our current approach puts some restrictions on the JavaScript features we use. This isn't necessarily causing problems right now, but might do in future when we come to build more complex components. (also depends on https://github.com/alphagov/govuk-frontend/issues/2503 ) ## Who needs to work on this Developers ## Who needs to review this Developers ## Done when Depends on our [browser support proposal](https://github.com/alphagov/govuk-frontend/issues/2519) being approved by the community. If we go down the route of dropping IE support to 'functional': - [x] https://github.com/alphagov/govuk-frontend/pull/3570 - [x] https://github.com/alphagov/govuk-frontend/pull/3720 - [x] https://github.com/alphagov/govuk-frontend/pull/3723 - [x] Update/remove [polyfill documentation](https://github.com/alphagov/govuk-frontend/blob/main/docs/contributing/polyfilling.md) and pick up https://github.com/alphagov/govuk-frontend/issues/2159 as part of it (making sure it meets style guide etc) - [x] If we remove all polyfills and won't be adding any in future, we can close https://github.com/alphagov/govuk-frontend/issues/674
non_test
remove polyfills for ie this is a template for any issues that aren’t bug reports or new feature requests the headings in this section provide examples of the information you might want to include but feel free to add delete sections where appropriate what at the moment we ship polyfills to all browsers even when they re not needed we think around of the javascript we ship is polyfills this might say more about the amount of javascript we have rather than the size of the polyfills having to include polyfills also means we tend to restrict ourselves to features that we know are either well supported or don t require large polyfills to work in older browsers why there is a performance impact if we re shipping unnecessary javascript to browsers that don t need it our current approach puts some restrictions on the javascript features we use this isn t necessarily causing problems right now but might do in future when we come to build more complex components also depends on who needs to work on this developers who needs to review this developers done when depends on our being approved by the community if we go down the route of dropping ie support to functional update remove and pick up as part of it making sure it meets style guide etc if we remove all polyfills and won t be adding any in future we can close
0
12,437
9,781,742,799
IssuesEvent
2019-06-07 20:43:54
forseti-security/forseti-security
https://api.github.com/repos/forseti-security/forseti-security
closed
Forseti should audit stackdriver monitoring configurations for requested resources
issue-review: future-milestone module: auditor module: infrastructure module: inventory triaged: yes type: help-wanted
Another possible API to be able to set a policy on that could have customer value if they want to ensure certain stackdriver monitoring configurations are always in place for a project.
1.0
Forseti should audit stackdriver monitoring configurations for requested resources - Another possible API to be able to set a policy on that could have customer value if they want to ensure certain stackdriver monitoring configurations are always in place for a project.
non_test
forseti should audit stackdriver monitoring configurations for requested resources another possible api to be able to set a policy on that could have customer value if they want to ensure certain stackdriver monitoring configurations are always in place for a project
0
5,874
8,696,365,716
IssuesEvent
2018-12-04 17:17:20
emacs-ess/ESS
https://api.github.com/repos/emacs-ess/ESS
closed
Note: Variable binding depth exceeds max-specpdl-size
literate process:eval
Hi all, in ESS 16.10 "ess-eval-chunk" failed when a "space" followed the "=" at the beginning of a code chunk "<<>>= ": generate-new-buffer: Variable binding depth exceeds max-specpdl-size or preview-clearout: Variable binding depth exceeds max-specpdl-size Interestingly, marking the code and calling "ess-eval-region" works as expected. Deleting the space "<<>>=" and calling again "ess-eval-chunk" works. Maybe this is of interest, Sven
1.0
Note: Variable binding depth exceeds max-specpdl-size - Hi all, in ESS 16.10 "ess-eval-chunk" failed when a "space" followed the "=" at the beginning of a code chunk "<<>>= ": generate-new-buffer: Variable binding depth exceeds max-specpdl-size or preview-clearout: Variable binding depth exceeds max-specpdl-size Interestingly, marking the code and calling "ess-eval-region" works as expected. Deleting the space "<<>>=" and calling again "ess-eval-chunk" works. Maybe this is of interest, Sven
non_test
note variable binding depth exceeds max specpdl size hi all in ess ess eval chunk failed when a space followed the at the beginning of a code chunk generate new buffer variable binding depth exceeds max specpdl size or preview clearout variable binding depth exceeds max specpdl size interestingly marking the code and calling ess eval region works as expected deleting the space and calling again ess eval chunk works maybe this is of interest sven
0
21,919
3,926,272,494
IssuesEvent
2016-04-22 22:38:33
HeinrichReimer/android-issue-reporter
https://api.github.com/repos/HeinrichReimer/android-issue-reporter
closed
Prametre
test
Bamako - Device info: --- <table> <tr><td>App version</td><td>1.2.3</td></tr> <tr><td>App version code</td><td>12300</td></tr> <tr><td>Android build version</td><td>eng.lile.1440383524</td></tr> <tr><td>Android release version</td><td>4.4.2</td></tr> <tr><td>Android SDK version</td><td>19</td></tr> <tr><td>Android build ID</td><td>ALPS.KK1.MP7.V1.22</td></tr> <tr><td>Device brand</td><td>alps</td></tr> <tr><td>Device manufacturer</td><td>alps</td></tr> <tr><td>Device name</td><td>C777</td></tr> <tr><td>Device model</td><td>k708s</td></tr> <tr><td>Device product name</td><td>C777</td></tr> <tr><td>Device hardware name</td><td>mt6572</td></tr> <tr><td>ABIs</td><td>[armeabi-v7a, armeabi]</td></tr> <tr><td>ABIs (32bit)</td><td>null</td></tr> <tr><td>ABIs (64bit)</td><td>null</td></tr> </table> Extra info: --- <table> <tr><td>Test 1</td><td>Example string</td></tr> <tr><td>Test 2</td><td>true</td></tr> </table>
1.0
Prametre - Bamako - Device info: --- <table> <tr><td>App version</td><td>1.2.3</td></tr> <tr><td>App version code</td><td>12300</td></tr> <tr><td>Android build version</td><td>eng.lile.1440383524</td></tr> <tr><td>Android release version</td><td>4.4.2</td></tr> <tr><td>Android SDK version</td><td>19</td></tr> <tr><td>Android build ID</td><td>ALPS.KK1.MP7.V1.22</td></tr> <tr><td>Device brand</td><td>alps</td></tr> <tr><td>Device manufacturer</td><td>alps</td></tr> <tr><td>Device name</td><td>C777</td></tr> <tr><td>Device model</td><td>k708s</td></tr> <tr><td>Device product name</td><td>C777</td></tr> <tr><td>Device hardware name</td><td>mt6572</td></tr> <tr><td>ABIs</td><td>[armeabi-v7a, armeabi]</td></tr> <tr><td>ABIs (32bit)</td><td>null</td></tr> <tr><td>ABIs (64bit)</td><td>null</td></tr> </table> Extra info: --- <table> <tr><td>Test 1</td><td>Example string</td></tr> <tr><td>Test 2</td><td>true</td></tr> </table>
test
prametre bamako device info app version app version code android build version eng lile android release version android sdk version android build id alps device brand alps device manufacturer alps device name device model device product name device hardware name abis abis null abis null extra info test example string test true
1
53,722
6,342,763,788
IssuesEvent
2017-07-27 16:07:12
red/red
https://api.github.com/repos/red/red
closed
make url! from components allows block! only, not any-list!
status.built status.tested type.wish
yet another plea for consistency: `make time!` and `make date!` do allow `any-list!`; the analogy is clear: you submit individual components and `make` combines them...
1.0
make url! from components allows block! only, not any-list! - yet another plea for consistency: `make time!` and `make date!` do allow `any-list!`; the analogy is clear: you submit individual components and `make` combines them...
test
make url from components allows block only not any list yet another plea for consistency make time and make date do allow any list the analogy is clear you submit individual components and make combines them
1
247,441
20,979,730,471
IssuesEvent
2022-03-28 18:38:26
wazuh/wazuh
https://api.github.com/repos/wazuh/wazuh
opened
Analysisd stats are affected by wazuh-logtest execution
bug core/analysisd core/logtest
|Wazuh version|Component|Install type|Install method|Platform| |---|---|---|---|---| | v4.1.0 - master | wazuh-analysisd | Manager | Any | GNU/Linux | ## Description The problem is that Analysisd stats files are being affected by `wazuh-logtest` log evaluation due to a global variable shared between analysisd and logtest scopes https://github.com/wazuh/wazuh/blob/b006d1896f7734c4eafa9b28f4b0de8fd5c9ba27/src/analysisd/rules.c#L3119 ## Background Analysisd stats files (`<WAZUH_INSTALLDIR>/stats/totals/<YYYY>/<MM>/ossec-totals-<dd>.log`), whose main purpose is to keep track of ruleid, level and count per hour using a specific syntax: ``` cat cat /var/ossec/stats/totals/2022/Mar/ossec-totals-28.log 0-5402-3-16 0-5501-3-16 0-5502-3-17 0-5762-4-1 0-40700-0-60 0-80700-0-119 0-1002-2-49 0-530-0-66 0-535-1-1 0-502-3-1 0-510-7-213 0-515-0-2 0-19000-0-2 0--563--839--2--0 ``` For example `0-5402-3-16` means: Ruleid 5402, level 3 was triggered 16 times in hour 0. The last line is a summary `0--563--839--2--0`: 563 alerts of 839 events (2 from syscheck and 0 from firewall) https://github.com/wazuh/wazuh/blob/b5470add63e070b47a729da0b05082a5fdc38aa6/src/analysisd/analysisd.c#L1133-L1135
1.0
Analysisd stats are affected by wazuh-logtest execution - |Wazuh version|Component|Install type|Install method|Platform| |---|---|---|---|---| | v4.1.0 - master | wazuh-analysisd | Manager | Any | GNU/Linux | ## Description The problem is that Analysisd stats files are being affected by `wazuh-logtest` log evaluation due to a global variable shared between analysisd and logtest scopes https://github.com/wazuh/wazuh/blob/b006d1896f7734c4eafa9b28f4b0de8fd5c9ba27/src/analysisd/rules.c#L3119 ## Background Analysisd stats files (`<WAZUH_INSTALLDIR>/stats/totals/<YYYY>/<MM>/ossec-totals-<dd>.log`), whose main purpose is to keep track of ruleid, level and count per hour using a specific syntax: ``` cat cat /var/ossec/stats/totals/2022/Mar/ossec-totals-28.log 0-5402-3-16 0-5501-3-16 0-5502-3-17 0-5762-4-1 0-40700-0-60 0-80700-0-119 0-1002-2-49 0-530-0-66 0-535-1-1 0-502-3-1 0-510-7-213 0-515-0-2 0-19000-0-2 0--563--839--2--0 ``` For example `0-5402-3-16` means: Ruleid 5402, level 3 was triggered 16 times in hour 0. The last line is a summary `0--563--839--2--0`: 563 alerts of 839 events (2 from syscheck and 0 from firewall) https://github.com/wazuh/wazuh/blob/b5470add63e070b47a729da0b05082a5fdc38aa6/src/analysisd/analysisd.c#L1133-L1135
test
analysisd stats are affected by wazuh logtest execution wazuh version component install type install method platform master wazuh analysisd manager any gnu linux description the problem is that analysisd stats files are being affected by wazuh logtest log evaluation due to a global variable shared between analysisd and logtest scopes background analysisd stats files stats totals ossec totals log whose main purpose is to keep track of ruleid level and count per hour using a specific syntax cat cat var ossec stats totals mar ossec totals log for example means ruleid level was triggered times in hour the last line is a summary alerts of events from syscheck and from firewall
1
46,522
13,055,926,536
IssuesEvent
2020-07-30 03:08:31
icecube-trac/tix2
https://api.github.com/repos/icecube-trac/tix2
opened
[spline-reco] Segmentation fault in spline reco (Trac #1361)
Incomplete Migration Migrated from Trac combo reconstruction defect
Migrated from https://code.icecube.wisc.edu/ticket/1361 ```json { "status": "closed", "changetime": "2015-09-24T09:38:45", "description": "Segmentation fault shows up when I attempt to run spline-reco module, in the way that is shown in the attachment. I checked trunk and with the last release of icerec, and the same result.", "reporter": "gmaggi", "cc": "", "resolution": "fixed", "_ts": "1443087525552045", "component": "combo reconstruction", "summary": "[spline-reco] Segmentation fault in spline reco", "priority": "critical", "keywords": "", "time": "2015-09-22T06:52:40", "milestone": "", "owner": "mvoge", "type": "defect" } ```
1.0
[spline-reco] Segmentation fault in spline reco (Trac #1361) - Migrated from https://code.icecube.wisc.edu/ticket/1361 ```json { "status": "closed", "changetime": "2015-09-24T09:38:45", "description": "Segmentation fault shows up when I attempt to run spline-reco module, in the way that is shown in the attachment. I checked trunk and with the last release of icerec, and the same result.", "reporter": "gmaggi", "cc": "", "resolution": "fixed", "_ts": "1443087525552045", "component": "combo reconstruction", "summary": "[spline-reco] Segmentation fault in spline reco", "priority": "critical", "keywords": "", "time": "2015-09-22T06:52:40", "milestone": "", "owner": "mvoge", "type": "defect" } ```
non_test
segmentation fault in spline reco trac migrated from json status closed changetime description segmentation fault shows up when i attempt to run spline reco module in the way that is shown in the attachment i checked trunk and with the last release of icerec and the same result reporter gmaggi cc resolution fixed ts component combo reconstruction summary segmentation fault in spline reco priority critical keywords time milestone owner mvoge type defect
0
732,337
25,255,568,296
IssuesEvent
2022-11-15 17:46:02
googleapis/python-datastore
https://api.github.com/repos/googleapis/python-datastore
closed
tests.system.test_query: test_large_query[None-None-2500] failed
type: bug priority: p2 api: datastore flakybot: issue flakybot: flaky
This test failed! To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot). If I'm commenting on this issue too often, add the `flakybot: quiet` label and I will stop commenting. --- commit: 0ee3fe910e2865db0794d81970edcdb11e482dc7 buildURL: [Build Status](https://source.cloud.google.com/results/invocations/57fd5258-b7af-479f-bc71-b741ac2a101f), [Sponge](http://sponge2/57fd5258-b7af-479f-bc71-b741ac2a101f) status: failed <details><summary>Test output</summary><br><pre>large_query = <google.cloud.datastore.query.Query object at 0x7f365c341760> limit = None, offset = None, expected = 2500 @pytest.mark.parametrize( "limit,offset,expected", [ # with no offset there are the correct # of results ( None, None, populate_datastore.LARGE_CHARACTER_TOTAL_OBJECTS, ), # with no limit there are results (offset provided) ( None, 900, populate_datastore.LARGE_CHARACTER_TOTAL_OBJECTS - 900, ), # Offset beyond items larger: verify 200 items found ( 200, 1100, 200, ), # offset within range, expect 50 despite larger limit") (100, populate_datastore.LARGE_CHARACTER_TOTAL_OBJECTS - 50, 50), # Offset beyond items larger Verify no items found") (200, populate_datastore.LARGE_CHARACTER_TOTAL_OBJECTS + 1000, 0), ], ) def test_large_query(large_query, limit, offset, expected): page_query = large_query page_query.add_filter("family", "=", "Stark") page_query.add_filter("alive", "=", False) iterator = page_query.fetch(limit=limit, offset=offset) > entities = [e for e in iterator] tests/system/test_query.py:356: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ tests/system/test_query.py:356: in <listcomp> entities = [e for e in iterator] .nox/system-3-8-disable_grpc-true/lib/python3.8/site-packages/google/api_core/page_iterator.py:208: in _items_iter for page in self._page_iter(increment=False): .nox/system-3-8-disable_grpc-true/lib/python3.8/site-packages/google/api_core/page_iterator.py:250: in _page_iter page = self._next_page() google/cloud/datastore/query.py:627: in _next_page response_pb = self.client._datastore_api.run_query( google/cloud/datastore/_http.py:271: in run_query return _rpc( google/cloud/datastore/_http.py:179: in _rpc response = _request( _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ http = <google.auth.transport.requests.AuthorizedSession object at 0x7f365c3e8910> project = 'precise-truck-742', method = 'runQuery' data = b'\n\x00\x12)\x12\x11precise-truck-742"\x14LargeCharacterEntity\x1a\x9f\x01\x1a\x10\n\x0eLargeCharacter"1\n/\x08\x01\x...r"\x0b\x12\x0eLargeCharacter"\x0echaracter00599\x0c\xa2\x01\x14LargeCharacterEntity\x18\x00 \x00B\x11precise-truck-742' base_url = 'https://datastore.googleapis.com' client_info = <google.api_core.gapic_v1.client_info.ClientInfo object at 0x7f365c4dc760> retry = None, timeout = None def _request( http, project, method, data, base_url, client_info, retry=None, timeout=None, ): """Make a request over the Http transport to the Cloud Datastore API. :type http: :class:`requests.Session` :param http: HTTP object to make requests. :type project: str :param project: The project to make the request for. :type method: str :param method: The API call method name (ie, ``runQuery``, ``lookup``, etc) :type data: str :param data: The data to send with the API call. Typically this is a serialized Protobuf string. :type base_url: str :param base_url: The base URL where the API lives. :type client_info: :class:`google.api_core.client_info.ClientInfo` :param client_info: used to generate user agent. :type retry: :class:`google.api_core.retry.Retry` :param retry: (Optional) retry policy for the request :type timeout: float or tuple(float, float) :param timeout: (Optional) timeout for the request :rtype: str :returns: The string response content from the API call. :raises: :class:`google.cloud.exceptions.GoogleCloudError` if the response code is not 200 OK. """ user_agent = client_info.to_user_agent() headers = { "Content-Type": "application/x-protobuf", "User-Agent": user_agent, connection_module.CLIENT_INFO_HEADER: user_agent, } api_url = build_api_url(project, method, base_url) requester = http.request if retry is not None: requester = retry(requester) if timeout is not None: response = requester( url=api_url, method="POST", headers=headers, data=data, timeout=timeout, ) else: response = requester(url=api_url, method="POST", headers=headers, data=data) if response.status_code != 200: error_status = status_pb2.Status.FromString(response.content) > raise exceptions.from_http_status( response.status_code, error_status.message, errors=[error_status] ) E google.api_core.exceptions.ServiceUnavailable: 503 The datastore operation timed out, or the data was temporarily unavailable. google/cloud/datastore/_http.py:124: ServiceUnavailable</pre></details>
1.0
tests.system.test_query: test_large_query[None-None-2500] failed - This test failed! To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot). If I'm commenting on this issue too often, add the `flakybot: quiet` label and I will stop commenting. --- commit: 0ee3fe910e2865db0794d81970edcdb11e482dc7 buildURL: [Build Status](https://source.cloud.google.com/results/invocations/57fd5258-b7af-479f-bc71-b741ac2a101f), [Sponge](http://sponge2/57fd5258-b7af-479f-bc71-b741ac2a101f) status: failed <details><summary>Test output</summary><br><pre>large_query = <google.cloud.datastore.query.Query object at 0x7f365c341760> limit = None, offset = None, expected = 2500 @pytest.mark.parametrize( "limit,offset,expected", [ # with no offset there are the correct # of results ( None, None, populate_datastore.LARGE_CHARACTER_TOTAL_OBJECTS, ), # with no limit there are results (offset provided) ( None, 900, populate_datastore.LARGE_CHARACTER_TOTAL_OBJECTS - 900, ), # Offset beyond items larger: verify 200 items found ( 200, 1100, 200, ), # offset within range, expect 50 despite larger limit") (100, populate_datastore.LARGE_CHARACTER_TOTAL_OBJECTS - 50, 50), # Offset beyond items larger Verify no items found") (200, populate_datastore.LARGE_CHARACTER_TOTAL_OBJECTS + 1000, 0), ], ) def test_large_query(large_query, limit, offset, expected): page_query = large_query page_query.add_filter("family", "=", "Stark") page_query.add_filter("alive", "=", False) iterator = page_query.fetch(limit=limit, offset=offset) > entities = [e for e in iterator] tests/system/test_query.py:356: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ tests/system/test_query.py:356: in <listcomp> entities = [e for e in iterator] .nox/system-3-8-disable_grpc-true/lib/python3.8/site-packages/google/api_core/page_iterator.py:208: in _items_iter for page in self._page_iter(increment=False): .nox/system-3-8-disable_grpc-true/lib/python3.8/site-packages/google/api_core/page_iterator.py:250: in _page_iter page = self._next_page() google/cloud/datastore/query.py:627: in _next_page response_pb = self.client._datastore_api.run_query( google/cloud/datastore/_http.py:271: in run_query return _rpc( google/cloud/datastore/_http.py:179: in _rpc response = _request( _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ http = <google.auth.transport.requests.AuthorizedSession object at 0x7f365c3e8910> project = 'precise-truck-742', method = 'runQuery' data = b'\n\x00\x12)\x12\x11precise-truck-742"\x14LargeCharacterEntity\x1a\x9f\x01\x1a\x10\n\x0eLargeCharacter"1\n/\x08\x01\x...r"\x0b\x12\x0eLargeCharacter"\x0echaracter00599\x0c\xa2\x01\x14LargeCharacterEntity\x18\x00 \x00B\x11precise-truck-742' base_url = 'https://datastore.googleapis.com' client_info = <google.api_core.gapic_v1.client_info.ClientInfo object at 0x7f365c4dc760> retry = None, timeout = None def _request( http, project, method, data, base_url, client_info, retry=None, timeout=None, ): """Make a request over the Http transport to the Cloud Datastore API. :type http: :class:`requests.Session` :param http: HTTP object to make requests. :type project: str :param project: The project to make the request for. :type method: str :param method: The API call method name (ie, ``runQuery``, ``lookup``, etc) :type data: str :param data: The data to send with the API call. Typically this is a serialized Protobuf string. :type base_url: str :param base_url: The base URL where the API lives. :type client_info: :class:`google.api_core.client_info.ClientInfo` :param client_info: used to generate user agent. :type retry: :class:`google.api_core.retry.Retry` :param retry: (Optional) retry policy for the request :type timeout: float or tuple(float, float) :param timeout: (Optional) timeout for the request :rtype: str :returns: The string response content from the API call. :raises: :class:`google.cloud.exceptions.GoogleCloudError` if the response code is not 200 OK. """ user_agent = client_info.to_user_agent() headers = { "Content-Type": "application/x-protobuf", "User-Agent": user_agent, connection_module.CLIENT_INFO_HEADER: user_agent, } api_url = build_api_url(project, method, base_url) requester = http.request if retry is not None: requester = retry(requester) if timeout is not None: response = requester( url=api_url, method="POST", headers=headers, data=data, timeout=timeout, ) else: response = requester(url=api_url, method="POST", headers=headers, data=data) if response.status_code != 200: error_status = status_pb2.Status.FromString(response.content) > raise exceptions.from_http_status( response.status_code, error_status.message, errors=[error_status] ) E google.api_core.exceptions.ServiceUnavailable: 503 The datastore operation timed out, or the data was temporarily unavailable. google/cloud/datastore/_http.py:124: ServiceUnavailable</pre></details>
non_test
tests system test query test large query failed this test failed to configure my behavior see if i m commenting on this issue too often add the flakybot quiet label and i will stop commenting commit buildurl status failed test output large query limit none offset none expected pytest mark parametrize limit offset expected with no offset there are the correct of results none none populate datastore large character total objects with no limit there are results offset provided none populate datastore large character total objects offset beyond items larger verify items found offset within range expect despite larger limit populate datastore large character total objects offset beyond items larger verify no items found populate datastore large character total objects def test large query large query limit offset expected page query large query page query add filter family stark page query add filter alive false iterator page query fetch limit limit offset offset entities tests system test query py tests system test query py in entities nox system disable grpc true lib site packages google api core page iterator py in items iter for page in self page iter increment false nox system disable grpc true lib site packages google api core page iterator py in page iter page self next page google cloud datastore query py in next page response pb self client datastore api run query google cloud datastore http py in run query return rpc google cloud datastore http py in rpc response request http project precise truck method runquery data b n truck n n x r truck base url client info retry none timeout none def request http project method data base url client info retry none timeout none make a request over the http transport to the cloud datastore api type http class requests session param http http object to make requests type project str param project the project to make the request for type method str param method the api call method name ie runquery lookup etc type data str param data the data to send with the api call typically this is a serialized protobuf string type base url str param base url the base url where the api lives type client info class google api core client info clientinfo param client info used to generate user agent type retry class google api core retry retry param retry optional retry policy for the request type timeout float or tuple float float param timeout optional timeout for the request rtype str returns the string response content from the api call raises class google cloud exceptions googleclouderror if the response code is not ok user agent client info to user agent headers content type application x protobuf user agent user agent connection module client info header user agent api url build api url project method base url requester http request if retry is not none requester retry requester if timeout is not none response requester url api url method post headers headers data data timeout timeout else response requester url api url method post headers headers data data if response status code error status status status fromstring response content raise exceptions from http status response status code error status message errors e google api core exceptions serviceunavailable the datastore operation timed out or the data was temporarily unavailable google cloud datastore http py serviceunavailable
0
144,621
19,292,298,589
IssuesEvent
2021-12-12 01:29:07
MidnightBSD/security-advisory
https://api.github.com/repos/MidnightBSD/security-advisory
opened
CVE-2021-43797 (Medium) detected in netty-codec-http-4.1.70.Final.jar
security vulnerability
## CVE-2021-43797 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>netty-codec-http-4.1.70.Final.jar</b></p></summary> <p></p> <p>Library home page: <a href="https://netty.io/">https://netty.io/</a></p> <p>Path to dependency file: security-advisory/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/io/netty/netty-codec-http/4.1.70.Final/netty-codec-http-4.1.70.Final.jar</p> <p> Dependency Hierarchy: - spring-boot-starter-data-elasticsearch-2.4.13.jar (Root Library) - spring-data-elasticsearch-4.1.15.jar - transport-netty4-client-7.9.3.jar - :x: **netty-codec-http-4.1.70.Final.jar** (Vulnerable Library) <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Netty is an asynchronous event-driven network application framework for rapid development of maintainable high performance protocol servers & clients. Netty prior to version 4.1.7.1.Final skips control chars when they are present at the beginning / end of the header name. It should instead fail fast as these are not allowed by the spec and could lead to HTTP request smuggling. Failing to do the validation might cause netty to "sanitize" header names before it forward these to another remote system when used as proxy. This remote system can't see the invalid usage anymore, and therefore does not do the validation itself. Users should upgrade to version 4.1.7.1.Final to receive a patch. <p>Publish Date: 2021-12-09 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-43797>CVE-2021-43797</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: High - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/advisories/GHSA-wx5j-54mm-rqqq">https://github.com/advisories/GHSA-wx5j-54mm-rqqq</a></p> <p>Release Date: 2021-12-09</p> <p>Fix Resolution: io.netty:netty-codec-http:4.1.71.Final</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2021-43797 (Medium) detected in netty-codec-http-4.1.70.Final.jar - ## CVE-2021-43797 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>netty-codec-http-4.1.70.Final.jar</b></p></summary> <p></p> <p>Library home page: <a href="https://netty.io/">https://netty.io/</a></p> <p>Path to dependency file: security-advisory/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/io/netty/netty-codec-http/4.1.70.Final/netty-codec-http-4.1.70.Final.jar</p> <p> Dependency Hierarchy: - spring-boot-starter-data-elasticsearch-2.4.13.jar (Root Library) - spring-data-elasticsearch-4.1.15.jar - transport-netty4-client-7.9.3.jar - :x: **netty-codec-http-4.1.70.Final.jar** (Vulnerable Library) <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Netty is an asynchronous event-driven network application framework for rapid development of maintainable high performance protocol servers & clients. Netty prior to version 4.1.7.1.Final skips control chars when they are present at the beginning / end of the header name. It should instead fail fast as these are not allowed by the spec and could lead to HTTP request smuggling. Failing to do the validation might cause netty to "sanitize" header names before it forward these to another remote system when used as proxy. This remote system can't see the invalid usage anymore, and therefore does not do the validation itself. Users should upgrade to version 4.1.7.1.Final to receive a patch. <p>Publish Date: 2021-12-09 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-43797>CVE-2021-43797</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: High - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/advisories/GHSA-wx5j-54mm-rqqq">https://github.com/advisories/GHSA-wx5j-54mm-rqqq</a></p> <p>Release Date: 2021-12-09</p> <p>Fix Resolution: io.netty:netty-codec-http:4.1.71.Final</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_test
cve medium detected in netty codec http final jar cve medium severity vulnerability vulnerable library netty codec http final jar library home page a href path to dependency file security advisory pom xml path to vulnerable library home wss scanner repository io netty netty codec http final netty codec http final jar dependency hierarchy spring boot starter data elasticsearch jar root library spring data elasticsearch jar transport client jar x netty codec http final jar vulnerable library found in base branch master vulnerability details netty is an asynchronous event driven network application framework for rapid development of maintainable high performance protocol servers clients netty prior to version final skips control chars when they are present at the beginning end of the header name it should instead fail fast as these are not allowed by the spec and could lead to http request smuggling failing to do the validation might cause netty to sanitize header names before it forward these to another remote system when used as proxy this remote system can t see the invalid usage anymore and therefore does not do the validation itself users should upgrade to version final to receive a patch publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution io netty netty codec http final step up your open source security game with whitesource
0
431,279
12,476,622,672
IssuesEvent
2020-05-29 13:47:21
input-output-hk/ouroboros-network
https://api.github.com/repos/input-output-hk/ouroboros-network
opened
Don't derive EpochInfo from HardForkSummary
bug consensus priority high transition
The HardForkSummary gives us _full_ information, including relation to wall clock. This is not required for `EpochInfo`, and is leading to the need for `SystemStart` in places where it shouldn't be.
1.0
Don't derive EpochInfo from HardForkSummary - The HardForkSummary gives us _full_ information, including relation to wall clock. This is not required for `EpochInfo`, and is leading to the need for `SystemStart` in places where it shouldn't be.
non_test
don t derive epochinfo from hardforksummary the hardforksummary gives us full information including relation to wall clock this is not required for epochinfo and is leading to the need for systemstart in places where it shouldn t be
0
21,905
11,660,535,465
IssuesEvent
2020-03-03 03:40:36
cityofaustin/atd-data-tech
https://api.github.com/repos/cityofaustin/atd-data-tech
opened
Fix Location references to noncr3_est_comp_cost & locationTotals
Need: 1-Must Have Product: Vision Zero Crash Data System Project: Vision Zero Crash Data System Service: Dev Workgroup: VZ migrated
Two areas of commented out code with `// TODO` above the block need to be fixed and uncommented so we don't have a feature regression on the Locations details page. These comments happened here: https://github.com/cityofaustin/atd-vz-data/pull/687 *Migrated from [atd-vz-data #700](https://github.com/cityofaustin/atd-vz-data/issues/700)*
1.0
Fix Location references to noncr3_est_comp_cost & locationTotals - Two areas of commented out code with `// TODO` above the block need to be fixed and uncommented so we don't have a feature regression on the Locations details page. These comments happened here: https://github.com/cityofaustin/atd-vz-data/pull/687 *Migrated from [atd-vz-data #700](https://github.com/cityofaustin/atd-vz-data/issues/700)*
non_test
fix location references to est comp cost locationtotals two areas of commented out code with todo above the block need to be fixed and uncommented so we don t have a feature regression on the locations details page these comments happened here migrated from
0
26,223
2,684,248,714
IssuesEvent
2015-03-28 20:04:32
ConEmu/old-issues
https://api.github.com/repos/ConEmu/old-issues
opened
Использование памяти ConEmu
1 star bug imported Priority-Medium
_From [Dmitriy....@gmail.com](https://code.google.com/u/118428503880752500726/) on December 21, 2012 04:33:39_ OS version: Win7 SP1 x64 (с откллюченным свопом) ConEmu version: 121216 Far version: 2.0 r1807 x86 *Bug description* Последние версии ConEmu используют в два раза больше оперативы - 17+6 MB против 6MB на старенькой v2008.3.26.0 Если запустить 5 инстансов фара в отдельном окне - это уже -100мб на ровном месте. Можно как-то уменьшить аппетиты? *Steps to reproduction* 1. Запустить FAR 2. Открыть Task Manager и посмотреть столбцы Working Set и Private Working Set. На приатаченной картинке * ConEmu _old.exe - это v2008.3.26.0 * ConEmu .exe и ConEmu C.exe - это 121216 **Attachment:** [conemu.png](http://code.google.com/p/conemu-maximus5/issues/detail?id=855) _Original issue: http://code.google.com/p/conemu-maximus5/issues/detail?id=855_
1.0
Использование памяти ConEmu - _From [Dmitriy....@gmail.com](https://code.google.com/u/118428503880752500726/) on December 21, 2012 04:33:39_ OS version: Win7 SP1 x64 (с откллюченным свопом) ConEmu version: 121216 Far version: 2.0 r1807 x86 *Bug description* Последние версии ConEmu используют в два раза больше оперативы - 17+6 MB против 6MB на старенькой v2008.3.26.0 Если запустить 5 инстансов фара в отдельном окне - это уже -100мб на ровном месте. Можно как-то уменьшить аппетиты? *Steps to reproduction* 1. Запустить FAR 2. Открыть Task Manager и посмотреть столбцы Working Set и Private Working Set. На приатаченной картинке * ConEmu _old.exe - это v2008.3.26.0 * ConEmu .exe и ConEmu C.exe - это 121216 **Attachment:** [conemu.png](http://code.google.com/p/conemu-maximus5/issues/detail?id=855) _Original issue: http://code.google.com/p/conemu-maximus5/issues/detail?id=855_
non_test
использование памяти conemu from on december os version с откллюченным свопом conemu version far version bug description последние версии conemu используют в два раза больше оперативы mb против на старенькой если запустить инстансов фара в отдельном окне это уже на ровном месте можно как то уменьшить аппетиты steps to reproduction запустить far открыть task manager и посмотреть столбцы working set и private working set на приатаченной картинке conemu old exe это conemu exe и conemu c exe это attachment original issue
0
1,895
4,111,514,048
IssuesEvent
2016-06-07 06:36:47
thx/brix-components
https://api.github.com/repos/thx/brix-components
closed
Dropdown:支持 .disabled() 方法
3-Medium enhancement Requirement
```js var Dropdown = require('components/dropdown') Dropdown.prototype.disabled = function(value) { if (value === undefined) return this.options.disabled this.options.disabled = value this.$element.prop('disabled', value) this.$relatedElement[value ? 'addClass' : 'removeClass']('disabled') return this } ```
1.0
Dropdown:支持 .disabled() 方法 - ```js var Dropdown = require('components/dropdown') Dropdown.prototype.disabled = function(value) { if (value === undefined) return this.options.disabled this.options.disabled = value this.$element.prop('disabled', value) this.$relatedElement[value ? 'addClass' : 'removeClass']('disabled') return this } ```
non_test
dropdown:支持 disabled 方法 js var dropdown require components dropdown dropdown prototype disabled function value if value undefined return this options disabled this options disabled value this element prop disabled value this relatedelement disabled return this
0
177,044
13,676,020,708
IssuesEvent
2020-09-29 13:25:18
prisma/prisma
https://api.github.com/repos/prisma/prisma
closed
Earlier integration between Engine and Client work
kind/improvement team/engines team/typescript topic: tests
Right now we can only integrate Engine work into the Client after it was merged to `master`. That is problematic, as it forces us do a lot of up front work before we can test and integrate. We want an "integration branch" where we can test a specific engine (by hash) or specific Engine branch. Notes: - Branch names could be the trigger to download a different build (from S3) - Publish to Npm in new Npm tag (nightly? other?) so other people can also test these builds
1.0
Earlier integration between Engine and Client work - Right now we can only integrate Engine work into the Client after it was merged to `master`. That is problematic, as it forces us do a lot of up front work before we can test and integrate. We want an "integration branch" where we can test a specific engine (by hash) or specific Engine branch. Notes: - Branch names could be the trigger to download a different build (from S3) - Publish to Npm in new Npm tag (nightly? other?) so other people can also test these builds
test
earlier integration between engine and client work right now we can only integrate engine work into the client after it was merged to master that is problematic as it forces us do a lot of up front work before we can test and integrate we want an integration branch where we can test a specific engine by hash or specific engine branch notes branch names could be the trigger to download a different build from publish to npm in new npm tag nightly other so other people can also test these builds
1
31,072
2,731,544,042
IssuesEvent
2015-04-16 20:57:32
GoogleCloudPlatform/kubernetes
https://api.github.com/repos/GoogleCloudPlatform/kubernetes
closed
Rename `.kube/.kubeconfig` to `.kube/kubeconfig` or `.kube/config`
component/CLI priority/P3 team/UX
By analogy to `.git/config`. I expect a config directory to be hidden, but not the files inside it. @dead2k what do you think?
1.0
Rename `.kube/.kubeconfig` to `.kube/kubeconfig` or `.kube/config` - By analogy to `.git/config`. I expect a config directory to be hidden, but not the files inside it. @dead2k what do you think?
non_test
rename kube kubeconfig to kube kubeconfig or kube config by analogy to git config i expect a config directory to be hidden but not the files inside it what do you think
0