| Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 4 112 | repo_url stringlengths 33 141 | action stringclasses 3 values | title stringlengths 1 1.02k | labels stringlengths 4 1.54k | body stringlengths 1 262k | index stringclasses 17 values | text_combine stringlengths 95 262k | label stringclasses 2 values | text stringlengths 96 252k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
226,854 | 7,523,827,069 | IssuesEvent | 2018-04-13 03:24:51 | cuappdev/podcast-ios | https://api.github.com/repos/cuappdev/podcast-ios | closed | Discover showing episodes twice | Priority: High Type: Bug | These episodes reappear when scrolling in discover


| 1.0 | Discover showing episodes twice - These episodes reappear when scrolling in discover


| non_test | discover showing episodes twice these episodes reappear when scrolling in discover | 0 |
732,998 | 25,284,241,780 | IssuesEvent | 2022-11-16 17:56:31 | googleapis/nodejs-compute | https://api.github.com/repos/googleapis/nodejs-compute | closed | start-iap-tunnel api for node.js | type: question priority: p3 api: compute | Hi Team,
I am trying to use IAP for TCP forwarding ref: ( https://cloud.google.com/iap/docs/using-tcp-forwarding#tunneling_other_tcp_connections )
But i am not able to find nodejs api for [start-iap-tunnel](https://cloud.google.com/sdk/gcloud/reference/compute/start-iap-tunnel)
Please help me regarding the same. | 1.0 | start-iap-tunnel api for node.js - Hi Team,
I am trying to use IAP for TCP forwarding ref: ( https://cloud.google.com/iap/docs/using-tcp-forwarding#tunneling_other_tcp_connections )
But i am not able to find nodejs api for [start-iap-tunnel](https://cloud.google.com/sdk/gcloud/reference/compute/start-iap-tunnel)
Please help me regarding the same. | non_test | start iap tunnel api for node js hi team i am trying to use iap for tcp forwarding ref but i am not able to find nodejs api for please help me regarding the same | 0 |
102,605 | 4,157,217,216 | IssuesEvent | 2016-06-16 20:34:34 | flipdazed/Hybrid-Monte-Carlo | https://api.github.com/repos/flipdazed/Hybrid-Monte-Carlo | opened | Derivative of the Action | High Priority | **Aim**
Implement analytic derivative of the Action for the lattice.
**Technical**
The only non trivial term is the derivative of the velocity squared (x_{i+1} - x_i)^2/a^2 given by:
2 * ( 2x_j + x_{j+1} - x_{j-1} )
this can also be expressed for the symmetric velocity squared (x_{i+1} - x_i)(x_i - x_{i-1}) / a^2 as:
[ 2 * ( x_{j-1} + x_{j+1} - x_j ) - x_{j-2} - x_{j+2} ] / a^2
( take the derivative d/dx_j and obtain delta-kroneka functions such as \delta_{(i+1)j}) and simplify
**Checks**
- Check vs. known results e.g. the expectation value: < x^2 > (Creutz & Freedman)
- Incorporate in existing unit tests | 1.0 | Derivative of the Action - **Aim**
Implement analytic derivative of the Action for the lattice.
**Technical**
The only non trivial term is the derivative of the velocity squared (x_{i+1} - x_i)^2/a^2 given by:
2 * ( 2x_j + x_{j+1} - x_{j-1} )
this can also be expressed for the symmetric velocity squared (x_{i+1} - x_i)(x_i - x_{i-1}) / a^2 as:
[ 2 * ( x_{j-1} + x_{j+1} - x_j ) - x_{j-2} - x_{j+2} ] / a^2
( take the derivative d/dx_j and obtain delta-kroneka functions such as \delta_{(i+1)j}) and simplify
**Checks**
- Check vs. known results e.g. the expectation value: < x^2 > (Creutz & Freedman)
- Incorporate in existing unit tests | non_test | derivative of the action aim implement analytic derivative of the action for the lattice technical the only non trivial term is the derivative of the velocity squared x i x i a given by j x j x j this can also be expressed for the symmetric velocity squared x i x i x i x i a as a take the derivative d dx j and obtain delta kroneka functions such as delta i j and simplify checks check vs known results e g the expectation value creutz freedman incorporate in existing unit tests | 0 |
248,283 | 26,785,057,453 | IssuesEvent | 2023-02-01 01:36:18 | dmartinez777/AzureDevOpsAngular | https://api.github.com/repos/dmartinez777/AzureDevOpsAngular | opened | CVE-2022-25881 (Medium) detected in http-cache-semantics-3.8.1.tgz | security vulnerability | ## CVE-2022-25881 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>http-cache-semantics-3.8.1.tgz</b></p></summary>
<p>Parses Cache-Control and other headers. Helps building correct HTTP caches and proxies</p>
<p>Library home page: <a href="https://registry.npmjs.org/http-cache-semantics/-/http-cache-semantics-3.8.1.tgz">https://registry.npmjs.org/http-cache-semantics/-/http-cache-semantics-3.8.1.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/http-cache-semantics/package.json</p>
<p>
Dependency Hierarchy:
- cli-10.0.7.tgz (Root Library)
- pacote-9.5.12.tgz
- make-fetch-happen-5.0.2.tgz
- :x: **http-cache-semantics-3.8.1.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects versions of the package http-cache-semantics before 4.1.1. The issue can be exploited via malicious request header values sent to a server, when that server reads the cache policy from the request using this library.
<p>Publish Date: 2023-01-31
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-25881>CVE-2022-25881</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.cve.org/CVERecord?id=CVE-2022-25881">https://www.cve.org/CVERecord?id=CVE-2022-25881</a></p>
<p>Release Date: 2023-01-31</p>
<p>Fix Resolution: http-cache-semantics - 4.1.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2022-25881 (Medium) detected in http-cache-semantics-3.8.1.tgz - ## CVE-2022-25881 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>http-cache-semantics-3.8.1.tgz</b></p></summary>
<p>Parses Cache-Control and other headers. Helps building correct HTTP caches and proxies</p>
<p>Library home page: <a href="https://registry.npmjs.org/http-cache-semantics/-/http-cache-semantics-3.8.1.tgz">https://registry.npmjs.org/http-cache-semantics/-/http-cache-semantics-3.8.1.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/http-cache-semantics/package.json</p>
<p>
Dependency Hierarchy:
- cli-10.0.7.tgz (Root Library)
- pacote-9.5.12.tgz
- make-fetch-happen-5.0.2.tgz
- :x: **http-cache-semantics-3.8.1.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects versions of the package http-cache-semantics before 4.1.1. The issue can be exploited via malicious request header values sent to a server, when that server reads the cache policy from the request using this library.
<p>Publish Date: 2023-01-31
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-25881>CVE-2022-25881</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.cve.org/CVERecord?id=CVE-2022-25881">https://www.cve.org/CVERecord?id=CVE-2022-25881</a></p>
<p>Release Date: 2023-01-31</p>
<p>Fix Resolution: http-cache-semantics - 4.1.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_test | cve medium detected in http cache semantics tgz cve medium severity vulnerability vulnerable library http cache semantics tgz parses cache control and other headers helps building correct http caches and proxies library home page a href path to dependency file package json path to vulnerable library node modules http cache semantics package json dependency hierarchy cli tgz root library pacote tgz make fetch happen tgz x http cache semantics tgz vulnerable library found in base branch master vulnerability details this affects versions of the package http cache semantics before the issue can be exploited via malicious request header values sent to a server when that server reads the cache policy from the request using this library publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution http cache semantics step up your open source security game with mend | 0 |
183,761 | 21,783,887,798 | IssuesEvent | 2022-05-13 22:49:47 | billmcchesney1/concord | https://api.github.com/repos/billmcchesney1/concord | opened | CVE-2022-1650 (High) detected in eventsource-1.0.7.tgz | security vulnerability | ## CVE-2022-1650 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>eventsource-1.0.7.tgz</b></p></summary>
<p>W3C compliant EventSource client for Node.js and browser (polyfill)</p>
<p>Library home page: <a href="https://registry.npmjs.org/eventsource/-/eventsource-1.0.7.tgz">https://registry.npmjs.org/eventsource/-/eventsource-1.0.7.tgz</a></p>
<p>Path to dependency file: /console2/package.json</p>
<p>Path to vulnerable library: /console2/node_modules/eventsource/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-3.4.3.tgz (Root Library)
- webpack-dev-server-3.11.0.tgz
- sockjs-client-1.4.0.tgz
- :x: **eventsource-1.0.7.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Exposure of Sensitive Information to an Unauthorized Actor in GitHub repository eventsource/eventsource prior to v2.0.2.
<p>Publish Date: 2022-05-12
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-1650>CVE-2022-1650</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://huntr.dev/bounties/dc9e467f-be5d-4945-867d-1044d27e9b8e/">https://huntr.dev/bounties/dc9e467f-be5d-4945-867d-1044d27e9b8e/</a></p>
<p>Release Date: 2022-05-12</p>
<p>Fix Resolution: eventsource - 2.0.2</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"eventsource","packageVersion":"1.0.7","packageFilePaths":["/console2/package.json"],"isTransitiveDependency":true,"dependencyTree":"react-scripts:3.4.3;webpack-dev-server:3.11.0;sockjs-client:1.4.0;eventsource:1.0.7","isMinimumFixVersionAvailable":true,"minimumFixVersion":"eventsource - 2.0.2","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2022-1650","vulnerabilityDetails":"Exposure of Sensitive Information to an Unauthorized Actor in GitHub repository eventsource/eventsource prior to v2.0.2.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-1650","cvss3Severity":"high","cvss3Score":"8.1","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"Required","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | True | CVE-2022-1650 (High) detected in eventsource-1.0.7.tgz - ## CVE-2022-1650 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>eventsource-1.0.7.tgz</b></p></summary>
<p>W3C compliant EventSource client for Node.js and browser (polyfill)</p>
<p>Library home page: <a href="https://registry.npmjs.org/eventsource/-/eventsource-1.0.7.tgz">https://registry.npmjs.org/eventsource/-/eventsource-1.0.7.tgz</a></p>
<p>Path to dependency file: /console2/package.json</p>
<p>Path to vulnerable library: /console2/node_modules/eventsource/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-3.4.3.tgz (Root Library)
- webpack-dev-server-3.11.0.tgz
- sockjs-client-1.4.0.tgz
- :x: **eventsource-1.0.7.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Exposure of Sensitive Information to an Unauthorized Actor in GitHub repository eventsource/eventsource prior to v2.0.2.
<p>Publish Date: 2022-05-12
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-1650>CVE-2022-1650</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://huntr.dev/bounties/dc9e467f-be5d-4945-867d-1044d27e9b8e/">https://huntr.dev/bounties/dc9e467f-be5d-4945-867d-1044d27e9b8e/</a></p>
<p>Release Date: 2022-05-12</p>
<p>Fix Resolution: eventsource - 2.0.2</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"eventsource","packageVersion":"1.0.7","packageFilePaths":["/console2/package.json"],"isTransitiveDependency":true,"dependencyTree":"react-scripts:3.4.3;webpack-dev-server:3.11.0;sockjs-client:1.4.0;eventsource:1.0.7","isMinimumFixVersionAvailable":true,"minimumFixVersion":"eventsource - 2.0.2","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2022-1650","vulnerabilityDetails":"Exposure of Sensitive Information to an Unauthorized Actor in GitHub repository eventsource/eventsource prior to v2.0.2.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-1650","cvss3Severity":"high","cvss3Score":"8.1","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"Required","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | non_test | cve high detected in eventsource tgz cve high severity vulnerability vulnerable library eventsource tgz compliant eventsource client for node js and browser polyfill library home page a href path to dependency file package json path to vulnerable library node modules eventsource package json dependency hierarchy react scripts tgz root library webpack dev server tgz sockjs client tgz x eventsource tgz vulnerable library found in base branch master vulnerability details exposure of sensitive information to an unauthorized actor in github repository eventsource eventsource prior to publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution eventsource 
isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree react scripts webpack dev server sockjs client eventsource isminimumfixversionavailable true minimumfixversion eventsource isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails exposure of sensitive information to an unauthorized actor in github repository eventsource eventsource prior to vulnerabilityurl | 0 |
37,078 | 4,777,519,604 | IssuesEvent | 2016-10-27 16:30:54 | UoSGamesGroups/first-semester-l4-5-group-13 | https://api.github.com/repos/UoSGamesGroups/first-semester-l4-5-group-13 | opened | Design 3 horizontal 'T-Junction' chunks | Design | Design 3 horizontal 'T-Junction' chunks. Refer to the design sheet posted by Kenny to garner an understanding of a t-junction chunk.
Assigned Time: 1 Hour | 1.0 | Design 3 horizontal 'T-Junction' chunks - Design 3 horizontal 'T-Junction' chunks. Refer to the design sheet posted by Kenny to garner an understanding of a t-junction chunk.
Assigned Time: 1 Hour | non_test | design horizontal t junction chunks design horizontal t junction chunks refer to the design sheet posted by kenny to garner an understanding of a t junction chunk assigned time hour | 0 |
178,244 | 6,601,607,112 | IssuesEvent | 2017-09-18 02:25:22 | CruCentralCoast/CruiOS | https://api.github.com/repos/CruCentralCoast/CruiOS | closed | Home - Event Day is ambiguous when limit is 2 weeks | Priority: Major Type: Improvement | Since the change from displaying events two weeks away instead of one, displaying the day as SAT is ambiguous. | 1.0 | Home - Event Day is ambiguous when limit is 2 weeks - Since the change from displaying events two weeks away instead of one, displaying the day as SAT is ambiguous. | non_test | home event day is ambiguous when limit is weeks since the change from displaying events two weeks away instead of one displaying the day as sat is ambiguous | 0 |
236,636 | 18,104,454,289 | IssuesEvent | 2021-09-22 17:35:19 | fga-eps-mds/2021-1-Bot | https://api.github.com/repos/fga-eps-mds/2021-1-Bot | closed | Alteração na politica de branch | documentation Time-Capivara | ## Descrição da Issue
A politica de branch está informando que utilizaremos uma branch chamada develop, para intermediar entre a main e as demais, porém, não a utilizaremos.
## Tasks:
- [x] Apagar informações sobre brach develop.
## Critérios de Aceitação:
- [x] Revisão de pelo menos um membro. | 1.0 | Alteração na politica de branch - ## Descrição da Issue
A politica de branch está informando que utilizaremos uma branch chamada develop, para intermediar entre a main e as demais, porém, não a utilizaremos.
## Tasks:
- [x] Apagar informações sobre brach develop.
## Critérios de Aceitação:
- [x] Revisão de pelo menos um membro. | non_test | alteração na politica de branch descrição da issue a politica de branch está informando que utilizaremos uma branch chamada develop para intermediar entre a main e as demais porém não a utilizaremos tasks apagar informações sobre brach develop critérios de aceitação revisão de pelo menos um membro | 0 |
259,961 | 22,579,494,808 | IssuesEvent | 2022-06-28 10:17:14 | dafny-lang/dafny | https://api.github.com/repos/dafny-lang/dafny | opened | Unstable test VerificationDiagnosticsCanBeMigratedAcrossMultipleResolutions | kind: tests | https://github.com/dafny-lang/dafny/runs/7075905732?check_suite_focus=true
```
Starting test execution, please wait...
A total of 1 test files matched the specified pattern.
Failed VerificationDiagnosticsCanBeMigratedAcrossMultipleResolutions [784 ms]
Error Message:
Assert.AreEqual failed. Expected:<file:///verification71.dfy>. Actual:<file:///testFile70.dfy>. Unexpected diagnostics were received whereas none were expected:
Diagnostic { Range = [start: (0, 0), end: (0, 0)], Severity = Error, Code = , CodeDescription = , Source = Parser, Message = [internal error] Parser exception: The operation was canceled., Tags = , RelatedInformation = OmniSharp.Extensions.LanguageServer.Protocol.Models.Container`1[OmniSharp.Extensions.LanguageServer.Protocol.Models.DiagnosticRelatedInformation], Data = },Diagnostic { Range = [start: (4, 4), end: (4, 10)], Severity = Error, Code = , CodeDescription = , Source = Verifier, Message = A postcondition might not hold on this return path., Tags = , RelatedInformation = OmniSharp.Extensions.LanguageServer.Protocol.Models.Container`1[OmniSharp.Extensions.LanguageServer.Protocol.Models.DiagnosticRelatedInformation], Data = }
Stack Trace:
at Microsoft.Dafny.LanguageServer.IntegrationTest.Util.ClientBasedLanguageServerTest.AssertNoDiagnosticsAreComing(CancellationToken cancellationToken) in D:\a\dafny\dafny\Source\DafnyLanguageServer.Test\Util\ClientBasedLanguageServerTest.cs:line 155
at Microsoft.Dafny.LanguageServer.IntegrationTest.Various.DiagnosticMigrationTest.VerificationDiagnosticsCanBeMigratedAcrossMultipleResolutions() in D:\a\dafny\dafny\Source\DafnyLanguageServer.Test\Various\DiagnosticMigrationTest.cs:line 134
Results File: D:\a\dafny\dafny\Source\DafnyLanguageServer.Test\TestResults\runneradmin_fv-az455-620_[202](https://github.com/dafny-lang/dafny/runs/7075905732?check_suite_focus=true#step:8:203)2-06-27_15_27_24.trx
Failed! - Failed: 1, Passed: 286, Skipped: 0, Total: 287, Duration: 5 m 3 s - DafnyLanguageServer.Test.dll (net6.0)
1>Done Building Project "D:\a\dafny\dafny\Source\DafnyLanguageServer.Test\DafnyLanguageServer.Test.csproj" (VSTest target(s)) -- FAILED.
```
| 1.0 | Unstable test VerificationDiagnosticsCanBeMigratedAcrossMultipleResolutions - https://github.com/dafny-lang/dafny/runs/7075905732?check_suite_focus=true
```
Starting test execution, please wait...
A total of 1 test files matched the specified pattern.
Failed VerificationDiagnosticsCanBeMigratedAcrossMultipleResolutions [784 ms]
Error Message:
Assert.AreEqual failed. Expected:<file:///verification71.dfy>. Actual:<file:///testFile70.dfy>. Unexpected diagnostics were received whereas none were expected:
Diagnostic { Range = [start: (0, 0), end: (0, 0)], Severity = Error, Code = , CodeDescription = , Source = Parser, Message = [internal error] Parser exception: The operation was canceled., Tags = , RelatedInformation = OmniSharp.Extensions.LanguageServer.Protocol.Models.Container`1[OmniSharp.Extensions.LanguageServer.Protocol.Models.DiagnosticRelatedInformation], Data = },Diagnostic { Range = [start: (4, 4), end: (4, 10)], Severity = Error, Code = , CodeDescription = , Source = Verifier, Message = A postcondition might not hold on this return path., Tags = , RelatedInformation = OmniSharp.Extensions.LanguageServer.Protocol.Models.Container`1[OmniSharp.Extensions.LanguageServer.Protocol.Models.DiagnosticRelatedInformation], Data = }
Stack Trace:
at Microsoft.Dafny.LanguageServer.IntegrationTest.Util.ClientBasedLanguageServerTest.AssertNoDiagnosticsAreComing(CancellationToken cancellationToken) in D:\a\dafny\dafny\Source\DafnyLanguageServer.Test\Util\ClientBasedLanguageServerTest.cs:line 155
at Microsoft.Dafny.LanguageServer.IntegrationTest.Various.DiagnosticMigrationTest.VerificationDiagnosticsCanBeMigratedAcrossMultipleResolutions() in D:\a\dafny\dafny\Source\DafnyLanguageServer.Test\Various\DiagnosticMigrationTest.cs:line 134
Results File: D:\a\dafny\dafny\Source\DafnyLanguageServer.Test\TestResults\runneradmin_fv-az455-620_[202](https://github.com/dafny-lang/dafny/runs/7075905732?check_suite_focus=true#step:8:203)2-06-27_15_27_24.trx
Failed! - Failed: 1, Passed: 286, Skipped: 0, Total: 287, Duration: 5 m 3 s - DafnyLanguageServer.Test.dll (net6.0)
1>Done Building Project "D:\a\dafny\dafny\Source\DafnyLanguageServer.Test\DafnyLanguageServer.Test.csproj" (VSTest target(s)) -- FAILED.
```
| test | unstable test verificationdiagnosticscanbemigratedacrossmultipleresolutions starting test execution please wait a total of test files matched the specified pattern failed verificationdiagnosticscanbemigratedacrossmultipleresolutions error message assert areequal failed expected actual unexpected diagnostics were received whereas none were expected diagnostic range severity error code codedescription source parser message parser exception the operation was canceled tags relatedinformation omnisharp extensions languageserver protocol models container data diagnostic range severity error code codedescription source verifier message a postcondition might not hold on this return path tags relatedinformation omnisharp extensions languageserver protocol models container data stack trace at microsoft dafny languageserver integrationtest util clientbasedlanguageservertest assertnodiagnosticsarecoming cancellationtoken cancellationtoken in d a dafny dafny source dafnylanguageserver test util clientbasedlanguageservertest cs line at microsoft dafny languageserver integrationtest various diagnosticmigrationtest verificationdiagnosticscanbemigratedacrossmultipleresolutions in d a dafny dafny source dafnylanguageserver test various diagnosticmigrationtest cs line results file d a dafny dafny source dafnylanguageserver test testresults runneradmin fv failed failed passed skipped total duration m s dafnylanguageserver test dll done building project d a dafny dafny source dafnylanguageserver test dafnylanguageserver test csproj vstest target s failed | 1 |
204,807 | 15,554,838,366 | IssuesEvent | 2021-03-16 04:50:13 | packit/ogr | https://api.github.com/repos/packit/ogr | closed | GithubObject has no attribute "NotSet" | stale testing | By running `pre-commit run --all-files` getting the below error without using the strict mode. I am not sure what NotSet does in this case.
```
ogr/services/github/service.py:169: error: Module has no attribute "NotSet"
```
The changes were made in this PR - https://github.com/packit/ogr/pull/476
| 1.0 | GithubObject has no attribute "NotSet" - By running `pre-commit run --all-files` getting the below error without using the strict mode. I am not sure what NotSet does in this case.
```
ogr/services/github/service.py:169: error: Module has no attribute "NotSet"
```
The changes were made in this PR - https://github.com/packit/ogr/pull/476
| test | githubobject has no attribute notset by running pre commit run all files getting the below error without using the strict mode i am not sure what notset does in this case ogr services github service py error module has no attribute notset the changes were made in this pr | 1 |
238,830 | 19,769,821,450 | IssuesEvent | 2022-01-17 08:53:48 | pingcap/tiflow | https://api.github.com/repos/pingcap/tiflow | closed | testOptimist.TestOptimistLockConflict is not stabled | component/test | ### Which jobs are flaking?
DM-UT
### Which test(s) are flaking?
testOptimist.TestOptimistLockConflict
```
FAIL: optimist_test.go:612: testOptimist.TestOptimistLockConflict
optimist_test.go:697:
c.Assert(len(opCh), Equals, 1)
... obtained int = 0
... expected int = 1
```
### Jenkins logs or GitHub Actions link
https://ci.pingcap.net/blue/organizations/jenkins/atom-ut/detail/atom-ut/7501/tests/
### Anything else we need to know
- Does this test exist for other branches as well?
- Has there been a high frequency of failure lately? | 1.0 | testOptimist.TestOptimistLockConflict is not stabled - ### Which jobs are flaking?
DM-UT
### Which test(s) are flaking?
testOptimist.TestOptimistLockConflict
```
FAIL: optimist_test.go:612: testOptimist.TestOptimistLockConflict
optimist_test.go:697:
c.Assert(len(opCh), Equals, 1)
... obtained int = 0
... expected int = 1
```
### Jenkins logs or GitHub Actions link
https://ci.pingcap.net/blue/organizations/jenkins/atom-ut/detail/atom-ut/7501/tests/
### Anything else we need to know
- Does this test exist for other branches as well?
- Has there been a high frequency of failure lately? | test | testoptimist testoptimistlockconflict is not stabled which jobs are flaking dm ut which test s are flaking testoptimist testoptimistlockconflict fail optimist test go testoptimist testoptimistlockconflict optimist test go c assert len opch equals obtained int expected int jenkins logs or github actions link anything else we need to know does this test exist for other branches as well has there been a high frequency of failure lately | 1 |
138,394 | 11,200,390,647 | IssuesEvent | 2020-01-03 21:37:09 | spacetelescope/jwst | https://api.github.com/repos/spacetelescope/jwst | closed | Team learn about the new regression architecture | jira testing | Issue [JP-1164](https://jira.stsci.edu/browse/JP-1164) was created by Jonathan Eisenhamer:
Team learns about the new regression architecture | 1.0 | Team learn about the new regression architecture - Issue [JP-1164](https://jira.stsci.edu/browse/JP-1164) was created by Jonathan Eisenhamer:
Team learns about the new regression architecture | test | team learn about the new regression architecture issue was created by jonathan eisenhamer team learns about the new regression architecture | 1 |
293,374 | 25,288,039,378 | IssuesEvent | 2022-11-16 21:06:41 | elastic/kibana | https://api.github.com/repos/elastic/kibana | opened | Failing test: X-Pack Accessibility Tests.x-pack/test/accessibility/apps/rules_connectors·ts - Kibana Alerts - rules tab accessibility tests a11y test on create rules panel | failed-test | A test failed on a tracked branch
```
Error: a11y report:
VIOLATION
[aria-valid-attr-value]: Ensures all ARIA attributes have valid values
Impact: critical
Help: https://dequeuniversity.com/rules/axe/4.0/aria-valid-attr-value?application=axeAPI
Elements:
- <div aria-labelledby="flyoutRuleAddTitle" role="dialog" class="euiFlyout css-1ymop35-euiFlyout-l-m-overlay-right" tabindex="-1" style="max-inline-size: 620px;">
at AccessibilityService.assertValidAxeReport (test/accessibility/services/a11y/a11y.ts:66:13)
at AccessibilityService.testAppSnapshot (test/accessibility/services/a11y/a11y.ts:47:10)
at runMicrotasks (<anonymous>)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at Context.<anonymous> (x-pack/test/accessibility/apps/rules_connectors.ts:34:7)
at Object.apply (node_modules/@kbn/test/target_node/src/functional_test_runner/lib/mocha/wrap_function.js:78:16)
```
First failure: [CI Build - main](https://buildkite.com/elastic/kibana-on-merge/builds/23644#01848214-82ae-496e-bc3b-9b420e3475a6)
<!-- kibanaCiData = {"failed-test":{"test.class":"X-Pack Accessibility Tests.x-pack/test/accessibility/apps/rules_connectors·ts","test.name":"Kibana Alerts - rules tab accessibility tests a11y test on create rules panel","test.failCount":1}} --> | 1.0 | Failing test: X-Pack Accessibility Tests.x-pack/test/accessibility/apps/rules_connectors·ts - Kibana Alerts - rules tab accessibility tests a11y test on create rules panel - A test failed on a tracked branch
```
Error: a11y report:
VIOLATION
[aria-valid-attr-value]: Ensures all ARIA attributes have valid values
Impact: critical
Help: https://dequeuniversity.com/rules/axe/4.0/aria-valid-attr-value?application=axeAPI
Elements:
- <div aria-labelledby="flyoutRuleAddTitle" role="dialog" class="euiFlyout css-1ymop35-euiFlyout-l-m-overlay-right" tabindex="-1" style="max-inline-size: 620px;">
at AccessibilityService.assertValidAxeReport (test/accessibility/services/a11y/a11y.ts:66:13)
at AccessibilityService.testAppSnapshot (test/accessibility/services/a11y/a11y.ts:47:10)
at runMicrotasks (<anonymous>)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at Context.<anonymous> (x-pack/test/accessibility/apps/rules_connectors.ts:34:7)
at Object.apply (node_modules/@kbn/test/target_node/src/functional_test_runner/lib/mocha/wrap_function.js:78:16)
```
First failure: [CI Build - main](https://buildkite.com/elastic/kibana-on-merge/builds/23644#01848214-82ae-496e-bc3b-9b420e3475a6)
<!-- kibanaCiData = {"failed-test":{"test.class":"X-Pack Accessibility Tests.x-pack/test/accessibility/apps/rules_connectors·ts","test.name":"Kibana Alerts - rules tab accessibility tests a11y test on create rules panel","test.failCount":1}} --> | test | failing test x pack accessibility tests x pack test accessibility apps rules connectors·ts kibana alerts rules tab accessibility tests test on create rules panel a test failed on a tracked branch error report violation ensures all aria attributes have valid values impact critical help elements at accessibilityservice assertvalidaxereport test accessibility services ts at accessibilityservice testappsnapshot test accessibility services ts at runmicrotasks at processticksandrejections node internal process task queues at context x pack test accessibility apps rules connectors ts at object apply node modules kbn test target node src functional test runner lib mocha wrap function js first failure | 1 |
344,898 | 30,771,108,746 | IssuesEvent | 2023-07-30 22:58:32 | pokt-network/pocket | https://api.github.com/repos/pokt-network/pocket | closed | [E2E] Cycling nodes mechanism | triage testing | ## Objective
Develop a mechanism for cycling nodes on and off in the network simulation environment to simulate nodes leaving and joining the network.
## Origin Document
Byzantine Network Simulator [research results](https://www.notion.so/pocketnetwork/Byzantine-Network-Simulator-fd9002b599aa44e98eb94ec400d5e6ef)
## Goals
- Implement a mechanism for cycling nodes on and off
- Ensure compatibility with the network simulation environment
- Ensure the mechanism works smoothly with DevNet environments
- Support for targeting and killing pods with kubectl commands
## Deliverable
- [ ] Mechanism for cycling nodes on and off in the network simulation environment
- [ ] Documentation on how to use the cycling mechanism
## Non-goals / Non-deliverables
- Providing a mechanism for every possible node lifecycle event beyond joining and leaving the network
## General issue deliverables
- [ ] Update the appropriate CHANGELOG
- [ ] Update any relevant READMEs (local and/or global)
- [ ] Update any relevant global documentation & references
- [ ] If applicable, update the source code tree explanation
- [ ] If applicable, add or update a state, sequence or flowchart diagram using [mermaid](https://mermaid-js.github.io/mermaid/)
## Testing Methodology
- [ ] **All tests**: `make test_all`
- [ ] **LocalNet**: verify a `LocalNet` is still functioning correctly by following the instructions at [docs/development/README.md](https://github.com/pokt-network/pocket/tree/main/docs/development)
---
**Creator**: @jessicadaugherty | 1.0 | [E2E] Cycling nodes mechanism - ## Objective
Develop a mechanism for cycling nodes on and off in the network simulation environment to simulate nodes leaving and joining the network.
## Origin Document
Byzantine Network Simulator [research results](https://www.notion.so/pocketnetwork/Byzantine-Network-Simulator-fd9002b599aa44e98eb94ec400d5e6ef)
## Goals
- Implement a mechanism for cycling nodes on and off
- Ensure compatibility with the network simulation environment
- Ensure the mechanism works smoothly with DevNet environments
- Support for targeting and killing pods with kubectl commands
## Deliverable
- [ ] Mechanism for cycling nodes on and off in the network simulation environment
- [ ] Documentation on how to use the cycling mechanism
## Non-goals / Non-deliverables
- Providing a mechanism for every possible node lifecycle event beyond joining and leaving the network
## General issue deliverables
- [ ] Update the appropriate CHANGELOG
- [ ] Update any relevant READMEs (local and/or global)
- [ ] Update any relevant global documentation & references
- [ ] If applicable, update the source code tree explanation
- [ ] If applicable, add or update a state, sequence or flowchart diagram using [mermaid](https://mermaid-js.github.io/mermaid/)
## Testing Methodology
- [ ] **All tests**: `make test_all`
- [ ] **LocalNet**: verify a `LocalNet` is still functioning correctly by following the instructions at [docs/development/README.md](https://github.com/pokt-network/pocket/tree/main/docs/development)
---
**Creator**: @jessicadaugherty | test | cycling nodes mechansim objective develop a mechanism for cycling nodes on and off in the network simulation environment to simulate nodes leaving and joining the network origin document byzantine network simulator goals implement a mechanism for cycling nodes on and off ensure compatibility with the network simulation environment ensure the mechanism works smoothly with devnet environments support for targeting and killing pods with kubectl commands deliverable mechanism for cycling nodes on and off in the network simulation environment documentation on how to use the cycling mechanism non goals non deliverables providing a mechanism for every possible node lifecycle event beyond joining and leaving the network general issue deliverables update the appropriate changelog update any relevant readmes local and or global update any relevant global documentation references if applicable update the source code tree explanation if applicable add or update a state sequence or flowchart diagram using testing methodology all tests make test all localnet verify a localnet is still functioning correctly by following the instructions at creator jessicadaugherty | 1 |
779,044 | 27,336,949,920 | IssuesEvent | 2023-02-26 10:33:16 | docker-mailserver/docker-mailserver | https://api.github.com/repos/docker-mailserver/docker-mailserver | closed | [BUG] Changelog is not updated to 11.3.1 | kind/bug meta/needs triage priority/medium | ### Miscellaneous first checks
- [X] I checked that all ports are open and not blocked by my ISP / hosting provider.
- [X] I know that SSL errors are likely the result of a wrong setup on the user side and not caused by DMS itself. I'm confident my setup is correct.
### Affected Component(s)
No debug
### What happened and when does this occur?
Server keeps sending notification:
```
Subject: Mailserver update available! [ 11.3.0 --> 11.3.1 ]
Hello ****!
There is a docker-mailserver update available on your host: localhost
Current version: 11.3.0
Latest version: 11.3.1
Changelog: https://github.com/docker-mailserver/docker-mailserver/blob/master/CHANGELOG.md
```
In commit b75fc448eae0185b991d8707764fc7f4b5381121, which is the 11.3.1 release tag, the CHANGELOG.md is still at 11.3.0 instead of 11.3.1.
Refer: https://github.com/docker-mailserver/docker-mailserver/tree/b75fc448eae0185b991d8707764fc7f4b5381121
### What did you expect to happen?
updated to 11.3.1
### How do we replicate the issue?
1. Create container based on 11.3.1 image `docker.io/mailserver/docker-mailserver:latest`
https://hub.docker.com/layers/mailserver/docker-mailserver/11.3.1/images/sha256-e15b8840db6fbe3a95052b2ab903e44a3c0c167fcf142ea83fd6f9dfa13f1447?context=explore
### DMS version
v11.3.1
### What operating system is DMS running on?
Linux
### Which operating system version?
AlmaLinux 8.7
### What instruction set architecture is DMS running on?
AMD64 / x86_64
### What container orchestration tool are you using?
Docker
### docker-compose.yml
```yml
services:
mailserver:
image: docker.io/mailserver/docker-mailserver:latest
container_name: mailserver
# If the FQDN for your mail-server is only two labels (eg: example.com),
# you can assign this entirely to `hostname` and remove `domainname`.
hostname: mail
domainname: example.com
env_file: mailserver.env
# More information about the mail-server ports:
# https://docker-mailserver.github.io/docker-mailserver/edge/config/security/understanding-the-ports/
# To avoid conflicts with yaml base-60 float, DO NOT remove the quotation marks.
ports:
- "25:25" # SMTP (explicit TLS => STARTTLS)
- "143:143" # IMAP4 (explicit TLS => STARTTLS)
- "465:465" # ESMTP (implicit TLS)
- "587:587" # ESMTP (explicit TLS => STARTTLS)
- "993:993" # IMAP4 (implicit TLS)
volumes:
- ./docker-data/dms/mail-data/:/var/mail/
- ./docker-data/dms/mail-state/:/var/mail-state/
- ./docker-data/dms/mail-logs/:/var/log/mail/
- ./docker-data/dms/config/:/tmp/docker-mailserver/
- /etc/localtime:/etc/localtime:ro
restart: always
stop_grace_period: 1m
cap_add:
- NET_ADMIN
healthcheck:
test: "ss --listening --tcp | grep -P 'LISTEN.+:smtp' || exit 1"
timeout: 3s
retries: 0
```
### Relevant log output
```Text
Subject: Mailserver update available! [ 11.3.0 --> 11.3.1 ]
Hello ****!
There is a docker-mailserver update available on your host: localhost
Current version: 11.3.0
Latest version: 11.3.1
Changelog: https://github.com/docker-mailserver/docker-mailserver/blob/master/CHANGELOG.md
```
### Other relevant information
_No response_
### What level of experience do you have with Docker and mail servers?
- [ ] I am inexperienced with docker
- [X] I am rather experienced with docker
- [ ] I am inexperienced with mail servers
- [X] I am rather experienced with mail servers
- [ ] I am uncomfortable with the CLI
- [X] I am rather comfortable with the CLI
### Code of conduct
- [X] I have read this project's [Code of Conduct](https://github.com/docker-mailserver/docker-mailserver/blob/master/CODE_OF_CONDUCT.md) and I agree
- [X] I have read the [README](https://github.com/docker-mailserver/docker-mailserver/blob/master/README.md) and the [documentation](https://docker-mailserver.github.io/docker-mailserver/edge/) and I searched the [issue tracker](https://github.com/docker-mailserver/docker-mailserver/issues?q=is%3Aissue) but could not find a solution
### Improvements to this form?
TLTR | 1.0 | [BUG] Changelog is not updated to 11.3.1 - ### Miscellaneous first checks
- [X] I checked that all ports are open and not blocked by my ISP / hosting provider.
- [X] I know that SSL errors are likely the result of a wrong setup on the user side and not caused by DMS itself. I'm confident my setup is correct.
### Affected Component(s)
No debug
### What happened and when does this occur?
Server keeps sending notification:
```
Subject: Mailserver update available! [ 11.3.0 --> 11.3.1 ]
Hello ****!
There is a docker-mailserver update available on your host: localhost
Current version: 11.3.0
Latest version: 11.3.1
Changelog: https://github.com/docker-mailserver/docker-mailserver/blob/master/CHANGELOG.md
```
In commit b75fc448eae0185b991d8707764fc7f4b5381121, which is the 11.3.1 release tag, the CHANGELOG.md is still at 11.3.0 instead of 11.3.1.
Refer: https://github.com/docker-mailserver/docker-mailserver/tree/b75fc448eae0185b991d8707764fc7f4b5381121
### What did you expect to happen?
updated to 11.3.1
### How do we replicate the issue?
1. Create container based on 11.3.1 image `docker.io/mailserver/docker-mailserver:latest`
https://hub.docker.com/layers/mailserver/docker-mailserver/11.3.1/images/sha256-e15b8840db6fbe3a95052b2ab903e44a3c0c167fcf142ea83fd6f9dfa13f1447?context=explore
### DMS version
v11.3.1
### What operating system is DMS running on?
Linux
### Which operating system version?
AlmaLinux 8.7
### What instruction set architecture is DMS running on?
AMD64 / x86_64
### What container orchestration tool are you using?
Docker
### docker-compose.yml
```yml
services:
mailserver:
image: docker.io/mailserver/docker-mailserver:latest
container_name: mailserver
# If the FQDN for your mail-server is only two labels (eg: example.com),
# you can assign this entirely to `hostname` and remove `domainname`.
hostname: mail
domainname: example.com
env_file: mailserver.env
# More information about the mail-server ports:
# https://docker-mailserver.github.io/docker-mailserver/edge/config/security/understanding-the-ports/
# To avoid conflicts with yaml base-60 float, DO NOT remove the quotation marks.
ports:
- "25:25" # SMTP (explicit TLS => STARTTLS)
- "143:143" # IMAP4 (explicit TLS => STARTTLS)
- "465:465" # ESMTP (implicit TLS)
- "587:587" # ESMTP (explicit TLS => STARTTLS)
- "993:993" # IMAP4 (implicit TLS)
volumes:
- ./docker-data/dms/mail-data/:/var/mail/
- ./docker-data/dms/mail-state/:/var/mail-state/
- ./docker-data/dms/mail-logs/:/var/log/mail/
- ./docker-data/dms/config/:/tmp/docker-mailserver/
- /etc/localtime:/etc/localtime:ro
restart: always
stop_grace_period: 1m
cap_add:
- NET_ADMIN
healthcheck:
test: "ss --listening --tcp | grep -P 'LISTEN.+:smtp' || exit 1"
timeout: 3s
retries: 0
```
### Relevant log output
```Text
Subject: Mailserver update available! [ 11.3.0 --> 11.3.1 ]
Hello ****!
There is a docker-mailserver update available on your host: localhost
Current version: 11.3.0
Latest version: 11.3.1
Changelog: https://github.com/docker-mailserver/docker-mailserver/blob/master/CHANGELOG.md
```
### Other relevant information
_No response_
### What level of experience do you have with Docker and mail servers?
- [ ] I am inexperienced with docker
- [X] I am rather experienced with docker
- [ ] I am inexperienced with mail servers
- [X] I am rather experienced with mail servers
- [ ] I am uncomfortable with the CLI
- [X] I am rather comfortable with the CLI
### Code of conduct
- [X] I have read this project's [Code of Conduct](https://github.com/docker-mailserver/docker-mailserver/blob/master/CODE_OF_CONDUCT.md) and I agree
- [X] I have read the [README](https://github.com/docker-mailserver/docker-mailserver/blob/master/README.md) and the [documentation](https://docker-mailserver.github.io/docker-mailserver/edge/) and I searched the [issue tracker](https://github.com/docker-mailserver/docker-mailserver/issues?q=is%3Aissue) but could not find a solution
### Improvements to this form?
TLTR | non_test | changelog is not updated to miscellaneous first checks i checked that all ports are open and not blocked by my isp hosting provider i know that ssl errors are likely the result of a wrong setup on the user side and not caused by dms itself i m confident my setup is correct affected component s no debug what happened and when does this occur server keeps sending notification subject mailserver update available hello there is a docker mailserver update available on your host localhost current version latest version changelog in the commit which release tag the changelog md as well is on instead refer what did you expect to happen updated to how do we replicate the issue create container based on image docker io mailserver docker mailserver latest dms version what operating system is dms running on linux which operating system version almalinux what instruction set architecture is dms running on what container orchestration tool are you using docker docker compose yml yml services mailserver image docker io mailserver docker mailserver latest container name mailserver if the fqdn for your mail server is only two labels eg example com you can assign this entirely to hostname and remove domainname hostname mail domainname example com env file mailserver env more information about the mail server ports to avoid conflicts with yaml base float do not remove the quotation marks ports smtp explicit tls starttls explicit tls starttls esmtp implicit tls esmtp explicit tls starttls implicit tls volumes docker data dms mail data var mail docker data dms mail state var mail state docker data dms mail logs var log mail docker data dms config tmp docker mailserver etc localtime etc localtime ro restart always stop grace period cap add net admin healthcheck test ss listening tcp grep p listen smtp exit timeout retries relevant log output text subject mailserver update available hello there is a docker mailserver update available on your host localhost current version latest version changelog other relevant information no response what level of experience do you have with docker and mail servers i am inexperienced with docker i am rather experienced with docker i am inexperienced with mail servers i am rather experienced with mail servers i am uncomfortable with the cli i am rather comfortable with the cli code of conduct i have read this project s and i agree i have read the and the and i searched the but could not find a solution improvements to this form tltr | 0 |
226,484 | 18,021,882,208 | IssuesEvent | 2021-09-16 20:35:41 | RasaHQ/rasa | https://api.github.com/repos/RasaHQ/rasa | closed | failed_test_stories.yml is not printing the correct message | type:bug :bug: area:rasa-oss :ferris_wheel: priority:high effort:enable-squad/2 area:rasa-oss/model-testing | **Rasa version**: 2.8.0
**Rasa SDK version** (if used & relevant): N/A
**Rasa X version** (if used & relevant): N/A
**Python version**: 3.8
**Operating system** (windows, osx, ...): Linux
**Issue**: failed_test_stories.yml includes a test story that is actually correct
The bot is attached.
When running `rasa test` , you will see that this test story is marked as a failed test story, but it does not say why.
**file: test_stories.yml**
```yaml
- story: play ping pong
steps:
- user: |
hello there!
intent: greet
- action: utter_greet
- user: |
play ping pong
intent: play_game
- action: utter_lets_play
```
**file: failed_test_stories.yml**
```yaml
version: "2.0"
stories:
- story: play ping pong (./tests/test_stories.yml)
steps:
- intent: greet
- action: utter_greet
- intent: play_game
- action: utter_lets_play
```
In previous versions of Rasa, it would print the reason why it failed:
**file: failed_test_stories.yml**
```yaml
version: "2.0"
stories:
- story: play ping pong
steps:
- intent: greet
- action: utter_greet
- intent: play_game # predicted: play_game: play [ping pong](game)
- action: utter_lets_play
```
It all works when updating the test story to include the entity extraction, like this:
**file: test_stories.yml (with corrected version)**
```yaml
- story: play ping pong
steps:
- user: |
hello there!
intent: greet
- action: utter_greet
- user: |
play [ping pong](game)
intent: play_game
- action: utter_lets_play
```
[the_model.zip](https://github.com/RasaHQ/rasa/files/6941250/the_model.zip)
| 1.0 | failed_test_stories.yml is not printing the correct message - **Rasa version**: 2.8.0
**Rasa SDK version** (if used & relevant): N/A
**Rasa X version** (if used & relevant): N/A
**Python version**: 3.8
**Operating system** (windows, osx, ...): Linux
**Issue**: failed_test_stories.yml includes a test story that is actually correct
The bot is attached.
When running `rasa test` , you will see that this test story is marked as a failed test story, but it does not say why.
**file: test_stories.yml**
```yaml
- story: play ping pong
steps:
- user: |
hello there!
intent: greet
- action: utter_greet
- user: |
play ping pong
intent: play_game
- action: utter_lets_play
```
**file: failed_test_stories.yml**
```yaml
version: "2.0"
stories:
- story: play ping pong (./tests/test_stories.yml)
steps:
- intent: greet
- action: utter_greet
- intent: play_game
- action: utter_lets_play
```
In previous versions of Rasa, it would print the reason why it failed:
**file: failed_test_stories.yml**
```yaml
version: "2.0"
stories:
- story: play ping pong
steps:
- intent: greet
- action: utter_greet
- intent: play_game # predicted: play_game: play [ping pong](game)
- action: utter_lets_play
```
It all works when updating the test story to include the entity extraction, like this:
**file: test_stories.yml (with corrected version)**
```yaml
- story: play ping pong
steps:
- user: |
hello there!
intent: greet
- action: utter_greet
- user: |
play [ping pong](game)
intent: play_game
- action: utter_lets_play
```
[the_model.zip](https://github.com/RasaHQ/rasa/files/6941250/the_model.zip)
| test | failed test stories yml is not printing the correct message rasa version rasa sdk version if used relevant n a rasa x version if used relevant n a python version operating system windows osx linux issue failed test stories yml includes a test story that is actually correct the bot is attached when running rasa test you will see that this test story is marked as a failed test story but it does not say why file test stories yml yaml story play ping pong steps user hello there intent greet action utter greet user play ping pong intent play game action utter lets play file failed test stories yml yaml version stories story play ping pong tests test stories yml steps intent greet action utter greet intent play game action utter lets play in previous versions of rasa it would print the reason why it failed file failed test stories yml yaml version stories story play ping pong steps intent greet action utter greet intent play game predicted play game play game action utter lets play it all works when updating the test story to include the entity extraction like this file test stories yml with corrected version yaml story play ping pong steps user hello there intent greet action utter greet user play game intent play game action utter lets play | 1 |
27,904 | 4,345,488,007 | IssuesEvent | 2016-07-29 12:52:52 | metafizzy/isotope | https://api.github.com/repos/metafizzy/isotope | closed | arrangeComplete and layoutComplete events fire too early | test case required | I found out that an issue from the past is still present.
Both events (arrangeComplete and layoutComplete) fire about 1 second too early. I need to use
```
window.setTimeout(function() {
...
});
```
to get correct results. I am using transitions. Without transition everything works well.
I don't want to use this setTimeout()-Method do fix my program...
Thanks | 1.0 | arrangeComplete and layoutComplete events fire too early - I found out that an issue from past is still existing.
Both events, (arrangeComplete and layoutComplete) fires about 1 second too early. I need to use
```
window.setTimeout(function() {
...
});
```
to get correct results. I am using transitions. Without transition everything works well.
I don't want to use this setTimeout()-Method do fix my program...
Thanks | test | arrangecomplete and layoutcomplete events fire too early i found out that an issue from past is still existing both events arrangecomplete and layoutcomplete fires about second too early i need to use window settimeout function to get correct results i am using transitions without transition everything works well i don t want to use this settimeout method do fix my program thanks | 1 |
199,838 | 15,078,680,025 | IssuesEvent | 2021-02-05 09:05:21 | input-output-hk/jormungandr | https://api.github.com/repos/input-output-hk/jormungandr | closed | [Tests] Scenario tests should use the same runtime object for each async call | A-tests | Jormungandr scenario tests project is using the runtime in an incorrect way. Right now it creates a runtime only for client bootstrap. Each grpc method execution will produce an error.
Putting runtime object to context and share it for each node should solve that issue. Hopefully we are not using any grpc call in scenario, but in nearest future it can change.
| 1.0 | [Tests] Scenario tests should use the same runtime object for each async call - Jormungandr scenario tests project is using the runtime in an incorrect way. Right now it creates a runtime only for client bootstrap. Each grpc method execution will produce an error.
Putting runtime object to context and share it for each node should solve that issue. Hopefully we are not using any grpc call in scenario, but in nearest future it can change.
| test | scenario tests should use the same runtime object for each async call jormungandr scenario tests project is using runtime in an incorrect way right now it creating runtime only for client bootstrap every each grpc method execution will produce and error putting runtime object to context and share it for each node should solve that issue hopefully we are not using any grpc call in scenario but in nearest future it can change | 1 |
209,362 | 16,017,998,162 | IssuesEvent | 2021-04-20 18:32:52 | Automattic/wp-calypso | https://api.github.com/repos/Automattic/wp-calypso | opened | Social Icons Block: Mail Icon Links Shouldn't Require `mailto:` | Empathy Testing [Type] Feature Request | What
It can be expected that the mail icon would be used to link to email addresses. So if an email address is entered without the mailto: part, it should be added automatically upon publishing.
Why
Some users aren't aware of mailto: links. | 1.0 | Social Icons Block: Mail Icon Links Shouldn't Require `mailto:` - What
It can be expected that the mail icon would be used to link to email addresses. So if an email address is entered without the mailto: part, it should be added automatically upon publishing.
Why
Some users aren't aware of mailto: links. | test | social icons block mail icon links shouldn t require mailto what it can be expected that the mail icon would be used to link to email addresses so if an email address is entered without the mailto part it should be added automatically upon publishing why some users aren t aware of mailto links | 1 |
185,119 | 14,333,988,788 | IssuesEvent | 2020-11-27 07:11:25 | peacefulcraft-network/TrenchPvP | https://api.github.com/repos/peacefulcraft-network/TrenchPvP | closed | Teleport players to current arena spectator point on join | enhancement fixed needs testing project | When a player joins the server, they should be teleported to the current arena spectator point. | 1.0 | Teleport players to current arena spectator point on join - When a player joins the server, they should be teleported to the current arena spectator point. | test | teleport players to current arena spectator point on join when a player joins the server they should be teleported to the current arena spectator point | 1 |
132,188 | 10,735,740,841 | IssuesEvent | 2019-10-29 09:26:00 | kyma-project/kyma | https://api.github.com/repos/kyma-project/kyma | closed | Enable e2e-kubeless test | priority/critical test-failing test-missing | <!-- Thank you for your contribution. Before you submit the issue:
1. Search open and closed issues for duplicates.
2. Read the contributing guidelines.
-->
**Description**
As we agreed in https://github.com/kyma-project/community/issues/369 we are turning off flaky tests in [this](https://github.com/kyma-project/kyma/pull/5901) PR, and `e2e-kubeless` is one of them. This needs to be re-enabled.
| 2.0 | Enable e2e-kubeless test - <!-- Thank you for your contribution. Before you submit the issue:
1. Search open and closed issues for duplicates.
2. Read the contributing guidelines.
-->
**Description**
As we agreed in https://github.com/kyma-project/community/issues/369 we are turning off flaky tests in [this](https://github.com/kyma-project/kyma/pull/5901) PR, and `e2e-kubeless` is one of them. This needs to be re-enabled.
| test | enable kubeless test thank you for your contribution before you submit the issue search open and closed issues for duplicates read the contributing guidelines description as we agreed in we are turning off flaky tests in pr and kubeless is one of them this needs to be re enabled | 1 |
188,287 | 14,443,119,186 | IssuesEvent | 2020-12-07 19:09:30 | CUCentralAdvancement/giving-frontend | https://api.github.com/repos/CUCentralAdvancement/giving-frontend | closed | Try Testing Review Apps Instead Of App Built On Travis CI | payments stale testing | I can see that the transactions post to the Authorize.net sandbox environment, but the response gets messed up in the iframe communicator dealing with the response. The payment button just says "Processing..." and hangs there. | 1.0 | Try Testing Review Apps Instead Of App Built On Travis CI - I can see that the transactions post to the Authorize.net sandbox environment, but the response gets messed up in the iframe communicator dealing with the response. The payment button just says "Processing..." and hangs there. | test | try testing review apps instead of app built on travis ci i can see that the transactions post to the authorize net sandbox environment but the response gets messed up in the iframe communicator dealing with the response the payment button just says processing and hangs there | 1 |
250,934 | 21,390,726,785 | IssuesEvent | 2022-04-21 06:47:24 | woocommerce/woocommerce-gutenberg-products-block | https://api.github.com/repos/woocommerce/woocommerce-gutenberg-products-block | closed | Critical flows: Shopper → Checkout → Can create an account | type: enhancement ◼️ block: checkout category: tests | ### Story
As a shopper, I want to be able to create an account during the checkout so that I can look up my order at a later point.
### File
`tests/e2e/specs/shopper/checkout-account.test.js`
### Test
1. Make sure the "Allow customers to create an account during checkout" is enabled in WC Settings
2. Make sure the "Allow shoppers to sign up for a user account during checkout" option is enabled on the Checkout block
3. Visit the store as a logged out user
4. Add a product to the cart
5. Go to checkout block page
6. Make sure `Create an account` checkbox is visible.
7. Place the order
8. Check that the user that placed the order is logged in
| 1.0 | Critical flows: Shopper → Checkout → Can create an account - ### Story
As a shopper, I want to be able to create an account during the checkout so that I can look up my order at a later point.
### File
`tests/e2e/specs/shopper/checkout-account.test.js`
### Test
1. Make sure the "Allow customers to create an account during checkout" is enabled in WC Settings
2. Make sure the "Allow shoppers to sign up for a user account during checkout" option is enabled on the Checkout block
3. Visit the store as a logged out user
4. Add a product to the cart
5. Go to checkout block page
6. Make sure `Create an account` checkbox is visible.
7. Place the order
8. Check that the user that placed the order is logged in
| test | critical flows shopper → checkout → can create an account story as a shopper i want to be able to create an account during the checkout so that i can look up my order at a later point file tests specs shopper checkout account test js test make sure the allow customers to create an account during checkout is enabled in wc settings make sure the allow shoppers to sign up for a user account during checkout option is enabled on the checkout block visit the store as a logged out user add a product to the cart go to checkout block page make sure create an account checkbox is visible place the order check that the user that placed the order is logged in | 1 |
165,306 | 6,274,011,508 | IssuesEvent | 2017-07-18 00:17:41 | minishift/minishift | https://api.github.com/repos/minishift/minishift | closed | minishift start fails at "Checking Docker client" while on VPN | kind/bug os/windows priority/minor status/needs-info | Hello -
My dev team is coming up to speed on OpenShift Origin and is able to successfully run the following minishift startup locally while connected to our domain at the office:
`minishift start --memory 4096 --cpus 2 --vm-driver virtualbox --show-libmachine-logs --iso-url centos`
So we have VirtualBox installed, and all works fine. However, we are fortunate to be able to work remotely occasionally, running over VPN. When we try to start minishift in this remote case while over VPN using the same start command, we get an error with the following in the logs when Docker is being checked:
```
Setting Docker configuration on the remote daemon...
-- Checking OpenShift client ... OK
-- Checking Docker client ... FAIL
Error: cannot communicate with Docker
Solution:
Please install Docker tools by following instructions at:
https://docs.docker.com/windows/
Once installed, run this command with the --create-machine argument to create a
new Docker machine that will run OpenShift.
Caused By:
Error: Get https://192.168.99.100:2376/_ping: dial tcp 192.168.99.100:2376:
connectex: A connection attempt failed because the connected party did not
properly respond after a period of time, or established connection failed
because connected host has failed to respond.
OpenShift provisioning failed. origin container failed to start.
```
I've googled this issue, and although I see some similar issues, I'm not sure if there's actually an explanation or solution. We work on Windows 7 boxes and are using VirtualBox 5.1.22.
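As a diagnostic, the failing check boils down to whether the Docker daemon port answers TCP at all from the host. Here is a minimal stdlib-only Python sketch of that probe (the host/port come from the log above; `docker_port_reachable` is a hypothetical helper, not part of minishift or Docker):

```python
import socket

def docker_port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        # create_connection resolves the host and attempts a full TCP handshake
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers refused connections, timeouts, and unreachable networks,
        # i.e. the "connectex ... failed to respond" case from the log
        return False
```

Comparing `docker_port_reachable("192.168.99.100", 2376)` from an office session versus a VPN session would show whether the VPN client is capturing the VirtualBox host-only network route.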
As a reminder, we have no problem running on our domain at the office, or running off VPN (in this case, sometimes, we have to recover by doing a minishift delete, followed by another minishift start, but it DOES start successfully after that).
We are hoping there's a viable solution for this problem so we can use minishift over VPN. Any help with this would be appreciated.
| 1.0 | minishift start fails at "Checking Docker client" while on VPN | non_test | 0
84,706 | 7,930,038,151 | IssuesEvent | 2018-07-06 17:12:04 | gammapy/gammapy | https://api.github.com/repos/gammapy/gammapy | closed | Increase gammapy test coverage | cleanup effort-low package-novice tests | Currently there are many gammapy functions / methods that are not covered by the existing unit tests.
The current status can also be seen here:
https://coveralls.io/r/gammapy/gammapy
Any and all help in adding tests (and probably fixing bugs and improving docstrings along the way) would be much appreciated!
You can create a gammapy coverage report and view it locally:
```
cd gammapy
python setup.py test -V --coverage
open htmlcov/index.html
```
I'll keep this issue open for a while until the situation has improved significantly.
| 1.0 | Increase gammapy test coverage | test | 1
66,456 | 7,001,035,968 | IssuesEvent | 2017-12-18 08:40:19 | edenlabllc/ehealth.api | https://api.github.com/repos/edenlabllc/ehealth.api | closed | Update Employee request list and Employee request by id | epic/no_tax_id kind/change_request priority/medium status/test | 1. NHS admin must be able to view employees who registered without a tax_id. Accordingly, the Employee Request list must be changed and additional filters should be added [apiary](https://uaehealthapi.docs.apiary.io/reference/public.-medical-service-provider-integration-layer/employee-requests/get-employee-requests-list):
- by edrpou
- by legal entity name
- by no_tax_id flag
- by employee_request_id
2. Update Get employee Request by ID [apiary](https://uaehealthapi.docs.apiary.io/reference/public.-medical-service-provider-integration-layer/employee-requests/get-employee-request-by-id)
| 1.0 | Update Employee request list and Employee request by id | test | 1
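To make the intended list semantics concrete, here is a minimal Python sketch of the four filters requested in item 1 above (field names such as `edrpou` and `no_tax_id` are assumptions based on that list, not the actual ehealth.api schema):

```python
from typing import Iterable, Optional

def filter_employee_requests(
    requests: Iterable[dict],
    edrpou: Optional[str] = None,
    legal_entity_name: Optional[str] = None,
    no_tax_id: Optional[bool] = None,
    employee_request_id: Optional[str] = None,
) -> list:
    """Apply the optional filters; a None filter means 'do not filter'."""
    out = []
    for r in requests:
        if edrpou is not None and r.get("edrpou") != edrpou:
            continue
        # Legal entity name is matched as a case-insensitive substring
        if legal_entity_name is not None and \
                legal_entity_name.lower() not in r.get("legal_entity_name", "").lower():
            continue
        if no_tax_id is not None and r.get("no_tax_id") != no_tax_id:
            continue
        if employee_request_id is not None and r.get("id") != employee_request_id:
            continue
        out.append(r)
    return out
```

For instance, `filter_employee_requests(requests, no_tax_id=True)` would give the NHS admin exactly the employees registered without a tax_id.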
240,418 | 20,028,589,770 | IssuesEvent | 2022-02-02 01:00:17 | elastic/elasticsearch | https://api.github.com/repos/elastic/elasticsearch | opened | [CI] MixedClusterClientYamlTestSuiteIT test {p0=search.aggregation/70_adjacency_matrix/Terms lookup} failing | :Analytics/Aggregations >test-failure | The underlying reason for this failure and a couple others in the same test suite was that one node got killed when upgrading. The node failed to start because a serialisation error for `InternalRange` https://gradle-enterprise.elastic.co/s/fx4ulm463asji/console-log#L6392
It seems that the backport PR #83339 didn't make its way into v7.17.0. It was automatically tagged as `v7.17.1` and `git tag --contains 7f5623f464576e775cf341fc9fde4a9f2aa9ce15` shows no result.
@salvatore-campagna Could you please take a look?
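For illustration only (this is not the actual RestClient code), the rejection in the trace reduces to a version-range check along these lines — every living node reports 7.17.0, so a required range of [8.1.0 - 8.1.0] matches nothing:

```python
def parse_version(v: str) -> tuple:
    """Turn a dotted version like '7.17.0' into (7, 17, 0) so tuples compare correctly."""
    return tuple(int(part) for part in v.split("."))

def select_nodes(node_versions: list, low: str, high: str) -> list:
    """Keep only nodes whose version falls inside the inclusive range [low, high]."""
    lo, hi = parse_version(low), parse_version(high)
    return [v for v in node_versions if lo <= parse_version(v) <= hi]
```

With the cluster from the failure, `select_nodes(["7.17.0", "7.17.0"], "8.1.0", "8.1.0")` is empty, which is exactly the "rejected all nodes" condition reported by the NodeSelector.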
**Build scan:**
https://gradle-enterprise.elastic.co/s/fx4ulm463asji/tests/:qa:mixed-cluster:v7.17.0%23mixedClusterTest/org.elasticsearch.backwards.MixedClusterClientYamlTestSuiteIT/test%20%7Bp0=search.aggregation%2F70_adjacency_matrix%2FTerms%20lookup%7D
**Reproduction line:**
`./gradlew ':qa:mixed-cluster:v7.17.0#mixedClusterTest' -Dtests.class="org.elasticsearch.backwards.MixedClusterClientYamlTestSuiteIT" -Dtests.method="test {p0=search.aggregation/70_adjacency_matrix/Terms lookup}" -Dtests.seed=340A3D79B51F2327 -Dtests.bwc=true -Dtests.locale=ko-KR -Dtests.timezone=NZ-CHAT -Druntime.java=17`
**Applicable branches:**
master
**Reproduces locally?:**
No
**Failure history:**
https://gradle-enterprise.elastic.co/scans/tests?tests.container=org.elasticsearch.backwards.MixedClusterClientYamlTestSuiteIT&tests.test=test%20%7Bp0%3Dsearch.aggregation/70_adjacency_matrix/Terms%20lookup%7D
**Failure excerpt:**
```
java.lang.RuntimeException: Failure at [search.aggregation/70_adjacency_matrix:95]: NodeSelector [version ranges [[8.1.0 - 8.1.0]]] rejected all nodes, living [[host=http://127.0.0.1:46615, bound=[http://[::1]:38445, http://127.0.0.1:46615], name=v7.17.0-3, version=7.17.0, roles=data,data_cold,data_content,data_frozen,data_hot,data_warm,ingest,master,ml,remote_cluster_client,transform, attributes={testattr=[test], ml.machine_memory=[101259509760], ml.max_open_jobs=[512], xpack.installed=[true], ml.max_jvm_size=[536870912], transform.node=[true]}], [host=http://127.0.0.1:37241, bound=[http://[::1]:45615, http://127.0.0.1:37241], name=v7.17.0-2, version=7.17.0, roles=data,data_cold,data_content,data_frozen,data_hot,data_warm,ingest,master,ml,remote_cluster_client,transform, attributes={testattr=[test], ml.machine_memory=[101259509760], ml.max_open_jobs=[512], xpack.installed=[true], ml.max_jvm_size=[536870912], transform.node=[true]}]] and dead []
at __randomizedtesting.SeedInfo.seed([340A3D79B51F2327:BC5E02A31BE34EDF]:0)
at org.elasticsearch.test.rest.yaml.ESClientYamlSuiteTestCase.executeSection(ESClientYamlSuiteTestCase.java:491)
at org.elasticsearch.test.rest.yaml.ESClientYamlSuiteTestCase.test(ESClientYamlSuiteTestCase.java:462)
at jdk.internal.reflect.GeneratedMethodAccessor19.invoke(null:-1)
at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:568)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1758)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:946)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:982)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:996)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:44)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:45)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:375)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:824)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:475)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:955)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:840)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:891)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:902)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:38)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:375)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.lambda$forkTimeoutingTask$0(ThreadLeakControl.java:831)
at java.lang.Thread.run(Thread.java:833)
Caused by: java.io.IOException: NodeSelector [version ranges [[8.1.0 - 8.1.0]]] rejected all nodes, living [[host=http://127.0.0.1:46615, bound=[http://[::1]:38445, http://127.0.0.1:46615], name=v7.17.0-3, version=7.17.0, roles=data,data_cold,data_content,data_frozen,data_hot,data_warm,ingest,master,ml,remote_cluster_client,transform, attributes={testattr=[test], ml.machine_memory=[101259509760], ml.max_open_jobs=[512], xpack.installed=[true], ml.max_jvm_size=[536870912], transform.node=[true]}], [host=http://127.0.0.1:37241, bound=[http://[::1]:45615, http://127.0.0.1:37241], name=v7.17.0-2, version=7.17.0, roles=data,data_cold,data_content,data_frozen,data_hot,data_warm,ingest,master,ml,remote_cluster_client,transform, attributes={testattr=[test], ml.machine_memory=[101259509760], ml.max_open_jobs=[512], xpack.installed=[true], ml.max_jvm_size=[536870912], transform.node=[true]}]] and dead []
at org.elasticsearch.client.RestClient.selectNodes(RestClient.java:515)
at org.elasticsearch.client.RestClient.nextNodes(RestClient.java:447)
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:287)
at org.elasticsearch.test.rest.yaml.ClientYamlTestClient.callApi(ClientYamlTestClient.java:201)
at org.elasticsearch.test.rest.yaml.ClientYamlTestExecutionContext.callApiInternal(ClientYamlTestExecutionContext.java:185)
at org.elasticsearch.test.rest.yaml.ClientYamlTestExecutionContext.callApi(ClientYamlTestExecutionContext.java:105)
at org.elasticsearch.test.rest.yaml.section.DoSection.execute(DoSection.java:349)
at org.elasticsearch.test.rest.yaml.ESClientYamlSuiteTestCase.executeSection(ESClientYamlSuiteTestCase.java:478)
at org.elasticsearch.test.rest.yaml.ESClientYamlSuiteTestCase.test(ESClientYamlSuiteTestCase.java:462)
at jdk.internal.reflect.GeneratedMethodAccessor19.invoke(null:-1)
at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:568)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1758)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:946)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:982)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:996)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:44)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:45)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:375)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:824)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:475)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:955)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:840)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:891)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:902)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:38)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:375)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.lambda$forkTimeoutingTask$0(ThreadLeakControl.java:831)
at java.lang.Thread.run(Thread.java:833)
```
| 1.0 | [CI] MixedClusterClientYamlTestSuiteIT test {p0=search.aggregation/70_adjacency_matrix/Terms lookup} failing | test
carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene util testrulesetupteardownchained evaluate testrulesetupteardownchained java at org apache lucene util abstractbeforeafterrule evaluate abstractbeforeafterrule java at org apache lucene util testrulethreadandtestname evaluate testrulethreadandtestname java at org apache lucene util testruleignoreaftermaxfailures evaluate testruleignoreaftermaxfailures java at org apache lucene util testrulemarkfailure evaluate testrulemarkfailure java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting threadleakcontrol statementrunner run threadleakcontrol java at com carrotsearch randomizedtesting threadleakcontrol forktimeoutingtask threadleakcontrol java at com carrotsearch randomizedtesting threadleakcontrol evaluate threadleakcontrol java at com carrotsearch randomizedtesting randomizedrunner runsingletest randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at org apache lucene util abstractbeforeafterrule evaluate abstractbeforeafterrule java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene util testrulestoreclassname evaluate testrulestoreclassname java at com carrotsearch randomizedtesting rules noshadowingoroverridesonmethodsrule evaluate noshadowingoroverridesonmethodsrule java at com carrotsearch randomizedtesting rules noshadowingoroverridesonmethodsrule evaluate noshadowingoroverridesonmethodsrule java at com carrotsearch 
randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene util testruleassertionsrequired evaluate testruleassertionsrequired java at org apache lucene util abstractbeforeafterrule evaluate abstractbeforeafterrule java at org apache lucene util testrulemarkfailure evaluate testrulemarkfailure java at org apache lucene util testruleignoreaftermaxfailures evaluate testruleignoreaftermaxfailures java at org apache lucene util testruleignoretestsuites evaluate testruleignoretestsuites java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting threadleakcontrol statementrunner run threadleakcontrol java at com carrotsearch randomizedtesting threadleakcontrol lambda forktimeoutingtask threadleakcontrol java at java lang thread run thread java caused by java io ioexception nodeselector rejected all nodes living name version roles data data cold data content data frozen data hot data warm ingest master ml remote cluster client transform attributes testattr ml machine memory ml max open jobs xpack installed ml max jvm size transform node name version roles data data cold data content data frozen data hot data warm ingest master ml remote cluster client transform attributes testattr ml machine memory ml max open jobs xpack installed ml max jvm size transform node and dead at org elasticsearch client restclient selectnodes restclient java at org elasticsearch client restclient nextnodes restclient java at org elasticsearch client restclient performrequest restclient java at org elasticsearch test rest yaml clientyamltestclient callapi clientyamltestclient java at org elasticsearch test rest yaml clientyamltestexecutioncontext callapiinternal clientyamltestexecutioncontext java at org elasticsearch test rest yaml clientyamltestexecutioncontext callapi 
clientyamltestexecutioncontext java at org elasticsearch test rest yaml section dosection execute dosection java at org elasticsearch test rest yaml esclientyamlsuitetestcase executesection esclientyamlsuitetestcase java at org elasticsearch test rest yaml esclientyamlsuitetestcase test esclientyamlsuitetestcase java at jdk internal reflect invoke null at jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at com carrotsearch randomizedtesting randomizedrunner invoke randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene util testrulesetupteardownchained evaluate testrulesetupteardownchained java at org apache lucene util abstractbeforeafterrule evaluate abstractbeforeafterrule java at org apache lucene util testrulethreadandtestname evaluate testrulethreadandtestname java at org apache lucene util testruleignoreaftermaxfailures evaluate testruleignoreaftermaxfailures java at org apache lucene util testrulemarkfailure evaluate testrulemarkfailure java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting threadleakcontrol statementrunner run threadleakcontrol java at com carrotsearch randomizedtesting threadleakcontrol forktimeoutingtask threadleakcontrol java at com carrotsearch randomizedtesting threadleakcontrol evaluate threadleakcontrol java at com carrotsearch randomizedtesting randomizedrunner runsingletest randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner 
evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at org apache lucene util abstractbeforeafterrule evaluate abstractbeforeafterrule java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene util testrulestoreclassname evaluate testrulestoreclassname java at com carrotsearch randomizedtesting rules noshadowingoroverridesonmethodsrule evaluate noshadowingoroverridesonmethodsrule java at com carrotsearch randomizedtesting rules noshadowingoroverridesonmethodsrule evaluate noshadowingoroverridesonmethodsrule java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene util testruleassertionsrequired evaluate testruleassertionsrequired java at org apache lucene util abstractbeforeafterrule evaluate abstractbeforeafterrule java at org apache lucene util testrulemarkfailure evaluate testrulemarkfailure java at org apache lucene util testruleignoreaftermaxfailures evaluate testruleignoreaftermaxfailures java at org apache lucene util testruleignoretestsuites evaluate testruleignoretestsuites java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting threadleakcontrol statementrunner run threadleakcontrol java at com carrotsearch randomizedtesting threadleakcontrol lambda forktimeoutingtask threadleakcontrol java at java lang thread run thread java | 1 |
151,867 | 12,060,793,521 | IssuesEvent | 2020-04-15 21:59:52 | ValveSoftware/Proton | https://api.github.com/repos/ValveSoftware/Proton | closed | Blitzkrieg Anthology (313480) | Game compatibility - Unofficial Need Retest Regression | # Compatibility Report
- Name of the game with compatibility issues: Blitzkrieg Anthology
- Steam AppID of the game: 313480
## System Information
- GPU: NVIDIA GeForce GTX 660M
- Driver/LLVM version: NVIDIA 435.27.07
- Kernel version: 5.3.12
- Link to full system information report as [Gist](https://gist.github.com/):
- Proton version: 4.11-10 (not working), 4.2-9 (working)
## I confirm:
- [x] that I haven't found an existing compatibility report for this game.
- [x] that I have checked whether there are updates for my system available.
[steam-313480.log](https://github.com/ValveSoftware/Proton/files/3965918/steam-313480.log)
## Symptoms <!-- What's the problem? -->
The game will not work with the newest version of Proton, namely 4.11-10, but will work with version 4.2-9. As a side note, it seems to work better with `PROTON_USE_D9VK=1 %command%` on the latter version.
## Reproduction
1. Launch the game.
<!--
1. You can find the Steam AppID in the URL of the shop page of the game.
e.g. for `The Witcher 3: Wild Hunt` the AppID is `292030`.
2. You can find your driver and Linux version, as well as your graphics
processor's name in the system information report of Steam.
3. You can retrieve a full system information report by clicking
`Help` > `System Information` in the Steam client on your machine.
4. Please copy it to your clipboard by pressing `Ctrl+A` and then `Ctrl+C`.
Then paste it in a [Gist](https://gist.github.com/) and post the link in
this issue.
5. Please search for open issues and pull requests by the name of the game and
find out whether they are relevant and should be referenced above.
-->
| 1.0 | Blitzkrieg Anthology (313480) - # Compatibility Report
- Name of the game with compatibility issues: Blitzkrieg Anthology
- Steam AppID of the game: 313480
## System Information
- GPU: NVIDIA GeForce GTX 660M
- Driver/LLVM version: NVIDIA 435.27.07
- Kernel version: 5.3.12
- Link to full system information report as [Gist](https://gist.github.com/):
- Proton version: 4.11-10 (not working), 4.2-9 (working)
## I confirm:
- [x] that I haven't found an existing compatibility report for this game.
- [x] that I have checked whether there are updates for my system available.
[steam-313480.log](https://github.com/ValveSoftware/Proton/files/3965918/steam-313480.log)
## Symptoms <!-- What's the problem? -->
The game will not work with the newest version of Proton, namely 4.11-10, but will work with version 4.2-9. As a side note, it seems to work better with `PROTON_USE_D9VK=1 %command%` on the latter version.
## Reproduction
1. Launch the game.
<!--
1. You can find the Steam AppID in the URL of the shop page of the game.
e.g. for `The Witcher 3: Wild Hunt` the AppID is `292030`.
2. You can find your driver and Linux version, as well as your graphics
processor's name in the system information report of Steam.
3. You can retrieve a full system information report by clicking
`Help` > `System Information` in the Steam client on your machine.
4. Please copy it to your clipboard by pressing `Ctrl+A` and then `Ctrl+C`.
Then paste it in a [Gist](https://gist.github.com/) and post the link in
this issue.
5. Please search for open issues and pull requests by the name of the game and
find out whether they are relevant and should be referenced above.
-->
| test | blitzkrieg anthology compatibility report name of the game with compatibility issues blitzkrieg anthology steam appid of the game system information gpu nvidia geforce gtx driver llvm version nvidia kernel version link to full system information report as proton version not working working i confirm that i haven t found an existing compatibility report for this game that i have checked whether there are updates for my system available symptoms the game will not work with the newest version of proton namely but will work with version as a side note it seems to work better with proton use command on the latter version reproduction launch the game you can find the steam appid in the url of the shop page of the game e g for the witcher wild hunt the appid is you can find your driver and linux version as well as your graphics processor s name in the system information report of steam you can retrieve a full system information report by clicking help system information in the steam client on your machine please copy it to your clipboard by pressing ctrl a and then ctrl c then paste it in a and post the link in this issue please search for open issues and pull requests by the name of the game and find out whether they are relevant and should be referenced above | 1 |
246,395 | 20,862,604,828 | IssuesEvent | 2022-03-22 01:26:50 | rancher/dashboard | https://api.github.com/repos/rancher/dashboard | closed | Fleet: Rename ‘Assign to’ to ‘Change workspace’ | [zube]: To Test area/fleet kind/enhancement good-first-issue | Rename ‘Assign to’ to ‘Change workspace’ on the "3 dots" menu for a cluster list item
<img width="1527" alt="Screenshot 2022-02-21 at 10 31 06" src="https://user-images.githubusercontent.com/97888974/154937572-5ac66017-f2c9-4852-83d2-90b10ea7e407.png">
| 1.0 | Fleet: Rename ‘Assign to’ to ‘Change workspace’ - Rename ‘Assign to’ to ‘Change workspace’ on the "3 dots" menu for a cluster list item
<img width="1527" alt="Screenshot 2022-02-21 at 10 31 06" src="https://user-images.githubusercontent.com/97888974/154937572-5ac66017-f2c9-4852-83d2-90b10ea7e407.png">
| test | fleet rename ‘assign to’ to ‘change workspace’ rename ‘assign to’ to ‘change workspace’ on the dots menu for a cluster list item img width alt screenshot at src | 1 |
143,652 | 11,572,415,890 | IssuesEvent | 2020-02-21 00:00:42 | knative/serving | https://api.github.com/repos/knative/serving | closed | Different privileges for controller and webhook. | area/test-and-release kind/feature lifecycle/rotten | ## In what area(s)?
<!-- Remove the '> ' to select -->
> /area API
> /area autoscale
> /area build
> /area monitoring
> /area networking
/area test-and-release
<!--
Other classifications:
> /kind good-first-issue
> /kind process
> /kind spec
> /kind proposal
-->
## Describe the feature
Currently the webhook and controller run with the same privileges. Webhook should probably run with less privileges. This probably requires the webhook and controller to run as two different service accounts and tighter roles to be created for the two (controller does not need to muck with mutatingwebhooks for example).
 | 1.0 | Different privileges for controller and webhook. - ## In what area(s)?
<!-- Remove the '> ' to select -->
> /area API
> /area autoscale
> /area build
> /area monitoring
> /area networking
/area test-and-release
<!--
Other classifications:
> /kind good-first-issue
> /kind process
> /kind spec
> /kind proposal
-->
## Describe the feature
Currently the webhook and controller run with the same privileges. Webhook should probably run with less privileges. This probably requires the webhook and controller to run as two different service accounts and tighter roles to be created for the two (controller does not need to muck with mutatingwebhooks for example).
 | test | different privileges for controller and webhook in what area s to select area api area autoscale area build area monitoring area networking area test and release other classifications kind good first issue kind process kind spec kind proposal describe the feature currently the webhook and controller run with the same privileges webhook should probably run with less privileges this probably requires the webhook and controller to run as two different service accounts and tighter roles to be created for the two controller does not need to muck with mutatingwebhooks for example | 1 |
27,192 | 4,287,365,883 | IssuesEvent | 2016-07-16 18:28:12 | WormBase/website | https://api.github.com/repos/WormBase/website | reopened | images not displaying | Under testing | Images are not displayed
http://www.wormbase.org/species/all/expr_pattern/Expr1149515#0213--10
http://www.wormbase.org/species/all/expr_pattern/Expr1143118#0213--10
they all relate to the same study, WBPaper00046121
thanks | 1.0 | images not displaying - Images are not displayed
http://www.wormbase.org/species/all/expr_pattern/Expr1149515#0213--10
http://www.wormbase.org/species/all/expr_pattern/Expr1143118#0213--10
they all relate to the same study, WBPaper00046121
thanks | test | images not displaying images are not displayed they all relate to the same study thanks | 1 |
98,236 | 8,675,380,846 | IssuesEvent | 2018-11-30 10:42:39 | humera987/FXLabs-Test-Automation | https://api.github.com/repos/humera987/FXLabs-Test-Automation | closed | FXLabs Testing 30 : ApiV1DashboardCountBugsBetweenGetQueryParamFromdateNullValue | FXLabs Testing 30 | Project : FXLabs Testing 30
Job : UAT 1
Env : UAT 1
Region : US_WEST
Result : fail
Status Code : 404
Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Set-Cookie=[SESSION=M2Q3NjVjMjEtYjMzYS00ODFkLWE0ZDMtZGRjMDdkNmQ5NDQ3; Path=/; HttpOnly], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Fri, 30 Nov 2018 10:40:39 GMT]}
Endpoint : http://13.56.210.25/api/v1/api/v1/dashboard/count-bugs-between?fromDate=null&toDate=JROxyUNu
Request :
Response :
{
"timestamp" : "2018-11-30T10:40:39.713+0000",
"status" : 404,
"error" : "Not Found",
"message" : "No message available",
"path" : "/api/v1/api/v1/dashboard/count-bugs-between"
}
Logs :
Assertion [@StatusCode != 401] resolved-to [404 != 401] result [Passed]Assertion [@StatusCode != 500] resolved-to [404 != 500] result [Passed]Assertion [@StatusCode != 404] resolved-to [404 != 404] result [Failed]Assertion [@StatusCode != 200] resolved-to [404 != 200] result [Passed]
--- FX Bot --- | 1.0 | FXLabs Testing 30 : ApiV1DashboardCountBugsBetweenGetQueryParamFromdateNullValue - Project : FXLabs Testing 30
Job : UAT 1
Env : UAT 1
Region : US_WEST
Result : fail
Status Code : 404
Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Set-Cookie=[SESSION=M2Q3NjVjMjEtYjMzYS00ODFkLWE0ZDMtZGRjMDdkNmQ5NDQ3; Path=/; HttpOnly], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Fri, 30 Nov 2018 10:40:39 GMT]}
Endpoint : http://13.56.210.25/api/v1/api/v1/dashboard/count-bugs-between?fromDate=null&toDate=JROxyUNu
Request :
Response :
{
"timestamp" : "2018-11-30T10:40:39.713+0000",
"status" : 404,
"error" : "Not Found",
"message" : "No message available",
"path" : "/api/v1/api/v1/dashboard/count-bugs-between"
}
Logs :
Assertion [@StatusCode != 401] resolved-to [404 != 401] result [Passed]Assertion [@StatusCode != 500] resolved-to [404 != 500] result [Passed]Assertion [@StatusCode != 404] resolved-to [404 != 404] result [Failed]Assertion [@StatusCode != 200] resolved-to [404 != 200] result [Passed]
--- FX Bot --- | test | fxlabs testing project fxlabs testing job uat env uat region us west result fail status code headers x content type options x xss protection cache control pragma expires x frame options set cookie content type transfer encoding date endpoint request response timestamp status error not found message no message available path api api dashboard count bugs between logs assertion resolved to result assertion resolved to result assertion resolved to result assertion resolved to result fx bot | 1 |
27,557 | 11,502,843,729 | IssuesEvent | 2020-02-12 19:53:25 | MicrosoftDocs/microsoft-365-docs | https://api.github.com/repos/MicrosoftDocs/microsoft-365-docs | closed | Why not in Security Center | security | Why is this new functionality getting added to the old SCC center at protection.microsoft.com instead of new center at security.microsoft.com. According to https://docs.microsoft.com/en-us/microsoft-365/security/mtp/overview-security-center the new site is supposed to be the home for this functionality
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 72ac8b1f-a7cf-3487-9a3c-8eff53e4adb5
* Version Independent ID: 63773389-8a4f-926f-8430-8c80b19ff2da
* Content: [Safe Documents in Office 365 ATP - Office 365](https://docs.microsoft.com/en-us/microsoft-365/security/office-365-security/safe-docs)
* Content Source: [microsoft-365/security/office-365-security/safe-docs.md](https://github.com/MicrosoftDocs/microsoft-365-docs/blob/public/microsoft-365/security/office-365-security/safe-docs.md)
* Service: **o365-seccomp**
* GitHub Login: @chrisda
* Microsoft Alias: **chrisda** | True | Why not in Security Center - Why is this new functionality getting added to the old SCC center at protection.microsoft.com instead of new center at security.microsoft.com. According to https://docs.microsoft.com/en-us/microsoft-365/security/mtp/overview-security-center the new site is supposed to be the home for this functionality
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 72ac8b1f-a7cf-3487-9a3c-8eff53e4adb5
* Version Independent ID: 63773389-8a4f-926f-8430-8c80b19ff2da
* Content: [Safe Documents in Office 365 ATP - Office 365](https://docs.microsoft.com/en-us/microsoft-365/security/office-365-security/safe-docs)
* Content Source: [microsoft-365/security/office-365-security/safe-docs.md](https://github.com/MicrosoftDocs/microsoft-365-docs/blob/public/microsoft-365/security/office-365-security/safe-docs.md)
* Service: **o365-seccomp**
* GitHub Login: @chrisda
* Microsoft Alias: **chrisda** | non_test | why not in security center why is this new functionality getting added to the old scc center at protection microsoft com instead of new center at security microsoft com according to the new site is supposed to be the home for this functionality document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service seccomp github login chrisda microsoft alias chrisda | 0 |
10,407 | 8,556,396,181 | IssuesEvent | 2018-11-08 13:03:16 | srinikoganti/sams | https://api.github.com/repos/srinikoganti/sams | closed | Infrastructure: super admin login, Manage Assets page, don't give Add Assets button, instead name it as Show Assets. | Infrastructure enhancement | Give the button name as Show Assets from super admin and from college login it should be Add Assets.

 | 1.0 | Infrastructure: super admin login, Manage Assets page, don't give Add Assets button, instead name it as Show Assets. - Give the button name as Show Assets from super admin and from college login it should be Add Assets.

 | non_test | infrastructure super admin login manage assets page don t give add assets button instead name it as show assets give the button name as show assets from super admin and from college login it should be add assets | 0 |
493 | 2,535,202,246 | IssuesEvent | 2015-01-25 20:00:03 | eldipa/ConcuDebug | https://api.github.com/repos/eldipa/ConcuDebug | closed | Make the GDB to launch configurable | code this! | Today the gdb that is in the PATH is spawned. But if you have no gdb in the PATH, or you want to use another gdb, the only way to change the gdb to launch is by modifying the source code.
The code should be refactored so that it reads from a configuration file which gdb to launch. | 1.0 | Make the GDB to launch configurable - Today the gdb that is in the PATH is spawned. But if you have no gdb in the PATH, or you want to use another gdb, the only way to change the gdb to launch is by modifying the source code.
The code should be refactored so that it reads from a configuration file which gdb to launch. | non_test | make the gdb to launch configurable today the gdb that is in the path is spawned but if you have no gdb in the path or you want to use another gdb the only way to change the gdb to launch is by modifying the source code the code should be refactored so that it reads from a configuration file which gdb to launch | 0 |
9,687 | 8,087,350,870 | IssuesEvent | 2018-08-09 01:13:10 | APSIMInitiative/ApsimX | https://api.github.com/repos/APSIMInitiative/ApsimX | closed | The value cannot be changed in the GUI | bug interface/infrastructure | In the master branch, I cannot change any values in the Replacements --> Wheat. For example,
The value will be restored into the original values after pressing enter and/or leaving focus.

| 1.0 | The value cannot be changed in the GUI - In the master branch, I cannot change any values in the Replacements --> Wheat. For example,
The value will be restored into the original values after pressing enter and/or leaving focus.

| non_test | the value cannot be changed in the gui in the master branch i cannot change any values in the replacements wheat for example the value will be restored into the original values after pressing enter and or leaving focus | 0 |
318,035 | 27,280,410,211 | IssuesEvent | 2023-02-23 09:39:07 | DevTraces/BackEnd | https://api.github.com/repos/DevTraces/BackEnd | closed | feat: add follow info when viewing a profile | ☑️test ✨feature | ## 📌 Issue description
> We plan to implement logic that adds follow information to the profile data.
<br>

## 📝 To-do
- [x] Implement logic to look up a user's following/follower counts
- [x] Return a boolean value for whether the given user is followed or not
- [x] Write test code
<br>

## 🤝 Related issues
- #43
<br>

### ✅ Checklist
- [x] Is the issue title meaningful?
- [x] If there are related issues, were they added?
- [x] Was a meaningful label added?
 | 1.0 | feat: add follow info when viewing a profile - ## 📌 Issue description
> We plan to implement logic that adds follow information to the profile data.
<br>

## 📝 To-do
- [x] Implement logic to look up a user's following/follower counts
- [x] Return a boolean value for whether the given user is followed or not
- [x] Write test code
<br>

## 🤝 Related issues
- #43
<br>

### ✅ Checklist
- [x] Is the issue title meaningful?
- [x] If there are related issues, were they added?
- [x] Was a meaningful label added?
 | test | feat add follow info when viewing a profile 📌 issue description we plan to implement logic that adds follow information to the profile data 📝 to do implement logic to look up a user s following follower counts return a boolean value for whether the given user is followed or not write test code 🤝 related issues ✅ checklist is the issue title meaningful if there are related issues were they added was a meaningful label added | 1 |
55,791 | 6,492,279,545 | IssuesEvent | 2017-08-21 12:29:08 | DEIB-GECO/GMQL | https://api.github.com/repos/DEIB-GECO/GMQL | closed | DS creation failed because too big | test | job_testid91_bernasconi_20170322_195107
DS_CREATION_FAILED Input Dataset is empty , (error in process or Wrong input Selection query).
query:
A = SELECT( region: left>1000000 ) HG19_ENCODE_BED;
MATERIALIZE A into outSel;
I realize this is a HUGE output dataset. Nevertheless: is this the correct message of the system in this case? | 1.0 | DS creation failed because too big - job_testid91_bernasconi_20170322_195107
DS_CREATION_FAILED Input Dataset is empty , (error in process or Wrong input Selection query).
query:
A = SELECT( region: left>1000000 ) HG19_ENCODE_BED;
MATERIALIZE A into outSel;
I realize this is a HUGE output dataset. Nevertheless: is this the correct message of the system in this case? | test | ds creation failed because too big job bernasconi ds creation failed input dataset is empty error in process or wrong input selection query query a select region left encode bed materialize a into outsel i realize this is a huge output dataset nevertheless is this the correct message of the system in this case | 1 |
96,955 | 28,066,672,042 | IssuesEvent | 2023-03-29 15:53:53 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | cmake: signing: Allow extended/replacing signing with external or out of tree systems | Enhancement area: Build System | **Is your enhancement proposal related to a problem? Please describe.**
At present, when creating a signed application which targets mcuboot, west automatically does the signing using this cmake file: https://github.com/zephyrproject-rtos/zephyr/blob/main/cmake/mcuboot.cmake
This works fine for in-tree samples and testing, but is limited to how it works in that file and cannot (at present) be changed or extended for projects without forking the zephyr repository itself.
**Describe the solution you'd like**
A way to be able to extend this or override it, either for adding new functionality (e.g. instead of using a private key to sign the hash, someone might want to take the hash and send it to another system or device with a HSM (hardware security module) which is then used to generate the signature), or to have a different method of signing (e.g. to support other bootloaders, or to change what information is used for signing).
**Describe alternatives you've considered**
Forking zephyr and changing - not even worth considering | 1.0 | non_test | 0
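The HSM flow described in the issue above — hash the image locally, have an external system produce the signature — can be sketched as a post-build step. This is a minimal illustration, not Zephyr's actual signing hook: `sign_image` and the stand-in signer are hypothetical, and a real mcuboot image would carry the signature in a TLV rather than a raw suffix.

```python
import hashlib

def sign_image(image: bytes, remote_sign) -> bytes:
    """Hash the firmware image locally, then delegate signature
    generation to an external signer (e.g. an HSM service).
    `remote_sign` stands in for the HSM call -- only the 32-byte
    digest leaves the build machine, never a private key."""
    digest = hashlib.sha256(image).digest()
    signature = remote_sign(digest)      # e.g. an HTTPS request in real life
    return image + signature             # real mcuboot images use a TLV; omitted here

# Stand-in "HSM" that tags the digest so the flow is visible:
fake_hsm = lambda digest: b"SIG:" + digest
signed = sign_image(b"\x7fZEPHYR-IMAGE", fake_hsm)
```

The point of the split is the one the issue makes: the key material stays with the signer, and the build system only ever handles the digest.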
59,745 | 6,662,251,609 | IssuesEvent | 2017-10-02 12:24:49 | CyberReboot/poseidon | https://api.github.com/repos/CyberReboot/poseidon | closed | Test creation: print_endpoint | Hacktoberfest testing | NorthboundControllerAbstraction needs better coverage for testing.
add testing for **print_endpoint** to **test_poseidonMonitor.py** | 1.0 | test | 1
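The requested test could take roughly this shape: capture stdout, call the function, and assert on what it printed. The real `print_endpoint` in poseidonMonitor is not shown in the issue, so the stand-in below (an endpoint dict with `mac`/`state` fields) is purely illustrative of the test pattern.

```python
import contextlib
import io

# Stand-in for the real poseidonMonitor.print_endpoint -- its actual
# signature is assumed here, not taken from the repository.
def print_endpoint(endpoint):
    print(f"{endpoint['mac']} -> {endpoint['state']}")

def test_print_endpoint():
    # Redirect stdout into a buffer, then assert on the captured text.
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        print_endpoint({"mac": "00:00:00:00:00:01", "state": "known"})
    out = buf.getvalue()
    assert "00:00:00:00:00:01" in out
    assert "known" in out

test_print_endpoint()
```

Under pytest the same idea is usually written with the `capsys` fixture instead of `redirect_stdout`.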
317,376 | 23,673,573,360 | IssuesEvent | 2022-08-27 18:45:59 | the-djmaze/snappymail | https://api.github.com/repos/the-djmaze/snappymail | closed | html editor not working | documentation | If I click source on the image below and copy that into a new email.
All HTML is gone.
If I straight up copy the first image, without clicking source, I can copy it into a new email.
But I do wish to save the HTML, to re-use it later on.
I believe that the source button should contain the HTML, if not, have a separate button to do so.
Snappymail 2.17.0 (Within CyberPanel)
How it should look.

How it looks when I click "Source", it's missing the HTML.

How the outcome looks if I copy the source box into a new email?

| 1.0 | non_test | 0
179,498 | 21,571,682,208 | IssuesEvent | 2022-05-02 09:00:03 | vincenzodistasio97/excel-to-json | https://api.github.com/repos/vincenzodistasio97/excel-to-json | opened | react-16.2.0.tgz: 5 vulnerabilities (highest severity is: 7.5) | security vulnerability | <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>react-16.2.0.tgz</b></p></summary>
<p></p>
<p>Path to dependency file: /client/package.json</p>
<p>Path to vulnerable library: /client/node_modules/ua-parser-js/package.json</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/vincenzodistasio97/excel-to-json/commit/1a1de35ab02810e012c50902e6111bd128e1e32a">1a1de35ab02810e012c50902e6111bd128e1e32a</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | --- | --- |
| [CVE-2021-27292](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-27292) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | ua-parser-js-0.7.17.tgz | Transitive | 16.3.0-alpha.0 | ❌ |
| [CVE-2020-7733](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7733) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | ua-parser-js-0.7.17.tgz | Transitive | 16.3.0-alpha.0 | ❌ |
| [CVE-2020-7793](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7793) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | ua-parser-js-0.7.17.tgz | Transitive | 16.3.0-alpha.0 | ❌ |
| [CVE-2022-0235](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0235) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.1 | node-fetch-1.7.3.tgz | Transitive | N/A | ❌ |
| [CVE-2020-15168](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-15168) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.3 | node-fetch-1.7.3.tgz | Transitive | 16.5.0 | ❌ |
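The fix versions in the table above can be turned into a quick triage check. The sketch below is a hypothetical helper, not part of WhiteSource: it assumes a flattened `{name: version}` view of the lockfile and ignores pre-release tags when comparing versions.

```python
# Fix versions taken from the table above (transitive deps only).
FIXED_IN = {"ua-parser-js": "0.7.24", "node-fetch": "2.6.7"}

def parse_version(v: str) -> tuple:
    # "3.0.0-beta.9" -> (3, 0, 0); pre-release tags are ignored here
    return tuple(int(part) for part in v.split("-")[0].split("."))

def flag_vulnerable(deps: dict) -> list:
    """Return dependency names older than their listed fix version."""
    return [name for name, ver in deps.items()
            if name in FIXED_IN and parse_version(ver) < parse_version(FIXED_IN[name])]

flagged = flag_vulnerable({"react": "16.2.0",
                           "ua-parser-js": "0.7.17",
                           "node-fetch": "1.7.3"})
```

For this project's tree, such a check would flag both `ua-parser-js` 0.7.17 and `node-fetch` 1.7.3.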
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2021-27292</summary>
### Vulnerable Library - <b>ua-parser-js-0.7.17.tgz</b></p>
<p>Lightweight JavaScript-based user-agent string parser</p>
<p>Library home page: <a href="https://registry.npmjs.org/ua-parser-js/-/ua-parser-js-0.7.17.tgz">https://registry.npmjs.org/ua-parser-js/-/ua-parser-js-0.7.17.tgz</a></p>
<p>Path to dependency file: /client/package.json</p>
<p>Path to vulnerable library: /client/node_modules/ua-parser-js/package.json</p>
<p>
Dependency Hierarchy:
- react-16.2.0.tgz (Root Library)
- fbjs-0.8.16.tgz
- :x: **ua-parser-js-0.7.17.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/vincenzodistasio97/excel-to-json/commit/1a1de35ab02810e012c50902e6111bd128e1e32a">1a1de35ab02810e012c50902e6111bd128e1e32a</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
ua-parser-js >= 0.7.14, fixed in 0.7.24, uses a regular expression which is vulnerable to denial of service. If an attacker sends a malicious User-Agent header, ua-parser-js will get stuck processing it for an extended period of time.
<p>Publish Date: 2021-03-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-27292>CVE-2021-27292</a></p>
</p>
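The ReDoS mechanism described above can be reproduced safely at small scale. The pattern below is illustrative only — it is not the actual ua-parser-js regex — but it shows the same failure shape: nested quantifiers force the engine through exponentially many retries when the match ultimately fails.

```python
import re
import time

# Catastrophic-backtracking shape (illustrative, NOT the ua-parser-js
# regex): (a+)+ lets the engine try every way of splitting the run of
# a's between the inner and outer quantifier before giving up.
pattern = re.compile(r"^(a+)+$")

def timed_match(s: str):
    start = time.perf_counter()
    match = pattern.match(s)
    return match, time.perf_counter() - start

benign, t_ok = timed_match("a" * 20)           # matches immediately
hostile, t_bad = timed_match("a" * 20 + "b")   # must backtrack before failing
```

Lengthening the hostile input by one character roughly doubles the time, which is why a single crafted User-Agent header is enough to stall a server.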
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/faisalman/ua-parser-js/releases/tag/0.7.24">https://github.com/faisalman/ua-parser-js/releases/tag/0.7.24</a></p>
<p>Release Date: 2021-03-17</p>
<p>Fix Resolution (ua-parser-js): 0.7.24</p>
<p>Direct dependency fix Resolution (react): 16.3.0-alpha.0</p>
</p>
<p></p>
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2020-7733</summary>
### Vulnerable Library - <b>ua-parser-js-0.7.17.tgz</b></p>
<p>Lightweight JavaScript-based user-agent string parser</p>
<p>Library home page: <a href="https://registry.npmjs.org/ua-parser-js/-/ua-parser-js-0.7.17.tgz">https://registry.npmjs.org/ua-parser-js/-/ua-parser-js-0.7.17.tgz</a></p>
<p>Path to dependency file: /client/package.json</p>
<p>Path to vulnerable library: /client/node_modules/ua-parser-js/package.json</p>
<p>
Dependency Hierarchy:
- react-16.2.0.tgz (Root Library)
- fbjs-0.8.16.tgz
- :x: **ua-parser-js-0.7.17.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/vincenzodistasio97/excel-to-json/commit/1a1de35ab02810e012c50902e6111bd128e1e32a">1a1de35ab02810e012c50902e6111bd128e1e32a</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Versions of the package ua-parser-js before 0.7.22 are vulnerable to Regular Expression Denial of Service (ReDoS) via the regex for Redmi Phones and Mi Pad Tablets UA.
<p>Publish Date: 2020-09-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7733>CVE-2020-7733</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7733">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7733</a></p>
<p>Release Date: 2020-09-16</p>
<p>Fix Resolution (ua-parser-js): 0.7.22</p>
<p>Direct dependency fix Resolution (react): 16.3.0-alpha.0</p>
</p>
<p></p>
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2020-7793</summary>
### Vulnerable Library - <b>ua-parser-js-0.7.17.tgz</b></p>
<p>Lightweight JavaScript-based user-agent string parser</p>
<p>Library home page: <a href="https://registry.npmjs.org/ua-parser-js/-/ua-parser-js-0.7.17.tgz">https://registry.npmjs.org/ua-parser-js/-/ua-parser-js-0.7.17.tgz</a></p>
<p>Path to dependency file: /client/package.json</p>
<p>Path to vulnerable library: /client/node_modules/ua-parser-js/package.json</p>
<p>
Dependency Hierarchy:
- react-16.2.0.tgz (Root Library)
- fbjs-0.8.16.tgz
- :x: **ua-parser-js-0.7.17.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/vincenzodistasio97/excel-to-json/commit/1a1de35ab02810e012c50902e6111bd128e1e32a">1a1de35ab02810e012c50902e6111bd128e1e32a</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Versions of the package ua-parser-js before 0.7.23 are vulnerable to Regular Expression Denial of Service (ReDoS) in multiple regexes (see linked commit for more info).
<p>Publish Date: 2020-12-11
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7793>CVE-2020-7793</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/faisalman/ua-parser-js/commit/6d1f26df051ba681463ef109d36c9cf0f7e32b18">https://github.com/faisalman/ua-parser-js/commit/6d1f26df051ba681463ef109d36c9cf0f7e32b18</a></p>
<p>Release Date: 2020-12-11</p>
<p>Fix Resolution (ua-parser-js): 0.7.23</p>
<p>Direct dependency fix Resolution (react): 16.3.0-alpha.0</p>
</p>
<p></p>
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2022-0235</summary>
### Vulnerable Library - <b>node-fetch-1.7.3.tgz</b></p>
<p>A light-weight module that brings window.fetch to node.js and io.js</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-fetch/-/node-fetch-1.7.3.tgz">https://registry.npmjs.org/node-fetch/-/node-fetch-1.7.3.tgz</a></p>
<p>Path to dependency file: /client/package.json</p>
<p>Path to vulnerable library: /client/node_modules/node-fetch/package.json</p>
<p>
Dependency Hierarchy:
- react-16.2.0.tgz (Root Library)
- fbjs-0.8.16.tgz
- isomorphic-fetch-2.2.1.tgz
- :x: **node-fetch-1.7.3.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/vincenzodistasio97/excel-to-json/commit/1a1de35ab02810e012c50902e6111bd128e1e32a">1a1de35ab02810e012c50902e6111bd128e1e32a</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
node-fetch is vulnerable to Exposure of Sensitive Information to an Unauthorized Actor
<p>Publish Date: 2022-01-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0235>CVE-2022-0235</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-r683-j2x4-v87g">https://github.com/advisories/GHSA-r683-j2x4-v87g</a></p>
<p>Release Date: 2022-01-16</p>
<p>Fix Resolution: node-fetch - 2.6.7,3.1.1</p>
</p>
<p></p>
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2020-15168</summary>
### Vulnerable Library - <b>node-fetch-1.7.3.tgz</b></p>
<p>A light-weight module that brings window.fetch to node.js and io.js</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-fetch/-/node-fetch-1.7.3.tgz">https://registry.npmjs.org/node-fetch/-/node-fetch-1.7.3.tgz</a></p>
<p>Path to dependency file: /client/package.json</p>
<p>Path to vulnerable library: /client/node_modules/node-fetch/package.json</p>
<p>
Dependency Hierarchy:
- react-16.2.0.tgz (Root Library)
- fbjs-0.8.16.tgz
- isomorphic-fetch-2.2.1.tgz
- :x: **node-fetch-1.7.3.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/vincenzodistasio97/excel-to-json/commit/1a1de35ab02810e012c50902e6111bd128e1e32a">1a1de35ab02810e012c50902e6111bd128e1e32a</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
node-fetch before versions 2.6.1 and 3.0.0-beta.9 did not honor the size option after following a redirect, which means that when a content size was over the limit, a FetchError would never get thrown and the process would end without failure. For most people, this fix will have a little or no impact. However, if you are relying on node-fetch to gate files above a size, the impact could be significant, for example: If you don't double-check the size of the data after fetch() has completed, your JS thread could get tied up doing work on a large file (DoS) and/or cost you money in computing.
<p>Publish Date: 2020-09-10
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-15168>CVE-2020-15168</a></p>
</p>
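The advisory's advice — double-check the size after the fetch completes rather than trusting the client's `size` option across redirects — can be sketched like this. Python stands in for the JS streaming logic here; the function and names are illustrative, not node-fetch's API.

```python
# Enforce the size cap in our own code while streaming, instead of
# trusting the HTTP client to keep honoring it after a redirect.
# `chunks` stands in for an iterable of response-body chunks.
def read_limited(chunks, limit: int) -> bytes:
    total, body = 0, bytearray()
    for chunk in chunks:
        total += len(chunk)
        if total > limit:
            raise ValueError(f"response body exceeded {limit} bytes")
        body.extend(chunk)
    return bytes(body)

data = read_limited([b"x" * 512, b"y" * 512], limit=2048)
```

Because the cap is re-checked per chunk, an over-limit body fails fast instead of silently tying up the thread on a large file.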
<p></p>
### CVSS 3 Score Details (<b>5.3</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/node-fetch/node-fetch/security/advisories/GHSA-w7rc-rwvf-8q5r">https://github.com/node-fetch/node-fetch/security/advisories/GHSA-w7rc-rwvf-8q5r</a></p>
<p>Release Date: 2020-09-17</p>
<p>Fix Resolution (node-fetch): 2.6.1</p>
<p>Direct dependency fix Resolution (react): 16.5.0</p>
</p>
<p></p>
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details>
<!-- <REMEDIATE>[{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"react","packageVersion":"16.2.0","packageFilePaths":["/client/package.json"],"isTransitiveDependency":false,"dependencyTree":"react:16.2.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"16.3.0-alpha.0","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-27292","vulnerabilityDetails":"ua-parser-js \u003e\u003d 0.7.14, fixed in 0.7.24, uses a regular expression which is vulnerable to denial of service. If an attacker sends a malicious User-Agent header, ua-parser-js will get stuck processing it for an extended period of time.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-27292","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}},{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"react","packageVersion":"16.2.0","packageFilePaths":["/client/package.json"],"isTransitiveDependency":false,"dependencyTree":"react:16.2.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"16.3.0-alpha.0","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-7733","vulnerabilityDetails":"The package ua-parser-js before 0.7.22 are vulnerable to Regular Expression Denial of Service (ReDoS) via the regex for Redmi Phones and Mi Pad Tablets 
UA.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7733","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}},{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"react","packageVersion":"16.2.0","packageFilePaths":["/client/package.json"],"isTransitiveDependency":false,"dependencyTree":"react:16.2.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"16.3.0-alpha.0","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-7793","vulnerabilityDetails":"The package ua-parser-js before 0.7.23 are vulnerable to Regular Expression Denial of Service (ReDoS) in multiple regexes (see linked commit for more info).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7793","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}},{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"node-fetch","packageVersion":"1.7.3","packageFilePaths":["/client/package.json"],"isTransitiveDependency":true,"dependencyTree":"react:16.2.0;fbjs:0.8.16;isomorphic-fetch:2.2.1;node-fetch:1.7.3","isMinimumFixVersionAvailable":true,"minimumFixVersion":"node-fetch - 2.6.7,3.1.1","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2022-0235","vulnerabilityDetails":"node-fetch is vulnerable to Exposure of Sensitive Information to an Unauthorized 
Actor","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0235","cvss3Severity":"medium","cvss3Score":"6.1","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}},{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"react","packageVersion":"16.2.0","packageFilePaths":["/client/package.json"],"isTransitiveDependency":false,"dependencyTree":"react:16.2.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"16.5.0","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-15168","vulnerabilityDetails":"node-fetch before versions 2.6.1 and 3.0.0-beta.9 did not honor the size option after following a redirect, which means that when a content size was over the limit, a FetchError would never get thrown and the process would end without failure. For most people, this fix will have a little or no impact. However, if you are relying on node-fetch to gate files above a size, the impact could be significant, for example: If you don\u0027t double-check the size of the data after fetch() has completed, your JS thread could get tied up doing work on a large file (DoS) and/or cost you money in computing.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-15168","cvss3Severity":"medium","cvss3Score":"5.3","cvss3Metrics":{"A":"Low","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}]</REMEDIATE> --> | True | react-16.2.0.tgz: 5 vulnerabilities (highest severity is: 7.5) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>react-16.2.0.tgz</b></p></summary>
<p></p>
<p>Path to dependency file: /client/package.json</p>
<p>Path to vulnerable library: /client/node_modules/ua-parser-js/package.json</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/vincenzodistasio97/excel-to-json/commit/1a1de35ab02810e012c50902e6111bd128e1e32a">1a1de35ab02810e012c50902e6111bd128e1e32a</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | --- | --- |
| [CVE-2021-27292](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-27292) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | ua-parser-js-0.7.17.tgz | Transitive | 16.3.0-alpha.0 | ❌ |
| [CVE-2020-7733](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7733) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | ua-parser-js-0.7.17.tgz | Transitive | 16.3.0-alpha.0 | ❌ |
| [CVE-2020-7793](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7793) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | ua-parser-js-0.7.17.tgz | Transitive | 16.3.0-alpha.0 | ❌ |
| [CVE-2022-0235](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0235) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.1 | node-fetch-1.7.3.tgz | Transitive | N/A | ❌ |
| [CVE-2020-15168](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-15168) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.3 | node-fetch-1.7.3.tgz | Transitive | 16.5.0 | ❌ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2021-27292</summary>
### Vulnerable Library - <b>ua-parser-js-0.7.17.tgz</b></p>
<p>Lightweight JavaScript-based user-agent string parser</p>
<p>Library home page: <a href="https://registry.npmjs.org/ua-parser-js/-/ua-parser-js-0.7.17.tgz">https://registry.npmjs.org/ua-parser-js/-/ua-parser-js-0.7.17.tgz</a></p>
<p>Path to dependency file: /client/package.json</p>
<p>Path to vulnerable library: /client/node_modules/ua-parser-js/package.json</p>
<p>
Dependency Hierarchy:
- react-16.2.0.tgz (Root Library)
- fbjs-0.8.16.tgz
- :x: **ua-parser-js-0.7.17.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/vincenzodistasio97/excel-to-json/commit/1a1de35ab02810e012c50902e6111bd128e1e32a">1a1de35ab02810e012c50902e6111bd128e1e32a</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
ua-parser-js >= 0.7.14, fixed in 0.7.24, uses a regular expression which is vulnerable to denial of service. If an attacker sends a malicious User-Agent header, ua-parser-js will get stuck processing it for an extended period of time.
<p>Publish Date: 2021-03-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-27292>CVE-2021-27292</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/faisalman/ua-parser-js/releases/tag/0.7.24">https://github.com/faisalman/ua-parser-js/releases/tag/0.7.24</a></p>
<p>Release Date: 2021-03-17</p>
<p>Fix Resolution (ua-parser-js): 0.7.24</p>
<p>Direct dependency fix Resolution (react): 16.3.0-alpha.0</p>
</p>
<p></p>
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2020-7733</summary>
### Vulnerable Library - <b>ua-parser-js-0.7.17.tgz</b></p>
<p>Lightweight JavaScript-based user-agent string parser</p>
<p>Library home page: <a href="https://registry.npmjs.org/ua-parser-js/-/ua-parser-js-0.7.17.tgz">https://registry.npmjs.org/ua-parser-js/-/ua-parser-js-0.7.17.tgz</a></p>
<p>Path to dependency file: /client/package.json</p>
<p>Path to vulnerable library: /client/node_modules/ua-parser-js/package.json</p>
<p>
Dependency Hierarchy:
- react-16.2.0.tgz (Root Library)
- fbjs-0.8.16.tgz
- :x: **ua-parser-js-0.7.17.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/vincenzodistasio97/excel-to-json/commit/1a1de35ab02810e012c50902e6111bd128e1e32a">1a1de35ab02810e012c50902e6111bd128e1e32a</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
The package ua-parser-js before 0.7.22 are vulnerable to Regular Expression Denial of Service (ReDoS) via the regex for Redmi Phones and Mi Pad Tablets UA.
<p>Publish Date: 2020-09-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7733>CVE-2020-7733</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7733">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7733</a></p>
<p>Release Date: 2020-09-16</p>
<p>Fix Resolution (ua-parser-js): 0.7.22</p>
<p>Direct dependency fix Resolution (react): 16.3.0-alpha.0</p>
</p>
<p></p>
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2020-7793</summary>
### Vulnerable Library - <b>ua-parser-js-0.7.17.tgz</b></p>
<p>Lightweight JavaScript-based user-agent string parser</p>
<p>Library home page: <a href="https://registry.npmjs.org/ua-parser-js/-/ua-parser-js-0.7.17.tgz">https://registry.npmjs.org/ua-parser-js/-/ua-parser-js-0.7.17.tgz</a></p>
<p>Path to dependency file: /client/package.json</p>
<p>Path to vulnerable library: /client/node_modules/ua-parser-js/package.json</p>
<p>
Dependency Hierarchy:
- react-16.2.0.tgz (Root Library)
- fbjs-0.8.16.tgz
- :x: **ua-parser-js-0.7.17.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/vincenzodistasio97/excel-to-json/commit/1a1de35ab02810e012c50902e6111bd128e1e32a">1a1de35ab02810e012c50902e6111bd128e1e32a</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
The package ua-parser-js before 0.7.23 are vulnerable to Regular Expression Denial of Service (ReDoS) in multiple regexes (see linked commit for more info).
<p>Publish Date: 2020-12-11
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7793>CVE-2020-7793</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/faisalman/ua-parser-js/commit/6d1f26df051ba681463ef109d36c9cf0f7e32b18">https://github.com/faisalman/ua-parser-js/commit/6d1f26df051ba681463ef109d36c9cf0f7e32b18</a></p>
<p>Release Date: 2020-12-11</p>
<p>Fix Resolution (ua-parser-js): 0.7.23</p>
<p>Direct dependency fix Resolution (react): 16.3.0-alpha.0</p>
</p>
<p></p>
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2022-0235</summary>
### Vulnerable Library - <b>node-fetch-1.7.3.tgz</b></p>
<p>A light-weight module that brings window.fetch to node.js and io.js</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-fetch/-/node-fetch-1.7.3.tgz">https://registry.npmjs.org/node-fetch/-/node-fetch-1.7.3.tgz</a></p>
<p>Path to dependency file: /client/package.json</p>
<p>Path to vulnerable library: /client/node_modules/node-fetch/package.json</p>
<p>
Dependency Hierarchy:
- react-16.2.0.tgz (Root Library)
- fbjs-0.8.16.tgz
- isomorphic-fetch-2.2.1.tgz
- :x: **node-fetch-1.7.3.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/vincenzodistasio97/excel-to-json/commit/1a1de35ab02810e012c50902e6111bd128e1e32a">1a1de35ab02810e012c50902e6111bd128e1e32a</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
node-fetch is vulnerable to Exposure of Sensitive Information to an Unauthorized Actor
<p>Publish Date: 2022-01-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0235>CVE-2022-0235</a></p>
</p>
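The underlying problem is that credentials attached to the original request (for example an `Authorization` header) could be forwarded to a different origin when the server redirects. The sketch below illustrates the general mitigation idea — dropping credential-bearing headers once a redirect crosses origins. The helper name and header list are illustrative; this is not node-fetch's actual implementation.

```javascript
// Illustrative helper: drop credential-bearing headers when a redirect
// crosses origins, in the spirit of the node-fetch fix.
const SENSITIVE_HEADERS = ['authorization', 'www-authenticate', 'cookie', 'cookie2'];

function headersForRedirect(fromUrl, toUrl, headers) {
  const from = new URL(fromUrl);
  const to = new URL(toUrl);
  const sameOrigin = from.protocol === to.protocol && from.host === to.host;
  if (sameOrigin) return { ...headers };
  const cleaned = {};
  for (const [name, value] of Object.entries(headers)) {
    if (!SENSITIVE_HEADERS.includes(name.toLowerCase())) cleaned[name] = value;
  }
  return cleaned;
}

const redirected = headersForRedirect(
  'https://api.example.com/data',
  'https://evil.example.net/collect',
  { Authorization: 'Bearer secret', Accept: 'application/json' }
);
console.log(redirected); // { Accept: 'application/json' }
```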
<p></p>
### CVSS 3 Score Details (<b>6.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-r683-j2x4-v87g">https://github.com/advisories/GHSA-r683-j2x4-v87g</a></p>
<p>Release Date: 2022-01-16</p>
<p>Fix Resolution: node-fetch - 2.6.7,3.1.1</p>
</p>
<p></p>
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2020-15168</summary>
### Vulnerable Library - <b>node-fetch-1.7.3.tgz</b></p>
<p>A light-weight module that brings window.fetch to node.js and io.js</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-fetch/-/node-fetch-1.7.3.tgz">https://registry.npmjs.org/node-fetch/-/node-fetch-1.7.3.tgz</a></p>
<p>Path to dependency file: /client/package.json</p>
<p>Path to vulnerable library: /client/node_modules/node-fetch/package.json</p>
<p>
Dependency Hierarchy:
- react-16.2.0.tgz (Root Library)
- fbjs-0.8.16.tgz
- isomorphic-fetch-2.2.1.tgz
- :x: **node-fetch-1.7.3.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/vincenzodistasio97/excel-to-json/commit/1a1de35ab02810e012c50902e6111bd128e1e32a">1a1de35ab02810e012c50902e6111bd128e1e32a</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
node-fetch before versions 2.6.1 and 3.0.0-beta.9 did not honor the size option after following a redirect, which means that when a content size was over the limit, a FetchError would never get thrown and the process would end without failure. For most people, this fix will have little or no impact. However, if you are relying on node-fetch to gate files above a size, the impact could be significant; for example, if you don't double-check the size of the data after fetch() has completed, your JS thread could get tied up doing work on a large file (DoS) and/or cost you money in computing.
<p>Publish Date: 2020-09-10
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-15168>CVE-2020-15168</a></p>
</p>
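In other words, the `size` option could silently stop being enforced once a redirect was followed. The advisory's own advice is to re-check the payload size yourself after the body has been consumed; a minimal sketch of that check (helper names are illustrative):

```javascript
// Illustrative guard: verify the downloaded body against a byte limit
// yourself, rather than trusting a fetch option to enforce it.
function assertWithinLimit(buf, limitBytes) {
  if (buf.length > limitBytes) {
    throw new Error(`body of ${buf.length} bytes exceeds limit of ${limitBytes}`);
  }
  return buf;
}

// e.g. after: const buf = Buffer.from(await res.arrayBuffer());
const ok = assertWithinLimit(Buffer.from('small payload'), 1024);
console.log(ok.toString()); // small payload

try {
  assertWithinLimit(Buffer.alloc(2048), 1024);
} catch (err) {
  console.log(err.message); // body of 2048 bytes exceeds limit of 1024
}
```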
<p></p>
### CVSS 3 Score Details (<b>5.3</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/node-fetch/node-fetch/security/advisories/GHSA-w7rc-rwvf-8q5r">https://github.com/node-fetch/node-fetch/security/advisories/GHSA-w7rc-rwvf-8q5r</a></p>
<p>Release Date: 2020-09-17</p>
<p>Fix Resolution (node-fetch): 2.6.1</p>
<p>Direct dependency fix Resolution (react): 16.5.0</p>
</p>
<p></p>
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details>
<!-- <REMEDIATE>[{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"react","packageVersion":"16.2.0","packageFilePaths":["/client/package.json"],"isTransitiveDependency":false,"dependencyTree":"react:16.2.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"16.3.0-alpha.0","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-27292","vulnerabilityDetails":"ua-parser-js \u003e\u003d 0.7.14, fixed in 0.7.24, uses a regular expression which is vulnerable to denial of service. If an attacker sends a malicious User-Agent header, ua-parser-js will get stuck processing it for an extended period of time.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-27292","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}},{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"react","packageVersion":"16.2.0","packageFilePaths":["/client/package.json"],"isTransitiveDependency":false,"dependencyTree":"react:16.2.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"16.3.0-alpha.0","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-7733","vulnerabilityDetails":"The package ua-parser-js before 0.7.22 are vulnerable to Regular Expression Denial of Service (ReDoS) via the regex for Redmi Phones and Mi Pad Tablets 
UA.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7733","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}},{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"react","packageVersion":"16.2.0","packageFilePaths":["/client/package.json"],"isTransitiveDependency":false,"dependencyTree":"react:16.2.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"16.3.0-alpha.0","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-7793","vulnerabilityDetails":"The package ua-parser-js before 0.7.23 are vulnerable to Regular Expression Denial of Service (ReDoS) in multiple regexes (see linked commit for more info).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7793","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}},{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"node-fetch","packageVersion":"1.7.3","packageFilePaths":["/client/package.json"],"isTransitiveDependency":true,"dependencyTree":"react:16.2.0;fbjs:0.8.16;isomorphic-fetch:2.2.1;node-fetch:1.7.3","isMinimumFixVersionAvailable":true,"minimumFixVersion":"node-fetch - 2.6.7,3.1.1","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2022-0235","vulnerabilityDetails":"node-fetch is vulnerable to Exposure of Sensitive Information to an Unauthorized 
Actor","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0235","cvss3Severity":"medium","cvss3Score":"6.1","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}},{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"react","packageVersion":"16.2.0","packageFilePaths":["/client/package.json"],"isTransitiveDependency":false,"dependencyTree":"react:16.2.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"16.5.0","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-15168","vulnerabilityDetails":"node-fetch before versions 2.6.1 and 3.0.0-beta.9 did not honor the size option after following a redirect, which means that when a content size was over the limit, a FetchError would never get thrown and the process would end without failure. For most people, this fix will have a little or no impact. 
However, if you are relying on node-fetch to gate files above a size, the impact could be significant, for example: If you don\u0027t double-check the size of the data after fetch() has completed, your JS thread could get tied up doing work on a large file (DoS) and/or cost you money in computing.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-15168","cvss3Severity":"medium","cvss3Score":"5.3","cvss3Metrics":{"A":"Low","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}]</REMEDIATE> --> | non_test | react tgz vulnerabilities highest severity is vulnerable library react tgz path to dependency file client package json path to vulnerable library client node modules ua parser js package json found in head commit a href vulnerabilities cve severity cvss dependency type fixed in remediation available high ua parser js tgz transitive alpha high ua parser js tgz transitive alpha high ua parser js tgz transitive alpha medium node fetch tgz transitive n a medium node fetch tgz transitive details cve vulnerable library ua parser js tgz lightweight javascript based user agent string parser library home page a href path to dependency file client package json path to vulnerable library client node modules ua parser js package json dependency hierarchy react tgz root library fbjs tgz x ua parser js tgz vulnerable library found in head commit a href found in base branch master vulnerability details ua parser js fixed in uses a regular expression which is vulnerable to denial of service if an attacker sends a malicious user agent header ua parser js will get stuck processing it for an extended period of time publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more 
information on scores click a href suggested fix type upgrade version origin a href release date fix resolution ua parser js direct dependency fix resolution react alpha step up your open source security game with whitesource cve vulnerable library ua parser js tgz lightweight javascript based user agent string parser library home page a href path to dependency file client package json path to vulnerable library client node modules ua parser js package json dependency hierarchy react tgz root library fbjs tgz x ua parser js tgz vulnerable library found in head commit a href found in base branch master vulnerability details the package ua parser js before are vulnerable to regular expression denial of service redos via the regex for redmi phones and mi pad tablets ua publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution ua parser js direct dependency fix resolution react alpha step up your open source security game with whitesource cve vulnerable library ua parser js tgz lightweight javascript based user agent string parser library home page a href path to dependency file client package json path to vulnerable library client node modules ua parser js package json dependency hierarchy react tgz root library fbjs tgz x ua parser js tgz vulnerable library found in head commit a href found in base branch master vulnerability details the package ua parser js before are vulnerable to regular expression denial of service redos in multiple regexes see linked commit for more info publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none 
user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution ua parser js direct dependency fix resolution react alpha step up your open source security game with whitesource cve vulnerable library node fetch tgz a light weight module that brings window fetch to node js and io js library home page a href path to dependency file client package json path to vulnerable library client node modules node fetch package json dependency hierarchy react tgz root library fbjs tgz isomorphic fetch tgz x node fetch tgz vulnerable library found in head commit a href found in base branch master vulnerability details node fetch is vulnerable to exposure of sensitive information to an unauthorized actor publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution node fetch step up your open source security game with whitesource cve vulnerable library node fetch tgz a light weight module that brings window fetch to node js and io js library home page a href path to dependency file client package json path to vulnerable library client node modules node fetch package json dependency hierarchy react tgz root library fbjs tgz isomorphic fetch tgz x node fetch tgz vulnerable library found in head commit a href found in base branch master vulnerability details node fetch before versions and beta did not honor the size option after following a redirect which means that when a content size was over the limit a fetcherror would never get thrown and the process would end 
without failure for most people this fix will have a little or no impact however if you are relying on node fetch to gate files above a size the impact could be significant for example if you don t double check the size of the data after fetch has completed your js thread could get tied up doing work on a large file dos and or cost you money in computing publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution node fetch direct dependency fix resolution react step up your open source security game with whitesource istransitivedependency false dependencytree react isminimumfixversionavailable true minimumfixversion alpha isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails ua parser js fixed in uses a regular expression which is vulnerable to denial of service if an attacker sends a malicious user agent header ua parser js will get stuck processing it for an extended period of time vulnerabilityurl istransitivedependency false dependencytree react isminimumfixversionavailable true minimumfixversion alpha isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails the package ua parser js before are vulnerable to regular expression denial of service redos via the regex for redmi phones and mi pad tablets ua vulnerabilityurl istransitivedependency false dependencytree react isminimumfixversionavailable true minimumfixversion alpha isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails the package ua parser js before are vulnerable to regular expression denial of service redos in multiple regexes see linked commit for more info vulnerabilityurl istransitivedependency 
true dependencytree react fbjs isomorphic fetch node fetch isminimumfixversionavailable true minimumfixversion node fetch isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails node fetch is vulnerable to exposure of sensitive information to an unauthorized actor vulnerabilityurl istransitivedependency false dependencytree react isminimumfixversionavailable true minimumfixversion isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails node fetch before versions and beta did not honor the size option after following a redirect which means that when a content size was over the limit a fetcherror would never get thrown and the process would end without failure for most people this fix will have a little or no impact however if you are relying on node fetch to gate files above a size the impact could be significant for example if you don double check the size of the data after fetch has completed your js thread could get tied up doing work on a large file dos and or cost you money in computing vulnerabilityurl | 0 |
791,804 | 27,878,171,967 | IssuesEvent | 2023-03-21 17:21:30 | metabase/metabase | https://api.github.com/repos/metabase/metabase | closed | Saving a Slack integration with an invalid files channel name will appear to fail, but actually save with the default value | Type:Bug Priority:P3 .Regression Notifications/Slack | **To Reproduce**
Steps to reproduce the behavior:
1. Try to save a new Slack integration with a valid OAuth token, but a files channel that does not exist
2. You should see an error when saving, as expected
3. Refresh the page
4. The page will show the Slack integration as being saved successfully, with `metabase_files` (the default value) as the files channel
**Information about your Metabase Installation:**
Current master
**Severity**
Annoying, but maybe p3 since it is only hit if you try to use an invalid files channel | 1.0 | Saving a Slack integration with an invalid files channel name will appear to fail, but actually save with the default value - **To Reproduce**
Steps to reproduce the behavior:
1. Try to save a new Slack integration with a valid OAuth token, but a files channel that does not exist
2. You should see an error when saving, as expected
3. Refresh the page
4. The page will show the Slack integration as being saved successfully, with `metabase_files` (the default value) as the files channel
**Information about your Metabase Installation:**
Current master
**Severity**
Annoying, but maybe p3 since it is only hit if you try to use an invalid files channel | non_test | saving a slack integration with an invalid files channel name will appear to fail but actually save with the default value to reproduce steps to reproduce the behavior try to save a new slack integration with a valid oauth token but a files channel that does not exist you should see an error when saving as expected refresh the page the page will show the slack integration as being saved successfully with metabase files the default value as the files channel information about your metabase installation current master severity annoying but maybe since it is only hit if you try to use an invalid files channel | 0 |
791,081 | 27,849,111,905 | IssuesEvent | 2023-03-20 17:23:15 | NCAR/wrfcloud | https://api.github.com/repos/NCAR/wrfcloud | closed | Add wind barb/arrow plot for wind direction | priority: high type: new feature component: graphics |
## Describe the New Feature ##
Derived wind speed and direction were added via PR #147. Direction is currently a contoured plot, which isn't that usable. Need to add a plot type that displays directions as either wind barbs or wind arrows.
### Acceptance Testing ###
*List input data types and sources.*
*Describe tests required for new functionality.*
### Time Estimate ###
*Estimate the amount of work required here.*
*Issues should represent approximately 1 to 3 days of work.*
### Sub-Issues ###
Consider breaking the new feature down into sub-issues.
- [ ] *Add a checkbox for each sub-issue here.*
### Relevant Deadlines ###
*List relevant project deadlines here or state NONE.*
## Define the Metadata ##
### Assignee ###
- [ ] Select **engineer(s)** or **no engineer** required
- [ ] Select **scientist(s)** or **no scientist** required
### Labels ###
- [ ] Select **component(s)**
- [ ] Select **priority**
### Projects and Milestone ###
- [ ] Select **Project**
- [ ] Select **Milestone** as the next official version or **Backlog of Development Ideas**
## New Feature Checklist ##
- [ ] Complete the issue definition above, including the **Time Estimate** and **Funding source**.
- [ ] Fork this repository or create a branch of **develop**.
Branch name: `feature_<Issue Number>/<Description>`
- [ ] Complete the development and test your changes.
- [ ] Add/update log messages for easier debugging.
- [ ] Add/update tests.
- [ ] Add/update documentation.
- [ ] Push local changes to GitHub.
- [ ] Submit a pull request to merge into **develop**.
Pull request: `feature <Issue Number> <Description>`
- [ ] Define the pull request metadata, as permissions allow.
Select: **Reviewer(s)**, **Project**, and **Development** issue
Select: **Milestone** as the next official version
- [ ] Iterate until the reviewer(s) accept and merge your changes.
- [ ] Delete your fork or branch.
- [ ] Close this issue.
| 1.0 | Add wind barb/arrow plot for wind direction -
## Describe the New Feature ##
Derived wind speed and direction were added via PR #147. Direction is currently a contoured plot, which isn't that usable. Need to add a plot type that displays directions as either wind barbs or wind arrows.
### Acceptance Testing ###
*List input data types and sources.*
*Describe tests required for new functionality.*
### Time Estimate ###
*Estimate the amount of work required here.*
*Issues should represent approximately 1 to 3 days of work.*
### Sub-Issues ###
Consider breaking the new feature down into sub-issues.
- [ ] *Add a checkbox for each sub-issue here.*
### Relevant Deadlines ###
*List relevant project deadlines here or state NONE.*
## Define the Metadata ##
### Assignee ###
- [ ] Select **engineer(s)** or **no engineer** required
- [ ] Select **scientist(s)** or **no scientist** required
### Labels ###
- [ ] Select **component(s)**
- [ ] Select **priority**
### Projects and Milestone ###
- [ ] Select **Project**
- [ ] Select **Milestone** as the next official version or **Backlog of Development Ideas**
## New Feature Checklist ##
- [ ] Complete the issue definition above, including the **Time Estimate** and **Funding source**.
- [ ] Fork this repository or create a branch of **develop**.
Branch name: `feature_<Issue Number>/<Description>`
- [ ] Complete the development and test your changes.
- [ ] Add/update log messages for easier debugging.
- [ ] Add/update tests.
- [ ] Add/update documentation.
- [ ] Push local changes to GitHub.
- [ ] Submit a pull request to merge into **develop**.
Pull request: `feature <Issue Number> <Description>`
- [ ] Define the pull request metadata, as permissions allow.
Select: **Reviewer(s)**, **Project**, and **Development** issue
Select: **Milestone** as the next official version
- [ ] Iterate until the reviewer(s) accept and merge your changes.
- [ ] Delete your fork or branch.
- [ ] Close this issue.
| non_test | add wind barb arrow plot for wind direction describe the new feature derived wind speed and direction were added via pr direction is currently a contoured plot which isn t that usable need to add a plot type that displays directions as either wind barbs or wind arrows acceptance testing list input data types and sources describe tests required for new functionality time estimate estimate the amount of work required here issues should represent approximately to days of work sub issues consider breaking the new feature down into sub issues add a checkbox for each sub issue here relevant deadlines list relevant project deadlines here or state none define the metadata assignee select engineer s or no engineer required select scientist s or no scientist required labels select component s select priority projects and milestone select project select milestone as the next official version or backlog of development ideas new feature checklist complete the issue definition above including the time estimate and funding source fork this repository or create a branch of develop branch name feature complete the development and test your changes add update log messages for easier debugging add update tests add update documentation push local changes to github submit a pull request to merge into develop pull request feature define the pull request metadata as permissions allow select reviewer s project and development issue select milestone as the next official version iterate until the reviewer s accept and merge your changes delete your fork or branch close this issue | 0 |
59,412 | 3,109,935,142 | IssuesEvent | 2015-09-02 01:43:50 | cs2103aug2015-w09-3j/main | https://api.github.com/repos/cs2103aug2015-w09-3j/main | opened | A user can sync his tasks to Google Calendar | priority.medium type.story | ... so that his tasks can be seen in his Google Calendar | 1.0 | A user can sync his tasks to Google Calendar - ... so that his tasks can be seen in his Google Calendar | non_test | a user can sync his tasks to google calendar so that his tasks can be seen in his google calendar | 0 |
46,812 | 5,830,831,736 | IssuesEvent | 2017-05-08 17:50:50 | bounswe/bounswe2017group10 | https://api.github.com/repos/bounswe/bounswe2017group10 | closed | Develop an acceptance test for the project. | Development Testing | Find a good template for the documentation and develop test cases for all use-cases in our system. | 1.0 | Develop an acceptance test for the project. - Find a good template for the documentation and develop test cases for all use-cases in our system. | test | develop an acceptance test for the project find a good template for the documentation and develop test cases for all use cases in our system | 1 |
10,939 | 16,008,416,470 | IssuesEvent | 2021-04-20 07:31:31 | wmo-im/monitoring | https://api.github.com/repos/wmo-im/monitoring | closed | Support different data views | Technical requirement | The monitoring should support different data views. For example raw data vs. nwp data. That is, raw data is the data, that is in principle available, whereas nwp data is the data, that is actaally used in forecasting | 1.0 | Support different data views - The monitoring should support different data views. For example raw data vs. nwp data. That is, raw data is the data, that is in principle available, whereas nwp data is the data, that is actaally used in forecasting | non_test | support different data views the monitoring should support different data views for example raw data vs nwp data that is raw data is the data that is in principle available whereas nwp data is the data that is actaally used in forecasting | 0 |
101,954 | 12,733,848,239 | IssuesEvent | 2020-06-25 13:00:38 | google/web-stories-wp | https://api.github.com/repos/google/web-stories-wp | closed | Editor UI refinements for beta | Group: Design Panel Group: Editing Pod: Workspace Type: Enhancement UAT: Passed | ## Feature Description
Move "Nothing selected" to the center and a bit further down
Design & Document tabs should be aligned to the left

Remove white lines at the bottom:
<img width="369" alt="Screen Shot 2020-06-19 at 7 52 26 AM" src="https://user-images.githubusercontent.com/841956/85148241-33898600-b250-11ea-9e33-6cf65c23eb20.png">
<img width="369" alt="Screen Shot 2020-06-19 at 7 51 52 AM" src="https://user-images.githubusercontent.com/841956/85148245-34221c80-b250-11ea-82ae-426f5d4ad7ac.png">
---
_Do not alter or remove anything below. The following sections will be managed by moderators only._
## Acceptance Criteria
<!-- One or more bullet points for acceptance criteria. -->
## Implementation Brief
<!-- One or more bullet points for how to technically implement the feature. -->
| 1.0 | Editor UI refinements for beta - ## Feature Description
Move "Nothing selected" to the center and a bit further down
Design & Document tabs should be aligned to the left

Remove white lines at the bottom:
<img width="369" alt="Screen Shot 2020-06-19 at 7 52 26 AM" src="https://user-images.githubusercontent.com/841956/85148241-33898600-b250-11ea-9e33-6cf65c23eb20.png">
<img width="369" alt="Screen Shot 2020-06-19 at 7 51 52 AM" src="https://user-images.githubusercontent.com/841956/85148245-34221c80-b250-11ea-82ae-426f5d4ad7ac.png">
---
_Do not alter or remove anything below. The following sections will be managed by moderators only._
## Acceptance Criteria
<!-- One or more bullet points for acceptance criteria. -->
## Implementation Brief
<!-- One or more bullet points for how to technically implement the feature. -->
| non_test | editor ui refinements for beta feature description move nothing selected to the center and a bit further down design document tabs should be aligned to the left remove white lines at the bottom img width alt screen shot at am src img width alt screen shot at am src do not alter or remove anything below the following sections will be managed by moderators only acceptance criteria implementation brief | 0 |
386,887 | 26,705,738,721 | IssuesEvent | 2023-01-27 17:59:40 | lullius/pylibby | https://api.github.com/repos/lullius/pylibby | closed | Works on macOS | documentation | Not an issue per se but your Read Me says you don't know if it works on any operating systems other than Linux. I can confirm that I got it working on macOS. I appreciate the work you've put into this. | 1.0 | Works on macOS - Not an issue per se but your Read Me says you don't know if it works on any operating systems other than Linux. I can confirm that I got it working on macOS. I appreciate the work you've put into this. | non_test | works on macos not an issue per se but your read me says you don t know if it works on any operating systems other than linux i can confirm that i got it working on macos i appreciate the work you ve put into this | 0 |
352,531 | 25,071,736,779 | IssuesEvent | 2022-11-07 12:42:41 | projen/projen | https://api.github.com/repos/projen/projen | closed | Example for projen release of AwsCdkConstructLibrary with CodeCommit, CodePipeline and CodeArtifact | documentation | Hey guys,
first of all a big thank you for this amazing project!
During the last weeks I'm browsing through lots of content regarding AWS CDK (e.g. CDK Day Youtube Videos) and now I'm starting to realize the power of projen.
I want to create an AwsCdkConstructLibrary for our company for internal use, so I've switched to a manual release because I want to use CodeCommit + CodePipeline instead of GitHub and I want to use CodeArtifact and an internal Nexus3 repository for the released packages.
Therefore I use these settings now:
`github: false,`
`releaseTrigger: ReleaseTrigger.manual(),`
Unfortunately I was not able to find a documentation including a buildspec.yml that explains how "npx projen build", "npx projen release" and so on works inside a CodePipeline/CodeBuild project.
Could you add this to the documentation maybe?
Best wishes
Johnny | 1.0 | Example for projen release of AwsCdkConstructLibrary with CodeCommit, CodePipeline and CodeArtifact - Hey guys,
first of all a big thank you for this amazing project!
During the last weeks I'm browsing through lots of content regarding AWS CDK (e.g. CDK Day Youtube Videos) and now I'm starting to realize the power of projen.
I want to create an AwsCdkConstructLibrary for our company for internal use, so I've switched to a manual release because I want to use CodeCommit + CodePipeline instead of GitHub and I want to use CodeArtifact and an internal Nexus3 repository for the released packages.
Therefore I use these settings now:
`github: false,`
`releaseTrigger: ReleaseTrigger.manual(),`
Unfortunately I was not able to find a documentation including a buildspec.yml that explains how "npx projen build", "npx projen release" and so on works inside a CodePipeline/CodeBuild project.
Could you add this to the documentation maybe?
Best wishes
Johnny | non_test | example for projen release of awscdkconstructlibrary with codecommit codepipeline and codeartifact hey guys first of all a big thank you for this amazing project during the last weeks i m browsing through lots of content regarding aws cdk e g cdk day youtube videos and now i m starting to realize the power of projen i want to create an awscdkconstructlibrary for our company for internal use so i ve switched to a manual release because i want to use codecommit codepipeline instead of github and i want to use codeartifact and an internal repository for the released packages therefore i use these settings now github false releasetrigger releasetrigger manual unfortunately i was not able to find a documentation including a buildspec yml that explains how npx projen build npx projen release and so on works inside a codepipeline codebuild project could you add this to the documentation maybe best wishes johnny | 0 |
154,670 | 12,224,202,624 | IssuesEvent | 2020-05-02 21:17:33 | axelvondreden/dms | https://api.github.com/repos/axelvondreden/dms | closed | make editing/creating words in import process possible | enhancement test successful | Ich bin mal so frei und habe dir deine Todo Notiz aus dem Release als Ticket erstellt. | 1.0 | make editing/creating words in import process possible - Ich bin mal so frei und habe dir deine Todo Notiz aus dem Release als Ticket erstellt. | test | make editing creating words in import process possible ich bin mal so frei und habe dir deine todo notiz aus dem release als ticket erstellt | 1 |
138,925 | 12,833,155,422 | IssuesEvent | 2020-07-07 08:52:59 | equinor/design-system | https://api.github.com/repos/equinor/design-system | opened | Create documentation for Slider | documentation react | Write documentation for [Slider](https://eds.equinor.com/components/slider/)
**Sub tasks**
- [ ] Write documentation
- [ ] Create examples
- [ ] Add props-table
- [ ] Add live preview | 1.0 | Create documentation for Slider - Write documentation for [Slider](https://eds.equinor.com/components/slider/)
**Sub tasks**
- [ ] Write documentation
- [ ] Create examples
- [ ] Add props-table
- [ ] Add live preview | non_test | create documentation for slider write documentation for sub tasks write documentation create examples add props table add live preview | 0 |
431,158 | 30,220,708,370 | IssuesEvent | 2023-07-05 19:08:48 | Quantum-Accelerators/quacc | https://api.github.com/repos/Quantum-Accelerators/quacc | closed | Finalize support for Prefect | documentation enhancement | 1. Complete the tutorial docs
2. Figure out how to run each task as a Slurm job
3. Ideally: figure out how to do job packing
4. Confirm the dynamic job tutorial works as expected in terms of the number of Slurm jobs | 1.0 | Finalize support for Prefect - 1. Complete the tutorial docs
2. Figure out how to run each task as a Slurm job
3. Ideally: figure out how to do job packing
4. Confirm the dynamic job tutorial works as expected in terms of the number of Slurm jobs | non_test | finalize support for prefect complete the tutorial docs figure out how to run each task as a slurm job ideally figure out how to do job packing confirm the dynamic job tutorial works as expected in terms of the number of slurm jobs | 0 |
131,975 | 10,726,181,141 | IssuesEvent | 2019-10-28 08:48:34 | ballerina-platform/ballerina-lang | https://api.github.com/repos/ballerina-platform/ballerina-lang | closed | [WebSub] Update code snippets in module MD in websub module to reflect new changes | Area/StandardLibs BetaTesting Component/WebSub Type/Docs | **Description:**
$subject
**Steps to reproduce:**
Add __immediateStop and __gracefulStop in custom websub listener sample
**Affected Versions:**
1.0-beta
| 1.0 | [WebSub] Update code snippets in module MD in websub module to reflect new changes - **Description:**
$subject
**Steps to reproduce:**
Add __immediateStop and __gracefulStop in custom websub listener sample
**Affected Versions:**
1.0-beta
| test | update code snippets in module md in websub module to reflect new changes description subject steps to reproduce add immediatestop and gracefulstop in custom websub listener sample affected versions beta | 1 |
270,144 | 8,452,831,112 | IssuesEvent | 2018-10-20 08:59:24 | Diorite/dommons | https://api.github.com/repos/Diorite/dommons | opened | Add annotated types support to Type API | enhancement low priority | Currently Type API does not support `AnnotatedType`, it would be great to be able to represent such types too, we already support constructing annotations by class and values.
Goals: (can be done in separate tasks/PRs)
- [ ] Be able to construct annotated type for any existing `Type` with given `Annotation`s
- [ ] Improve parser to be able to parse types like `List<@Annotation(value = "x") String>` | 1.0 | Add annotated types support to Type API - Currently Type API does not support `AnnotatedType`, it would be great to be able to represent such types too, we already support constructing annotations by class and values.
Goals: (can be done in separate tasks/PRs)
- [ ] Be able to construct annotated type for any existing `Type` with given `Annotation`s
- [ ] Improve parser to be able to parse types like `List<@Annotation(value = "x") String>` | non_test | add annotated types support to type api currently type api does not support annotatedtype it would be great to be able to represent such types too we already support constructing annotations by class and values goals can be done in separate tasks prs be able to construct annotated type for any existing type with given annotation s improve parser to be able to parse types like list | 0 |
47,078 | 5,847,524,106 | IssuesEvent | 2017-05-10 18:41:38 | PowerShell/PowerShell | https://api.github.com/repos/PowerShell/PowerShell | opened | Validate PSGallery Modules compatibility on .Net Std 2.0 | Area-Test | Need some automation to install modules from the gallery, test against latest PSCore6 and see if they are tagged correctly for compatibility with coreclr as well as non-Windows. | 1.0 | Validate PSGallery Modules compatibility on .Net Std 2.0 - Need some automation to install modules from the gallery, test against latest PSCore6 and see if they are tagged correctly for compatibility with coreclr as well as non-Windows. | test | validate psgallery modules compatibility on net std need some automation to install modules from the gallery test against latest and see if they are tagged correctly for compatibility with coreclr as well as non windows | 1 |
264,595 | 23,126,837,915 | IssuesEvent | 2022-07-28 06:44:02 | pytorch/pytorch | https://api.github.com/repos/pytorch/pytorch | opened | DISABLED test_delayed_reduce_scatter_offload_true_none (__main__.TestParityWithDDP) | module: flaky-tests skipped module: unknown | Platforms: linux, rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_delayed_reduce_scatter_offload_true_none&suite=TestParityWithDDP&file=distributed/fsdp/test_fsdp_core.py) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/7552152283).
Over the past 3 hours, it has been determined flaky in 1 workflow(s) with 1 red and 1 green. | 1.0 | DISABLED test_delayed_reduce_scatter_offload_true_none (__main__.TestParityWithDDP) - Platforms: linux, rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_delayed_reduce_scatter_offload_true_none&suite=TestParityWithDDP&file=distributed/fsdp/test_fsdp_core.py) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/7552152283).
Over the past 3 hours, it has been determined flaky in 1 workflow(s) with 1 red and 1 green. | test | disabled test delayed reduce scatter offload true none main testparitywithddp platforms linux rocm this test was disabled because it is failing in ci see and the most recent trunk over the past hours it has been determined flaky in workflow s with red and green | 1 |
13,854 | 3,365,955,567 | IssuesEvent | 2015-11-21 01:05:49 | FezVrasta/bootstrap-material-design | https://api.github.com/repos/FezVrasta/bootstrap-material-design | closed | Integration With AngularJS Does not Work | needs test case | Currently integrating bootstrap-material-design with Angularjs. However there are no erros but most of the element does not work using angularjs 1.3 +
Please suggest
<!DOCTYPE html>
<html>
<head>
<meta name="viewport" content="user-scalable=no, width=device-width, initial-scale=1, maximum-scale=1">
<title>Material Design</title>
<link rel="stylesheet" href="plugins/bootstrap-3.3.4/css/bootstrap.min.css" />
<link rel="stylesheet" href="plugins/bootstrap-material-design-master/dist/css/material-fullpalette.min.css" />
<link rel="stylesheet" href="plugins/bootstrap-material-design-master/dist/css/ripples.min.css" />
<link rel="stylesheet" href="plugins/bootstrap-material-design-master/dist/css/roboto.min.css" />
<link rel="stylesheet" href="plugins/font-awesome-4.3.0/css/font-awesome.min.css" />
<link rel="stylesheet" href="style/init.css" />
</head>
<body ng-app="myApp" ng-controller="globalCtrl">
<div ng-view></div>
</body>
<script src="plugins/angular/angular.min.js" ></script>
<script src="plugins/angular/angular-animate.min.js" ></script>
<script src="plugins/angular/angular-route.min.js" ></script>
<script src="plugins/angular/angular-aria.js" ></script>
<script src="plugins/jquery-1.11.3.min.js" ></script>
<script src="init.js" ></script>
<!-- Controllers -->
<script src="controllers/landingCtrl.js"></script>
<script src="controllers/globalCtrl.js"></script>
</html>
| 1.0 | Integration With AngularJS Does not Work - Currently integrating bootstrap-material-design with Angularjs. However there are no erros but most of the element does not work using angularjs 1.3 +
Please suggest
<!DOCTYPE html>
<html>
<head>
<meta name="viewport" content="user-scalable=no, width=device-width, initial-scale=1, maximum-scale=1">
<title>Material Design</title>
<link rel="stylesheet" href="plugins/bootstrap-3.3.4/css/bootstrap.min.css" />
<link rel="stylesheet" href="plugins/bootstrap-material-design-master/dist/css/material-fullpalette.min.css" />
<link rel="stylesheet" href="plugins/bootstrap-material-design-master/dist/css/ripples.min.css" />
<link rel="stylesheet" href="plugins/bootstrap-material-design-master/dist/css/roboto.min.css" />
<link rel="stylesheet" href="plugins/font-awesome-4.3.0/css/font-awesome.min.css" />
<link rel="stylesheet" href="style/init.css" />
</head>
<body ng-app="myApp" ng-controller="globalCtrl">
<div ng-view></div>
</body>
<script src="plugins/angular/angular.min.js" ></script>
<script src="plugins/angular/angular-animate.min.js" ></script>
<script src="plugins/angular/angular-route.min.js" ></script>
<script src="plugins/angular/angular-aria.js" ></script>
<script src="plugins/jquery-1.11.3.min.js" ></script>
<script src="init.js" ></script>
<!-- Controllers -->
<script src="controllers/landingCtrl.js"></script>
<script src="controllers/globalCtrl.js"></script>
</html>
| test | integration with angularjs does not work currently integrating bootstrap material design with angularjs however there are no erros but most of the element does not work using angularjs please suggest material design | 1 |
25,570 | 4,162,244,745 | IssuesEvent | 2016-06-17 19:37:15 | grpc/grpc | https://api.github.com/repos/grpc/grpc | opened | Python tests should not complete with reference cycles | cleanup python test | According to @kpayson64 [in this comment](https://github.com/grpc/grpc/pull/6857#issuecomment-225091528) our unit tests currently have cyclic object graphs after testing completes. We should clean that up; while it's true that the garbage collector will get to it eventually, I doubt that we need it and it's just a weird platform-dependent obstacle right now. | 1.0 | Python tests should not complete with reference cycles - According to @kpayson64 [in this comment](https://github.com/grpc/grpc/pull/6857#issuecomment-225091528) our unit tests currently have cyclic object graphs after testing completes. We should clean that up; while it's true that the garbage collector will get to it eventually, I doubt that we need it and it's just a weird platform-dependent obstacle right now. | test | python tests should not complete with reference cycles according to our unit tests currently have cyclic object graphs after testing completes we should clean that up while it s true that the garbage collector will get to it eventually i doubt that we need it and it s just a weird platform dependent obstacle right now | 1 |
325,871 | 9,937,040,905 | IssuesEvent | 2019-07-02 20:45:36 | mlr-org/paradox | https://api.github.com/repos/mlr-org/paradox | closed | Function to print x values | Priority: Medium Type: Enhancement | Function values are stored in named lists.
To transform them to a single string you could use `paste(names(x), x, sep = "=" ,collapse=",")`
This is problematic for
* Long values
* x values that can not be transferred to a character. These should not exists, because complex types are just created by transformation. But we have a untyped param class.
* Real valued numbers with many decimal places.
because they can mess up the output.
So we want to shorten and format some of them.
Formatting and shortening should be configurable.
Each ParamNode should be able to transform a named list to a character.
I propose `Param(Set/Real/...)$value_to_string(x)`.
| 1.0 | Function to print x values - Function values are stored in named lists.
To transform them to a single string you could use `paste(names(x), x, sep = "=" ,collapse=",")`
This is problematic for
* Long values
* x values that can not be transferred to a character. These should not exists, because complex types are just created by transformation. But we have a untyped param class.
* Real valued numbers with many decimal places.
because they can mess up the output.
So we want to shorten and format some of them.
Formatting and shortening should be configurable.
Each ParamNode should be able to transform a named list to a character.
I propose `Param(Set/Real/...)$value_to_string(x)`.
| non_test | function to print x values function values are stored in named lists to transform them to a single string you could use paste names x x sep collapse this is problematic for long values x values that can not be transferred to a character these should not exists because complex types are just created by transformation but we have a untyped param class real valued numbers with many decimal places because they can mess up the output so we want to shorten and format some of them formatting and shortening should be configurable each paramnode should be able to transform a named list to a character i propose param set real value to string x | 0 |
10,951 | 3,151,671,629 | IssuesEvent | 2015-09-16 09:29:31 | python-hyper/hyper-h2 | https://api.github.com/repos/python-hyper/hyper-h2 | closed | Rewrite threaded integration tests using coroutines. | Enhancement Testing | Like all my best ideas, this one is actually @shazow's: why use threads and sockets when we can fake the whole lot out and use coroutines for the tests in `test_interacting_stacks.py`? From IRC:
```
[19:11:34] shazow: if i had a ~month of funded work, i'd love to rewrite urllib3's testing framework
[19:11:37] shazow: maybe collab with you on that
[19:12:18] shazow: ideally release it as a standalone thing that you can use to test any client/server protocol thing
[19:12:27] lukasa: Oh that'd be super cool.
[19:12:48] lukasa: I think I want to rewrite this without an actual socket, instead providing a 'socket-like' that is actually a queue
[19:13:01] lukasa: Because all this messing about with events is fundamentally about ensuring that I don't leave data in sockets
[19:13:09] lukasa: Or get stuck waiting for data that will never come
[19:13:26] shazow: yes, exactly
[19:13:34] shazow: a mock socket would be really powerful
[19:13:37] shazow: and mock async-ness
[19:13:48] lukasa: Such a neat idea
[19:13:55] lukasa: OH CRAP
[19:13:56] shazow: (rather than threads)
[19:13:58] lukasa: Why am I not doing this with coroutines?
[19:14:02] lukasa: Alright
[19:14:02] shazow: :)
[19:14:08] lukasa: Tearing this up and doing new stuff
``` | 1.0 | Rewrite threaded integration tests using coroutines. - Like all my best ideas, this one is actually @shazow's: why use threads and sockets when we can fake the whole lot out and use coroutines for the tests in `test_interacting_stacks.py`? From IRC:
```
[19:11:34] shazow: if i had a ~month of funded work, i'd love to rewrite urllib3's testing framework
[19:11:37] shazow: maybe collab with you on that
[19:12:18] shazow: ideally release it as a standalone thing that you can use to test any client/server protocol thing
[19:12:27] lukasa: Oh that'd be super cool.
[19:12:48] lukasa: I think I want to rewrite this without an actual socket, instead providing a 'socket-like' that is actually a queue
[19:13:01] lukasa: Because all this messing about with events is fundamentally about ensuring that I don't leave data in sockets
[19:13:09] lukasa: Or get stuck waiting for data that will never come
[19:13:26] shazow: yes, exactly
[19:13:34] shazow: a mock socket would be really powerful
[19:13:37] shazow: and mock async-ness
[19:13:48] lukasa: Such a neat idea
[19:13:55] lukasa: OH CRAP
[19:13:56] shazow: (rather than threads)
[19:13:58] lukasa: Why am I not doing this with coroutines?
[19:14:02] lukasa: Alright
[19:14:02] shazow: :)
[19:14:08] lukasa: Tearing this up and doing new stuff
``` | test | rewrite threaded integration tests using coroutines like all my best ideas this one is actually shazow s why use threads and sockets when we can fake the whole lot out and use coroutines for the tests in test interacting stacks py from irc shazow if i had a month of funded work i d love to rewrite s testing framework shazow maybe collab with you on that shazow ideally release it as a standalone thing that you can use to test any client server protocol thing lukasa oh that d be super cool lukasa i think i want to rewrite this without an actual socket instead providing a socket like that is actually a queue lukasa because all this messing about with events is fundamentally about ensuring that i don t leave data in sockets lukasa or get stuck waiting for data that will never come shazow yes exactly shazow a mock socket would be really powerful shazow and mock async ness lukasa such a neat idea lukasa oh crap shazow rather than threads lukasa why am i not doing this with coroutines lukasa alright shazow lukasa tearing this up and doing new stuff | 1 |
515,319 | 14,959,943,863 | IssuesEvent | 2021-01-27 04:32:12 | cybercongress/go-cyber | https://api.github.com/repos/cybercongress/go-cyber | opened | Bank proxy panic on creation of Community Spend Proposal to new account | Priority: Critical Status: In Progress Type: Bug | During proposal creation gov module executes proposal's content in cache-wrapped context
```
cacheCtx, _ := ctx.CacheContext()
handler := keeper.router.GetRoute(content.ProposalRoute())
if err := handler(cacheCtx, content); err != nil {
return types.Proposal{}, sdkerrors.Wrap(types.ErrInvalidProposalContent, err.Error())
}
```
This not cause state change but triggers hook with SendCoins in our bank's proxy module to index given address (in this case community pool spend proposal recipient).
If address not already exist this will follow to panic on nil pointer on GetAccountNumber
https://github.com/cybercongress/go-cyber/tree/master/x/bank/internal/keeper/index.go#L91
```
accNum := cbd.AccNumber(s.accountKeeper.GetAccount(ctx, addr).GetAccountNumber())
```
| 1.0 | Bank proxy panic on creation of Community Spend Proposal to new account - During proposal creation gov module executes proposal's content in cache-wrapped context
```
cacheCtx, _ := ctx.CacheContext()
handler := keeper.router.GetRoute(content.ProposalRoute())
if err := handler(cacheCtx, content); err != nil {
return types.Proposal{}, sdkerrors.Wrap(types.ErrInvalidProposalContent, err.Error())
}
```
This not cause state change but triggers hook with SendCoins in our bank's proxy module to index given address (in this case community pool spend proposal recipient).
If address not already exist this will follow to panic on nil pointer on GetAccountNumber
https://github.com/cybercongress/go-cyber/tree/master/x/bank/internal/keeper/index.go#L91
```
accNum := cbd.AccNumber(s.accountKeeper.GetAccount(ctx, addr).GetAccountNumber())
```
| non_test | bank proxy panic on creation of community spend proposal to new account during proposal creation gov module executes proposal s content in cache wrapped context cachectx ctx cachecontext handler keeper router getroute content proposalroute if err handler cachectx content err nil return types proposal sdkerrors wrap types errinvalidproposalcontent err error this not cause state change but triggers hook with sendcoins in our bank s proxy module to index given address in this case community pool spend proposal recipient if address not already exist this will follow to panic on nil pointer on getaccountnumber accnum cbd accnumber s accountkeeper getaccount ctx addr getaccountnumber | 0 |
27,399 | 4,309,315,022 | IssuesEvent | 2016-07-21 15:38:00 | weaveworks/microservices-demo | https://api.github.com/repos/weaveworks/microservices-demo | opened | Add container level tests for each service | enhancement testing | Write bash scripts in the `/testing/container` folder to run container level tests. These scripts should only use docker, not an orchestration framework. Preferably, the container under test should use mocked out containers for dependencies. | 1.0 | Add container level tests for each service - Write bash scripts in the `/testing/container` folder to run container level tests. These scripts should only use docker, not an orchestration framework. Preferably, the container under test should use mocked out containers for dependencies. | test | add container level tests for each service write bash scripts in the testing container folder to run container level tests these scripts should only use docker not an orchestration framework preferably the container under test should use mocked out containers for dependencies | 1 |
246,080 | 7,893,151,104 | IssuesEvent | 2018-06-28 17:05:32 | visit-dav/issues-test | https://api.github.com/repos/visit-dav/issues-test | closed | Data level comparison wizard creates unparseable expression | Bug Likelihood: 3 - Occasional OS: All Priority: High Severity: 4 - Crash / Wrong Results Support Group: Any version: 2.7.1 | Open Wave.visit
Open data level comparison wizard.
Choose 'Between different time slices on same mesh'
select pressure as donor field
Use Absolute time with index 0
Conn_cmfe
simply place it on target mesh for later use
After clicking finish, open the expressions window and look at the created expression:
conn_cmfe(<<[0]i:pressure>>, <quadmesh>)
The expression parser will complain about the double "<<"
This may be related to #1640, though I couldn't tell for certain from that ticket's description.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. The following information
could not be accurately captured in the new ticket:
Original author: Kathleen Biagas
Original creation: 05/01/2014 12:19 pm
Original update: 05/16/2014 01:05 pm
Ticket number: 1828 | 1.0 | Data level comparison wizard creates unparseable expression - Open Wave.visit
Open data level comparison wizard.
Choose 'Between different time slices on same mesh'
select pressure as donor field
Use Absolute time with index 0
Conn_cmfe
simply place it on target mesh for later use
After clicking finish, open the expressions window and look at the created expression:
conn_cmfe(<<[0]i:pressure>>, <quadmesh>)
The expression parser will complain about the double "<<"
This may be related to #1640, though I couldn't tell for certain from that ticket's description.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. The following information
could not be accurately captured in the new ticket:
Original author: Kathleen Biagas
Original creation: 05/01/2014 12:19 pm
Original update: 05/16/2014 01:05 pm
Ticket number: 1828 | non_test | data level comparison wizard creates unparseable expression open wave visit open data level comparison wizard choose between different time slices on same mesh select pressure as donor field use absolute time with index conn cmfe simply place it on target mesh for later use after clicking finish open the expressions window and look at the created expression conn cmfe the expression parser will complain about the double this may be related to though i couldn t tell for certain from that ticket s description redmine migration this ticket was migrated from redmine the following information could not be accurately captured in the new ticket original author kathleen biagas original creation pm original update pm ticket number | 0 |
303,445 | 9,307,320,935 | IssuesEvent | 2019-03-25 11:58:52 | saveriomiroddi/geet | https://api.github.com/repos/saveriomiroddi/geet | closed | Merging a PR sometimes finds multiple PRs for the current branch | bug top_priority | In some cases, `pr merge` finds multiple PRs for the current branch.
This is causes by the `<owner>:` prefix missing from the `head` parameter. | 1.0 | Merging a PR sometimes finds multiple PRs for the current branch - In some cases, `pr merge` finds multiple PRs for the current branch.
This is causes by the `<owner>:` prefix missing from the `head` parameter. | non_test | merging a pr sometimes finds multiple prs for the current branch in some cases pr merge finds multiple prs for the current branch this is causes by the prefix missing from the head parameter | 0 |
44,351 | 23,595,403,336 | IssuesEvent | 2022-08-23 18:44:45 | timescale/promscale_extension | https://api.github.com/repos/timescale/promscale_extension | opened | Improve performance of set_metric_retention_period | Performance | This issue was reported by a customer and investigated by @jgpruitt
Create 2000 metrics
```
select format($$select _prom_catalog.get_or_create_metric_table_name('foo%s')$$, x) from generate_series(1, 2000) x
\gexec
```
Attempting to set the metric retention period in this manner works:
```
select prom_api.set_metric_retention_period(metric_name, INTERVAL '2h')
from _prom_catalog.metric
;
```
Now, create 2000 continuous aggregates on top of the metrics:
```
select format(
$$
create materialized view %s_daily with (timescaledb.continuous) as
select
time_bucket('1 day', "time") AS day,
series_id,
avg(value) as avg_val,
min(value) as min_val,
max(value) as max_val
from prom_data.%s
group by 1, 2
$$, metric_name, metric_name)
from _prom_catalog.metric
;
```
Attempting to set the metric retention period as before will now take FOREVER:
```
select prom_api.set_metric_retention_period(metric_name, INTERVAL '2h')
from _prom_catalog.metric
;
```
Using `\gexec` works, though:
```
select format($$select prom_api.set_metric_retention_period('foo%s', interval '1h')$$, x) from generate_series(1, 2000) x
\gexec
```
`_prom_catalog.get_cagg_info` is called by `prom_api.set_metric_retention_period` and appears to be considerably slower in this case. `_prom_catalog.get_cagg_info` in turn calls `_prom_catalog.get_first_level_view_on_metric` which I believe may be the source of the issue.
| True | Improve performance of set_metric_retention_period - This issue was reported by a customer and investigated by @jgpruitt
Create 2000 metrics
```
select format($$select _prom_catalog.get_or_create_metric_table_name('foo%s')$$, x) from generate_series(1, 2000) x
\gexec
```
Attempting to set the metric retention period in this manner works:
```
select prom_api.set_metric_retention_period(metric_name, INTERVAL '2h')
from _prom_catalog.metric
;
```
Now, create 2000 continuous aggregates on top of the metrics:
```
select format(
$$
create materialized view %s_daily with (timescaledb.continuous) as
select
time_bucket('1 day', "time") AS day,
series_id,
avg(value) as avg_val,
min(value) as min_val,
max(value) as max_val
from prom_data.%s
group by 1, 2
$$, metric_name, metric_name)
from _prom_catalog.metric
;
```
Attempting to set the metric retention period as before will now take FOREVER:
```
select prom_api.set_metric_retention_period(metric_name, INTERVAL '2h')
from _prom_catalog.metric
;
```
Using `\gexec` works, though:
```
select format($$select prom_api.set_metric_retention_period('foo%s', interval '1h')$$, x) from generate_series(1, 2000) x
\gexec
```
`_prom_catalog.get_cagg_info` is called by `prom_api.set_metric_retention_period` and appears to be considerably slower in this case. `_prom_catalog.get_cagg_info` in turn calls `_prom_catalog.get_first_level_view_on_metric` which I believe may be the source of the issue.
| non_test | improve performance of set metric retention period this issue was reported by a customer and investigated by jgpruitt create metrics select format select prom catalog get or create metric table name foo s x from generate series x gexec attempting to set the metric retention period in this manner works select prom api set metric retention period metric name interval from prom catalog metric now create continuous aggregates on top of the metrics select format create materialized view s daily with timescaledb continuous as select time bucket day time as day series id avg value as avg val min value as min val max value as max val from prom data s group by metric name metric name from prom catalog metric attempting to set the metric retention period as before will now take forever select prom api set metric retention period metric name interval from prom catalog metric using gexec works though select format select prom api set metric retention period foo s interval x from generate series x gexec prom catalog get cagg info is called by prom api set metric retention period and appears to be considerably slower in this case prom catalog get cagg info in turn calls prom catalog get first level view on metric which i believe may be the source of the issue | 0 |
276,388 | 23,990,848,439 | IssuesEvent | 2022-09-14 00:50:38 | NVIDIA/spark-rapids | https://api.github.com/repos/NVIDIA/spark-rapids | closed | Refactor spark-tests script to follow PYSP_TEST configs pattern | test task | Iceberg pytests are not scaled out using xdist because they have to use SPARK_SUBMIT_FLAGS as a workaround for #6351. With this being resolved we can rewrite the config as PYSP_TEST env and run Iceberg with TEST_PARALLEL >= 2 | 1.0 | Refactor spark-tests script to follow PYSP_TEST configs pattern - Iceberg pytests are not scaled out using xdist because they have to use SPARK_SUBMIT_FLAGS as a workaround for #6351. With this being resolved we can rewrite the config as PYSP_TEST env and run Iceberg with TEST_PARALLEL >= 2 | test | refactor spark tests script to follow pysp test configs pattern iceberg pytests are not scaled out using xdist because they have to use spark submit flags as a workaround for with this being resolved we can rewrite the config as pysp test env and run iceberg with test parallel | 1 |
185,461 | 14,354,772,154 | IssuesEvent | 2020-11-30 09:09:49 | reportportal/reportportal | https://api.github.com/repos/reportportal/reportportal | closed | [v5] Data retention does not work very well | Check: Test | **Describe the bug**
The data retention does not work very well.
- You cannot set unlimited retention via UI (API complains that 0 is unsupported value)
- Dubious default values
- in the `application.yaml` sets the initial delay of all Cleanup jobs to 7Days, that in practice means that the process has to survive 7D in order to launch the job. The counter is reset each time the process is restarted which in a dynamic environment like K8s could mean *never* in practice.
- Another weird default setting is that the period is set to 7 and 14D respectively. Cleaning up launches once 14 days could mean you need to go through thousands of launches.
- Together with another issue where the whole <Whatever>CleanupJob run is run in a single large transaction e.g. `com.epam.ta.reportportal.job.service.impl.LaunchCleanerServiceImpl#cleanOutdatedLaunches` leads to the situation when the cleanup could take days generating tons of temporary data in the Postgres (due to being in a single large transaction and Postgres having to create duplicates to ensure transaction isolation)
**To Reproduce**
**Expected behaviour**
* Unlimited retention could be set
* Sane default values built-in
* Not using single huge transactions to cleanup all launches etc... at once so there is a chance to finish the Job
**Additional context**
* We run reportportal in K8s v5.3.2
| 1.0 | [v5] Data retention does not work very well - **Describe the bug**
The data retention does not work very well.
- You cannot set unlimited retention via UI (API complains that 0 is unsupported value)
- Dubious default values
- in the `application.yaml` sets the initial delay of all Cleanup jobs to 7Days, that in practice means that the process has to survive 7D in order to launch the job. The counter is reset each time the process is restarted which in a dynamic environment like K8s could mean *never* in practice.
- Another weird default setting is that the period is set to 7 and 14D respectively. Cleaning up launches once 14 days could mean you need to go through thousands of launches.
- Together with another issue where the whole <Whatever>CleanupJob run is run in a single large transaction e.g. `com.epam.ta.reportportal.job.service.impl.LaunchCleanerServiceImpl#cleanOutdatedLaunches` leads to the situation when the cleanup could take days generating tons of temporary data in the Postgres (due to being in a single large transaction and Postgres having to create duplicates to ensure transaction isolation)
**To Reproduce**
**Expected behaviour**
* Unlimited retention could be set
* Sane default values built-in
* Not using single huge transactions to cleanup all launches etc... at once so there is a chance to finish the Job
**Additional context**
* We run reportportal in K8s v5.3.2
| test | data retention does not work very well describe the bug the data retention does not work very well you cannot set unlimited retention via ui api complains that is unsupported value dubious default values in the application yaml sets the initial delay of all cleanup jobs to that in practice means that the process has to survive in order to launch the job the counter is reset each time the process is restarted which in a dynamic environment like could mean never in practice another weird default setting is that the period is set to and respectively cleaning up launches once days could mean you need to go through thousands of launches together with another issue where the whole cleanupjob run is run in a single large transaction e g com epam ta reportportal job service impl launchcleanerserviceimpl cleanoutdatedlaunches leads to the situation when the cleanup could take days generating tons of temporary data in the postgres due to being in a single large transaction and postgres having to create duplicates to ensure transaction isolation to reproduce expected behaviour unlimited retention could be set sane default values built in not using single huge transactions to cleanup all launches etc at once so there is a chance to finish the job additional context we run reportportal in | 1 |
141,957 | 19,010,992,447 | IssuesEvent | 2021-11-23 09:17:57 | AlexRogalskiy/github-action-branch-mapper | https://api.github.com/repos/AlexRogalskiy/github-action-branch-mapper | opened | CVE-2021-3918 (High) detected in json-schema-0.2.3.tgz | security vulnerability | ## CVE-2021-3918 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>json-schema-0.2.3.tgz</b></p></summary>
<p>JSON Schema validation and specifications</p>
<p>Library home page: <a href="https://registry.npmjs.org/json-schema/-/json-schema-0.2.3.tgz">https://registry.npmjs.org/json-schema/-/json-schema-0.2.3.tgz</a></p>
<p>Path to dependency file: github-action-branch-mapper/package.json</p>
<p>Path to vulnerable library: github-action-branch-mapper/node_modules/npm/node_modules/json-schema/package.json,github-action-branch-mapper/node_modules/json-schema/package.json</p>
<p>
Dependency Hierarchy:
- jest-27.0.0-next.2.tgz (Root Library)
- jest-cli-27.0.0-next.2.tgz
- jest-config-27.0.0-next.2.tgz
- jest-environment-jsdom-27.0.0-next.1.tgz
- jsdom-16.4.0.tgz
- request-2.88.2.tgz
- http-signature-1.2.0.tgz
- jsprim-1.4.1.tgz
- :x: **json-schema-0.2.3.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/github-action-branch-mapper/commit/34eac7785a4f52bb3ce2a9fb659eb28fe2361495">34eac7785a4f52bb3ce2a9fb659eb28fe2361495</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
json-schema is vulnerable to Improperly Controlled Modification of Object Prototype Attributes ('Prototype Pollution')
<p>Publish Date: 2021-11-13
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3918>CVE-2021-3918</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-3918">https://nvd.nist.gov/vuln/detail/CVE-2021-3918</a></p>
<p>Release Date: 2021-11-13</p>
<p>Fix Resolution: json-schema - 0.4.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-3918 (High) detected in json-schema-0.2.3.tgz - ## CVE-2021-3918 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>json-schema-0.2.3.tgz</b></p></summary>
<p>JSON Schema validation and specifications</p>
<p>Library home page: <a href="https://registry.npmjs.org/json-schema/-/json-schema-0.2.3.tgz">https://registry.npmjs.org/json-schema/-/json-schema-0.2.3.tgz</a></p>
<p>Path to dependency file: github-action-branch-mapper/package.json</p>
<p>Path to vulnerable library: github-action-branch-mapper/node_modules/npm/node_modules/json-schema/package.json,github-action-branch-mapper/node_modules/json-schema/package.json</p>
<p>
Dependency Hierarchy:
- jest-27.0.0-next.2.tgz (Root Library)
- jest-cli-27.0.0-next.2.tgz
- jest-config-27.0.0-next.2.tgz
- jest-environment-jsdom-27.0.0-next.1.tgz
- jsdom-16.4.0.tgz
- request-2.88.2.tgz
- http-signature-1.2.0.tgz
- jsprim-1.4.1.tgz
- :x: **json-schema-0.2.3.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/github-action-branch-mapper/commit/34eac7785a4f52bb3ce2a9fb659eb28fe2361495">34eac7785a4f52bb3ce2a9fb659eb28fe2361495</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
json-schema is vulnerable to Improperly Controlled Modification of Object Prototype Attributes ('Prototype Pollution')
<p>Publish Date: 2021-11-13
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3918>CVE-2021-3918</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-3918">https://nvd.nist.gov/vuln/detail/CVE-2021-3918</a></p>
<p>Release Date: 2021-11-13</p>
<p>Fix Resolution: json-schema - 0.4.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_test | cve high detected in json schema tgz cve high severity vulnerability vulnerable library json schema tgz json schema validation and specifications library home page a href path to dependency file github action branch mapper package json path to vulnerable library github action branch mapper node modules npm node modules json schema package json github action branch mapper node modules json schema package json dependency hierarchy jest next tgz root library jest cli next tgz jest config next tgz jest environment jsdom next tgz jsdom tgz request tgz http signature tgz jsprim tgz x json schema tgz vulnerable library found in head commit a href vulnerability details json schema is vulnerable to improperly controlled modification of object prototype attributes prototype pollution publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution json schema step up your open source security game with whitesource | 0 |
227,891 | 25,131,203,318 | IssuesEvent | 2022-11-09 15:15:53 | samq-democorp/Umbraco-CMS | https://api.github.com/repos/samq-democorp/Umbraco-CMS | opened | system.data.sqlclient.4.8.2.nupkg: 1 vulnerabilities (highest severity is: 5.5) | security vulnerability | <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>system.data.sqlclient.4.8.2.nupkg</b></p></summary>
<p>Provides the data provider for SQL Server. These classes provide access to versions of SQL Server an...</p>
<p>Library home page: <a href="https://api.nuget.org/packages/system.data.sqlclient.4.8.2.nupkg">https://api.nuget.org/packages/system.data.sqlclient.4.8.2.nupkg</a></p>
<p>Path to dependency file: /legacy/Umbraco.Tests/Umbraco.Tests.csproj</p>
<p>Path to vulnerable library: /56_SXMOJW/dotnet_ZSMOHC/20221025165256/system.data.sqlclient/4.8.2/system.data.sqlclient.4.8.2.nupkg</p>
<p>
</details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (system.data.sqlclient.4.8.2.nupkg version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [WS-2022-0377](https://github.com/advisories/GHSA-8g2p-5pqh-5jmc) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.5 | system.data.sqlclient.4.8.2.nupkg | Direct | Microsoft.Data.SqlClient - 1.1.4,2.1.2;System.Data.SqlClient - 4.8.5 | ✅ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> WS-2022-0377</summary>
### Vulnerable Library - <b>system.data.sqlclient.4.8.2.nupkg</b></p>
<p>Provides the data provider for SQL Server. These classes provide access to versions of SQL Server an...</p>
<p>Library home page: <a href="https://api.nuget.org/packages/system.data.sqlclient.4.8.2.nupkg">https://api.nuget.org/packages/system.data.sqlclient.4.8.2.nupkg</a></p>
<p>Path to dependency file: /legacy/Umbraco.Tests/Umbraco.Tests.csproj</p>
<p>Path to vulnerable library: /56_SXMOJW/dotnet_ZSMOHC/20221025165256/system.data.sqlclient/4.8.2/system.data.sqlclient.4.8.2.nupkg</p>
<p>
Dependency Hierarchy:
- :x: **system.data.sqlclient.4.8.2.nupkg** (Vulnerable Library)
<p>Found in base branch: <b>v10/contrib</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Microsoft is releasing this security advisory to provide information about a vulnerability in .NET, .NET Core and .NET Framework's System.Data.SqlClient and Microsoft.Data.SqlClient NuGet Packages.
A vulnerability exists in System.Data.SqlClient and Microsoft.Data.SqlClient libraries where a timeout occurring under high load can cause incorrect data to be returned as the result of an asynchronously executed query.
<p>Publish Date: 2022-11-09
<p>URL: <a href=https://github.com/advisories/GHSA-8g2p-5pqh-5jmc>WS-2022-0377</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>5.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-8g2p-5pqh-5jmc">https://github.com/advisories/GHSA-8g2p-5pqh-5jmc</a></p>
<p>Release Date: 2022-11-09</p>
<p>Fix Resolution: Microsoft.Data.SqlClient - 1.1.4,2.1.2;System.Data.SqlClient - 4.8.5</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details>
***
<p>:rescue_worker_helmet: Automatic Remediation is available for this issue.</p> | True | system.data.sqlclient.4.8.2.nupkg: 1 vulnerabilities (highest severity is: 5.5) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>system.data.sqlclient.4.8.2.nupkg</b></p></summary>
<p>Provides the data provider for SQL Server. These classes provide access to versions of SQL Server an...</p>
<p>Library home page: <a href="https://api.nuget.org/packages/system.data.sqlclient.4.8.2.nupkg">https://api.nuget.org/packages/system.data.sqlclient.4.8.2.nupkg</a></p>
<p>Path to dependency file: /legacy/Umbraco.Tests/Umbraco.Tests.csproj</p>
<p>Path to vulnerable library: /56_SXMOJW/dotnet_ZSMOHC/20221025165256/system.data.sqlclient/4.8.2/system.data.sqlclient.4.8.2.nupkg</p>
<p>
</details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (system.data.sqlclient.4.8.2.nupkg version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [WS-2022-0377](https://github.com/advisories/GHSA-8g2p-5pqh-5jmc) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.5 | system.data.sqlclient.4.8.2.nupkg | Direct | Microsoft.Data.SqlClient - 1.1.4,2.1.2;System.Data.SqlClient - 4.8.5 | ✅ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> WS-2022-0377</summary>
### Vulnerable Library - <b>system.data.sqlclient.4.8.2.nupkg</b></p>
<p>Provides the data provider for SQL Server. These classes provide access to versions of SQL Server an...</p>
<p>Library home page: <a href="https://api.nuget.org/packages/system.data.sqlclient.4.8.2.nupkg">https://api.nuget.org/packages/system.data.sqlclient.4.8.2.nupkg</a></p>
<p>Path to dependency file: /legacy/Umbraco.Tests/Umbraco.Tests.csproj</p>
<p>Path to vulnerable library: /56_SXMOJW/dotnet_ZSMOHC/20221025165256/system.data.sqlclient/4.8.2/system.data.sqlclient.4.8.2.nupkg</p>
<p>
Dependency Hierarchy:
- :x: **system.data.sqlclient.4.8.2.nupkg** (Vulnerable Library)
<p>Found in base branch: <b>v10/contrib</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Microsoft is releasing this security advisory to provide information about a vulnerability in .NET, .NET Core and .NET Framework's System.Data.SqlClient and Microsoft.Data.SqlClient NuGet Packages.
A vulnerability exists in System.Data.SqlClient and Microsoft.Data.SqlClient libraries where a timeout occurring under high load can cause incorrect data to be returned as the result of an asynchronously executed query.
<p>Publish Date: 2022-11-09
<p>URL: <a href=https://github.com/advisories/GHSA-8g2p-5pqh-5jmc>WS-2022-0377</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>5.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-8g2p-5pqh-5jmc">https://github.com/advisories/GHSA-8g2p-5pqh-5jmc</a></p>
<p>Release Date: 2022-11-09</p>
<p>Fix Resolution: Microsoft.Data.SqlClient - 1.1.4,2.1.2;System.Data.SqlClient - 4.8.5</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details>
***
<p>:rescue_worker_helmet: Automatic Remediation is available for this issue.</p> | non_test | system data sqlclient nupkg vulnerabilities highest severity is vulnerable library system data sqlclient nupkg provides the data provider for sql server these classes provide access to versions of sql server an library home page a href path to dependency file legacy umbraco tests umbraco tests csproj path to vulnerable library sxmojw dotnet zsmohc system data sqlclient system data sqlclient nupkg vulnerabilities cve severity cvss dependency type fixed in system data sqlclient nupkg version remediation available medium system data sqlclient nupkg direct microsoft data sqlclient system data sqlclient details ws vulnerable library system data sqlclient nupkg provides the data provider for sql server these classes provide access to versions of sql server an library home page a href path to dependency file legacy umbraco tests umbraco tests csproj path to vulnerable library sxmojw dotnet zsmohc system data sqlclient system data sqlclient nupkg dependency hierarchy x system data sqlclient nupkg vulnerable library found in base branch contrib vulnerability details microsoft is releasing this security advisory to provide information about a vulnerability in net net core and net framework s system data sqlclient and microsoft data sqlclient nuget packages a vulnerability exists in system data sqlclient and microsoft data sqlclient libraries where a timeout occurring under high load can cause incorrect data to be returned as the result of an asynchronously executed query publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution microsoft data 
sqlclient system data sqlclient rescue worker helmet automatic remediation is available for this issue rescue worker helmet automatic remediation is available for this issue | 0 |
30,198 | 11,801,266,350 | IssuesEvent | 2020-03-18 19:06:33 | jgeraigery/blueocean-environments | https://api.github.com/repos/jgeraigery/blueocean-environments | opened | CVE-2019-11358 (Medium) detected in jquery-3.3.1.tgz, jquery-2.1.0-beta2.js | security vulnerability | ## CVE-2019-11358 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jquery-3.3.1.tgz</b>, <b>jquery-2.1.0-beta2.js</b></p></summary>
<p>
<details><summary><b>jquery-3.3.1.tgz</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://registry.npmjs.org/jquery/-/jquery-3.3.1.tgz">https://registry.npmjs.org/jquery/-/jquery-3.3.1.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/blueocean-environments/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/blueocean-environments/node_modules/jquery/package.json</p>
<p>
Dependency Hierarchy:
- design-language-0.0.162.tgz (Root Library)
- linkifyjs-2.1.4.tgz
- :x: **jquery-3.3.1.tgz** (Vulnerable Library)
</details>
<details><summary><b>jquery-2.1.0-beta2.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.0-beta2/jquery.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.0-beta2/jquery.js</a></p>
<p>Path to dependency file: /tmp/ws-scm/blueocean-environments/node_modules/@jenkins-cd/design-language/node_modules/moment-duration-format/test/test.html</p>
<p>Path to vulnerable library: /blueocean-environments/node_modules/@jenkins-cd/design-language/node_modules/moment-duration-format/test/vendor/jquery.js,/blueocean-environments/node_modules/moment-duration-format/test/vendor/jquery.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-2.1.0-beta2.js** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/jgeraigery/blueocean-environments/commit/906df1e2c2b1353a7f809ef51960f8980d6cec13">906df1e2c2b1353a7f809ef51960f8980d6cec13</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jQuery before 3.4.0, as used in Drupal, Backdrop CMS, and other products, mishandles jQuery.extend(true, {}, ...) because of Object.prototype pollution. If an unsanitized source object contained an enumerable __proto__ property, it could extend the native Object.prototype.
<p>Publish Date: 2019-04-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-11358>CVE-2019-11358</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11358">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11358</a></p>
<p>Release Date: 2019-04-20</p>
<p>Fix Resolution: 3.4.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"jquery","packageVersion":"3.3.1","isTransitiveDependency":true,"dependencyTree":"@jenkins-cd/design-language:0.0.162;linkifyjs:2.1.4;jquery:3.3.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"3.4.0"},{"packageType":"JavaScript","packageName":"jquery","packageVersion":"2.1.0-beta2","isTransitiveDependency":false,"dependencyTree":"jquery:2.1.0-beta2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"3.4.0"}],"vulnerabilityIdentifier":"CVE-2019-11358","vulnerabilityDetails":"jQuery before 3.4.0, as used in Drupal, Backdrop CMS, and other products, mishandles jQuery.extend(true, {}, ...) because of Object.prototype pollution. If an unsanitized source object contained an enumerable __proto__ property, it could extend the native Object.prototype.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-11358","cvss3Severity":"medium","cvss3Score":"6.1","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> --> | True | CVE-2019-11358 (Medium) detected in jquery-3.3.1.tgz, jquery-2.1.0-beta2.js - ## CVE-2019-11358 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jquery-3.3.1.tgz</b>, <b>jquery-2.1.0-beta2.js</b></p></summary>
<p>
<details><summary><b>jquery-3.3.1.tgz</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://registry.npmjs.org/jquery/-/jquery-3.3.1.tgz">https://registry.npmjs.org/jquery/-/jquery-3.3.1.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/blueocean-environments/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/blueocean-environments/node_modules/jquery/package.json</p>
<p>
Dependency Hierarchy:
- design-language-0.0.162.tgz (Root Library)
- linkifyjs-2.1.4.tgz
- :x: **jquery-3.3.1.tgz** (Vulnerable Library)
</details>
<details><summary><b>jquery-2.1.0-beta2.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.0-beta2/jquery.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.0-beta2/jquery.js</a></p>
<p>Path to dependency file: /tmp/ws-scm/blueocean-environments/node_modules/@jenkins-cd/design-language/node_modules/moment-duration-format/test/test.html</p>
<p>Path to vulnerable library: /blueocean-environments/node_modules/@jenkins-cd/design-language/node_modules/moment-duration-format/test/vendor/jquery.js,/blueocean-environments/node_modules/moment-duration-format/test/vendor/jquery.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-2.1.0-beta2.js** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/jgeraigery/blueocean-environments/commit/906df1e2c2b1353a7f809ef51960f8980d6cec13">906df1e2c2b1353a7f809ef51960f8980d6cec13</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jQuery before 3.4.0, as used in Drupal, Backdrop CMS, and other products, mishandles jQuery.extend(true, {}, ...) because of Object.prototype pollution. If an unsanitized source object contained an enumerable __proto__ property, it could extend the native Object.prototype.
<p>Publish Date: 2019-04-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-11358>CVE-2019-11358</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11358">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11358</a></p>
<p>Release Date: 2019-04-20</p>
<p>Fix Resolution: 3.4.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"jquery","packageVersion":"3.3.1","isTransitiveDependency":true,"dependencyTree":"@jenkins-cd/design-language:0.0.162;linkifyjs:2.1.4;jquery:3.3.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"3.4.0"},{"packageType":"JavaScript","packageName":"jquery","packageVersion":"2.1.0-beta2","isTransitiveDependency":false,"dependencyTree":"jquery:2.1.0-beta2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"3.4.0"}],"vulnerabilityIdentifier":"CVE-2019-11358","vulnerabilityDetails":"jQuery before 3.4.0, as used in Drupal, Backdrop CMS, and other products, mishandles jQuery.extend(true, {}, ...) because of Object.prototype pollution. If an unsanitized source object contained an enumerable __proto__ property, it could extend the native Object.prototype.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-11358","cvss3Severity":"medium","cvss3Score":"6.1","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> --> | non_test | cve medium detected in jquery tgz jquery js cve medium severity vulnerability vulnerable libraries jquery tgz jquery js jquery tgz javascript library for dom operations library home page a href path to dependency file tmp ws scm blueocean environments package json path to vulnerable library tmp ws scm blueocean environments node modules jquery package json dependency hierarchy design language tgz root library linkifyjs tgz x jquery tgz vulnerable library jquery js javascript library for dom operations library home page a href path to dependency file tmp ws scm blueocean environments node modules jenkins cd design language node modules moment duration format test test html path to vulnerable library blueocean environments node modules jenkins cd design language node modules moment duration format test vendor jquery js blueocean environments node modules moment duration format test vendor jquery js dependency hierarchy x jquery js vulnerable library found in head commit a href vulnerability details jquery before as used in drupal backdrop cms and other products mishandles jquery extend true because of object prototype pollution if an unsanitized source object contained an enumerable proto property it could extend the native object prototype publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability false ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails jquery before as used in drupal backdrop cms and other products mishandles jquery extend true because of object prototype pollution if an unsanitized source object contained an enumerable proto property it could extend the native object prototype vulnerabilityurl | 0 |
261,629 | 27,809,823,566 | IssuesEvent | 2023-03-18 01:50:38 | madhans23/linux-4.1.15 | https://api.github.com/repos/madhans23/linux-4.1.15 | closed | CVE-2016-5829 (High) detected in linux-stable-rtv4.1.33 - autoclosed | Mend: dependency security vulnerability | ## CVE-2016-5829 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/madhans23/linux-4.1.15/commit/f9d19044b0eef1965f9bc412d7d9e579b74ec968">f9d19044b0eef1965f9bc412d7d9e579b74ec968</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/hid/usbhid/hiddev.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/hid/usbhid/hiddev.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Multiple heap-based buffer overflows in the hiddev_ioctl_usage function in drivers/hid/usbhid/hiddev.c in the Linux kernel through 4.6.3 allow local users to cause a denial of service or possibly have unspecified other impact via a crafted (1) HIDIOCGUSAGES or (2) HIDIOCSUSAGES ioctl call.
<p>Publish Date: 2016-06-27
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2016-5829>CVE-2016-5829</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2016-5829">https://www.linuxkernelcves.com/cves/CVE-2016-5829</a></p>
<p>Release Date: 2016-06-27</p>
<p>Fix Resolution: v4.7-rc5,v3.12.62,v3.14.74,v3.16.37,v3.18.37,v3.2.82,v4.1.28,v4.4.16,v4.6.5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2016-5829 (High) detected in linux-stable-rtv4.1.33 - autoclosed - ## CVE-2016-5829 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/madhans23/linux-4.1.15/commit/f9d19044b0eef1965f9bc412d7d9e579b74ec968">f9d19044b0eef1965f9bc412d7d9e579b74ec968</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/hid/usbhid/hiddev.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/hid/usbhid/hiddev.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Multiple heap-based buffer overflows in the hiddev_ioctl_usage function in drivers/hid/usbhid/hiddev.c in the Linux kernel through 4.6.3 allow local users to cause a denial of service or possibly have unspecified other impact via a crafted (1) HIDIOCGUSAGES or (2) HIDIOCSUSAGES ioctl call.
<p>Publish Date: 2016-06-27
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2016-5829>CVE-2016-5829</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2016-5829">https://www.linuxkernelcves.com/cves/CVE-2016-5829</a></p>
<p>Release Date: 2016-06-27</p>
<p>Fix Resolution: v4.7-rc5,v3.12.62,v3.14.74,v3.16.37,v3.18.37,v3.2.82,v4.1.28,v4.4.16,v4.6.5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_test | cve high detected in linux stable autoclosed cve high severity vulnerability vulnerable library linux stable julia cartwright s fork of linux stable rt git library home page a href found in head commit a href found in base branch master vulnerable source files drivers hid usbhid hiddev c drivers hid usbhid hiddev c vulnerability details multiple heap based buffer overflows in the hiddev ioctl usage function in drivers hid usbhid hiddev c in the linux kernel through allow local users to cause a denial of service or possibly have unspecified other impact via a crafted hidiocgusages or hidiocsusages ioctl call publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend | 0 |
326,818 | 28,020,952,103 | IssuesEvent | 2023-03-28 05:19:57 | wpfoodmanager/wp-food-manager | https://api.github.com/repos/wpfoodmanager/wp-food-manager | closed | Admin : Food listing type and category selection issue. | In Testing Issue Resolved | **Issue :**
As you can see below screenshot there is no food on the menu but still, it's showing 1 in Vegetarian.
Show the select box with filter because when user deselect all option above filter shoud be clear.
<img width="1382" alt="image" src="https://user-images.githubusercontent.com/35419531/227777758-c6a97613-47e8-41de-982a-251f64ec4565.png">
| 1.0 | Admin : Food listing type and category selection issue. - **Issue :**
As you can see below screenshot there is no food on the menu but still, it's showing 1 in Vegetarian.
Show the select box with filter because when user deselect all option above filter shoud be clear.
<img width="1382" alt="image" src="https://user-images.githubusercontent.com/35419531/227777758-c6a97613-47e8-41de-982a-251f64ec4565.png">
| test | admin food listing type and category selection issue issue as you can see below screenshot there is no food on the menu but still it s showing in vegetarian show the select box with filter because when user deselect all option above filter shoud be clear img width alt image src | 1 |
301,240 | 26,028,436,656 | IssuesEvent | 2022-12-21 18:32:04 | FlowCrypt/flowcrypt-ios | https://api.github.com/repos/FlowCrypt/flowcrypt-ios | closed | Fix drafts test | actionable tests | `check drafts functionality` test occasionally fails on semaphoreci.
Needs to be fixed | 1.0 | Fix drafts test - `check drafts functionality` test occasionally fails on semaphoreci.
Needs to be fixed | test | fix drafts test check drafts functionality test occasionally fails on semaphoreci needs to be fixed | 1 |
322,446 | 27,606,002,052 | IssuesEvent | 2023-03-09 13:02:54 | Kotlin/kotlinx.coroutines | https://api.github.com/repos/Kotlin/kotlinx.coroutines | closed | kotlinx-coroutines-test TestMainDispatcher concurrent read/write is difficult to debug | test | ## Current Behavior (1.6.3)
The `TestMainDispatcher` is wrapped as a delegate within `NonConcurrentlyModifiable` which tests for concurrent writes while the dispatcher is being read or written. If concurrent writes are detected (which can happen via both the reader and the writer) then an exception is raised and thrown upon the next write.
The above logic results in the failure mode throwing an exception with a stack trace similar to the following (for the write detected while reading scenario):
```
Dispatchers.Main is used concurrently with setting it
java.lang.IllegalStateException: Dispatchers.Main is used concurrently with setting it
at kotlinx.coroutines.test.internal.TestMainDispatcher$NonConcurrentlyModifiable.concurrentRW(TestMainDispatcher.kt:70)
at kotlinx.coroutines.test.internal.TestMainDispatcher$NonConcurrentlyModifiable.setValue(TestMainDispatcher.kt:84)
at kotlinx.coroutines.test.internal.TestMainDispatcher.resetDispatcher(TestMainDispatcher.kt:40)
at kotlinx.coroutines.test.TestDispatchers.resetMain(TestDispatchers.kt:37)
at com.elided.TestCoroutineRule.after(TestCoroutineRule.kt:40)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:59)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
at org.mockito.internal.junit.JUnitSessionStore$1.evaluateSafely(JUnitSessionStore.java:55)
at org.mockito.internal.junit.JUnitSessionStore$1.evaluate(JUnitSessionStore.java:43)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
[ …elided for brevity… ]
```
Interpreting the above, the test had completed and the test dispatcher was being reset via test rule. While that was taking place, unidentified leaked async code was attempting to get the dispatcher. Not enough information is presented to identify the location of the problematic code - only that there is such code.
## Desired Behavior
It would be very helpful if the exception thrown included stack frame information for both actors in the problem space. i.e., both the reader+writer or the writer+writer. Doing so would expose the problematic code and simplify diagnosis. | 1.0 | kotlinx-coroutines-test TestMainDispatcher concurrent read/write is difficult to debug - ## Current Behavior (1.6.3)
The `TestMainDispatcher` is wrapped as a delegate within `NonConcurrentlyModifiable` which tests for concurrent writes while the dispatcher is being read or written. If concurrent writes are detected (which can happen via both the reader and the writer) then an exception is raised and thrown upon the next write.
The above logic results in the failure mode throwing an exception with a stack trace similar to the following (for the write detected while reading scenario):
```
Dispatchers.Main is used concurrently with setting it
java.lang.IllegalStateException: Dispatchers.Main is used concurrently with setting it
at kotlinx.coroutines.test.internal.TestMainDispatcher$NonConcurrentlyModifiable.concurrentRW(TestMainDispatcher.kt:70)
at kotlinx.coroutines.test.internal.TestMainDispatcher$NonConcurrentlyModifiable.setValue(TestMainDispatcher.kt:84)
at kotlinx.coroutines.test.internal.TestMainDispatcher.resetDispatcher(TestMainDispatcher.kt:40)
at kotlinx.coroutines.test.TestDispatchers.resetMain(TestDispatchers.kt:37)
at com.elided.TestCoroutineRule.after(TestCoroutineRule.kt:40)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:59)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
at org.mockito.internal.junit.JUnitSessionStore$1.evaluateSafely(JUnitSessionStore.java:55)
at org.mockito.internal.junit.JUnitSessionStore$1.evaluate(JUnitSessionStore.java:43)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
[ …elided for brevity… ]
```
Interpreting the above, the test had completed and the test dispatcher was being reset via test rule. While that was taking place, unidentified leaked async code was attempting to get the dispatcher. Not enough information is presented to identify the location of the problematic code - only that there is such code.
## Desired Behavior
It would be very helpful if the exception thrown included stack frame information for both actors in the problem space. i.e., both the reader+writer or the writer+writer. Doing so would expose the problematic code and simplify diagnosis. | test | kotlinx coroutines test testmaindispatcher concurrent read write is difficult to debug current behavior the testmaindispatcher is wrapped as a delegate within nonconcurrentlymodifiable which tests for concurrent writes while the dispatcher is being read or written if concurrent writes are detected which can happen via both the reader and the writer then an exception is raised and thrown upon the next write the above logic results in the failure mode throwing an exception with a stack trace similar to the following for the write detected while reading scenario dispatchers main is used concurrently with setting it java lang illegalstateexception dispatchers main is used concurrently with setting it at kotlinx coroutines test internal testmaindispatcher nonconcurrentlymodifiable concurrentrw testmaindispatcher kt at kotlinx coroutines test internal testmaindispatcher nonconcurrentlymodifiable setvalue testmaindispatcher kt at kotlinx coroutines test internal testmaindispatcher resetdispatcher testmaindispatcher kt at kotlinx coroutines test testdispatchers resetmain testdispatchers kt at com elided testcoroutinerule after testcoroutinerule kt at org junit rules externalresource evaluate externalresource java at org junit rules testwatcher evaluate testwatcher java at org mockito internal junit junitsessionstore evaluatesafely junitsessionstore java at org mockito internal junit junitsessionstore evaluate junitsessionstore java at org junit runners parentrunner evaluate parentrunner java at org junit runners evaluate java at org junit runners parentrunner runleaf parentrunner java at org junit runners runchild java at org junit runners runchild java at org junit runners parentrunner run parentrunner java at org junit runners 
parentrunner schedule parentrunner java at org junit runners parentrunner runchildren parentrunner java at org junit runners parentrunner access parentrunner java at org junit runners parentrunner evaluate parentrunner java at org junit runners parentrunner evaluate parentrunner java at org junit runners parentrunner run parentrunner java interpreting the above the test had completed and the test dispatcher was being reset via test rule while that was taking place unidentified leaked async code was attempting to get the dispatcher not enough information is presented to identify the location of the problematic code only that there is such code desired behavior it would be very helpful if the exception thrown included stack frame information for both actors in the problem space i e both the reader writer or the writer writer doing so would expose the problematic code and simplify diagnosis | 1 |
220,617 | 17,211,599,254 | IssuesEvent | 2021-07-19 05:46:50 | pytorch/pytorch | https://api.github.com/repos/pytorch/pytorch | opened | Improved OpInfo dtype testing | module: tests triaged | Currently OpInfos must specify which dtypes they support and have a test validating they do so correctly:
https://github.com/pytorch/pytorch/blob/593e8f41cae82a279a323510dfcf1da6466ad5c9/test/test_ops.py#L60
There are a couple improvements we could make to this test:
- It should report all changes for both forward and backward
- It should test all sample inputs and show how many failed, if any, so engineers can understand if an operator has partial support for a dtype | 1.0 | Improved OpInfo dtype testing - Currently OpInfos must specify which dtypes they support and have a test validating they do so correctly:
https://github.com/pytorch/pytorch/blob/593e8f41cae82a279a323510dfcf1da6466ad5c9/test/test_ops.py#L60
There are a couple improvements we could make to this test:
- It should report all changes for both forward and backward
- It should test all sample inputs and show how many failed, if any, so engineers can understand if an operator has partial support for a dtype | test | improved opinfo dtype testing currently opinfos must specify which dtypes they support and have a test validating they do so correctly there are a couple improvements we could make to this test it should report all changes for both forward and backward it should test all sample inputs and show how many failed if any so engineers can understand if an operator has partial support for a dtype | 1 |
207,907 | 15,858,687,769 | IssuesEvent | 2021-04-08 07:04:04 | keystonejs/keystone | https://api.github.com/repos/keystonejs/keystone | closed | Run Cypress tests against Firefox & Edge | tests | [Cypress now supports Firefox & Edge](https://cypress.io/blog/2020/02/06/introducing-firefox-and-edge-support-in-cypress-4-0).
We should run our Cypress tests against FF & Edge.
There's [a Cypress CircleCI Orb](https://circleci.com/orbs/registry/orb/cypress-io/cypress#executors-browsers-chrome73-ff68) which might make this easier, but I'm not sure. | 1.0 | Run Cypress tests against Firefox & Edge - [Cypress now supports Firefox & Edge](https://cypress.io/blog/2020/02/06/introducing-firefox-and-edge-support-in-cypress-4-0).
We should run our Cypress tests against FF & Edge.
There's [a Cypress CircleCI Orb](https://circleci.com/orbs/registry/orb/cypress-io/cypress#executors-browsers-chrome73-ff68) which might make this easier, but I'm not sure. | test | run cypress tests against firefox edge we should run our cypress tests against ff edge there s which might make this easier but i m not sure | 1 |
328,059 | 28,100,008,265 | IssuesEvent | 2023-03-30 18:41:43 | pytorch/pytorch | https://api.github.com/repos/pytorch/pytorch | closed | DISABLED test_variant_consistency_jit_linalg_eigvals_cuda_complex64 (__main__.TestJitCUDA) | triaged module: flaky-tests skipped module: unknown | Platforms: win, windows
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_variant_consistency_jit_linalg_eigvals_cuda_complex64&suite=TestJitCUDA) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/12020727715).
Over the past 3 hours, it has been determined flaky in 2 workflow(s) with 2 failures and 2 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_variant_consistency_jit_linalg_eigvals_cuda_complex64`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_ops_jit.py` | 1.0 | DISABLED test_variant_consistency_jit_linalg_eigvals_cuda_complex64 (__main__.TestJitCUDA) - Platforms: win, windows
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_variant_consistency_jit_linalg_eigvals_cuda_complex64&suite=TestJitCUDA) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/12020727715).
Over the past 3 hours, it has been determined flaky in 2 workflow(s) with 2 failures and 2 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_variant_consistency_jit_linalg_eigvals_cuda_complex64`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_ops_jit.py` | test | disabled test variant consistency jit linalg eigvals cuda main testjitcuda platforms win windows this test was disabled because it is failing in ci see and the most recent trunk over the past hours it has been determined flaky in workflow s with failures and successes debugging instructions after clicking on the recent samples link do not assume things are okay if the ci is green we now shield flaky tests from developers so ci will thus be green but it will be harder to parse the logs to find relevant log snippets click on the workflow logs linked above click on the test step of the job so that it is expanded otherwise the grepping will not work grep for test variant consistency jit linalg eigvals cuda there should be several instances run as flaky tests are rerun in ci from which you can study the logs test file path test ops jit py | 1 |
146,885 | 13,195,533,207 | IssuesEvent | 2020-08-13 18:51:32 | Wikunia/Javis.jl | https://api.github.com/repos/Wikunia/Javis.jl | closed | Project philosophy | documentation | In some projects maintainers have a different future goal than the users. We should state our goals somewhere probably in the documentation. | 1.0 | Project philosophy - In some projects maintainers have a different future goal than the users. We should state our goals somewhere probably in the documentation. | non_test | project philosophy in some projects maintainers have a different future goal than the users we should state our goals somewhere probably in the documentation | 0 |
137,298 | 11,123,832,457 | IssuesEvent | 2019-12-19 08:34:21 | web-platform-tests/wpt | https://api.github.com/repos/web-platform-tests/wpt | closed | Support async setup() | infra priority:backlog testharness.js | https://github.com/web-platform-tests/wpt/issues/7188 added async cleanup for testharness.js.
There are also cases where it makes sense to delay the start of tests:
- `idl_test` loads resources from interfaces/ in a dummy "idl_test setup" test
- https://github.com/web-platform-tests/wpt/issues/13192 depends on webfonts being loaded
- many tests wait for the window load event
For such cases, to avoid either waiting in every test individually, or to have an initial dummy test, it would be useful to let `setup()` take promise-vending functions, and delay the start of the tests until it has resolved.
@jugglinmike | 1.0 | Support async setup() - https://github.com/web-platform-tests/wpt/issues/7188 added async cleanup for testharness.js.
There are also cases where it makes sense to delay the start of tests:
- `idl_test` loads resources from interfaces/ in a dummy "idl_test setup" test
- https://github.com/web-platform-tests/wpt/issues/13192 depends on webfonts being loaded
- many tests wait for the window load event
For such cases, to avoid either waiting in every test individually, or to have an initial dummy test, it would be useful to let `setup()` take promise-vending functions, and delay the start of the tests until it has resolved.
@jugglinmike | test | support async setup added async cleanup for testharness js there are also cases where it makes sense to delay the start of tests idl test loads resources from interfaces in a dummy idl test setup test depends on webfonts being loaded many tests wait for the window load event for such cases to avoid either waiting in every test individually or to have an initial dummy test it would be useful to let setup take promise vending functions and delay the start of the tests until it has resolved jugglinmike | 1 |
339,218 | 30,352,430,908 | IssuesEvent | 2023-07-11 20:06:19 | unifyai/ivy | https://api.github.com/repos/unifyai/ivy | closed | Fix pointwise_ops.test_torch_logical_not | PyTorch Frontend Sub Task Failing Test | | | |
|---|---|
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/5523625629/jobs/10074807417"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/5523625629/jobs/10074807417"><img src=https://img.shields.io/badge/-success-success></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/5523625629/jobs/10074807417"><img src=https://img.shields.io/badge/-success-success></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/5523625629/jobs/10074807417"><img src=https://img.shields.io/badge/-success-success></a>
|paddle|<a href="https://github.com/unifyai/ivy/actions/runs/5523625629/jobs/10074807417"><img src=https://img.shields.io/badge/-success-success></a>
| 1.0 | Fix pointwise_ops.test_torch_logical_not - | | |
|---|---|
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/5523625629/jobs/10074807417"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/5523625629/jobs/10074807417"><img src=https://img.shields.io/badge/-success-success></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/5523625629/jobs/10074807417"><img src=https://img.shields.io/badge/-success-success></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/5523625629/jobs/10074807417"><img src=https://img.shields.io/badge/-success-success></a>
|paddle|<a href="https://github.com/unifyai/ivy/actions/runs/5523625629/jobs/10074807417"><img src=https://img.shields.io/badge/-success-success></a>
| test | fix pointwise ops test torch logical not tensorflow a href src torch a href src jax a href src numpy a href src paddle a href src | 1 |
93,752 | 8,443,343,032 | IssuesEvent | 2018-10-18 15:21:48 | LLK/scratch-gui | https://api.github.com/repos/LLK/scratch-gui | closed | Issues found in Bug Hunt on 10/12/18 | smoke-testing | * [x] On Safari while running Bouncy Heroes the Set Y to 0 block flashed in the block list @BryceLTaylor
* [x] Stack glow in 216001494 is flashing on all of the stacks @BryceLTaylor
* [x] Text doesn’t appear on the stage in 216001494 @BryceLTaylor
* [x] When uploading a simple project, the loading screen appears so shortly that it’s not possible to read the text, which makes it seem ‘glitchy’. Consider putting in a minimum time the loading screen should be displayed. (Kevin)
Device | Browser | Name
-- | -- | --
Windows* | Chrome | Katelyn
Mac | Chrome | Karishma
iPad** | Safari | Chrisg
Chromebook | Chrome | Eric R
Windows* | Firefox | kathy
Android Tablet | Chrome |
Windows* | Edge | Ben
Mac | Safari | Bryce
Mac | Firefox | Andrew
| 1.0 | Issues found in Bug Hunt on 10/12/18 - * [x] On Safari while running Bouncy Heroes the Set Y to 0 block flashed in the block list @BryceLTaylor
* [x] Stack glow in 216001494 is flashing on all of the stacks @BryceLTaylor
* [x] Text doesn’t appear on the stage in 216001494 @BryceLTaylor
* [x] When uploading a simple project, the loading screen appears so shortly that it’s not possible to read the text, which makes it seem ‘glitchy’. Consider putting in a minimum time the loading screen should be displayed. (Kevin)
Device | Browser | Name
-- | -- | --
Windows* | Chrome | Katelyn
Mac | Chrome | Karishma
iPad** | Safari | Chrisg
Chromebook | Chrome | Eric R
Windows* | Firefox | kathy
Android Tablet | Chrome |
Windows* | Edge | Ben
Mac | Safari | Bryce
Mac | Firefox | Andrew
| test | issues found in bug hunt on on safari while running bouncy heroes the set y to block flashed in the block list bryceltaylor stack glow in is flashing on all of the stacks bryceltaylor text doesn’t appear on the stage in bryceltaylor when uploading a simple project the loading screen appears so shortly that it’s not possible to read the text which makes it seem ‘glitchy’ consider putting in a minimum time the loading screen should be displayed kevin device browser name windows chrome katelyn mac chrome karishma ipad safari chrisg chromebook chrome eric r windows firefox kathy android tablet chrome windows edge ben mac safari bryce mac firefox andrew | 1 |
85,864 | 8,000,682,172 | IssuesEvent | 2018-07-22 18:50:27 | FossilsArcheologyRevival/FossilsArcheologyRevival | https://api.github.com/repos/FossilsArcheologyRevival/FossilsArcheologyRevival | opened | Embryos "despawn" | 1.12.2 bug important needs testing | Unless you stay in a chunk with the animal and don't leave the world, the embryo will despawn, as opposed to the animal giving birth. | 1.0 | Embryos "despawn" - Unless you stay in a chunk with the animal and don't leave the world, the embryo will despawn, as opposed to the animal giving birth. | test | embryos despawn unless you stay in a chunk with the animal and don t leave the world the embryo will despawn as opposed to the animal giving birth | 1 |
63,105 | 6,826,000,465 | IssuesEvent | 2017-11-08 12:39:01 | status-im/status-react | https://api.github.com/repos/status-im/status-react | closed | After upgrade faucet, send and location messages are shown as plain text, request is shown without a text | bug high-priority Tested - OK | ### Description
*Type*: Bug
*Summary*: After upgrade from 0.9.11 (both iOs and Android)
1. For sender in Console chat faucet message is shown as plain text
2. For sender in 1-1 chat send and location messages are shown as plain text. Request message is shown without a text (it shows only blinking green icon)
3. Note that received location, send and request messages are shown fine in 1-1 chat, so issue happens only with messages sent by this contact, not with messages received
If re-launch the app then same issue persists


### Reproduction
On 0.9.11 (installed from PlayStore or TestFlight)
- on 2 devices: each of the users should send location, send, request and text message in 1-1 chat
- upgrade both devices to develop build
- open 1-1 chat
iOS: https://app.testfairy.com/projects/4803590-status/builds/6879016/sessions/12/?accessToken=pKobsPKp8kEIQAlYUx6FxHbAaLE
Android: https://app.testfairy.com/projects/4803622-status/builds/6870654/sessions/5/?accessToken=Ba6r8NyEGH5eFGV6HJckkgloqL0
### Additional Information
* Status version: develop 0.9.10-243-g47c4b901
* Operating System:
Real device iPhone 6s, iOs 10.3.3
Real device Galaxy S6, Android 6.0.1
| 1.0 | After upgrade faucet, send and location messages are shown as plain text, request is shown without a text - ### Description
*Type*: Bug
*Summary*: After upgrade from 0.9.11 (both iOs and Android)
1. For sender in Console chat faucet message is shown as plain text
2. For sender in 1-1 chat send and location messages are shown as plain text. Request message is shown without a text (it shows only blinking green icon)
3. Note that received location, send and request messages are shown fine in 1-1 chat, so issue happens only with messages sent by this contact, not with messages received
If re-launch the app then same issue persists


### Reproduction
On 0.9.11 (installed from PlayStore or TestFlight)
- on 2 devices: each of the users should send location, send, request and text message in 1-1 chat
- upgrade both devices to develop build
- open 1-1 chat
iOS: https://app.testfairy.com/projects/4803590-status/builds/6879016/sessions/12/?accessToken=pKobsPKp8kEIQAlYUx6FxHbAaLE
Android: https://app.testfairy.com/projects/4803622-status/builds/6870654/sessions/5/?accessToken=Ba6r8NyEGH5eFGV6HJckkgloqL0
### Additional Information
* Status version: develop 0.9.10-243-g47c4b901
* Operating System:
Real device iPhone 6s, iOs 10.3.3
Real device Galaxy S6, Android 6.0.1
| test | after upgrade faucet send and location messages are shown as plain text request is shown without a text description type bug summary after upgrade from both ios and android for sender in console chat faucet message is shown as plain text for sender in chat send and location messages are shown as plain text request message is shown without a text it shows only blinking green icon note that received location send and request messages are shown fine in chat so issue happens only with messages sent by this contact not with messages received if re launch the app then same issue persists reproduction on installed from playstore or testflight on devices each of the users should send location send request and text message in chat upgrade both devices to develop build open chat ios android additional information status version develop operating system real device iphone ios real device galaxy android | 1 |
259,824 | 8,200,334,567 | IssuesEvent | 2018-09-01 02:44:30 | LessWrong2/Lesswrong2 | https://api.github.com/repos/LessWrong2/Lesswrong2 | closed | Parsing of links doesn't work | 2. Medium Priority (Hard) | E.g. `The author was referencing [Lean product development](https://en.wikipedia.org/wiki/Lean_product_development) and [Agile software development](https://en.wikipedia.org/wiki/Agile_software_development)` parses the first link, but fails on the second | 1.0 | Parsing of links doesn't work - E.g. `The author was referencing [Lean product development](https://en.wikipedia.org/wiki/Lean_product_development) and [Agile software development](https://en.wikipedia.org/wiki/Agile_software_development)` parses the first link, but fails on the second | non_test | parsing of links doesn t work e g the author was referencing and parses the first link but fails on the second | 0 |
152,368 | 12,102,795,618 | IssuesEvent | 2020-04-20 17:17:13 | qri-io/qri | https://api.github.com/repos/qri-io/qri | closed | TestDatasetPullPushDeleteFeedsPreviewHTTP is flaky, failed on CI | flaky-test | Saw the following failure:
```
=== RUN TestDatasetPullPushDeleteFeedsPreviewHTTP
0/6 blocks transferred
1/6 blocks transferred
2/6 blocks transferred
3/6 blocks transferred
4/6 blocks transferred
5/6 blocks transferred
6/6 blocks transferred
done!
TestDatasetPullPushDeleteFeedsPreviewHTTP: remote_test.go:135: result mismatch (-want +got):
[]string{
... // 6 identical elements
"DatasetPushPreCheck",
"DatasetPushFinalCheck",
+ "DatasetPushFinalCheck",
+ "DatasetPushFinalCheck",
+ "DatasetPushed",
+ "DatasetPushed",
"DatasetPushed",
"LogRemovePreCheck",
... // 4 identical elements
}
``` | 1.0 | TestDatasetPullPushDeleteFeedsPreviewHTTP is flaky, failed on CI - Saw the following failure:
```
=== RUN TestDatasetPullPushDeleteFeedsPreviewHTTP
0/6 blocks transferred
1/6 blocks transferred
2/6 blocks transferred
3/6 blocks transferred
4/6 blocks transferred
5/6 blocks transferred
6/6 blocks transferred
done!
TestDatasetPullPushDeleteFeedsPreviewHTTP: remote_test.go:135: result mismatch (-want +got):
[]string{
... // 6 identical elements
"DatasetPushPreCheck",
"DatasetPushFinalCheck",
+ "DatasetPushFinalCheck",
+ "DatasetPushFinalCheck",
+ "DatasetPushed",
+ "DatasetPushed",
"DatasetPushed",
"LogRemovePreCheck",
... // 4 identical elements
}
``` | test | testdatasetpullpushdeletefeedspreviewhttp is flaky failed on ci saw the following failure run testdatasetpullpushdeletefeedspreviewhttp blocks transferred blocks transferred blocks transferred blocks transferred blocks transferred blocks transferred blocks transferred done testdatasetpullpushdeletefeedspreviewhttp remote test go result mismatch want got string identical elements datasetpushprecheck datasetpushfinalcheck datasetpushfinalcheck datasetpushfinalcheck datasetpushed datasetpushed datasetpushed logremoveprecheck identical elements | 1 |
325,120 | 9,917,273,774 | IssuesEvent | 2019-06-28 23:21:35 | okTurtles/group-income-simple | https://api.github.com/repos/okTurtles/group-income-simple | closed | Make it easy to style components differently | App:Frontend Kind:Enhancement Note:Research Note:UI/UX Note:Up-for-grabs Priority:Low | ### Problem
Right now it's not clear what a component's look & feel should look like.
Should it be contained with a bulma `.card` to have a nice soft border and shadow? Should it be plain with no border? What if we want to do white-on-black UI instead? etc. etc.
### Solution
Ideally, it should be simple for:
- The enclosing vue to decide what the component's styling should be
- For components to have a default styling that can easily be changed by changing a single line somewhere
- For theme support to be possible by having a dropdown theme toggler
One potential solution is to use [component inheritance](http://vuejsdevelopers.com/2017/06/11/vue-js-extending-components/).
Check out that article for more info.
**EDIT:** It might be simpler to just have theme stylesheets that are swapped out. | 1.0 | Make it easy to style components differently - ### Problem
Right now it's not clear what a component's look & feel should look like.
Should it be contained with a bulma `.card` to have a nice soft border and shadow? Should it be plain with no border? What if we want to do white-on-black UI instead? etc. etc.
### Solution
Ideally, it should be simple for:
- The enclosing vue to decide what the component's styling should be
- For components to have a default styling that can easily be changed by changing a single line somewhere
- For theme support to be possible by having a dropdown theme toggler
One potential solution is to use [component inheritance](http://vuejsdevelopers.com/2017/06/11/vue-js-extending-components/).
Check out that article for more info.
**EDIT:** It might be simpler to just have theme stylesheets that are swapped out. | non_test | make it easy to style components differently problem right now it s not clear what a component s look feel should look like should it be contained with a bulma card to have a nice soft border and shadow should it be plain with no border what if we want to do white on black ui instead etc etc solution ideally it should be simple for the enclosing vue to decide what the component s styling should be for components to have a default styling that can easily be changed by changing a single line somewhere for theme support to be possible by having a dropdown theme toggler one potential solution is to use check out that article for more info edit it might be simpler to just have theme stylesheets that are swapped out | 0 |
37,222 | 6,585,235,037 | IssuesEvent | 2017-09-13 13:23:06 | stelgenhof/AiLight | https://api.github.com/repos/stelgenhof/AiLight | closed | The REST API examples doesn't work | documentation | With curl 7.47.0, the REST API examples with method PATCH don't work. A Content-Type header is needed as well:
-H 'Content-Type: application/json' | 1.0 | The REST API examples doesn't work - With curl 7.47.0, the REST API examples with method PATCH don't work. A Content-Type header is needed as well:
-H 'Content-Type: application/json' | non_test | the rest api examples doesn t work with curl the rest api examples with method patch don t work a content type header is needed as well h content type application json | 0 |
276,803 | 24,021,143,373 | IssuesEvent | 2022-09-15 07:46:12 | 0xs34n/starknet.js | https://api.github.com/repos/0xs34n/starknet.js | closed | Add missing tests for RPC methods `getBlockTransactionCount` | Type: test | After PR: https://github.com/0xs34n/starknet.js/pull/301
New rpc methods were added without unit tests. | 1.0 | Add missing tests for RPC methods `getBlockTransactionCount` - After PR: https://github.com/0xs34n/starknet.js/pull/301
New rpc methods were added without unit tests. | test | add missing tests for rpc methods getblocktransactioncount after pr new rpc methods were added without unit tests | 1 |
240,930 | 20,100,635,236 | IssuesEvent | 2022-02-07 03:21:55 | edh-git/EL_Display_Hub | https://api.github.com/repos/edh-git/EL_Display_Hub | closed | TEST enter 3times on Email ONLY | Twomon SE Android QA Test | Product:Twomon SE<br><br>Device OS:Android<br><br>e-mail:enter@intelAMD<br><br>TTTTT<br><br>[EL_Display_28702b6f-5b06-4563-abc1-97c84b86776a.zip](https://github.com/edh-git/EL_Display_Hub/blob/main/EL_Display_28702b6f-5b06-4563-abc1-97c84b86776a.zip) | 1.0 | TEST enter 3times on Email ONLY - Product:Twomon SE<br><br>Device OS:Android<br><br>e-mail:enter@intelAMD<br><br>TTTTT<br><br>[EL_Display_28702b6f-5b06-4563-abc1-97c84b86776a.zip](https://github.com/edh-git/EL_Display_Hub/blob/main/EL_Display_28702b6f-5b06-4563-abc1-97c84b86776a.zip) | test | test enter on email only product twomon se device os android e mail enter intelamd ttttt | 1 |
625,962 | 19,783,330,379 | IssuesEvent | 2022-01-18 01:30:22 | michaelrsweet/lprint | https://api.github.com/repos/michaelrsweet/lprint | closed | macOS package doesn't start/load LPrint server automatically | bug priority-medium platform issue | I'm running lprint 1.1.0, installed via the pkg here on github.
I can run `lprint devices` and see the printer, but I can't add the printer.
Any ideas?
```
❯ sw_vers
ProductName: macOS
ProductVersion: 12.1
BuildVersion: 21C52
~
❯ lprint devices
snmp://amandas-printer
usb://Zebra%20/LP2844%20?serial=42J112901277
~ took 4s
❯ lprint add -d zebra -v 'usb://Zebra%20/LP2844%20?serial=42J112901277' -m epl2_4inch-203dpi-dt
lprint: Unable to start server: Bad file descriptor
~
❯ lprint add -d zebra -v usb://Zebra%20/LP2844%20?serial=42J112901277 -m epl2_4inch-203dpi-dt
lprint: Unable to start server: Bad file descriptor
~
❯ sudo lprint add -d zebra -v usb://Zebra%20/LP2844%20?serial=42J112901277 -m epl2_4inch-203dpi-dt
lprint: Unable to start server: Bad file descriptor
~
❯ sudo lprint add -d zebra -v 'usb://Zebra%20/LP2844%20?serial=42J112901277' -m epl2_4inch-203dpi-dt
lprint: Unable to start server: Bad file descriptor
~
❯ lprint drivers
lprint: statefile='/Users/wolf/.lprint.conf'
E [2022-01-01T19:37:50.935Z] [Device] Unable to claim USB interface: Access denied (insufficient permissions)
dymo_lm-400 "DYMO LabelMANAGER 400" "MFG:DYMO;MDL:LabelMANAGER 400 ;"
dymo_lm-450 "DYMO LabelMANAGER 450" "MFG:DYMO;MDL:LabelMANAGER 450 ;"
dymo_lm-pc "DYMO LabelMANAGER PC" "MFG:DYMO;MDL:LabelMANAGER PC ;"
dymo_lm-pc-ii "DYMO LabelMANAGER PC II" "MFG:DYMO;MDL:LabelMANAGER PC II ;"
dymo_lm-pnp "DYMO LabelMANAGER PNP" "MFG:DYMO;MDL:LabelMANAGER PNP ;"
dymo_lp-350 "DYMO LabelPOINT 350" "MFG:DYMO;MDL:LabelPOINT 350 ;"
dymo_lw-300 "DYMO LabelWriter 300" "MFG:DYMO;MDL:LabelWriter 300;"
dymo_lw-310 "DYMO LabelWriter 310" "MFG:DYMO;MDL:LabelWriter 310;"
dymo_lw-315 "DYMO LabelWriter 315" "MFG:DYMO;MDL:LabelWriter 315;"
dymo_lw-320 "DYMO LabelWriter 320" "MFG:DYMO;MDL:LabelWriter 320;"
dymo_lw-330 "DYMO LabelWriter 330" "MFG:DYMO;MDL:LabelWriter 330;"
dymo_lw-330-turbo "DYMO LabelWriter 330 Turbo" "MFG:DYMO;MDL:LabelWriter 330 Turbo;"
dymo_lw-400 "DYMO LabelWriter 400" "MFG:DYMO;MDL:LabelWriter ;"
dymo_lw-400-turbo "DYMO LabelWriter 400 Turbo" "MFG:DYMO;MDL:LabelWriter ;"
dymo_lw-450 "DYMO LabelWriter 450" "MFG:DYMO;MDL:LabelWriter 450;"
dymo_lw-450-duo-label "DYMO LabelWriter 450 DUO Label" "MFG:DYMO;MDL:LabelWriter ;"
dymo_lw-450-duo-tape "DYMO LabelWriter 450 DUO Tape" "MFG:DYMO;MDL:LabelWriter ;"
dymo_lw-450-turbo "DYMO LabelWriter 450 Turbo" "MFG:DYMO;MDL:LabelWriter 450 Turbo;"
dymo_lw-450-twin-turbo "DYMO LabelWriter 450 Twin Turbo" "MFG:DYMO;MDL:LabelWriter 450 Twin Turbo;"
dymo_lw-4xl "DYMO LabelWriter 4XL" "MFG:DYMO;MDL:LabelWriter 4XL;"
dymo_lw-duo-label "DYMO LabelWriter DUO Label" "MFG:DYMO;MDL:LabelWriter DUO Label;"
dymo_lw-duo-tape "DYMO LabelWriter DUO Tape" "MFG:DYMO;MDL:LabelWriter DUO Tape;"
dymo_lw-duo-tape-128 "DYMO LabelWriter DUO Tape 128" "MFG:DYMO;MDL:LabelWriter DUO Tape 128;"
dymo_lw-se450 "DYMO LabelWriter SE450" "MFG:DYMO;MDL:LabelWriter SE450;"
epl2_2inch-203dpi-dt "Zebra ZPL 2-inch/203dpi/Direct-Thermal" ""
epl2_2inch-203dpi-tt "Zebra ZPL 2-inch/203dpi/Thermal-Transfer" ""
epl2_2inch-300dpi-dt "Zebra ZPL 2-inch/300dpi/Direct-Thermal" ""
epl2_2inch-300dpi-tt "Zebra ZPL 2-inch/300dpi/Thermal-Transfer" ""
epl2_4inch-203dpi-dt "Zebra ZPL 4-inch/203dpi/Direct-Thermal" "COMMAND SET:EPL;"
epl2_4inch-203dpi-tt "Zebra ZPL 4-inch/203dpi/Thermal-Transfer" ""
epl2_4inch-300dpi-dt "Zebra ZPL 4-inch/300dpi/Direct-Thermal" ""
epl2_4inch-300dpi-tt "Zebra ZPL 4-inch/300dpi/Thermal-Transfer" ""
zpl_2inch-203dpi-dt "Zebra ZPL 2-inch/203dpi/Direct-Thermal" ""
zpl_2inch-203dpi-tt "Zebra ZPL 2-inch/203dpi/Thermal-Transfer" ""
zpl_2inch-300dpi-dt "Zebra ZPL 2-inch/300dpi/Direct-Thermal" ""
zpl_2inch-300dpi-tt "Zebra ZPL 2-inch/300dpi/Thermal-Transfer" ""
zpl_2inch-600dpi-tt "Zebra ZPL 2-inch/600dpi/Thermal-Transfer" ""
zpl_4inch-203dpi-dt "Zebra ZPL 4-inch/203dpi/Direct-Thermal" ""
zpl_4inch-203dpi-tt "Zebra ZPL 4-inch/203dpi/Thermal-Transfer" ""
zpl_4inch-300dpi-dt "Zebra ZPL 4-inch/300dpi/Direct-Thermal" ""
zpl_4inch-300dpi-tt "Zebra ZPL 4-inch/300dpi/Thermal-Transfer" ""
zpl_4inch-600dpi-tt "Zebra ZPL 4-inch/600dpi/Thermal-Transfer" ""
~
❯ sudo lprint drivers
lprint: statefile='/Users/wolf/.lprint.conf'
E [2022-01-01T19:37:58.347Z] [Device] Unable to claim USB interface: Access denied (insufficient permissions)
dymo_lm-400 "DYMO LabelMANAGER 400" "MFG:DYMO;MDL:LabelMANAGER 400 ;"
dymo_lm-450 "DYMO LabelMANAGER 450" "MFG:DYMO;MDL:LabelMANAGER 450 ;"
dymo_lm-pc "DYMO LabelMANAGER PC" "MFG:DYMO;MDL:LabelMANAGER PC ;"
dymo_lm-pc-ii "DYMO LabelMANAGER PC II" "MFG:DYMO;MDL:LabelMANAGER PC II ;"
dymo_lm-pnp "DYMO LabelMANAGER PNP" "MFG:DYMO;MDL:LabelMANAGER PNP ;"
dymo_lp-350 "DYMO LabelPOINT 350" "MFG:DYMO;MDL:LabelPOINT 350 ;"
dymo_lw-300 "DYMO LabelWriter 300" "MFG:DYMO;MDL:LabelWriter 300;"
dymo_lw-310 "DYMO LabelWriter 310" "MFG:DYMO;MDL:LabelWriter 310;"
dymo_lw-315 "DYMO LabelWriter 315" "MFG:DYMO;MDL:LabelWriter 315;"
dymo_lw-320 "DYMO LabelWriter 320" "MFG:DYMO;MDL:LabelWriter 320;"
dymo_lw-330 "DYMO LabelWriter 330" "MFG:DYMO;MDL:LabelWriter 330;"
dymo_lw-330-turbo "DYMO LabelWriter 330 Turbo" "MFG:DYMO;MDL:LabelWriter 330 Turbo;"
dymo_lw-400 "DYMO LabelWriter 400" "MFG:DYMO;MDL:LabelWriter ;"
dymo_lw-400-turbo "DYMO LabelWriter 400 Turbo" "MFG:DYMO;MDL:LabelWriter ;"
dymo_lw-450 "DYMO LabelWriter 450" "MFG:DYMO;MDL:LabelWriter 450;"
dymo_lw-450-duo-label "DYMO LabelWriter 450 DUO Label" "MFG:DYMO;MDL:LabelWriter ;"
dymo_lw-450-duo-tape "DYMO LabelWriter 450 DUO Tape" "MFG:DYMO;MDL:LabelWriter ;"
dymo_lw-450-turbo "DYMO LabelWriter 450 Turbo" "MFG:DYMO;MDL:LabelWriter 450 Turbo;"
dymo_lw-450-twin-turbo "DYMO LabelWriter 450 Twin Turbo" "MFG:DYMO;MDL:LabelWriter 450 Twin Turbo;"
dymo_lw-4xl "DYMO LabelWriter 4XL" "MFG:DYMO;MDL:LabelWriter 4XL;"
dymo_lw-duo-label "DYMO LabelWriter DUO Label" "MFG:DYMO;MDL:LabelWriter DUO Label;"
dymo_lw-duo-tape "DYMO LabelWriter DUO Tape" "MFG:DYMO;MDL:LabelWriter DUO Tape;"
dymo_lw-duo-tape-128 "DYMO LabelWriter DUO Tape 128" "MFG:DYMO;MDL:LabelWriter DUO Tape 128;"
dymo_lw-se450 "DYMO LabelWriter SE450" "MFG:DYMO;MDL:LabelWriter SE450;"
epl2_2inch-203dpi-dt "Zebra ZPL 2-inch/203dpi/Direct-Thermal" ""
epl2_2inch-203dpi-tt "Zebra ZPL 2-inch/203dpi/Thermal-Transfer" ""
epl2_2inch-300dpi-dt "Zebra ZPL 2-inch/300dpi/Direct-Thermal" ""
epl2_2inch-300dpi-tt "Zebra ZPL 2-inch/300dpi/Thermal-Transfer" ""
epl2_4inch-203dpi-dt "Zebra ZPL 4-inch/203dpi/Direct-Thermal" "COMMAND SET:EPL;"
epl2_4inch-203dpi-tt "Zebra ZPL 4-inch/203dpi/Thermal-Transfer" ""
epl2_4inch-300dpi-dt "Zebra ZPL 4-inch/300dpi/Direct-Thermal" ""
epl2_4inch-300dpi-tt "Zebra ZPL 4-inch/300dpi/Thermal-Transfer" ""
zpl_2inch-203dpi-dt "Zebra ZPL 2-inch/203dpi/Direct-Thermal" ""
zpl_2inch-203dpi-tt "Zebra ZPL 2-inch/203dpi/Thermal-Transfer" ""
zpl_2inch-300dpi-dt "Zebra ZPL 2-inch/300dpi/Direct-Thermal" ""
zpl_2inch-300dpi-tt "Zebra ZPL 2-inch/300dpi/Thermal-Transfer" ""
zpl_2inch-600dpi-tt "Zebra ZPL 2-inch/600dpi/Thermal-Transfer" ""
zpl_4inch-203dpi-dt "Zebra ZPL 4-inch/203dpi/Direct-Thermal" ""
zpl_4inch-203dpi-tt "Zebra ZPL 4-inch/203dpi/Thermal-Transfer" ""
zpl_4inch-300dpi-dt "Zebra ZPL 4-inch/300dpi/Direct-Thermal" ""
zpl_4inch-300dpi-tt "Zebra ZPL 4-inch/300dpi/Thermal-Transfer" ""
zpl_4inch-600dpi-tt "Zebra ZPL 4-inch/600dpi/Thermal-Transfer" ""
``` | 1.0 | macOS package doesn't start/load LPrint server automatically - I'm running lprint 1.1.0, installed via the pkg here on github.
I can run `lprint devices` and see the printer, but I can't add the printer.
Any ideas?
```
❯ sw_vers
ProductName: macOS
ProductVersion: 12.1
BuildVersion: 21C52
~
❯ lprint devices
snmp://amandas-printer
usb://Zebra%20/LP2844%20?serial=42J112901277
~ took 4s
❯ lprint add -d zebra -v 'usb://Zebra%20/LP2844%20?serial=42J112901277' -m epl2_4inch-203dpi-dt
lprint: Unable to start server: Bad file descriptor
~
❯ lprint add -d zebra -v usb://Zebra%20/LP2844%20?serial=42J112901277 -m epl2_4inch-203dpi-dt
lprint: Unable to start server: Bad file descriptor
~
❯ sudo lprint add -d zebra -v usb://Zebra%20/LP2844%20?serial=42J112901277 -m epl2_4inch-203dpi-dt
lprint: Unable to start server: Bad file descriptor
~
❯ sudo lprint add -d zebra -v 'usb://Zebra%20/LP2844%20?serial=42J112901277' -m epl2_4inch-203dpi-dt
lprint: Unable to start server: Bad file descriptor
~
❯ lprint drivers
lprint: statefile='/Users/wolf/.lprint.conf'
E [2022-01-01T19:37:50.935Z] [Device] Unable to claim USB interface: Access denied (insufficient permissions)
dymo_lm-400 "DYMO LabelMANAGER 400" "MFG:DYMO;MDL:LabelMANAGER 400 ;"
dymo_lm-450 "DYMO LabelMANAGER 450" "MFG:DYMO;MDL:LabelMANAGER 450 ;"
dymo_lm-pc "DYMO LabelMANAGER PC" "MFG:DYMO;MDL:LabelMANAGER PC ;"
dymo_lm-pc-ii "DYMO LabelMANAGER PC II" "MFG:DYMO;MDL:LabelMANAGER PC II ;"
dymo_lm-pnp "DYMO LabelMANAGER PNP" "MFG:DYMO;MDL:LabelMANAGER PNP ;"
dymo_lp-350 "DYMO LabelPOINT 350" "MFG:DYMO;MDL:LabelPOINT 350 ;"
dymo_lw-300 "DYMO LabelWriter 300" "MFG:DYMO;MDL:LabelWriter 300;"
dymo_lw-310 "DYMO LabelWriter 310" "MFG:DYMO;MDL:LabelWriter 310;"
dymo_lw-315 "DYMO LabelWriter 315" "MFG:DYMO;MDL:LabelWriter 315;"
dymo_lw-320 "DYMO LabelWriter 320" "MFG:DYMO;MDL:LabelWriter 320;"
dymo_lw-330 "DYMO LabelWriter 330" "MFG:DYMO;MDL:LabelWriter 330;"
dymo_lw-330-turbo "DYMO LabelWriter 330 Turbo" "MFG:DYMO;MDL:LabelWriter 330 Turbo;"
dymo_lw-400 "DYMO LabelWriter 400" "MFG:DYMO;MDL:LabelWriter ;"
dymo_lw-400-turbo "DYMO LabelWriter 400 Turbo" "MFG:DYMO;MDL:LabelWriter ;"
dymo_lw-450 "DYMO LabelWriter 450" "MFG:DYMO;MDL:LabelWriter 450;"
dymo_lw-450-duo-label "DYMO LabelWriter 450 DUO Label" "MFG:DYMO;MDL:LabelWriter ;"
dymo_lw-450-duo-tape "DYMO LabelWriter 450 DUO Tape" "MFG:DYMO;MDL:LabelWriter ;"
dymo_lw-450-turbo "DYMO LabelWriter 450 Turbo" "MFG:DYMO;MDL:LabelWriter 450 Turbo;"
dymo_lw-450-twin-turbo "DYMO LabelWriter 450 Twin Turbo" "MFG:DYMO;MDL:LabelWriter 450 Twin Turbo;"
dymo_lw-4xl "DYMO LabelWriter 4XL" "MFG:DYMO;MDL:LabelWriter 4XL;"
dymo_lw-duo-label "DYMO LabelWriter DUO Label" "MFG:DYMO;MDL:LabelWriter DUO Label;"
dymo_lw-duo-tape "DYMO LabelWriter DUO Tape" "MFG:DYMO;MDL:LabelWriter DUO Tape;"
dymo_lw-duo-tape-128 "DYMO LabelWriter DUO Tape 128" "MFG:DYMO;MDL:LabelWriter DUO Tape 128;"
dymo_lw-se450 "DYMO LabelWriter SE450" "MFG:DYMO;MDL:LabelWriter SE450;"
epl2_2inch-203dpi-dt "Zebra ZPL 2-inch/203dpi/Direct-Thermal" ""
epl2_2inch-203dpi-tt "Zebra ZPL 2-inch/203dpi/Thermal-Transfer" ""
epl2_2inch-300dpi-dt "Zebra ZPL 2-inch/300dpi/Direct-Thermal" ""
epl2_2inch-300dpi-tt "Zebra ZPL 2-inch/300dpi/Thermal-Transfer" ""
epl2_4inch-203dpi-dt "Zebra ZPL 4-inch/203dpi/Direct-Thermal" "COMMAND SET:EPL;"
epl2_4inch-203dpi-tt "Zebra ZPL 4-inch/203dpi/Thermal-Transfer" ""
epl2_4inch-300dpi-dt "Zebra ZPL 4-inch/300dpi/Direct-Thermal" ""
epl2_4inch-300dpi-tt "Zebra ZPL 4-inch/300dpi/Thermal-Transfer" ""
zpl_2inch-203dpi-dt "Zebra ZPL 2-inch/203dpi/Direct-Thermal" ""
zpl_2inch-203dpi-tt "Zebra ZPL 2-inch/203dpi/Thermal-Transfer" ""
zpl_2inch-300dpi-dt "Zebra ZPL 2-inch/300dpi/Direct-Thermal" ""
zpl_2inch-300dpi-tt "Zebra ZPL 2-inch/300dpi/Thermal-Transfer" ""
zpl_2inch-600dpi-tt "Zebra ZPL 2-inch/600dpi/Thermal-Transfer" ""
zpl_4inch-203dpi-dt "Zebra ZPL 4-inch/203dpi/Direct-Thermal" ""
zpl_4inch-203dpi-tt "Zebra ZPL 4-inch/203dpi/Thermal-Transfer" ""
zpl_4inch-300dpi-dt "Zebra ZPL 4-inch/300dpi/Direct-Thermal" ""
zpl_4inch-300dpi-tt "Zebra ZPL 4-inch/300dpi/Thermal-Transfer" ""
zpl_4inch-600dpi-tt "Zebra ZPL 4-inch/600dpi/Thermal-Transfer" ""
~
❯ sudo lprint drivers
lprint: statefile='/Users/wolf/.lprint.conf'
E [2022-01-01T19:37:58.347Z] [Device] Unable to claim USB interface: Access denied (insufficient permissions)
dymo_lm-400 "DYMO LabelMANAGER 400" "MFG:DYMO;MDL:LabelMANAGER 400 ;"
dymo_lm-450 "DYMO LabelMANAGER 450" "MFG:DYMO;MDL:LabelMANAGER 450 ;"
dymo_lm-pc "DYMO LabelMANAGER PC" "MFG:DYMO;MDL:LabelMANAGER PC ;"
dymo_lm-pc-ii "DYMO LabelMANAGER PC II" "MFG:DYMO;MDL:LabelMANAGER PC II ;"
dymo_lm-pnp "DYMO LabelMANAGER PNP" "MFG:DYMO;MDL:LabelMANAGER PNP ;"
dymo_lp-350 "DYMO LabelPOINT 350" "MFG:DYMO;MDL:LabelPOINT 350 ;"
dymo_lw-300 "DYMO LabelWriter 300" "MFG:DYMO;MDL:LabelWriter 300;"
dymo_lw-310 "DYMO LabelWriter 310" "MFG:DYMO;MDL:LabelWriter 310;"
dymo_lw-315 "DYMO LabelWriter 315" "MFG:DYMO;MDL:LabelWriter 315;"
dymo_lw-320 "DYMO LabelWriter 320" "MFG:DYMO;MDL:LabelWriter 320;"
dymo_lw-330 "DYMO LabelWriter 330" "MFG:DYMO;MDL:LabelWriter 330;"
dymo_lw-330-turbo "DYMO LabelWriter 330 Turbo" "MFG:DYMO;MDL:LabelWriter 330 Turbo;"
dymo_lw-400 "DYMO LabelWriter 400" "MFG:DYMO;MDL:LabelWriter ;"
dymo_lw-400-turbo "DYMO LabelWriter 400 Turbo" "MFG:DYMO;MDL:LabelWriter ;"
dymo_lw-450 "DYMO LabelWriter 450" "MFG:DYMO;MDL:LabelWriter 450;"
dymo_lw-450-duo-label "DYMO LabelWriter 450 DUO Label" "MFG:DYMO;MDL:LabelWriter ;"
dymo_lw-450-duo-tape "DYMO LabelWriter 450 DUO Tape" "MFG:DYMO;MDL:LabelWriter ;"
dymo_lw-450-turbo "DYMO LabelWriter 450 Turbo" "MFG:DYMO;MDL:LabelWriter 450 Turbo;"
dymo_lw-450-twin-turbo "DYMO LabelWriter 450 Twin Turbo" "MFG:DYMO;MDL:LabelWriter 450 Twin Turbo;"
dymo_lw-4xl "DYMO LabelWriter 4XL" "MFG:DYMO;MDL:LabelWriter 4XL;"
dymo_lw-duo-label "DYMO LabelWriter DUO Label" "MFG:DYMO;MDL:LabelWriter DUO Label;"
dymo_lw-duo-tape "DYMO LabelWriter DUO Tape" "MFG:DYMO;MDL:LabelWriter DUO Tape;"
dymo_lw-duo-tape-128 "DYMO LabelWriter DUO Tape 128" "MFG:DYMO;MDL:LabelWriter DUO Tape 128;"
dymo_lw-se450 "DYMO LabelWriter SE450" "MFG:DYMO;MDL:LabelWriter SE450;"
epl2_2inch-203dpi-dt "Zebra ZPL 2-inch/203dpi/Direct-Thermal" ""
epl2_2inch-203dpi-tt "Zebra ZPL 2-inch/203dpi/Thermal-Transfer" ""
epl2_2inch-300dpi-dt "Zebra ZPL 2-inch/300dpi/Direct-Thermal" ""
epl2_2inch-300dpi-tt "Zebra ZPL 2-inch/300dpi/Thermal-Transfer" ""
epl2_4inch-203dpi-dt "Zebra ZPL 4-inch/203dpi/Direct-Thermal" "COMMAND SET:EPL;"
epl2_4inch-203dpi-tt "Zebra ZPL 4-inch/203dpi/Thermal-Transfer" ""
epl2_4inch-300dpi-dt "Zebra ZPL 4-inch/300dpi/Direct-Thermal" ""
epl2_4inch-300dpi-tt "Zebra ZPL 4-inch/300dpi/Thermal-Transfer" ""
zpl_2inch-203dpi-dt "Zebra ZPL 2-inch/203dpi/Direct-Thermal" ""
zpl_2inch-203dpi-tt "Zebra ZPL 2-inch/203dpi/Thermal-Transfer" ""
zpl_2inch-300dpi-dt "Zebra ZPL 2-inch/300dpi/Direct-Thermal" ""
zpl_2inch-300dpi-tt "Zebra ZPL 2-inch/300dpi/Thermal-Transfer" ""
zpl_2inch-600dpi-tt "Zebra ZPL 2-inch/600dpi/Thermal-Transfer" ""
zpl_4inch-203dpi-dt "Zebra ZPL 4-inch/203dpi/Direct-Thermal" ""
zpl_4inch-203dpi-tt "Zebra ZPL 4-inch/203dpi/Thermal-Transfer" ""
zpl_4inch-300dpi-dt "Zebra ZPL 4-inch/300dpi/Direct-Thermal" ""
zpl_4inch-300dpi-tt "Zebra ZPL 4-inch/300dpi/Thermal-Transfer" ""
zpl_4inch-600dpi-tt "Zebra ZPL 4-inch/600dpi/Thermal-Transfer" ""
``` | non_test | macos package doesn t start load lprint server automatically i m running lprint installed via the pkg here on github i can run lprint devices and see the printer but i can t add the printer any ideas ❯ sw vers productname macos productversion buildversion ❯ lprint devices snmp amandas printer usb zebra serial took ❯ lprint add d zebra v usb zebra serial m dt lprint unable to start server bad file descriptor ❯ lprint add d zebra v usb zebra serial m dt lprint unable to start server bad file descriptor ❯ sudo lprint add d zebra v usb zebra serial m dt lprint unable to start server bad file descriptor ❯ sudo lprint add d zebra v usb zebra serial m dt lprint unable to start server bad file descriptor ❯ lprint drivers lprint statefile users wolf lprint conf e unable to claim usb interface access denied insufficient permissions dymo lm dymo labelmanager mfg dymo mdl labelmanager dymo lm dymo labelmanager mfg dymo mdl labelmanager dymo lm pc dymo labelmanager pc mfg dymo mdl labelmanager pc dymo lm pc ii dymo labelmanager pc ii mfg dymo mdl labelmanager pc ii dymo lm pnp dymo labelmanager pnp mfg dymo mdl labelmanager pnp dymo lp dymo labelpoint mfg dymo mdl labelpoint dymo lw dymo labelwriter mfg dymo mdl labelwriter dymo lw dymo labelwriter mfg dymo mdl labelwriter dymo lw dymo labelwriter mfg dymo mdl labelwriter dymo lw dymo labelwriter mfg dymo mdl labelwriter dymo lw dymo labelwriter mfg dymo mdl labelwriter dymo lw turbo dymo labelwriter turbo mfg dymo mdl labelwriter turbo dymo lw dymo labelwriter mfg dymo mdl labelwriter dymo lw turbo dymo labelwriter turbo mfg dymo mdl labelwriter dymo lw dymo labelwriter mfg dymo mdl labelwriter dymo lw duo label dymo labelwriter duo label mfg dymo mdl labelwriter dymo lw duo tape dymo labelwriter duo tape mfg dymo mdl labelwriter dymo lw turbo dymo labelwriter turbo mfg dymo mdl labelwriter turbo dymo lw twin turbo dymo labelwriter twin turbo mfg dymo mdl labelwriter twin turbo dymo lw dymo labelwriter mfg 
dymo mdl labelwriter dymo lw duo label dymo labelwriter duo label mfg dymo mdl labelwriter duo label dymo lw duo tape dymo labelwriter duo tape mfg dymo mdl labelwriter duo tape dymo lw duo tape dymo labelwriter duo tape mfg dymo mdl labelwriter duo tape dymo lw dymo labelwriter mfg dymo mdl labelwriter dt zebra zpl inch direct thermal tt zebra zpl inch thermal transfer dt zebra zpl inch direct thermal tt zebra zpl inch thermal transfer dt zebra zpl inch direct thermal command set epl tt zebra zpl inch thermal transfer dt zebra zpl inch direct thermal tt zebra zpl inch thermal transfer zpl dt zebra zpl inch direct thermal zpl tt zebra zpl inch thermal transfer zpl dt zebra zpl inch direct thermal zpl tt zebra zpl inch thermal transfer zpl tt zebra zpl inch thermal transfer zpl dt zebra zpl inch direct thermal zpl tt zebra zpl inch thermal transfer zpl dt zebra zpl inch direct thermal zpl tt zebra zpl inch thermal transfer zpl tt zebra zpl inch thermal transfer ❯ sudo lprint drivers lprint statefile users wolf lprint conf e unable to claim usb interface access denied insufficient permissions dymo lm dymo labelmanager mfg dymo mdl labelmanager dymo lm dymo labelmanager mfg dymo mdl labelmanager dymo lm pc dymo labelmanager pc mfg dymo mdl labelmanager pc dymo lm pc ii dymo labelmanager pc ii mfg dymo mdl labelmanager pc ii dymo lm pnp dymo labelmanager pnp mfg dymo mdl labelmanager pnp dymo lp dymo labelpoint mfg dymo mdl labelpoint dymo lw dymo labelwriter mfg dymo mdl labelwriter dymo lw dymo labelwriter mfg dymo mdl labelwriter dymo lw dymo labelwriter mfg dymo mdl labelwriter dymo lw dymo labelwriter mfg dymo mdl labelwriter dymo lw dymo labelwriter mfg dymo mdl labelwriter dymo lw turbo dymo labelwriter turbo mfg dymo mdl labelwriter turbo dymo lw dymo labelwriter mfg dymo mdl labelwriter dymo lw turbo dymo labelwriter turbo mfg dymo mdl labelwriter dymo lw dymo labelwriter mfg dymo mdl labelwriter dymo lw duo label dymo labelwriter duo label mfg dymo mdl 
labelwriter dymo lw duo tape dymo labelwriter duo tape mfg dymo mdl labelwriter dymo lw turbo dymo labelwriter turbo mfg dymo mdl labelwriter turbo dymo lw twin turbo dymo labelwriter twin turbo mfg dymo mdl labelwriter twin turbo dymo lw dymo labelwriter mfg dymo mdl labelwriter dymo lw duo label dymo labelwriter duo label mfg dymo mdl labelwriter duo label dymo lw duo tape dymo labelwriter duo tape mfg dymo mdl labelwriter duo tape dymo lw duo tape dymo labelwriter duo tape mfg dymo mdl labelwriter duo tape dymo lw dymo labelwriter mfg dymo mdl labelwriter dt zebra zpl inch direct thermal tt zebra zpl inch thermal transfer dt zebra zpl inch direct thermal tt zebra zpl inch thermal transfer dt zebra zpl inch direct thermal command set epl tt zebra zpl inch thermal transfer dt zebra zpl inch direct thermal tt zebra zpl inch thermal transfer zpl dt zebra zpl inch direct thermal zpl tt zebra zpl inch thermal transfer zpl dt zebra zpl inch direct thermal zpl tt zebra zpl inch thermal transfer zpl tt zebra zpl inch thermal transfer zpl dt zebra zpl inch direct thermal zpl tt zebra zpl inch thermal transfer zpl dt zebra zpl inch direct thermal zpl tt zebra zpl inch thermal transfer zpl tt zebra zpl inch thermal transfer | 0 |
33,733 | 4,858,469,258 | IssuesEvent | 2016-11-13 04:49:05 | EarthEvil/yellowHughRichard | https://api.github.com/repos/EarthEvil/yellowHughRichard | closed | Test: test each function | Priority 1 Stress Test | we need to write scripts to test each function: sign up, log in, debit, deposit, get transaction history
Everyone should do it, since it's too much work | 1.0 | Test: test each function - we need to write scripts to test each function: sign up, log in, debit, deposit, get transaction history
Everyone should do it, since it's too much work | test | test test each function we need to write scripts to test each function sign up log in debit deposit get transaction history everyone should do it since it s too much work | 1 |
128,154 | 10,516,853,424 | IssuesEvent | 2019-09-28 20:42:47 | beyondgrep/ack2 | https://api.github.com/repos/beyondgrep/ack2 | closed | test run_ack( '--match', 'Sue' ); fails on Windows, chokes on "stdout.log". | bug testing windows | On Windows (XP SP2, Strawberry 5.16.2, should not matter), `t/ack-match.t` failed for me with `Out of memory` error. This is because ack searches the `stdout.log` file as it is written into the build dir, during the 5th test:
`run_ack( '--match', 'Sue' );`
This patch is a (silly) workaround, a better approach for a fix might be to tell the test instance of ack to ignore all/some `.log`files.
```
--- Util.pm.original.ack-2.10 Tue Sep 24 23:35:58 2013
+++ Util.pm Thu Oct 3 11:33:44 2013
@@ -178,8 +178,15 @@
if ( is_windows() ) {
require Win32::ShellQuote;
# Capture stderr & stdout output into these files (only on Win32).
- my $catchout_file = 'stdout.log';
- my $catcherr_file = 'stderr.log';
+
+ # Put into parent of build dir, because build dir is searched by
+ # ack tests. Test 5 in ack-match.t:
+ # run_ack( '--match', 'Sue' );
+ # can fail with "Out of memory" or fill storage volume -
+ # Test ack reads stdout.log, appending a message for every read line to
+ # stdout.log, filling up both storage and memory.
+ my $catchout_file = '../stdout.log';
+ my $catcherr_file = '../stderr.log';
open(SAVEOUT, ">&STDOUT") or die "Can't dup STDOUT: $!";
open(SAVEERR, ">&STDERR") or die "Can't dup STDERR: $!";
@@ -199,6 +206,8 @@
close SAVEERR;
@stdout = read_file($catchout_file);
@stderr = read_file($catcherr_file);
+ unlink $catchout_file;
+ unlink $catcherr_file;
}
else {
my ( $stdout_read, $stdout_write );
```
| 1.0 | test run_ack( '--match', 'Sue' ); fails on Windows, chokes on "stdout.log". - On Windows (XP SP2, Strawberry 5.16.2, should not matter), `t/ack-match.t` failed for me with `Out of memory` error. This is because ack searches the `stdout.log` file as it is written into the build dir, during the 5th test:
`run_ack( '--match', 'Sue' );`
This patch is a (silly) workaround, a better approach for a fix might be to tell the test instance of ack to ignore all/some `.log`files.
```
--- Util.pm.original.ack-2.10 Tue Sep 24 23:35:58 2013
+++ Util.pm Thu Oct 3 11:33:44 2013
@@ -178,8 +178,15 @@
if ( is_windows() ) {
require Win32::ShellQuote;
# Capture stderr & stdout output into these files (only on Win32).
- my $catchout_file = 'stdout.log';
- my $catcherr_file = 'stderr.log';
+
+ # Put into parent of build dir, because build dir is searched by
+ # ack tests. Test 5 in ack-match.t:
+ # run_ack( '--match', 'Sue' );
+ # can fail with "Out of memory" or fill storage volume -
+ # Test ack reads stdout.log, appending a message for every read line to
+ # stdout.log, filling up both storage and memory.
+ my $catchout_file = '../stdout.log';
+ my $catcherr_file = '../stderr.log';
open(SAVEOUT, ">&STDOUT") or die "Can't dup STDOUT: $!";
open(SAVEERR, ">&STDERR") or die "Can't dup STDERR: $!";
@@ -199,6 +206,8 @@
close SAVEERR;
@stdout = read_file($catchout_file);
@stderr = read_file($catcherr_file);
+ unlink $catchout_file;
+ unlink $catcherr_file;
}
else {
my ( $stdout_read, $stdout_write );
```
| test | test run ack match sue fails on windows chokes on stdout log on windows xp strawberry should not matter t ack match t failed for me with out of memory error this is because ack searches the stdout log file as it is written into the build dir during the test run ack match sue this patch is a silly workaround a better approach for a fix might be to tell the test instance of ack to ignore all some log files util pm original ack tue sep util pm thu oct if is windows require shellquote capture stderr stdout output into these files only on my catchout file stdout log my catcherr file stderr log put into parent of build dir because build dir is searched by ack tests test in ack match t run ack match sue can fail with out of memory or fill storage volume test ack reads stdout log appending a message for every read line to stdout log filling up both storage and memory my catchout file stdout log my catcherr file stderr log open saveout stdout or die can t dup stdout open saveerr stderr or die can t dup stderr close saveerr stdout read file catchout file stderr read file catcherr file unlink catchout file unlink catcherr file else my stdout read stdout write | 1 |
44,684 | 5,639,768,695 | IssuesEvent | 2017-04-06 15:00:27 | wpengine/hgv | https://api.github.com/repos/wpengine/hgv | closed | failed provision on #332 | needs-testing | Getting this error message on a recent provision from the dev branch today #332
`TASK [php-fpm : Download PHP 7 deb] ********************************************
fatal: [127.0.0.1]: FAILED! => {"changed": false, "failed": true, "msg": "failed to create temporary content file: The read operation timed out"} | 1.0 | failed provision on #332 - Getting this error message on a recent provision from the dev branch today #332
`TASK [php-fpm : Download PHP 7 deb] ********************************************
fatal: [127.0.0.1]: FAILED! => {"changed": false, "failed": true, "msg": "failed to create temporary content file: The read operation timed out"} | test | failed provision on getting this error message on a recent provision from the dev branch today task fatal failed changed false failed true msg failed to create temporary content file the read operation timed out | 1 |
135,710 | 30,350,784,264 | IssuesEvent | 2023-07-11 18:47:12 | creativecommons/chooser | https://api.github.com/repos/creativecommons/chooser | closed | [Bug] Fix Typo in FAQs | 🟧 priority: high 🏁 status: ready for work 🛠 goal: fix 💻 aspect: code | ## Description
There is unnecessary right parenthesis character as highlighted in the picture below:
<img width="725" alt="Screenshot 2023-03-29 at 4 34 57 PM" src="https://user-images.githubusercontent.com/77684943/228568751-35001fbd-4437-4f38-a64c-465ef6233425.png">
## Reproduction
<!-- Provide detailed steps to reproduce the bug -->
1. Go to https://chooser-beta.creativecommons.org/
2. Scroll down to `Confused? Need help?`
3. Click on `What Are Creative Commons Licenses?`
4. See error in Modal.
## Expectation
Remove the right parenthesis to fix typo.
## Resolution
<!-- Replace the [ ] with [x] to check the box. -->
- [x] I would be interested in resolving this bug.
| 1.0 | [Bug] Fix Typo in FAQs - ## Description
There is unnecessary right parenthesis character as highlighted in the picture below:
<img width="725" alt="Screenshot 2023-03-29 at 4 34 57 PM" src="https://user-images.githubusercontent.com/77684943/228568751-35001fbd-4437-4f38-a64c-465ef6233425.png">
## Reproduction
<!-- Provide detailed steps to reproduce the bug -->
1. Go to https://chooser-beta.creativecommons.org/
2. Scroll down to `Confused? Need help?`
3. Click on `What Are Creative Commons Licenses?`
4. See error in Modal.
## Expectation
Remove the right parenthesis to fix typo.
## Resolution
<!-- Replace the [ ] with [x] to check the box. -->
- [x] I would be interested in resolving this bug.
| non_test | fix typo in faqs description there is unnecessary right parenthesis character as highlighted in the picture below img width alt screenshot at pm src reproduction go to scroll down to confused need help click on what are creative commons licenses see error in modal expectation remove the right parenthesis to fix typo resolution i would be interested in resolving this bug | 0 |
741,743 | 25,815,571,354 | IssuesEvent | 2022-12-12 04:29:07 | coachbots/cctl | https://api.github.com/repos/coachbots/cctl | reopened | cctl on and off needs to be called multiple times | effort: 8 priority: soon work: complicated type: bug type: discussion state: pending | to turn a robot on and off I'm having to call cctl on/off multiple times or waiting a while even just for one robot. | 1.0 | cctl on and off needs to be called multiple times - to turn a robot on and off I'm having to call cctl on/off multiple times or waiting a while even just for one robot. | non_test | cctl on and off needs to be called multiple times to turn a robot on and off i m having to call cctl on off multiple times or waiting a while even just for one robot | 0 |
116,994 | 9,905,334,243 | IssuesEvent | 2019-06-27 11:19:47 | ahmedkaludi/accelerated-mobile-pages | https://api.github.com/repos/ahmedkaludi/accelerated-mobile-pages | closed | Comment Not Working and View Non-AMP Link | Need Testing [Priority: HIGH] | From past 2 Months i am Experiencing few Issues.
1. Comment Button Doesn't Work.
2. View Non-AMP Version Not Working.
-------------------
**1 ISSUE** - I am using default WordPress Comment Option and i am able to see comments but whenever me or anyone click on "Comment" under every post it just reload the page and nothing happens.
**2 ISSUE** - Recently i activated PWA Option via AMP for WP Plugin. I deleted it instantly and from that day whenever i click on **view non amp version**. I am keep Getting Redirected to same page as AMP Version.
**3 ISSUE** - I Added an Subscribe Widget in AMP footer and it Loads as it should but when i add email and click on subscribe button nothing happens. (This Issue is not very big)
My Website's Link is - [CLICK HERE](https://www.real-tips.xyz)
Hope These Issues would be fixed. | 1.0 | Comment Not Working and View Non-AMP Link - From past 2 Months i am Experiencing few Issues.
1. Comment Button Doesn't Work.
2. View Non-AMP Version Not Working.
-------------------
**1 ISSUE** - I am using default WordPress Comment Option and i am able to see comments but whenever me or anyone click on "Comment" under every post it just reload the page and nothing happens.
**2 ISSUE** - Recently i activated PWA Option via AMP for WP Plugin. I deleted it instantly and from that day whenever i click on **view non amp version**. I am keep Getting Redirected to same page as AMP Version.
**3 ISSUE** - I Added an Subscribe Widget in AMP footer and it Loads as it should but when i add email and click on subscribe button nothing happens. (This Issue is not very big)
My Website's Link is - [CLICK HERE](https://www.real-tips.xyz)
Hope These Issues would be fixed. | test | comment not working and view non amp link from past months i am experiencing few issues comment button doesn t work view non amp version not working issue i am using default wordpress comment option and i am able to see comments but whenever me or anyone click on comment under every post it just reload the page and nothing happens issue recently i activated pwa option via amp for wp plugin i deleted it instantly and from that day whenever i click on view non amp version i am keep getting redirected to same page as amp version issue i added an subscribe widget in amp footer and it loads as it should but when i add email and click on subscribe button nothing happens this issue is not very big my website s link is hope these issues would be fixed | 1 |
196,231 | 14,849,758,139 | IssuesEvent | 2021-01-18 02:23:07 | Thy-Vipe/BeastsOfBermuda-issues | https://api.github.com/repos/Thy-Vipe/BeastsOfBermuda-issues | opened | [Quality of life] Rex Op | Balance Quality of life public_testing | _Originally written by **Gavin | 76561198254999852**_
Game Version: 1.1.1113
*===== System Specs =====
CPU Brand: Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz
Vendor: GenuineIntel
GPU Brand: NVIDIA GeForce GTX 1070
GPU Driver Info: Unknown
Num CPU Cores: 6
===================*
Context: **Rex**
Map: Rival_Shores
Rex does 1,500 dmg at 1.2 with its special ability meanwhile Apato does around 1,300 at 1.4 (which doesn't even matter because the 1.2 damage cap.) That is a big problem. | 1.0 | [Quality of life] Rex Op - _Originally written by **Gavin | 76561198254999852**_
Game Version: 1.1.1113
*===== System Specs =====
CPU Brand: Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz
Vendor: GenuineIntel
GPU Brand: NVIDIA GeForce GTX 1070
GPU Driver Info: Unknown
Num CPU Cores: 6
===================*
Context: **Rex**
Map: Rival_Shores
Rex does 1,500 dmg at 1.2 with its special ability meanwhile Apato does around 1,300 at 1.4 (which doesn't even matter because the 1.2 damage cap.) That is a big problem. | test | rex op originally written by gavin game version system specs cpu brand intel r core tm cpu vendor genuineintel gpu brand nvidia geforce gtx gpu driver info unknown num cpu cores context rex map rival shores rex does dmg at with its special ability meanwhile apato does around at which doesn t even matter because the damage cap that is a big problem | 1 |
158,341 | 12,413,005,538 | IssuesEvent | 2020-05-22 11:48:03 | Oldes/Rebol-issues | https://api.github.com/repos/Oldes/Rebol-issues | closed | Have TO LOGIC! 0 return true | CC.resolved Status.important Test wanted Type.wish | _Submitted by:_ **BrianH**
We currently need the TRUE? function to convert conditional values to their corresponding logic values, but why can't we use TO-LOGIC for this? The only thing that TO-LOGIC does differently is convert zero to false, while zero is conditionally truthy in Rebol. Treating zero as false is more C-like.
If we have TO-LOGIC 0 return true instead, that would be more consistent with Rebol conditional semantics. As a side effect, we wouldn't need TRUE? anymore, except for code clarity.
``` rebol
(Submitted to start the discussion. Suggested by #2053.)
```
``` rebol
>> to logic! 0
== true
```
---
<sup>**Imported from:** **[CureCode](https://www.curecode.org/rebol3/ticket.rsp?id=2055)** [ Version: r3 master Type: Wish Platform: All Category: Datatype Reproduce: Always Fixed-in:r3 master ]</sup>
<sup>**Imported from**: https://github.com/rebol/rebol-issues/issues/2055</sup>
Comments:
---
> **Rebolbot** commented on Sep 16, 2013:
_Submitted by:_ **Ladislav**
I support this idea, since I never needed TO LOGIC! 0 to return FALSE.
---
> **Rebolbot** commented on Sep 16, 2013:
_Submitted by:_ **Ladislav**
Brian, you wrote: "We currently need the TRUE? function to convert conditional values to their corresponding logic values" - in my opinion it is important to realize why we need a function to convert conditional values to logic values.
It is not obvious why it is so, since there are many programming languages not having logic values at all and being "content" with just conditional values. So, we should really find out why?
My idea is that the reason lies in the fact that there are logic operators accepting logic values but not conditional values.
---
> **Rebolbot** commented on Sep 16, 2013:
_Submitted by:_ **BrianH**
Well, let's start with the original purpose of the TRUE? function. When we were working on the initial version of the current R3-GUI, there was a need to store actual true/false values in fields in the GUI, so as to save the conditional result of a previous expression. We couldn't just store the original value because different datatypes were treated differently by the dialect. If TO-LOGIC behaved in a way that was consistent with conditional truthiness, we wouldn't have needed TRUE?. At the time we were being more strict about Rebol 2 compatibility, so it wouldn't have occurred to us to change TO-LOGIC. That time has passed.
Another situation would be when you are calling functions that take logic parameters, and behave differently when passed non-logic parameters if they are allowed at all. Sometimes you want to constrain the parameter types for better debugging, sometimes for future expansion, sometimes because the function is a command that isn't implemented in Rebol so it isn't as flexible. Sometimes the commands implement a dialect where logic values are treated differently than other values - conditional truthiness is more of a DO dialect thing, not necessarily supported in other dialects.
TRUE? is used sometimes in APPLY blocks for passing values to the refinement arguments. It used to be necessary but those arguments are treated conditionally now, so it's mostly done to increase code clarity.
I suppose some people might want to use TRUE? to convert values for use with the AND, OR or XOR operators, but I rarely do so because of how awkward it is to mix prefix and infix expressions in Rebol. It's much easier to use ALL and ANY instead, though there isn't really a prefix, conditional version of XOR. I suppose it could come in handy when using AND~, OR~ and XOR~, for those who use such functions.
---
> **Rebolbot** commented on Sep 17, 2013:
_Submitted by:_ **Ladislav**
Just a remainder:
``` rebol
>> make logic! 0
== false
```
``` rebol
>> make logic! 1
== true
```
---
> **Rebolbot** commented on Sep 17, 2013:
_Submitted by:_ **BrianH**
Well, that means we won't have lost functionality. Unless you want to change MAKE too?
---
> **Rebolbot** commented on Sep 17, 2013:
_Submitted by:_ **abolka**
I'm in favour of amending TO LOGIC! functionality as well.
Not so much in favour of removing TRUE?, though. I think TRUE? reads far better in many cases than TO LOGIC!.
---
> **Rebolbot** commented on Sep 17, 2013:
_Submitted by:_ **BrianH**
TRUE? could be another word for TO-LOGIC, and both could be optional, not included in minimal builds. Just like FOUND?.
---
> **Rebolbot** commented on Sep 18, 2013:
_Submitted by:_ **fork**
In the further area of consistency, currently:
``` rebol
>> to integer! true
== 1
```
``` rebol
>> to integer! false
== 0
```
So long as we're bringing TO in line with Rebol's default worldview, then this should yield an error, just as with other wide types that pass for true... and picking an arbitrary one to map back from true and false is arbitrary:
``` rebol
>> to integer! 12-Dec-2012
** Script error: cannot MAKE/TO integer! from: 12-Dec-2012
** Where: to
** Near: to integer! 12-Dec-2012
```
MAKE can keep the current behavior, by the rationale discussed that it doesn't need to follow "Rebol logic"... construction may be defined as seen fit:
``` rebol
>> make integer! true
== 1
```
``` rebol
>> make integer! false
== 0
```
---
> **Rebolbot** mentioned this issue on Jan 12, 2016:
> [Simplify TO BLOCK! and complex construct via MAKE BLOCK! ](https://github.com/Oldes/Rebol-issues/issues/2056)
---
> **Rebolbot** mentioned this issue on Jan 22, 2016:
> [[Epic] Backwards-incompatible API changes, for the greater good](https://github.com/Oldes/Rebol-issues/issues/2128)
---
> **Rebolbot** added **Type.wish** and **Status.important** on Jan 12, 2016
--- | 1.0 | Have TO LOGIC! 0 return true - _Submitted by:_ **BrianH**
We currently need the TRUE? function to convert conditional values to their corresponding logic values, but why can't we use TO-LOGIC for this? The only thing that TO-LOGIC does differently is convert zero to false, while zero is conditionally truthy in Rebol. Treating zero as false is more C-like.
If we have TO-LOGIC 0 return true instead, that would be more consistent with Rebol conditional semantics. As a side effect, we wouldn't need TRUE? anymore, except for code clarity.
``` rebol
(Submitted to start the discussion. Suggested by #2053.)
```
``` rebol
>> to logic! 0
== true
```
---
<sup>**Imported from:** **[CureCode](https://www.curecode.org/rebol3/ticket.rsp?id=2055)** [ Version: r3 master Type: Wish Platform: All Category: Datatype Reproduce: Always Fixed-in:r3 master ]</sup>
<sup>**Imported from**: https://github.com/rebol/rebol-issues/issues/2055</sup>
Comments:
---
> **Rebolbot** commented on Sep 16, 2013:
_Submitted by:_ **Ladislav**
I support this idea, since I never needed TO LOGIC! 0 to return FALSE.
---
> **Rebolbot** commented on Sep 16, 2013:
_Submitted by:_ **Ladislav**
Brian, you wrote: "We currently need the TRUE? function to convert conditional values to their corresponding logic values" - in my opinion it is important to realize why we need a function to convert conditional values to logic values.
It is not obvious why it is so, since there are many programming languages not having logic values at all and being "content" with just conditional values. So, we should really find out why?
My idea is that the reason lies in the fact that there are logic operators accepting logic values but not conditional values.
---
> **Rebolbot** commented on Sep 16, 2013:
_Submitted by:_ **BrianH**
Well, let's start with the original purpose of the TRUE? function. When we were working on the initial version of the current R3-GUI, there was a need to store actual true/false values in fields in the GUI, so as to save the conditional result of a previous expression. We couldn't just store the original value because different datatypes were treated differently by the dialect. If TO-LOGIC behaved in a way that was consistent with conditional truthiness, we wouldn't have needed TRUE?. At the time we were being more strict about Rebol 2 compatibility, so it wouldn't have occurred to us to change TO-LOGIC. That time has passed.
Another situation would be when you are calling functions that take logic parameters, and behave differently when passed non-logic parameters if they are allowed at all. Sometimes you want to constrain the parameter types for better debugging, sometimes for future expansion, sometimes because the function is a command that isn't implemented in Rebol so it isn't as flexible. Sometimes the commands implement a dialect where logic values are treated differently than other values - conditional truthiness is more of a DO dialect thing, not necessarily supported in other dialects.
TRUE? is used sometimes in APPLY blocks for passing values to the refinement arguments. It used to be necessary but those arguments are treated conditionally now, so it's mostly done to increase code clarity.
I suppose some people might want to use TRUE? to convert values for use with the AND, OR or XOR operators, but I rarely do so because of how awkward it is to mix prefix and infix expressions in Rebol. It's much easier to use ALL and ANY instead, though there isn't really a prefix, conditional version of XOR. I suppose it could come in handy when using AND~, OR~ and XOR~, for those who use such functions.
---
> **Rebolbot** commented on Sep 17, 2013:
_Submitted by:_ **Ladislav**
Just a remainder:
``` rebol
>> make logic! 0
== false
```
``` rebol
>> make logic! 1
== true
```
---
> **Rebolbot** commented on Sep 17, 2013:
_Submitted by:_ **BrianH**
Well, that means we won't have lost functionality. Unless you want to change MAKE too?
---
> **Rebolbot** commented on Sep 17, 2013:
_Submitted by:_ **abolka**
I'm in favour of amending TO LOGIC! functionality as well.
Not so much in favour of removing TRUE?, though. I think TRUE? reads far better in many cases than TO LOGIC!.
---
> **Rebolbot** commented on Sep 17, 2013:
_Submitted by:_ **BrianH**
TRUE? could be another word for TO-LOGIC, and both could be optional, not included in minimal builds. Just like FOUND?.
---
> **Rebolbot** commented on Sep 18, 2013:
_Submitted by:_ **fork**
In the further area of consistency, currently:
``` rebol
>> to integer! true
== 1
```
``` rebol
>> to integer! false
== 0
```
So long as we're bringing TO in line with Rebol's default worldview, then this should yield an error, just as with other wide types that pass for true... and picking an arbitrary one to map back from true and false is arbitrary:
``` rebol
>> to integer! 12-Dec-2012
** Script error: cannot MAKE/TO integer! from: 12-Dec-2012
** Where: to
** Near: to integer! 12-Dec-2012
```
MAKE can keep the current behavior, by the rationale discussed that it doesn't need to follow "Rebol logic"... construction may be defined as seen fit:
``` rebol
>> make integer! true
== 1
```
``` rebol
>> make integer! false
== 0
```
---
> **Rebolbot** mentioned this issue on Jan 12, 2016:
> [Simplify TO BLOCK! and complex construct via MAKE BLOCK! ](https://github.com/Oldes/Rebol-issues/issues/2056)
---
> **Rebolbot** mentioned this issue on Jan 22, 2016:
> [[Epic] Backwards-incompatible API changes, for the greater good](https://github.com/Oldes/Rebol-issues/issues/2128)
---
> **Rebolbot** added **Type.wish** and **Status.important** on Jan 12, 2016
--- | test | have to logic return true submitted by brianh we currently need the true function to convert conditional values to their corresponding logic values but why can t we use to logic for this the only thing that to logic does differently is convert zero to false while zero is conditionally truthy in rebol treating zero as false is more c like if we have to logic return true instead that would be more consistent with rebol conditional semantics as a side effect we wouldn t need true anymore except for code clarity rebol submitted to start the discussion suggested by rebol to logic true imported from imported from comments rebolbot commented on sep submitted by ladislav i support this idea since i never needed to logic to return false rebolbot commented on sep submitted by ladislav brian you wrote we currently need the true function to convert conditional values to their corresponding logic values in my opinion it is important to realize why we need a function to convert conditional values to logic values it is not obvious why it is so since there are many programming languages not having logic values at all and being content with just conditional values so we should really find out why my idea is that the reason lies in the fact that there are logic operators accepting logic values but not conditional values rebolbot commented on sep submitted by brianh well let s start with the original purpose of the true function when we were working on the initial version of the current gui there was a need to store actual true false values in fields in the gui so as to save the conditional result of a previous expression we couldn t just store the original value because different datatypes were treated differently by the dialect if to logic behaved in a way that was consistent with conditional truthiness we wouldn t have needed true at the time we were being more strict about rebol compatibility so it wouldn t have occurred to us to change to logic that time has passed 
another situation would be when you are calling functions that take logic parameters and behave differently when passed non logic parameters if they are allowed at all sometimes you want to constrain the parameter types for better debugging sometimes for future expansion sometimes because the function is a command that isn t implemented in rebol so it isn t as flexible sometimes the commands implement a dialect where logic values are treated differently than other values conditional truthiness is more of a do dialect thing not necessarily supported in other dialects true is used sometimes in apply blocks for passing values to the refinement arguments it used to be necessary but those arguments are treated conditionally now so it s mostly done to increase code clarity i suppose some people might want to use true to convert values for use with the and or or xor operators but i rarely do so because of how awkward it is to mix prefix and infix expressions in rebol it s much easier to use all and any instead though there isn t really a prefix conditional version of xor i suppose it could come in handy when using and or and xor for those who use such functions rebolbot commented on sep submitted by ladislav just a remainder rebol make logic false rebol make logic true rebolbot commented on sep submitted by brianh well that means we won t have lost functionality unless you want to change make too rebolbot commented on sep submitted by abolka i m in favour of amending to logic functionality as well not so much in favour of removing true though i think true reads far better in many cases than to logic rebolbot commented on sep submitted by brianh true could be another word for to logic and both could be optional not included in minimal builds just like found rebolbot commented on sep submitted by fork in the further area of consistency currently rebol to integer true rebol to integer false so long as we re bringing to in line with rebol s default worldview then this should 
yield an error just as with other wide types that pass for true and picking an arbitrary one to map back from true and false is arbitrary rebol to integer dec script error cannot make to integer from dec where to near to integer dec make can keep the current behavior by the rationale discussed that it doesn t need to follow rebol logic construction may be defined as seen fit rebol make integer true rebol make integer false rebolbot mentioned this issue on jan rebolbot mentioned this issue on jan backwards incompatible api changes for the greater good rebolbot added type wish and status important on jan | 1 |
271,605 | 23,617,359,507 | IssuesEvent | 2022-08-24 17:03:31 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | closed | cli/democluster: TestTransientClusterMultitenant failed | C-test-failure O-robot A-multitenancy branch-release-22.1 | cli/democluster.TestTransientClusterMultitenant [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=4594706&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=4594706&tab=artifacts#/) on release-22.1 @ [2d49a7bd0d1ebeff93277233695411d25b2a5d39](https://github.com/cockroachdb/cockroach/commits/2d49a7bd0d1ebeff93277233695411d25b2a5d39):
```
=== RUN TestTransientClusterMultitenant
test_log_scope.go:79: test logs captured to: /artifacts/tmp/_tmp/b296432f77c11d14af9987c5728df5a9/logTestTransientClusterMultitenant1403781840
test_log_scope.go:80: use -show-logs to present logs inline
```
<details><summary>Help</summary>
<p>
See also: [How To Investigate a Go Test Failure \(internal\)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM)
Parameters in this failure:
- TAGS=bazel,gss
</p>
</details>
<details><summary>Same failure on other branches</summary>
<p>
- #77590 cli/democluster: TestTransientClusterMultitenant failed [C-test-failure O-robot T-sql-queries branch-master]
</p>
</details>
/cc @cockroachdb/server
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*TestTransientClusterMultitenant.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
Jira issue: CRDB-13912 | 1.0 | cli/democluster: TestTransientClusterMultitenant failed - cli/democluster.TestTransientClusterMultitenant [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=4594706&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=4594706&tab=artifacts#/) on release-22.1 @ [2d49a7bd0d1ebeff93277233695411d25b2a5d39](https://github.com/cockroachdb/cockroach/commits/2d49a7bd0d1ebeff93277233695411d25b2a5d39):
```
=== RUN TestTransientClusterMultitenant
test_log_scope.go:79: test logs captured to: /artifacts/tmp/_tmp/b296432f77c11d14af9987c5728df5a9/logTestTransientClusterMultitenant1403781840
test_log_scope.go:80: use -show-logs to present logs inline
```
<details><summary>Help</summary>
<p>
See also: [How To Investigate a Go Test Failure \(internal\)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM)
Parameters in this failure:
- TAGS=bazel,gss
</p>
</details>
<details><summary>Same failure on other branches</summary>
<p>
- #77590 cli/democluster: TestTransientClusterMultitenant failed [C-test-failure O-robot T-sql-queries branch-master]
</p>
</details>
/cc @cockroachdb/server
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*TestTransientClusterMultitenant.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
Jira issue: CRDB-13912 | test | cli democluster testtransientclustermultitenant failed cli democluster testtransientclustermultitenant with on release run testtransientclustermultitenant test log scope go test logs captured to artifacts tmp tmp test log scope go use show logs to present logs inline help see also parameters in this failure tags bazel gss same failure on other branches cli democluster testtransientclustermultitenant failed cc cockroachdb server jira issue crdb | 1 |