Column        Type            Values
------------  --------------  ----------------
Unnamed: 0    int64           0 .. 832k
id            float64         2.49B .. 32.1B
type          stringclasses   1 value
created_at    stringlengths   19 .. 19
repo          stringlengths   4 .. 112
repo_url      stringlengths   33 .. 141
action        stringclasses   3 values
title         stringlengths   1 .. 1.02k
labels        stringlengths   4 .. 1.54k
body          stringlengths   1 .. 262k
index         stringclasses   17 values
text_combine  stringlengths   95 .. 262k
label         stringclasses   2 values
text          stringlengths   96 .. 252k
binary_label  int64           0 .. 1
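The `Unnamed: 0` column suggests this dump came from a pandas DataFrame saved to CSV with its index. As a hedged sketch only (the dataset's actual loading code is not shown), one illustrative row with the columns and dtypes listed above could be built like this; the `"..."` strings are placeholders for the long text fields, and the other values are copied from the first record below:

```python
import pandas as pd

# Sketch: one illustrative row matching the schema above.
# "..." stands in for the long text fields; everything else is copied
# from the first record in this dump.
df = pd.DataFrame([{
    "Unnamed: 0": 255349,
    "id": 21919350303.0,  # schema lists id as float64
    "type": "IssuesEvent",
    "created_at": "2022-05-22 10:29:37",
    "repo": "bonfire-networks/bonfire-app",
    "repo_url": "https://api.github.com/repos/bonfire-networks/bonfire-app",
    "action": "closed",
    "title": "Add a distraction-free option to write a post",
    "labels": "New Issue beta-testing",
    "body": "...",
    "index": "1.0",
    "text_combine": "...",
    "label": "test",
    "text": "...",
    "binary_label": 1,
}])

# The integer columns and the float64 id line up with the schema.
assert str(df["Unnamed: 0"].dtype) == "int64"
assert str(df["binary_label"].dtype) == "int64"
assert str(df["id"].dtype) == "float64"
```

In each record, `binary_label` tracks `label` (`test` maps to 1, `non_test` to 0), so it appears to be a derived column rather than independent data.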
Unnamed: 0:   255,349
id:           21,919,350,303
type:         IssuesEvent
created_at:   2022-05-22 10:29:37
repo:         bonfire-networks/bonfire-app
repo_url:     https://api.github.com/repos/bonfire-networks/bonfire-app
action:       closed
title:        Add a distraction-free option to write a post
labels:       New Issue beta-testing
body:
The text input for writing a post is too small to go beyond microblogging. It does not encourage deeper and structured conversations. We should definitively provide an option for distraction-free writing.
index:        1.0
text_combine:
Add a distraction-free option to write a post - The text input for writing a post is too small to go beyond microblogging. It does not encourage deeper and structured conversations. We should definitively provide an option for distraction-free writing.
label:        test
text:
add a distraction free option to write a post the text input for writing a post is too small to go beyond microblogging it does not encourage deeper and structured conversations we should definitively provide an option for distraction free writing
binary_label: 1
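Comparing `text_combine` with `text` in the record above suggests the preprocessing step behind the `text` column: lowercase, ASCII punctuation and digits replaced by spaces, and whitespace collapsed (non-ASCII characters such as curly quotes survive, as later records show). A plausible reconstruction — an assumption, since the dataset's actual code is not shown — is:

```python
import re
import string

# Hypothetical reconstruction of the text_combine -> text normalization
# visible in the records: lowercase, replace ASCII punctuation and digits
# with spaces, then collapse runs of whitespace. string.punctuation is
# ASCII-only, so non-ASCII characters (e.g. curly quotes) pass through.
_PUNCT_DIGITS = re.compile(f"[{re.escape(string.punctuation)}0-9]")

def normalize(s: str) -> str:
    return " ".join(_PUNCT_DIGITS.sub(" ", s.lower()).split())

combined = ("Add a distraction-free option to write a post - The text input "
            "for writing a post is too small")
print(normalize(combined))
# -> add a distraction free option to write a post the text input for writing a post is too small
```

This rule also reproduces the fourth record's `text`, where `Web-1.0.0: 3 vulnerabilities` collapses to `web vulnerabilities`.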
Unnamed: 0:   277,598
id:           24,088,295,063
type:         IssuesEvent
created_at:   2022-09-19 12:53:50
repo:         status-im/status-desktop
repo_url:     https://api.github.com/repos/status-im/status-desktop
action:       closed
title:        Fix image display when "see profile pictures from Contacts” is set to true
labels:       bug tested priority 1: high
body:
### Description Logged in users don’t see their own profile image in a community’s member list when “see profile pictures from Contacts” is set
index:        1.0
text_combine:
Fix image display when "see profile pictures from Contacts” is set to true - ### Description Logged in users don’t see their own profile image in a community’s member list when “see profile pictures from Contacts” is set
label:        test
text:
fix image display when see profile pictures from contacts” is set to true description logged in users don’t see their own profile image in a community’s member list when “see profile pictures from contacts” is set
binary_label: 1
Unnamed: 0:   10,690
id:           15,709,365,620
type:         IssuesEvent
created_at:   2021-03-26 22:21:29
repo:         NASA-PDS/pds-deep-archive
repo_url:     https://api.github.com/repos/NASA-PDS/pds-deep-archive
action:       closed
title:        As a user, I want the SIP manifest to include valid URLs
labels:       p.must-have requirement sprint-backlog
body:
...so that I can ensure the validity of the SIP --- **Acceptance Criteria** TBD --- **Engineering Tasks** - [x] provide spot check of bundle.xml URL to make sure the URL is valid
index:        1.0
text_combine:
As a user, I want the SIP manifest to include valid URLs - ...so that I can ensure the validity of the SIP --- **Acceptance Criteria** TBD --- **Engineering Tasks** - [x] provide spot check of bundle.xml URL to make sure the URL is valid
label:        non_test
text:
as a user i want the sip manifest to include valid urls so that i can ensure the validity of the sip acceptance criteria tbd engineering tasks provide spot check of bundle xml url to make sure the url is valid
binary_label: 0
Unnamed: 0:   262,644
id:           27,982,852,712
type:         IssuesEvent
created_at:   2023-03-26 11:11:29
repo:         lukebrogan-mend/dotnet-eshop2
repo_url:     https://api.github.com/repos/lukebrogan-mend/dotnet-eshop2
action:       opened
title:        Web-1.0.0: 3 vulnerabilities (highest severity is: 7.8)
labels:       Mend: dependency security vulnerability
body:
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>Web-1.0.0</b></p></summary> <p></p> <p>Path to vulnerable library: /home/wss-scanner/.nuget/packages/system.text.regularexpressions/4.3.0/system.text.regularexpressions.4.3.0.nupkg</p> <p> </details> ## Vulnerabilities | CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (Web version) | Remediation Available | | ------------- | ------------- | ----- | ----- | ----- | ------------- | --- | | [CVE-2022-41032](https://www.mend.io/vulnerability-database/CVE-2022-41032) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.8 | nuget.protocol.5.11.0.nupkg | Transitive | N/A* | &#10060; | | [CVE-2018-8292](https://www.mend.io/vulnerability-database/CVE-2018-8292) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | system.net.http.4.3.0.nupkg | Transitive | N/A* | &#10060; | | [CVE-2019-0820](https://www.mend.io/vulnerability-database/CVE-2019-0820) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | system.text.regularexpressions.4.3.0.nupkg | Transitive | N/A* | &#10060; | <p>*For some transitive vulnerabilities, there is no version of direct dependency with a fix. Check the "Details" section below to see if there is a version of transitive dependency where vulnerability is fixed.</p> ## Details <details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-41032</summary> ### Vulnerable Library - <b>nuget.protocol.5.11.0.nupkg</b></p> <p>NuGet's implementation for interacting with feeds. 
Contains functionality for all feed types.</p> <p>Library home page: <a href="https://api.nuget.org/packages/nuget.protocol.5.11.0.nupkg">https://api.nuget.org/packages/nuget.protocol.5.11.0.nupkg</a></p> <p>Path to dependency file: /tests/FunctionalTests/FunctionalTests.csproj</p> <p>Path to vulnerable library: /home/wss-scanner/.nuget/packages/nuget.protocol/5.11.0/nuget.protocol.5.11.0.nupkg</p> <p> Dependency Hierarchy: - Web-1.0.0 (Root Library) - microsoft.visualstudio.web.codegeneration.design.6.0.7.nupkg - microsoft.visualstudio.web.codegenerators.mvc.6.0.7.nupkg - microsoft.visualstudio.web.codegeneration.6.0.7.nupkg - microsoft.visualstudio.web.codegeneration.entityframeworkcore.6.0.7.nupkg - microsoft.visualstudio.web.codegeneration.core.6.0.7.nupkg - microsoft.visualstudio.web.codegeneration.templating.6.0.7.nupkg - microsoft.visualstudio.web.codegeneration.utils.6.0.7.nupkg - microsoft.dotnet.scaffolding.shared.6.0.7.nupkg - nuget.projectmodel.5.11.0.nupkg - nuget.dependencyresolver.core.5.11.0.nupkg - :x: **nuget.protocol.5.11.0.nupkg** (Vulnerable Library) <p>Found in base branch: <b>main</b></p> </p> <p></p> ### Vulnerability Details <p> NuGet Client Elevation of Privilege Vulnerability. <p>Publish Date: 2022-10-11 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-41032>CVE-2022-41032</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>7.8</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Release Date: 2022-10-11</p> <p>Fix Resolution: NuGet.CommandLine - 4.9.6,5.7.3,5.9.3,5.11.3,6.0.3,6.2.2,6.3.1;NuGet.Commands - 4.9.6,5.7.3,5.9.3,5.11.3,6.0.3,6.2.2,6.3.1;NuGet.Protocol - 4.9.6,5.7.3,5.9.3,5.11.3,6.0.3,6.2.2,6.3.1 </p> </p> <p></p> </details><details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2018-8292</summary> ### Vulnerable Library - <b>system.net.http.4.3.0.nupkg</b></p> <p>Provides a programming interface for modern HTTP applications, including HTTP client components that allow applications to consume web services over HTTP and HTTP components that can be used by both clients and servers for parsing HTTP headers. </p> <p>Library home page: <a href="https://api.nuget.org/packages/system.net.http.4.3.0.nupkg">https://api.nuget.org/packages/system.net.http.4.3.0.nupkg</a></p> <p>Path to dependency file: /tests/FunctionalTests/FunctionalTests.csproj</p> <p>Path to vulnerable library: /home/wss-scanner/.nuget/packages/system.net.http/4.3.0/system.net.http.4.3.0.nupkg</p> <p> Dependency Hierarchy: - Web-1.0.0 (Root Library) - microsoft.visualstudio.web.codegeneration.design.6.0.7.nupkg - microsoft.visualstudio.web.codegenerators.mvc.6.0.7.nupkg - microsoft.visualstudio.web.codegeneration.6.0.7.nupkg - microsoft.visualstudio.web.codegeneration.entityframeworkcore.6.0.7.nupkg - microsoft.visualstudio.web.codegeneration.core.6.0.7.nupkg - microsoft.visualstudio.web.codegeneration.templating.6.0.7.nupkg - microsoft.visualstudio.web.codegeneration.utils.6.0.7.nupkg - microsoft.dotnet.scaffolding.shared.6.0.7.nupkg - microsoft.codeanalysis.csharp.features.4.0.0.nupkg - humanizer.core.2.2.0.nupkg - netstandard.library.1.6.1.nupkg - :x: **system.net.http.4.3.0.nupkg** (Vulnerable Library) <p>Found in base branch: <b>main</b></p> </p> <p></p> ### Vulnerability Details <p> An information disclosure vulnerability exists in 
.NET Core when authentication information is inadvertently exposed in a redirect, aka ".NET Core Information Disclosure Vulnerability." This affects .NET Core 2.1, .NET Core 1.0, .NET Core 1.1, PowerShell Core 6.0. <p>Publish Date: 2018-10-10 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2018-8292>CVE-2018-8292</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>7.5</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Release Date: 2018-10-10</p> <p>Fix Resolution: System.Net.Http - 4.3.4;Microsoft.PowerShell.Commands.Utility - 6.1.0-rc.1</p> </p> <p></p> </details><details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2019-0820</summary> ### Vulnerable Library - <b>system.text.regularexpressions.4.3.0.nupkg</b></p> <p>Provides the System.Text.RegularExpressions.Regex class, an implementation of a regular expression e...</p> <p>Library home page: <a href="https://api.nuget.org/packages/system.text.regularexpressions.4.3.0.nupkg">https://api.nuget.org/packages/system.text.regularexpressions.4.3.0.nupkg</a></p> <p>Path to dependency file: /tests/FunctionalTests/FunctionalTests.csproj</p> <p>Path to vulnerable library: /home/wss-scanner/.nuget/packages/system.text.regularexpressions/4.3.0/system.text.regularexpressions.4.3.0.nupkg</p> <p> Dependency Hierarchy: - Web-1.0.0 (Root Library) - microsoft.visualstudio.web.codegeneration.design.6.0.7.nupkg - microsoft.visualstudio.web.codegenerators.mvc.6.0.7.nupkg - microsoft.visualstudio.web.codegeneration.6.0.7.nupkg - 
microsoft.visualstudio.web.codegeneration.entityframeworkcore.6.0.7.nupkg - microsoft.visualstudio.web.codegeneration.core.6.0.7.nupkg - microsoft.visualstudio.web.codegeneration.templating.6.0.7.nupkg - microsoft.visualstudio.web.codegeneration.utils.6.0.7.nupkg - microsoft.dotnet.scaffolding.shared.6.0.7.nupkg - microsoft.codeanalysis.csharp.features.4.0.0.nupkg - humanizer.core.2.2.0.nupkg - netstandard.library.1.6.1.nupkg - system.xml.xdocument.4.3.0.nupkg - system.xml.readerwriter.4.3.0.nupkg - :x: **system.text.regularexpressions.4.3.0.nupkg** (Vulnerable Library) <p>Found in base branch: <b>main</b></p> </p> <p></p> ### Vulnerability Details <p> A denial of service vulnerability exists when .NET Framework and .NET Core improperly process RegEx strings, aka '.NET Framework and .NET Core Denial of Service Vulnerability'. This CVE ID is unique from CVE-2019-0980, CVE-2019-0981. Mend Note: After conducting further research, Mend has determined that CVE-2019-0820 only affects environments with versions 4.3.0 and 4.3.1 only on netcore50 environment of system.text.regularexpressions.nupkg. <p>Publish Date: 2019-05-16 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-0820>CVE-2019-0820</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>7.5</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/advisories/GHSA-cmhx-cq75-c4mj">https://github.com/advisories/GHSA-cmhx-cq75-c4mj</a></p> <p>Release Date: 2019-05-16</p> <p>Fix Resolution: System.Text.RegularExpressions - 4.3.1</p> </p> <p></p> </details>
index:        True
text_combine:
Web-1.0.0: 3 vulnerabilities (highest severity is: 7.8) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>Web-1.0.0</b></p></summary> <p></p> <p>Path to vulnerable library: /home/wss-scanner/.nuget/packages/system.text.regularexpressions/4.3.0/system.text.regularexpressions.4.3.0.nupkg</p> <p> </details> ## Vulnerabilities | CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (Web version) | Remediation Available | | ------------- | ------------- | ----- | ----- | ----- | ------------- | --- | | [CVE-2022-41032](https://www.mend.io/vulnerability-database/CVE-2022-41032) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.8 | nuget.protocol.5.11.0.nupkg | Transitive | N/A* | &#10060; | | [CVE-2018-8292](https://www.mend.io/vulnerability-database/CVE-2018-8292) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | system.net.http.4.3.0.nupkg | Transitive | N/A* | &#10060; | | [CVE-2019-0820](https://www.mend.io/vulnerability-database/CVE-2019-0820) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | system.text.regularexpressions.4.3.0.nupkg | Transitive | N/A* | &#10060; | <p>*For some transitive vulnerabilities, there is no version of direct dependency with a fix. Check the "Details" section below to see if there is a version of transitive dependency where vulnerability is fixed.</p> ## Details <details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-41032</summary> ### Vulnerable Library - <b>nuget.protocol.5.11.0.nupkg</b></p> <p>NuGet's implementation for interacting with feeds. 
Contains functionality for all feed types.</p> <p>Library home page: <a href="https://api.nuget.org/packages/nuget.protocol.5.11.0.nupkg">https://api.nuget.org/packages/nuget.protocol.5.11.0.nupkg</a></p> <p>Path to dependency file: /tests/FunctionalTests/FunctionalTests.csproj</p> <p>Path to vulnerable library: /home/wss-scanner/.nuget/packages/nuget.protocol/5.11.0/nuget.protocol.5.11.0.nupkg</p> <p> Dependency Hierarchy: - Web-1.0.0 (Root Library) - microsoft.visualstudio.web.codegeneration.design.6.0.7.nupkg - microsoft.visualstudio.web.codegenerators.mvc.6.0.7.nupkg - microsoft.visualstudio.web.codegeneration.6.0.7.nupkg - microsoft.visualstudio.web.codegeneration.entityframeworkcore.6.0.7.nupkg - microsoft.visualstudio.web.codegeneration.core.6.0.7.nupkg - microsoft.visualstudio.web.codegeneration.templating.6.0.7.nupkg - microsoft.visualstudio.web.codegeneration.utils.6.0.7.nupkg - microsoft.dotnet.scaffolding.shared.6.0.7.nupkg - nuget.projectmodel.5.11.0.nupkg - nuget.dependencyresolver.core.5.11.0.nupkg - :x: **nuget.protocol.5.11.0.nupkg** (Vulnerable Library) <p>Found in base branch: <b>main</b></p> </p> <p></p> ### Vulnerability Details <p> NuGet Client Elevation of Privilege Vulnerability. <p>Publish Date: 2022-10-11 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-41032>CVE-2022-41032</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>7.8</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Release Date: 2022-10-11</p> <p>Fix Resolution: NuGet.CommandLine - 4.9.6,5.7.3,5.9.3,5.11.3,6.0.3,6.2.2,6.3.1;NuGet.Commands - 4.9.6,5.7.3,5.9.3,5.11.3,6.0.3,6.2.2,6.3.1;NuGet.Protocol - 4.9.6,5.7.3,5.9.3,5.11.3,6.0.3,6.2.2,6.3.1 </p> </p> <p></p> </details><details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2018-8292</summary> ### Vulnerable Library - <b>system.net.http.4.3.0.nupkg</b></p> <p>Provides a programming interface for modern HTTP applications, including HTTP client components that allow applications to consume web services over HTTP and HTTP components that can be used by both clients and servers for parsing HTTP headers. </p> <p>Library home page: <a href="https://api.nuget.org/packages/system.net.http.4.3.0.nupkg">https://api.nuget.org/packages/system.net.http.4.3.0.nupkg</a></p> <p>Path to dependency file: /tests/FunctionalTests/FunctionalTests.csproj</p> <p>Path to vulnerable library: /home/wss-scanner/.nuget/packages/system.net.http/4.3.0/system.net.http.4.3.0.nupkg</p> <p> Dependency Hierarchy: - Web-1.0.0 (Root Library) - microsoft.visualstudio.web.codegeneration.design.6.0.7.nupkg - microsoft.visualstudio.web.codegenerators.mvc.6.0.7.nupkg - microsoft.visualstudio.web.codegeneration.6.0.7.nupkg - microsoft.visualstudio.web.codegeneration.entityframeworkcore.6.0.7.nupkg - microsoft.visualstudio.web.codegeneration.core.6.0.7.nupkg - microsoft.visualstudio.web.codegeneration.templating.6.0.7.nupkg - microsoft.visualstudio.web.codegeneration.utils.6.0.7.nupkg - microsoft.dotnet.scaffolding.shared.6.0.7.nupkg - microsoft.codeanalysis.csharp.features.4.0.0.nupkg - humanizer.core.2.2.0.nupkg - netstandard.library.1.6.1.nupkg - :x: **system.net.http.4.3.0.nupkg** (Vulnerable Library) <p>Found in base branch: <b>main</b></p> </p> <p></p> ### Vulnerability Details <p> An information disclosure vulnerability exists in 
.NET Core when authentication information is inadvertently exposed in a redirect, aka ".NET Core Information Disclosure Vulnerability." This affects .NET Core 2.1, .NET Core 1.0, .NET Core 1.1, PowerShell Core 6.0. <p>Publish Date: 2018-10-10 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2018-8292>CVE-2018-8292</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>7.5</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Release Date: 2018-10-10</p> <p>Fix Resolution: System.Net.Http - 4.3.4;Microsoft.PowerShell.Commands.Utility - 6.1.0-rc.1</p> </p> <p></p> </details><details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2019-0820</summary> ### Vulnerable Library - <b>system.text.regularexpressions.4.3.0.nupkg</b></p> <p>Provides the System.Text.RegularExpressions.Regex class, an implementation of a regular expression e...</p> <p>Library home page: <a href="https://api.nuget.org/packages/system.text.regularexpressions.4.3.0.nupkg">https://api.nuget.org/packages/system.text.regularexpressions.4.3.0.nupkg</a></p> <p>Path to dependency file: /tests/FunctionalTests/FunctionalTests.csproj</p> <p>Path to vulnerable library: /home/wss-scanner/.nuget/packages/system.text.regularexpressions/4.3.0/system.text.regularexpressions.4.3.0.nupkg</p> <p> Dependency Hierarchy: - Web-1.0.0 (Root Library) - microsoft.visualstudio.web.codegeneration.design.6.0.7.nupkg - microsoft.visualstudio.web.codegenerators.mvc.6.0.7.nupkg - microsoft.visualstudio.web.codegeneration.6.0.7.nupkg - 
microsoft.visualstudio.web.codegeneration.entityframeworkcore.6.0.7.nupkg - microsoft.visualstudio.web.codegeneration.core.6.0.7.nupkg - microsoft.visualstudio.web.codegeneration.templating.6.0.7.nupkg - microsoft.visualstudio.web.codegeneration.utils.6.0.7.nupkg - microsoft.dotnet.scaffolding.shared.6.0.7.nupkg - microsoft.codeanalysis.csharp.features.4.0.0.nupkg - humanizer.core.2.2.0.nupkg - netstandard.library.1.6.1.nupkg - system.xml.xdocument.4.3.0.nupkg - system.xml.readerwriter.4.3.0.nupkg - :x: **system.text.regularexpressions.4.3.0.nupkg** (Vulnerable Library) <p>Found in base branch: <b>main</b></p> </p> <p></p> ### Vulnerability Details <p> A denial of service vulnerability exists when .NET Framework and .NET Core improperly process RegEx strings, aka '.NET Framework and .NET Core Denial of Service Vulnerability'. This CVE ID is unique from CVE-2019-0980, CVE-2019-0981. Mend Note: After conducting further research, Mend has determined that CVE-2019-0820 only affects environments with versions 4.3.0 and 4.3.1 only on netcore50 environment of system.text.regularexpressions.nupkg. <p>Publish Date: 2019-05-16 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-0820>CVE-2019-0820</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>7.5</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/advisories/GHSA-cmhx-cq75-c4mj">https://github.com/advisories/GHSA-cmhx-cq75-c4mj</a></p> <p>Release Date: 2019-05-16</p> <p>Fix Resolution: System.Text.RegularExpressions - 4.3.1</p> </p> <p></p> </details>
label:        non_test
text:
web vulnerabilities highest severity is vulnerable library web path to vulnerable library home wss scanner nuget packages system text regularexpressions system text regularexpressions nupkg vulnerabilities cve severity cvss dependency type fixed in web version remediation available high nuget protocol nupkg transitive n a high system net http nupkg transitive n a high system text regularexpressions nupkg transitive n a for some transitive vulnerabilities there is no version of direct dependency with a fix check the details section below to see if there is a version of transitive dependency where vulnerability is fixed details cve vulnerable library nuget protocol nupkg nuget s implementation for interacting with feeds contains functionality for all feed types library home page a href path to dependency file tests functionaltests functionaltests csproj path to vulnerable library home wss scanner nuget packages nuget protocol nuget protocol nupkg dependency hierarchy web root library microsoft visualstudio web codegeneration design nupkg microsoft visualstudio web codegenerators mvc nupkg microsoft visualstudio web codegeneration nupkg microsoft visualstudio web codegeneration entityframeworkcore nupkg microsoft visualstudio web codegeneration core nupkg microsoft visualstudio web codegeneration templating nupkg microsoft visualstudio web codegeneration utils nupkg microsoft dotnet scaffolding shared nupkg nuget projectmodel nupkg nuget dependencyresolver core nupkg x nuget protocol nupkg vulnerable library found in base branch main vulnerability details nuget client elevation of privilege vulnerability publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version release date 
fix resolution nuget commandline nuget commands nuget protocol cve vulnerable library system net http nupkg provides a programming interface for modern http applications including http client components that allow applications to consume web services over http and http components that can be used by both clients and servers for parsing http headers library home page a href path to dependency file tests functionaltests functionaltests csproj path to vulnerable library home wss scanner nuget packages system net http system net http nupkg dependency hierarchy web root library microsoft visualstudio web codegeneration design nupkg microsoft visualstudio web codegenerators mvc nupkg microsoft visualstudio web codegeneration nupkg microsoft visualstudio web codegeneration entityframeworkcore nupkg microsoft visualstudio web codegeneration core nupkg microsoft visualstudio web codegeneration templating nupkg microsoft visualstudio web codegeneration utils nupkg microsoft dotnet scaffolding shared nupkg microsoft codeanalysis csharp features nupkg humanizer core nupkg netstandard library nupkg x system net http nupkg vulnerable library found in base branch main vulnerability details an information disclosure vulnerability exists in net core when authentication information is inadvertently exposed in a redirect aka net core information disclosure vulnerability this affects net core net core net core powershell core publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version release date fix resolution system net http microsoft powershell commands utility rc cve vulnerable library system text regularexpressions nupkg provides the system text regularexpressions regex class an 
implementation of a regular expression e library home page a href path to dependency file tests functionaltests functionaltests csproj path to vulnerable library home wss scanner nuget packages system text regularexpressions system text regularexpressions nupkg dependency hierarchy web root library microsoft visualstudio web codegeneration design nupkg microsoft visualstudio web codegenerators mvc nupkg microsoft visualstudio web codegeneration nupkg microsoft visualstudio web codegeneration entityframeworkcore nupkg microsoft visualstudio web codegeneration core nupkg microsoft visualstudio web codegeneration templating nupkg microsoft visualstudio web codegeneration utils nupkg microsoft dotnet scaffolding shared nupkg microsoft codeanalysis csharp features nupkg humanizer core nupkg netstandard library nupkg system xml xdocument nupkg system xml readerwriter nupkg x system text regularexpressions nupkg vulnerable library found in base branch main vulnerability details a denial of service vulnerability exists when net framework and net core improperly process regex strings aka net framework and net core denial of service vulnerability this cve id is unique from cve cve mend note after conducting further research mend has determined that cve only affects environments with versions and only on environment of system text regularexpressions nupkg publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution system text regularexpressions
binary_label: 0
Unnamed: 0:   235,703
id:           19,412,561,054
type:         IssuesEvent
created_at:   2021-12-20 11:13:20
repo:         bosagora/agora
repo_url:     https://api.github.com/repos/bosagora/agora
action:       closed
title:        PROD env often gets stuck on a block height with no errors logged
labels:       prio-URGENT type-bug type-testing
body:
Since we have fixed some of the issues that we detected were blocking the progress of the blockchain we no longer see in the logs why the next block is sometimes stuck in nomination. We should add some other logging info or debug the situation more to find out the cause.
index:        1.0
text_combine:
PROD env often gets stuck on a block height with no errors logged - Since we have fixed some of the issues that we detected were blocking the progress of the blockchain we no longer see in the logs why the next block is sometimes stuck in nomination. We should add some other logging info or debug the situation more to find out the cause.
label:        test
text:
prod env often gets stuck on a block height with no errors logged since we have fixed some of the issues that we detected were blocking the progress of the blockchain we no longer see in the logs why the next block is sometimes stuck in nomination we should add some other logging info or debug the situation more to find out the cause
binary_label: 1
Unnamed: 0:   158,518
id:           24,851,175,356
type:         IssuesEvent
created_at:   2022-10-26 20:13:16
repo:         phetsims/molecule-shapes
repo_url:     https://api.github.com/repos/phetsims/molecule-shapes
action:       closed
title:        Grammar change needed in PhET-iO guide
labels:       design:phet-io
body:
For https://github.com/phetsims/qa/issues/844 Under: **Loading an Existing Standard PhET-iO Wrapper for Revision, Migration, or Adaptation** section, this sentence could use some rewording: >Once edited, the file becomes a Custom PhET-iO Wrapper and may no longer be able to reliably load it in PhET-iO Studio for revision nor realize the benefits of version migration support. Looks like "you" is missing after "and" in the sentence.
index:        1.0
text_combine:
Grammar change needed in PhET-iO guide - For https://github.com/phetsims/qa/issues/844 Under: **Loading an Existing Standard PhET-iO Wrapper for Revision, Migration, or Adaptation** section, this sentence could use some rewording: >Once edited, the file becomes a Custom PhET-iO Wrapper and may no longer be able to reliably load it in PhET-iO Studio for revision nor realize the benefits of version migration support. Looks like "you" is missing after "and" in the sentence.
label:        non_test
text:
grammar change needed in phet io guide for under loading an existing standard phet io wrapper for revision migration or adaptation section this sentence could use some rewording once edited the file becomes a custom phet io wrapper and may no longer be able to reliably load it in phet io studio for revision nor realize the benefits of version migration support looks like you is missing after and in the sentence
binary_label: 0
Unnamed: 0:   755,407
id:           26,428,336,506
type:         IssuesEvent
created_at:   2023-01-14 13:39:15
repo:         robotframework/robotframework
repo_url:     https://api.github.com/repos/robotframework/robotframework
action:       closed
title:        Deprecate `name` argument of `TestSuite.from_model`
labels:       enhancement priority: low deprecation effort: small
body:
It's a bit odd to be able to configure the name of the creates suite when using `TestSuite.from_model` but no other attributes like `doc`. It doesn't make sense to support configuring others, because it's easy to do that after the suite is created either by setting attributes normally or by using the convenient `TestSuite.config` method like `TestSuite.from_model(model).config(name='X', doc='Y')`. Let's deprecate the name argument to make API more uniform.
index:        1.0
text_combine:
Deprecate `name` argument of `TestSuite.from_model` - It's a bit odd to be able to configure the name of the creates suite when using `TestSuite.from_model` but no other attributes like `doc`. It doesn't make sense to support configuring others, because it's easy to do that after the suite is created either by setting attributes normally or by using the convenient `TestSuite.config` method like `TestSuite.from_model(model).config(name='X', doc='Y')`. Let's deprecate the name argument to make API more uniform.
non_test
deprecate name argument of testsuite from model it s a bit odd to be able to configure the name of the creates suite when using testsuite from model but no other attributes like doc it doesn t make sense to support configuring others because it s easy to do that after the suite is created either by setting attributes normally or by using the convenient testsuite config method like testsuite from model model config name x doc y let s deprecate the name argument to make api more uniform
0
73,283
7,330,982,873
IssuesEvent
2018-03-05 11:52:18
MachoThemes/modula-lite
https://api.github.com/repos/MachoThemes/modula-lite
closed
WPML integration.
bug need testing
On any other language than the default one the number of images will be doubled and the gallery's grid will be messed up.
1.0
WPML integration. - On any other language than the default one the number of images will be doubled and the gallery's grid will be messed up.
test
wpml integration on any other language than the default one the number of images will be doubled and the gallery s grid will be messed up
1
16,715
10,555,845,601
IssuesEvent
2019-10-03 23:14:57
terraform-providers/terraform-provider-aws
https://api.github.com/repos/terraform-providers/terraform-provider-aws
closed
IAM group membership "does not support import"
enhancement good first issue service/iam
Terraform 0.12.0 provider.aws ~> 2.12 Action: Importing a existing group_membership as stated in the following documentation: https://www.terraform.io/docs/providers/aws/r/iam_user_group_membership.html Result: Error: resource aws_iam_group_membership doesn't support import Terraform will perform the following actions: # aws_iam_group_membership.secing will be created + resource "aws_iam_group_membership" "example1" { + group = "example1" + id = (known after apply) + name = "example1-group-membership" + users = [ + "user1", + "user2", + "user3", + "user4", + "user5", ] } ``` $ terraform import aws_iam_user_group_membership.example1 user1/user2/user3 ```
1.0
IAM group membership "does not support import" - Terraform 0.12.0 provider.aws ~> 2.12 Action: Importing a existing group_membership as stated in the following documentation: https://www.terraform.io/docs/providers/aws/r/iam_user_group_membership.html Result: Error: resource aws_iam_group_membership doesn't support import Terraform will perform the following actions: # aws_iam_group_membership.secing will be created + resource "aws_iam_group_membership" "example1" { + group = "example1" + id = (known after apply) + name = "example1-group-membership" + users = [ + "user1", + "user2", + "user3", + "user4", + "user5", ] } ``` $ terraform import aws_iam_user_group_membership.example1 user1/user2/user3 ```
non_test
iam group membership does not support import terraform provider aws action importing a existing group membership as stated in the following documentation result error resource aws iam group membership doesn t support import terraform will perform the following actions aws iam group membership secing will be created resource aws iam group membership group id known after apply name group membership users terraform import aws iam user group membership
0
273,427
20,791,433,422
IssuesEvent
2022-03-17 02:45:43
FStarLang/FStar
https://api.github.com/repos/FStarLang/FStar
closed
Improve Documentation
kind/enhancement kind/question component/documentation area/usability component/tutorial
There is a big lack of clear and easily found documentation for F*. The tutorial is nice but it is still very lacking. After reading through the tutorial, I had no notion of things like fuel, where different modules reside in the source code, etc. Is this being documented already, or is this something that can be worked on?
1.0
Improve Documentation - There is a big lack of clear and easily found documentation for F*. The tutorial is nice but it is still very lacking. After reading through the tutorial, I had no notion of things like fuel, where different modules reside in the source code, etc. Is this being documented already, or is this something that can be worked on?
non_test
improve documentation there is a big lack of clear and easily found documentation for f the tutorial is nice but it is still very lacking after reading through the tutorial i had no notion of things like fuel where different modules reside in the source code etc is this being documented already or is this something that can be worked on
0
203,947
15,395,800,423
IssuesEvent
2021-03-03 19:43:43
mby/lord
https://api.github.com/repos/mby/lord
closed
Invalid E2E Test Expectations
tests
Expectations for E2E tests are old and henceforth largely invalid. Must be recompiled into proper, valid test expectations!
1.0
Invalid E2E Test Expectations - Expectations for E2E tests are old and henceforth largely invalid. Must be recompiled into proper, valid test expectations!
test
invalid test expectations expectations for tests are old and henceforth largely invalid must be recompiled into proper valid test expectations
1
344,471
30,747,983,055
IssuesEvent
2023-07-28 16:33:04
elastic/kibana
https://api.github.com/repos/elastic/kibana
closed
Failing test: Health Gateway Functional Tests.test/health_gateway/tests/index·ts - health gateway returns 504 on timeout
Team:Core failed-test
A test failed on a tracked branch ``` Error: expected 504 "Gateway Timeout", got 503 "Service Unavailable" at Test._assertStatus (node_modules/supertest/lib/test.js:268:12) at Test._assertFunction (node_modules/supertest/lib/test.js:283:11) at Test.assert (node_modules/supertest/lib/test.js:173:18) at localAssert (node_modules/supertest/lib/test.js:131:12) at /home/buildkite-agent/builds/kb-n2-4-spot-e9a94b5d04746cd0/elastic/kibana-on-merge/kibana/node_modules/supertest/lib/test.js:128:5 at Test.Request.callback (node_modules/superagent/lib/node/index.js:728:3) at /home/buildkite-agent/builds/kb-n2-4-spot-e9a94b5d04746cd0/elastic/kibana-on-merge/kibana/node_modules/superagent/lib/node/index.js:916:18 at IncomingMessage.<anonymous> (node_modules/superagent/lib/node/parsers/json.js:19:7) at IncomingMessage.emit (node:events:525:35) at endReadableNT (node:internal/streams/readable:1358:12) at processTicksAndRejections (node:internal/process/task_queues:83:21) ``` First failure: [CI Build - 8.8](https://buildkite.com/elastic/kibana-on-merge/builds/30378#0188118d-dd7d-4e4d-82f7-e3671c0a0bb0) <!-- kibanaCiData = {"failed-test":{"test.class":"Health Gateway Functional Tests.test/health_gateway/tests/index·ts","test.name":"health gateway returns 504 on timeout","test.failCount":1}} -->
1.0
Failing test: Health Gateway Functional Tests.test/health_gateway/tests/index·ts - health gateway returns 504 on timeout - A test failed on a tracked branch ``` Error: expected 504 "Gateway Timeout", got 503 "Service Unavailable" at Test._assertStatus (node_modules/supertest/lib/test.js:268:12) at Test._assertFunction (node_modules/supertest/lib/test.js:283:11) at Test.assert (node_modules/supertest/lib/test.js:173:18) at localAssert (node_modules/supertest/lib/test.js:131:12) at /home/buildkite-agent/builds/kb-n2-4-spot-e9a94b5d04746cd0/elastic/kibana-on-merge/kibana/node_modules/supertest/lib/test.js:128:5 at Test.Request.callback (node_modules/superagent/lib/node/index.js:728:3) at /home/buildkite-agent/builds/kb-n2-4-spot-e9a94b5d04746cd0/elastic/kibana-on-merge/kibana/node_modules/superagent/lib/node/index.js:916:18 at IncomingMessage.<anonymous> (node_modules/superagent/lib/node/parsers/json.js:19:7) at IncomingMessage.emit (node:events:525:35) at endReadableNT (node:internal/streams/readable:1358:12) at processTicksAndRejections (node:internal/process/task_queues:83:21) ``` First failure: [CI Build - 8.8](https://buildkite.com/elastic/kibana-on-merge/builds/30378#0188118d-dd7d-4e4d-82f7-e3671c0a0bb0) <!-- kibanaCiData = {"failed-test":{"test.class":"Health Gateway Functional Tests.test/health_gateway/tests/index·ts","test.name":"health gateway returns 504 on timeout","test.failCount":1}} -->
test
failing test health gateway functional tests test health gateway tests index·ts health gateway returns on timeout a test failed on a tracked branch error expected gateway timeout got service unavailable at test assertstatus node modules supertest lib test js at test assertfunction node modules supertest lib test js at test assert node modules supertest lib test js at localassert node modules supertest lib test js at home buildkite agent builds kb spot elastic kibana on merge kibana node modules supertest lib test js at test request callback node modules superagent lib node index js at home buildkite agent builds kb spot elastic kibana on merge kibana node modules superagent lib node index js at incomingmessage node modules superagent lib node parsers json js at incomingmessage emit node events at endreadablent node internal streams readable at processticksandrejections node internal process task queues first failure
1
452,103
32,049,823,705
IssuesEvent
2023-09-23 12:24:18
ElektraInitiative/PermaplanT
https://api.github.com/repos/ElektraInitiative/PermaplanT
closed
E2E setup (windows)
documentation e2e
Basically a small docu for windows/mingw installations ### Tasks - [ ] Mention how to install python3-pip - [ ] Mention on which operating system the documentation/scripts work - [ ] Add env variables for clean_db.py (DATABASE_URL etc) ### Something else - [ ] Mention in the top readme that the makefile is only for fast execution of frequent tasks, for more detailed executions to look into the seperate folders readme
1.0
E2E setup (windows) - Basically a small docu for windows/mingw installations ### Tasks - [ ] Mention how to install python3-pip - [ ] Mention on which operating system the documentation/scripts work - [ ] Add env variables for clean_db.py (DATABASE_URL etc) ### Something else - [ ] Mention in the top readme that the makefile is only for fast execution of frequent tasks, for more detailed executions to look into the seperate folders readme
non_test
setup windows basically a small docu for windows mingw installations tasks mention how to install pip mention on which operating system the documentation scripts work add env variables for clean db py database url etc something else mention in the top readme that the makefile is only for fast execution of frequent tasks for more detailed executions to look into the seperate folders readme
0
218,421
16,990,246,661
IssuesEvent
2021-06-30 19:24:34
DynamoRIO/dynamorio
https://api.github.com/repos/DynamoRIO/dynamorio
opened
dwrap-test failed on Windows with output mismatch
Component-Tests OpSys-Windows
https://github.com/DynamoRIO/dynamorio/pull/4977/checks?check_run_id=2955546315 ``` 142/259 Test #142: code_api|client.drwrap-test ..................................***Failed Required regular expression not found. Regex=[^thread\.appdll process init ? <pre-level0> ? in level0 42 ? <pre-level1> ? in level1 42 1111 ? <pre-makes_tailcall> ? <pre-level2> ? in level2 1153 ? <post-level2> ? <post-makes_tailcall> ? <post-level1> ? level1 returned -4 ? <post-level0> ? level0 returned 42 ? <pre-skipme> ? skipme returned 7 and x=3 ? <pre-repeat#1> ? in repeatme with arg 3 ? <post-repeat#1> ? <pre-repeat#2> ? in repeatme with arg 2 ? <post-repeat#2> ? repeatme returned 2 ? replaceme returned 0 and x=6 ? replaceme2 returned 1 and x=999 ? replace_callsite returned 2 and x=777 ? <pre-preonly> ? in preonly ? in postonly ? <post-postonly> ? in skipme ? in postonly ? <pre-direct1> ? <post-direct1> ? <pre-direct2> ? <pre-direct1> ? <post-direct1> ? <post-direct2> ? <pre-direct1> ? in runlots 1024 ? <pre-long0> ? <pre-long1> ? long0 A ? <pre-long2> ? <pre-long3> ? long1 A ? <pre-long4> ? <pre-long5> ? long2 A ? <pre-long6> ? <pre-long7> ? long3 A ? <post-long7 abnormal> ? <post-long6 abnormal> ? <post-long5 abnormal> ? <post-long4 abnormal> ? <post-long3 abnormal> ? <post-long2 abnormal> ? <post-long1 abnormal> ? <post-long0 abnormal> ? longdone ? <pre-called_indirectly> ? <pre-called_indirectly_subcall> ? called_indirectly_subcall 43 ? <post-called_indirectly_subcall> ? called_indirectly 42 => 44 ? <post-called_indirectly> ? <pre-tailcall_test2> ? print_from_asm 1 ? <pre-tailcall_tail> ? print_from_asm 7 ? <post-tailcall_tail> ? <post-tailcall_test2> ? loaded library ? thread\.appdll process init ? <pre-level0> ? in level0 42 ? <pre-level1> ? in level1 42 1111 ? <pre-makes_tailcall> ? <pre-level2> ? in level2 1153 ? <post-level2> ? <post-makes_tailcall> ? <post-level1> ? level1 returned -4 ? <post-level0> ? level0 returned 42 ? <pre-skipme> ? skipme returned 7 and x=3 ? 
<pre-repeat#1> ? in repeatme with arg 3 ? <post-repeat#1> ? <pre-repeat#2> ? in repeatme with arg 2 ? <post-repeat#2> ? repeatme returned 2 ? replaceme returned 0 and x=6 ? replaceme2 returned 1 and x=999 ? replace_callsite returned 2 and x=777 ? <pre-preonly> ? in preonly ? in postonly ? <post-postonly> ? in skipme ? in postonly ? <pre-direct1> ? <post-direct1> ? <pre-direct2> ? <pre-direct1> ? <post-direct1> ? <post-direct2> ? <pre-direct1> ? in runlots 1024 ? <pre-long0> ? long0 A ? <pre-long1> ? long1 A ? <pre-long2> ? long2 A ? <pre-long3> ? long3 A ? <post-long3 abnormal> ? <post-long2 abnormal> ? <post-long1 abnormal> ? <post-long0 abnormal> ? longdone ? <pre-called_indirectly> ? <pre-called_indirectly_subcall> ? called_indirectly_subcall 43 ? <post-called_indirectly_subcall> ? called_indirectly 42 => 44 ? <post-called_indirectly> ? <pre-tailcall_test2> ? print_from_asm 1 ? <pre-tailcall_tail> ? print_from_asm 7 ? <post-tailcall_tail> ? <post-tailcall_test2> ? loaded library ? thank you for testing the client interface ? all done ? 
$ ] 2.20 sec thread.appdll process init <pre-level0> in level0 42 <pre-level1> in level1 42 1111 <pre-makes_tailcall> <pre-level2> in level2 1153 <post-level2> <post-makes_tailcall> <post-level1> level1 returned -4 <post-level0> level0 returned 42 <pre-skipme> skipme returned 7 and x=3 <pre-repeat#1> in repeatme with arg 3 <post-repeat#1> <pre-repeat#2> in repeatme with arg 2 <post-repeat#2> repeatme returned 2 replaceme returned 0 and x=6 replaceme2 returned 1 and x=999 replace_callsite returned 2 and x=777 <pre-preonly> in preonly in postonly <post-postonly> in skipme in postonly <pre-direct1> <post-direct1> <pre-direct2> <pre-direct1> <post-direct1> <post-direct2> <pre-direct1> in runlots 1024 <pre-long0> <pre-long1> long0 A <pre-long2> <pre-long3> long1 A <pre-long4> <pre-long5> long2 A <pre-long6> <pre-long7> long3 A <post-long7 abnormal> <post-long6 abnormal> <post-long5 abnormal> <post-long4 abnormal> <post-long3 abnormal> <post-long2 abnormal> <post-long1 abnormal> <post-long0 abnormal> longdone <pre-called_indirectly> <pre-called_indirectly_subcall> called_indirectly_subcall 43 <post-called_indirectly_subcall> called_indirectly 42 => 44 <post-called_indirectly> <pre-tailcall_test2> print_from_asm 1 <pre-tailcall_tail> print_from_asm 7 <post-tailcall_tail> <post-tailcall_test2> loaded library thread.appdll process init in level0 37 in level1 37 74 in level2 111 level1 returned 37 level0 returned 37 in skipme skipme returned -1 and x=4 in repeatme with arg 4 repeatme returned 4 replaceme returned 0 and x=6 replaceme2 returned 1 and x=999 replace_callsite returned 2 and x=777 in preonly in postonly in skipme in postonly <pre-direct1> <post-direct1> <pre-direct2> <pre-direct1> <post-direct1> <post-direct2> <pre-direct1> in runlots 1024 <pre-long0> long0 A <pre-long1> long1 A <pre-long2> long2 A <pre-long3> long3 A <post-long3 abnormal> <post-long2 abnormal> <post-long1 abnormal> <post-long0 abnormal> longdone called_indirectly_subcall 43 called_indirectly 42 
=> 44 print_from_asm 1 print_from_asm 7 loaded library thank you for testing the client interface all done ```
1.0
dwrap-test failed on Windows with output mismatch - https://github.com/DynamoRIO/dynamorio/pull/4977/checks?check_run_id=2955546315 ``` 142/259 Test #142: code_api|client.drwrap-test ..................................***Failed Required regular expression not found. Regex=[^thread\.appdll process init ? <pre-level0> ? in level0 42 ? <pre-level1> ? in level1 42 1111 ? <pre-makes_tailcall> ? <pre-level2> ? in level2 1153 ? <post-level2> ? <post-makes_tailcall> ? <post-level1> ? level1 returned -4 ? <post-level0> ? level0 returned 42 ? <pre-skipme> ? skipme returned 7 and x=3 ? <pre-repeat#1> ? in repeatme with arg 3 ? <post-repeat#1> ? <pre-repeat#2> ? in repeatme with arg 2 ? <post-repeat#2> ? repeatme returned 2 ? replaceme returned 0 and x=6 ? replaceme2 returned 1 and x=999 ? replace_callsite returned 2 and x=777 ? <pre-preonly> ? in preonly ? in postonly ? <post-postonly> ? in skipme ? in postonly ? <pre-direct1> ? <post-direct1> ? <pre-direct2> ? <pre-direct1> ? <post-direct1> ? <post-direct2> ? <pre-direct1> ? in runlots 1024 ? <pre-long0> ? <pre-long1> ? long0 A ? <pre-long2> ? <pre-long3> ? long1 A ? <pre-long4> ? <pre-long5> ? long2 A ? <pre-long6> ? <pre-long7> ? long3 A ? <post-long7 abnormal> ? <post-long6 abnormal> ? <post-long5 abnormal> ? <post-long4 abnormal> ? <post-long3 abnormal> ? <post-long2 abnormal> ? <post-long1 abnormal> ? <post-long0 abnormal> ? longdone ? <pre-called_indirectly> ? <pre-called_indirectly_subcall> ? called_indirectly_subcall 43 ? <post-called_indirectly_subcall> ? called_indirectly 42 => 44 ? <post-called_indirectly> ? <pre-tailcall_test2> ? print_from_asm 1 ? <pre-tailcall_tail> ? print_from_asm 7 ? <post-tailcall_tail> ? <post-tailcall_test2> ? loaded library ? thread\.appdll process init ? <pre-level0> ? in level0 42 ? <pre-level1> ? in level1 42 1111 ? <pre-makes_tailcall> ? <pre-level2> ? in level2 1153 ? <post-level2> ? <post-makes_tailcall> ? <post-level1> ? level1 returned -4 ? <post-level0> ? level0 returned 42 ? 
<pre-skipme> ? skipme returned 7 and x=3 ? <pre-repeat#1> ? in repeatme with arg 3 ? <post-repeat#1> ? <pre-repeat#2> ? in repeatme with arg 2 ? <post-repeat#2> ? repeatme returned 2 ? replaceme returned 0 and x=6 ? replaceme2 returned 1 and x=999 ? replace_callsite returned 2 and x=777 ? <pre-preonly> ? in preonly ? in postonly ? <post-postonly> ? in skipme ? in postonly ? <pre-direct1> ? <post-direct1> ? <pre-direct2> ? <pre-direct1> ? <post-direct1> ? <post-direct2> ? <pre-direct1> ? in runlots 1024 ? <pre-long0> ? long0 A ? <pre-long1> ? long1 A ? <pre-long2> ? long2 A ? <pre-long3> ? long3 A ? <post-long3 abnormal> ? <post-long2 abnormal> ? <post-long1 abnormal> ? <post-long0 abnormal> ? longdone ? <pre-called_indirectly> ? <pre-called_indirectly_subcall> ? called_indirectly_subcall 43 ? <post-called_indirectly_subcall> ? called_indirectly 42 => 44 ? <post-called_indirectly> ? <pre-tailcall_test2> ? print_from_asm 1 ? <pre-tailcall_tail> ? print_from_asm 7 ? <post-tailcall_tail> ? <post-tailcall_test2> ? loaded library ? thank you for testing the client interface ? all done ? 
$ ] 2.20 sec thread.appdll process init <pre-level0> in level0 42 <pre-level1> in level1 42 1111 <pre-makes_tailcall> <pre-level2> in level2 1153 <post-level2> <post-makes_tailcall> <post-level1> level1 returned -4 <post-level0> level0 returned 42 <pre-skipme> skipme returned 7 and x=3 <pre-repeat#1> in repeatme with arg 3 <post-repeat#1> <pre-repeat#2> in repeatme with arg 2 <post-repeat#2> repeatme returned 2 replaceme returned 0 and x=6 replaceme2 returned 1 and x=999 replace_callsite returned 2 and x=777 <pre-preonly> in preonly in postonly <post-postonly> in skipme in postonly <pre-direct1> <post-direct1> <pre-direct2> <pre-direct1> <post-direct1> <post-direct2> <pre-direct1> in runlots 1024 <pre-long0> <pre-long1> long0 A <pre-long2> <pre-long3> long1 A <pre-long4> <pre-long5> long2 A <pre-long6> <pre-long7> long3 A <post-long7 abnormal> <post-long6 abnormal> <post-long5 abnormal> <post-long4 abnormal> <post-long3 abnormal> <post-long2 abnormal> <post-long1 abnormal> <post-long0 abnormal> longdone <pre-called_indirectly> <pre-called_indirectly_subcall> called_indirectly_subcall 43 <post-called_indirectly_subcall> called_indirectly 42 => 44 <post-called_indirectly> <pre-tailcall_test2> print_from_asm 1 <pre-tailcall_tail> print_from_asm 7 <post-tailcall_tail> <post-tailcall_test2> loaded library thread.appdll process init in level0 37 in level1 37 74 in level2 111 level1 returned 37 level0 returned 37 in skipme skipme returned -1 and x=4 in repeatme with arg 4 repeatme returned 4 replaceme returned 0 and x=6 replaceme2 returned 1 and x=999 replace_callsite returned 2 and x=777 in preonly in postonly in skipme in postonly <pre-direct1> <post-direct1> <pre-direct2> <pre-direct1> <post-direct1> <post-direct2> <pre-direct1> in runlots 1024 <pre-long0> long0 A <pre-long1> long1 A <pre-long2> long2 A <pre-long3> long3 A <post-long3 abnormal> <post-long2 abnormal> <post-long1 abnormal> <post-long0 abnormal> longdone called_indirectly_subcall 43 called_indirectly 42 
=> 44 print_from_asm 1 print_from_asm 7 loaded library thank you for testing the client interface all done ```
test
dwrap test failed on windows with output mismatch test code api client drwrap test failed required regular expression not found regex thread appdll process init in in in returned returned skipme returned and x in repeatme with arg in repeatme with arg repeatme returned replaceme returned and x returned and x replace callsite returned and x in preonly in postonly in skipme in postonly in runlots a a a a longdone called indirectly subcall called indirectly print from asm print from asm loaded library thread appdll process init in in in returned returned skipme returned and x in repeatme with arg in repeatme with arg repeatme returned replaceme returned and x returned and x replace callsite returned and x in preonly in postonly in skipme in postonly in runlots a a a a longdone called indirectly subcall called indirectly print from asm print from asm loaded library thank you for testing the client interface all done sec thread appdll process init in in in returned returned skipme returned and x in repeatme with arg in repeatme with arg repeatme returned replaceme returned and x returned and x replace callsite returned and x in preonly in postonly in skipme in postonly in runlots a a a a longdone called indirectly subcall called indirectly print from asm print from asm loaded library thread appdll process init in in in returned returned in skipme skipme returned and x in repeatme with arg repeatme returned replaceme returned and x returned and x replace callsite returned and x in preonly in postonly in skipme in postonly in runlots a a a a longdone called indirectly subcall called indirectly print from asm print from asm loaded library thank you for testing the client interface all done
1
113,980
9,670,390,263
IssuesEvent
2019-05-21 19:47:49
aces/Loris
https://api.github.com/repos/aces/Loris
opened
[user_accounts] Undefined index: NA_Password when updating user settings.
21.0.0 Testing Bug Fix
I get the error below when updating a user account and saving. ``` Notice: Undefined index: NA_Password in /Users/alizee/Development/GitHub/McGill/Loris/modules/user_accounts/php/edit_user.class.inc on line 1040 -- 1 | 2.1424 | 415360 | {main}( ) | .../index.php:0 2 | 2.1736 | 782816 | LORIS\Middleware\ContentLength->process( ???, ??? ) | .../index.php:47 3 | 2.1736 | 782816 | LORIS\Middleware\ResponseGenerator->process( ???, ??? ) | .../ContentLength.php:51 4 | 2.1736 | 782816 | LORIS\Router\BaseRouter->handle( ??? ) | .../ResponseGenerator.php:49 5 | 2.1752 | 795424 | LORIS\Router\ModuleRouter->handle( ??? ) | .../BaseRouter.php:99 6 | 2.1755 | 797392 | LORIS\Middleware\AuthMiddleware->process( ???, ??? ) | .../ModuleRouter.php:97 7 | 2.1755 | 797392 | LORIS\Middleware\ResponseGenerator->process( ???, ??? ) | .../AuthMiddleware.php:62 8 | 2.1755 | 797392 | LORIS\user_accounts\Module->handle( ??? ) | .../ResponseGenerator.php:49 9 | 2.1759 | 813144 | LORIS\user_accounts\Edit_User->process( ???, ??? ) | .../module.class.inc:41 10 | 2.1761 | 814536 | LORIS\Middleware\PageDecorationMiddleware->process( ???, ??? ) | .../NDB_Page.class.inc:653 11 | 2.1829 | 884840 | LORIS\Middleware\UserPageDecorationMiddleware->process( ???, ??? ) | .../PageDecorationMiddleware.php:49 12 | 2.1945 | 939664 | LORIS\user_accounts\Edit_User->handle( ??? ) | .../UserPageDecorationMiddleware.php:171 13 | 2.1945 | 940144 | LORIS\user_accounts\Edit_User->handle( ??? ) | .../edit_user.class.inc:562 14 | 2.2054 | 1029064 | LORIS\user_accounts\Edit_User->save( ) | .../NDB_Form.class.inc:190 15 | 2.2054 | 1029064 | LorisForm->validate( ) | .../NDB_Form.class.inc:132 16 | 2.2070 | 1033728 | LORIS\user_accounts\Edit_User->_validateEditUser( ??? 
) | .../LorisForm.class.inc:1324 ``` ``` Notice: Undefined index: NA_Password in /Users/alizee/Development/GitHub/McGill/Loris/modules/user_accounts/php/edit_user.class.inc on line 1046 -- 1 | 2.1424 | 415360 | {main}( ) | .../index.php:0 2 | 2.1736 | 782816 | LORIS\Middleware\ContentLength->process( ???, ??? ) | .../index.php:47 3 | 2.1736 | 782816 | LORIS\Middleware\ResponseGenerator->process( ???, ??? ) | .../ContentLength.php:51 4 | 2.1736 | 782816 | LORIS\Router\BaseRouter->handle( ??? ) | .../ResponseGenerator.php:49 5 | 2.1752 | 795424 | LORIS\Router\ModuleRouter->handle( ??? ) | .../BaseRouter.php:99 6 | 2.1755 | 797392 | LORIS\Middleware\AuthMiddleware->process( ???, ??? ) | .../ModuleRouter.php:97 7 | 2.1755 | 797392 | LORIS\Middleware\ResponseGenerator->process( ???, ??? ) | .../AuthMiddleware.php:62 8 | 2.1755 | 797392 | LORIS\user_accounts\Module->handle( ??? ) | .../ResponseGenerator.php:49 9 | 2.1759 | 813144 | LORIS\user_accounts\Edit_User->process( ???, ??? ) | .../module.class.inc:41 10 | 2.1761 | 814536 | LORIS\Middleware\PageDecorationMiddleware->process( ???, ??? ) | .../NDB_Page.class.inc:653 11 | 2.1829 | 884840 | LORIS\Middleware\UserPageDecorationMiddleware->process( ???, ??? ) | .../PageDecorationMiddleware.php:49 12 | 2.1945 | 939664 | LORIS\user_accounts\Edit_User->handle( ??? ) | .../UserPageDecorationMiddleware.php:171 13 | 2.1945 | 940144 | LORIS\user_accounts\Edit_User->handle( ??? ) | .../edit_user.class.inc:562 14 | 2.2054 | 1029064 | LORIS\user_accounts\Edit_User->save( ) | .../NDB_Form.class.inc:190 15 | 2.2054 | 1029064 | LorisForm->validate( ) | .../NDB_Form.class.inc:132 16 | 2.2070 | 1033728 | LORIS\user_accounts\Edit_User->_validateEditUser( ??? ) | .../LorisForm.class.inc:1324 ```
1.0
[user_accounts] Undefined index: NA_Password when updating user settings. - I get the error below when updating a user account and saving. ``` Notice: Undefined index: NA_Password in /Users/alizee/Development/GitHub/McGill/Loris/modules/user_accounts/php/edit_user.class.inc on line 1040 -- 1 | 2.1424 | 415360 | {main}( ) | .../index.php:0 2 | 2.1736 | 782816 | LORIS\Middleware\ContentLength->process( ???, ??? ) | .../index.php:47 3 | 2.1736 | 782816 | LORIS\Middleware\ResponseGenerator->process( ???, ??? ) | .../ContentLength.php:51 4 | 2.1736 | 782816 | LORIS\Router\BaseRouter->handle( ??? ) | .../ResponseGenerator.php:49 5 | 2.1752 | 795424 | LORIS\Router\ModuleRouter->handle( ??? ) | .../BaseRouter.php:99 6 | 2.1755 | 797392 | LORIS\Middleware\AuthMiddleware->process( ???, ??? ) | .../ModuleRouter.php:97 7 | 2.1755 | 797392 | LORIS\Middleware\ResponseGenerator->process( ???, ??? ) | .../AuthMiddleware.php:62 8 | 2.1755 | 797392 | LORIS\user_accounts\Module->handle( ??? ) | .../ResponseGenerator.php:49 9 | 2.1759 | 813144 | LORIS\user_accounts\Edit_User->process( ???, ??? ) | .../module.class.inc:41 10 | 2.1761 | 814536 | LORIS\Middleware\PageDecorationMiddleware->process( ???, ??? ) | .../NDB_Page.class.inc:653 11 | 2.1829 | 884840 | LORIS\Middleware\UserPageDecorationMiddleware->process( ???, ??? ) | .../PageDecorationMiddleware.php:49 12 | 2.1945 | 939664 | LORIS\user_accounts\Edit_User->handle( ??? ) | .../UserPageDecorationMiddleware.php:171 13 | 2.1945 | 940144 | LORIS\user_accounts\Edit_User->handle( ??? ) | .../edit_user.class.inc:562 14 | 2.2054 | 1029064 | LORIS\user_accounts\Edit_User->save( ) | .../NDB_Form.class.inc:190 15 | 2.2054 | 1029064 | LorisForm->validate( ) | .../NDB_Form.class.inc:132 16 | 2.2070 | 1033728 | LORIS\user_accounts\Edit_User->_validateEditUser( ??? 
) | .../LorisForm.class.inc:1324 ``` ``` Notice: Undefined index: NA_Password in /Users/alizee/Development/GitHub/McGill/Loris/modules/user_accounts/php/edit_user.class.inc on line 1046 -- 1 | 2.1424 | 415360 | {main}( ) | .../index.php:0 2 | 2.1736 | 782816 | LORIS\Middleware\ContentLength->process( ???, ??? ) | .../index.php:47 3 | 2.1736 | 782816 | LORIS\Middleware\ResponseGenerator->process( ???, ??? ) | .../ContentLength.php:51 4 | 2.1736 | 782816 | LORIS\Router\BaseRouter->handle( ??? ) | .../ResponseGenerator.php:49 5 | 2.1752 | 795424 | LORIS\Router\ModuleRouter->handle( ??? ) | .../BaseRouter.php:99 6 | 2.1755 | 797392 | LORIS\Middleware\AuthMiddleware->process( ???, ??? ) | .../ModuleRouter.php:97 7 | 2.1755 | 797392 | LORIS\Middleware\ResponseGenerator->process( ???, ??? ) | .../AuthMiddleware.php:62 8 | 2.1755 | 797392 | LORIS\user_accounts\Module->handle( ??? ) | .../ResponseGenerator.php:49 9 | 2.1759 | 813144 | LORIS\user_accounts\Edit_User->process( ???, ??? ) | .../module.class.inc:41 10 | 2.1761 | 814536 | LORIS\Middleware\PageDecorationMiddleware->process( ???, ??? ) | .../NDB_Page.class.inc:653 11 | 2.1829 | 884840 | LORIS\Middleware\UserPageDecorationMiddleware->process( ???, ??? ) | .../PageDecorationMiddleware.php:49 12 | 2.1945 | 939664 | LORIS\user_accounts\Edit_User->handle( ??? ) | .../UserPageDecorationMiddleware.php:171 13 | 2.1945 | 940144 | LORIS\user_accounts\Edit_User->handle( ??? ) | .../edit_user.class.inc:562 14 | 2.2054 | 1029064 | LORIS\user_accounts\Edit_User->save( ) | .../NDB_Form.class.inc:190 15 | 2.2054 | 1029064 | LorisForm->validate( ) | .../NDB_Form.class.inc:132 16 | 2.2070 | 1033728 | LORIS\user_accounts\Edit_User->_validateEditUser( ??? ) | .../LorisForm.class.inc:1324 ```
test
undefined index na password when updating user settings i get the error below when updating a user account and saving notice undefined index na password in users alizee development github mcgill loris modules user accounts php edit user class inc on line  main index php loris middleware contentlength process index php loris middleware responsegenerator process contentlength php loris router baserouter handle responsegenerator php loris router modulerouter handle baserouter php loris middleware authmiddleware process modulerouter php loris middleware responsegenerator process authmiddleware php loris user accounts module handle responsegenerator php loris user accounts edit user process module class inc loris middleware pagedecorationmiddleware process ndb page class inc loris middleware userpagedecorationmiddleware process pagedecorationmiddleware php loris user accounts edit user handle userpagedecorationmiddleware php loris user accounts edit user handle edit user class inc loris user accounts edit user save ndb form class inc lorisform validate ndb form class inc loris user accounts edit user validateedituser lorisform class inc notice undefined index na password in users alizee development github mcgill loris modules user accounts php edit user class inc on line  main index php loris middleware contentlength process index php loris middleware responsegenerator process contentlength php loris router baserouter handle responsegenerator php loris router modulerouter handle baserouter php loris middleware authmiddleware process modulerouter php loris middleware responsegenerator process authmiddleware php loris user accounts module handle responsegenerator php loris user accounts edit user process module class inc loris middleware pagedecorationmiddleware process ndb page class inc loris middleware userpagedecorationmiddleware process pagedecorationmiddleware php loris user accounts edit user handle userpagedecorationmiddleware php loris user accounts edit user 
handle edit user class inc loris user accounts edit user save ndb form class inc lorisform validate ndb form class inc loris user accounts edit user validateedituser lorisform class inc
1
291,125
25,123,149,270
IssuesEvent
2022-11-09 09:43:39
influxdata/influxdb_iox
https://api.github.com/repos/influxdata/influxdb_iox
closed
Flaky test: `postgres::tests::test_catalog` (`iox_catalog`)
flaky test
``` ---- postgres::tests::test_catalog stdout ---- thread 'postgres::tests::test_catalog' panicked at 'assertion failed: `(left == right)` left: `[ParquetFile { id: ParquetFileId(55), shard_id: ShardId(18), namespace_id: NamespaceId(20), table_id: TableId(25), partition_id: PartitionId(24), object_store_id: c0ec0792-18f8-4151-bd90-3fa87d7cfa9a, max_sequence_number: SequenceNumber(140), min_time: Timestamp(1), max_time: Timestamp(10), to_delete: None, file_size_bytes: 1337, row_count: 0, compaction_level: FileNonOverlapped, created_at: Timestamp(1), column_set: ColumnSet([ColumnId(1), ColumnId(2)]) }, ParquetFile { id: ParquetFileId(53), shard_id: ShardId(18), namespace_id: NamespaceId(20), table_id: TableId(25), partition_id: PartitionId(24), object_store_id: 199f5d07-9c80-4ab2-b3de-5f31cc98dbcb, max_sequence_number: SequenceNumber(140), min_time: Timestamp(1), max_time: Timestamp(10), to_delete: None, file_size_bytes: 1337, row_count: 0, compaction_level: Initial, created_at: Timestamp(1), column_set: ColumnSet([ColumnId(1), ColumnId(2)]) }]`, right: `[ParquetFile { id: ParquetFileId(53), shard_id: ShardId(18), namespace_id: NamespaceId(20), table_id: TableId(25), partition_id: PartitionId(24), object_store_id: 199f5d07-9c80-4ab2-b3de-5f31cc98dbcb, max_sequence_number: SequenceNumber(140), min_time: Timestamp(1), max_time: Timestamp(10), to_delete: None, file_size_bytes: 1337, row_count: 0, compaction_level: Initial, created_at: Timestamp(1), column_set: ColumnSet([ColumnId(1), ColumnId(2)]) }, ParquetFile { id: ParquetFileId(55), shard_id: ShardId(18), namespace_id: NamespaceId(20), table_id: TableId(25), partition_id: PartitionId(24), object_store_id: c0ec0792-18f8-4151-bd90-3fa87d7cfa9a, max_sequence_number: SequenceNumber(140), min_time: Timestamp(1), max_time: Timestamp(10), to_delete: None, file_size_bytes: 1337, row_count: 0, compaction_level: FileNonOverlapped, created_at: Timestamp(1), column_set: ColumnSet([ColumnId(1), ColumnId(2)]) }]`', 
iox_catalog/src/interface.rs:3764:9 stack backtrace: 0: rust_begin_unwind at /rustc/a55dd71d5fb0ec5a6a3a9e8c27b2127ba491ce52/library/std/src/panicking.rs:584:5 1: core::panicking::panic_fmt at /rustc/a55dd71d5fb0ec5a6a3a9e8c27b2127ba491ce52/library/core/src/panicking.rs:142:14 2: core::panicking::assert_failed_inner 3: core::panicking::assert_failed at /rustc/a55dd71d5fb0ec5a6a3a9e8c27b2127ba491ce52/library/core/src/panicking.rs:181:5 4: iox_catalog::interface::test_helpers::test_list_by_partiton_not_to_delete::{{closure}} at ./src/interface.rs:3764:9 5: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll at /rustc/a55dd71d5fb0ec5a6a3a9e8c27b2127ba491ce52/library/core/src/future/mod.rs:91:19 6: iox_catalog::interface::test_helpers::test_catalog::{{closure}} at ./src/interface.rs:924:66 7: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll at /rustc/a55dd71d5fb0ec5a6a3a9e8c27b2127ba491ce52/library/core/src/future/mod.rs:91:19 8: iox_catalog::postgres::tests::test_catalog::{{closure}} at ./src/postgres.rs:2334:63 9: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll at /rustc/a55dd71d5fb0ec5a6a3a9e8c27b2127ba491ce52/library/core/src/future/mod.rs:91:19 10: <core::pin::Pin<P> as core::future::future::Future>::poll at /rustc/a55dd71d5fb0ec5a6a3a9e8c27b2127ba491ce52/library/core/src/future/future.rs:124:9 11: tokio::runtime::scheduler::current_thread::CoreGuard::block_on::{{closure}}::{{closure}}::{{closure}} at /usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.21.2/src/runtime/scheduler/current_thread.rs:525:48 12: tokio::coop::with_budget::{{closure}} at /usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.21.2/src/coop.rs:102:9 13: std::thread::local::LocalKey<T>::try_with at /rustc/a55dd71d5fb0ec5a6a3a9e8c27b2127ba491ce52/library/std/src/thread/local.rs:445:16 14: std::thread::local::LocalKey<T>::with at 
/rustc/a55dd71d5fb0ec5a6a3a9e8c27b2127ba491ce52/library/std/src/thread/local.rs:421:9 15: tokio::coop::with_budget at /usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.21.2/src/coop.rs:95:5 16: tokio::coop::budget at /usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.21.2/src/coop.rs:72:5 17: tokio::runtime::scheduler::current_thread::CoreGuard::block_on::{{closure}}::{{closure}} at /usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.21.2/src/runtime/scheduler/current_thread.rs:525:25 18: tokio::runtime::scheduler::current_thread::Context::enter at /usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.21.2/src/runtime/scheduler/current_thread.rs:349:19 19: tokio::runtime::scheduler::current_thread::CoreGuard::block_on::{{closure}} at /usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.21.2/src/runtime/scheduler/current_thread.rs:524:36 20: tokio::runtime::scheduler::current_thread::CoreGuard::enter::{{closure}} at /usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.21.2/src/runtime/scheduler/current_thread.rs:595:57 21: tokio::macros::scoped_tls::ScopedKey<T>::set at /usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.21.2/src/macros/scoped_tls.rs:61:9 22: tokio::runtime::scheduler::current_thread::CoreGuard::enter at /usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.21.2/src/runtime/scheduler/current_thread.rs:595:27 23: tokio::runtime::scheduler::current_thread::CoreGuard::block_on at /usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.21.2/src/runtime/scheduler/current_thread.rs:515:19 24: tokio::runtime::scheduler::current_thread::CurrentThread::block_on at /usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.21.2/src/runtime/scheduler/current_thread.rs:161:24 25: tokio::runtime::Runtime::block_on at /usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.21.2/src/runtime/mod.rs:490:46 26: 
iox_catalog::postgres::tests::test_catalog at ./src/postgres.rs:2334:9 27: iox_catalog::postgres::tests::test_catalog::{{closure}} at ./src/postgres.rs:2325:11 28: core::ops::function::FnOnce::call_once at /rustc/a55dd71d5fb0ec5a6a3a9e8c27b2127ba491ce52/library/core/src/ops/function.rs:248:5 29: core::ops::function::FnOnce::call_once at /rustc/a55dd71d5fb0ec5a6a3a9e8c27b2127ba491ce52/library/core/src/ops/function.rs:248:5 note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace. failures: postgres::tests::test_catalog test result: FAILED. 18 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 1.97s error: test failed, to rerun pass '-p iox_catalog --lib' ```
1.0
Flaky test: `postgres::tests::test_catalog` (`iox_catalog`) - ``` ---- postgres::tests::test_catalog stdout ---- thread 'postgres::tests::test_catalog' panicked at 'assertion failed: `(left == right)` left: `[ParquetFile { id: ParquetFileId(55), shard_id: ShardId(18), namespace_id: NamespaceId(20), table_id: TableId(25), partition_id: PartitionId(24), object_store_id: c0ec0792-18f8-4151-bd90-3fa87d7cfa9a, max_sequence_number: SequenceNumber(140), min_time: Timestamp(1), max_time: Timestamp(10), to_delete: None, file_size_bytes: 1337, row_count: 0, compaction_level: FileNonOverlapped, created_at: Timestamp(1), column_set: ColumnSet([ColumnId(1), ColumnId(2)]) }, ParquetFile { id: ParquetFileId(53), shard_id: ShardId(18), namespace_id: NamespaceId(20), table_id: TableId(25), partition_id: PartitionId(24), object_store_id: 199f5d07-9c80-4ab2-b3de-5f31cc98dbcb, max_sequence_number: SequenceNumber(140), min_time: Timestamp(1), max_time: Timestamp(10), to_delete: None, file_size_bytes: 1337, row_count: 0, compaction_level: Initial, created_at: Timestamp(1), column_set: ColumnSet([ColumnId(1), ColumnId(2)]) }]`, right: `[ParquetFile { id: ParquetFileId(53), shard_id: ShardId(18), namespace_id: NamespaceId(20), table_id: TableId(25), partition_id: PartitionId(24), object_store_id: 199f5d07-9c80-4ab2-b3de-5f31cc98dbcb, max_sequence_number: SequenceNumber(140), min_time: Timestamp(1), max_time: Timestamp(10), to_delete: None, file_size_bytes: 1337, row_count: 0, compaction_level: Initial, created_at: Timestamp(1), column_set: ColumnSet([ColumnId(1), ColumnId(2)]) }, ParquetFile { id: ParquetFileId(55), shard_id: ShardId(18), namespace_id: NamespaceId(20), table_id: TableId(25), partition_id: PartitionId(24), object_store_id: c0ec0792-18f8-4151-bd90-3fa87d7cfa9a, max_sequence_number: SequenceNumber(140), min_time: Timestamp(1), max_time: Timestamp(10), to_delete: None, file_size_bytes: 1337, row_count: 0, compaction_level: FileNonOverlapped, created_at: Timestamp(1), 
column_set: ColumnSet([ColumnId(1), ColumnId(2)]) }]`', iox_catalog/src/interface.rs:3764:9 stack backtrace: 0: rust_begin_unwind at /rustc/a55dd71d5fb0ec5a6a3a9e8c27b2127ba491ce52/library/std/src/panicking.rs:584:5 1: core::panicking::panic_fmt at /rustc/a55dd71d5fb0ec5a6a3a9e8c27b2127ba491ce52/library/core/src/panicking.rs:142:14 2: core::panicking::assert_failed_inner 3: core::panicking::assert_failed at /rustc/a55dd71d5fb0ec5a6a3a9e8c27b2127ba491ce52/library/core/src/panicking.rs:181:5 4: iox_catalog::interface::test_helpers::test_list_by_partiton_not_to_delete::{{closure}} at ./src/interface.rs:3764:9 5: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll at /rustc/a55dd71d5fb0ec5a6a3a9e8c27b2127ba491ce52/library/core/src/future/mod.rs:91:19 6: iox_catalog::interface::test_helpers::test_catalog::{{closure}} at ./src/interface.rs:924:66 7: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll at /rustc/a55dd71d5fb0ec5a6a3a9e8c27b2127ba491ce52/library/core/src/future/mod.rs:91:19 8: iox_catalog::postgres::tests::test_catalog::{{closure}} at ./src/postgres.rs:2334:63 9: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll at /rustc/a55dd71d5fb0ec5a6a3a9e8c27b2127ba491ce52/library/core/src/future/mod.rs:91:19 10: <core::pin::Pin<P> as core::future::future::Future>::poll at /rustc/a55dd71d5fb0ec5a6a3a9e8c27b2127ba491ce52/library/core/src/future/future.rs:124:9 11: tokio::runtime::scheduler::current_thread::CoreGuard::block_on::{{closure}}::{{closure}}::{{closure}} at /usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.21.2/src/runtime/scheduler/current_thread.rs:525:48 12: tokio::coop::with_budget::{{closure}} at /usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.21.2/src/coop.rs:102:9 13: std::thread::local::LocalKey<T>::try_with at /rustc/a55dd71d5fb0ec5a6a3a9e8c27b2127ba491ce52/library/std/src/thread/local.rs:445:16 14: 
std::thread::local::LocalKey<T>::with at /rustc/a55dd71d5fb0ec5a6a3a9e8c27b2127ba491ce52/library/std/src/thread/local.rs:421:9 15: tokio::coop::with_budget at /usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.21.2/src/coop.rs:95:5 16: tokio::coop::budget at /usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.21.2/src/coop.rs:72:5 17: tokio::runtime::scheduler::current_thread::CoreGuard::block_on::{{closure}}::{{closure}} at /usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.21.2/src/runtime/scheduler/current_thread.rs:525:25 18: tokio::runtime::scheduler::current_thread::Context::enter at /usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.21.2/src/runtime/scheduler/current_thread.rs:349:19 19: tokio::runtime::scheduler::current_thread::CoreGuard::block_on::{{closure}} at /usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.21.2/src/runtime/scheduler/current_thread.rs:524:36 20: tokio::runtime::scheduler::current_thread::CoreGuard::enter::{{closure}} at /usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.21.2/src/runtime/scheduler/current_thread.rs:595:57 21: tokio::macros::scoped_tls::ScopedKey<T>::set at /usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.21.2/src/macros/scoped_tls.rs:61:9 22: tokio::runtime::scheduler::current_thread::CoreGuard::enter at /usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.21.2/src/runtime/scheduler/current_thread.rs:595:27 23: tokio::runtime::scheduler::current_thread::CoreGuard::block_on at /usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.21.2/src/runtime/scheduler/current_thread.rs:515:19 24: tokio::runtime::scheduler::current_thread::CurrentThread::block_on at /usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.21.2/src/runtime/scheduler/current_thread.rs:161:24 25: tokio::runtime::Runtime::block_on at 
/usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.21.2/src/runtime/mod.rs:490:46 26: iox_catalog::postgres::tests::test_catalog at ./src/postgres.rs:2334:9 27: iox_catalog::postgres::tests::test_catalog::{{closure}} at ./src/postgres.rs:2325:11 28: core::ops::function::FnOnce::call_once at /rustc/a55dd71d5fb0ec5a6a3a9e8c27b2127ba491ce52/library/core/src/ops/function.rs:248:5 29: core::ops::function::FnOnce::call_once at /rustc/a55dd71d5fb0ec5a6a3a9e8c27b2127ba491ce52/library/core/src/ops/function.rs:248:5 note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace. failures: postgres::tests::test_catalog test result: FAILED. 18 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 1.97s error: test failed, to rerun pass '-p iox_catalog --lib' ```
test
flaky test postgres tests test catalog iox catalog postgres tests test catalog stdout thread postgres tests test catalog panicked at assertion failed left right left parquetfile id parquetfileid shard id shardid namespace id namespaceid table id tableid partition id partitionid object store id max sequence number sequencenumber min time timestamp max time timestamp to delete none file size bytes row count compaction level initial created at timestamp column set columnset right parquetfile id parquetfileid shard id shardid namespace id namespaceid table id tableid partition id partitionid object store id max sequence number sequencenumber min time timestamp max time timestamp to delete none file size bytes row count compaction level filenonoverlapped created at timestamp column set columnset iox catalog src interface rs stack backtrace rust begin unwind at rustc library std src panicking rs core panicking panic fmt at rustc library core src panicking rs core panicking assert failed inner core panicking assert failed at rustc library core src panicking rs iox catalog interface test helpers test list by partiton not to delete closure at src interface rs as core future future future poll at rustc library core src future mod rs iox catalog interface test helpers test catalog closure at src interface rs as core future future future poll at rustc library core src future mod rs iox catalog postgres tests test catalog closure at src postgres rs as core future future future poll at rustc library core src future mod rs as core future future future poll at rustc library core src future future rs tokio runtime scheduler current thread coreguard block on closure closure closure at usr local cargo registry src github com tokio src runtime scheduler current thread rs tokio coop with budget closure at usr local cargo registry src github com tokio src coop rs std thread local localkey try with at rustc library std src thread local rs std thread local localkey with at rustc library 
std src thread local rs tokio coop with budget at usr local cargo registry src github com tokio src coop rs tokio coop budget at usr local cargo registry src github com tokio src coop rs tokio runtime scheduler current thread coreguard block on closure closure at usr local cargo registry src github com tokio src runtime scheduler current thread rs tokio runtime scheduler current thread context enter at usr local cargo registry src github com tokio src runtime scheduler current thread rs tokio runtime scheduler current thread coreguard block on closure at usr local cargo registry src github com tokio src runtime scheduler current thread rs tokio runtime scheduler current thread coreguard enter closure at usr local cargo registry src github com tokio src runtime scheduler current thread rs tokio macros scoped tls scopedkey set at usr local cargo registry src github com tokio src macros scoped tls rs tokio runtime scheduler current thread coreguard enter at usr local cargo registry src github com tokio src runtime scheduler current thread rs tokio runtime scheduler current thread coreguard block on at usr local cargo registry src github com tokio src runtime scheduler current thread rs tokio runtime scheduler current thread currentthread block on at usr local cargo registry src github com tokio src runtime scheduler current thread rs tokio runtime runtime block on at usr local cargo registry src github com tokio src runtime mod rs iox catalog postgres tests test catalog at src postgres rs iox catalog postgres tests test catalog closure at src postgres rs core ops function fnonce call once at rustc library core src ops function rs core ops function fnonce call once at rustc library core src ops function rs note some details are omitted run with rust backtrace full for a verbose backtrace failures postgres tests test catalog test result failed passed failed ignored measured filtered out finished in error test failed to rerun pass p iox catalog lib
1
175,143
13,536,467,018
IssuesEvent
2020-09-16 09:05:10
haiwen/seahub
https://api.github.com/repos/haiwen/seahub
closed
Upload: File menu not visible on hover anymore.
need test
Uploading a file using the upload button from the context menu leads to all files not being hoverable anymore (icons to apply actions aren't visible) until one clicks somewhere on the page to change the focus. See https://app.seafile.de/f/d085ef9e60/?raw=1 (mouse is above one of the file items) Correct would be e.g. https://app.seafile.de/f/67c3accd7b/?raw=1
1.0
Upload: File menu not visible on hover anymore. - Uploading a file using the upload button from the context menu leads to all files not being hoverable anymore (icons to apply actions aren't visible) until one clicks somewhere on the page to change the focus. See https://app.seafile.de/f/d085ef9e60/?raw=1 (mouse is above one of the file items) Correct would be e.g. https://app.seafile.de/f/67c3accd7b/?raw=1
test
upload file menu not visible on hover anymore uploading a file using the upload button from the context menu leads to all files not being hoverable anymore icons to apply actions aren t visible until one clicks somewhere on the page to change the focus see mouse is above one of the file items correct would be e g
1
189,030
14,483,252,443
IssuesEvent
2020-12-10 14:57:20
department-of-veterans-affairs/va.gov-team
https://api.github.com/repos/department-of-veterans-affairs/va.gov-team
opened
Update "Create load test sessions" Jenkins Rake Task to Include Corresponding CSRF Token
QA VSP-testing-team testing
## User Story As a QA engineer creating and executing load tests on the platform, I need the [rake task used to create session cookies](http://jenkins.vfs.va.gov/job/rake_tasks/job/vets-api-load-test-sessions/) to include the corresponding CSRF token. ## Expected Behavior The console output for the rake task includes corresponding X-CSRF Tokens for each session id. ## Acceptance Criteria - [ ] X-CSRF Token is included in the console output for the rake task. - [ ] It's easy enough to copy the related session id and csrf token out into a JSON object.
2.0
Update "Create load test sessions" Jenkins Rake Task to Include Corresponding CSRF Token - ## User Story As a QA engineer creating and executing load tests on the platform, I need the [rake task used to create session cookies](http://jenkins.vfs.va.gov/job/rake_tasks/job/vets-api-load-test-sessions/) to include the corresponding CSRF token. ## Expected Behavior The console output for the rake task includes corresponding X-CSRF Tokens for each session id. ## Acceptance Criteria - [ ] X-CSRF Token is included in the console output for the rake task. - [ ] It's easy enough to copy the related session id and csrf token out into a JSON object.
test
update create load test sessions jenkins rake task to include corresponding csrf token user story as a qa engineer creating and executing load tests on the platform i need the to include the corresponding csrf token expected behavior the console output for the rake task includes corresponding x csrf tokens for each session id acceptance criteria x csrf token is included in the console output for the rake task it s easy enough to copy the related session id and csrf token out into a json object
1
231,599
7,641,170,523
IssuesEvent
2018-05-08 03:08:15
Darkosto/SevTech-Ages
https://api.github.com/repos/Darkosto/SevTech-Ages
closed
Suggestion: Whitelist Twilight Forest Dim for Betweenlands Portals
Category: Config Priority: Low Type: Enhancement
Betweenlands portals work correctly both ways even when created in other dimensions. To enable that though the dimensions have to be whitelisted in the Betweenlands config in `World And Dimension > Portal dimensions whitelist`. Would be nice for people who want to set up a base in the Twilight Forest and still have quick access to the Betweenlands instead of having to go indirectly through the Overworld and then through the BL portal.
1.0
Suggestion: Whitelist Twilight Forest Dim for Betweenlands Portals - Betweenlands portals work correctly both ways even when created in other dimensions. To enable that though the dimensions have to be whitelisted in the Betweenlands config in `World And Dimension > Portal dimensions whitelist`. Would be nice for people who want to set up a base in the Twilight Forest and still have quick access to the Betweenlands instead of having to go indirectly through the Overworld and then through the BL portal.
non_test
suggestion whitelist twilight forest dim for betweenlands portals betweenlands portals work correctly both ways even when created in other dimensions to enable that though the dimensions have to be whitelisted in the betweenlands config in world and dimension portal dimensions whitelist would be nice for people who want to set up a base in the twilight forest and still have quick access to the betweenlands instead of having to go indirectly through the overworld and then through the bl portal
0
224,396
17,691,632,456
IssuesEvent
2021-08-24 10:40:48
finos/waltz
https://api.github.com/repos/finos/waltz
closed
Attestations: cancelling attestation resets the 'next' toggle to false
fixed (test & close) QoL
....should leave it alone.
1.0
Attestations: cancelling attestation resets the 'next' toggle to false - ....should leave it alone.
test
attestations cancelling attestation resets the next toggle to false should leave it alone
1
743,397
25,897,304,656
IssuesEvent
2022-12-15 00:13:02
magento/magento2
https://api.github.com/repos/magento/magento2
closed
Fatal when using GraphQl query
Issue: Confirmed Progress: PR Created Component: GraphQL Reproduced on 2.4.x Priority: P1 Progress: done Issue: ready for confirmation Area: Product Reported on 2.4.5
### Preconditions and environment - Magento 2.4.5 - Default Magento ElasticSearch7 - Not empty database prefix, for example "pref" ### Steps to reproduce 1. Run this graphQl query: ``` query { products(filter: {category_id: {eq: "2"}}, sort: {price:ASC}, currentPage: 1) { items { ...productAttributesForListing ... on ConfigurableProduct { variants { __typename } } } } } fragment productAttributesForListing on ProductInterface { categories { breadcrumbs { category_name } } } ``` ### Expected result Results without error: ![image](https://user-images.githubusercontent.com/90465167/189161206-7462330c-2fbd-4de0-b2f2-6a6f65ddf031.png) ### Actual result Fatal Error: ![image](https://user-images.githubusercontent.com/90465167/189159984-5e8deb4e-e41f-48db-814a-fb711a2220d8.png) ### Additional information This error happened because the subquery `SELECT 1 FROM store_group WHERE cat_index.category_id IN (store_group.root_category_id` is missing a table prefix for `store_group` See `\Magento\CatalogGraphQl\Model\Resolver\Product\ProductCategories::getCategoryIdsByProduct` ### Release note _No response_ ### Triage and priority - [ ] Severity: **S0** _- Affects critical data or functionality and leaves users without workaround._ - [ ] Severity: **S1** _- Affects critical data or functionality and forces users to employ a workaround._ - [ ] Severity: **S2** _- Affects non-critical data or functionality and forces users to employ a workaround._ - [X] Severity: **S3** _- Affects non-critical data or functionality and does not force users to employ a workaround._ - [ ] Severity: **S4** _- Affects aesthetics, professional look and feel, “quality” or “usability”._
1.0
Fatal when using GraphQl query - ### Preconditions and environment - Magento 2.4.5 - Default Magento ElasticSearch7 - Not empty database prefix, for example "pref" ### Steps to reproduce 1. Run this graphQl query: ``` query { products(filter: {category_id: {eq: "2"}}, sort: {price:ASC}, currentPage: 1) { items { ...productAttributesForListing ... on ConfigurableProduct { variants { __typename } } } } } fragment productAttributesForListing on ProductInterface { categories { breadcrumbs { category_name } } } ``` ### Expected result Results without error: ![image](https://user-images.githubusercontent.com/90465167/189161206-7462330c-2fbd-4de0-b2f2-6a6f65ddf031.png) ### Actual result Fatal Error: ![image](https://user-images.githubusercontent.com/90465167/189159984-5e8deb4e-e41f-48db-814a-fb711a2220d8.png) ### Additional information This error happened because the subquery `SELECT 1 FROM store_group WHERE cat_index.category_id IN (store_group.root_category_id` is missing a table prefix for `store_group` See `\Magento\CatalogGraphQl\Model\Resolver\Product\ProductCategories::getCategoryIdsByProduct` ### Release note _No response_ ### Triage and priority - [ ] Severity: **S0** _- Affects critical data or functionality and leaves users without workaround._ - [ ] Severity: **S1** _- Affects critical data or functionality and forces users to employ a workaround._ - [ ] Severity: **S2** _- Affects non-critical data or functionality and forces users to employ a workaround._ - [X] Severity: **S3** _- Affects non-critical data or functionality and does not force users to employ a workaround._ - [ ] Severity: **S4** _- Affects aesthetics, professional look and feel, “quality” or “usability”._
non_test
fatal when using graphql query preconditions and environment magento default magento not empty database prefix for example pref steps to reproduce run this graphql query query products filter category id eq sort price asc currentpage items productattributesforlisting on configurableproduct variants typename fragment productattributesforlisting on productinterface categories breadcrumbs category name expected result results without error actual result fatal error additional information this error happened because the subquery select from store group where cat index category id in store group root category id is missing a table prefix for store group see magento cataloggraphql model resolver product productcategories getcategoryidsbyproduct release note no response triage and priority severity affects critical data or functionality and leaves users without workaround severity affects critical data or functionality and forces users to employ a workaround severity affects non critical data or functionality and forces users to employ a workaround severity affects non critical data or functionality and does not force users to employ a workaround severity affects aesthetics professional look and feel “quality” or “usability”
0
46,915
5,833,690,466
IssuesEvent
2017-05-09 02:49:55
dotnet/corefx
https://api.github.com/repos/dotnet/corefx
closed
Test: System.IO.Compression.Tests.zip_ManualAndCompatabilityTests/ZipWithInvalidFileNames_ParsedBasedOnSourceOS failed with "System.ArgumentException"
area-System.IO.Compression test-run-desktop
Opened on behalf of @Jiayili1 The test `System.IO.Compression.Tests.zip_ManualAndCompatabilityTests/ZipWithInvalidFileNames_ParsedBasedOnSourceOS(zipName: \"NullCharFileName_FromUnix.zip\", fileName: \"a\\06b6d\")` has failed. System.ArgumentException : Illegal characters in path. Stack Trace: at System.IO.Path.CheckInvalidPathChars(String path, Boolean checkAdditional) at System.IO.Path.GetFileName(String path) at System.IO.Compression.ZipHelper.EndsWithDirChar(String test) at System.IO.Compression.ZipArchiveEntry.set_FullName(String value) at System.IO.Compression.ZipArchiveEntry..ctor(ZipArchive archive, ZipCentralDirectoryFileHeader cd) at System.IO.Compression.ZipArchive.ReadCentralDirectory() at System.IO.Compression.ZipArchive.get_Entries() at System.IO.Compression.Tests.zip_ManualAndCompatabilityTests.<ZipWithInvalidFileNames_ParsedBasedOnSourceOS>d__3.MoveNext() in E:\A\_work\60\s\corefx\src\System.IO.Compression\tests\ZipArchive\zip_ManualAndCompatibilityTests.cs:line 58 --- End of stack trace from previous location where exception was thrown --- at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task) at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) --- End of stack trace from previous location where exception was thrown --- at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task) at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) --- End of stack trace from previous location where exception was thrown --- at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task) at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) Build : Master - 20170412.01 (Full Framework Tests) Failing configurations: - Windows.10.Amd64 - x86-Release - x64-Release - x86-Debug - x64-Debug Detail: 
https://mc.dot.net/#/product/netcore/master/source/official~2Fcorefx~2Fmaster~2F/type/test~2Ffunctional~2Fdesktop~2Fcli~2F/build/20170412.01/workItem/System.IO.Compression.Tests/analysis/xunit/System.IO.Compression.Tests.zip_ManualAndCompatabilityTests~2FZipWithInvalidFileNames_ParsedBasedOnSourceOS(zipName:%20%5C%22NullCharFileName_FromUnix.zip%5C%22,%20fileName:%20%5C%22a%5C%5C06b6d%5C%22)
1.0
Test: System.IO.Compression.Tests.zip_ManualAndCompatabilityTests/ZipWithInvalidFileNames_ParsedBasedOnSourceOS failed with "System.ArgumentException" - Opened on behalf of @Jiayili1 The test `System.IO.Compression.Tests.zip_ManualAndCompatabilityTests/ZipWithInvalidFileNames_ParsedBasedOnSourceOS(zipName: \"NullCharFileName_FromUnix.zip\", fileName: \"a\\06b6d\")` has failed. System.ArgumentException : Illegal characters in path. Stack Trace: at System.IO.Path.CheckInvalidPathChars(String path, Boolean checkAdditional) at System.IO.Path.GetFileName(String path) at System.IO.Compression.ZipHelper.EndsWithDirChar(String test) at System.IO.Compression.ZipArchiveEntry.set_FullName(String value) at System.IO.Compression.ZipArchiveEntry..ctor(ZipArchive archive, ZipCentralDirectoryFileHeader cd) at System.IO.Compression.ZipArchive.ReadCentralDirectory() at System.IO.Compression.ZipArchive.get_Entries() at System.IO.Compression.Tests.zip_ManualAndCompatabilityTests.<ZipWithInvalidFileNames_ParsedBasedOnSourceOS>d__3.MoveNext() in E:\A\_work\60\s\corefx\src\System.IO.Compression\tests\ZipArchive\zip_ManualAndCompatibilityTests.cs:line 58 --- End of stack trace from previous location where exception was thrown --- at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task) at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) --- End of stack trace from previous location where exception was thrown --- at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task) at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) --- End of stack trace from previous location where exception was thrown --- at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task) at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) Build : Master - 20170412.01 (Full Framework Tests) Failing configurations: - Windows.10.Amd64 
- x86-Release - x64-Release - x86-Debug - x64-Debug Detail: https://mc.dot.net/#/product/netcore/master/source/official~2Fcorefx~2Fmaster~2F/type/test~2Ffunctional~2Fdesktop~2Fcli~2F/build/20170412.01/workItem/System.IO.Compression.Tests/analysis/xunit/System.IO.Compression.Tests.zip_ManualAndCompatabilityTests~2FZipWithInvalidFileNames_ParsedBasedOnSourceOS(zipName:%20%5C%22NullCharFileName_FromUnix.zip%5C%22,%20fileName:%20%5C%22a%5C%5C06b6d%5C%22)
test
test system io compression tests zip manualandcompatabilitytests zipwithinvalidfilenames parsedbasedonsourceos failed with system argumentexception opened on behalf of the test system io compression tests zip manualandcompatabilitytests zipwithinvalidfilenames parsedbasedonsourceos zipname nullcharfilename fromunix zip filename a has failed system argumentexception illegal characters in path stack trace at system io path checkinvalidpathchars string path boolean checkadditional at system io path getfilename string path at system io compression ziphelper endswithdirchar string test at system io compression ziparchiveentry set fullname string value at system io compression ziparchiveentry ctor ziparchive archive zipcentraldirectoryfileheader cd at system io compression ziparchive readcentraldirectory at system io compression ziparchive get entries at system io compression tests zip manualandcompatabilitytests d movenext in e a work s corefx src system io compression tests ziparchive zip manualandcompatibilitytests cs line end of stack trace from previous location where exception was thrown at system runtime compilerservices taskawaiter throwfornonsuccess task task at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task end of stack trace from previous location where exception was thrown at system runtime compilerservices taskawaiter throwfornonsuccess task task at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task end of stack trace from previous location where exception was thrown at system runtime compilerservices taskawaiter throwfornonsuccess task task at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task build master full framework tests failing configurations windows release release debug debug detail
1
277,810
24,104,895,142
IssuesEvent
2022-09-20 06:30:24
jajm/koha-staff-interface-redesign
https://api.github.com/repos/jajm/koha-staff-interface-redesign
closed
Additional patron attributes section in patron edit escaped
type: bug status: needs testing
"escaped" outside of the screen: ![grafik](https://user-images.githubusercontent.com/425526/190136283-b22c088a-298f-4bd2-95f1-e802b2d9b1e2.png)
1.0
Additional patron attributes section in patron edit escaped - "escaped" outside of the screen: ![grafik](https://user-images.githubusercontent.com/425526/190136283-b22c088a-298f-4bd2-95f1-e802b2d9b1e2.png)
test
additional patron attributes section in patron edit escaped escaped outside of the screen
1
420,122
28,237,265,648
IssuesEvent
2023-04-06 02:24:37
dimitritsampiras/olympian
https://api.github.com/repos/dimitritsampiras/olympian
closed
Team28 DesignDoc Review: SysDesign Communication Protocols
documentation
Since your application will be communicating between a website and an application on a mobile device, under the communication protocol, you may want to go more in depth about what that architecture looks like. You mentioned http requests, but maybe go more in depth about what will be included in the Post Data, Query String, and Parameters (I'm assuming you're gonna use REST API's to transfer information). That way you can separate data that has to be stored on device, and data that can live on the server side of things.
1.0
Team28 DesignDoc Review: SysDesign Communication Protocols - Since your application will be communicating between a website and an application on a mobile device, under the communication protocol, you may want to go more in depth about what that architecture looks like. You mentioned http requests, but maybe go more in depth about what will be included in the Post Data, Query String, and Parameters (I'm assuming you're gonna use REST API's to transfer information). That way you can separate data that has to be stored on device, and data that can live on the server side of things.
non_test
designdoc review sysdesign communication protocols since your application will be communicating between a website and an application on a mobile device under the communication protocol you may want to go more in depth about what that architecture looks like you mentioned http requests but maybe go more in depth about what will be included in the post data query string and parameters i m assuming you re gonna use rest api s to transfer information that way you can separate data that has to be stored on device and data that can live on the server side of things
0
14,052
3,372,950,350
IssuesEvent
2015-11-24 03:07:29
aidanlane/snapapps
https://api.github.com/repos/aidanlane/snapapps
closed
Specialised nearest sprite to X block
enhancement jmss testing day
While a student was using the nearest block, I noticed that it would be useful to have a block named "nearest [object type] to me" that never returns itself if the [object type] is set to the object's type.
1.0
Specialised nearest sprite to X block - While a student was using the nearest block, I noticed that it would be useful to have a block named "nearest [object type] to me" that never returns itself if the [object type] is set to the object's type.
test
specialised nearest sprite to x block while a student was using the nearest block i noticed that it would be useful to have a block named nearest to me that never returns itself if the is set to the object s type
1
283,282
24,536,943,344
IssuesEvent
2022-10-11 21:47:30
chef/chef-server
https://api.github.com/repos/chef/chef-server
closed
Get the fixie pipeline green
Aspect: Stability Aspect: Testing Type: Chore Status: move to jira
Fixie builds are currently failing. This pipeline needs some love.
1.0
Get the fixie pipeline green - Fixie builds are currently failing. This pipeline needs some love.
test
get the fixie pipeline green fixie builds are currently failing this pipeline needs some love
1
22,826
3,972,602,799
IssuesEvent
2016-05-04 15:48:52
Esri/solutions-geoprocessing-toolbox
https://api.github.com/repos/Esri/solutions-geoprocessing-toolbox
opened
Need to sort out unit test log file in TestKickStart.bat
F - All G - Defense Team priority-moderate unit test
@JudTown17 Something else we left go... we are currently only appending to a single 'default.log' rather than increment based on date/time/id as we originally set out to do. Right now the TestKickStart.bat is always setting the log to go to 'default.log', and not using the facility we had originally set up. I think we need to revisit this after your current test building is done. We'll need to discuss it before you get started though, as I think we should roll this into a new format for the log files (XML vs JSON, rather than text).
1.0
Need to sort out unit test log file in TestKickStart.bat - @JudTown17 Something else we left go... we are currently only appending to a single 'default.log' rather than increment based on date/time/id as we originally set out to do. Right now the TestKickStart.bat is always setting the log to go to 'default.log', and not using the facility we had originally set up. I think we need to revisit this after your current test building is done. We'll need to discuss it before you get started though, as I think we should roll this into a new format for the log files (XML vs JSON, rather than text).
test
need to sort out unit test log file in testkickstart bat something else we left go we are currently only appending to a single default log rather than increment based on date time id as we originally set out to do right now the testkickstart bat is always setting the log to go to default log and not using the facility we had originally set up i think we need to revisit this after your current test building is done we ll need to discuss it before you get started though as i think we should roll this into a new format for the log files xml vs json rather than text
1
16,485
10,514,354,809
IssuesEvent
2019-09-28 00:08:14
microsoft/botbuilder-python
https://api.github.com/repos/microsoft/botbuilder-python
closed
UnsupportedMediaType - Azure Linux WebApp
Bot Services bug customer-replied-to customer-reported
## Version 4.5.0b4 ## Describe the bug I've built docker container for this example https://github.com/microsoft/botbuilder-python/tree/master/samples/06.using-cards Localy with Bot Framework Emulator everything works fine, but when I am trying to run this container inside Linux Web App in Azure I get "There was an error sending this message to your bot: HTTP status code UnsupportedMediaType" in every channel (WebChat, Telegram, Teams etc.) btw, if I connect from Bot Framework Emulator to container in Linux Web App - it works. Dockerfile ``` FROM python:3 COPY ./app /app WORKDIR /app RUN pip install -r requirements.txt EXPOSE 3978 CMD [ "python", "-u", "main.py" ]` ``` [bug]
1.0
UnsupportedMediaType - Azure Linux WebApp - ## Version 4.5.0b4 ## Describe the bug I've built docker container for this example https://github.com/microsoft/botbuilder-python/tree/master/samples/06.using-cards Localy with Bot Framework Emulator everything works fine, but when I am trying to run this container inside Linux Web App in Azure I get "There was an error sending this message to your bot: HTTP status code UnsupportedMediaType" in every channel (WebChat, Telegram, Teams etc.) btw, if I connect from Bot Framework Emulator to container in Linux Web App - it works. Dockerfile ``` FROM python:3 COPY ./app /app WORKDIR /app RUN pip install -r requirements.txt EXPOSE 3978 CMD [ "python", "-u", "main.py" ]` ``` [bug]
non_test
unsupportedmediatype azure linux webapp version describe the bug i ve built docker container for this example localy with bot framework emulator everything works fine but when i am trying to run this container inside linux web app in azure i get there was an error sending this message to your bot http status code unsupportedmediatype in every channel webchat telegram teams etc btw if i connect from bot framework emulator to container in linux web app it works dockerfile from python copy app app workdir app run pip install r requirements txt expose cmd
0
403,512
27,421,557,527
IssuesEvent
2023-03-01 17:03:20
gravitational/teleport
https://api.github.com/repos/gravitational/teleport
opened
Give infrastructure-as-code approaches to Teleport more prominence in the docs
documentation
## Applies To Determining the scope of the change is part of this project. ## Details If people visit the docs, they don't know (until they visit pages buried in the "Manage your Cluster" section) that the Terraform provider or Operator exist. The impression is that you can only configure Teleport using `tctl` and the Web UI. We should make sure that the docs reflect all available techniques for applying configuration resources while putting infrastructure-as-code approaches in the foreground. The Terraform provider and Kubernetes Operator should be first-class citizens for deploying configuration resources. In general, we should direct users toward IAC approaches to installing Teleport and applying configuration resources. Further, rather than using `kubectl exec` to create configuration resources in our Helm guides, we should consider including instructions to use the Kubernetes Operator. **Since this issue has a broad scope, we should create a table of sub-issues/PRs before we start addressing it.** _This issue is based on a [previous issue](https://github.com/gravitational/teleport/issues/19974), refining its scope._ ## Related Issues <!-- If an existing issue relates to this one, please include it here for reference. -->
1.0
Give infrastructure-as-code approaches to Teleport more prominence in the docs - ## Applies To Determining the scope of the change is part of this project. ## Details If people visit the docs, they don't know (until they visit pages buried in the "Manage your Cluster" section) that the Terraform provider or Operator exist. The impression is that you can only configure Teleport using `tctl` and the Web UI. We should make sure that the docs reflect all available techniques for applying configuration resources while putting infrastructure-as-code approaches in the foreground. The Terraform provider and Kubernetes Operator should be first-class citizens for deploying configuration resources. In general, we should direct users toward IAC approaches to installing Teleport and applying configuration resources. Further, rather than using `kubectl exec` to create configuration resources in our Helm guides, we should consider including instructions to use the Kubernetes Operator. **Since this issue has a broad scope, we should create a table of sub-issues/PRs before we start addressing it.** _This issue is based on a [previous issue](https://github.com/gravitational/teleport/issues/19974), refining its scope._ ## Related Issues <!-- If an existing issue relates to this one, please include it here for reference. -->
non_test
give infrastructure as code approaches to teleport more prominence in the docs applies to determining the scope of the change is part of this project details if people visit the docs they don t know until they visit pages buried in the manage your cluster section that the terraform provider or operator exist the impression is that you can only configure teleport using tctl and the web ui we should make sure that the docs reflect all available techniques for applying configuration resources while putting infrastructure as code approaches in the foreground the terraform provider and kubernetes operator should be first class citizens for deploying configuration resources in general we should direct users toward iac approaches to installing teleport and applying configuration resources further rather than using kubectl exec to create configuration resources in our helm guides we should consider including instructions to use the kubernetes operator since this issue has a broad scope we should create a table of sub issues prs before we start addressing it this issue is based on a refining its scope related issues
0
184,724
14,988,334,981
IssuesEvent
2021-01-29 01:00:18
anitab-org/mentorship-backend
https://api.github.com/repos/anitab-org/mentorship-backend
closed
Problem in Readme Setup instructions for bash instructions
Category: Documentation/Training
**Describe the bug** The setup instructions in the readme file for Git bash users has an error. **To Reproduce** The steps to activate the virtual environment for Git Bash users says: `source /venv/Scripts/activate` **Expected behavior** The given command should activate the virtual environment, however it doesn't do so. **Screenshots** ![mentorshipReadmeError](https://user-images.githubusercontent.com/40827680/105349531-dacc0280-5c0f-11eb-9bc6-bd1a48081818.png) **Desktop (please complete the following information):** - OS: [Windows] **Additional context** I would like to add a PR for the correct command
1.0
Problem in Readme Setup instructions for bash instructions - **Describe the bug** The setup instructions in the readme file for Git bash users has an error. **To Reproduce** The steps to activate the virtual environment for Git Bash users says: `source /venv/Scripts/activate` **Expected behavior** The given command should activate the virtual environment, however it doesn't do so. **Screenshots** ![mentorshipReadmeError](https://user-images.githubusercontent.com/40827680/105349531-dacc0280-5c0f-11eb-9bc6-bd1a48081818.png) **Desktop (please complete the following information):** - OS: [Windows] **Additional context** I would like to add a PR for the correct command
non_test
problem in readme setup instructions for bash instructions describe the bug the setup instructions in the readme file for git bash users has an error to reproduce the steps to activate the virtual environment for git bash users says source venv scripts activate expected behavior the given command should activate the virtual environment however it doesn t do so screenshots desktop please complete the following information os additional context i would like to add a pr for the correct command
0
244,160
20,613,326,669
IssuesEvent
2022-03-07 10:45:48
ckeditor/ckeditor4
https://api.github.com/repos/ckeditor/ckeditor4
closed
Unstable tests `/tests/core/creators/detachedinline`
status:confirmed browser:ie11 type:failingtest resolution:cantreproduce core size:?
## Type of report Failing test ## Provide detailed reproduction steps (if any) <!-- Including a simple sample reproducing the issue is also a good idea. It can drastically decrease the time needed to reproduce the issue by our team, which means it can speed up helping you! You can use one of our samples to create the reproduction sample: * CodePen: https://codepen.io/karoldawidziuk/pen/LYzJvdx * JSFiddle: https://jsfiddle.net/Kratek_95/nhwe5uLq * JSBin: https://jsbin.com/gubepar/edit?html,js,output * StackBlitz: https://stackblitz.com/edit/ckeditor4-bug-report --> 1. Open http://localhost:1030/tests/core/creators/detachedinline ### Expected result All tests pass ### Actual result Sometimes tests randomly fail, especially ones connected to immediate editor's creation. ## Other details * Browser: IE11 * OS: Windows 7 * CKEditor version: 4.17.2 (built) * Installed CKEditor plugins: N/A
1.0
Unstable tests `/tests/core/creators/detachedinline` - ## Type of report Failing test ## Provide detailed reproduction steps (if any) <!-- Including a simple sample reproducing the issue is also a good idea. It can drastically decrease the time needed to reproduce the issue by our team, which means it can speed up helping you! You can use one of our samples to create the reproduction sample: * CodePen: https://codepen.io/karoldawidziuk/pen/LYzJvdx * JSFiddle: https://jsfiddle.net/Kratek_95/nhwe5uLq * JSBin: https://jsbin.com/gubepar/edit?html,js,output * StackBlitz: https://stackblitz.com/edit/ckeditor4-bug-report --> 1. Open http://localhost:1030/tests/core/creators/detachedinline ### Expected result All tests pass ### Actual result Sometimes tests randomly fail, especially ones connected to immediate editor's creation. ## Other details * Browser: IE11 * OS: Windows 7 * CKEditor version: 4.17.2 (built) * Installed CKEditor plugins: N/A
test
unstable tests tests core creators detachedinline type of report failing test provide detailed reproduction steps if any including a simple sample reproducing the issue is also a good idea it can drastically decrease the time needed to reproduce the issue by our team which means it can speed up helping you you can use one of our samples to create the reproduction sample codepen jsfiddle jsbin stackblitz open expected result all tests pass actual result sometimes tests randomly fail especially ones connected to immediate editor s creation other details browser os windows ckeditor version built installed ckeditor plugins n a
1
233,689
17,874,750,788
IssuesEvent
2021-09-07 00:33:16
GeiseSaunier/Autizando
https://api.github.com/repos/GeiseSaunier/Autizando
opened
Estruturar a arquitetura da aplicação
documentation
**Descrição** A ideia dessa issue é estruturar a arquitetura da aplicação **Objetivo** Desenvolver a arquitetura da aplicação **Tarefas** - [ ] Criar modelo de arquitetura - [ ] Definir componentes e funcionalidades - [ ] Criar um esquemático da arquitetura **Checklist** - [ ] Modelo de arquitetura finalizado - [ ] Componentes e funcionalidades definidos - [ ] Esquemático desenvolvido **Critérios de aceitação** - [ ] A equipe está de acordo com as implementações;
1.0
Estruturar a arquitetura da aplicação - **Descrição** A ideia dessa issue é estruturar a arquitetura da aplicação **Objetivo** Desenvolver a arquitetura da aplicação **Tarefas** - [ ] Criar modelo de arquitetura - [ ] Definir componentes e funcionalidades - [ ] Criar um esquemático da arquitetura **Checklist** - [ ] Modelo de arquitetura finalizado - [ ] Componentes e funcionalidades definidos - [ ] Esquemático desenvolvido **Critérios de aceitação** - [ ] A equipe está de acordo com as implementações;
non_test
estruturar a arquitetura da aplicação descrição a ideia dessa issue é estruturar a arquitetura da aplicação objetivo desenvolver a arquitetura da aplicação tarefas criar modelo de arquitetura definir componentes e funcionalidades criar um esquemático da arquitetura checklist modelo de arquitetura finalizado componentes e funcionalidades definidos esquemático desenvolvido critérios de aceitação a equipe está de acordo com as implementações
0
12,592
3,282,476,135
IssuesEvent
2015-10-28 06:51:48
The-Compiler/qutebrowser
https://api.github.com/repos/The-Compiler/qutebrowser
opened
Upgrade to pytest 2.8
component: tests priority: 1 - middle
Now that the regressions are getting less, I guess it's time to update to pytest 2.8 after the `bdd` branch is merged. Some issues: - [x] `test_check_coverage.py` has some issues (fixed in 497fba5667e7b17746746ecc732e62f57b973ea6): ``` [...] E py.error.ENOENT: [No such file or directory]: open('/tmp/pytest-of-florian/pytest-2/testdir/test_tested_unlisted0/coverage.xml', 'r') /home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/py/_error.py:84: ENOENT ---------------------- Captured stderr call ----------------------- INTERNALERROR> Traceback (most recent call last): INTERNALERROR> File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/main.py", line 86, in wrap_session INTERNALERROR> config._do_configure() INTERNALERROR> File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/config.py", line 830, in _do_configure INTERNALERROR> self.hook.pytest_configure.call_historic(kwargs=dict(config=self)) INTERNALERROR> File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 729, in call_historic INTERNALERROR> self._hookexec(self, self._nonwrappers + self._wrappers, kwargs) INTERNALERROR> File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 338, in _hookexec INTERNALERROR> return self._inner_hookexec(hook, methods, kwargs) INTERNALERROR> File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 333, in <lambda> INTERNALERROR> _MultiCall(methods, kwargs, hook.spec_opts).execute() INTERNALERROR> File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 596, in execute INTERNALERROR> res = hook_impl.function(*args) INTERNALERROR> File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/pytest_faulthandler.py", line 
23, in pytest_configure INTERNALERROR> stderr_fd_copy = os.dup(sys.stderr.fileno()) INTERNALERROR> io.UnsupportedOperation: fileno [...] Traceback (most recent call last): File "/usr/lib64/python3.5/runpy.py", line 170, in _run_module_as_main "__main__", mod_spec) File "/usr/lib64/python3.5/runpy.py", line 85, in _run_code exec(code, run_globals) File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/py/test.py", line 4, in <module> sys.exit(pytest.main()) File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/config.py", line 48, in main return config.hook.pytest_cmdline_main(config=config) File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 724, in __call__ return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs) File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 338, in _hookexec return self._inner_hookexec(hook, methods, kwargs) File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 333, in <lambda> _MultiCall(methods, kwargs, hook.spec_opts).execute() File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 596, in execute res = hook_impl.function(*args) File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/main.py", line 115, in pytest_cmdline_main return wrap_session(config, _main) File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/main.py", line 110, in wrap_session exitstatus=session.exitstatus) File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 724, in __call__ return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs) File 
"/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 338, in _hookexec return self._inner_hookexec(hook, methods, kwargs) File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 333, in <lambda> _MultiCall(methods, kwargs, hook.spec_opts).execute() File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 595, in execute return _wrapped_call(hook_impl.function(*args), self.execute) File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 249, in _wrapped_call wrap_controller.send(call_outcome) File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/terminal.py", line 361, in pytest_sessionfinish outcome.get_result() File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 278, in get_result raise ex[1].with_traceback(ex[2]) File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 264, in __init__ self.result = func() File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 596, in execute res = hook_impl.function(*args) File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/pytest_cov/plugin.py", line 160, in pytest_sessionfinish self.cov_controller.finish() File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/pytest_cov/engine.py", line 140, in finish self.cov.stop() File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/coverage/control.py", line 692, in stop self.collector.stop() File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/coverage/collector.py", line 277, in stop "Expected 
current collector to be %r, but it's %r" % (self, self._collectors[-1]) AssertionError: Expected current collector to be <Collector at 0x7f6886e039e8: CTracer>, but it's <Collector at 0x7f6828f6a208: CTracer> ``` - [ ] `check_coverage.py` doesn't like the file paths? ``` /home/florian/proj/qutebrowser/git/qutebrowser/browser/cookies.py has 100% coverage but is not in perfect_files! /home/florian/proj/qutebrowser/git/qutebrowser/browser/http.py has 100% coverage but is not in perfect_files! [...] ``` - [ ] Wait for `pytest-catchlog` release and switch to it, as `pytest-capturelog` uses deprecated `__multicall__` - [ ] `pytest-bdd` shows deprecations too: https://github.com/pytest-dev/pytest-bdd/issues/154
1.0
Upgrade to pytest 2.8 - Now that the regressions are getting less, I guess it's time to update to pytest 2.8 after the `bdd` branch is merged. Some issues: - [x] `test_check_coverage.py` has some issues (fixed in 497fba5667e7b17746746ecc732e62f57b973ea6): ``` [...] E py.error.ENOENT: [No such file or directory]: open('/tmp/pytest-of-florian/pytest-2/testdir/test_tested_unlisted0/coverage.xml', 'r') /home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/py/_error.py:84: ENOENT ---------------------- Captured stderr call ----------------------- INTERNALERROR> Traceback (most recent call last): INTERNALERROR> File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/main.py", line 86, in wrap_session INTERNALERROR> config._do_configure() INTERNALERROR> File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/config.py", line 830, in _do_configure INTERNALERROR> self.hook.pytest_configure.call_historic(kwargs=dict(config=self)) INTERNALERROR> File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 729, in call_historic INTERNALERROR> self._hookexec(self, self._nonwrappers + self._wrappers, kwargs) INTERNALERROR> File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 338, in _hookexec INTERNALERROR> return self._inner_hookexec(hook, methods, kwargs) INTERNALERROR> File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 333, in <lambda> INTERNALERROR> _MultiCall(methods, kwargs, hook.spec_opts).execute() INTERNALERROR> File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 596, in execute INTERNALERROR> res = hook_impl.function(*args) INTERNALERROR> File 
"/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/pytest_faulthandler.py", line 23, in pytest_configure INTERNALERROR> stderr_fd_copy = os.dup(sys.stderr.fileno()) INTERNALERROR> io.UnsupportedOperation: fileno [...] Traceback (most recent call last): File "/usr/lib64/python3.5/runpy.py", line 170, in _run_module_as_main "__main__", mod_spec) File "/usr/lib64/python3.5/runpy.py", line 85, in _run_code exec(code, run_globals) File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/py/test.py", line 4, in <module> sys.exit(pytest.main()) File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/config.py", line 48, in main return config.hook.pytest_cmdline_main(config=config) File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 724, in __call__ return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs) File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 338, in _hookexec return self._inner_hookexec(hook, methods, kwargs) File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 333, in <lambda> _MultiCall(methods, kwargs, hook.spec_opts).execute() File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 596, in execute res = hook_impl.function(*args) File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/main.py", line 115, in pytest_cmdline_main return wrap_session(config, _main) File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/main.py", line 110, in wrap_session exitstatus=session.exitstatus) File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 724, in __call__ return 
self._hookexec(self, self._nonwrappers + self._wrappers, kwargs) File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 338, in _hookexec return self._inner_hookexec(hook, methods, kwargs) File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 333, in <lambda> _MultiCall(methods, kwargs, hook.spec_opts).execute() File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 595, in execute return _wrapped_call(hook_impl.function(*args), self.execute) File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 249, in _wrapped_call wrap_controller.send(call_outcome) File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/terminal.py", line 361, in pytest_sessionfinish outcome.get_result() File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 278, in get_result raise ex[1].with_traceback(ex[2]) File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 264, in __init__ self.result = func() File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 596, in execute res = hook_impl.function(*args) File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/pytest_cov/plugin.py", line 160, in pytest_sessionfinish self.cov_controller.finish() File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/pytest_cov/engine.py", line 140, in finish self.cov.stop() File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/coverage/control.py", line 692, in stop self.collector.stop() File 
"/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/coverage/collector.py", line 277, in stop "Expected current collector to be %r, but it's %r" % (self, self._collectors[-1]) AssertionError: Expected current collector to be <Collector at 0x7f6886e039e8: CTracer>, but it's <Collector at 0x7f6828f6a208: CTracer> ``` - [ ] `check_coverage.py` doesn't like the file paths? ``` /home/florian/proj/qutebrowser/git/qutebrowser/browser/cookies.py has 100% coverage but is not in perfect_files! /home/florian/proj/qutebrowser/git/qutebrowser/browser/http.py has 100% coverage but is not in perfect_files! [...] ``` - [ ] Wait for `pytest-catchlog` release and switch to it, as `pytest-capturelog` uses deprecated `__multicall__` - [ ] `pytest-bdd` shows deprecations too: https://github.com/pytest-dev/pytest-bdd/issues/154
test
1
169,197
13,129,185,335
IssuesEvent
2020-08-06 13:31:42
OpenLiberty/open-liberty
https://api.github.com/repos/OpenLiberty/open-liberty
opened
Fault Tolerance 3.0 Feature Test Summary
Feature Test Summary in:MicroProfile/FaultTolerance team:MicroProfileUK
Epic: #11017 **1) Describe the test strategy & approach for this feature, and describe how the approach verifies the functions delivered by this feature. The description should include the positive and negative testing done, whether all testing is automated, what manual tests exist (if any) and where the tests are stored (source control). Automated testing is expected for all features with manual testing considered an exception to the rule.** For any feature, be aware that only FAT tests (not unit or BVT) are executed in our cross platform testing. To ensure cross platform testing ensure you have sufficient FAT coverage to verify the feature. If delivering tests outside of the standard Liberty FAT framework, do the tests push the results into cognitive testing database (if not, consult with the CSI Team who can provide advice and verify if results are being received)?_ **2) Collectively as a team you need to assess your confidence in the testing delivered based on the values below. This should be done as a team and not an individual to ensure more eyes are on it and that pressures to deliver quickly are absorbed by the team as a whole.** Please indicate your confidence in the testing (up to and including FAT) delivered with this feature by selecting one of these values: 0 - No automated testing delivered 1 - We have minimal automated coverage of the feature including golden paths. There is a relatively high risk that defects or issues could be found in this feature. 2 - We have delivered a reasonable automated coverage of the golden paths of this feature but are aware of gaps and extra testing that could be done here. Error/outlying scenarios are not really covered. There are likely risks that issues may exist in the golden paths 3 - We have delivered all automated testing we believe is needed for the golden paths of this feature and minimal coverage of the error/outlying scenarios. 
There is a risk when the feature is used outside the golden paths however we are confident on the golden path. Note: This may still be a valid end state for a feature... things like Beta features may well suffice at this level. 4 - We have delivered all automated testing we believe is needed for the golden paths of this feature and have good coverage of the error/outlying scenarios. While more testing of the error/outlying scenarios could be added we believe there is minimal risk here and the cost of providing these is considered higher than the benefit they would provide. 5 - We have delivered all automated testing we believe is needed for this feature. The testing covers all golden path cases as well as all the error/outlying scenarios that make sense. We are not aware of any gaps in the testing at this time. No manual testing is required to verify this feature. Based on your answer above, for any answer other than a 4 or 5 please provide details of what drove your answer. Please be aware, it may be perfectly reasonable in some scenarios to deliver with any value above. We may accept no automated testing is needed for some features, we may be happy with low levels of testing on samples for instance so please don't feel the need to drive to a 5. We need your honest assessment as a team and the reasoning for why you believe shipping at that level is valid. What are the gaps, what is the risk etc. Please also provide links to the follow on work that is needed to close the gaps (should you deem it needed)
1.0
test
1
15,443
2,855,138,161
IssuesEvent
2015-06-02 07:36:52
contao/core
https://api.github.com/repos/contao/core
closed
Useless loadLanguageFile call when using defaults
defect
When I use `{{label::MSC:search}}` it works (because `default.php` is always loaded) but there's a useless `System::loadLanguageFile()` call because it tries to load `System::loadLanguageFile('MSC')`. See https://github.com/contao/core/blob/release/3.5.0/system/modules/core/library/Contao/InsertTags.php#L196
1.0
non_test
0
194,238
14,672,707,753
IssuesEvent
2020-12-30 11:16:42
elastic/kibana
https://api.github.com/repos/elastic/kibana
opened
[APM] latencyAggregationType is not persisted when navigation to Transaction overview to detail
[zube]: 7.11 apm apm:test-plan-regression bug v7.11.0
![Screenshot 2020-12-30 at 12.15.34.png](https://zube.io/files/elastic/556f9c54749818e6485b95ecc47fb562-screenshot-2020-12-30-at-12-15-34.png) On the transaction detail page `95th percentile` should be selected ![Screenshot 2020-12-30 at 12.15.28 (2).png](https://zube.io/files/elastic/8af28d55927e9b2d936a5465b30a8f6f-screenshot-2020-12-30-at-12-15-28-2.png)
1.0
test
1
190,073
22,047,209,911
IssuesEvent
2022-05-30 04:06:10
nanopathi/linux-4.19.72_CVE-2021-32399
https://api.github.com/repos/nanopathi/linux-4.19.72_CVE-2021-32399
closed
WS-2022-0014 (Medium) detected in linuxlinux-4.19.236 - autoclosed
security vulnerability
## WS-2022-0014 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.236</b></p></summary> <p> <p>The Linux Kernel</p> <p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p> <p>Found in HEAD commit: <a href="https://github.com/nanopathi/linux-4.19.72_CVE-2021-32399/commit/03cb3c6f0e0b62b5cbcd747df63781fbb2a6ef66">03cb3c6f0e0b62b5cbcd747df63781fbb2a6ef66</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/input/mouse/appletouch.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/input/mouse/appletouch.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Input: appletouch - initialize work before device registration <p>Publish Date: 2022-01-10 <p>URL: <a href=https://github.com/gregkh/linux/commit/e79ff8c68acb1eddf709d3ac84716868f2a91012>WS-2022-0014</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.0</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - 
Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://osv.dev/vulnerability/GSD-2022-1000049">https://osv.dev/vulnerability/GSD-2022-1000049</a></p> <p>Release Date: 2022-01-10</p> <p>Fix Resolution: v5.15.13</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
non_test
0
72,407
31,768,842,441
IssuesEvent
2023-09-12 10:26:06
gauravrs18/issue_onboarding
https://api.github.com/repos/gauravrs18/issue_onboarding
closed
dev-angular-integration-account-services-disconnections-current-component -view-component
CX-account-services
dev-angular-integration-account-services-disconnections-current-component -view-component
1.0
non_test
0
694,365
23,811,016,186
IssuesEvent
2022-09-04 19:09:24
dodona-edu/judge-html
https://api.github.com/repos/dodona-edu/judge-html
closed
Runtime error when color is missing in CSS
bug high priority
- [ ] https://naos.ugent.be/en/submissions/983357/ ```html <!DOCTYPE html> <html lang="nl-BE"> <head> <title>Fruit</title> <meta charset="UTF-8"> <style> li { margin-top: 10px; } #apple { color: ; } a:visited { color: green; } </style> </head> <body> <img src="figuren/fruit_groenten.jpg" alt="Fruit en groenten"> <ol> <li>Fruit <ul> <li id="apple">Appel</li> <li>Peer</li> </ul> </li> <li><a href="groenten.html">Groenten</a></li> </ol> </body> </html> ``` - [ ] https://naos.ugent.be/en/submissions/983356/ ```html <!DOCTYPE html> <html lang="nl-BE"> <head> <title>Fruit</title> <meta charset="UTF-8"> <style> li { margin-top: 10px; } #apple { font-style: italic; } a:visited { color: ; } </style> </head> <body> <img src="figuren/fruit_groenten.jpg" alt="Fruit en groenten"> <ol> <li>Fruit <ul> <li id="apple">Appel</li> <li>Peer</li> </ul> </li> <li><a href="groenten.html">Groenten</a></li> </ol> </body> </html> ``` ``` File "/mnt/x9xUdkHctwlC3VoaPNSo0Q/judge/html_judge.py", line 42, in main test_suites: List[TestSuite] = evaluator.create_suites(html_content) File "<string>", line 7, in create_suites File "/mnt/x9xUdkHctwlC3VoaPNSo0Q/judge/validators/checks.py", line 1210, in __init__ super().__init__("HTML", content, check_recommended, check_minimal) File "/mnt/x9xUdkHctwlC3VoaPNSo0Q/judge/validators/checks.py", line 1122, in __init__ super().__init__(name, content, check_recommended) File "<string>", line 8, in __init__ File "/mnt/x9xUdkHctwlC3VoaPNSo0Q/judge/validators/checks.py", line 865, in __post_init__ self._css_validator = CssValidator(self.content) File "/mnt/x9xUdkHctwlC3VoaPNSo0Q/judge/validators/css_validator.py", line 309, in __init__ self.rules = Rules(css) File "/mnt/x9xUdkHctwlC3VoaPNSo0Q/judge/validators/css_validator.py", line 193, in __init__ self.rules.append(Rule(selector, declaration)) File "/mnt/x9xUdkHctwlC3VoaPNSo0Q/judge/validators/css_validator.py", line 93, in __init__ self.color = Color(self.value_str) File 
"/mnt/x9xUdkHctwlC3VoaPNSo0Q/judge/utils/color_converter.py", line 52, in __init__ elif val[0].isalpha(): IndexError: string index out of range ```
1.0
non_test
0
167,316
13,019,535,661
IssuesEvent
2020-07-26 23:13:14
kubernetes/kubernetes
https://api.github.com/repos/kubernetes/kubernetes
closed
[Failing Job] pull-kubernetes-node-e2e-containerd is timing out
kind/failing-test priority/important-soon sig/node sig/release
<!-- Please only use this template for submitting reports about continuously failing tests or jobs in Kubernetes CI --> **Which jobs are failing**: pull-kubernetes-node-e2e-containerd **Which test(s) are failing**: Timeout **Since when has it been failing**: - Last good run (2020-04-03 11:00:09 +0000 UTC): https://prow.k8s.io/view/gcs/kubernetes-jenkins/pr-logs/directory/pull-kubernetes-node-e2e-containerd/1246029654804926464 - First bad run (2020-04-03 12:37:42 +0000 UTC): https://prow.k8s.io/view/gcs/kubernetes-jenkins/pr-logs/directory/pull-kubernetes-node-e2e-containerd/1246054193446260736 - Latest bad run (2020-04-04 01:19:40 +0000 UTC): https://prow.k8s.io/view/gcs/kubernetes-jenkins/pr-logs/directory/pull-kubernetes-node-e2e-containerd/1246245958304403456 **Testgrid link**: https://testgrid.k8s.io/sig-node-containerd#pull-node-e2e **Reason for failure**: ```kubetest --timeout triggered``` **Anything else we need to know**: - triage link: https://storage.googleapis.com/k8s-gubernator/triage/index.html?pr=1&job=pull-kubernetes-node-e2e-containerd This isn't a merge-blocking test but it is reporting on all PRs /sig node /sig release /priority important-soon
1.0
[Failing Job] pull-kubernetes-node-e2e-containerd is timing out - <!-- Please only use this template for submitting reports about continuously failing tests or jobs in Kubernetes CI --> **Which jobs are failing**: pull-kubernetes-node-e2e-containerd **Which test(s) are failing**: Timeout **Since when has it been failing**: - Last good run (2020-04-03 11:00:09 +0000 UTC): https://prow.k8s.io/view/gcs/kubernetes-jenkins/pr-logs/directory/pull-kubernetes-node-e2e-containerd/1246029654804926464 - First bad run (2020-04-03 12:37:42 +0000 UTC): https://prow.k8s.io/view/gcs/kubernetes-jenkins/pr-logs/directory/pull-kubernetes-node-e2e-containerd/1246054193446260736 - Latest bad run (2020-04-04 01:19:40 +0000 UTC): https://prow.k8s.io/view/gcs/kubernetes-jenkins/pr-logs/directory/pull-kubernetes-node-e2e-containerd/1246245958304403456 **Testgrid link**: https://testgrid.k8s.io/sig-node-containerd#pull-node-e2e **Reason for failure**: ```kubetest --timeout triggered``` **Anything else we need to know**: - triage link: https://storage.googleapis.com/k8s-gubernator/triage/index.html?pr=1&job=pull-kubernetes-node-e2e-containerd This isn't a merge-blocking test but it is reporting on all PRs /sig node /sig release /priority important-soon
test
pull kubernetes node containerd is timing out which jobs are failing pull kubernetes node containerd which test s are failing timeout since when has it been failing last good run utc first bad run utc latest bad run utc testgrid link reason for failure kubetest timeout triggered anything else we need to know triage link this isn t a merge blocking test but it is reporting on all prs sig node sig release priority important soon
1
334,967
10,147,428,724
IssuesEvent
2019-08-05 10:32:48
pmem/issues
https://api.github.com/repos/pmem/issues
closed
Test: pmem_deep_persist /TEST2 fails
Exposure: Low OS: Linux Priority: 3 medium Type: Bug
<!-- Before creating new issue, ensure that similar issue wasn't already created * Search: https://github.com/pmem/issues/issues Note that if you do not provide enough information to reproduce the issue, we may not be able to take action on your report. Remember this is just a minimal template. You can extend it with data you think may be useful. --> # ISSUE: <!-- fill the title of issue --> ## Environment Information - PMDK package version(s): 1.6-101-gba30d1b1f - OS(es) version(s): Ubuntu19.04 - ndctl version(s): 63+ - kernel version(s): 5.0.0-21-generic <!-- fill in also other useful environment data --> ## Please provide a reproduction of the bug: ``` ./RUNTESTS pmem_deep_persist -s TEST2 -m force-enable -t all ``` ## How often bug is revealed: (always, often, rare): always <!-- describe special circumstances in section above --> ## Actual behavior: ``` $ ./RUNTESTS pmem_deep_persist -s TEST2 -m force-enable -t all pmem_deep_persist/TEST2: SETUP (all/pmem/debug/memcheck) [MATCHING FAILED, COMPLETE FILE (out2.log) BELOW] pmem_deep_persist/TEST2: START: pmem_deep_persist ./pmem_deep_persist /dev/dax1.0 p -1 0 mocked open, path /sys/bus/nd/devices/region1/deep_flush deep_persist -1 pmem_deep_persist/TEST2: DONE [EOF] out2.log.match:1 pmem_deep_persist$(nW)TEST2: START: pmem_deep_persist$(nW) out2.log:1 pmem_deep_persist/TEST2: START: pmem_deep_persist out2.log.match:2 $(nW)pmem_deep_persist$(nW) $(*) out2.log:2 ./pmem_deep_persist /dev/dax1.0 p -1 0 out2.log.match:3 $(OPT)mocked open$(*) out2.log:3 mocked open, path /sys/bus/nd/devices/region1/deep_flush out2.log.match:4 $(OPT)mocked write$(*) out2.log:4 deep_persist -1 out2.log:4 [skipping optional line] out2.log.match:5 deep_persist 0 out2.log:4 deep_persist -1 FAIL: match: out2.log.match:5 did not match pattern RUNTESTS: stopping: pmem_deep_persist/TEST2 failed, TEST=all FS=any BUILD=debug ``` ## Expected behavior: Test should pass. ## Details Logs: [pmem_deep_persistTEST2.zip](https://github.com/pmem/issues/files/3466918/pmem_deep_persistTEST2.zip) ## Additional information about Priority and Help Requested: Are you willing to submit a pull request with a proposed change? (Yes, No) <!-- check one if possible --> Requested priority: (Showstopper, High, Medium, Low) <!-- check one if possible -->
1.0
Test: pmem_deep_persist /TEST2 fails - <!-- Before creating new issue, ensure that similar issue wasn't already created * Search: https://github.com/pmem/issues/issues Note that if you do not provide enough information to reproduce the issue, we may not be able to take action on your report. Remember this is just a minimal template. You can extend it with data you think may be useful. --> # ISSUE: <!-- fill the title of issue --> ## Environment Information - PMDK package version(s): 1.6-101-gba30d1b1f - OS(es) version(s): Ubuntu19.04 - ndctl version(s): 63+ - kernel version(s): 5.0.0-21-generic <!-- fill in also other useful environment data --> ## Please provide a reproduction of the bug: ``` ./RUNTESTS pmem_deep_persist -s TEST2 -m force-enable -t all ``` ## How often bug is revealed: (always, often, rare): always <!-- describe special circumstances in section above --> ## Actual behavior: ``` $ ./RUNTESTS pmem_deep_persist -s TEST2 -m force-enable -t all pmem_deep_persist/TEST2: SETUP (all/pmem/debug/memcheck) [MATCHING FAILED, COMPLETE FILE (out2.log) BELOW] pmem_deep_persist/TEST2: START: pmem_deep_persist ./pmem_deep_persist /dev/dax1.0 p -1 0 mocked open, path /sys/bus/nd/devices/region1/deep_flush deep_persist -1 pmem_deep_persist/TEST2: DONE [EOF] out2.log.match:1 pmem_deep_persist$(nW)TEST2: START: pmem_deep_persist$(nW) out2.log:1 pmem_deep_persist/TEST2: START: pmem_deep_persist out2.log.match:2 $(nW)pmem_deep_persist$(nW) $(*) out2.log:2 ./pmem_deep_persist /dev/dax1.0 p -1 0 out2.log.match:3 $(OPT)mocked open$(*) out2.log:3 mocked open, path /sys/bus/nd/devices/region1/deep_flush out2.log.match:4 $(OPT)mocked write$(*) out2.log:4 deep_persist -1 out2.log:4 [skipping optional line] out2.log.match:5 deep_persist 0 out2.log:4 deep_persist -1 FAIL: match: out2.log.match:5 did not match pattern RUNTESTS: stopping: pmem_deep_persist/TEST2 failed, TEST=all FS=any BUILD=debug ``` ## Expected behavior: Test should pass. ## Details Logs: [pmem_deep_persistTEST2.zip](https://github.com/pmem/issues/files/3466918/pmem_deep_persistTEST2.zip) ## Additional information about Priority and Help Requested: Are you willing to submit a pull request with a proposed change? (Yes, No) <!-- check one if possible --> Requested priority: (Showstopper, High, Medium, Low) <!-- check one if possible -->
non_test
test pmem deep persist fails before creating new issue ensure that similar issue wasn t already created search note that if you do not provide enough information to reproduce the issue we may not be able to take action on your report remember this is just a minimal template you can extend it with data you think may be useful issue environment information pmdk package version s os es version s ndctl version s kernel version s generic please provide a reproduction of the bug runtests pmem deep persist s m force enable t all how often bug is revealed always often rare always actual behavior runtests pmem deep persist s m force enable t all pmem deep persist setup all pmem debug memcheck pmem deep persist start pmem deep persist pmem deep persist dev p mocked open path sys bus nd devices deep flush deep persist pmem deep persist done log match pmem deep persist nw start pmem deep persist nw log pmem deep persist start pmem deep persist log match nw pmem deep persist nw log pmem deep persist dev p log match opt mocked open log mocked open path sys bus nd devices deep flush log match opt mocked write log deep persist log log match deep persist log deep persist fail match log match did not match pattern runtests stopping pmem deep persist failed test all fs any build debug expected behavior test should pass details logs additional information about priority and help requested are you willing to submit a pull request with a proposed change yes no requested priority showstopper high medium low
0
104,947
16,623,554,520
IssuesEvent
2021-06-03 06:39:03
Thanraj/_OpenSSL_
https://api.github.com/repos/Thanraj/_OpenSSL_
opened
CVE-2013-0166 (Medium) detected in openssl87a37cbadd4b56fb4dc21008c5bcbe929f0a52b6, openssl87a37cbadd4b56fb4dc21008c5bcbe929f0a52b6
security vulnerability
## CVE-2013-0166 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>openssl87a37cbadd4b56fb4dc21008c5bcbe929f0a52b6</b>, <b>openssl87a37cbadd4b56fb4dc21008c5bcbe929f0a52b6</b></p></summary> <p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> OpenSSL before 0.9.8y, 1.0.0 before 1.0.0k, and 1.0.1 before 1.0.1d does not properly perform signature verification for OCSP responses, which allows remote OCSP servers to cause a denial of service (NULL pointer dereference and application crash) via an invalid key. <p>Publish Date: 2013-02-08 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2013-0166>CVE-2013-0166</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.0</b>)</summary> <p> Base Score Metrics not available</p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2013-0166">https://nvd.nist.gov/vuln/detail/CVE-2013-0166</a></p> <p>Release Date: 2013-02-08</p> <p>Fix Resolution: 0.9.8y,1.0.0k,1.0.1d</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2013-0166 (Medium) detected in openssl87a37cbadd4b56fb4dc21008c5bcbe929f0a52b6, openssl87a37cbadd4b56fb4dc21008c5bcbe929f0a52b6 - ## CVE-2013-0166 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>openssl87a37cbadd4b56fb4dc21008c5bcbe929f0a52b6</b>, <b>openssl87a37cbadd4b56fb4dc21008c5bcbe929f0a52b6</b></p></summary> <p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> OpenSSL before 0.9.8y, 1.0.0 before 1.0.0k, and 1.0.1 before 1.0.1d does not properly perform signature verification for OCSP responses, which allows remote OCSP servers to cause a denial of service (NULL pointer dereference and application crash) via an invalid key. <p>Publish Date: 2013-02-08 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2013-0166>CVE-2013-0166</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.0</b>)</summary> <p> Base Score Metrics not available</p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2013-0166">https://nvd.nist.gov/vuln/detail/CVE-2013-0166</a></p> <p>Release Date: 2013-02-08</p> <p>Fix Resolution: 0.9.8y,1.0.0k,1.0.1d</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_test
cve medium detected in cve medium severity vulnerability vulnerable libraries vulnerability details openssl before before and before does not properly perform signature verification for ocsp responses which allows remote ocsp servers to cause a denial of service null pointer dereference and application crash via an invalid key publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
293,989
25,338,664,529
IssuesEvent
2022-11-18 19:14:33
raupargor/Friendsn-t-Games
https://api.github.com/repos/raupargor/Friendsn-t-Games
closed
8.3.Sonido del juego: Pruebas
test Priority: low
Comprobar que los elementos sonoros funcionan correctamente para todos los jugadores de una partida online
1.0
8.3.Sonido del juego: Pruebas - Comprobar que los elementos sonoros funcionan correctamente para todos los jugadores de una partida online
test
sonido del juego pruebas comprobar que los elementos sonoros funcionan correctamente para todos los jugadores de una partida online
1
26,356
7,817,903,934
IssuesEvent
2018-06-13 10:29:03
ShaikASK/Testing
https://api.github.com/repos/ShaikASK/Testing
closed
HR Admin /HR Users : Edit Hires : Application displays an error message when user try to edit "New Hire"
Defect HR Admin Module HR User Module New Hire P1 Release#2 Build#3
Steps : 1.Launch the URL 2.Sign in as HR admin /HR Users 3.Go to New Hires 4.Create a New Hire and Save it 5.Edit the above created New Hire and save the changes Experienced Behaviour : Observed that error message is displayed as "ERROR: Could not update New Hire when user try to edit the New Hire Expected Behaviour : Ensure that application should not display any error message when user try to edit "New Hire"
1.0
HR Admin /HR Users : Edit Hires : Application displays an error message when user try to edit "New Hire" - Steps : 1.Launch the URL 2.Sign in as HR admin /HR Users 3.Go to New Hires 4.Create a New Hire and Save it 5.Edit the above created New Hire and save the changes Experienced Behaviour : Observed that error message is displayed as "ERROR: Could not update New Hire when user try to edit the New Hire Expected Behaviour : Ensure that application should not display any error message when user try to edit "New Hire"
non_test
hr admin hr users edit hires application displays an error message when user try to edit new hire steps launch the url sign in as hr admin hr users go to new hires create a new hire and save it edit the above created new hire and save the changes experienced behaviour observed that error message is displayed as error could not update new hire when user try to edit the new hire expected behaviour ensure that application should not display any error message when user try to edit new hire
0
61,834
6,759,368,554
IssuesEvent
2017-10-24 16:52:19
Microsoft/PTVS
https://api.github.com/repos/Microsoft/PTVS
closed
REPL comments handling different from command line python
area:REPL bug priority:P2 resolution:investigate Tests
In VS Python interactive window, you can't just type a comment, it will keep prompting for input until you also type code. That's not the case when you run python.exe from a command prompt. Affected tests: ReplWindowUITests.ReplWindowPythonTests.Comments ReplWindowUITests.ReplWindowTestsDefaultPrompt.CommentPaste ![image](https://user-images.githubusercontent.com/1696845/27239637-9c8aaf08-5286-11e7-9b44-74c6a85898a8.png) ![image](https://user-images.githubusercontent.com/1696845/27239650-ac451d66-5286-11e7-868c-3172b51b108d.png)
1.0
REPL comments handling different from command line python - In VS Python interactive window, you can't just type a comment, it will keep prompting for input until you also type code. That's not the case when you run python.exe from a command prompt. Affected tests: ReplWindowUITests.ReplWindowPythonTests.Comments ReplWindowUITests.ReplWindowTestsDefaultPrompt.CommentPaste ![image](https://user-images.githubusercontent.com/1696845/27239637-9c8aaf08-5286-11e7-9b44-74c6a85898a8.png) ![image](https://user-images.githubusercontent.com/1696845/27239650-ac451d66-5286-11e7-868c-3172b51b108d.png)
test
repl comments handling different from command line python in vs python interactive window you can t just type a comment it will keep prompting for input until you also type code that s not the case when you run python exe from a command prompt affected tests replwindowuitests replwindowpythontests comments replwindowuitests replwindowtestsdefaultprompt commentpaste
1
226,265
18,008,799,935
IssuesEvent
2021-09-16 05:39:38
python-pillow/Pillow
https://api.github.com/repos/python-pillow/Pillow
closed
Security -- Rotate any secrets on Travis
Testing
This is public knowledge -- From Sept3 -> Sept 10, they were injecting all secrets into public builds, across accounts. https://twitter.com/peter_szilagyi/status/1437646118700175360 Any secrets that we may have on TravisCI should be rotated, and potentially just turn off travis.
1.0
Security -- Rotate any secrets on Travis - This is public knowledge -- From Sept3 -> Sept 10, they were injecting all secrets into public builds, across accounts. https://twitter.com/peter_szilagyi/status/1437646118700175360 Any secrets that we may have on TravisCI should be rotated, and potentially just turn off travis.
test
security rotate any secrets on travis this is public knowledge from sept they were injecting all secrets into public builds across accounts any secrets that we may have on travisci should be rotated and potentially just turn off travis
1
2,177
7,632,527,283
IssuesEvent
2018-05-05 16:15:54
Microsoft/DirectXTex
https://api.github.com/repos/Microsoft/DirectXTex
closed
Invalid character in DirectXTexConvert.cpp.
maintainence
I used this library in visual studio 2015 for Japanese. The warning of C4819 arrived in DirectXTexConvert.cpp. It looks like used special character as bellow: ``` // Y? = Y - 16 // Cb? = Cb - 128 // Cr? = Cr - 128 // R = 1.1644Y? + 1.5960Cr? // G = 1.1644Y? - 0.3917Cb? - 0.8128Cr? // B = 1.1644Y? + 2.0172Cb? ``` so please don't use special character in this source.
True
Invalid character in DirectXTexConvert.cpp. - I used this library in visual studio 2015 for Japanese. The warning of C4819 arrived in DirectXTexConvert.cpp. It looks like used special character as bellow: ``` // Y? = Y - 16 // Cb? = Cb - 128 // Cr? = Cr - 128 // R = 1.1644Y? + 1.5960Cr? // G = 1.1644Y? - 0.3917Cb? - 0.8128Cr? // B = 1.1644Y? + 2.0172Cb? ``` so please don't use special character in this source.
non_test
invalid character in directxtexconvert cpp i used this library in visual studio for japanese the warning of arrived in directxtexconvert cpp it looks like used special character as bellow y y cb cb cr cr r g b so please don t use special character in this source
0
58,602
6,609,803,886
IssuesEvent
2017-09-19 15:36:25
EyeSeeTea/pictureapp
https://api.github.com/repos/EyeSeeTea/pictureapp
closed
Address not deleted when current method's changed
complexity - low (1hr) eReferrals priority - critical testing type - bug
Alpha tester report: Example from Mozambique 1. I enter "Current Method" = "Pills" 2. "HH delivery" shows and I enter "yes" 3. "Address" shows and I enter "anything" 4. I switch "Current Method" to "Condom" 5. "HH delivery" hides but not "Address" I actually want "Address" to be set empty and hidden again
1.0
Address not deleted when current method's changed - Alpha tester report: Example from Mozambique 1. I enter "Current Method" = "Pills" 2. "HH delivery" shows and I enter "yes" 3. "Address" shows and I enter "anything" 4. I switch "Current Method" to "Condom" 5. "HH delivery" hides but not "Address" I actually want "Address" to be set empty and hidden again
test
address not deleted when current method s changed alpha tester report example from mozambique i enter current method pills hh delivery shows and i enter yes address shows and i enter anything i switch current method to condom hh delivery hides but not address i actually want address to be set empty and hidden again
1
65,959
6,979,978,481
IssuesEvent
2017-12-12 23:10:38
participedia/frontend
https://api.github.com/repos/participedia/frontend
closed
Profile bugs
bug Team Launch User Testing Feedback
(I haven't been able to replicate these) **Amber (mac, chrome)** "I would like to be able to edit my profile including my name, etc. The edit button is a bit far from the rest of the content and when I try to use it, the changes that I made do not stick. In fact it returns a page with no content at all." I was not able to see my new submission on my profile page."
1.0
Profile bugs - (I haven't been able to replicate these) **Amber (mac, chrome)** "I would like to be able to edit my profile including my name, etc. The edit button is a bit far from the rest of the content and when I try to use it, the changes that I made do not stick. In fact it returns a page with no content at all." I was not able to see my new submission on my profile page."
test
profile bugs i haven t been able to replicate these amber mac chrome i would like to be able to edit my profile including my name etc the edit button is a bit far from the rest of the content and when i try to use it the changes that i made do not stick in fact it returns a page with no content at all i was not able to see my new submission on my profile page
1
87,498
3,755,451,344
IssuesEvent
2016-03-12 17:29:45
rsanchez-wsu/sp16-ceg3120
https://api.github.com/repos/rsanchez-wsu/sp16-ceg3120
opened
Multiple aliases of the same name can be saved.
priority-med state-todo team-5
Should check to see if alias name exists and prompt for overwrite if found.
1.0
Multiple aliases of the same name can be saved. - Should check to see if alias name exists and prompt for overwrite if found.
non_test
multiple aliases of the same name can be saved should check to see if alias name exists and prompt for overwrite if found
0
57,703
6,554,679,592
IssuesEvent
2017-09-06 07:15:36
brave/browser-laptop
https://api.github.com/repos/brave/browser-laptop
opened
Verified publisher icon not shown on manual upgrade
bug feature/ledger QA/test-plan-specified
- Did you search for similar issues before submitting this one? Yes - Describe the issue you encountered: Verified publisher icon not shown on manual upgrade - Platform (Win7, 8, 10? macOS? Linux distro?): Windows 10 x64 - Brave Version (revision SHA): Brave | 0.18.27 -- | -- rev | eba55a1 Muon | 4.3.16 - Steps to reproduce: 1. Install 0.18.23 build, do not enable payments 2. Manually upgrade to 0.18.27 by running the setup 3. Once updated to 0.18.27, enable payments and visit any verified publisher link (eg: brianbondy.com) 4. Wait for the entry to be added in the ledger table, no verified publisher icon is shown in URL or in ledger table 5. Quit and restart browser, no change 6. Disable/enable payments, no change 7. Delete all ledger*.json files from brave profile and relaunch browser and visit the same sites, verified publisher icon is shown - Actual result: Verified publisher icon not shown on manual upgrade - Expected result: Should show the verified publisher icon irrespective of when payment is enabled and how the browser update is done - Will the steps above reproduce in a fresh profile? If not what other info can be added? Yes - Is this an issue in the currently released version? Yes, Updated manually from 0.17.13 to 0.18.14, no verified publisher icon shown Auto updated from 0.18.14 to 0.18.23, still no verified publisher icon shown - Can this issue be consistently reproduced? Yes - Extra QA steps: 1. 2. 3. - Screenshot if needed: - Any related issues: cc: @mrose17 @NejcZdovc @cezaraugusto
1.0
Verified publisher icon not shown on manual upgrade - - Did you search for similar issues before submitting this one? Yes - Describe the issue you encountered: Verified publisher icon not shown on manual upgrade - Platform (Win7, 8, 10? macOS? Linux distro?): Windows 10 x64 - Brave Version (revision SHA): Brave | 0.18.27 -- | -- rev | eba55a1 Muon | 4.3.16 - Steps to reproduce: 1. Install 0.18.23 build, do not enable payments 2. Manually upgrade to 0.18.27 by running the setup 3. Once updated to 0.18.27, enable payments and visit any verified publisher link (eg: brianbondy.com) 4. Wait for the entry to be added in the ledger table, no verified publisher icon is shown in URL or in ledger table 5. Quit and restart browser, no change 6. Disable/enable payments, no change 7. Delete all ledger*.json files from brave profile and relaunch browser and visit the same sites, verified publisher icon is shown - Actual result: Verified publisher icon not shown on manual upgrade - Expected result: Should show the verified publisher icon irrespective of when payment is enabled and how the browser update is done - Will the steps above reproduce in a fresh profile? If not what other info can be added? Yes - Is this an issue in the currently released version? Yes, Updated manually from 0.17.13 to 0.18.14, no verified publisher icon shown Auto updated from 0.18.14 to 0.18.23, still no verified publisher icon shown - Can this issue be consistently reproduced? Yes - Extra QA steps: 1. 2. 3. - Screenshot if needed: - Any related issues: cc: @mrose17 @NejcZdovc @cezaraugusto
test
verified publisher icon not shown on manual upgrade did you search for similar issues before submitting this one yes describe the issue you encountered verified publisher icon not shown on manual upgrade platform macos linux distro windows brave version revision sha brave rev muon steps to reproduce install build do not enable payments manually upgrade to by running the setup once updated to enable payments and visit any verified publisher link eg brianbondy com wait for the entry to be added in the ledger table no verified publisher icon is shown in url or in ledger table quit and restart browser no change disable enable payments no change delete all ledger json files from brave profile and relaunch browser and visit the same sites verified publisher icon is shown actual result verified publisher icon not shown on manual upgrade expected result should show the verified publisher icon irrespective of when payment is enabled and how the browser update is done will the steps above reproduce in a fresh profile if not what other info can be added yes is this an issue in the currently released version yes updated manually from to no verified publisher icon shown auto updated from to still no verified publisher icon shown can this issue be consistently reproduced yes extra qa steps screenshot if needed any related issues cc nejczdovc cezaraugusto
1
155
2,494,768,325
IssuesEvent
2015-01-06 01:19:10
yaobinshi/test1
https://api.github.com/repos/yaobinshi/test1
opened
cli_Init()
Category: CLI Component: Rank Component: Tester Priority: Normal Status: Closed Tracker: Bug
--- Author Name: **Bob Gobeille** Original Redmine Issue: 111, http://www.fossology.org/issues/111 Original Date: 2011/12/17 Original Assignee: Bob Gobeille --- 1) cli_Init() sets $_SESSION['user'] = 'fossy'; Subverting authentication is an error. 2) loading all the plugins seems like a waste of time. I think cli_Init() should either go away, or accept an argument of the plugins to initialize.
1.0
cli_Init() - --- Author Name: **Bob Gobeille** Original Redmine Issue: 111, http://www.fossology.org/issues/111 Original Date: 2011/12/17 Original Assignee: Bob Gobeille --- 1) cli_Init() sets $_SESSION['user'] = 'fossy'; Subverting authentication is an error. 2) loading all the plugins seems like a waste of time. I think cli_Init() should either go away, or accept an argument of the plugins to initialize.
test
cli init author name bob gobeille original redmine issue original date original assignee bob gobeille cli init sets session fossy subverting authentication is an error loading all the plugins seems like a waste of time i think cli init should either go away or accept an argument of the plugins to initialize
1
85,066
16,598,132,883
IssuesEvent
2021-06-01 15:41:07
MichaelClerx/myokit
https://api.github.com/repos/MichaelClerx/myokit
closed
OpenCL sim pacing warnings
code
``` <program source>:127:16: warning: comparison of unsigned expression >= 0 is always true return (ix >= 0 && ix < 1 && iy >= 0 && iy < 1) ? pace : 0; ~~ ^ ~ <program source>:127:37: warning: comparison of unsigned expression >= 0 is always true return (ix >= 0 && ix < 1 && iy >= 0 && iy < 1) ? pace : 0; ~~ ^ ~ ``` Perfectly fine, really, but could still avoid them
1.0
OpenCL sim pacing warnings - ``` <program source>:127:16: warning: comparison of unsigned expression >= 0 is always true return (ix >= 0 && ix < 1 && iy >= 0 && iy < 1) ? pace : 0; ~~ ^ ~ <program source>:127:37: warning: comparison of unsigned expression >= 0 is always true return (ix >= 0 && ix < 1 && iy >= 0 && iy < 1) ? pace : 0; ~~ ^ ~ ``` Perfectly fine, really, but could still avoid them
non_test
opencl sim pacing warnings warning comparison of unsigned expression is always true return ix ix iy pace warning comparison of unsigned expression is always true return ix ix iy pace perfectly fine really but could still avoid them
0
788,464
27,753,795,921
IssuesEvent
2023-03-15 23:36:08
conan-io/conan
https://api.github.com/repos/conan-io/conan
closed
[scm] Do not copy ignored files to the cache (SCM optimization)
priority: low type: feature good first issue stage: queue complex: low component: scm
The SCM optimization that copies files from the user folder to the cache should take into account the ignored files/folders in the `.gitignore` (or SVN equivalent) and do not copy those files to the `scm_sources_folder`. This applies also to SVN. --- Taken from https://github.com/conan-io/conan/issues/6605#issuecomment-593837409
1.0
[scm] Do not copy ignored files to the cache (SCM optimization) - The SCM optimization that copies files from the user folder to the cache should take into account the ignored files/folders in the `.gitignore` (or SVN equivalent) and do not copy those files to the `scm_sources_folder`. This applies also to SVN. --- Taken from https://github.com/conan-io/conan/issues/6605#issuecomment-593837409
non_test
do not copy ignored files to the cache scm optimization the scm optimization that copies files from the user folder to the cache should take into account the ignored files folders in the gitignore or svn equivalent and do not copy those files to the scm sources folder this applies also to svn taken from
0
138,334
12,813,682,417
IssuesEvent
2020-07-04 14:25:11
mobileappdevhm20/team-project-team_4
https://api.github.com/repos/mobileappdevhm20/team-project-team_4
closed
Provide Documentation on GitHub Pages
Sprint 3 documentation
Set up documentation of your app on Github pages. The target group is other students who may want to maintain your app.
1.0
Provide Documentation on GitHub Pages - Set up documentation of your app on Github pages. The target group is other students who may want to maintain your app.
non_test
provide documentation on github pages set up documentation of your app on github pages the target group is other students who may want to maintain your app
0
7,122
2,878,638,545
IssuesEvent
2015-06-10 03:07:33
openshift/origin
https://api.github.com/repos/openshift/origin
closed
TestDNS integration test getting flaky on travis
area/tests priority/P3
``` failed TestDNS masterAddr: "127.0.0.1:47560" assetAddr: "127.0.0.1:37455" dnsAddr: "127.0.0.1:59453" I0324 21:58:02.583658 8171 start_allinone.go:180] Starting an OpenShift all-in-one I0324 21:58:02.583725 8171 create_signercert.go:76] Creating a signer cert with: admin.CreateSignerCertOptions{CertFile:"/tmp/openshift-integration/cert/ca/cert.crt", KeyFile:"/tmp/openshift-integration/cert/ca/key.key", SerialFile:"/tmp/openshift-integration/cert/ca/serial.txt", Name:"openshift-signer@1427234282", Overwrite:false} I0324 21:58:02.664766 8171 net.go:36] Got error &net.OpError{Op:"read", Net:"tcp", Addr:(*net.TCPAddr)(0xc2082b4240), Err:0x68}, trying again: "127.0.0.1:47560" I0324 21:58:02.774074 8171 create_allcerts.go:82] Creating all certs with: admin.CreateAllCertsOptions{CertDir:"/tmp/openshift-integration/cert", SignerName:"openshift-signer@1427234282", Hostnames:util.StringList{"127.0.0.1", "localhost"}, NodeList:util.StringList{"localhost"}, APIServerURL:"https://127.0.0.1:47560", PublicAPIServerURL:"https://127.0.0.1:47560", Overwrite:false} I0324 21:58:02.774125 8171 create_signercert.go:76] Creating a signer cert with: admin.CreateSignerCertOptions{CertFile:"/tmp/openshift-integration/cert/ca/cert.crt", KeyFile:"/tmp/openshift-integration/cert/ca/key.key", SerialFile:"/tmp/openshift-integration/cert/ca/serial.txt", Name:"openshift-signer@1427234282", Overwrite:false} I0324 21:58:02.963040 8171 create_clientcert.go:87] Creating a client cert with: admin.CreateClientCertOptions{GetSignerCertOptions:(*admin.GetSignerCertOptions)(0xc2081dc450), CertFile:"/tmp/openshift-integration/cert/openshift-deployer/cert.crt", KeyFile:"/tmp/openshift-integration/cert/openshift-deployer/key.key", User:"system:openshift-deployer", Groups:util.StringList{"system:deployers"}, Overwrite:false} and &admin.GetSignerCertOptions{CertFile:"/tmp/openshift-integration/cert/ca/cert.crt", KeyFile:"/tmp/openshift-integration/cert/ca/key.key", 
SerialFile:"/tmp/openshift-integration/cert/ca/serial.txt"} I0324 21:58:03.331811 8171 create_kubeconfig.go:115] creating a .kubeconfig with: admin.CreateKubeConfigOptions{APIServerURL:"https://127.0.0.1:47560", PublicAPIServerURL:"https://127.0.0.1:47560", APIServerCAFile:"/tmp/openshift-integration/cert/ca/cert.crt", ServerNick:"master", CertFile:"/tmp/openshift-integration/cert/openshift-deployer/cert.crt", KeyFile:"/tmp/openshift-integration/cert/openshift-deployer/key.key", UserNick:"openshift-deployer", KubeConfigFile:"/tmp/openshift-integration/cert/openshift-deployer/.kubeconfig"} I0324 21:58:03.341567 8171 create_clientcert.go:87] Creating a client cert with: admin.CreateClientCertOptions{GetSignerCertOptions:(*admin.GetSignerCertOptions)(0xc2081dc450), CertFile:"/tmp/openshift-integration/cert/openshift-client/cert.crt", KeyFile:"/tmp/openshift-integration/cert/openshift-client/key.key", User:"system:openshift-client", Groups:util.StringList{}, Overwrite:false} and &admin.GetSignerCertOptions{CertFile:"/tmp/openshift-integration/cert/ca/cert.crt", KeyFile:"/tmp/openshift-integration/cert/ca/key.key", SerialFile:"/tmp/openshift-integration/cert/ca/serial.txt"} I0324 21:58:03.707516 8171 net.go:36] Got error &net.OpError{Op:"dial", Net:"tcp", Addr:(*net.TCPAddr)(0xc20828e030), Err:0x6f}, trying again: "127.0.0.1:47560" I0324 21:58:03.712676 8171 create_kubeconfig.go:115] creating a .kubeconfig with: admin.CreateKubeConfigOptions{APIServerURL:"https://127.0.0.1:47560", PublicAPIServerURL:"https://127.0.0.1:47560", APIServerCAFile:"/tmp/openshift-integration/cert/ca/cert.crt", ServerNick:"master", CertFile:"/tmp/openshift-integration/cert/openshift-client/cert.crt", KeyFile:"/tmp/openshift-integration/cert/openshift-client/key.key", UserNick:"openshift-client", KubeConfigFile:"/tmp/openshift-integration/cert/openshift-client/.kubeconfig"} I0324 21:58:03.724671 8171 create_clientcert.go:87] Creating a client cert with: 
admin.CreateClientCertOptions{GetSignerCertOptions:(*admin.GetSignerCertOptions)(0xc2081dc450), CertFile:"/tmp/openshift-integration/cert/kube-client/cert.crt", KeyFile:"/tmp/openshift-integration/cert/kube-client/key.key", User:"system:kube-client", Groups:util.StringList{}, Overwrite:false} and &admin.GetSignerCertOptions{CertFile:"/tmp/openshift-integration/cert/ca/cert.crt", KeyFile:"/tmp/openshift-integration/cert/ca/key.key", SerialFile:"/tmp/openshift-integration/cert/ca/serial.txt"} I0324 21:58:04.095725 8171 create_kubeconfig.go:115] creating a .kubeconfig with: admin.CreateKubeConfigOptions{APIServerURL:"https://127.0.0.1:47560", PublicAPIServerURL:"https://127.0.0.1:47560", APIServerCAFile:"/tmp/openshift-integration/cert/ca/cert.crt", ServerNick:"master", CertFile:"/tmp/openshift-integration/cert/kube-client/cert.crt", KeyFile:"/tmp/openshift-integration/cert/kube-client/key.key", UserNick:"kube-client", KubeConfigFile:"/tmp/openshift-integration/cert/kube-client/.kubeconfig"} I0324 21:58:04.105152 8171 create_clientcert.go:87] Creating a client cert with: admin.CreateClientCertOptions{GetSignerCertOptions:(*admin.GetSignerCertOptions)(0xc2081dc450), CertFile:"/tmp/openshift-integration/cert/admin/cert.crt", KeyFile:"/tmp/openshift-integration/cert/admin/key.key", User:"system:admin", Groups:util.StringList{"system:cluster-admins"}, Overwrite:false} and &admin.GetSignerCertOptions{CertFile:"/tmp/openshift-integration/cert/ca/cert.crt", KeyFile:"/tmp/openshift-integration/cert/ca/key.key", SerialFile:"/tmp/openshift-integration/cert/ca/serial.txt"} I0324 21:58:04.473101 8171 create_kubeconfig.go:115] creating a .kubeconfig with: admin.CreateKubeConfigOptions{APIServerURL:"https://127.0.0.1:47560", PublicAPIServerURL:"https://127.0.0.1:47560", APIServerCAFile:"/tmp/openshift-integration/cert/ca/cert.crt", ServerNick:"master", CertFile:"/tmp/openshift-integration/cert/admin/cert.crt", KeyFile:"/tmp/openshift-integration/cert/admin/key.key", 
UserNick:"admin", KubeConfigFile:"/tmp/openshift-integration/cert/admin/.kubeconfig"} I0324 21:58:04.482172 8171 create_clientcert.go:87] Creating a client cert with: admin.CreateClientCertOptions{GetSignerCertOptions:(*admin.GetSignerCertOptions)(0xc2081dc450), CertFile:"/tmp/openshift-integration/cert/openshift-router/cert.crt", KeyFile:"/tmp/openshift-integration/cert/openshift-router/key.key", User:"system:openshift-router", Groups:util.StringList{"system:routers"}, Overwrite:false} and &admin.GetSignerCertOptions{CertFile:"/tmp/openshift-integration/cert/ca/cert.crt", KeyFile:"/tmp/openshift-integration/cert/ca/key.key", SerialFile:"/tmp/openshift-integration/cert/ca/serial.txt"} I0324 21:58:04.765236 8171 net.go:36] Got error &net.OpError{Op:"dial", Net:"tcp", Addr:(*net.TCPAddr)(0xc20828e060), Err:0x6f}, trying again: "127.0.0.1:47560" I0324 21:58:04.854811 8171 create_kubeconfig.go:115] creating a .kubeconfig with: admin.CreateKubeConfigOptions{APIServerURL:"https://127.0.0.1:47560", PublicAPIServerURL:"https://127.0.0.1:47560", APIServerCAFile:"/tmp/openshift-integration/cert/ca/cert.crt", ServerNick:"master", CertFile:"/tmp/openshift-integration/cert/openshift-router/cert.crt", KeyFile:"/tmp/openshift-integration/cert/openshift-router/key.key", UserNick:"openshift-router", KubeConfigFile:"/tmp/openshift-integration/cert/openshift-router/.kubeconfig"} I0324 21:58:04.864463 8171 create_clientcert.go:87] Creating a client cert with: admin.CreateClientCertOptions{GetSignerCertOptions:(*admin.GetSignerCertOptions)(0xc2081dc450), CertFile:"/tmp/openshift-integration/cert/openshift-registry/cert.crt", KeyFile:"/tmp/openshift-integration/cert/openshift-registry/key.key", User:"system:openshift-registry", Groups:util.StringList{"system:registries"}, Overwrite:false} and &admin.GetSignerCertOptions{CertFile:"/tmp/openshift-integration/cert/ca/cert.crt", KeyFile:"/tmp/openshift-integration/cert/ca/key.key", SerialFile:"/tmp/openshift-integration/cert/ca/serial.txt"} 
I0324 21:58:05.230945 8171 create_kubeconfig.go:115] creating a .kubeconfig with: admin.CreateKubeConfigOptions{APIServerURL:"https://127.0.0.1:47560", PublicAPIServerURL:"https://127.0.0.1:47560", APIServerCAFile:"/tmp/openshift-integration/cert/ca/cert.crt", ServerNick:"master", CertFile:"/tmp/openshift-integration/cert/openshift-registry/cert.crt", KeyFile:"/tmp/openshift-integration/cert/openshift-registry/key.key", UserNick:"openshift-registry", KubeConfigFile:"/tmp/openshift-integration/cert/openshift-registry/.kubeconfig"} I0324 21:58:05.240737 8171 create_servercert.go:81] Creating a server cert with: admin.CreateServerCertOptions{GetSignerCertOptions:(*admin.GetSignerCertOptions)(0xc2081dc450), CertFile:"/tmp/openshift-integration/cert/node-localhost/server.crt", KeyFile:"/tmp/openshift-integration/cert/node-localhost/server.key", Hostnames:util.StringList{"localhost"}, Overwrite:false} I0324 21:58:05.426912 8171 crypto.go:257] Generating server certificate in /tmp/openshift-integration/cert/node-localhost/server.crt, key in /tmp/openshift-integration/cert/node-localhost/server.key I0324 21:58:05.816774 8171 net.go:36] Got error &net.OpError{Op:"dial", Net:"tcp", Addr:(*net.TCPAddr)(0xc20828e030), Err:0x6f}, trying again: "127.0.0.1:47560" I0324 21:58:05.992097 8171 create_nodeclientcerts.go:83] Creating a node client cert with: admin.CreateNodeClientCertOptions{GetSignerCertOptions:(*admin.GetSignerCertOptions)(0xc2081dc450), CertFile:"/tmp/openshift-integration/cert/node-localhost/client.crt", KeyFile:"/tmp/openshift-integration/cert/node-localhost/client.key", NodeName:"localhost", Overwrite:false} and &admin.GetSignerCertOptions{CertFile:"/tmp/openshift-integration/cert/ca/cert.crt", KeyFile:"/tmp/openshift-integration/cert/ca/key.key", SerialFile:"/tmp/openshift-integration/cert/ca/serial.txt"} I0324 21:58:05.992156 8171 create_clientcert.go:87] Creating a client cert with: 
admin.CreateClientCertOptions{GetSignerCertOptions:(*admin.GetSignerCertOptions)(0xc2081dc450), CertFile:"/tmp/openshift-integration/cert/node-localhost/client.crt", KeyFile:"/tmp/openshift-integration/cert/node-localhost/client.key", User:"system:node-localhost", Groups:util.StringList{"system:nodes"}, Overwrite:false} and &admin.GetSignerCertOptions{CertFile:"/tmp/openshift-integration/cert/ca/cert.crt", KeyFile:"/tmp/openshift-integration/cert/ca/key.key", SerialFile:"/tmp/openshift-integration/cert/ca/serial.txt"} I0324 21:58:06.163104 8171 crypto.go:282] Generating client cert in /tmp/openshift-integration/cert/node-localhost/client.crt and key in /tmp/openshift-integration/cert/node-localhost/client.key I0324 21:58:06.868683 8171 net.go:36] Got error &net.OpError{Op:"dial", Net:"tcp", Addr:(*net.TCPAddr)(0xc2081dc030), Err:0x6f}, trying again: "127.0.0.1:47560" I0324 21:58:07.729559 8171 create_kubeconfig.go:115] creating a .kubeconfig with: admin.CreateKubeConfigOptions{APIServerURL:"https://127.0.0.1:47560", PublicAPIServerURL:"https://127.0.0.1:47560", APIServerCAFile:"/tmp/openshift-integration/cert/ca/cert.crt", ServerNick:"master", CertFile:"/tmp/openshift-integration/cert/node-localhost/client.crt", KeyFile:"/tmp/openshift-integration/cert/node-localhost/client.key", UserNick:"localhost", KubeConfigFile:"/tmp/openshift-integration/cert/node-localhost/.kubeconfig"} I0324 21:58:07.739672 8171 create_servercert.go:81] Creating a server cert with: admin.CreateServerCertOptions{GetSignerCertOptions:(*admin.GetSignerCertOptions)(0xc2081dc450), CertFile:"/tmp/openshift-integration/cert/master/cert.crt", KeyFile:"/tmp/openshift-integration/cert/master/key.key", Hostnames:util.StringList{"127.0.0.1", "localhost"}, Overwrite:false} I0324 21:58:07.919355 8171 net.go:36] Got error &net.OpError{Op:"dial", Net:"tcp", Addr:(*net.TCPAddr)(0xc2081dc000), Err:0x6f}, trying again: "127.0.0.1:47560" I0324 21:58:08.118131 8171 crypto.go:249] Found existing server 
certificate in /tmp/openshift-integration/cert/master/cert.crt I0324 21:58:08.118188 8171 create_servercert.go:81] Creating a server cert with: admin.CreateServerCertOptions{GetSignerCertOptions:(*admin.GetSignerCertOptions)(0xc2081dc450), CertFile:"/tmp/openshift-integration/cert/master/cert.crt", KeyFile:"/tmp/openshift-integration/cert/master/key.key", Hostnames:util.StringList{"127.0.0.1", "localhost"}, Overwrite:false} I0324 21:58:08.488951 8171 crypto.go:249] Found existing server certificate in /tmp/openshift-integration/cert/master/cert.crt I0324 21:58:08.489047 8171 create_kubeconfig.go:115] creating a .kubeconfig with: admin.CreateKubeConfigOptions{APIServerURL:"https://127.0.0.1:47560", PublicAPIServerURL:"https://127.0.0.1:47560", APIServerCAFile:"/tmp/openshift-integration/cert/ca/cert.crt", ServerNick:"master", CertFile:"/tmp/openshift-integration/cert/openshift-deployer/cert.crt", KeyFile:"/tmp/openshift-integration/cert/openshift-deployer/key.key", UserNick:"openshift-deployer", KubeConfigFile:"/tmp/openshift-integration/cert/openshift-deployer/.kubeconfig"} I0324 21:58:08.498792 8171 create_kubeconfig.go:115] creating a .kubeconfig with: admin.CreateKubeConfigOptions{APIServerURL:"https://127.0.0.1:47560", PublicAPIServerURL:"https://127.0.0.1:47560", APIServerCAFile:"/tmp/openshift-integration/cert/ca/cert.crt", ServerNick:"master", CertFile:"/tmp/openshift-integration/cert/openshift-client/cert.crt", KeyFile:"/tmp/openshift-integration/cert/openshift-client/key.key", UserNick:"openshift-client", KubeConfigFile:"/tmp/openshift-integration/cert/openshift-client/.kubeconfig"} I0324 21:58:08.507921 8171 create_kubeconfig.go:115] creating a .kubeconfig with: admin.CreateKubeConfigOptions{APIServerURL:"https://127.0.0.1:47560", PublicAPIServerURL:"https://127.0.0.1:47560", APIServerCAFile:"/tmp/openshift-integration/cert/ca/cert.crt", ServerNick:"master", CertFile:"/tmp/openshift-integration/cert/kube-client/cert.crt", 
KeyFile:"/tmp/openshift-integration/cert/kube-client/key.key", UserNick:"kube-client", KubeConfigFile:"/tmp/openshift-integration/cert/kube-client/.kubeconfig"} I0324 21:58:08.516910 8171 create_kubeconfig.go:115] creating a .kubeconfig with: admin.CreateKubeConfigOptions{APIServerURL:"https://127.0.0.1:47560", PublicAPIServerURL:"https://127.0.0.1:47560", APIServerCAFile:"/tmp/openshift-integration/cert/ca/cert.crt", ServerNick:"master", CertFile:"/tmp/openshift-integration/cert/admin/cert.crt", KeyFile:"/tmp/openshift-integration/cert/admin/key.key", UserNick:"admin", KubeConfigFile:"/tmp/openshift-integration/cert/admin/.kubeconfig"} I0324 21:58:08.528990 8171 create_kubeconfig.go:115] creating a .kubeconfig with: admin.CreateKubeConfigOptions{APIServerURL:"https://127.0.0.1:47560", PublicAPIServerURL:"https://127.0.0.1:47560", APIServerCAFile:"/tmp/openshift-integration/cert/ca/cert.crt", ServerNick:"master", CertFile:"/tmp/openshift-integration/cert/openshift-router/cert.crt", KeyFile:"/tmp/openshift-integration/cert/openshift-router/key.key", UserNick:"openshift-router", KubeConfigFile:"/tmp/openshift-integration/cert/openshift-router/.kubeconfig"} I0324 21:58:08.538465 8171 create_kubeconfig.go:115] creating a .kubeconfig with: admin.CreateKubeConfigOptions{APIServerURL:"https://127.0.0.1:47560", PublicAPIServerURL:"https://127.0.0.1:47560", APIServerCAFile:"/tmp/openshift-integration/cert/ca/cert.crt", ServerNick:"master", CertFile:"/tmp/openshift-integration/cert/openshift-registry/cert.crt", KeyFile:"/tmp/openshift-integration/cert/openshift-registry/key.key", UserNick:"openshift-registry", KubeConfigFile:"/tmp/openshift-integration/cert/openshift-registry/.kubeconfig"} I0324 21:58:08.551344 8171 start_master.go:321] Starting an OpenShift master, reachable at 127.0.0.1:47560 (etcd: http://127.0.0.1:4001) I0324 21:58:08.551363 8171 start_master.go:322] OpenShift master public address is https://127.0.0.1:47560 I0324 21:58:08.959487 8171 net.go:36] Got 
error &net.OpError{Op:"dial", Net:"tcp", Addr:(*net.TCPAddr)(0xc20828e030), Err:0x6f}, trying again: "127.0.0.1:47560" I0324 21:58:09.185826 8171 start_master.go:354] Static Nodes: [localhost] I0324 21:58:09.186388 8171 master.go:260] Setting master service IPs based on PortalNet subnet to "172.30.17.1" (read-only) and "172.30.17.2" (read-write). E0324 21:58:09.197737 8171 reflector.go:115] Failed to list *api.ResourceQuota: Get https://127.0.0.1:47560/api/v1beta1/resourceQuotas?namespace=: dial tcp 127.0.0.1:47560: connection refused E0324 21:58:09.197784 8171 reflector.go:115] Failed to list *api.LimitRange: Get https://127.0.0.1:47560/api/v1beta1/limitRanges?namespace=: dial tcp 127.0.0.1:47560: connection refused I0324 21:58:09.225363 8171 plugin.go:29] Route plugin initialized with suffix=router.default.local 2015/03/24 21:58:09 [restful/swagger] listing is available at /swaggerapi/ 2015/03/24 21:58:09 [restful/swagger] Swagger(File)Path is empty ; no UI is served I0324 21:58:09.285943 8171 master.go:402] Started Kubernetes API at 127.0.0.1:47560/api/v1beta1 I0324 21:58:09.285978 8171 master.go:402] Started Kubernetes API at 127.0.0.1:47560/api/v1beta2 I0324 21:58:09.285989 8171 master.go:402] Started Kubernetes API at 127.0.0.1:47560/api/v1beta3 (experimental) I0324 21:58:09.286001 8171 master.go:402] Started OpenShift API at 127.0.0.1:47560/osapi/v1beta1 I0324 21:58:09.286012 8171 master.go:402] Started OAuth2 API at 127.0.0.1:47560/oauth I0324 21:58:09.286023 8171 master.go:402] Started login server at 127.0.0.1:47560/login I0324 21:58:09.286032 8171 master.go:402] Started Swagger Schema API at 127.0.0.1:47560/swaggerapi/ I0324 21:58:09.305956 8171 net.go:36] Got error &net.OpError{Op:"dial", Net:"tcp", Addr:(*net.TCPAddr)(0xc208654f30), Err:0x6f}, trying again: "127.0.0.1:47560" I0324 21:58:09.430109 8171 net.go:36] Got error &net.OpError{Op:"dial", Net:"tcp", Addr:(*net.TCPAddr)(0xc208654000), Err:0x6f}, trying again: "127.0.0.1:47560" I0324 
21:58:09.553584 8171 master.go:459] No master policy found. Creating bootstrap policy based on: /tmp/openshift-integration/policy/policy.json I0324 21:58:10.564720 8171 factory.go:78] creating scheduler from algorithm provider 'DefaultProvider' I0324 21:58:10.564745 8171 factory.go:108] creating scheduler with fit predicates 'map[PodFitsPorts:{} PodFitsResources:{} NoDiskConflict:{} MatchNodeSelector:{} HostName:{}]' and priority functions 'map[LeastRequestedPriority:{} ServiceSpreadingPriority:{} EqualPriority:{}] I0324 21:58:10.564898 8171 master.go:110] Started Kubernetes Scheduler I0324 21:58:10.564926 8171 master.go:91] Started Kubernetes Replication Manager I0324 21:58:10.564936 8171 master.go:99] Started Kubernetes Endpoint Controller I0324 21:58:10.855892 8171 nodecontroller.go:158] Registered node in registry: localhost I0324 21:58:10.855919 8171 nodecontroller.go:163] Successfully registered all nodes I0324 21:58:10.855933 8171 master.go:137] Started Kubernetes Minion Controller I0324 21:58:10.856035 8171 start_master.go:388] Using images from "openshift/origin-<component>:" W0324 21:58:10.856126 8171 master.go:612] Binding DNS on port 59453 instead of 53 (you may need to run as root and update your config), using 127.0.0.1:59453 which will not resolve from all locations 2015/03/24 21:58:10 skydns: ready for queries on local. for tcp://127.0.0.1:59453 [rcache 0] 2015/03/24 21:58:10 skydns: ready for queries on local. 
for udp://127.0.0.1:59453 [rcache 0] I0324 21:58:10.890828 8171 net.go:36] Got error &net.OpError{Op:"dial", Net:"tcp", Addr:(*net.TCPAddr)(0xc20869fa10), Err:0x6f}, trying again: "127.0.0.1:59453" I0324 21:58:11.073705 8171 master.go:627] OpenShift DNS listening at 127.0.0.1:59453 I0324 21:58:11.080762 8171 master.go:586] OpenShift UI listening at https://127.0.0.1:37455 I0324 21:58:11.114563 8171 net.go:36] Got error &net.OpError{Op:"dial", Net:"tcp", Addr:(*net.TCPAddr)(0xc208829d10), Err:0x6f}, trying again: "127.0.0.1:37455" I0324 21:58:11.137988 8171 http.go:60] HTTP probe error: Get http://localhost:10250/healthz: dial tcp 127.0.0.1:10250: connection refused E0324 21:58:11.178227 8171 nodecontroller.go:315] Can't collect information for node localhost: Get http://localhost:10250/api/v1beta1/nodeInfo: dial tcp 127.0.0.1:10250: connection refused I0324 21:58:11.180565 8171 nodecontroller.go:232] updating node localhost I0324 21:58:11.259431 8171 net.go:36] Got error &net.OpError{Op:"dial", Net:"tcp", Addr:(*net.TCPAddr)(0xc2087e25a0), Err:0x6f}, trying again: "127.0.0.1:37455" F0324 21:58:11.342440 8171 master.go:587] listen tcp 127.0.0.1:37455: bind: address already in use Running TestDeleteBuild... ``` We may not be waiting long enough in StartTestMaster. Needs to be investigated.
1.0
TestDNS integration test getting flaky on travis - ``` failed TestDNS masterAddr: "127.0.0.1:47560" assetAddr: "127.0.0.1:37455" dnsAddr: "127.0.0.1:59453" I0324 21:58:02.583658 8171 start_allinone.go:180] Starting an OpenShift all-in-one I0324 21:58:02.583725 8171 create_signercert.go:76] Creating a signer cert with: admin.CreateSignerCertOptions{CertFile:"/tmp/openshift-integration/cert/ca/cert.crt", KeyFile:"/tmp/openshift-integration/cert/ca/key.key", SerialFile:"/tmp/openshift-integration/cert/ca/serial.txt", Name:"openshift-signer@1427234282", Overwrite:false} I0324 21:58:02.664766 8171 net.go:36] Got error &net.OpError{Op:"read", Net:"tcp", Addr:(*net.TCPAddr)(0xc2082b4240), Err:0x68}, trying again: "127.0.0.1:47560" I0324 21:58:02.774074 8171 create_allcerts.go:82] Creating all certs with: admin.CreateAllCertsOptions{CertDir:"/tmp/openshift-integration/cert", SignerName:"openshift-signer@1427234282", Hostnames:util.StringList{"127.0.0.1", "localhost"}, NodeList:util.StringList{"localhost"}, APIServerURL:"https://127.0.0.1:47560", PublicAPIServerURL:"https://127.0.0.1:47560", Overwrite:false} I0324 21:58:02.774125 8171 create_signercert.go:76] Creating a signer cert with: admin.CreateSignerCertOptions{CertFile:"/tmp/openshift-integration/cert/ca/cert.crt", KeyFile:"/tmp/openshift-integration/cert/ca/key.key", SerialFile:"/tmp/openshift-integration/cert/ca/serial.txt", Name:"openshift-signer@1427234282", Overwrite:false} I0324 21:58:02.963040 8171 create_clientcert.go:87] Creating a client cert with: admin.CreateClientCertOptions{GetSignerCertOptions:(*admin.GetSignerCertOptions)(0xc2081dc450), CertFile:"/tmp/openshift-integration/cert/openshift-deployer/cert.crt", KeyFile:"/tmp/openshift-integration/cert/openshift-deployer/key.key", User:"system:openshift-deployer", Groups:util.StringList{"system:deployers"}, Overwrite:false} and &admin.GetSignerCertOptions{CertFile:"/tmp/openshift-integration/cert/ca/cert.crt", 
KeyFile:"/tmp/openshift-integration/cert/ca/key.key", SerialFile:"/tmp/openshift-integration/cert/ca/serial.txt"} I0324 21:58:03.331811 8171 create_kubeconfig.go:115] creating a .kubeconfig with: admin.CreateKubeConfigOptions{APIServerURL:"https://127.0.0.1:47560", PublicAPIServerURL:"https://127.0.0.1:47560", APIServerCAFile:"/tmp/openshift-integration/cert/ca/cert.crt", ServerNick:"master", CertFile:"/tmp/openshift-integration/cert/openshift-deployer/cert.crt", KeyFile:"/tmp/openshift-integration/cert/openshift-deployer/key.key", UserNick:"openshift-deployer", KubeConfigFile:"/tmp/openshift-integration/cert/openshift-deployer/.kubeconfig"} I0324 21:58:03.341567 8171 create_clientcert.go:87] Creating a client cert with: admin.CreateClientCertOptions{GetSignerCertOptions:(*admin.GetSignerCertOptions)(0xc2081dc450), CertFile:"/tmp/openshift-integration/cert/openshift-client/cert.crt", KeyFile:"/tmp/openshift-integration/cert/openshift-client/key.key", User:"system:openshift-client", Groups:util.StringList{}, Overwrite:false} and &admin.GetSignerCertOptions{CertFile:"/tmp/openshift-integration/cert/ca/cert.crt", KeyFile:"/tmp/openshift-integration/cert/ca/key.key", SerialFile:"/tmp/openshift-integration/cert/ca/serial.txt"} I0324 21:58:03.707516 8171 net.go:36] Got error &net.OpError{Op:"dial", Net:"tcp", Addr:(*net.TCPAddr)(0xc20828e030), Err:0x6f}, trying again: "127.0.0.1:47560" I0324 21:58:03.712676 8171 create_kubeconfig.go:115] creating a .kubeconfig with: admin.CreateKubeConfigOptions{APIServerURL:"https://127.0.0.1:47560", PublicAPIServerURL:"https://127.0.0.1:47560", APIServerCAFile:"/tmp/openshift-integration/cert/ca/cert.crt", ServerNick:"master", CertFile:"/tmp/openshift-integration/cert/openshift-client/cert.crt", KeyFile:"/tmp/openshift-integration/cert/openshift-client/key.key", UserNick:"openshift-client", KubeConfigFile:"/tmp/openshift-integration/cert/openshift-client/.kubeconfig"} I0324 21:58:03.724671 8171 create_clientcert.go:87] Creating a 
client cert with: admin.CreateClientCertOptions{GetSignerCertOptions:(*admin.GetSignerCertOptions)(0xc2081dc450), CertFile:"/tmp/openshift-integration/cert/kube-client/cert.crt", KeyFile:"/tmp/openshift-integration/cert/kube-client/key.key", User:"system:kube-client", Groups:util.StringList{}, Overwrite:false} and &admin.GetSignerCertOptions{CertFile:"/tmp/openshift-integration/cert/ca/cert.crt", KeyFile:"/tmp/openshift-integration/cert/ca/key.key", SerialFile:"/tmp/openshift-integration/cert/ca/serial.txt"} I0324 21:58:04.095725 8171 create_kubeconfig.go:115] creating a .kubeconfig with: admin.CreateKubeConfigOptions{APIServerURL:"https://127.0.0.1:47560", PublicAPIServerURL:"https://127.0.0.1:47560", APIServerCAFile:"/tmp/openshift-integration/cert/ca/cert.crt", ServerNick:"master", CertFile:"/tmp/openshift-integration/cert/kube-client/cert.crt", KeyFile:"/tmp/openshift-integration/cert/kube-client/key.key", UserNick:"kube-client", KubeConfigFile:"/tmp/openshift-integration/cert/kube-client/.kubeconfig"} I0324 21:58:04.105152 8171 create_clientcert.go:87] Creating a client cert with: admin.CreateClientCertOptions{GetSignerCertOptions:(*admin.GetSignerCertOptions)(0xc2081dc450), CertFile:"/tmp/openshift-integration/cert/admin/cert.crt", KeyFile:"/tmp/openshift-integration/cert/admin/key.key", User:"system:admin", Groups:util.StringList{"system:cluster-admins"}, Overwrite:false} and &admin.GetSignerCertOptions{CertFile:"/tmp/openshift-integration/cert/ca/cert.crt", KeyFile:"/tmp/openshift-integration/cert/ca/key.key", SerialFile:"/tmp/openshift-integration/cert/ca/serial.txt"} I0324 21:58:04.473101 8171 create_kubeconfig.go:115] creating a .kubeconfig with: admin.CreateKubeConfigOptions{APIServerURL:"https://127.0.0.1:47560", PublicAPIServerURL:"https://127.0.0.1:47560", APIServerCAFile:"/tmp/openshift-integration/cert/ca/cert.crt", ServerNick:"master", CertFile:"/tmp/openshift-integration/cert/admin/cert.crt", 
KeyFile:"/tmp/openshift-integration/cert/admin/key.key", UserNick:"admin", KubeConfigFile:"/tmp/openshift-integration/cert/admin/.kubeconfig"} I0324 21:58:04.482172 8171 create_clientcert.go:87] Creating a client cert with: admin.CreateClientCertOptions{GetSignerCertOptions:(*admin.GetSignerCertOptions)(0xc2081dc450), CertFile:"/tmp/openshift-integration/cert/openshift-router/cert.crt", KeyFile:"/tmp/openshift-integration/cert/openshift-router/key.key", User:"system:openshift-router", Groups:util.StringList{"system:routers"}, Overwrite:false} and &admin.GetSignerCertOptions{CertFile:"/tmp/openshift-integration/cert/ca/cert.crt", KeyFile:"/tmp/openshift-integration/cert/ca/key.key", SerialFile:"/tmp/openshift-integration/cert/ca/serial.txt"} I0324 21:58:04.765236 8171 net.go:36] Got error &net.OpError{Op:"dial", Net:"tcp", Addr:(*net.TCPAddr)(0xc20828e060), Err:0x6f}, trying again: "127.0.0.1:47560" I0324 21:58:04.854811 8171 create_kubeconfig.go:115] creating a .kubeconfig with: admin.CreateKubeConfigOptions{APIServerURL:"https://127.0.0.1:47560", PublicAPIServerURL:"https://127.0.0.1:47560", APIServerCAFile:"/tmp/openshift-integration/cert/ca/cert.crt", ServerNick:"master", CertFile:"/tmp/openshift-integration/cert/openshift-router/cert.crt", KeyFile:"/tmp/openshift-integration/cert/openshift-router/key.key", UserNick:"openshift-router", KubeConfigFile:"/tmp/openshift-integration/cert/openshift-router/.kubeconfig"} I0324 21:58:04.864463 8171 create_clientcert.go:87] Creating a client cert with: admin.CreateClientCertOptions{GetSignerCertOptions:(*admin.GetSignerCertOptions)(0xc2081dc450), CertFile:"/tmp/openshift-integration/cert/openshift-registry/cert.crt", KeyFile:"/tmp/openshift-integration/cert/openshift-registry/key.key", User:"system:openshift-registry", Groups:util.StringList{"system:registries"}, Overwrite:false} and &admin.GetSignerCertOptions{CertFile:"/tmp/openshift-integration/cert/ca/cert.crt", KeyFile:"/tmp/openshift-integration/cert/ca/key.key", 
SerialFile:"/tmp/openshift-integration/cert/ca/serial.txt"} I0324 21:58:05.230945 8171 create_kubeconfig.go:115] creating a .kubeconfig with: admin.CreateKubeConfigOptions{APIServerURL:"https://127.0.0.1:47560", PublicAPIServerURL:"https://127.0.0.1:47560", APIServerCAFile:"/tmp/openshift-integration/cert/ca/cert.crt", ServerNick:"master", CertFile:"/tmp/openshift-integration/cert/openshift-registry/cert.crt", KeyFile:"/tmp/openshift-integration/cert/openshift-registry/key.key", UserNick:"openshift-registry", KubeConfigFile:"/tmp/openshift-integration/cert/openshift-registry/.kubeconfig"} I0324 21:58:05.240737 8171 create_servercert.go:81] Creating a server cert with: admin.CreateServerCertOptions{GetSignerCertOptions:(*admin.GetSignerCertOptions)(0xc2081dc450), CertFile:"/tmp/openshift-integration/cert/node-localhost/server.crt", KeyFile:"/tmp/openshift-integration/cert/node-localhost/server.key", Hostnames:util.StringList{"localhost"}, Overwrite:false} I0324 21:58:05.426912 8171 crypto.go:257] Generating server certificate in /tmp/openshift-integration/cert/node-localhost/server.crt, key in /tmp/openshift-integration/cert/node-localhost/server.key I0324 21:58:05.816774 8171 net.go:36] Got error &net.OpError{Op:"dial", Net:"tcp", Addr:(*net.TCPAddr)(0xc20828e030), Err:0x6f}, trying again: "127.0.0.1:47560" I0324 21:58:05.992097 8171 create_nodeclientcerts.go:83] Creating a node client cert with: admin.CreateNodeClientCertOptions{GetSignerCertOptions:(*admin.GetSignerCertOptions)(0xc2081dc450), CertFile:"/tmp/openshift-integration/cert/node-localhost/client.crt", KeyFile:"/tmp/openshift-integration/cert/node-localhost/client.key", NodeName:"localhost", Overwrite:false} and &admin.GetSignerCertOptions{CertFile:"/tmp/openshift-integration/cert/ca/cert.crt", KeyFile:"/tmp/openshift-integration/cert/ca/key.key", SerialFile:"/tmp/openshift-integration/cert/ca/serial.txt"} I0324 21:58:05.992156 8171 create_clientcert.go:87] Creating a client cert with: 
admin.CreateClientCertOptions{GetSignerCertOptions:(*admin.GetSignerCertOptions)(0xc2081dc450), CertFile:"/tmp/openshift-integration/cert/node-localhost/client.crt", KeyFile:"/tmp/openshift-integration/cert/node-localhost/client.key", User:"system:node-localhost", Groups:util.StringList{"system:nodes"}, Overwrite:false} and &admin.GetSignerCertOptions{CertFile:"/tmp/openshift-integration/cert/ca/cert.crt", KeyFile:"/tmp/openshift-integration/cert/ca/key.key", SerialFile:"/tmp/openshift-integration/cert/ca/serial.txt"} I0324 21:58:06.163104 8171 crypto.go:282] Generating client cert in /tmp/openshift-integration/cert/node-localhost/client.crt and key in /tmp/openshift-integration/cert/node-localhost/client.key I0324 21:58:06.868683 8171 net.go:36] Got error &net.OpError{Op:"dial", Net:"tcp", Addr:(*net.TCPAddr)(0xc2081dc030), Err:0x6f}, trying again: "127.0.0.1:47560" I0324 21:58:07.729559 8171 create_kubeconfig.go:115] creating a .kubeconfig with: admin.CreateKubeConfigOptions{APIServerURL:"https://127.0.0.1:47560", PublicAPIServerURL:"https://127.0.0.1:47560", APIServerCAFile:"/tmp/openshift-integration/cert/ca/cert.crt", ServerNick:"master", CertFile:"/tmp/openshift-integration/cert/node-localhost/client.crt", KeyFile:"/tmp/openshift-integration/cert/node-localhost/client.key", UserNick:"localhost", KubeConfigFile:"/tmp/openshift-integration/cert/node-localhost/.kubeconfig"} I0324 21:58:07.739672 8171 create_servercert.go:81] Creating a server cert with: admin.CreateServerCertOptions{GetSignerCertOptions:(*admin.GetSignerCertOptions)(0xc2081dc450), CertFile:"/tmp/openshift-integration/cert/master/cert.crt", KeyFile:"/tmp/openshift-integration/cert/master/key.key", Hostnames:util.StringList{"127.0.0.1", "localhost"}, Overwrite:false} I0324 21:58:07.919355 8171 net.go:36] Got error &net.OpError{Op:"dial", Net:"tcp", Addr:(*net.TCPAddr)(0xc2081dc000), Err:0x6f}, trying again: "127.0.0.1:47560" I0324 21:58:08.118131 8171 crypto.go:249] Found existing server 
certificate in /tmp/openshift-integration/cert/master/cert.crt I0324 21:58:08.118188 8171 create_servercert.go:81] Creating a server cert with: admin.CreateServerCertOptions{GetSignerCertOptions:(*admin.GetSignerCertOptions)(0xc2081dc450), CertFile:"/tmp/openshift-integration/cert/master/cert.crt", KeyFile:"/tmp/openshift-integration/cert/master/key.key", Hostnames:util.StringList{"127.0.0.1", "localhost"}, Overwrite:false} I0324 21:58:08.488951 8171 crypto.go:249] Found existing server certificate in /tmp/openshift-integration/cert/master/cert.crt I0324 21:58:08.489047 8171 create_kubeconfig.go:115] creating a .kubeconfig with: admin.CreateKubeConfigOptions{APIServerURL:"https://127.0.0.1:47560", PublicAPIServerURL:"https://127.0.0.1:47560", APIServerCAFile:"/tmp/openshift-integration/cert/ca/cert.crt", ServerNick:"master", CertFile:"/tmp/openshift-integration/cert/openshift-deployer/cert.crt", KeyFile:"/tmp/openshift-integration/cert/openshift-deployer/key.key", UserNick:"openshift-deployer", KubeConfigFile:"/tmp/openshift-integration/cert/openshift-deployer/.kubeconfig"} I0324 21:58:08.498792 8171 create_kubeconfig.go:115] creating a .kubeconfig with: admin.CreateKubeConfigOptions{APIServerURL:"https://127.0.0.1:47560", PublicAPIServerURL:"https://127.0.0.1:47560", APIServerCAFile:"/tmp/openshift-integration/cert/ca/cert.crt", ServerNick:"master", CertFile:"/tmp/openshift-integration/cert/openshift-client/cert.crt", KeyFile:"/tmp/openshift-integration/cert/openshift-client/key.key", UserNick:"openshift-client", KubeConfigFile:"/tmp/openshift-integration/cert/openshift-client/.kubeconfig"} I0324 21:58:08.507921 8171 create_kubeconfig.go:115] creating a .kubeconfig with: admin.CreateKubeConfigOptions{APIServerURL:"https://127.0.0.1:47560", PublicAPIServerURL:"https://127.0.0.1:47560", APIServerCAFile:"/tmp/openshift-integration/cert/ca/cert.crt", ServerNick:"master", CertFile:"/tmp/openshift-integration/cert/kube-client/cert.crt", 
KeyFile:"/tmp/openshift-integration/cert/kube-client/key.key", UserNick:"kube-client", KubeConfigFile:"/tmp/openshift-integration/cert/kube-client/.kubeconfig"} I0324 21:58:08.516910 8171 create_kubeconfig.go:115] creating a .kubeconfig with: admin.CreateKubeConfigOptions{APIServerURL:"https://127.0.0.1:47560", PublicAPIServerURL:"https://127.0.0.1:47560", APIServerCAFile:"/tmp/openshift-integration/cert/ca/cert.crt", ServerNick:"master", CertFile:"/tmp/openshift-integration/cert/admin/cert.crt", KeyFile:"/tmp/openshift-integration/cert/admin/key.key", UserNick:"admin", KubeConfigFile:"/tmp/openshift-integration/cert/admin/.kubeconfig"} I0324 21:58:08.528990 8171 create_kubeconfig.go:115] creating a .kubeconfig with: admin.CreateKubeConfigOptions{APIServerURL:"https://127.0.0.1:47560", PublicAPIServerURL:"https://127.0.0.1:47560", APIServerCAFile:"/tmp/openshift-integration/cert/ca/cert.crt", ServerNick:"master", CertFile:"/tmp/openshift-integration/cert/openshift-router/cert.crt", KeyFile:"/tmp/openshift-integration/cert/openshift-router/key.key", UserNick:"openshift-router", KubeConfigFile:"/tmp/openshift-integration/cert/openshift-router/.kubeconfig"} I0324 21:58:08.538465 8171 create_kubeconfig.go:115] creating a .kubeconfig with: admin.CreateKubeConfigOptions{APIServerURL:"https://127.0.0.1:47560", PublicAPIServerURL:"https://127.0.0.1:47560", APIServerCAFile:"/tmp/openshift-integration/cert/ca/cert.crt", ServerNick:"master", CertFile:"/tmp/openshift-integration/cert/openshift-registry/cert.crt", KeyFile:"/tmp/openshift-integration/cert/openshift-registry/key.key", UserNick:"openshift-registry", KubeConfigFile:"/tmp/openshift-integration/cert/openshift-registry/.kubeconfig"} I0324 21:58:08.551344 8171 start_master.go:321] Starting an OpenShift master, reachable at 127.0.0.1:47560 (etcd: http://127.0.0.1:4001) I0324 21:58:08.551363 8171 start_master.go:322] OpenShift master public address is https://127.0.0.1:47560 I0324 21:58:08.959487 8171 net.go:36] Got 
error &net.OpError{Op:"dial", Net:"tcp", Addr:(*net.TCPAddr)(0xc20828e030), Err:0x6f}, trying again: "127.0.0.1:47560" I0324 21:58:09.185826 8171 start_master.go:354] Static Nodes: [localhost] I0324 21:58:09.186388 8171 master.go:260] Setting master service IPs based on PortalNet subnet to "172.30.17.1" (read-only) and "172.30.17.2" (read-write). E0324 21:58:09.197737 8171 reflector.go:115] Failed to list *api.ResourceQuota: Get https://127.0.0.1:47560/api/v1beta1/resourceQuotas?namespace=: dial tcp 127.0.0.1:47560: connection refused E0324 21:58:09.197784 8171 reflector.go:115] Failed to list *api.LimitRange: Get https://127.0.0.1:47560/api/v1beta1/limitRanges?namespace=: dial tcp 127.0.0.1:47560: connection refused I0324 21:58:09.225363 8171 plugin.go:29] Route plugin initialized with suffix=router.default.local 2015/03/24 21:58:09 [restful/swagger] listing is available at /swaggerapi/ 2015/03/24 21:58:09 [restful/swagger] Swagger(File)Path is empty ; no UI is served I0324 21:58:09.285943 8171 master.go:402] Started Kubernetes API at 127.0.0.1:47560/api/v1beta1 I0324 21:58:09.285978 8171 master.go:402] Started Kubernetes API at 127.0.0.1:47560/api/v1beta2 I0324 21:58:09.285989 8171 master.go:402] Started Kubernetes API at 127.0.0.1:47560/api/v1beta3 (experimental) I0324 21:58:09.286001 8171 master.go:402] Started OpenShift API at 127.0.0.1:47560/osapi/v1beta1 I0324 21:58:09.286012 8171 master.go:402] Started OAuth2 API at 127.0.0.1:47560/oauth I0324 21:58:09.286023 8171 master.go:402] Started login server at 127.0.0.1:47560/login I0324 21:58:09.286032 8171 master.go:402] Started Swagger Schema API at 127.0.0.1:47560/swaggerapi/ I0324 21:58:09.305956 8171 net.go:36] Got error &net.OpError{Op:"dial", Net:"tcp", Addr:(*net.TCPAddr)(0xc208654f30), Err:0x6f}, trying again: "127.0.0.1:47560" I0324 21:58:09.430109 8171 net.go:36] Got error &net.OpError{Op:"dial", Net:"tcp", Addr:(*net.TCPAddr)(0xc208654000), Err:0x6f}, trying again: "127.0.0.1:47560" I0324 
21:58:09.553584 8171 master.go:459] No master policy found. Creating bootstrap policy based on: /tmp/openshift-integration/policy/policy.json I0324 21:58:10.564720 8171 factory.go:78] creating scheduler from algorithm provider 'DefaultProvider' I0324 21:58:10.564745 8171 factory.go:108] creating scheduler with fit predicates 'map[PodFitsPorts:{} PodFitsResources:{} NoDiskConflict:{} MatchNodeSelector:{} HostName:{}]' and priority functions 'map[LeastRequestedPriority:{} ServiceSpreadingPriority:{} EqualPriority:{}] I0324 21:58:10.564898 8171 master.go:110] Started Kubernetes Scheduler I0324 21:58:10.564926 8171 master.go:91] Started Kubernetes Replication Manager I0324 21:58:10.564936 8171 master.go:99] Started Kubernetes Endpoint Controller I0324 21:58:10.855892 8171 nodecontroller.go:158] Registered node in registry: localhost I0324 21:58:10.855919 8171 nodecontroller.go:163] Successfully registered all nodes I0324 21:58:10.855933 8171 master.go:137] Started Kubernetes Minion Controller I0324 21:58:10.856035 8171 start_master.go:388] Using images from "openshift/origin-<component>:" W0324 21:58:10.856126 8171 master.go:612] Binding DNS on port 59453 instead of 53 (you may need to run as root and update your config), using 127.0.0.1:59453 which will not resolve from all locations 2015/03/24 21:58:10 skydns: ready for queries on local. for tcp://127.0.0.1:59453 [rcache 0] 2015/03/24 21:58:10 skydns: ready for queries on local. 
for udp://127.0.0.1:59453 [rcache 0] I0324 21:58:10.890828 8171 net.go:36] Got error &net.OpError{Op:"dial", Net:"tcp", Addr:(*net.TCPAddr)(0xc20869fa10), Err:0x6f}, trying again: "127.0.0.1:59453" I0324 21:58:11.073705 8171 master.go:627] OpenShift DNS listening at 127.0.0.1:59453 I0324 21:58:11.080762 8171 master.go:586] OpenShift UI listening at https://127.0.0.1:37455 I0324 21:58:11.114563 8171 net.go:36] Got error &net.OpError{Op:"dial", Net:"tcp", Addr:(*net.TCPAddr)(0xc208829d10), Err:0x6f}, trying again: "127.0.0.1:37455" I0324 21:58:11.137988 8171 http.go:60] HTTP probe error: Get http://localhost:10250/healthz: dial tcp 127.0.0.1:10250: connection refused E0324 21:58:11.178227 8171 nodecontroller.go:315] Can't collect information for node localhost: Get http://localhost:10250/api/v1beta1/nodeInfo: dial tcp 127.0.0.1:10250: connection refused I0324 21:58:11.180565 8171 nodecontroller.go:232] updating node localhost I0324 21:58:11.259431 8171 net.go:36] Got error &net.OpError{Op:"dial", Net:"tcp", Addr:(*net.TCPAddr)(0xc2087e25a0), Err:0x6f}, trying again: "127.0.0.1:37455" F0324 21:58:11.342440 8171 master.go:587] listen tcp 127.0.0.1:37455: bind: address already in use Running TestDeleteBuild... ``` We may not be waiting long enough in StartTestMaster. Needs to be investigated.
test
testdns integration test getting flaky on travis failed testdns masteraddr assetaddr dnsaddr start allinone go starting an openshift all in one create signercert go creating a signer cert with admin createsignercertoptions certfile tmp openshift integration cert ca cert crt keyfile tmp openshift integration cert ca key key serialfile tmp openshift integration cert ca serial txt name openshift signer overwrite false net go got error net operror op read net tcp addr net tcpaddr err trying again create allcerts go creating all certs with admin createallcertsoptions certdir tmp openshift integration cert signername openshift signer hostnames util stringlist localhost nodelist util stringlist localhost apiserverurl publicapiserverurl overwrite false create signercert go creating a signer cert with admin createsignercertoptions certfile tmp openshift integration cert ca cert crt keyfile tmp openshift integration cert ca key key serialfile tmp openshift integration cert ca serial txt name openshift signer overwrite false create clientcert go creating a client cert with admin createclientcertoptions getsignercertoptions admin getsignercertoptions certfile tmp openshift integration cert openshift deployer cert crt keyfile tmp openshift integration cert openshift deployer key key user system openshift deployer groups util stringlist system deployers overwrite false and admin getsignercertoptions certfile tmp openshift integration cert ca cert crt keyfile tmp openshift integration cert ca key key serialfile tmp openshift integration cert ca serial txt create kubeconfig go creating a kubeconfig with admin createkubeconfigoptions apiserverurl publicapiserverurl apiservercafile tmp openshift integration cert ca cert crt servernick master certfile tmp openshift integration cert openshift deployer cert crt keyfile tmp openshift integration cert openshift deployer key key usernick openshift deployer kubeconfigfile tmp openshift integration cert openshift deployer kubeconfig create 
clientcert go creating a client cert with admin createclientcertoptions getsignercertoptions admin getsignercertoptions certfile tmp openshift integration cert openshift client cert crt keyfile tmp openshift integration cert openshift client key key user system openshift client groups util stringlist overwrite false and admin getsignercertoptions certfile tmp openshift integration cert ca cert crt keyfile tmp openshift integration cert ca key key serialfile tmp openshift integration cert ca serial txt net go got error net operror op dial net tcp addr net tcpaddr err trying again create kubeconfig go creating a kubeconfig with admin createkubeconfigoptions apiserverurl publicapiserverurl apiservercafile tmp openshift integration cert ca cert crt servernick master certfile tmp openshift integration cert openshift client cert crt keyfile tmp openshift integration cert openshift client key key usernick openshift client kubeconfigfile tmp openshift integration cert openshift client kubeconfig create clientcert go creating a client cert with admin createclientcertoptions getsignercertoptions admin getsignercertoptions certfile tmp openshift integration cert kube client cert crt keyfile tmp openshift integration cert kube client key key user system kube client groups util stringlist overwrite false and admin getsignercertoptions certfile tmp openshift integration cert ca cert crt keyfile tmp openshift integration cert ca key key serialfile tmp openshift integration cert ca serial txt create kubeconfig go creating a kubeconfig with admin createkubeconfigoptions apiserverurl publicapiserverurl apiservercafile tmp openshift integration cert ca cert crt servernick master certfile tmp openshift integration cert kube client cert crt keyfile tmp openshift integration cert kube client key key usernick kube client kubeconfigfile tmp openshift integration cert kube client kubeconfig create clientcert go creating a client cert with admin createclientcertoptions getsignercertoptions 
admin getsignercertoptions certfile tmp openshift integration cert admin cert crt keyfile tmp openshift integration cert admin key key user system admin groups util stringlist system cluster admins overwrite false and admin getsignercertoptions certfile tmp openshift integration cert ca cert crt keyfile tmp openshift integration cert ca key key serialfile tmp openshift integration cert ca serial txt create kubeconfig go creating a kubeconfig with admin createkubeconfigoptions apiserverurl publicapiserverurl apiservercafile tmp openshift integration cert ca cert crt servernick master certfile tmp openshift integration cert admin cert crt keyfile tmp openshift integration cert admin key key usernick admin kubeconfigfile tmp openshift integration cert admin kubeconfig create clientcert go creating a client cert with admin createclientcertoptions getsignercertoptions admin getsignercertoptions certfile tmp openshift integration cert openshift router cert crt keyfile tmp openshift integration cert openshift router key key user system openshift router groups util stringlist system routers overwrite false and admin getsignercertoptions certfile tmp openshift integration cert ca cert crt keyfile tmp openshift integration cert ca key key serialfile tmp openshift integration cert ca serial txt net go got error net operror op dial net tcp addr net tcpaddr err trying again create kubeconfig go creating a kubeconfig with admin createkubeconfigoptions apiserverurl publicapiserverurl apiservercafile tmp openshift integration cert ca cert crt servernick master certfile tmp openshift integration cert openshift router cert crt keyfile tmp openshift integration cert openshift router key key usernick openshift router kubeconfigfile tmp openshift integration cert openshift router kubeconfig create clientcert go creating a client cert with admin createclientcertoptions getsignercertoptions admin getsignercertoptions certfile tmp openshift integration cert openshift registry cert crt 
keyfile tmp openshift integration cert openshift registry key key user system openshift registry groups util stringlist system registries overwrite false and admin getsignercertoptions certfile tmp openshift integration cert ca cert crt keyfile tmp openshift integration cert ca key key serialfile tmp openshift integration cert ca serial txt create kubeconfig go creating a kubeconfig with admin createkubeconfigoptions apiserverurl publicapiserverurl apiservercafile tmp openshift integration cert ca cert crt servernick master certfile tmp openshift integration cert openshift registry cert crt keyfile tmp openshift integration cert openshift registry key key usernick openshift registry kubeconfigfile tmp openshift integration cert openshift registry kubeconfig create servercert go creating a server cert with admin createservercertoptions getsignercertoptions admin getsignercertoptions certfile tmp openshift integration cert node localhost server crt keyfile tmp openshift integration cert node localhost server key hostnames util stringlist localhost overwrite false crypto go generating server certificate in tmp openshift integration cert node localhost server crt key in tmp openshift integration cert node localhost server key net go got error net operror op dial net tcp addr net tcpaddr err trying again create nodeclientcerts go creating a node client cert with admin createnodeclientcertoptions getsignercertoptions admin getsignercertoptions certfile tmp openshift integration cert node localhost client crt keyfile tmp openshift integration cert node localhost client key nodename localhost overwrite false and admin getsignercertoptions certfile tmp openshift integration cert ca cert crt keyfile tmp openshift integration cert ca key key serialfile tmp openshift integration cert ca serial txt create clientcert go creating a client cert with admin createclientcertoptions getsignercertoptions admin getsignercertoptions certfile tmp openshift integration cert node localhost 
client crt keyfile tmp openshift integration cert node localhost client key user system node localhost groups util stringlist system nodes overwrite false and admin getsignercertoptions certfile tmp openshift integration cert ca cert crt keyfile tmp openshift integration cert ca key key serialfile tmp openshift integration cert ca serial txt crypto go generating client cert in tmp openshift integration cert node localhost client crt and key in tmp openshift integration cert node localhost client key net go got error net operror op dial net tcp addr net tcpaddr err trying again create kubeconfig go creating a kubeconfig with admin createkubeconfigoptions apiserverurl publicapiserverurl apiservercafile tmp openshift integration cert ca cert crt servernick master certfile tmp openshift integration cert node localhost client crt keyfile tmp openshift integration cert node localhost client key usernick localhost kubeconfigfile tmp openshift integration cert node localhost kubeconfig create servercert go creating a server cert with admin createservercertoptions getsignercertoptions admin getsignercertoptions certfile tmp openshift integration cert master cert crt keyfile tmp openshift integration cert master key key hostnames util stringlist localhost overwrite false net go got error net operror op dial net tcp addr net tcpaddr err trying again crypto go found existing server certificate in tmp openshift integration cert master cert crt create servercert go creating a server cert with admin createservercertoptions getsignercertoptions admin getsignercertoptions certfile tmp openshift integration cert master cert crt keyfile tmp openshift integration cert master key key hostnames util stringlist localhost overwrite false crypto go found existing server certificate in tmp openshift integration cert master cert crt create kubeconfig go creating a kubeconfig with admin createkubeconfigoptions apiserverurl publicapiserverurl apiservercafile tmp openshift integration cert ca 
cert crt servernick master certfile tmp openshift integration cert openshift deployer cert crt keyfile tmp openshift integration cert openshift deployer key key usernick openshift deployer kubeconfigfile tmp openshift integration cert openshift deployer kubeconfig create kubeconfig go creating a kubeconfig with admin createkubeconfigoptions apiserverurl publicapiserverurl apiservercafile tmp openshift integration cert ca cert crt servernick master certfile tmp openshift integration cert openshift client cert crt keyfile tmp openshift integration cert openshift client key key usernick openshift client kubeconfigfile tmp openshift integration cert openshift client kubeconfig create kubeconfig go creating a kubeconfig with admin createkubeconfigoptions apiserverurl publicapiserverurl apiservercafile tmp openshift integration cert ca cert crt servernick master certfile tmp openshift integration cert kube client cert crt keyfile tmp openshift integration cert kube client key key usernick kube client kubeconfigfile tmp openshift integration cert kube client kubeconfig create kubeconfig go creating a kubeconfig with admin createkubeconfigoptions apiserverurl publicapiserverurl apiservercafile tmp openshift integration cert ca cert crt servernick master certfile tmp openshift integration cert admin cert crt keyfile tmp openshift integration cert admin key key usernick admin kubeconfigfile tmp openshift integration cert admin kubeconfig create kubeconfig go creating a kubeconfig with admin createkubeconfigoptions apiserverurl publicapiserverurl apiservercafile tmp openshift integration cert ca cert crt servernick master certfile tmp openshift integration cert openshift router cert crt keyfile tmp openshift integration cert openshift router key key usernick openshift router kubeconfigfile tmp openshift integration cert openshift router kubeconfig create kubeconfig go creating a kubeconfig with admin createkubeconfigoptions apiserverurl publicapiserverurl apiservercafile tmp 
openshift integration cert ca cert crt servernick master certfile tmp openshift integration cert openshift registry cert crt keyfile tmp openshift integration cert openshift registry key key usernick openshift registry kubeconfigfile tmp openshift integration cert openshift registry kubeconfig start master go starting an openshift master reachable at etcd start master go openshift master public address is net go got error net operror op dial net tcp addr net tcpaddr err trying again start master go static nodes master go setting master service ips based on portalnet subnet to read only and read write reflector go failed to list api resourcequota get dial tcp connection refused reflector go failed to list api limitrange get dial tcp connection refused plugin go route plugin initialized with suffix router default local listing is available at swaggerapi swagger file path is empty no ui is served master go started kubernetes api at api master go started kubernetes api at api master go started kubernetes api at api experimental master go started openshift api at osapi master go started api at oauth master go started login server at login master go started swagger schema api at swaggerapi net go got error net operror op dial net tcp addr net tcpaddr err trying again net go got error net operror op dial net tcp addr net tcpaddr err trying again master go no master policy found creating bootstrap policy based on tmp openshift integration policy policy json factory go creating scheduler from algorithm provider defaultprovider factory go creating scheduler with fit predicates map and priority functions map master go started kubernetes scheduler master go started kubernetes replication manager master go started kubernetes endpoint controller nodecontroller go registered node in registry localhost nodecontroller go successfully registered all nodes master go started kubernetes minion controller start master go using images from openshift origin master go binding dns on port 
instead of you may need to run as root and update your config using which will not resolve from all locations skydns ready for queries on local for tcp skydns ready for queries on local for udp net go got error net operror op dial net tcp addr net tcpaddr err trying again master go openshift dns listening at master go openshift ui listening at net go got error net operror op dial net tcp addr net tcpaddr err trying again http go http probe error get dial tcp connection refused nodecontroller go can t collect information for node localhost get dial tcp connection refused nodecontroller go updating node localhost net go got error net operror op dial net tcp addr net tcpaddr err trying again master go listen tcp bind address already in use running testdeletebuild we may not be waiting long enough in starttestmaster needs to be investigated
1
211,744
16,359,406,897
IssuesEvent
2021-05-14 06:58:06
spiral/roadrunner-binary
https://api.github.com/repos/spiral/roadrunner-binary
opened
Implement json-schemas unit testing
A-tests
JSON schemas in a directory `./schemas` should be used for unit-tests of the config file. This must be implemented in the near future.
1.0
Implement json-schemas unit testing - JSON schemas in a directory `./schemas` should be used for unit-tests of the config file. This must be implemented in the near future.
test
implement json schemas unit testing json schemas in a directory schemas should be used for unit tests of the config file this must be implemented in the closest future
1
178,022
13,758,110,926
IssuesEvent
2020-10-06 23:08:49
tenable/pyTenable
https://api.github.com/repos/tenable/pyTenable
closed
TSC BlackoutWindowAPI
core unit-testing
the blackouts.py module should support the Blackout Windows API endpoints and be fully tested.
1.0
TSC BlackoutWindowAPI - the blackouts.py module should support the Blackout Windows API endpoints and be fully tested.
test
tsc blackoutwindowapi the blackouts py module should support the blackout windows api endpoints and by fully tested
1
204,235
15,434,226,379
IssuesEvent
2021-03-07 02:04:25
brave/brave-browser
https://api.github.com/repos/brave/brave-browser
closed
Manual test run on Win x64 for 1.21.x Release #2
OS/Desktop OS/Windows QA/Yes release-notes/exclude tests
### Installer - [x] Check signature: If OS Run `spctl --assess --verbose /Applications/Brave-Browser-Beta.app/` and make sure it returns `accepted`. If Windows right click on the `brave_installer-x64.exe` and go to Properties, go to the Digital Signatures tab and double click on the signature. Make sure it says "The digital signature is OK" in the popup window ### Widevine - [x] Verify `Widevine Notification` is shown when you visit Netflix for the first time - [x] Test that you can stream on Netflix on a fresh profile after installing Widevine - [x] Verify `Widevine Notification` is shown when you visit HBO Max for the first time - [x] Test that you can stream on HBO Max on a fresh profile after installing Widevine ## Update tests - [x] Verify visiting `brave://settings/help` triggers update check - [x] Verify once update is downloaded, prompts to `Relaunch` to install update #### Startup & Components - [x] Delete Adblock folder from browser profile and restart browser. Visit `brave://components` and verify `Brave Ad Block Updater` downloads and update the component. 
Repeat for all Brave components ### Upgrade - [x] Make sure that data from the last version appears in the new version OK - [x] Ensure that `brave://version` lists the expected Brave & Chromium versions - [x] With data from the last version, verify that - [x] Bookmarks on the bookmark toolbar and bookmark folders can be opened - [x] Cookies are preserved - [x] Installed extensions are retained and work correctly - [x] Opened tabs can be reloaded - [x] Stored passwords are preserved - [x] Sync chain created in previous version is retained - [x] Social media blocking buttons changes are retained - [x] Rewards - [x] Wallet balance is retained - [x] Auto-contribute list is retained - [x] Both Tips and Monthly Contributions are retained - [x] Wallet panel transactions list is retained - [x] Changes to rewards settings are retained - [x] Ensure that Auto Contribute is not being enabled when upgrading to a new version if AC was disabled - [x] Ads - [x] Both `Estimated pending rewards` & `Ad notifications received this month` are retained - [x] Changes to ads settings are retained - [x] Ensure that ads are not being enabled when upgrading to a new version if they were disabled - [x] Ensure that ads are not disabled when upgrading to a new version if they were enabled
1.0
Manual test run on Win x64 for 1.21.x Release #2 - ### Installer - [x] Check signature: If OS Run `spctl --assess --verbose /Applications/Brave-Browser-Beta.app/` and make sure it returns `accepted`. If Windows right click on the `brave_installer-x64.exe` and go to Properties, go to the Digital Signatures tab and double click on the signature. Make sure it says "The digital signature is OK" in the popup window ### Widevine - [x] Verify `Widevine Notification` is shown when you visit Netflix for the first time - [x] Test that you can stream on Netflix on a fresh profile after installing Widevine - [x] Verify `Widevine Notification` is shown when you visit HBO Max for the first time - [x] Test that you can stream on HBO Max on a fresh profile after installing Widevine ## Update tests - [x] Verify visiting `brave://settings/help` triggers update check - [x] Verify once update is downloaded, prompts to `Relaunch` to install update #### Startup & Components - [x] Delete Adblock folder from browser profile and restart browser. Visit `brave://components` and verify `Brave Ad Block Updater` downloads and update the component. 
Repeat for all Brave components ### Upgrade - [x] Make sure that data from the last version appears in the new version OK - [x] Ensure that `brave://version` lists the expected Brave & Chromium versions - [x] With data from the last version, verify that - [x] Bookmarks on the bookmark toolbar and bookmark folders can be opened - [x] Cookies are preserved - [x] Installed extensions are retained and work correctly - [x] Opened tabs can be reloaded - [x] Stored passwords are preserved - [x] Sync chain created in previous version is retained - [x] Social media blocking buttons changes are retained - [x] Rewards - [x] Wallet balance is retained - [x] Auto-contribute list is retained - [x] Both Tips and Monthly Contributions are retained - [x] Wallet panel transactions list is retained - [x] Changes to rewards settings are retained - [x] Ensure that Auto Contribute is not being enabled when upgrading to a new version if AC was disabled - [x] Ads - [x] Both `Estimated pending rewards` & `Ad notifications received this month` are retained - [x] Changes to ads settings are retained - [x] Ensure that ads are not being enabled when upgrading to a new version if they were disabled - [x] Ensure that ads are not disabled when upgrading to a new version if they were enabled
test
manual test run on win for x release installer check signature if os run spctl assess verbose applications brave browser beta app and make sure it returns accepted if windows right click on the brave installer exe and go to properties go to the digital signatures tab and double click on the signature make sure it says the digital signature is ok in the popup window widevine verify widevine notification is shown when you visit netflix for the first time test that you can stream on netflix on a fresh profile after installing widevine verify widevine notification is shown when you visit hbo max for the first time test that you can stream on hbo max on a fresh profile after installing widevine update tests verify visiting brave settings help triggers update check verify once update is downloaded prompts to relaunch to install update startup components delete adblock folder from browser profile and restart browser visit brave components and verify brave ad block updater downloads and update the component repeat for all brave components upgrade make sure that data from the last version appears in the new version ok ensure that brave version lists the expected brave chromium versions with data from the last version verify that bookmarks on the bookmark toolbar and bookmark folders can be opened cookies are preserved installed extensions are retained and work correctly opened tabs can be reloaded stored passwords are preserved sync chain created in previous version is retained social media blocking buttons changes are retained rewards wallet balance is retained auto contribute list is retained both tips and monthly contributions are retained wallet panel transactions list is retained changes to rewards settings are retained ensure that auto contribute is not being enabled when upgrading to a new version if ac was disabled ads both estimated pending rewards ad notifications received this month are retained changes to ads settings are retained ensure that ads are not being 
enabled when upgrading to a new version if they were disabled ensure that ads are not disabled when upgrading to a new version if they were enabled
1
2,601
8,026,207,067
IssuesEvent
2018-07-27 02:32:02
reposense/RepoSense
https://api.github.com/repos/reposense/RepoSense
closed
Modify analysis of files and commits to be at Author level
a-Architecture c.Enhancement
RepoSense only uses Repoconfiguration for the analysis of files and commits, causing these analysis to be on the repo level. As we will be implementing configuration settings for each individual author of a repo, the analysis of files and commits will also need to be on the author level. Let's update the analysis of files and commits to be on the author level.
1.0
Modify analysis of files and commits to be at Author level - RepoSense only uses Repoconfiguration for the analysis of files and commits, causing these analysis to be on the repo level. As we will be implementing configuration settings for each individual author of a repo, the analysis of files and commits will also need to be on the author level. Let's update the analysis of files and commits to be on the author level.
non_test
modify analysis of files and commits to be at author level reposense only uses repoconfiguration for the analysis of files and commits causing these analysis to be on the repo level as we will be implementing configuration settings for each individual author of a repo the analysis of files and commits will also need to be on the author level let s update the analysis of files and commits to be on the author level
0
146,931
11,762,399,182
IssuesEvent
2020-03-14 01:12:31
Vachok/ftpplus
https://api.github.com/repos/Vachok/ftpplus
closed
getSSHLists
High TestQuality bug resolution_Dupe
Execute RestCTRLTest::getSSHLists**getSSHLists** *RestCTRLTest* *connect timed out SocketTimeoutException - connect timed out RestCTRLTest.java: ru.vachok.networker.restapi.RestCTRLTest.getSSHLists(RestCTRLTest.java:236) expected [null] but found [java.net.SocketTimeoutException: connect timed out]* *java.lang.AssertionError*
1.0
getSSHLists - Execute RestCTRLTest::getSSHLists**getSSHLists** *RestCTRLTest* *connect timed out SocketTimeoutException - connect timed out RestCTRLTest.java: ru.vachok.networker.restapi.RestCTRLTest.getSSHLists(RestCTRLTest.java:236) expected [null] but found [java.net.SocketTimeoutException: connect timed out]* *java.lang.AssertionError*
test
getsshlists execute restctrltest getsshlists getsshlists restctrltest connect timed out sockettimeoutexception connect timed out restctrltest java ru vachok networker restapi restctrltest getsshlists restctrltest java expected but found java lang assertionerror
1
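Each record carries both a raw `text_combine` field and a lowercased, stripped `text` field. A minimal sketch of the normalization that appears to relate the two — the function name and the exact cleaning rules are assumptions inferred from the records above, not a documented pipeline:

```python
import re

def normalize(text: str) -> str:
    # Lowercase, replace anything that is not a letter with a space,
    # then collapse runs of whitespace -- this approximates how the
    # `text` column appears to be derived from `text_combine`.
    text = text.lower()
    text = re.sub(r"[^a-z\s]", " ", text)
    return " ".join(text.split())

# Applied to the start of the record above:
print(normalize("getSSHLists - Execute RestCTRLTest::getSSHLists"))
# -> getsshlists execute restctrltest getsshlists
```

Note that this sketch also drops digits (e.g. line numbers and CVE identifiers), which matches the `text` values shown in the records.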
286,841
31,769,541,084
IssuesEvent
2023-09-12 10:52:17
valtech-ch/microservice-kubernetes-cluster
https://api.github.com/repos/valtech-ch/microservice-kubernetes-cluster
reopened
CVE-2019-14893 (Critical) detected in jackson-databind-2.9.8.jar
Mend: dependency security vulnerability
## CVE-2019-14893 - Critical Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.8.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: /functions/build.gradle</p> <p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.9.8/11283f21cc480aa86c4df7a0a3243ec508372ed2/jackson-databind-2.9.8.jar</p> <p> Dependency Hierarchy: - spring-cloud-function-adapter-azure-4.0.5.jar (Root Library) - spring-cloud-function-context-4.0.5.jar - :x: **jackson-databind-2.9.8.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/valtech-ch/microservice-kubernetes-cluster/commit/335a4047c89f52dfe860e93daefb32dc86a521a2">335a4047c89f52dfe860e93daefb32dc86a521a2</a></p> <p>Found in base branch: <b>develop</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/critical_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> A flaw was discovered in FasterXML jackson-databind in all versions before 2.9.10 and 2.10.0, where it would permit polymorphic deserialization of malicious objects using the xalan JNDI gadget when used in conjunction with polymorphic type handling methods such as `enableDefaultTyping()` or when @JsonTypeInfo is using `Id.CLASS` or `Id.MINIMAL_CLASS` or in any other way which ObjectMapper.readValue might instantiate objects from unsafe sources. An attacker could use this flaw to execute arbitrary code. 
<p>Publish Date: 2020-03-02 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-14893>CVE-2019-14893</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-14893">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-14893</a></p> <p>Release Date: 2020-03-02</p> <p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.10.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2019-14893 (Critical) detected in jackson-databind-2.9.8.jar - ## CVE-2019-14893 - Critical Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.8.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: /functions/build.gradle</p> <p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.9.8/11283f21cc480aa86c4df7a0a3243ec508372ed2/jackson-databind-2.9.8.jar</p> <p> Dependency Hierarchy: - spring-cloud-function-adapter-azure-4.0.5.jar (Root Library) - spring-cloud-function-context-4.0.5.jar - :x: **jackson-databind-2.9.8.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/valtech-ch/microservice-kubernetes-cluster/commit/335a4047c89f52dfe860e93daefb32dc86a521a2">335a4047c89f52dfe860e93daefb32dc86a521a2</a></p> <p>Found in base branch: <b>develop</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/critical_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> A flaw was discovered in FasterXML jackson-databind in all versions before 2.9.10 and 2.10.0, where it would permit polymorphic deserialization of malicious objects using the xalan JNDI gadget when used in conjunction with polymorphic type handling methods such as `enableDefaultTyping()` or when @JsonTypeInfo is using `Id.CLASS` or `Id.MINIMAL_CLASS` or in any other way which ObjectMapper.readValue might instantiate objects from unsafe sources. An attacker could use this flaw to execute arbitrary code. 
<p>Publish Date: 2020-03-02 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-14893>CVE-2019-14893</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-14893">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-14893</a></p> <p>Release Date: 2020-03-02</p> <p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.10.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_test
cve critical detected in jackson databind jar cve critical severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file functions build gradle path to vulnerable library home wss scanner gradle caches modules files com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy spring cloud function adapter azure jar root library spring cloud function context jar x jackson databind jar vulnerable library found in head commit a href found in base branch develop vulnerability details a flaw was discovered in fasterxml jackson databind in all versions before and where it would permit polymorphic deserialization of malicious objects using the xalan jndi gadget when used in conjunction with polymorphic type handling methods such as enabledefaulttyping or when jsontypeinfo is using id class or id minimal class or in any other way which objectmapper readvalue might instantiate objects from unsafe sources an attacker could use this flaw to execute arbitrary code publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind step up your open source security game with mend
0
3,964
4,805,733,793
IssuesEvent
2016-11-02 16:44:33
cf-tm-bot/loggregator
https://api.github.com/repos/cf-tm-bot/loggregator
opened
Implement pooling on Metron to Doppler connection - Story Id: 133603233
feature security unstarted
As an operator I would like ensure that connection between metrons and doplers can scale under __AC__ 1. Run Volley Tests for Load __Steps to Test__ ``` ``` --- Mirrors: [story 133603233](https://www.pivotaltracker.com/story/show/133603233) submitted on Nov 2, 2016 UTC - **Requester**: Adam Hevenor - **Estimate**: 0.0
True
Implement pooling on Metron to Doppler connection - Story Id: 133603233 - As an operator I would like ensure that connection between metrons and doplers can scale under __AC__ 1. Run Volley Tests for Load __Steps to Test__ ``` ``` --- Mirrors: [story 133603233](https://www.pivotaltracker.com/story/show/133603233) submitted on Nov 2, 2016 UTC - **Requester**: Adam Hevenor - **Estimate**: 0.0
non_test
implement pooling on metron to doppler connection story id as an operator i would like ensure that connection between metrons and doplers can scale under ac run volley tests for load steps to test mirrors submitted on nov utc requester adam hevenor estimate
0
236,478
26,010,680,926
IssuesEvent
2022-12-21 01:10:32
hygieia/api-audit
https://api.github.com/repos/hygieia/api-audit
opened
CVE-2022-1471 (High) detected in snakeyaml-1.32.jar
security vulnerability
## CVE-2022-1471 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>snakeyaml-1.32.jar</b></p></summary> <p>YAML 1.1 parser and emitter for Java</p> <p>Library home page: <a href="https://bitbucket.org/snakeyaml/snakeyaml">https://bitbucket.org/snakeyaml/snakeyaml</a></p> <p>Path to dependency file: /pom.xml</p> <p>Path to vulnerable library: /repository/org/yaml/snakeyaml/1.32/snakeyaml-1.32.jar</p> <p> Dependency Hierarchy: - :x: **snakeyaml-1.32.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/hygieia/api-audit/commit/9c627a3dee72bf43b46a7cc41b8c073efba5cfab">9c627a3dee72bf43b46a7cc41b8c073efba5cfab</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> SnakeYaml's Constructor() class does not restrict types which can be instantiated during deserialization. Deserializing yaml content provided by an attacker can lead to remote code execution. We recommend using SnakeYaml's SafeConsturctor when parsing untrusted content to restrict deserialization. <p>Publish Date: 2022-12-01 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-1471>CVE-2022-1471</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2022-1471">https://nvd.nist.gov/vuln/detail/CVE-2022-1471</a></p> <p>Release Date: 2022-12-01</p> <p>Fix Resolution: org.yaml:snakeyaml - 1.31</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2022-1471 (High) detected in snakeyaml-1.32.jar - ## CVE-2022-1471 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>snakeyaml-1.32.jar</b></p></summary> <p>YAML 1.1 parser and emitter for Java</p> <p>Library home page: <a href="https://bitbucket.org/snakeyaml/snakeyaml">https://bitbucket.org/snakeyaml/snakeyaml</a></p> <p>Path to dependency file: /pom.xml</p> <p>Path to vulnerable library: /repository/org/yaml/snakeyaml/1.32/snakeyaml-1.32.jar</p> <p> Dependency Hierarchy: - :x: **snakeyaml-1.32.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/hygieia/api-audit/commit/9c627a3dee72bf43b46a7cc41b8c073efba5cfab">9c627a3dee72bf43b46a7cc41b8c073efba5cfab</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> SnakeYaml's Constructor() class does not restrict types which can be instantiated during deserialization. Deserializing yaml content provided by an attacker can lead to remote code execution. We recommend using SnakeYaml's SafeConsturctor when parsing untrusted content to restrict deserialization. 
<p>Publish Date: 2022-12-01 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-1471>CVE-2022-1471</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2022-1471">https://nvd.nist.gov/vuln/detail/CVE-2022-1471</a></p> <p>Release Date: 2022-12-01</p> <p>Fix Resolution: org.yaml:snakeyaml - 1.31</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_test
cve high detected in snakeyaml jar cve high severity vulnerability vulnerable library snakeyaml jar yaml parser and emitter for java library home page a href path to dependency file pom xml path to vulnerable library repository org yaml snakeyaml snakeyaml jar dependency hierarchy x snakeyaml jar vulnerable library found in head commit a href found in base branch master vulnerability details snakeyaml s constructor class does not restrict types which can be instantiated during deserialization deserializing yaml content provided by an attacker can lead to remote code execution we recommend using snakeyaml s safeconsturctor when parsing untrusted content to restrict deserialization publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org yaml snakeyaml step up your open source security game with mend
0
180,647
13,941,411,633
IssuesEvent
2020-10-22 19:22:23
mindsdb/lightwood
https://api.github.com/repos/mindsdb/lightwood
opened
Hook into network for unsupervised clustering
enhancement test
## The motivation for this proposal It would be nice if lightwood/native had 2 capabilites: 1. Be able to tell us, as part of making a prediction, what points in the training datasets were similar to the point we are predicting for. Thus "justfying the prediction". 2. It would be nice to add some "semi-supervised" capabilities to lightwood, where it can figure out relationships between the different input points, besdies the way they map onto the target. ### How ? Many ways, but an easy one is: a) Add a hook that allows us, when making a predict call, to get the activations of an arbitrary layer b) Add a method that gives us the smallest layer (i.e. the one with the fewest activations) or that gives us the network shape (from which we can determine what this smallest layer is) c) Plug the activations of this layer into an `N->2` unsupervied clustering algorithm so that we can plot the result in 2d space d) Do this for all the validation dataset (or even for the full dataset) to generate a mapping at training time that can be presented in Scout and use this mapping to plot new points that we predict for and allow users to explore their neighbours ### Why would this work ? We care about our inputs as a relationship in part influenced by how they affect our target. So basically this approach would be a sort of semi-supervised clustering, where the inputs are already part way throgh the netwrok (which can be thought of as a function that predicts our target, and if we mentally stretch a bit we can think of it as getting "closer" to our target the deeper in it we are). Thus we get a mapping that is in part the result of a "classical" clustering algorithm and in part a result of the function that helps us map inputs to the target. So instead of saying something like "How close are point 1 and point 2 based on their features ?" we can can now ask "How close are p1 and p2 based on their features and the way said features map onto this target ?" 
### What clustering algorithm to use ? We can start with PCA, a while back I dug through unsupervised clustering and found some really cool ones that I now fail to remember. One of them is umap (https://github.com/lmcinnes/umap) But whatever, we can start with a few of them and see if any of them make sense ### How do we validate that this approach is useful ? As in, how do we do this without manually looking at a lot of plots. Maybe try to generate points with a known function (that takes 2d inputs) that has some "human level obvious" similarity built into it (i.e. the kind that unsuperivsed clustering would look for) and then have another function map them to a target. Then check if the resutling mapping from our N->2 clustering algorithm is bouth predictive of the target AND somewhat predictive of the original shape of the 2d inputs that generated our datasets. ### Notes Regrading the explainability features @torrmal had a really nice graph of how this kind of thing can look like.
1.0
Hook into network for unsupervised clustering - ## The motivation for this proposal It would be nice if lightwood/native had 2 capabilites: 1. Be able to tell us, as part of making a prediction, what points in the training datasets were similar to the point we are predicting for. Thus "justfying the prediction". 2. It would be nice to add some "semi-supervised" capabilities to lightwood, where it can figure out relationships between the different input points, besdies the way they map onto the target. ### How ? Many ways, but an easy one is: a) Add a hook that allows us, when making a predict call, to get the activations of an arbitrary layer b) Add a method that gives us the smallest layer (i.e. the one with the fewest activations) or that gives us the network shape (from which we can determine what this smallest layer is) c) Plug the activations of this layer into an `N->2` unsupervied clustering algorithm so that we can plot the result in 2d space d) Do this for all the validation dataset (or even for the full dataset) to generate a mapping at training time that can be presented in Scout and use this mapping to plot new points that we predict for and allow users to explore their neighbours ### Why would this work ? We care about our inputs as a relationship in part influenced by how they affect our target. So basically this approach would be a sort of semi-supervised clustering, where the inputs are already part way throgh the netwrok (which can be thought of as a function that predicts our target, and if we mentally stretch a bit we can think of it as getting "closer" to our target the deeper in it we are). Thus we get a mapping that is in part the result of a "classical" clustering algorithm and in part a result of the function that helps us map inputs to the target. So instead of saying something like "How close are point 1 and point 2 based on their features ?" we can can now ask "How close are p1 and p2 based on their features and the way said features map onto this target ?" ### What clustering algorithm to use ? We can start with PCA, a while back I dug through unsupervised clustering and found some really cool ones that I now fail to remember. One of them is umap (https://github.com/lmcinnes/umap) But whatever, we can start with a few of them and see if any of them make sense ### How do we validate that this approach is useful ? As in, how do we do this without manually looking at a lot of plots. Maybe try to generate points with a known function (that takes 2d inputs) that has some "human level obvious" similarity built into it (i.e. the kind that unsuperivsed clustering would look for) and then have another function map them to a target. Then check if the resutling mapping from our N->2 clustering algorithm is bouth predictive of the target AND somewhat predictive of the original shape of the 2d inputs that generated our datasets. ### Notes Regrading the explainability features @torrmal had a really nice graph of how this kind of thing can look like.
test
hook into network for unsupervised clustering the motivation for this proposal it would be nice if lightwood native had capabilites be able to tell us as part of making a prediction what points in the training datasets were similar to the point we are predicting for thus justfying the prediction it would be nice to add some semi supervised capabilities to lightwood where it can figure out relationships between the different input points besdies the way they map onto the target how many ways but an easy one is a add a hook that allows us when making a predict call to get the activations of an arbitrary layer b add a method that gives us the smallest layer i e the one with the fewest activations or that gives us the network shape from which we can determine what this smallest layer is c plug the activations of this layer into an n unsupervied clustering algorithm so that we can plot the result in space d do this for all the validation dataset or even for the full dataset to generate a mapping at training time that can be presented in scout and use this mapping to plot new points that we predict for and allow users to explore their neighbours why would this work we care about our inputs as a relationship in part influenced by how they affect our target so basically this approach would be a sort of semi supervised clustering where the inputs are already part way throgh the netwrok which can be thought of as a function that predicts our target and if we mentally stretch a bit we can think of it as getting closer to our target the deeper in it we are thus we get a mapping that is in part the result of a classical clustering algorithm and in part a result of the function that helps us map inputs to the target so instead of saying something like how close are point and point based on their features we can can now ask how close are and based on their features and the way said features map onto this target what clustering algorithm to use we can start with pca a while back i dug through unsupervised clustering and found some really cool ones that i now fail to remember one of them is umap but whatever we can start with a few of them and see if any of them make sense how do we validate that this approach is useful as in how do we do this without manually looking at a lot of plots maybe try to generate points with a known function that takes inputs that has some human level obvious similarity built into it i e the kind that unsuperivsed clustering would look for and then have another function map them to a target then check if the resutling mapping from our n clustering algorithm is bouth predictive of the target and somewhat predictive of the original shape of the inputs that generated our datasets notes regrading the explainability features torrmal had a really nice graph of how this kind of thing can look like
1
327,297
24,127,113,687
IssuesEvent
2022-09-21 02:15:58
AtilaDev-team/useCalendar
https://api.github.com/repos/AtilaDev-team/useCalendar
closed
Write a tutorial/blog post on how to use it
documentation
Todo: - Write a blog post/tutorial on how to use this hook with `expo-calendar` library. - You can ask Expo folks to publish it on blog.expo.dev - After the blog post is published, add the link to the repo -> Readme.md file
1.0
Write a tutorial/blog post on how to use it - Todo: - Write a blog post/tutorial on how to use this hook with `expo-calendar` library. - You can ask Expo folks to publish it on blog.expo.dev - After the blog post is published, add the link to the repo -> Readme.md file
non_test
write a tutorial blog post on how to use it todo write a blog post tutorial on how to use this hook with expo calendar library you can ask expo folks to publish it on blog expo dev after the blog post is published add the link to the repo readme md file
0
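Across the records, the string `label` column and the trailing binary column track each other (`test` → 1, `non_test` → 0). A small sketch of that mapping — the function name is an assumption; the correspondence itself is read directly off the records:

```python
def to_binary_label(label: str) -> int:
    # Correspondence observed in the dump: 'test' -> 1, 'non_test' -> 0.
    mapping = {"test": 1, "non_test": 0}
    return mapping[label]

print(to_binary_label("test"))      # -> 1
print(to_binary_label("non_test"))  # -> 0
```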
80,851
15,589,009,582
IssuesEvent
2021-03-18 07:23:51
soumya132/pomscan
https://api.github.com/repos/soumya132/pomscan
closed
CVE-2016-0762 (Medium) detected in tomcat-embed-core-8.5.4.jar
security vulnerability
## CVE-2016-0762 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tomcat-embed-core-8.5.4.jar</b></p></summary> <p>Core Tomcat implementation</p> <p>Library home page: <a href="http://tomcat.apache.org/">http://tomcat.apache.org/</a></p> <p>Path to dependency file: pomscan/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/8.5.4/tomcat-embed-core-8.5.4.jar</p> <p> Dependency Hierarchy: - spring-boot-starter-jersey-1.4.0.RELEASE.jar (Root Library) - spring-boot-starter-tomcat-1.4.0.RELEASE.jar - :x: **tomcat-embed-core-8.5.4.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/soumya132/pomscan/commit/861f87574eb468ca8fdec6e4b2ea25783804ec34">861f87574eb468ca8fdec6e4b2ea25783804ec34</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The Realm implementations in Apache Tomcat versions 9.0.0.M1 to 9.0.0.M9, 8.5.0 to 8.5.4, 8.0.0.RC1 to 8.0.36, 7.0.0 to 7.0.70 and 6.0.0 to 6.0.45 did not process the supplied password if the supplied user name did not exist. This made a timing attack possible to determine valid user names. Note that the default configuration includes the LockOutRealm which makes exploitation of this vulnerability harder. 
<p>Publish Date: 2017-08-10 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-0762>CVE-2016-0762</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://lists.apache.org/thread.html/1872f96bad43647832bdd84a408794cd06d9cbb557af63085ca10009@%3Cannounce.tomcat.apache.org%3E">https://lists.apache.org/thread.html/1872f96bad43647832bdd84a408794cd06d9cbb557af63085ca10009@%3Cannounce.tomcat.apache.org%3E</a></p> <p>Release Date: 2017-08-10</p> <p>Fix Resolution: 9.0.0.M10, 8.0.37,8.5.5, 7.0.72, 6.0.46</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2016-0762 (Medium) detected in tomcat-embed-core-8.5.4.jar - ## CVE-2016-0762 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tomcat-embed-core-8.5.4.jar</b></p></summary> <p>Core Tomcat implementation</p> <p>Library home page: <a href="http://tomcat.apache.org/">http://tomcat.apache.org/</a></p> <p>Path to dependency file: pomscan/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/8.5.4/tomcat-embed-core-8.5.4.jar</p> <p> Dependency Hierarchy: - spring-boot-starter-jersey-1.4.0.RELEASE.jar (Root Library) - spring-boot-starter-tomcat-1.4.0.RELEASE.jar - :x: **tomcat-embed-core-8.5.4.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/soumya132/pomscan/commit/861f87574eb468ca8fdec6e4b2ea25783804ec34">861f87574eb468ca8fdec6e4b2ea25783804ec34</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The Realm implementations in Apache Tomcat versions 9.0.0.M1 to 9.0.0.M9, 8.5.0 to 8.5.4, 8.0.0.RC1 to 8.0.36, 7.0.0 to 7.0.70 and 6.0.0 to 6.0.45 did not process the supplied password if the supplied user name did not exist. This made a timing attack possible to determine valid user names. Note that the default configuration includes the LockOutRealm which makes exploitation of this vulnerability harder. 
<p>Publish Date: 2017-08-10 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-0762>CVE-2016-0762</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://lists.apache.org/thread.html/1872f96bad43647832bdd84a408794cd06d9cbb557af63085ca10009@%3Cannounce.tomcat.apache.org%3E">https://lists.apache.org/thread.html/1872f96bad43647832bdd84a408794cd06d9cbb557af63085ca10009@%3Cannounce.tomcat.apache.org%3E</a></p> <p>Release Date: 2017-08-10</p> <p>Fix Resolution: 9.0.0.M10, 8.0.37,8.5.5, 7.0.72, 6.0.46</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_test
cve medium detected in tomcat embed core jar cve medium severity vulnerability vulnerable library tomcat embed core jar core tomcat implementation library home page a href path to dependency file pomscan pom xml path to vulnerable library home wss scanner repository org apache tomcat embed tomcat embed core tomcat embed core jar dependency hierarchy spring boot starter jersey release jar root library spring boot starter tomcat release jar x tomcat embed core jar vulnerable library found in head commit a href found in base branch master vulnerability details the realm implementations in apache tomcat versions to to to to and to did not process the supplied password if the supplied user name did not exist this made a timing attack possible to determine valid user names note that the default configuration includes the lockoutrealm which makes exploitation of this vulnerability harder publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
36,750
15,054,270,351
IssuesEvent
2021-02-03 17:13:54
terraform-providers/terraform-provider-azurerm
https://api.github.com/repos/terraform-providers/terraform-provider-azurerm
closed
Error: Provider produced inconsistent result after apply
bug duplicate service/keyvault
<!--- Please note the following potential times when an issue might be in Terraform core: * [Configuration Language](https://www.terraform.io/docs/configuration/index.html) or resource ordering issues * [State](https://www.terraform.io/docs/state/index.html) and [State Backend](https://www.terraform.io/docs/backends/index.html) issues * [Provisioner](https://www.terraform.io/docs/provisioners/index.html) issues * [Registry](https://registry.terraform.io/) issues * Spans resources across multiple providers If you are running into one of these scenarios, we recommend opening an issue in the [Terraform core repository](https://github.com/hashicorp/terraform/) instead. ---> <!--- Please keep this note for the community ---> ### Community Note * Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request * Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request * If you are interested in working on this issue or have submitted a pull request, please leave a comment <!--- Thank you for keeping this note for the community ---> ### Terraform (and AzureRM Provider) Version <!--- Please run `terraform -v` to show the Terraform core version and provider version(s). If you are not running the latest version of Terraform or the provider, please upgrade because your issue may have already been fixed. [Terraform documentation on provider versioning](https://www.terraform.io/docs/configuration/providers.html#provider-versions). 
---> ### Affected Resource(s) `azurerm_key_vault_secret` ``` terraform -v Terraform v0.13.5 + provider registry.terraform.io/aiven/aiven v2.1.3 + provider registry.terraform.io/hashicorp/azuread v1.1.1 + provider registry.terraform.io/hashicorp/azurerm v2.33.0 + provider registry.terraform.io/hashicorp/random v3.0.0 ``` ### Terraform Configuration Files <!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code ---> ```hcl resource "azurerm_key_vault_secret" "cluster_name" { name = "aks-cluster-name" value = module.aks.name key_vault_id = data.terraform_remote_state.storage.outputs.key_vault_id } ``` ### Debug Output <!--- Please provide a link to a GitHub Gist containing the complete debug output. Please do NOT paste the debug output in the issue; just paste a link to the Gist. To obtain the debug output, see the [Terraform documentation on debugging](https://www.terraform.io/docs/internals/debugging.html). ---> ### Panic Output none ### Expected Behaviour works ### Actual Behaviour ``` Error: Provider produced inconsistent result after apply When applying changes to azurerm_key_vault_secret.cluster_name, provider "registry.terraform.io/hashicorp/azurerm" produced an unexpected new value: Root resource was present, but now absent. This is a bug in the provider, which should be reported in the provider's own issue tracker. ``` ### Steps to Reproduce Running plan from Terraform Cloud workspace. This is intermittent. Typically we just re-run and the issue goes away. ### References I'm assuming this is an azurerm provider issue based on this https://github.com/hashicorp/terraform/issues/20688
1.0
Error: Provider produced inconsistent result after apply - <!--- Please note the following potential times when an issue might be in Terraform core: * [Configuration Language](https://www.terraform.io/docs/configuration/index.html) or resource ordering issues * [State](https://www.terraform.io/docs/state/index.html) and [State Backend](https://www.terraform.io/docs/backends/index.html) issues * [Provisioner](https://www.terraform.io/docs/provisioners/index.html) issues * [Registry](https://registry.terraform.io/) issues * Spans resources across multiple providers If you are running into one of these scenarios, we recommend opening an issue in the [Terraform core repository](https://github.com/hashicorp/terraform/) instead. ---> <!--- Please keep this note for the community ---> ### Community Note * Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request * Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request * If you are interested in working on this issue or have submitted a pull request, please leave a comment <!--- Thank you for keeping this note for the community ---> ### Terraform (and AzureRM Provider) Version <!--- Please run `terraform -v` to show the Terraform core version and provider version(s). If you are not running the latest version of Terraform or the provider, please upgrade because your issue may have already been fixed. [Terraform documentation on provider versioning](https://www.terraform.io/docs/configuration/providers.html#provider-versions). 
---> ### Affected Resource(s) `azurerm_key_vault_secret` ``` terraform -v Terraform v0.13.5 + provider registry.terraform.io/aiven/aiven v2.1.3 + provider registry.terraform.io/hashicorp/azuread v1.1.1 + provider registry.terraform.io/hashicorp/azurerm v2.33.0 + provider registry.terraform.io/hashicorp/random v3.0.0 ``` ### Terraform Configuration Files <!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code ---> ```hcl resource "azurerm_key_vault_secret" "cluster_name" { name = "aks-cluster-name" value = module.aks.name key_vault_id = data.terraform_remote_state.storage.outputs.key_vault_id } ``` ### Debug Output <!--- Please provide a link to a GitHub Gist containing the complete debug output. Please do NOT paste the debug output in the issue; just paste a link to the Gist. To obtain the debug output, see the [Terraform documentation on debugging](https://www.terraform.io/docs/internals/debugging.html). ---> ### Panic Output none ### Expected Behaviour works ### Actual Behaviour ``` Error: Provider produced inconsistent result after apply When applying changes to azurerm_key_vault_secret.cluster_name, provider "registry.terraform.io/hashicorp/azurerm" produced an unexpected new value: Root resource was present, but now absent. This is a bug in the provider, which should be reported in the provider's own issue tracker. ``` ### Steps to Reproduce Running plan from Terraform Cloud workspace. This is intermittent. Typically we just re-run and the issue goes away. ### References I'm assuming this is an azurerm provider issue based on this https://github.com/hashicorp/terraform/issues/20688
non_test
error provider produced inconsistent result after apply please note the following potential times when an issue might be in terraform core or resource ordering issues and issues issues issues spans resources across multiple providers if you are running into one of these scenarios we recommend opening an issue in the instead community note please vote on this issue by adding a 👍 to the original issue to help the community and maintainers prioritize this request please do not leave or me too comments they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment terraform and azurerm provider version affected resource s azurerm key vault secret terraform v terraform provider registry terraform io aiven aiven provider registry terraform io hashicorp azuread provider registry terraform io hashicorp azurerm provider registry terraform io hashicorp random terraform configuration files hcl resource azurerm key vault secret cluster name name aks cluster name value module aks name key vault id data terraform remote state storage outputs key vault id debug output please provide a link to a github gist containing the complete debug output please do not paste the debug output in the issue just paste a link to the gist to obtain the debug output see the panic output none expected behaviour works actual behaviour error provider produced inconsistent result after apply when applying changes to azurerm key vault secret cluster name provider registry terraform io hashicorp azurerm produced an unexpected new value root resource was present but now absent this is a bug in the provider which should be reported in the provider s own issue tracker steps to reproduce running plan from terraform cloud workspace this is intermittent typically we just re run and the issue goes away references i m assuming this is an azurerm provider issues based on this
0
31,893
13,653,498,457
IssuesEvent
2020-09-27 13:03:41
tuna/issues
https://api.github.com/repos/tuna/issues
closed
There seems to be a small problem with the BFSU mirror tutorial
Service Issue
Tutorial URL: https://mirrors.bfsu.edu.cn/help/lineageOS/ In it: > Change > ``` > <remote name="aosp" > fetch="https://android.googlesource.com" > ``` > to > ``` > <remote name="aosp" > fetch="https://aosp.tuna.tsinghua.edu.cn" > ``` While syncing Lineage OS, the BFSU tutorial redirects the AOSP mirror to TUNA, but BFSU has a mirror of its own. Is this meant to reduce server load, or is it a small mistake?
1.0
There seems to be a small problem with the BFSU mirror tutorial - Tutorial URL: https://mirrors.bfsu.edu.cn/help/lineageOS/ In it: > Change > ``` > <remote name="aosp" > fetch="https://android.googlesource.com" > ``` > to > ``` > <remote name="aosp" > fetch="https://aosp.tuna.tsinghua.edu.cn" > ``` While syncing Lineage OS, the BFSU tutorial redirects the AOSP mirror to TUNA, but BFSU has a mirror of its own. Is this meant to reduce server load, or is it a small mistake?
non_test
bfsu mirror tutorial seems to have a small problem tutorial url in it change remote name aosp fetch to remote name aosp fetch while syncing lineage os the bfsu tutorial redirects the aosp mirror to tuna but bfsu itself has a mirror is this to reduce server load or a small mistake
0
203,620
23,158,158,504
IssuesEvent
2022-07-29 14:53:12
turkdevops/vscode
https://api.github.com/repos/turkdevops/vscode
closed
CVE-2021-35065 (High) detected in multiple libraries - autoclosed
security vulnerability
## CVE-2021-35065 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>glob-parent-2.0.0.tgz</b>, <b>glob-parent-5.1.0.tgz</b>, <b>glob-parent-3.1.0.tgz</b>, <b>glob-parent-5.1.1.tgz</b></p></summary> <p> <details><summary><b>glob-parent-2.0.0.tgz</b></p></summary> <p>Strips glob magic from a string to provide the parent path</p> <p>Library home page: <a href="https://registry.npmjs.org/glob-parent/-/glob-parent-2.0.0.tgz">https://registry.npmjs.org/glob-parent/-/glob-parent-2.0.0.tgz</a></p> <p> Dependency Hierarchy: - vscode-gulp-watch-5.0.2.tgz (Root Library) - anymatch-1.3.2.tgz - micromatch-2.3.11.tgz - parse-glob-3.0.4.tgz - glob-base-0.3.0.tgz - :x: **glob-parent-2.0.0.tgz** (Vulnerable Library) </details> <details><summary><b>glob-parent-5.1.0.tgz</b></p></summary> <p>Extract the non-magic parent path from a glob string.</p> <p>Library home page: <a href="https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.0.tgz">https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.0.tgz</a></p> <p> Dependency Hierarchy: - chokidar-3.2.3.tgz (Root Library) - :x: **glob-parent-5.1.0.tgz** (Vulnerable Library) </details> <details><summary><b>glob-parent-3.1.0.tgz</b></p></summary> <p>Strips glob magic from a string to provide the parent directory path</p> <p>Library home page: <a href="https://registry.npmjs.org/glob-parent/-/glob-parent-3.1.0.tgz">https://registry.npmjs.org/glob-parent/-/glob-parent-3.1.0.tgz</a></p> <p> Dependency Hierarchy: - vscode-gulp-watch-5.0.2.tgz (Root Library) - :x: **glob-parent-3.1.0.tgz** (Vulnerable Library) </details> <details><summary><b>glob-parent-5.1.1.tgz</b></p></summary> <p>Extract the non-magic parent path from a glob string.</p> <p>Library home page: <a href="https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.1.tgz">https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.1.tgz</a></p> <p> 
Dependency Hierarchy: - vscode-gulp-watch-5.0.2.tgz (Root Library) - chokidar-3.3.0.tgz - :x: **glob-parent-5.1.1.tgz** (Vulnerable Library) </details> <p>Found in base branch: <b>webview-views</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The package glob-parent before 6.0.1 are vulnerable to Regular Expression Denial of Service (ReDoS) <p>Publish Date: 2021-06-22 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-35065>CVE-2021-35065</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/advisories/GHSA-cj88-88mr-972w">https://github.com/advisories/GHSA-cj88-88mr-972w</a></p> <p>Release Date: 2021-06-22</p> <p>Fix Resolution: glob-parent - 6.0.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2021-35065 (High) detected in multiple libraries - autoclosed - ## CVE-2021-35065 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>glob-parent-2.0.0.tgz</b>, <b>glob-parent-5.1.0.tgz</b>, <b>glob-parent-3.1.0.tgz</b>, <b>glob-parent-5.1.1.tgz</b></p></summary> <p> <details><summary><b>glob-parent-2.0.0.tgz</b></p></summary> <p>Strips glob magic from a string to provide the parent path</p> <p>Library home page: <a href="https://registry.npmjs.org/glob-parent/-/glob-parent-2.0.0.tgz">https://registry.npmjs.org/glob-parent/-/glob-parent-2.0.0.tgz</a></p> <p> Dependency Hierarchy: - vscode-gulp-watch-5.0.2.tgz (Root Library) - anymatch-1.3.2.tgz - micromatch-2.3.11.tgz - parse-glob-3.0.4.tgz - glob-base-0.3.0.tgz - :x: **glob-parent-2.0.0.tgz** (Vulnerable Library) </details> <details><summary><b>glob-parent-5.1.0.tgz</b></p></summary> <p>Extract the non-magic parent path from a glob string.</p> <p>Library home page: <a href="https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.0.tgz">https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.0.tgz</a></p> <p> Dependency Hierarchy: - chokidar-3.2.3.tgz (Root Library) - :x: **glob-parent-5.1.0.tgz** (Vulnerable Library) </details> <details><summary><b>glob-parent-3.1.0.tgz</b></p></summary> <p>Strips glob magic from a string to provide the parent directory path</p> <p>Library home page: <a href="https://registry.npmjs.org/glob-parent/-/glob-parent-3.1.0.tgz">https://registry.npmjs.org/glob-parent/-/glob-parent-3.1.0.tgz</a></p> <p> Dependency Hierarchy: - vscode-gulp-watch-5.0.2.tgz (Root Library) - :x: **glob-parent-3.1.0.tgz** (Vulnerable Library) </details> <details><summary><b>glob-parent-5.1.1.tgz</b></p></summary> <p>Extract the non-magic parent path from a glob string.</p> <p>Library home page: <a 
href="https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.1.tgz">https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.1.tgz</a></p> <p> Dependency Hierarchy: - vscode-gulp-watch-5.0.2.tgz (Root Library) - chokidar-3.3.0.tgz - :x: **glob-parent-5.1.1.tgz** (Vulnerable Library) </details> <p>Found in base branch: <b>webview-views</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The package glob-parent before 6.0.1 are vulnerable to Regular Expression Denial of Service (ReDoS) <p>Publish Date: 2021-06-22 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-35065>CVE-2021-35065</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/advisories/GHSA-cj88-88mr-972w">https://github.com/advisories/GHSA-cj88-88mr-972w</a></p> <p>Release Date: 2021-06-22</p> <p>Fix Resolution: glob-parent - 6.0.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_test
cve high detected in multiple libraries autoclosed cve high severity vulnerability vulnerable libraries glob parent tgz glob parent tgz glob parent tgz glob parent tgz glob parent tgz strips glob magic from a string to provide the parent path library home page a href dependency hierarchy vscode gulp watch tgz root library anymatch tgz micromatch tgz parse glob tgz glob base tgz x glob parent tgz vulnerable library glob parent tgz extract the non magic parent path from a glob string library home page a href dependency hierarchy chokidar tgz root library x glob parent tgz vulnerable library glob parent tgz strips glob magic from a string to provide the parent directory path library home page a href dependency hierarchy vscode gulp watch tgz root library x glob parent tgz vulnerable library glob parent tgz extract the non magic parent path from a glob string library home page a href dependency hierarchy vscode gulp watch tgz root library chokidar tgz x glob parent tgz vulnerable library found in base branch webview views vulnerability details the package glob parent before are vulnerable to regular expression denial of service redos publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution glob parent step up your open source security game with mend
0
324,922
27,831,154,090
IssuesEvent
2023-03-20 05:05:35
rancher/qa-tasks
https://api.github.com/repos/rancher/qa-tasks
closed
[RKE2] Snapshot + restore using etcd only option
team/area2 [zube]: QA Review area/automation-test team/infracloud
Automated test ### Snapshot - Deploy an RKE2 cluster node driver - 3 etcd, 2 cp, 3 worker nodes - enable local etcd snapshots - create workload w1/ingress1 - Take a backup b1 - Verify backup is listed in cluster mgmt --> Snapshots page - Verify backup is available on the etcd nodes ### Restore - Create a workload w2 after backup b1 - Perform restore from b1 - Validate cluster comes up Active - Validate w1 and ingress1 are available - Validate w2 does not exist
1.0
[RKE2] Snapshot + restore using etcd only option - Automated test ### Snapshot - Deploy an RKE2 cluster node driver - 3 etcd, 2 cp, 3 worker nodes - enable local etcd snapshots - create workload w1/ingress1 - Take a backup b1 - Verify backup is listed in cluster mgmt --> Snapshots page - Verify backup is available on the etcd nodes ### Restore - Create a workload w2 after backup b1 - Perform restore from b1 - Validate cluster comes up Active - Validate w1 and ingress1 are available - Validate w2 does not exist
test
snapshot restore using etcd only option automated test snapshot deploy an cluster node driver etcd cp worker nodes enable local etcd snapshots create workload take a backup verify backup is listed in cluster mgmt snapshots page verify backup is available on the etcd nodes restore create a workload after backup perform restore from validate cluster comes up active validate and are available validate does not exist
1
304,549
23,071,352,596
IssuesEvent
2022-07-25 18:23:37
bigbluebutton/bigbluebutton
https://api.github.com/repos/bigbluebutton/bigbluebutton
closed
Measure memory and compare JVM11 with 8 usage
type: test target: documentation
Given that we upgraded JVM to 11 in #14611 we need to monitor and measure whether the currently recommended memory limits, server specs recommendations, etc correspond to the new state of BBB 2.5
1.0
Measure memory and compare JVM11 with 8 usage - Given that we upgraded JVM to 11 in #14611 we need to monitor and measure whether the currently recommended memory limits, server specs recommendations, etc correspond to the new state of BBB 2.5
non_test
measure memory and compare with usage given that we upgraded jvm to in we need to monitor and measure whether the currently recommended memory limits server specs recommendations etc correspond to the new state of bbb
0
97,452
20,263,954,860
IssuesEvent
2022-02-15 10:15:09
joomla/joomla-cms
https://api.github.com/repos/joomla/joomla-cms
closed
[4.0] com_contact email
No Code Attached Yet
The new form-control css-classes are really great. They give us the possibility to improve the UX significantly. Unfortunately there is a small problem with the email-field. Even if the input is not a valid email address the field gets the css-classes `class="form-control validate-email required valid form-control-success"`
1.0
[4.0] com_contact email - The new form-control css-classes are really great. They give us the possibility to improve the UX significantly. Unfortunately there is a small problem with the email-field. Even if the input is not a valid email address the field gets the css-classes `class="form-control validate-email required valid form-control-success"`
non_test
com contact email the new form control css classes are really great they give us the possibility to improve the ux significantly unfortunately there is a small problem with the email field even if the input is not a valid email address the field gets the css classes class form control validate email required valid form control success
0
220,177
17,154,731,306
IssuesEvent
2021-07-14 04:32:46
crypto-org-chain/chain-main
https://api.github.com/repos/crypto-org-chain/chain-main
opened
Problem: missing setup information in docs
test
as noted by @MaCong-crypto > Unable to find cosmovisor for chain-main project Pull repo with --recurse-submodules flag `git clone git@github.com:crypto-org-chain/chain-main.git --recurse-submodules` Or if you have already pulled the repo: `git submodule update --init --recursive` You can read more about git submodule [here](https://git-scm.com/book/en/v2/Git-Tools-Submodules) > Python Black and Isort VSCode set up https://cereblanco.medium.com/setup-black-and-isort-in-vscode-514804590bf9
1.0
Problem: missing setup information in docs - as noted by @MaCong-crypto > Unable to find cosmovisor for chain-main project Pull repo with --recurse-submodules flag `git clone git@github.com:crypto-org-chain/chain-main.git --recurse-submodules` Or if you have already pulled the repo: `git submodule update --init --recursive` You can read more about git submodule [here](https://git-scm.com/book/en/v2/Git-Tools-Submodules) > Python Black and Isort VSCode set up https://cereblanco.medium.com/setup-black-and-isort-in-vscode-514804590bf9
test
problem missing setup information in docs as noted by macong crypto unable to find cosmovisor for chain main project pull repo with recurse submodules flag git clone git github com crypto org chain chain main git recurse submodules or if you have already pulled the repo git submodule update init recursive you can read more about git submodule python black and isort vscode set up
1
2,539
5,300,272,015
IssuesEvent
2017-02-10 03:52:49
mitchellh/packer
https://api.github.com/repos/mitchellh/packer
closed
Packer hangs when compress post-processor runs out of disk space
bug post-processor/compress
Hey guys, perhaps an unusual case here - I ran out of disk on my macbook whilst compressing a vagrant box. The `packer` processes are left hung and I'm going to have to manually kill them. Pertinent packer output: ``` vmware-iso (vagrant): Compressing: Vagrantfile vmware-iso (vagrant): Compressing: disk-s001.vmdk vmware-iso (vagrant): Compressing: disk-s002.vmdk vmware-iso (vagrant): Compressing: disk-s003.vmdk vmware-iso (vagrant): Compressing: disk-s004.vmdk vmware-iso (vagrant): Compressing: disk-s005.vmdk ^C ^C ``` Some `ps`: ``` > ps -ef | grep packer 501 61889 14816 0 8:50am ttys001 0:04.88 packer build --only=vmware-iso -var iso_checksum=90b6d0431b13e880c904a16ac9fa42a1 -var iso_url=../../iso/OSX_InstallESD_10.10.5_14F27.dmg template.json 501 61890 61889 0 8:50am ttys001 0:01.83 /usr/local/bin/packer build --only=vmware-iso -var iso_checksum=90b6d0431b13e880c904a16ac9fa42a1 -var iso_url=../../iso/OSX_InstallESD_10.10.5_14F27.dmg template.json 501 61891 61890 0 8:50am ttys001 0:17.87 /usr/local/bin/packer-builder-vmware-iso 501 61892 61890 0 8:50am ttys001 0:00.09 /usr/local/bin/packer-provisioner-file 501 61893 61890 0 8:50am ttys001 0:00.76 /usr/local/bin/packer-provisioner-shell 501 61894 61890 0 8:50am ttys001 10:52.47 /usr/local/bin/packer-post-processor-vagrant 501 65007 54027 0 9:53am ttys003 0:00.00 grep --color=auto packer ```
1.0
Packer hangs when compress post-processor runs out of disk space - Hey guys, perhaps an unusual case here - I ran out of disk on my macbook whilst compressing a vagrant box. The `packer` processes are left hung and I'm going to have to manually kill them. Pertinent packer output: ``` vmware-iso (vagrant): Compressing: Vagrantfile vmware-iso (vagrant): Compressing: disk-s001.vmdk vmware-iso (vagrant): Compressing: disk-s002.vmdk vmware-iso (vagrant): Compressing: disk-s003.vmdk vmware-iso (vagrant): Compressing: disk-s004.vmdk vmware-iso (vagrant): Compressing: disk-s005.vmdk ^C ^C ``` Some `ps`: ``` > ps -ef | grep packer 501 61889 14816 0 8:50am ttys001 0:04.88 packer build --only=vmware-iso -var iso_checksum=90b6d0431b13e880c904a16ac9fa42a1 -var iso_url=../../iso/OSX_InstallESD_10.10.5_14F27.dmg template.json 501 61890 61889 0 8:50am ttys001 0:01.83 /usr/local/bin/packer build --only=vmware-iso -var iso_checksum=90b6d0431b13e880c904a16ac9fa42a1 -var iso_url=../../iso/OSX_InstallESD_10.10.5_14F27.dmg template.json 501 61891 61890 0 8:50am ttys001 0:17.87 /usr/local/bin/packer-builder-vmware-iso 501 61892 61890 0 8:50am ttys001 0:00.09 /usr/local/bin/packer-provisioner-file 501 61893 61890 0 8:50am ttys001 0:00.76 /usr/local/bin/packer-provisioner-shell 501 61894 61890 0 8:50am ttys001 10:52.47 /usr/local/bin/packer-post-processor-vagrant 501 65007 54027 0 9:53am ttys003 0:00.00 grep --color=auto packer ```
non_test
packer hangs when compress post processor runs out of disk space hey guys perhaps an unusual case here i ran out of disk on my macbook whilst compressing a vagrant box the packer processes are left hung and i m going to have to manually kill them pertinent packer output vmware iso vagrant compressing vagrantfile vmware iso vagrant compressing disk vmdk vmware iso vagrant compressing disk vmdk vmware iso vagrant compressing disk vmdk vmware iso vagrant compressing disk vmdk vmware iso vagrant compressing disk vmdk c c some ps ps ef grep packer packer build only vmware iso var iso checksum var iso url iso osx installesd dmg template json usr local bin packer build only vmware iso var iso checksum var iso url iso osx installesd dmg template json usr local bin packer builder vmware iso usr local bin packer provisioner file usr local bin packer provisioner shell usr local bin packer post processor vagrant grep color auto packer
0
138,778
31,026,429,230
IssuesEvent
2023-08-10 09:26:43
GEOLYTIX/xyz
https://api.github.com/repos/GEOLYTIX/xyz
closed
Consolidate vector formats.
Feature Code RFC
The geojson and wkt formats already use a query and no longer require separate endpoints. The mapp library modules should be consolidated since they are doing effectively the same. The grid format can also work with a query and should be consolidated with the other vector formats. This will require an optional extension to include a viewport in the request and reload on viewport change.
1.0
Consolidate vector formats. - The geojson and wkt formats already use a query and no longer require separate endpoints. The mapp library modules should be consolidated since they are doing effectively the same. The grid format can also work with a query and should be consolidated with the other vector formats. This will require an optional extension to include a viewport in the reqeust and reload on viewport change.
non_test
consolidate vector formats the geojson and wkt formats already use a query and no longer require separate endpoints the mapp library modules should be consolidated since they are doing effectively the same the grid format can also work with a query and should be consolidated with the other vector formats this will require an optional extension to include a viewport in the reqeust and reload on viewport change
0
321,050
27,502,416,920
IssuesEvent
2023-03-05 20:49:15
jokk-itu/authserver
https://api.github.com/repos/jokk-itu/authserver
closed
Implement E2E Tests
test
## Problem E2E tests are strict to two tests, and need to be more open and easy to create new tests. ## Solution Pack requests to endpoint builders. ### Authorize requests - [x] Make an Authorize Request for Login (WebApp and Native) - [x] Make an Authorize Request for Login Consent (WebApp and Native) - [x] Make an Authorize Request for None (WebApp and Native) ### Token requests - [ ] Make a Token request with grant type AuthorizationCode - [ ] Make a Token request with grant type RefreshToken ### UserInfo requests - [ ] Make a UserInfo request
1.0
Implement E2E Tests - ## Problem E2E tests are strict to two tests, and need to be more open and easy to create new tests. ## Solution Pack requests to endpoint builders. ### Authorize requests - [x] Make an Authorize Request for Login (WebApp and Native) - [x] Make an Authorize Request for Login Consent (WebApp and Native) - [x] Make an Authorize Request for None (WebApp and Native) ### Token requests - [ ] Make a Token request with grant type AuthorizationCode - [ ] Make a Token request with grant type RefreshToken ### UserInfo requests - [ ] Make a UserInfo request
test
implement tests problem tests are strict to two tests and need to be more open and easy to create new tests solution pack requests to endpoint builders authorize requests make an authorize request for login webapp and native make an authorize request for login consent webapp and native make an authorize request for none webapp and native token requests make a token request with grant type authorizationcode make a token request with grant type refreshtoken userinfo requests make a userinfo request
1
322,555
27,617,005,132
IssuesEvent
2023-03-09 20:13:27
glygener/glygen-issues
https://api.github.com/repos/glygener/glygen-issues
closed
Check all glycan and proteoform datasets
backend testing
As the result of updating EBI downloads, all glycan and proteoform datasets have been updated. Please use the script check-dataset-group.py to check sanity of these datasets.
1.0
Check all glycan and proteoform datasets - As the result of updating EBI downloads, all glycan and proteoform datasets have been updated. Please use the script check-dataset-group.py to check sanity of these datasets.
test
check all glycan and proteoform datasets as the result of updating ebi downloads all glycan and proteoform datasets have been updated please use the script check dataset group py to check sanity of these datasets
1
17,255
3,602,641,999
IssuesEvent
2016-02-03 16:18:29
cockroachdb/cockroach
https://api.github.com/repos/cockroachdb/cockroach
opened
Failed tests (12202): TestLogSplits
test-failure TestLogSplits
The following test appears to have failed: [#12202](https://circleci.com/gh/cockroachdb/cockroach/12202): ``` 127.0.0.1:0 16:17:27.897296 12.69µs ·meta descriptor lookup kv/dist_sender.go:565 127.0.0.1:0 16:17:27.897326 2.798855ms ·sending RPC kv/dist_sender.go:422 127.0.0.1:0 16:17:27.897390 0 ··sending to 127.0.0.1:56938 kv/send.go:236 I0203 16:17:27.904063 609 storage/engine/rocksdb.go:137 closing in-memory rocksdb instance --- FAIL: TestLogSplits (1.33s) <autogenerated>:30: remove /tmp/TestLogic_ca: no such file or directory <autogenerated>:30: remove /tmp/TestLogic_cert: no such file or directory <autogenerated>:30: remove /tmp/TestLogic_key: no such file or directory === RUN Example_rebalancing --- PASS: Example_rebalancing (0.47s) FAIL FAIL github.com/cockroachdb/cockroach/storage 34.809s === RUN TestBatchBasics I0203 16:17:42.039714 679 storage/engine/rocksdb.go:137 closing in-memory rocksdb instance --- PASS: TestBatchBasics (0.00s) === RUN TestBatchGet I0203 16:17:42.043593 679 storage/engine/rocksdb.go:137 closing in-memory rocksdb instance --- PASS: TestBatchGet (0.01s) === RUN TestBatchMerge I0203 16:17:42.050754 679 storage/engine/rocksdb.go:137 closing in-memory rocksdb instance --- PASS: TestBatchMerge (0.00s) === RUN TestBatchProto ``` Please assign, take a look and update the issue accordingly.
2.0
Failed tests (12202): TestLogSplits - The following test appears to have failed: [#12202](https://circleci.com/gh/cockroachdb/cockroach/12202): ``` 127.0.0.1:0 16:17:27.897296 12.69µs ·meta descriptor lookup kv/dist_sender.go:565 127.0.0.1:0 16:17:27.897326 2.798855ms ·sending RPC kv/dist_sender.go:422 127.0.0.1:0 16:17:27.897390 0 ··sending to 127.0.0.1:56938 kv/send.go:236 I0203 16:17:27.904063 609 storage/engine/rocksdb.go:137 closing in-memory rocksdb instance --- FAIL: TestLogSplits (1.33s) <autogenerated>:30: remove /tmp/TestLogic_ca: no such file or directory <autogenerated>:30: remove /tmp/TestLogic_cert: no such file or directory <autogenerated>:30: remove /tmp/TestLogic_key: no such file or directory === RUN Example_rebalancing --- PASS: Example_rebalancing (0.47s) FAIL FAIL github.com/cockroachdb/cockroach/storage 34.809s === RUN TestBatchBasics I0203 16:17:42.039714 679 storage/engine/rocksdb.go:137 closing in-memory rocksdb instance --- PASS: TestBatchBasics (0.00s) === RUN TestBatchGet I0203 16:17:42.043593 679 storage/engine/rocksdb.go:137 closing in-memory rocksdb instance --- PASS: TestBatchGet (0.01s) === RUN TestBatchMerge I0203 16:17:42.050754 679 storage/engine/rocksdb.go:137 closing in-memory rocksdb instance --- PASS: TestBatchMerge (0.00s) === RUN TestBatchProto ``` Please assign, take a look and update the issue accordingly.
test
failed tests testlogsplits the following test appears to have failed ·meta descriptor lookup kv dist sender go ·sending rpc kv dist sender go ··sending to kv send go storage engine rocksdb go closing in memory rocksdb instance fail testlogsplits remove tmp testlogic ca no such file or directory remove tmp testlogic cert no such file or directory remove tmp testlogic key no such file or directory run example rebalancing pass example rebalancing fail fail github com cockroachdb cockroach storage run testbatchbasics storage engine rocksdb go closing in memory rocksdb instance pass testbatchbasics run testbatchget storage engine rocksdb go closing in memory rocksdb instance pass testbatchget run testbatchmerge storage engine rocksdb go closing in memory rocksdb instance pass testbatchmerge run testbatchproto please assign take a look and update the issue accordingly
1
182,239
14,111,375,971
IssuesEvent
2020-11-07 00:02:30
rancher/rancher
https://api.github.com/repos/rancher/rancher
closed
[EKSv2]: enabling GPU in Rancher does not enable it on the cluster in AWS console and enabling GPU for an EKS cluster should result in the correct AMI being used
[zube]: To Test
<!-- Please search for existing issues first, then read https://rancher.com/docs/rancher/v2.x/en/contributing/#bugs-issues-or-questions to see what we expect in an issue For security issues, please email security@rancher.com instead of posting a public issue in GitHub. You may (but are not required to) use the GPG key located on Keybase. --> **What kind of request is this (question/bug/enhancement/feature request):** bug **Steps to reproduce (least amount of steps as possible):** Enable GPU **Result:** non-GPU ami is used **Other details that may be helpful:** Add the following to createNodegroup function in the eks-operator: `if gpu := group.Gpu; aws.BoolValue(gpu) { nodeGroupCreateInput.AmiType = aws.String(eks.AMITypesAl2X8664Gpu) }`
1.0
[EKSv2]: enabling GPU in Rancher does not enable it on the cluster in AWS console and enabling GPU for an EKS cluster should result in the correct AMI being used - <!-- Please search for existing issues first, then read https://rancher.com/docs/rancher/v2.x/en/contributing/#bugs-issues-or-questions to see what we expect in an issue For security issues, please email security@rancher.com instead of posting a public issue in GitHub. You may (but are not required to) use the GPG key located on Keybase. --> **What kind of request is this (question/bug/enhancement/feature request):** bug **Steps to reproduce (least amount of steps as possible):** Enable GPU **Result:** non-GPU ami is used **Other details that may be helpful:** Add the following to createNodegroup function in the eks-operator: `if gpu := group.Gpu; aws.BoolValue(gpu) { nodeGroupCreateInput.AmiType = aws.String(eks.AMITypesAl2X8664Gpu) }`
test
enabling gpu in rancher does not enable it on the cluster in aws console and enabling gpu for an eks cluster should result in the correct ami being used please search for existing issues first then read to see what we expect in an issue for security issues please email security rancher com instead of posting a public issue in github you may but are not required to use the gpg key located on keybase what kind of request is this question bug enhancement feature request bug steps to reproduce least amount of steps as possible enable gpu result non gpu ami is used other details that may be helpful add the following to createnodegroup function in the eks operator if gpu group gpu aws boolvalue gpu nodegroupcreateinput amitype aws string eks
1
335,884
30,090,922,033
IssuesEvent
2023-06-29 12:17:31
elastic/kibana
https://api.github.com/repos/elastic/kibana
closed
Failing test: Firefox UI Functional Tests - Visualize.test/functional/apps/visualize/group5/_tsvb_time_series·ts - visualize app visual builder Time Series basics Clicking on the chart should create a filter for series with multiple split by terms fields one of which has formatting
blocker Team:Visualizations failed-test skipped-test v8.9.0 v8.10.0
A test failed on a tracked branch ``` Error: retry.try timeout: StaleElementReferenceError: The element with the reference b5d57f48-f58b-4894-8c7d-73d08ce93a69 is stale; either its node document is not the active document, or it is no longer connected to the DOM at Object.throwDecodedError (node_modules/selenium-webdriver/lib/error.js:524:15) at parseHttpResponse (node_modules/selenium-webdriver/lib/http.js:601:13) at Executor.execute (node_modules/selenium-webdriver/lib/http.js:529:28) at runMicrotasks (<anonymous>) at processTicksAndRejections (node:internal/process/task_queues:96:5) at Task.exec (prevent_parallel_calls.ts:28:20) at onFailure (retry_for_success.ts:17:9) at retryForSuccess (retry_for_success.ts:59:13) at RetryService.try (retry.ts:31:12) at VisualBuilderPageObject.setAnotherGroupByTermsField (visual_builder_page.ts:854:5) at Context.<anonymous> (_tsvb_time_series.ts:226:13) at Object.apply (wrap_function.js:73:16) ``` First failure: [CI Build - main](https://buildkite.com/elastic/kibana-on-merge/builds/31106#01887dcc-40f4-447a-a9df-76d40b0e4eeb) <!-- kibanaCiData = {"failed-test":{"test.class":"Firefox UI Functional Tests - Visualize.test/functional/apps/visualize/group5/_tsvb_time_series·ts","test.name":"visualize app visual builder Time Series basics Clicking on the chart should create a filter for series with multiple split by terms fields one of which has formatting","test.failCount":5}} --> <img width="1283" alt="image" src="https://github.com/elastic/kibana/assets/10977896/6a9b8fdc-abaf-4fc4-a16f-1d671e4ba855">
2.0
Failing test: Firefox UI Functional Tests - Visualize.test/functional/apps/visualize/group5/_tsvb_time_series·ts - visualize app visual builder Time Series basics Clicking on the chart should create a filter for series with multiple split by terms fields one of which has formatting - A test failed on a tracked branch ``` Error: retry.try timeout: StaleElementReferenceError: The element with the reference b5d57f48-f58b-4894-8c7d-73d08ce93a69 is stale; either its node document is not the active document, or it is no longer connected to the DOM at Object.throwDecodedError (node_modules/selenium-webdriver/lib/error.js:524:15) at parseHttpResponse (node_modules/selenium-webdriver/lib/http.js:601:13) at Executor.execute (node_modules/selenium-webdriver/lib/http.js:529:28) at runMicrotasks (<anonymous>) at processTicksAndRejections (node:internal/process/task_queues:96:5) at Task.exec (prevent_parallel_calls.ts:28:20) at onFailure (retry_for_success.ts:17:9) at retryForSuccess (retry_for_success.ts:59:13) at RetryService.try (retry.ts:31:12) at VisualBuilderPageObject.setAnotherGroupByTermsField (visual_builder_page.ts:854:5) at Context.<anonymous> (_tsvb_time_series.ts:226:13) at Object.apply (wrap_function.js:73:16) ``` First failure: [CI Build - main](https://buildkite.com/elastic/kibana-on-merge/builds/31106#01887dcc-40f4-447a-a9df-76d40b0e4eeb) <!-- kibanaCiData = {"failed-test":{"test.class":"Firefox UI Functional Tests - Visualize.test/functional/apps/visualize/group5/_tsvb_time_series·ts","test.name":"visualize app visual builder Time Series basics Clicking on the chart should create a filter for series with multiple split by terms fields one of which has formatting","test.failCount":5}} --> <img width="1283" alt="image" src="https://github.com/elastic/kibana/assets/10977896/6a9b8fdc-abaf-4fc4-a16f-1d671e4ba855">
test
failing test firefox ui functional tests visualize test functional apps visualize tsvb time series·ts visualize app visual builder time series basics clicking on the chart should create a filter for series with multiple split by terms fields one of which has formatting a test failed on a tracked branch error retry try timeout staleelementreferenceerror the element with the reference is stale either its node document is not the active document or it is no longer connected to the dom at object throwdecodederror node modules selenium webdriver lib error js at parsehttpresponse node modules selenium webdriver lib http js at executor execute node modules selenium webdriver lib http js at runmicrotasks at processticksandrejections node internal process task queues at task exec prevent parallel calls ts at onfailure retry for success ts at retryforsuccess retry for success ts at retryservice try retry ts at visualbuilderpageobject setanothergroupbytermsfield visual builder page ts at context tsvb time series ts at object apply wrap function js first failure img width alt image src
1
184,916
14,290,116,085
IssuesEvent
2020-11-23 20:21:48
github-vet/rangeclosure-findings
https://api.github.com/repos/github-vet/rangeclosure-findings
closed
honeycombio/honeytail: parsers/keyval/keyval_test.go; 33 LoC
fresh small test
Found a possible issue in [honeycombio/honeytail](https://www.github.com/honeycombio/honeytail) at [parsers/keyval/keyval_test.go](https://github.com/honeycombio/honeytail/blob/151063b4f4a7d2f2d774b692644b478484a34ea7/parsers/keyval/keyval_test.go#L117-L149) The below snippet of Go code triggered static analysis which searches for goroutines and/or defer statements which capture loop variables. [Click here to see the code in its original context.](https://github.com/honeycombio/honeytail/blob/151063b4f4a7d2f2d774b692644b478484a34ea7/parsers/keyval/keyval_test.go#L117-L149) <details> <summary>Click here to show the 33 line(s) of Go which triggered the analyzer.</summary> ```go for _, tst := range tsts { p := &Parser{} p.Init(&Options{ NumParsers: 5, FilterRegex: tst.filterString, InvertFilter: tst.invertFilter, }) lines := make(chan string) send := make(chan event.Event) // send input into lines in a goroutine then close the lines channel go func() { for _, line := range tst.lines { lines <- line } close(lines) }() // read from the send channel and see if we got back what we expected wg := sync.WaitGroup{} wg.Add(1) go func() { var counter int for range send { counter++ } if counter != tst.expectedEvents { t.Errorf("expected %d messages out the send channel, got %d\n", tst.expectedEvents, counter) } wg.Done() }() p.ProcessLines(lines, send, nil) close(send) wg.Wait() } ``` </details> commit ID: 151063b4f4a7d2f2d774b692644b478484a34ea7
1.0
honeycombio/honeytail: parsers/keyval/keyval_test.go; 33 LoC - Found a possible issue in [honeycombio/honeytail](https://www.github.com/honeycombio/honeytail) at [parsers/keyval/keyval_test.go](https://github.com/honeycombio/honeytail/blob/151063b4f4a7d2f2d774b692644b478484a34ea7/parsers/keyval/keyval_test.go#L117-L149) The below snippet of Go code triggered static analysis which searches for goroutines and/or defer statements which capture loop variables. [Click here to see the code in its original context.](https://github.com/honeycombio/honeytail/blob/151063b4f4a7d2f2d774b692644b478484a34ea7/parsers/keyval/keyval_test.go#L117-L149) <details> <summary>Click here to show the 33 line(s) of Go which triggered the analyzer.</summary> ```go for _, tst := range tsts { p := &Parser{} p.Init(&Options{ NumParsers: 5, FilterRegex: tst.filterString, InvertFilter: tst.invertFilter, }) lines := make(chan string) send := make(chan event.Event) // send input into lines in a goroutine then close the lines channel go func() { for _, line := range tst.lines { lines <- line } close(lines) }() // read from the send channel and see if we got back what we expected wg := sync.WaitGroup{} wg.Add(1) go func() { var counter int for range send { counter++ } if counter != tst.expectedEvents { t.Errorf("expected %d messages out the send channel, got %d\n", tst.expectedEvents, counter) } wg.Done() }() p.ProcessLines(lines, send, nil) close(send) wg.Wait() } ``` </details> commit ID: 151063b4f4a7d2f2d774b692644b478484a34ea7
test
honeycombio honeytail parsers keyval keyval test go loc found a possible issue in at the below snippet of go code triggered static analysis which searches for goroutines and or defer statements which capture loop variables click here to show the line s of go which triggered the analyzer go for tst range tsts p parser p init options numparsers filterregex tst filterstring invertfilter tst invertfilter lines make chan string send make chan event event send input into lines in a goroutine then close the lines channel go func for line range tst lines lines line close lines read from the send channel and see if we got back what we expected wg sync waitgroup wg add go func var counter int for range send counter if counter tst expectedevents t errorf expected d messages out the send channel got d n tst expectedevents counter wg done p processlines lines send nil close send wg wait commit id
1
300,511
25,973,298,452
IssuesEvent
2022-12-19 13:02:47
DucTrann1310/FeedbackOnline
https://api.github.com/repos/DucTrann1310/FeedbackOnline
opened
[BugID_88]_FUNC_Export results_The exported results file contains only the results of the first topic
bug Open Priority_Medium Severity_Medium Fun_Incomplete Function Acceptance Testing
Precondition : - Admin is on the Export results page - one or more trainees have submitted feedback for the selected topic Step: 1. Select an item in the [Class] combobox 2. Check the checkbox of the Topic mentioned in the precondition 3. Click the [Export] button Actual output: the exported results file contains only the results of the first topic Expected output: the exported results file contains the checked topics -------------------------
1.0
[BugID_88]_FUNC_Export results_The exported results file contains only the results of the first topic - Precondition : - Admin is on the Export results page - one or more trainees have submitted feedback for the selected topic Step: 1. Select an item in the [Class] combobox 2. Check the checkbox of the Topic mentioned in the precondition 3. Click the [Export] button Actual output: the exported results file contains only the results of the first topic Expected output: the exported results file contains the checked topics -------------------------
test
func export results the exported results file contains only the results of the first topic precondition admin is on the export results page one or more trainees have submitted feedback for the selected topic step select an item in the combobox check the checkbox of the topic mentioned in the precondition click the button actual output the exported results file contains only the results of the first topic expected output the exported results file contains the checked topics
1
157,304
12,369,506,040
IssuesEvent
2020-05-18 15:21:14
kami-blue/client
https://api.github.com/repos/kami-blue/client
closed
ElytraReplace won't replace broken elytra midair
03-testing-pending bug/module enhancement/module
ElytraReplace won't automatically replace your elytra if it breaks at the moment.
1.0
ElytraReplace won't replace broken elytra midair - ElytraReplace won't automatically replace your elytra if it breaks at the moment.
test
elytrareplace won t replace broken elytra midair elytrareplace won t automatically replace your elytra if it breaks at the moment
1
235,288
19,320,629,066
IssuesEvent
2021-12-14 04:52:13
elisebeall/rancidtomatillos
https://api.github.com/repos/elisebeall/rancidtomatillos
closed
[🧑‍💻] Testing : Movie Detail Page Flow : Trailer
testing
😊 When a user views the page, they should be shown a trailer. 😊 When more than one trailer is available, they should be given buttons to select another trailer. ☹️ When no trailers are available, a message should be displayed.
1.0
[🧑‍💻] Testing : Movie Detail Page Flow : Trailer - 😊 When a user views the page, they should be shown a trailer. 😊 When more than one trailer is available, they should be given buttons to select another trailer. ☹️ When no trailers are available, a message should be displayed.
test
testing movie detail page flow trailer 😊 when a user views the page they should be shown a trailer 😊 when more than one trailer is available they should be given buttons to select another trailer ☹️ when no trailers are available a message should be displayed
1
147,081
11,771,123,850
IssuesEvent
2020-03-15 22:24:56
GTNewHorizons/NewHorizons
https://api.github.com/repos/GTNewHorizons/NewHorizons
closed
[BartWorks] Electric Implosion Compressor recipe missing: Pile of Neutrons
BartWorks FixedInDev need to be tested
#### Which modpack version are you using? 2.0.8.3 # #### If in multiplayer; On which server does this happen? My private server # #### What did you try to do, and what did you expect to happen? Trying to compress some pile of neutrons from Electric Implosion Compressor. This should work because the normal Implosion Compressor has the recipe to process this. # #### What happened instead? (Attach screenshots if needed) It doesn't work. ![image](https://user-images.githubusercontent.com/6583339/74414538-7b0d8a80-4df6-11ea-9694-e44715a38a4b.png) ![image](https://user-images.githubusercontent.com/6583339/74414576-89f43d00-4df6-11ea-9182-02fc67d51b35.png) # #### What do you suggest instead/what changes do you propose? Please fix it.
1.0
[BartWorks] Electric Implosion Compressor recipe missing: Pile of Neutrons - #### Which modpack version are you using? 2.0.8.3 # #### If in multiplayer; On which server does this happen? My private server # #### What did you try to do, and what did you expect to happen? Trying to compress some pile of neutrons from Electric Implosion Compressor. This should work because the normal Implosion Compressor has the recipe to process this. # #### What happened instead? (Attach screenshots if needed) It doesn't work. ![image](https://user-images.githubusercontent.com/6583339/74414538-7b0d8a80-4df6-11ea-9694-e44715a38a4b.png) ![image](https://user-images.githubusercontent.com/6583339/74414576-89f43d00-4df6-11ea-9182-02fc67d51b35.png) # #### What do you suggest instead/what changes do you propose? Please fix it.
test
electric implosion compressor recipe missing pile of neutrons which modpack version are you using if in multiplayer on which server does this happen my private server what did you try to do and what did you expect to happen trying to compress some pile of neutrons from electric implosion compressor this should work because the normal implosion compressor has the recipe to process this what happened instead attach screenshots if needed it doesn t work what do you suggest instead what changes do you propose please fix it
1
4,208
20,739,170,815
IssuesEvent
2022-03-14 16:08:23
backdrop-ops/contrib
https://api.github.com/repos/backdrop-ops/contrib
closed
Maintainer change request: insert module
Maintainer change request
**Thank you for supporting the Backdrop community!** Please note the procedure to add a new maintainer to a project: 1. Please join the Backdrop Contrib group (if you have not already) by submitting [an application](https://github.com/backdrop-ops/contrib/issues/new?assignees=klonos&labels=Maintainer+application&template=application-to-join-the-contrib-group.md&title=Application+to+join+the+Contrib+Group%3A). 2. File an issue in the current project's issue queue offering to help maintain that project. 3. Create a PR for that project that adds your name to the README.md file in the list of maintainers. <!-- The project maintainer, or a backdrop-contrib administrator, will merge this PR to accept your offer of help. --> 4. If the project does not have a listed maintainer, or if a current maintainer does not respond within 2 weeks, create *this issue* to take over the project. **Please include a link to the issue you filed for the project.** https://github.com/backdrop-contrib/insert/issues/17 **Please include a link to the PR that adds your name to the README.md file.** https://github.com/backdrop-contrib/insert/pull/18 <!-- After confirming the project has been abandoned for a period of 2 weeks or more, a Backdrop Contrib administrator will add your name to the list of maintainers in that project's README.md file, and grant you admin access to the project. -->
True
Maintainer change request: insert module - **Thank you for supporting the Backdrop community!** Please note the procedure to add a new maintainer to a project: 1. Please join the Backdrop Contrib group (if you have not already) by submitting [an application](https://github.com/backdrop-ops/contrib/issues/new?assignees=klonos&labels=Maintainer+application&template=application-to-join-the-contrib-group.md&title=Application+to+join+the+Contrib+Group%3A). 2. File an issue in the current project's issue queue offering to help maintain that project. 3. Create a PR for that project that adds your name to the README.md file in the list of maintainers. <!-- The project maintainer, or a backdrop-contrib administrator, will merge this PR to accept your offer of help. --> 4. If the project does not have a listed maintainer, or if a current maintainer does not respond within 2 weeks, create *this issue* to take over the project. **Please include a link to the issue you filed for the project.** https://github.com/backdrop-contrib/insert/issues/17 **Please include a link to the PR that adds your name to the README.md file.** https://github.com/backdrop-contrib/insert/pull/18 <!-- After confirming the project has been abandoned for a period of 2 weeks or more, a Backdrop Contrib administrator will add your name to the list of maintainers in that project's README.md file, and grant you admin access to the project. -->
non_test
maintainer change request insert module thank you for supporting the backdrop community please note the procedure to add a new maintainer to a project please join the backdrop contrib group if you have not already by submitting file an issue in the current project s issue queue offering to help maintain that project create a pr for that project that adds your name to the readme md file in the list of maintainers the project maintainer or a backdrop contrib administrator will merge this pr to accept your offer of help if the project does not have a listed maintainer or if a current maintainer does not respond within weeks create this issue to take over the project please include a link to the issue you filed for the project please include a link to the pr that adds your name to the readme md file after confirming the project has been abandoned for a period of weeks or more a backdrop contrib administrator will add your name to the list of maintainers in that project s readme md file and grant you admin access to the project
0
167,513
6,340,210,091
IssuesEvent
2017-07-27 10:15:49
status-im/status-go
https://api.github.com/repos/status-im/status-go
closed
Geth 1.5.9 introduces HD wallets, consider switching to their implementation
advanced high-priority
- https://github.com/ethereum/go-ethereum/releases/tag/v1.5.9 - we need to review their implementation, and probably sit out couple of releases, before attempting to switch to their wallet implementation (since account and account manager API changes a lot atm, we will chase the moving target otherwise)
1.0
Geth 1.5.9 introduces HD wallets, consider switching to their implementation - - https://github.com/ethereum/go-ethereum/releases/tag/v1.5.9 - we need to review their implementation, and probably sit out couple of releases, before attempting to switch to their wallet implementation (since account and account manager API changes a lot atm, we will chase the moving target otherwise)
non_test
geth introduces hd wallets consider switching to their implementation we need to review their implementation and probably sit out couple of releases before attempting to switch to their wallet implementation since account and account manager api changes a lot atm we will chase the moving target otherwise
0
334,333
10,140,953,013
IssuesEvent
2019-08-03 09:18:15
jenkins-x/jx
https://api.github.com/repos/jenkins-x/jx
closed
jx create pullrequest is DOA
area/git area/git-providers kind/bug lifecycle/rotten priority/backlog
### Summary `jx create pullrequest` does not create a pull request but errors out. ### Steps to reproduce the behavior ``` hub clone org/repo hub fork username git checkout -b somebranch echo hello world > file.txt git add file.txt git commit -a -m "added file jx create pullrequest ``` answer the questions.... ### Jx version The output of `jx version` is: ``` NAME VERSION jx 1.3.604 Kubernetes cluster v1.9.7-gke.11 kubectl v1.10.3 git git version 2.16.2.windows.1 Operating System Windows 10 Pro 1803 build 17134 ``` ### Kubernetes cluster What kind of Kubernetes cluster are you using & how did you create it? GCP but this is local so... ### Operating system / Environment as above ### Expected behavior a PR is created using the PR template provided by the github repo ### Actual behavior ``` C:\workarea\source\github\blah>jx create pullrequest ? user name to use for authenticating with git issues jtnord ? PullRequest title: added file error: POST https://api.github.com/repos/cloudbees/astro/pulls: 422 Validation Failed [{Resource:PullRequest Field:head Code:invalid Message:}] ```
1.0
jx create pullrequest is DOA - ### Summary `jx create pullrequest` does not create a pull request but errors out. ### Steps to reproduce the behavior ``` hub clone org/repo hub fork username git checkout -b somebranch echo hello world > file.txt git add file.txt git commit -a -m "added file jx create pullrequest ``` answer the questions.... ### Jx version The output of `jx version` is: ``` NAME VERSION jx 1.3.604 Kubernetes cluster v1.9.7-gke.11 kubectl v1.10.3 git git version 2.16.2.windows.1 Operating System Windows 10 Pro 1803 build 17134 ``` ### Kubernetes cluster What kind of Kubernetes cluster are you using & how did you create it? GCP but this is local so... ### Operating system / Environment as above ### Expected behavior a PR is created using the PR template provided by the github repo ### Actual behavior ``` C:\workarea\source\github\blah>jx create pullrequest ? user name to use for authenticating with git issues jtnord ? PullRequest title: added file error: POST https://api.github.com/repos/cloudbees/astro/pulls: 422 Validation Failed [{Resource:PullRequest Field:head Code:invalid Message:}] ```
non_test
jx create pullrequest is doa summary jx create pullrequest does not create a pull request but errors out steps to reproduce the behavior hub clone org repo hub fork username git checkout b somebranch echo hello world file txt git add file txt git commit a m added file jx create pullrequest answer the questions jx version the output of jx version is name version jx   kubernetes cluster  gke  kubectl   git  version windows  operating system  pro build  kubernetes cluster what kind of kubernetes cluster are you using how did you create it gcp but this is local so operating system environment as above expected behavior a pr is created using the pr template provided by the github repo actual behavior c workarea source github blah jx create pullrequest user name to use for authenticating with git issues jtnord pullrequest title added file error post validation failed
0
8,760
3,005,712,969
IssuesEvent
2015-07-27 03:21:33
EasyRPG/Player
https://api.github.com/repos/EasyRPG/Player
closed
Regression: Warrior Saga game hangs on introduction scenes
Hang Interpreter Testcase available
Test case: [Warrior Saga](http://rmaker.ru/games/Warrior_Saga.rar) introduction itself. Walk until the end of the map on the right side, step across the world map then the cutscene will return to the previous map, where it will hang.
1.0
Regression: Warrior Saga game hangs on introduction scenes - Test case: [Warrior Saga](http://rmaker.ru/games/Warrior_Saga.rar) introduction itself. Walk until the end of the map on the right side, step across the world map then the cutscene will return to the previous map, where it will hang.
test
regression warrior saga game hangs on introduction scenes test case introduction itself walk until the end of the map on the right side step across the world map then the cutscene will return to the previous map where it will hang
1
85,085
7,960,738,560
IssuesEvent
2018-07-13 08:23:25
cockroachdb/cockroach
https://api.github.com/repos/cockroachdb/cockroach
closed
github.com/cockroachdb/cockroach/pkg/ccl/importccl: _comma_delim failed under stress
C-test-failure O-robot
SHA: https://github.com/cockroachdb/cockroach/commits/f818c4c3b946c40839921c72fc1322fb3b385ee6 Parameters: ``` TAGS= GOFLAGS=-race ``` Failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=772409&tab=buildLog ``` === RUN TestImportData/PGCOPY:_comma_delim I180711 06:32:18.390334 697 sql/lease.go:312 [n1,client=127.0.0.1:52360,user=root] publish: descID=73 (t) version=2 mtime=2018-07-11 06:32:18.375179631 +0000 UTC I180711 06:32:18.473350 697 sql/event_log.go:126 [n1,client=127.0.0.1:52360,user=root] Event: "drop_table", target: 73, info: {TableName:d.public.t Statement:DROP TABLE IF EXISTS d.public.t User:root CascadeDroppedViews:[]} I180711 06:32:18.572602 697 sql/lease.go:286 publish (1 leases): desc=[{t 73 1}] I180711 06:32:18.683499 697 sql/lease.go:312 [n1,client=127.0.0.1:52360,user=root,scExec=] publish: descID=73 (t) version=3 mtime=2018-07-11 06:32:18.682433627 +0000 UTC I180711 06:32:18.995688 2831 ccl/importccl/read_import_proc.go:82 [import-distsql,n1] could not fetch file size; falling back to per-file progress: bad ContentLength: -1 I180711 06:32:19.259474 2808 ccl/importccl/read_import_proc.go:82 [import-distsql,n1] could not fetch file size; falling back to per-file progress: bad ContentLength: -1 I180711 06:32:19.347659 2861 storage/replica_command.go:275 [n1,s1,r46/1:/{Table/74-Max}] initiating a split of this range at key /Table/75 [r47] W180711 06:32:19.382532 2884 storage/intent_resolver.go:642 [n1,s1] failed to push during intent resolution: failed to push "split" id=2c0e2968 key=/Local/Range/Table/74/RangeDescriptor rw=true pri=0.01253206 iso=SERIALIZABLE stat=PENDING epo=0 ts=1531290739.347775158,0 orig=1531290739.347775158,0 max=1531290739.347775158,0 wto=false rop=false seq=1 I180711 06:32:19.388619 250 storage/replica_proposal.go:203 [n1,s1,r46/1:/{Table/74-Max}] new range lease repl=(n1,s1):1 seq=3 start=1531290721.658934057,0 epo=1 pro=1531290721.660350222,0 following repl=(n1,s1):1 seq=3 start=1531290721.658934057,0 epo=1 
pro=1531290721.660350222,0 ```
1.0
github.com/cockroachdb/cockroach/pkg/ccl/importccl: _comma_delim failed under stress - SHA: https://github.com/cockroachdb/cockroach/commits/f818c4c3b946c40839921c72fc1322fb3b385ee6 Parameters: ``` TAGS= GOFLAGS=-race ``` Failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=772409&tab=buildLog ``` === RUN TestImportData/PGCOPY:_comma_delim I180711 06:32:18.390334 697 sql/lease.go:312 [n1,client=127.0.0.1:52360,user=root] publish: descID=73 (t) version=2 mtime=2018-07-11 06:32:18.375179631 +0000 UTC I180711 06:32:18.473350 697 sql/event_log.go:126 [n1,client=127.0.0.1:52360,user=root] Event: "drop_table", target: 73, info: {TableName:d.public.t Statement:DROP TABLE IF EXISTS d.public.t User:root CascadeDroppedViews:[]} I180711 06:32:18.572602 697 sql/lease.go:286 publish (1 leases): desc=[{t 73 1}] I180711 06:32:18.683499 697 sql/lease.go:312 [n1,client=127.0.0.1:52360,user=root,scExec=] publish: descID=73 (t) version=3 mtime=2018-07-11 06:32:18.682433627 +0000 UTC I180711 06:32:18.995688 2831 ccl/importccl/read_import_proc.go:82 [import-distsql,n1] could not fetch file size; falling back to per-file progress: bad ContentLength: -1 I180711 06:32:19.259474 2808 ccl/importccl/read_import_proc.go:82 [import-distsql,n1] could not fetch file size; falling back to per-file progress: bad ContentLength: -1 I180711 06:32:19.347659 2861 storage/replica_command.go:275 [n1,s1,r46/1:/{Table/74-Max}] initiating a split of this range at key /Table/75 [r47] W180711 06:32:19.382532 2884 storage/intent_resolver.go:642 [n1,s1] failed to push during intent resolution: failed to push "split" id=2c0e2968 key=/Local/Range/Table/74/RangeDescriptor rw=true pri=0.01253206 iso=SERIALIZABLE stat=PENDING epo=0 ts=1531290739.347775158,0 orig=1531290739.347775158,0 max=1531290739.347775158,0 wto=false rop=false seq=1 I180711 06:32:19.388619 250 storage/replica_proposal.go:203 [n1,s1,r46/1:/{Table/74-Max}] new range lease repl=(n1,s1):1 seq=3 start=1531290721.658934057,0 epo=1 
pro=1531290721.660350222,0 following repl=(n1,s1):1 seq=3 start=1531290721.658934057,0 epo=1 pro=1531290721.660350222,0 ```
test
github com cockroachdb cockroach pkg ccl importccl comma delim failed under stress sha parameters tags goflags race failed test run testimportdata pgcopy comma delim sql lease go publish descid t version mtime utc sql event log go event drop table target info tablename d public t statement drop table if exists d public t user root cascadedroppedviews sql lease go publish leases desc sql lease go publish descid t version mtime utc ccl importccl read import proc go could not fetch file size falling back to per file progress bad contentlength ccl importccl read import proc go could not fetch file size falling back to per file progress bad contentlength storage replica command go initiating a split of this range at key table storage intent resolver go failed to push during intent resolution failed to push split id key local range table rangedescriptor rw true pri iso serializable stat pending epo ts orig max wto false rop false seq storage replica proposal go new range lease repl seq start epo pro following repl seq start epo pro
1
294,451
25,372,181,716
IssuesEvent
2022-11-21 11:23:51
hazelcast/hazelcast
https://api.github.com/repos/hazelcast/hazelcast
closed
com.hazelcast.jet.impl.connector.WriteJdbcPTest [HZ-1712]
Type: Test-Failure Source: Internal Module: Jet to-jira Team: Platform
_master_ (commit 5bcb6c0fa7d5f3ea4b5f8ad912d87a6be8fb60f8) Failed on IbmJDK8 in FIPS mode: https://jenkins.hazelcast.com/view/Official%20Builds/job/Hazelcast-master-IbmJDK8-fips-nightly/451/testReport/com.hazelcast.jet.impl.connector/WriteJdbcPTest/ Following tests failed: - `test_transactional_withRestarts_graceful_atLeastOnce` - `testFailJob_withNonTransientException` - `testFailJob_whenGetConnection_withNonTransientExceptionCause` - `testFailJob_whenGetConnection_withNonTransientException` <details><summary>Stacktrace for test_transactional_withRestarts_graceful_atLeastOnce:</summary> ``` java.lang.AssertionError: jobId=087d-5fa1-ed8e-0001 expected:<RUNNING> but was:<COMPLETING> at org.junit.Assert.fail(Assert.java:89) at org.junit.Assert.failNotEquals(Assert.java:835) at org.junit.Assert.assertEquals(Assert.java:120) at com.hazelcast.jet.core.JetTestSupport.lambda$assertJobStatusEventually$2(JetTestSupport.java:306) at com.hazelcast.jet.core.JetTestSupport$$Lambda$4871/00000000BC264140.run(Unknown Source) at com.hazelcast.test.HazelcastTestSupport.assertTrueEventually(HazelcastTestSupport.java:1236) at com.hazelcast.test.HazelcastTestSupport.assertTrueEventually(HazelcastTestSupport.java:1253) at com.hazelcast.jet.core.JetTestSupport.assertJobStatusEventually(JetTestSupport.java:305) at com.hazelcast.jet.core.JetTestSupport.assertJobStatusEventually(JetTestSupport.java:235) at com.hazelcast.jet.core.JetTestSupport.assertJobRunningEventually(JetTestSupport.java:282) at com.hazelcast.jet.impl.connector.SinkStressTestUtil.test_withRestarts(SinkStressTestUtil.java:93) at com.hazelcast.jet.impl.connector.WriteJdbcPTest.test_transactional_withRestarts(WriteJdbcPTest.java:334) at com.hazelcast.jet.impl.connector.WriteJdbcPTest.test_transactional_withRestarts_graceful_atLeastOnce(WriteJdbcPTest.java:311) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:90) at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55) at java.lang.reflect.Method.invoke(Method.java:508) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at com.hazelcast.test.FailOnTimeoutStatement$CallableStatement.call(FailOnTimeoutStatement.java:115) at com.hazelcast.test.FailOnTimeoutStatement$CallableStatement.call(FailOnTimeoutStatement.java:107) at java.util.concurrent.FutureTask.run(FutureTask.java:277) at java.lang.Thread.run(Thread.java:822) ``` </details> <details><summary>Stacktrace for testFailJob_withNonTransientException, testFailJob_whenGetConnection_withNonTransientExceptionCause and testFailJob_whenGetConnection_withNonTransientException:</summary> ``` org.postgresql.util.PSQLException: FATAL: sorry, too many clients already at org.postgresql.core.v3.ConnectionFactoryImpl.doAuthentication(ConnectionFactoryImpl.java:646) at org.postgresql.core.v3.ConnectionFactoryImpl.tryConnect(ConnectionFactoryImpl.java:180) at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:235) at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:49) at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:223) at org.postgresql.Driver.makeConnection(Driver.java:402) at org.postgresql.Driver.connect(Driver.java:261) at java.sql.DriverManager.getConnection(DriverManager.java:675) at java.sql.DriverManager.getConnection(DriverManager.java:258) at org.postgresql.ds.common.BaseDataSource.getConnection(BaseDataSource.java:103) at org.postgresql.ds.common.BaseDataSource.getConnection(BaseDataSource.java:87) at com.hazelcast.jet.impl.connector.WriteJdbcPTest.setup(WriteJdbcPTest.java:101) 
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:90) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55) at java.lang.reflect.Method.invoke(Method.java:508) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at com.hazelcast.test.AbstractHazelcastClassRunner$ThreadDumpAwareRunAfters.evaluate(AbstractHazelcastClassRunner.java:365) at com.hazelcast.test.DumpBuildInfoOnFailureRule$1.evaluate(DumpBuildInfoOnFailureRule.java:37) at com.hazelcast.test.jitter.JitterRule$1.evaluate(JitterRule.java:104) at com.hazelcast.test.metrics.MetricsRule$1.evaluate(MetricsRule.java:63) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at com.hazelcast.test.HazelcastSerialClassRunner.runChild(HazelcastSerialClassRunner.java:50) at com.hazelcast.test.HazelcastSerialClassRunner.runChild(HazelcastSerialClassRunner.java:29) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at com.hazelcast.test.AfterClassesStatement.evaluate(AfterClassesStatement.java:39) at org.testcontainers.containers.FailureDetectingExternalResource$1.evaluate(FailureDetectingExternalResource.java:29) at com.hazelcast.test.OverridePropertyRule$1.evaluate(OverridePropertyRule.java:66) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:277) at java.lang.Thread.run(Thread.java:822) ``` </details> Standard output can be found here - https://console.aws.amazon.com/s3/buckets/j-artifacts/Hazelcast-master-IbmJDK8-fips-nightly/451/
1.0
com.hazelcast.jet.impl.connector.WriteJdbcPTest [HZ-1712] - _master_ (commit 5bcb6c0fa7d5f3ea4b5f8ad912d87a6be8fb60f8) Failed on IbmJDK8 in FIPS mode: https://jenkins.hazelcast.com/view/Official%20Builds/job/Hazelcast-master-IbmJDK8-fips-nightly/451/testReport/com.hazelcast.jet.impl.connector/WriteJdbcPTest/ Following tests failed: - `test_transactional_withRestarts_graceful_atLeastOnce` - `testFailJob_withNonTransientException` - `testFailJob_whenGetConnection_withNonTransientExceptionCause` - `testFailJob_whenGetConnection_withNonTransientException` <details><summary>Stacktrace for test_transactional_withRestarts_graceful_atLeastOnce:</summary> ``` java.lang.AssertionError: jobId=087d-5fa1-ed8e-0001 expected:<RUNNING> but was:<COMPLETING> at org.junit.Assert.fail(Assert.java:89) at org.junit.Assert.failNotEquals(Assert.java:835) at org.junit.Assert.assertEquals(Assert.java:120) at com.hazelcast.jet.core.JetTestSupport.lambda$assertJobStatusEventually$2(JetTestSupport.java:306) at com.hazelcast.jet.core.JetTestSupport$$Lambda$4871/00000000BC264140.run(Unknown Source) at com.hazelcast.test.HazelcastTestSupport.assertTrueEventually(HazelcastTestSupport.java:1236) at com.hazelcast.test.HazelcastTestSupport.assertTrueEventually(HazelcastTestSupport.java:1253) at com.hazelcast.jet.core.JetTestSupport.assertJobStatusEventually(JetTestSupport.java:305) at com.hazelcast.jet.core.JetTestSupport.assertJobStatusEventually(JetTestSupport.java:235) at com.hazelcast.jet.core.JetTestSupport.assertJobRunningEventually(JetTestSupport.java:282) at com.hazelcast.jet.impl.connector.SinkStressTestUtil.test_withRestarts(SinkStressTestUtil.java:93) at com.hazelcast.jet.impl.connector.WriteJdbcPTest.test_transactional_withRestarts(WriteJdbcPTest.java:334) at com.hazelcast.jet.impl.connector.WriteJdbcPTest.test_transactional_withRestarts_graceful_atLeastOnce(WriteJdbcPTest.java:311) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:90) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55) at java.lang.reflect.Method.invoke(Method.java:508) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at com.hazelcast.test.FailOnTimeoutStatement$CallableStatement.call(FailOnTimeoutStatement.java:115) at com.hazelcast.test.FailOnTimeoutStatement$CallableStatement.call(FailOnTimeoutStatement.java:107) at java.util.concurrent.FutureTask.run(FutureTask.java:277) at java.lang.Thread.run(Thread.java:822) ``` </details> <details><summary>Stacktrace for testFailJob_withNonTransientException, testFailJob_whenGetConnection_withNonTransientExceptionCause and testFailJob_whenGetConnection_withNonTransientException:</summary> ``` org.postgresql.util.PSQLException: FATAL: sorry, too many clients already at org.postgresql.core.v3.ConnectionFactoryImpl.doAuthentication(ConnectionFactoryImpl.java:646) at org.postgresql.core.v3.ConnectionFactoryImpl.tryConnect(ConnectionFactoryImpl.java:180) at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:235) at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:49) at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:223) at org.postgresql.Driver.makeConnection(Driver.java:402) at org.postgresql.Driver.connect(Driver.java:261) at java.sql.DriverManager.getConnection(DriverManager.java:675) at java.sql.DriverManager.getConnection(DriverManager.java:258) at org.postgresql.ds.common.BaseDataSource.getConnection(BaseDataSource.java:103) at org.postgresql.ds.common.BaseDataSource.getConnection(BaseDataSource.java:87) 
at com.hazelcast.jet.impl.connector.WriteJdbcPTest.setup(WriteJdbcPTest.java:101) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:90) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55) at java.lang.reflect.Method.invoke(Method.java:508) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at com.hazelcast.test.AbstractHazelcastClassRunner$ThreadDumpAwareRunAfters.evaluate(AbstractHazelcastClassRunner.java:365) at com.hazelcast.test.DumpBuildInfoOnFailureRule$1.evaluate(DumpBuildInfoOnFailureRule.java:37) at com.hazelcast.test.jitter.JitterRule$1.evaluate(JitterRule.java:104) at com.hazelcast.test.metrics.MetricsRule$1.evaluate(MetricsRule.java:63) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at com.hazelcast.test.HazelcastSerialClassRunner.runChild(HazelcastSerialClassRunner.java:50) at com.hazelcast.test.HazelcastSerialClassRunner.runChild(HazelcastSerialClassRunner.java:29) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at com.hazelcast.test.AfterClassesStatement.evaluate(AfterClassesStatement.java:39) at org.testcontainers.containers.FailureDetectingExternalResource$1.evaluate(FailureDetectingExternalResource.java:29) at com.hazelcast.test.OverridePropertyRule$1.evaluate(OverridePropertyRule.java:66) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:277) at java.lang.Thread.run(Thread.java:822) ``` </details> Standard output can be found here - https://console.aws.amazon.com/s3/buckets/j-artifacts/Hazelcast-master-IbmJDK8-fips-nightly/451/
test
com hazelcast jet impl connector writejdbcptest master commit failed on in fips mode following tests failed test transactional withrestarts graceful atleastonce testfailjob withnontransientexception testfailjob whengetconnection withnontransientexceptioncause testfailjob whengetconnection withnontransientexception stacktrace for test transactional withrestarts graceful atleastonce java lang assertionerror jobid expected but was at org junit assert fail assert java at org junit assert failnotequals assert java at org junit assert assertequals assert java at com hazelcast jet core jettestsupport lambda assertjobstatuseventually jettestsupport java at com hazelcast jet core jettestsupport lambda run unknown source at com hazelcast test hazelcasttestsupport asserttrueeventually hazelcasttestsupport java at com hazelcast test hazelcasttestsupport asserttrueeventually hazelcasttestsupport java at com hazelcast jet core jettestsupport assertjobstatuseventually jettestsupport java at com hazelcast jet core jettestsupport assertjobstatuseventually jettestsupport java at com hazelcast jet core jettestsupport assertjobrunningeventually jettestsupport java at com hazelcast jet impl connector sinkstresstestutil test withrestarts sinkstresstestutil java at com hazelcast jet impl connector writejdbcptest test transactional withrestarts writejdbcptest java at com hazelcast jet impl connector writejdbcptest test transactional withrestarts graceful atleastonce writejdbcptest java at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org junit runners model frameworkmethod runreflectivecall frameworkmethod java at org junit internal runners model reflectivecallable run reflectivecallable java at org junit runners model frameworkmethod invokeexplosively frameworkmethod java at 
org junit internal runners statements invokemethod evaluate invokemethod java at com hazelcast test failontimeoutstatement callablestatement call failontimeoutstatement java at com hazelcast test failontimeoutstatement callablestatement call failontimeoutstatement java at java util concurrent futuretask run futuretask java at java lang thread run thread java stacktrace for testfailjob withnontransientexception testfailjob whengetconnection withnontransientexceptioncause and testfailjob whengetconnection withnontransientexception org postgresql util psqlexception fatal sorry too many clients already at org postgresql core connectionfactoryimpl doauthentication connectionfactoryimpl java at org postgresql core connectionfactoryimpl tryconnect connectionfactoryimpl java at org postgresql core connectionfactoryimpl openconnectionimpl connectionfactoryimpl java at org postgresql core connectionfactory openconnection connectionfactory java at org postgresql jdbc pgconnection pgconnection java at org postgresql driver makeconnection driver java at org postgresql driver connect driver java at java sql drivermanager getconnection drivermanager java at java sql drivermanager getconnection drivermanager java at org postgresql ds common basedatasource getconnection basedatasource java at org postgresql ds common basedatasource getconnection basedatasource java at com hazelcast jet impl connector writejdbcptest setup writejdbcptest java at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org junit runners model frameworkmethod runreflectivecall frameworkmethod java at org junit internal runners model reflectivecallable run reflectivecallable java at org junit runners model frameworkmethod invokeexplosively frameworkmethod java at org junit internal runners statements 
runbefores invokemethod runbefores java at org junit internal runners statements runbefores evaluate runbefores java at com hazelcast test abstracthazelcastclassrunner threaddumpawarerunafters evaluate abstracthazelcastclassrunner java at com hazelcast test dumpbuildinfoonfailurerule evaluate dumpbuildinfoonfailurerule java at com hazelcast test jitter jitterrule evaluate jitterrule java at com hazelcast test metrics metricsrule evaluate metricsrule java at org junit runners parentrunner evaluate parentrunner java at org junit runners evaluate java at org junit runners parentrunner runleaf parentrunner java at org junit runners runchild java at com hazelcast test hazelcastserialclassrunner runchild hazelcastserialclassrunner java at com hazelcast test hazelcastserialclassrunner runchild hazelcastserialclassrunner java at org junit runners parentrunner run parentrunner java at org junit runners parentrunner schedule parentrunner java at org junit runners parentrunner runchildren parentrunner java at org junit runners parentrunner access parentrunner java at org junit runners parentrunner evaluate parentrunner java at org junit internal runners statements runbefores evaluate runbefores java at org junit internal runners statements runafters evaluate runafters java at com hazelcast test afterclassesstatement evaluate afterclassesstatement java at org testcontainers containers failuredetectingexternalresource evaluate failuredetectingexternalresource java at com hazelcast test overridepropertyrule evaluate overridepropertyrule java at org junit internal runners statements failontimeout callablestatement call failontimeout java at org junit internal runners statements failontimeout callablestatement call failontimeout java at java util concurrent futuretask run futuretask java at java lang thread run thread java standard output can be found here
1
66,924
8,058,248,250
IssuesEvent
2018-08-02 17:52:16
unidoscontraacorrupcao/ej-server
https://api.github.com/repos/unidoscontraacorrupcao/ej-server
closed
criar estilo com cores diferentes pros toasts
design
"poderiamos pensar cores diferentes para os toasts que aparecem apos algumas açoes? (aqueles cards rosa quando acontece algum erro por exemplo ou quando atualiza o perfil) pq os cards sem diferença de estilo vai dar impressao que tudo e erro" toast: ![photo_2018-07-23_15-41-01](https://user-images.githubusercontent.com/6249788/43096299-010fc5c2-8e8f-11e8-8049-ac2f678b5bfc.jpg)
1.0
criar estilo com cores diferentes pros toasts - "poderiamos pensar cores diferentes para os toasts que aparecem apos algumas açoes? (aqueles cards rosa quando acontece algum erro por exemplo ou quando atualiza o perfil) pq os cards sem diferença de estilo vai dar impressao que tudo e erro" toast: ![photo_2018-07-23_15-41-01](https://user-images.githubusercontent.com/6249788/43096299-010fc5c2-8e8f-11e8-8049-ac2f678b5bfc.jpg)
non_test
criar estilo com cores diferentes pros toasts poderiamos pensar cores diferentes para os toasts que aparecem apos algumas açoes aqueles cards rosa quando acontece algum erro por exemplo ou quando atualiza o perfil pq os cards sem diferença de estilo vai dar impressao que tudo e erro toast
0
219,160
17,068,604,663
IssuesEvent
2021-07-07 10:25:53
openshift/odo
https://api.github.com/repos/openshift/odo
closed
Use PSI for running the tests currently running on Travis
area/infra area/testing lifecycle/stale priority/High
Move the tests currently running on Travis to PSI cluster. These are mainly OpenShift 3.11 and Kubernetes tests. - [x] Kubernetes tests on PSI #4287 - [ ] OpenShift 3.x tests on PSI #4288 /area testing /assign @mohammedzee1000
1.0
Use PSI for running the tests currently running on Travis - Move the tests currently running on Travis to PSI cluster. These are mainly OpenShift 3.11 and Kubernetes tests. - [x] Kubernetes tests on PSI #4287 - [ ] OpenShift 3.x tests on PSI #4288 /area testing /assign @mohammedzee1000
test
use psi for running the tests currently running on travis move the tests currently running on travis to psi cluster these are mainly openshift and kubernetes tests kubernetes tests on psi openshift x tests on psi area testing assign
1
233,635
17,873,541,711
IssuesEvent
2021-09-06 20:43:57
racklet/kicad-rs
https://api.github.com/repos/racklet/kicad-rs
opened
Explain the "pipeline" formed by chaining the tools
documentation
As noted in #5, an explanation on how the tools in this repository can be chained (evaluator -> parser -> classifier -> \<ordering tool\> -> \<visualization tool\>) to form what has unofficially been called the "pipeline" is very much necessary to have in the README (and maybe a separate detailed document as well). A supportive visualization using the diagrams.net rendering integration would also be very helpful, with the format of the data (e.g. KiCad schematic, YAML file, Markdown document) visible between each stage.
1.0
Explain the "pipeline" formed by chaining the tools - As noted in #5, an explanation on how the tools in this repository can be chained (evaluator -> parser -> classifier -> \<ordering tool\> -> \<visualization tool\>) to form what has unofficially been called the "pipeline" is very much necessary to have in the README (and maybe a separate detailed document as well). A supportive visualization using the diagrams.net rendering integration would also be very helpful, with the format of the data (e.g. KiCad schematic, YAML file, Markdown document) visible between each stage.
non_test
explain the pipeline formed by chaining the tools as noted in an explanation on how the tools in this repository can be chained evaluator parser classifier to form what has unofficially been called the pipeline is very much necessary to have in the readme and maybe a separate detailed document as well a supportive visualization using the diagrams net rendering integration would also be very helpful with the format of the data e g kicad schematic yaml file markdown document visible between each stage
0
429,456
30,055,449,606
IssuesEvent
2023-06-28 06:21:03
quarto-dev/quarto-cli
https://api.github.com/repos/quarto-dev/quarto-cli
closed
Ambiguous attributes in the author/affiliation schemas
documentation
Some useful suggestions from a discussion about authors and affiliations. ### Discussed in https://github.com/quarto-dev/quarto-cli/discussions/5860 <div type='discussions-op-text'> <sup>Originally posted by **arnaudgallou** June 9, 2023</sup> ### Description [Quarto's author and affiliation schemas](https://quarto.org/docs/journals/authors.html) both have an attribute `number`. I find the name of that attribute ambiguous and would recommend using a more specific name (if possible) or explaining what these attributes are in the documentation. I guess most people (myself included) understand the `number` attribute in the author schema as phone number. If this is what it actually is, wouldn't it be better to rename that attribute as `phone-number` instead? Now the affiliation schema also has an attribute `number` and that is very confusing. Are we talking here of another phone number? If so, what's the difference with the one in the author schema? Is one number a personnal phone number and the other an office phone number? Or is it something else, e.g. the number of the office/room where the author works? Not completely related but that would also be great to have a bit more details on the attribute `acknowledgements`. In my field (Natural Sciences), acknowledgements and authors are two distinct things. I've never seen an author who's also acknowledged so having some insights on how one should use that attribute (and what it actually does) would be helpful.</div>
1.0
Ambiguous attributes in the author/affiliation schemas - Some useful suggestions from a discussion about authors and affiliations. ### Discussed in https://github.com/quarto-dev/quarto-cli/discussions/5860 <div type='discussions-op-text'> <sup>Originally posted by **arnaudgallou** June 9, 2023</sup> ### Description [Quarto's author and affiliation schemas](https://quarto.org/docs/journals/authors.html) both have an attribute `number`. I find the name of that attribute ambiguous and would recommend using a more specific name (if possible) or explaining what these attributes are in the documentation. I guess most people (myself included) understand the `number` attribute in the author schema as phone number. If this is what it actually is, wouldn't it be better to rename that attribute as `phone-number` instead? Now the affiliation schema also has an attribute `number` and that is very confusing. Are we talking here of another phone number? If so, what's the difference with the one in the author schema? Is one number a personal phone number and the other an office phone number? Or is it something else, e.g. the number of the office/room where the author works? Not completely related but that would also be great to have a bit more details on the attribute `acknowledgements`. In my field (Natural Sciences), acknowledgements and authors are two distinct things. I've never seen an author who's also acknowledged so having some insights on how one should use that attribute (and what it actually does) would be helpful.</div>
non_test
ambiguous attributes in the author affiliation schemas some useful suggestions from a discussion about authors and affiliations discussed in originally posted by arnaudgallou june description both have an attribute number i find the name of that attribute ambiguous and would recommend using a more specific name if possible or explaining what these attributes are in the documentation i guess most people myself included understand the number attribute in the author schema as phone number if this is what it actually is wouldn t it be better to rename that attribute as phone number instead now the affiliation schema also has an attribute number and that is very confusing are we talking here of another phone number if so what s the difference with the one in the author schema is one number a personal phone number and the other an office phone number or is it something else e g the number of the office room where the author works not completely related but that would also be great to have a bit more details on the attribute acknowledgements in my field natural sciences acknowledgements and authors are two distinct things i ve never seen an author who s also acknowledged so having some insights on how one should use that attribute and what it actually does would be helpful
0
263,111
23,036,617,768
IssuesEvent
2022-07-22 19:34:16
OvercastCommunity/public-competitive
https://api.github.com/repos/OvercastCommunity/public-competitive
closed
[KotF/5v5] Manzikert
contest
### Checklist - [x] I have [pruned](https://pgm.dev/docs/guides/packaging/pruning-chunks) the map - [x] I have agreed with assigning the [CC BY-SA 4.0 license](https://creativecommons.org/licenses/by-sa/4.0/) to this map - [x] I have provided an XML file - [x] I have uploaded the map zip file to a file sharing service # Map Name Manzikert ## Gamemode & Map Description KoTF ## Screenshots ![2022-04-15_00 56 42](https://user-images.githubusercontent.com/8431616/163483481-4bbc19d6-3a19-4c80-9fce-05ee2794c000.png) ![2022-04-15_00 57 17](https://user-images.githubusercontent.com/8431616/163483492-bf1c73de-ad91-4faf-ba12-a82e62b7696e.png) ![2022-04-15_00 57 34](https://user-images.githubusercontent.com/8431616/163483498-a1b6ea14-1ec9-414e-92d9-9b69e628a4ba.png) ## XML https://gist.github.com/Diamyx/a64c9f12b9c158d665ea8dfdf625b537 ## Map Image ![map](https://user-images.githubusercontent.com/8431616/163483815-1df67f28-fcc2-4a04-8dcf-246a71119cbf.png) ## Download Link https://www.dropbox.com/s/6622izjrt20cxjf/manzikert.zip?dl=0
1.0
[KotF/5v5] Manzikert - ### Checklist - [x] I have [pruned](https://pgm.dev/docs/guides/packaging/pruning-chunks) the map - [x] I have agreed with assigning the [CC BY-SA 4.0 license](https://creativecommons.org/licenses/by-sa/4.0/) to this map - [x] I have provided an XML file - [x] I have uploaded the map zip file to a file sharing service # Map Name Manzikert ## Gamemode & Map Description KoTF ## Screenshots ![2022-04-15_00 56 42](https://user-images.githubusercontent.com/8431616/163483481-4bbc19d6-3a19-4c80-9fce-05ee2794c000.png) ![2022-04-15_00 57 17](https://user-images.githubusercontent.com/8431616/163483492-bf1c73de-ad91-4faf-ba12-a82e62b7696e.png) ![2022-04-15_00 57 34](https://user-images.githubusercontent.com/8431616/163483498-a1b6ea14-1ec9-414e-92d9-9b69e628a4ba.png) ## XML https://gist.github.com/Diamyx/a64c9f12b9c158d665ea8dfdf625b537 ## Map Image ![map](https://user-images.githubusercontent.com/8431616/163483815-1df67f28-fcc2-4a04-8dcf-246a71119cbf.png) ## Download Link https://www.dropbox.com/s/6622izjrt20cxjf/manzikert.zip?dl=0
test
manzikert checklist i have the map i have agreed with assigning the to this map i have provided an xml file i have uploaded the map zip file to a file sharing service map name manzikert gamemode map description kotf screenshots xml map image download link
1
258,151
27,563,860,314
IssuesEvent
2023-03-08 01:11:38
jtimberlake/pacbot
https://api.github.com/repos/jtimberlake/pacbot
opened
CVE-2018-1260 (High) detected in spring-security-oauth2-2.2.1.RELEASE.jar
security vulnerability
## CVE-2018-1260 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-security-oauth2-2.2.1.RELEASE.jar</b></p></summary> <p>Module for providing OAuth2 support to Spring Security</p> <p>Library home page: <a href="http://static.springframework.org/spring-security/oauth">http://static.springframework.org/spring-security/oauth</a></p> <p>Path to dependency file: /api/pacman-api-statistics/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/springframework/security/oauth/spring-security-oauth2/2.2.1.RELEASE/spring-security-oauth2-2.2.1.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/security/oauth/spring-security-oauth2/2.2.1.RELEASE/spring-security-oauth2-2.2.1.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/security/oauth/spring-security-oauth2/2.2.1.RELEASE/spring-security-oauth2-2.2.1.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/security/oauth/spring-security-oauth2/2.2.1.RELEASE/spring-security-oauth2-2.2.1.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/security/oauth/spring-security-oauth2/2.2.1.RELEASE/spring-security-oauth2-2.2.1.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/security/oauth/spring-security-oauth2/2.2.1.RELEASE/spring-security-oauth2-2.2.1.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/security/oauth/spring-security-oauth2/2.2.1.RELEASE/spring-security-oauth2-2.2.1.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/security/oauth/spring-security-oauth2/2.2.1.RELEASE/spring-security-oauth2-2.2.1.RELEASE.jar</p> <p> Dependency Hierarchy: - spring-cloud-starter-oauth2-2.0.0.RELEASE.jar (Root Library) - spring-security-oauth2-autoconfigure-2.0.0.RELEASE.jar - :x: **spring-security-oauth2-2.2.1.RELEASE.jar** (Vulnerable Library) <p>Found in base branch: 
<b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Spring Security OAuth, versions 2.3 prior to 2.3.3, 2.2 prior to 2.2.2, 2.1 prior to 2.1.2, 2.0 prior to 2.0.15 and older unsupported versions contains a remote code execution vulnerability. A malicious user or attacker can craft an authorization request to the authorization endpoint that can lead to remote code execution when the resource owner is forwarded to the approval endpoint. <p>Publish Date: 2018-05-11 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2018-1260>CVE-2018-1260</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://pivotal.io/security/cve-2018-1260">https://pivotal.io/security/cve-2018-1260</a></p> <p>Release Date: 2018-05-09</p> <p>Fix Resolution (org.springframework.security.oauth:spring-security-oauth2): 2.2.2.RELEASE</p> <p>Direct dependency fix Resolution (org.springframework.cloud:spring-cloud-starter-oauth2): 2.1.0.RELEASE</p> </p> </details> <p></p> *** :rescue_worker_helmet: Automatic Remediation is available for this issue
True
CVE-2018-1260 (High) detected in spring-security-oauth2-2.2.1.RELEASE.jar - ## CVE-2018-1260 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-security-oauth2-2.2.1.RELEASE.jar</b></p></summary> <p>Module for providing OAuth2 support to Spring Security</p> <p>Library home page: <a href="http://static.springframework.org/spring-security/oauth">http://static.springframework.org/spring-security/oauth</a></p> <p>Path to dependency file: /api/pacman-api-statistics/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/springframework/security/oauth/spring-security-oauth2/2.2.1.RELEASE/spring-security-oauth2-2.2.1.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/security/oauth/spring-security-oauth2/2.2.1.RELEASE/spring-security-oauth2-2.2.1.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/security/oauth/spring-security-oauth2/2.2.1.RELEASE/spring-security-oauth2-2.2.1.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/security/oauth/spring-security-oauth2/2.2.1.RELEASE/spring-security-oauth2-2.2.1.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/security/oauth/spring-security-oauth2/2.2.1.RELEASE/spring-security-oauth2-2.2.1.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/security/oauth/spring-security-oauth2/2.2.1.RELEASE/spring-security-oauth2-2.2.1.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/security/oauth/spring-security-oauth2/2.2.1.RELEASE/spring-security-oauth2-2.2.1.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/security/oauth/spring-security-oauth2/2.2.1.RELEASE/spring-security-oauth2-2.2.1.RELEASE.jar</p> <p> Dependency Hierarchy: - spring-cloud-starter-oauth2-2.0.0.RELEASE.jar (Root Library) - spring-security-oauth2-autoconfigure-2.0.0.RELEASE.jar - :x: 
**spring-security-oauth2-2.2.1.RELEASE.jar** (Vulnerable Library) <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Spring Security OAuth, versions 2.3 prior to 2.3.3, 2.2 prior to 2.2.2, 2.1 prior to 2.1.2, 2.0 prior to 2.0.15 and older unsupported versions contains a remote code execution vulnerability. A malicious user or attacker can craft an authorization request to the authorization endpoint that can lead to remote code execution when the resource owner is forwarded to the approval endpoint. <p>Publish Date: 2018-05-11 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2018-1260>CVE-2018-1260</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://pivotal.io/security/cve-2018-1260">https://pivotal.io/security/cve-2018-1260</a></p> <p>Release Date: 2018-05-09</p> <p>Fix Resolution (org.springframework.security.oauth:spring-security-oauth2): 2.2.2.RELEASE</p> <p>Direct dependency fix Resolution (org.springframework.cloud:spring-cloud-starter-oauth2): 2.1.0.RELEASE</p> </p> </details> <p></p> *** :rescue_worker_helmet: Automatic Remediation is available for this issue
non_test
cve high detected in spring security release jar cve high severity vulnerability vulnerable library spring security release jar module for providing support to spring security library home page a href path to dependency file api pacman api statistics pom xml path to vulnerable library home wss scanner repository org springframework security oauth spring security release spring security release jar home wss scanner repository org springframework security oauth spring security release spring security release jar home wss scanner repository org springframework security oauth spring security release spring security release jar home wss scanner repository org springframework security oauth spring security release spring security release jar home wss scanner repository org springframework security oauth spring security release spring security release jar home wss scanner repository org springframework security oauth spring security release spring security release jar home wss scanner repository org springframework security oauth spring security release spring security release jar home wss scanner repository org springframework security oauth spring security release spring security release jar dependency hierarchy spring cloud starter release jar root library spring security autoconfigure release jar x spring security release jar vulnerable library found in base branch master vulnerability details spring security oauth versions prior to prior to prior to prior to and older unsupported versions contains a remote code execution vulnerability a malicious user or attacker can craft an authorization request to the authorization endpoint that can lead to remote code execution when the resource owner is forwarded to the approval endpoint publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity 
impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org springframework security oauth spring security release direct dependency fix resolution org springframework cloud spring cloud starter release rescue worker helmet automatic remediation is available for this issue
0
631,630
20,156,849,980
IssuesEvent
2022-02-09 17:13:45
craftercms/craftercms
https://api.github.com/repos/craftercms/craftercms
closed
[studio-ui] RTE should have white background in dark theme unless configured otherwise
bug priority: medium
### Bug Report #### Crafter CMS Version 4.0.0.rc1 #### Describe the bug RTE should have white background in dark theme unless configured otherwise #### To Reproduce Steps to reproduce the behavior: 1. Use dark theme 2. Open content form with RTE 3. See RTE is darkened, which seems very hard to use with typical black text #### Logs N/A #### Screenshots <img width="1239" alt="Screen Shot 2021-12-30 at 4 16 42 PM" src="https://user-images.githubusercontent.com/169432/147788552-c54750a1-4106-4122-98c8-c3a9d6415702.png">
1.0
[studio-ui] RTE should have white background in dark theme unless configured otherwise - ### Bug Report #### Crafter CMS Version 4.0.0.rc1 #### Describe the bug RTE should have white background in dark theme unless configured otherwise #### To Reproduce Steps to reproduce the behavior: 1. Use dark theme 2. Open content form with RTE 3. See RTE is darkened, which seems very hard to use with typical black text #### Logs N/A #### Screenshots <img width="1239" alt="Screen Shot 2021-12-30 at 4 16 42 PM" src="https://user-images.githubusercontent.com/169432/147788552-c54750a1-4106-4122-98c8-c3a9d6415702.png">
non_test
rte should have white background in dark theme unless configured otherwise bug report crafter cms version describe the bug rte should have white background in dark theme unless configured otherwise to reproduce steps to reproduce the behavior use dark theme open content form with rte see rte is darkened which seems very hard to use with typical black text logs n a screenshots img width alt screen shot at pm src
0
6,485
3,393,182,458
IssuesEvent
2015-11-30 22:53:55
rust-lang/rust
https://api.github.com/repos/rust-lang/rust
closed
Static Item Duplication
A-codegen
This code: ```rust static TEST: int = 10; static TEST2: &'static int = &TEST; ``` Produces the following IR: ```llvm @_ZN4TEST20h60213af30ae88ef1eaaE = internal constant i64 10 @const = private constant i64 10 @_ZN5TEST220hc4be0dea553bf116iaaE = internal constant i64* @const ``` So `TEST2` points to a duplicate of `TEST`, namely `@const`, rather than `TEST` itself.
1.0
Static Item Duplication - This code: ```rust static TEST: int = 10; static TEST2: &'static int = &TEST; ``` Produces the following IR: ```llvm @_ZN4TEST20h60213af30ae88ef1eaaE = internal constant i64 10 @const = private constant i64 10 @_ZN5TEST220hc4be0dea553bf116iaaE = internal constant i64* @const ``` So `TEST2` points to a duplicate of `TEST`, namely `@const`, rather than `TEST` itself.
non_test
static item duplication this code rust static test int static static int test produces the following ir llvm internal constant const private constant internal constant const so points to a duplicate of test namely const rather than test itself
0