Unnamed: 0 (int64, 0-832k) | id (float64, 2.49B-32.1B) | type (string, 1 class) | created_at (string, length 19) | repo (string, length 7-112) | repo_url (string, length 36-141) | action (string, 3 classes) | title (string, length 2-665) | labels (string, length 4-554) | body (string, length 3-235k) | index (string, 6 classes) | text_combine (string, length 96-235k) | label (string, 2 classes) | text (string, length 96-196k) | binary_label (int64, 0 or 1) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
75,393 | 20,792,528,373 | IssuesEvent | 2022-03-17 04:47:18 | appsmithorg/appsmith | https://api.github.com/repos/appsmithorg/appsmith | closed | [Bug]: On page change in the view mode we can see a vertical scroll bar appear and then disappear | Bug App Viewers Pod High UI Building Pod View Mode Papercut | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
When I try to switch pages, I can observe that a scroll bar appears and then goes away immediately.
### Steps To Reproduce
[](https://www.loom.com/share/d7b8ee1b7ef54edbaadf4f3ff4bbe3e6)
### Environment
Production
### Version
Cloud | 1.0 | [Bug]: On page change in the view mode we can see a vertical scroll bar appear and then disappear - ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
When I try to switch pages, I can observe that a scroll bar appears and then goes away immediately.
### Steps To Reproduce
[](https://www.loom.com/share/d7b8ee1b7ef54edbaadf4f3ff4bbe3e6)
### Environment
Production
### Version
Cloud | non_infrastructure | on page change in the view mode we can see a vertical scroll bar appear and then disappear is there an existing issue for this i have searched the existing issues current behavior when i try to switch pages i can observe that a scroll bar appears and then goes away immediately steps to reproduce environment production version cloud | 0 |
320,583 | 9,782,800,811 | IssuesEvent | 2019-06-08 02:56:47 | brave/brave-browser | https://api.github.com/repos/brave/brave-browser | closed | Cannot use "Sign in with Google" option for app.mysms.com | priority/P4 webcompat | ## Description
mysms web app cannot be logged into using Brave if the mysms account uses a Google account for its login. Selecting the "Sign in with Google" option offered on the mysms page results in a separate window being launched, persisting for a few moments without content, then closing. No login is performed and the mysms app cannot be accessed.
In Chrome/Firefox: selecting "Sign in with Google" opens a separate window in which active Google accounts are displayed as a list, and an appropriate set of credentials can be selected. The window then closes and loading of the mysms web app proceeds as expected.
## Steps to Reproduce
<!--Please add a series of steps to reproduce the issue-->
1. having an active mysms account & mobile app
2. navigate to app.mysms.com
3. select "Sign in with Google"
## Actual result:
A small window pops up briefly then disappears.
## Expected result:
Ability to select appropriate Google account and proceed with logging in to mysms web app.
## Reproduces how often:
always
## Brave version (brave://version info)
Brave 0.56.12 Chromium: 70.0.3538.77 (Official Build) (64-bit)
Revision 0f6ce0b0cd63a12cb4eccea3637b1bc9a29148d9-refs/branch-heads/3538@{#1039}
OS Linux
### Reproducible on current release:
- Does it reproduce on brave-browser dev/beta builds?
Unknown I only have release channel installed.
### Website problems only:
- Does the issue resolve itself when disabling Brave Shields?
No
- Is the issue reproducible on the latest version of Chrome?
No
### Additional Information
N/A
| 1.0 | Cannot use "Sign in with Google" option for app.mysms.com - ## Description
mysms web app cannot be logged into using Brave if the mysms account uses a Google account for its login. Selecting the "Sign in with Google" option offered on the mysms page results in a separate window being launched, persisting for a few moments without content, then closing. No login is performed and the mysms app cannot be accessed.
In Chrome/Firefox: selecting "Sign in with Google" opens a separate window in which active Google accounts are displayed as a list, and an appropriate set of credentials can be selected. The window then closes and loading of the mysms web app proceeds as expected.
## Steps to Reproduce
<!--Please add a series of steps to reproduce the issue-->
1. having an active mysms account & mobile app
2. navigate to app.mysms.com
3. select "Sign in with Google"
## Actual result:
A small window pops up briefly then disappears.
## Expected result:
Ability to select appropriate Google account and proceed with logging in to mysms web app.
## Reproduces how often:
always
## Brave version (brave://version info)
Brave 0.56.12 Chromium: 70.0.3538.77 (Official Build) (64-bit)
Revision 0f6ce0b0cd63a12cb4eccea3637b1bc9a29148d9-refs/branch-heads/3538@{#1039}
OS Linux
### Reproducible on current release:
- Does it reproduce on brave-browser dev/beta builds?
Unknown I only have release channel installed.
### Website problems only:
- Does the issue resolve itself when disabling Brave Shields?
No
- Is the issue reproducible on the latest version of Chrome?
No
### Additional Information
N/A
| non_infrastructure | cannot use sign in with google option for app mysms com description mysms web app cannot be logged into using brave if mysms account uses a google account for it s login selecting the sign in with google option offered on the mysms page results in a separate window being launched persisting for a few moments without content then closing no login is performed and the mysms app cannot be accessed in chrome firefox selecting sign in with google opens a separate window in which active google accounts are displayed as a list and an appropriate set of credentials can be selected the window then closes and loading of the mysms web app proceeds as expected steps to reproduce having an active mysms account mobile app navigate to app mysms com select sign in with google actual result a small window pops up briefly then disappears expected result ability to select appropriate google account and proceed with logging in to mysms web app reproduces how often always brave version brave version info brave chromium official build bit revision refs branch heads os linux reproducible on current release does it reproduce on brave browser dev beta builds unknown i only have release channel installed website problems only does the issue resolve itself when disabling brave shields no is the issue reproducible on the latest version of chrome no additional information n a | 0 |
21,682 | 14,706,546,058 | IssuesEvent | 2021-01-04 20:02:09 | dotnet/docker-tools | https://api.github.com/repos/dotnet/docker-tools | closed | Upgrade Image Builder to .NET 5 | area-infrastructure enhancement | Image Builder is currently targeting .NET Core 3.1: https://github.com/dotnet/docker-tools/blob/daf226c1dcc2c1b7a33708d4673c19cac6febaab/src/Microsoft.DotNet.ImageBuilder/src/Microsoft.DotNet.ImageBuilder.csproj#L6
Now that .NET 5 has been released, it should really be updated to that version. | 1.0 | Upgrade Image Builder to .NET 5 - Image Builder is currently targeting .NET Core 3.1: https://github.com/dotnet/docker-tools/blob/daf226c1dcc2c1b7a33708d4673c19cac6febaab/src/Microsoft.DotNet.ImageBuilder/src/Microsoft.DotNet.ImageBuilder.csproj#L6
Now that .NET 5 has been released, it should really be updated to that version. | infrastructure | upgrade image builder to net image builder is currently targeting net core now that net has been released it should really be updated to that version | 1 |
27,464 | 21,769,944,381 | IssuesEvent | 2022-05-13 08:04:37 | splendo/kaluga | https://api.github.com/repos/splendo/kaluga | opened | Use version catalogs | 👷♀️infrastructure 0.5.0 | **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
| 1.0 | Use version catalogs - **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
| infrastructure | use version catalogs is your feature request related to a problem please describe a clear and concise description of what the problem is ex i m always frustrated when describe the solution you d like a clear and concise description of what you want to happen describe alternatives you ve considered a clear and concise description of any alternative solutions or features you ve considered additional context add any other context or screenshots about the feature request here | 1 |
145,369 | 19,339,395,795 | IssuesEvent | 2021-12-15 01:26:58 | billmcchesney1/page.js | https://api.github.com/repos/billmcchesney1/page.js | opened | CVE-2021-3807 (High) detected in ansi-regex-3.0.0.tgz | security vulnerability | ## CVE-2021-3807 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ansi-regex-3.0.0.tgz</b></p></summary>
<p>Regular expression for matching ANSI escape codes</p>
<p>Library home page: <a href="https://registry.npmjs.org/ansi-regex/-/ansi-regex-3.0.0.tgz">https://registry.npmjs.org/ansi-regex/-/ansi-regex-3.0.0.tgz</a></p>
<p>Path to dependency file: page.js/package.json</p>
<p>Path to vulnerable library: page.js/node_modules/string-width/node_modules/ansi-regex/package.json</p>
<p>
Dependency Hierarchy:
- serve-11.3.2.tgz (Root Library)
- boxen-1.3.0.tgz
- string-width-2.1.1.tgz
- strip-ansi-4.0.0.tgz
- :x: **ansi-regex-3.0.0.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
ansi-regex is vulnerable to Inefficient Regular Expression Complexity
<p>Publish Date: 2021-09-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3807>CVE-2021-3807</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://huntr.dev/bounties/5b3cf33b-ede0-4398-9974-800876dfd994/">https://huntr.dev/bounties/5b3cf33b-ede0-4398-9974-800876dfd994/</a></p>
<p>Release Date: 2021-09-17</p>
<p>Fix Resolution: ansi-regex - 5.0.1,6.0.1</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"ansi-regex","packageVersion":"3.0.0","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"serve:11.3.2;boxen:1.3.0;string-width:2.1.1;strip-ansi:4.0.0;ansi-regex:3.0.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"ansi-regex - 5.0.1,6.0.1","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-3807","vulnerabilityDetails":"ansi-regex is vulnerable to Inefficient Regular Expression Complexity","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3807","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | True | CVE-2021-3807 (High) detected in ansi-regex-3.0.0.tgz - ## CVE-2021-3807 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ansi-regex-3.0.0.tgz</b></p></summary>
<p>Regular expression for matching ANSI escape codes</p>
<p>Library home page: <a href="https://registry.npmjs.org/ansi-regex/-/ansi-regex-3.0.0.tgz">https://registry.npmjs.org/ansi-regex/-/ansi-regex-3.0.0.tgz</a></p>
<p>Path to dependency file: page.js/package.json</p>
<p>Path to vulnerable library: page.js/node_modules/string-width/node_modules/ansi-regex/package.json</p>
<p>
Dependency Hierarchy:
- serve-11.3.2.tgz (Root Library)
- boxen-1.3.0.tgz
- string-width-2.1.1.tgz
- strip-ansi-4.0.0.tgz
- :x: **ansi-regex-3.0.0.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
ansi-regex is vulnerable to Inefficient Regular Expression Complexity
<p>Publish Date: 2021-09-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3807>CVE-2021-3807</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://huntr.dev/bounties/5b3cf33b-ede0-4398-9974-800876dfd994/">https://huntr.dev/bounties/5b3cf33b-ede0-4398-9974-800876dfd994/</a></p>
<p>Release Date: 2021-09-17</p>
<p>Fix Resolution: ansi-regex - 5.0.1,6.0.1</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"ansi-regex","packageVersion":"3.0.0","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"serve:11.3.2;boxen:1.3.0;string-width:2.1.1;strip-ansi:4.0.0;ansi-regex:3.0.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"ansi-regex - 5.0.1,6.0.1","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-3807","vulnerabilityDetails":"ansi-regex is vulnerable to Inefficient Regular Expression Complexity","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3807","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | non_infrastructure | cve high detected in ansi regex tgz cve high severity vulnerability vulnerable library ansi regex tgz regular expression for matching ansi escape codes library home page a href path to dependency file page js package json path to vulnerable library page js node modules string width node modules ansi regex package json dependency hierarchy serve tgz root library boxen tgz string width tgz strip ansi tgz x ansi regex tgz vulnerable library found in base branch master vulnerability details ansi regex is vulnerable to inefficient regular expression complexity publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution ansi regex isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree serve boxen string width strip ansi ansi regex isminimumfixversionavailable true minimumfixversion ansi regex isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails ansi regex is vulnerable to inefficient regular expression complexity vulnerabilityurl | 0 |
249,639 | 21,181,784,462 | IssuesEvent | 2022-04-08 08:40:55 | RRZE-Webteam/rrze-statistik | https://api.github.com/repos/RRZE-Webteam/rrze-statistik | closed | 🌋 Cronjob auf Transients umstellen | Der Cronjob soll weg | 🔮 Getestet auf Beta 🦩 Getestet in Testumgebung 🐜 Prüfen | Der Cronjob soll in der nächsten Version für Transients weichen. (Die gab es schonmal s. 1. März)
- [x] Cronjob nach dem Update abmelden?
- [x] Api-Abruf s. unten
```
$apiUrl = 'die-url';
$response = wp_remote_get($apiUrl);
$responseBody = wp_remote_retrieve_body($response);
$result = json_decode($responseBody);
if (is_array($result) && ! is_wp_error($result)) {
// Data OK
} else {
// Data error
}
``` | 3.0 | 🌋 Cronjob auf Transients umstellen | Der Cronjob soll weg - Der Cronjob soll in der nächsten Version für Transients weichen. (Die gab es schonmal s. 1. März)
- [x] Cronjob nach dem Update abmelden?
- [x] Api-Abruf s. unten
```
$apiUrl = 'die-url';
$response = wp_remote_get($apiUrl);
$responseBody = wp_remote_retrieve_body($response);
$result = json_decode($responseBody);
if (is_array($result) && ! is_wp_error($result)) {
// Data OK
} else {
// Data error
}
``` | non_infrastructure | 🌋 cronjob auf transients umstellen der cronjob soll weg der cronjob soll in der nächsten version für transients weichen die gab es schonmal s märz cronjob nach dem update abmelden api abruf s unten apiurl die url response wp remote get apiurl responsebody wp remote retrieve body response result json decode responsebody if is array result is wp error result data ok else data error | 0 |
30,215 | 24,650,757,374 | IssuesEvent | 2022-10-17 18:26:06 | IBM-Cloud/terraform-provider-ibm | https://api.github.com/repos/IBM-Cloud/terraform-provider-ibm | closed | Cannot create ibm_is_vpc_routing_table_route | service/VPC Infrastructure | <!---
Please note the following potential times when an issue might be in Terraform core:
* [Configuration Language](https://www.terraform.io/docs/configuration/index.html) or resource ordering issues
* [State](https://www.terraform.io/docs/state/index.html) and [State Backend](https://www.terraform.io/docs/backends/index.html) issues
* [Provisioner](https://www.terraform.io/docs/provisioners/index.html) issues
* [Registry](https://registry.terraform.io/) issues
* Spans resources across multiple providers
If you are running into one of these scenarios, we recommend opening an issue in the [Terraform core repository](https://github.com/hashicorp/terraform/) instead.
--->
<!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Terraform CLI and Terraform IBM Provider Version
<!--- Please run `terraform -v` to show the Terraform core version and provider version(s). If you are not running the latest version of Terraform or the provider, please upgrade because your issue may have already been fixed. [Terraform documentation on provider versioning](https://www.terraform.io/docs/configuration/providers.html#provider-versions). --->
```
Terraform v1.3.2
on darwin_amd64
+ provider registry.terraform.io/ibm-cloud/ibm v1.46.0
```
### Affected Resource(s)
<!--- Please list the affected resources and data sources. --->
* ibm_is_vpc_routing_table_route
### Terraform Configuration Files
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
Please include all Terraform configurations required to reproduce the bug. Bug reports without a functional reproduction may be closed without investigation.
```hcl
variable "ibmcloud-api-key" {}
variable "login-region" {}
##################################################
provider "ibm" {
ibmcloud_api_key = var.ibmcloud-api-key
region = var.login-region
}
##################################################
resource "ibm_is_vpc" "vpc1" {
name = "example-vpc1"
}
resource "ibm_is_vpc_routing_table" "rtbl1" {
vpc = ibm_is_vpc.vpc1.id
name = "example-rtbl1"
route_direct_link_ingress = false
route_transit_gateway_ingress = false
route_vpc_zone_ingress = false
}
resource "ibm_is_vpc_routing_table_route" "rtbl-rt1" {
vpc = ibm_is_vpc.vpc1.id
routing_table = ibm_is_vpc_routing_table.rtbl1.id
zone = "us-south-1"
name = "example-rtbl-rt1"
destination = "172.16.1.123/32"
action = "drop"
next_hop = "0.0.0.0"
}
```
### Debug Output
<!---
Please provide a link to a GitHub Gist containing the complete debug output. Please do NOT paste the debug output in the issue; just paste a link to the Gist.
To obtain the debug output, see the [Terraform documentation on debugging](https://www.terraform.io/docs/internals/debugging.html).
--->
### Panic Output
<!--- If Terraform produced a panic, please provide a link to a GitHub Gist containing the output of the `crash.log`. --->
### Expected Behavior
<!--- What should have happened? --->
Should be able to create `ibm_is_vpc_routing_table_route` without any error.
### Actual Behavior
<!--- What actually happened? --->
```
ibm_is_vpc.vpc1: Creating...
ibm_is_vpc.vpc1: Still creating... [10s elapsed]
ibm_is_vpc.vpc1: Creation complete after 16s [id=r006-65c00f04-548c-492a-ba9e-7a8e688ea7ab]
ibm_is_vpc_routing_table.rtbl1: Creating...
ibm_is_vpc_routing_table.rtbl1: Creation complete after 0s [id=r006-65c00f04-548c-492a-ba9e-7a8e688ea7ab/r006-24a8017a-d7f1-4df8-bff6-0020c0dd7493]
ibm_is_vpc_routing_table_route.rtbl-rt1: Creating...
╷
│ Error: /v1/vpcs/r006-65c00f04-548c-492a-ba9e-7a8e688ea7ab/routing_tables/r006-65c00f04-548c-492a-ba9e-7a8e688ea7ab/r006-24a8017a-d7f1-4df8-bff6-0020c0dd7493/routes endpoint not found
│
│ with ibm_is_vpc_routing_table_route.rtbl-rt1,
│ on bug_report_tf_route_main.tf line 25, in resource "ibm_is_vpc_routing_table_route" "rtbl-rt1":
│ 25: resource "ibm_is_vpc_routing_table_route" "rtbl-rt1" {
│
╵
```
The resource path `/v1/vpcs/r006-65c00f04-548c-492a-ba9e-7a8e688ea7ab/routing_tables/r006-65c00f04-548c-492a-ba9e-7a8e688ea7ab/r006-24a8017a-d7f1-4df8-bff6-0020c0dd7493/routes` is wrong. Why does it include the VPC ID **twice** in the path? It should be `/v1/vpcs/r006-65c00f04-548c-492a-ba9e-7a8e688ea7ab/routing_tables/r006-24a8017a-d7f1-4df8-bff6-0020c0dd7493/routes`.
The environment is in us-south-1.
### Steps to Reproduce
<!--- Please list the steps required to reproduce the issue. --->
1. `terraform apply`
### Important Factoids
<!--- Are there anything atypical about your accounts that we should know? For example: Running in EC2 Classic? --->
### References
<!---
Information about referencing Github Issues: https://help.github.com/articles/basic-writing-and-formatting-syntax/#referencing-issues-and-pull-requests
Are there any other GitHub issues (open or closed) or pull requests that should be linked here? Vendor documentation? For example:
--->
* #0000
| 1.0 | Cannot create ibm_is_vpc_routing_table_route - <!---
Please note the following potential times when an issue might be in Terraform core:
* [Configuration Language](https://www.terraform.io/docs/configuration/index.html) or resource ordering issues
* [State](https://www.terraform.io/docs/state/index.html) and [State Backend](https://www.terraform.io/docs/backends/index.html) issues
* [Provisioner](https://www.terraform.io/docs/provisioners/index.html) issues
* [Registry](https://registry.terraform.io/) issues
* Spans resources across multiple providers
If you are running into one of these scenarios, we recommend opening an issue in the [Terraform core repository](https://github.com/hashicorp/terraform/) instead.
--->
<!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Terraform CLI and Terraform IBM Provider Version
<!--- Please run `terraform -v` to show the Terraform core version and provider version(s). If you are not running the latest version of Terraform or the provider, please upgrade because your issue may have already been fixed. [Terraform documentation on provider versioning](https://www.terraform.io/docs/configuration/providers.html#provider-versions). --->
```
Terraform v1.3.2
on darwin_amd64
+ provider registry.terraform.io/ibm-cloud/ibm v1.46.0
```
### Affected Resource(s)
<!--- Please list the affected resources and data sources. --->
* ibm_is_vpc_routing_table_route
### Terraform Configuration Files
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
Please include all Terraform configurations required to reproduce the bug. Bug reports without a functional reproduction may be closed without investigation.
```hcl
variable "ibmcloud-api-key" {}
variable "login-region" {}
##################################################
provider "ibm" {
ibmcloud_api_key = var.ibmcloud-api-key
region = var.login-region
}
##################################################
resource "ibm_is_vpc" "vpc1" {
name = "example-vpc1"
}
resource "ibm_is_vpc_routing_table" "rtbl1" {
vpc = ibm_is_vpc.vpc1.id
name = "example-rtbl1"
route_direct_link_ingress = false
route_transit_gateway_ingress = false
route_vpc_zone_ingress = false
}
resource "ibm_is_vpc_routing_table_route" "rtbl-rt1" {
vpc = ibm_is_vpc.vpc1.id
routing_table = ibm_is_vpc_routing_table.rtbl1.id
zone = "us-south-1"
name = "example-rtbl-rt1"
destination = "172.16.1.123/32"
action = "drop"
next_hop = "0.0.0.0"
}
```
### Debug Output
<!---
Please provide a link to a GitHub Gist containing the complete debug output. Please do NOT paste the debug output in the issue; just paste a link to the Gist.
To obtain the debug output, see the [Terraform documentation on debugging](https://www.terraform.io/docs/internals/debugging.html).
--->
### Panic Output
<!--- If Terraform produced a panic, please provide a link to a GitHub Gist containing the output of the `crash.log`. --->
### Expected Behavior
<!--- What should have happened? --->
Should be able to create `ibm_is_vpc_routing_table_route` without any error.
### Actual Behavior
<!--- What actually happened? --->
```
ibm_is_vpc.vpc1: Creating...
ibm_is_vpc.vpc1: Still creating... [10s elapsed]
ibm_is_vpc.vpc1: Creation complete after 16s [id=r006-65c00f04-548c-492a-ba9e-7a8e688ea7ab]
ibm_is_vpc_routing_table.rtbl1: Creating...
ibm_is_vpc_routing_table.rtbl1: Creation complete after 0s [id=r006-65c00f04-548c-492a-ba9e-7a8e688ea7ab/r006-24a8017a-d7f1-4df8-bff6-0020c0dd7493]
ibm_is_vpc_routing_table_route.rtbl-rt1: Creating...
╷
│ Error: /v1/vpcs/r006-65c00f04-548c-492a-ba9e-7a8e688ea7ab/routing_tables/r006-65c00f04-548c-492a-ba9e-7a8e688ea7ab/r006-24a8017a-d7f1-4df8-bff6-0020c0dd7493/routes endpoint not found
│
│ with ibm_is_vpc_routing_table_route.rtbl-rt1,
│ on bug_report_tf_route_main.tf line 25, in resource "ibm_is_vpc_routing_table_route" "rtbl-rt1":
│ 25: resource "ibm_is_vpc_routing_table_route" "rtbl-rt1" {
│
╵
```
The resource path `/v1/vpcs/r006-65c00f04-548c-492a-ba9e-7a8e688ea7ab/routing_tables/r006-65c00f04-548c-492a-ba9e-7a8e688ea7ab/r006-24a8017a-d7f1-4df8-bff6-0020c0dd7493/routes` is wrong. Why does it include the VPC ID **twice** in the path? It should be `/v1/vpcs/r006-65c00f04-548c-492a-ba9e-7a8e688ea7ab/routing_tables/r006-24a8017a-d7f1-4df8-bff6-0020c0dd7493/routes`.
The environment is in us-south-1.
### Steps to Reproduce
<!--- Please list the steps required to reproduce the issue. --->
1. `terraform apply`
### Important Factoids
<!--- Are there anything atypical about your accounts that we should know? For example: Running in EC2 Classic? --->
### References
<!---
Information about referencing Github Issues: https://help.github.com/articles/basic-writing-and-formatting-syntax/#referencing-issues-and-pull-requests
Are there any other GitHub issues (open or closed) or pull requests that should be linked here? Vendor documentation? For example:
--->
* #0000
| infrastructure | cannot create ibm is vpc routing table route please note the following potential times when an issue might be in terraform core or resource ordering issues and issues issues issues spans resources across multiple providers if you are running into one of these scenarios we recommend opening an issue in the instead community note please vote on this issue by adding a 👍 to the original issue to help the community and maintainers prioritize this request please do not leave or other comments that do not add relevant new information or questions they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment terraform cli and terraform ibm provider version terraform on darwin provider registry terraform io ibm cloud ibm affected resource s ibm is vpc routing table route terraform configuration files please include all terraform configurations required to reproduce the bug bug reports without a functional reproduction may be closed without investigation hcl variable ibmcloud api key variable login region provider ibm ibmcloud api key var ibmcloud api key region var login region resource ibm is vpc name example resource ibm is vpc routing table vpc ibm is vpc id name example route direct link ingress false route transit gateway ingress false route vpc zone ingress false resource ibm is vpc routing table route rtbl vpc ibm is vpc id routing table ibm is vpc routing table id zone us south name example rtbl destination action drop next hop debug output please provide a link to a github gist containing the complete debug output please do not paste the debug output in the issue just paste a link to the gist to obtain the debug output see the panic output expected behavior should be able to create ibm is vpc routing table route without any error actual behavior ibm is vpc creating ibm is vpc still creating ibm is vpc creation complete after ibm is vpc routing table creating ibm is vpc routing table creation complete after ibm is vpc routing table route rtbl creating ╷ │ error vpcs routing tables routes endpoint not found │ │ with ibm is vpc routing table route rtbl │ on bug report tf route main tf line in resource ibm is vpc routing table route rtbl │ resource ibm is vpc routing table route rtbl │ ╵ the resource path vpcs routing tables routes is wrong why does it include the vpc id twice in the path it should be vpcs routing tables routes the environment is in us south steps to reproduce terraform apply important factoids references information about referencing github issues are there any other github issues open or closed or pull requests that should be linked here vendor documentation for example | 1 |
180,289 | 21,625,647,848 | IssuesEvent | 2022-05-05 01:30:13 | searchboy-sudo/headless-wp-nuxt | https://api.github.com/repos/searchboy-sudo/headless-wp-nuxt | closed | CVE-2020-28168 (Medium) detected in axios-0.19.1.tgz - autoclosed | security vulnerability | ## CVE-2020-28168 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>axios-0.19.1.tgz</b></p></summary>
<p>Promise based HTTP client for the browser and node.js</p>
<p>Library home page: <a href="https://registry.npmjs.org/axios/-/axios-0.19.1.tgz">https://registry.npmjs.org/axios/-/axios-0.19.1.tgz</a></p>
<p>Path to dependency file: headless-wp-nuxt/package.json</p>
<p>Path to vulnerable library: /node_modules/axios/package.json</p>
<p>
Dependency Hierarchy:
- :x: **axios-0.19.1.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Axios NPM package 0.21.0 contains a Server-Side Request Forgery (SSRF) vulnerability where an attacker is able to bypass a proxy by providing a URL that responds with a redirect to a restricted host or IP address.
<p>Publish Date: 2020-11-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28168>CVE-2020-28168</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/axios/axios/commit/c7329fefc890050edd51e40e469a154d0117fc55">https://github.com/axios/axios/commit/c7329fefc890050edd51e40e469a154d0117fc55</a></p>
<p>Release Date: 2020-11-06</p>
<p>Fix Resolution: axios - 0.21.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-28168 (Medium) detected in axios-0.19.1.tgz - autoclosed - ## CVE-2020-28168 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>axios-0.19.1.tgz</b></p></summary>
<p>Promise based HTTP client for the browser and node.js</p>
<p>Library home page: <a href="https://registry.npmjs.org/axios/-/axios-0.19.1.tgz">https://registry.npmjs.org/axios/-/axios-0.19.1.tgz</a></p>
<p>Path to dependency file: headless-wp-nuxt/package.json</p>
<p>Path to vulnerable library: /node_modules/axios/package.json</p>
<p>
Dependency Hierarchy:
- :x: **axios-0.19.1.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Axios NPM package 0.21.0 contains a Server-Side Request Forgery (SSRF) vulnerability where an attacker is able to bypass a proxy by providing a URL that responds with a redirect to a restricted host or IP address.
<p>Publish Date: 2020-11-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28168>CVE-2020-28168</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/axios/axios/commit/c7329fefc890050edd51e40e469a154d0117fc55">https://github.com/axios/axios/commit/c7329fefc890050edd51e40e469a154d0117fc55</a></p>
<p>Release Date: 2020-11-06</p>
<p>Fix Resolution: axios - 0.21.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_infrastructure | cve medium detected in axios tgz autoclosed cve medium severity vulnerability vulnerable library axios tgz promise based http client for the browser and node js library home page a href path to dependency file headless wp nuxt package json path to vulnerable library node modules axios package json dependency hierarchy x axios tgz vulnerable library found in base branch master vulnerability details axios npm package contains a server side request forgery ssrf vulnerability where an attacker is able to bypass a proxy by providing a url that responds with a redirect to a restricted host or ip address publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution axios step up your open source security game with whitesource | 0 |
46,208 | 11,799,633,494 | IssuesEvent | 2020-03-18 16:12:00 | FRRouting/frr | https://api.github.com/repos/FRRouting/frr | closed | build: vrrpd should only be compiled on Linux | build | Since vrrpd is only supported on Linux, we should not try to compile it on other operating systems, which we do by default.
For instance, on OpenBSD this fails with:
```
CC vrrpd/vrrp.o
vrrpd/vrrp.c:1120:7: error: use of undeclared identifier 'SO_BINDTODEVICE'
SO_BINDTODEVICE, r->mvl_ifp->name,
^
vrrpd/vrrp.c:1157:7: error: use of undeclared identifier 'SO_BINDTODEVICE'
SO_BINDTODEVICE, r->vr->ifp->name,
^
vrrpd/vrrp.c:1213:19: error: variable has incomplete type 'struct ip_mreqn'
struct ip_mreqn mreqn = {};
^
vrrpd/vrrp.c:1213:10: note: forward declaration of 'struct ip_mreqn'
struct ip_mreqn mreqn = {};
^
vrrpd/vrrp.c:1267:7: error: use of undeclared identifier 'SO_BINDTODEVICE'
SO_BINDTODEVICE, r->vr->ifp->name,
^
4 errors generated.
```
Some checks will have to be added to configure.ac to prevent this. | 1.0 | build: vrrpd should only be compiled on Linux - Since vrrpd is only supported on Linux, we should not try to compile it on other operating systems, which we do by default.
For instance, on OpenBSD this fails with:
```
CC vrrpd/vrrp.o
vrrpd/vrrp.c:1120:7: error: use of undeclared identifier 'SO_BINDTODEVICE'
SO_BINDTODEVICE, r->mvl_ifp->name,
^
vrrpd/vrrp.c:1157:7: error: use of undeclared identifier 'SO_BINDTODEVICE'
SO_BINDTODEVICE, r->vr->ifp->name,
^
vrrpd/vrrp.c:1213:19: error: variable has incomplete type 'struct ip_mreqn'
struct ip_mreqn mreqn = {};
^
vrrpd/vrrp.c:1213:10: note: forward declaration of 'struct ip_mreqn'
struct ip_mreqn mreqn = {};
^
vrrpd/vrrp.c:1267:7: error: use of undeclared identifier 'SO_BINDTODEVICE'
SO_BINDTODEVICE, r->vr->ifp->name,
^
4 errors generated.
```
Some checks will have to be added to configure.ac to prevent this. | non_infrastructure | build vrrpd should only be compiled on linux since vrrpd is only suppored on linux we should not try to compile it on other operating systems which we do by default for instance on openbsd this fails with cc vrrpd vrrp o vrrpd vrrp c error use of undeclared identifier so bindtodevice so bindtodevice r mvl ifp name vrrpd vrrp c error use of undeclared identifier so bindtodevice so bindtodevice r vr ifp name vrrpd vrrp c error variable has incomplete type struct ip mreqn struct ip mreqn mreqn vrrpd vrrp c note forward declaration of struct ip mreqn struct ip mreqn mreqn vrrpd vrrp c error use of undeclared identifier so bindtodevice so bindtodevice r vr ifp name errors generated some checks will have to be added to configure ac to prevent this | 0 |
8,963 | 7,751,504,027 | IssuesEvent | 2018-05-30 17:16:21 | ampproject/docs | https://api.github.com/repos/ampproject/docs | closed | Build infra: Lock dep versions and keep up to date automagically | P2: Medium Type: Site Infrastructure | We should lock versions of dependencies to make our builds less fragile and our caching in Travis more dependable, and find a way to semi-automatically/painlessly keep them updated. | 1.0 | Build infra: Lock dep versions and keep up to date automagically - We should lock versions of dependencies to make our builds less fragile and our caching in Travis more dependable, and find a way to semi-automatically/painlessly keep them updated. | infrastructure | build infra lock dep versions and keep up to date automagically we should lock versions of dependencies to make our builds less fragile and our caching in travis more dependable and find a way to semi automatically painlessly keep them updated | 1 |
11,053 | 8,887,108,333 | IssuesEvent | 2019-01-15 03:51:49 | eclipse/vorto | https://api.github.com/repos/eclipse/vorto | opened | Vorto Pull Requests are enhanced with additional checks | Infrastructure | Additional Checks:
- SonarCloud
- Compliance / CLM
- BlackDuck
- Codacy for Warnings, etc
- Reports are linked from Pull Request to e.g. S3 bucket for the community to view | 1.0 | Vorto Pull Requests are enhanced with additional checks - Additional Checks:
- SonarCloud
- Compliance / CLM
- BlackDuck
- Codacy for Warnings, etc
- Reports are linked from Pull Request to e.g. S3 bucket for the community to view | infrastructure | vorto pull requests are enhanced with additional checks additional checks sonarcloud compliance clm blackduck codacy for warnings etc reports are linked from pull request to e g bucket for the community to view | 1 |
17,259 | 12,263,862,036 | IssuesEvent | 2020-05-07 02:22:38 | APSIMInitiative/ApsimX | https://api.github.com/repos/APSIMInitiative/ApsimX | closed | Issue running Soybean validation | bug interface/infrastructure | I get an error when running the soybean validation set on a 32core workstation. The model throws an exception "System.InvalidOperationException: 'Collection was modified; enumeration operation may not execute.'"
The error is on the following line in APSIM.cs
public static List<IModel> ChildrenRecursively(IModel model)
{
List<IModel> models = new List<IModel>();
**foreach (Model child in model.Children)**
{
models.Add(child);
models.AddRange(ChildrenRecursively(child));
}
return models;
} | 1.0 | Issue running Soybean validation - I get an error when running the soybean validation set on a 32core workstation. The model throws an exception "System.InvalidOperationException: 'Collection was modified; enumeration operation may not execute.'"
The error is on the following line in APSIM.cs
public static List<IModel> ChildrenRecursively(IModel model)
{
List<IModel> models = new List<IModel>();
**foreach (Model child in model.Children)**
{
models.Add(child);
models.AddRange(ChildrenRecursively(child));
}
return models;
} | infrastructure | issue running soybean validation i get an error when running the soybean validation set on a workstation the model throws an exception system invalidoperationexception collection was modified enumeration operation may not execute the error is on the following line in apsim cs public static list childrenrecursively imodel model list models new list foreach model child in model children models add child models addrange childrenrecursively child return models | 1 |
664,396 | 22,268,764,079 | IssuesEvent | 2022-06-10 10:04:52 | opencrvs/opencrvs-core | https://api.github.com/repos/opencrvs/opencrvs-core | closed | Correct Record is not showing Informant details, if 'Who is applying for birth registration?' is changed when reviewing a birth application | 👹Bug Priority: high | **Bug Description:**
If a birth application that is applied by the father/mother is sent for review and the register changes the 'Who is applying for birth registration?' from father/mother to any other option and registers the application, the updated informant details don't get updated in the correct record.
**Steps:**
1. A complete birth application that has been applied by the father or mother is available in the ready for review of the register
2. Log in as a Register
3. Navigate to ready for review
4. Download the application and click review
5. Change the 'Who is applying for birth registration?' from father/mother to any other option except mother/father
6. Register the Application
7. Click on ready to print
8. Click on the application
9. Click on Correct record
10. Observe the Declaration details
**Actual Result:**
'Who is applying for birth registration?' is showing the previous data; it does not show the updated data. Also, no Informant details are available.
**Expected Result:**
'Who is applying for birth registration?' should show the update to the Informant. Also, Informant details should show.
**Screen record:**
https://user-images.githubusercontent.com/105625699/170972928-1ce89883-0502-4c86-8994-d5dc745231bc.mp4
**Tested on:**
https://login.farajaland-qa.opencrvs.org/
**Username & Password Used:**
Username: kennedy.mweene
password: test
**Desktop:**
OS: Windows 10
Browser: Chrome
| 1.0 | Correct Record is not showing Informant details, if 'Who is applying for birth registration?' is changed when reviewing a birth application - **Bug Description:**
If a birth application that is applied by the father/mother is sent for review and the register changes the 'Who is applying for birth registration?' from father/mother to any other option and registers the application, the updated informant details don't get updated in the correct record.
**Steps:**
1. A complete birth application that has been applied by the father or mother is available in the ready for review of the register
2. Log in as a Register
3. Navigate to ready for review
4. Download the application and click review
5. Change the 'Who is applying for birth registration?' from father/mother to any other option except mother/father
6. Register the Application
7. Click on ready to print
8. Click on the application
9. Click on Correct record
10. Observe the Declaration details
**Actual Result:**
'Who is applying for birth registration?' is showing the previous data; it does not show the updated data. Also, no Informant details are available.
**Expected Result:**
'Who is applying for birth registration?' should show the update to the Informant. Also, Informant details should show.
**Screen record:**
https://user-images.githubusercontent.com/105625699/170972928-1ce89883-0502-4c86-8994-d5dc745231bc.mp4
**Tested on:**
https://login.farajaland-qa.opencrvs.org/
**Username & Password Used:**
Username: kennedy.mweene
password: test
**Desktop:**
OS: Windows 10
Browser: Chrome
| non_infrastructure | correct record is not showing informant details if who is applying for birth registration is changed when reviewing a birth application bug description if a birth application that is applied by the father mother is sent for review and the register changes the who is applying for birth registration from father mother to any other option and registers the application the updated informant details don t get updated in the correct record steps a complete birth application that has been applied by the father or mother is available in the ready for review of the register log in as a register navigate to ready for review download the application and click review change the who is applying for birth registration from father mother to any other option except mother father register the application click on ready to print click on the application click on correct record observe the declaration details actual result who is applying for birth registration is showing previous data don t show the updated data also no informant details are available expected result who is applying for birth registration should show the update to the informant also informant details should show screen record tested on username password used username kennedy mweene password test desktop os windows browser chrome | 0 |
10,820 | 8,743,993,388 | IssuesEvent | 2018-12-12 20:51:15 | InsightSoftwareConsortium/ITK | https://api.github.com/repos/InsightSoftwareConsortium/ITK | closed | Not all tests are run in Azure's Python builds | area:Python wrapping type:Infrastructure | ### Description
Not all Python tests are run on Azure's Python Build (42 instead of 81).
### Steps to Reproduce
To check what Python tests are run in Azure's Python's builds, one can connect to dev.azure.com from the check result links on Github. If the PR has been closed, one can still access the check result links by clicking on the green check mark or red cross to expand the list of checks. For example, for PR #268 , the results of the Linux Python Builds are available [here](https://dev.azure.com/itkrobotlinuxpython/ITK.Linux.Python/_build/results?buildId=181).
On the `Build and test` line, one can click on the blue icon `download the log` to download a zip file containing the build and test log.
After that, the following command can be run to count the number of Python tests that were run:
`more 7.txt|grep Python |grep Test |grep '#' |wc -l`
### Expected behavior
In theory, there should be around 80 tests run for Python builds, based on what happens on my machine.
### Actual behavior
Only 42 Python tests are run.
### Reproducibility
Unknown, as the PR used to reproduce this step was the first to re-activate about 40 tests that had been accidentally deactivated.
### Versions
At least for PR #268
### Environment
I haven't checked the logs on Mac and Windows, but I expect the same behavior since all the tests were reporting as passing even though there should have been some tests failing. This demonstrates that the tests that should have failed were not run.
| 1.0 | Not all tests are run in Azure's Python builds - ### Description
Not all Python tests are run on Azure's Python Build (42 instead of 81).
### Steps to Reproduce
To check what Python tests are run in Azure's Python's builds, one can connect to dev.azure.com from the check result links on Github. If the PR has been closed, one can still access the check result links by clicking on the green check mark or red cross to expand the list of checks. For example, for PR #268 , the results of the Linux Python Builds are available [here](https://dev.azure.com/itkrobotlinuxpython/ITK.Linux.Python/_build/results?buildId=181).
On the `Build and test` line, one can click on the blue icon `download the log` to download a zip file containing the build and test log.
After that, the following command can be run to count the number of Python tests that were run:
`more 7.txt|grep Python |grep Test |grep '#' |wc -l`
### Expected behavior
In theory, there should be around 80 tests run for Python builds, based on what happens on my machine.
### Actual behavior
Only 42 Python tests are run.
### Reproducibility
Unknown as the PR used to reproduce this step was the first re-activating about 40 tests that had been accidently deactivated.
### Versions
At least for PR #268
### Environment
I haven't checked the logs on Mac and Windows, but I expect the same behavior since all the tests were reporting as passing even though there should have been some tests failing. This demonstrates that the tests that should have failed were not run.
| infrastructure | not all tests are run in azure s python builds description not all python tests are run on azure s python build instead of steps to reproduce to check what python tests are run in azure s python s builds one can connect to dev azure com from the check result links on github if the pr has been closed one can still access the check result links by clicking on the green check mark or red cross to expand the list of checks for example for pr the results of the linux python builds are available on the build and test line one can click on the blue icon download the log to download a zip file containing the build and test log after that the following command can be run to count the number of python tests that were run more txt grep python grep test grep wc l expected behavior in theory there should be around tests run for python builds based on what happens on my machine actual behavior only python tests are run reproducibility unknown as the pr used to reproduce this step was the first re activating about tests that had been accidently deactivated versions at least for pr environment i haven t checked the logs on max and windows but i expect the same behavior since all the tests were reporting as passing even though there should have been some tests failing this demonstrates that the tests that should have failed were not run | 1 |
108,581 | 13,641,269,315 | IssuesEvent | 2020-09-25 13:56:44 | nextcloud/mail | https://api.github.com/repos/nextcloud/mail | opened | Layout for non-threaded | design enhancement | For mails which are not a thread but just single emails, the layout is a bit strange currently cause it shows too much info. Also it makes no sense to be able to collapse the only email.
So we could check what to do with the header there (possibly just not show the people bubbles in the header?) and probably block collapsing the mail. | 1.0 | Layout for non-threaded - For mails which are not a thread but just single emails, the layout is a bit strange currently cause it shows too much info. Also it makes no sense to be able to collapse the only email.
So we could check what to do with the header there (possibly just not show the people bubbles in the header?) and probably block collapsing the mail. | non_infrastructure | layout for non threaded for mails which are not a thread but just single emails the layout is a bit strange currently cause it shows too much info also it makes no sense to be able to collapse the only email so we could check what to do with the header there possibly just not show the people bubbles in the header and probably block collapsing the mail | 0 |
76,184 | 9,921,627,362 | IssuesEvent | 2019-06-30 19:55:46 | SFTtech/openage | https://api.github.com/repos/SFTtech/openage | closed | Release process is undocumented | documentation packaging | We are missing documentation on how to release a new version. This needs to be fixed ASAP, probably while releasing `0.4.0`, which I should have done two weeks ago.
See also #1133 | 1.0 | Release process is undocumented - We are missing documentation on how to release a new version. This needs to be fixed ASAP, probably while releasing `0.4.0`, which I should have done two weeks ago.
See also #1133 | non_infrastructure | release process is undocumented we are missing documentation on how to release a new version this needs to be fixed asap probably while releasing which i should have done two weeks ago see also | 0 |
144,081 | 22,269,401,388 | IssuesEvent | 2022-06-10 10:42:16 | brudermusscode/lidl_timer | https://api.github.com/repos/brudermusscode/lidl_timer | closed | Feature | Soundsignal | it's a feature Design | Nach ablauf des [Counters](https://github.com/brudermusscode/lidl_timer/issues/1) ein Ton integrieren der abgespielt wird a la Wecker. | 1.0 | Feature | Soundsignal - Nach ablauf des [Counters](https://github.com/brudermusscode/lidl_timer/issues/1) ein Ton integrieren der abgespielt wird a la Wecker. | non_infrastructure | feature soundsignal nach ablauf des ein ton integrieren der abgespielt wird a la wecker | 0 |
17,295 | 12,289,042,882 | IssuesEvent | 2020-05-09 19:31:45 | intel/dffml | https://api.github.com/repos/intel/dffml | opened | ci: Test against windows | enhancement kind/infrastructure p0 tM | We need to modify the `.github/workflows/testing.yaml` file so that we test against Python 3.7 and 3.8 on Windows too. | 1.0 | ci: Test against windows - We need to modify the `.github/workflows/testing.yaml` file so that we test against Python 3.7 and 3.8 on Windows too. | infrastructure | ci test against windows we need to modify the github workflows testing yaml file so that we test against python and on windows too | 1 |
11,550 | 3,006,500,226 | IssuesEvent | 2015-07-27 10:49:30 | javaslang/javaslang | https://api.github.com/repos/javaslang/javaslang | opened | Harmonize static factory methods of collections | design/refactoring | Now that Seq will also have static factory methods, including `Seq.empty()`, which will return the empty `List`, the user will expect to find this method in all collections. List currently has `List.nil()`. I'm no friend of redundant methods, so we will add `List.empty()` and `List.nil()` will be removed.
Same for `Stream`. Queue already has `empty`. The existing `Stack.nil()` made no sense from the beginning... | 1.0 | Harmonize static factory methods of collections - Now that Seq will also have static factory methods, including `Seq.empty()`, which will return the empty `List`, the user will expect to find this method in all collections. List currently has `List.nil()`. I'm no friend of redundant methods, so we will add `List.empty()` and `List.nil()` will be removed.
Same for `Stream`. Queue already has `empty`. The existing `Stack.nil()` made no sense from the beginning... | non_infrastructure | harmonize static factory methods of collections now that seq will also have static factory methods including seq empty which will return the empty list the user will expect to find this method in all collections list currently has list nil i m no friend of redundant methods so we will add list empty and list nil will be removed same for stream queue already has empty the existing stack nil made no sense from the beginning | 0 |
20,202 | 13,755,843,840 | IssuesEvent | 2020-10-06 19:00:34 | Sage-Bionetworks/schematic | https://api.github.com/repos/Sage-Bionetworks/schematic | opened | Add scoping of dataset to subfolders | communications enhancement infrastructure | Currently schematic supports dataset scope defined in two ways:
1. top-level (root) project folder - all files in a root folder (and its subfolders) are considered part of the same dataset and are annotated together (i.e. all files are listed in the same metadata template)
2. a folder with annotation 'contentType' set to 'dataset' - all files in the 'dataset' folder (and its subfolders) are considered part of the same dataset and are annotated together (i.e. all files are listed in the same metadata template). The 'dataset' folder can be a subfolder in a project (i.e. it doesn't need to be root)
@xindiguo has code that can scope a dataset to a folder (not necessarily a root folder) in a project w/o requiring annotation 'contentType' = 'dataset' for that folder.
@xindiguo could you
- pull the latest code from develop or main
- apply your dataset subfolder scope changes
- PR the updated code to the branch develop-fha in schematic's upstream Sage repo
I will go over your code, incorporate and test.
Meanwhile, @xindiguo could you evaluate if option 2 above is working for you (e.g. perhaps data contributors can add this annotation to their dataset subfolders; that can be done in bulk from the main project fileview)? If/when you test that, could you include the code from these PRs https://github.com/Sage-Bionetworks/schematic/pull/301 and https://github.com/Sage-Bionetworks/schematic/pull/299, if they haven't been merged yet by that time.
Once we do the above, @xindiguo can check if https://github.com/Sage-Bionetworks/schematic/issues/274 is resolved. If it isn't, I'll coordinate with @xdoan on updating the data curator frontend to avoid any filename-entityid joins that can be/are done in the backend instead.
@sujaypatil96 once we've merged and tested the code changes above as necessary, I'll PR to the 'develop' branch any changes coming from @xindiguo's schematic fork's develop-fha branch.
| 1.0 | Add scoping of dataset to subfolders - Currently schematic supports dataset scope defined in two ways:
1. top-level (root) project folder - all files in a root folder (and its subfolders) are considered part of the same dataset and are annotated together (i.e. all files are listed in the same metadata template)
2. a folder with annotation 'contentType' set to 'dataset' - all files in the 'dataset' folder (and its subfolders) are considered part of the same dataset and are annotated together (i.e. all files are listed in the same metadata template). The 'dataset' folder can be a subfolder in a project (i.e. it doesn't need to be root)
@xindiguo has code that can scope a dataset to a folder (not necessarily a root folder) in a project w/o requiring annotation 'contentType' = 'dataset' for that folder.
@xindiguo could you
- pull the latest code from develop or main
- apply your dataset subfolder scope changes
- PR the updated code to the branch develop-fha in schematic's upstream Sage repo
I will go over your code, incorporate and test.
Meanwhile, @xindiguo could you evaluate if option 2 above is working for you (e.g. perhaps data contributors can add this annotation to their dataset subfolders; that can be done in bulk from the main project fileview)? If/when you test that, could you include the code from these PRs https://github.com/Sage-Bionetworks/schematic/pull/301 and https://github.com/Sage-Bionetworks/schematic/pull/299, if they haven't been merged yet by that time.
Once we do the above, @xindiguo can check if https://github.com/Sage-Bionetworks/schematic/issues/274 is resolved. If it isn't, I'll coordinate with @xdoan on updating the data curator frontend to avoid any filename-entityid joins that can be/are done in the backend instead.
@sujaypatil96 once we've merged and tested the code changes above as necessary, I'll PR to the 'develop' branch any changes coming from @xindiguo's schematic fork's develop-fha branch.
| infrastructure | add scoping of dataset to subfolders currently schematic supports dataset scope defined in two ways top level root project folder all files in a root folder and its subfolders are considered part of the same dataset and are annotated together i e all files are listed in the same metadata template a folder with annotation contenttype set to dataset all files in the dataset folder and its subfolders are considered part of the same dataset and are annotated together i e all files are listed in the same metadata template the dataset folder can be a subfolder in a project i e it doesn t need to be root xindiguo has code that can scope a dataset to a folder not necessarily a root folder in a project w o requiring annotation contenttype dataset for that folder xindiguo could you pull the latest code from develop or main apply your dataset subfolder scope changes pr the updated code to the branch develop fha in schematic s upstream sage repo i will go over your code incorporate and test meanwhile xindiguo could you evaluate if option above is working for you e g perhaps data contributors can add this annotation to their dataset subfolders that can be done in bulk from the main project fileview if when you test that could you include the code from these prs and if they haven t been merged yet by that time once we do the above xindiguo can check if is resolved if it isn t i ll coordinate with xdoan on updating the data curator frontend to avoid any filename entityid joins that can be are done in the backend instead once we ve merged and tested the code changes above as necessary i ll pr to the develop branch any changes coming from xindiguo s schematic fork s develop fha branch | 1 |
829 | 2,942,300,265 | IssuesEvent | 2015-07-02 13:37:25 | google/trace-viewer | https://api.github.com/repos/google/trace-viewer | closed | Prepare for catapultization. | Infrastructure | When we move to catapult we'll have a different directory structure. We're going to move trace-viewer to mirror what will be the catapult structure ahead of time.
This means trace-viewer structure will be a bit weird for a while.
Specifically, everything in trace-viewer will move into trace-viewer/tracing. We'll then move what is currently trace-viewer/trace_viewer to trace-viewer/tracing/tracing. This will match where we end up in catapult.
In order to do this we need to:
* move the source code and fix all includes
* Roll deps (this has some extra magic to it)
* update telemetry to have correct paths
* update .gn to have new path to trace-viewer/BUILD.gn
* update content/browser/tracing/BUILD.gn
* update content/browser/tracing/tracing_resources.gyp
* update tools/profile_chrome/trace_packager.py
* notify Android systrace folks and adb_profile_chrome folks about new source layout. | 1.0 | Prepare for catapultization. - When we move to catapult we'll have a different directory structure. We're going to move trace-viewer to mirror what will be the catapult structure ahead of time.
This means trace-viewer structure will be a bit weird for a while.
Specifically, everything in trace-viewer will move into trace-viewer/tracing. We'll then move what is currently trace-viewer/trace_viewer to trace-viewer/tracing/tracing. This will match where we end up in catapult.
In order to do this we need to:
* move the source code and fix all includes
* Roll deps (this has some extra magic to it)
* update telemetry to have correct paths
* update .gn to have new path to trace-viewer/BUILD.gn
* update content/browser/tracing/BUILD.gn
* update content/browser/tracing/tracing_resources.gyp
* update tools/profile_chrome/trace_packager.py
* notify Android systrace folks and adb_profile_chrome folks about new source layout. | infrastructure | prepare for catapultization when we move to catapult we ll have a different directory structure we re going to move trace viewer to mirror what will be the catapult structure ahead of time this means trace viewer structure will be a bit weird for a while specifically everything in in trace viewer will move into trace viewer tracing we ll then move what is currently trace viewer trace viewer to trace viewer tracing tracing this will match where we end up in catapult in order to do this we need to move the source code and fix all includes roll deps this has some extra magic to it update telemetry to have correct paths update gn to have new path to trace viewer build gn update content browser tracing build gn update content browser tracing tracing resources gyp update tools profile chrome trace packager py notify android systrace folks and adb profile chrome folks about new source layout | 1 |
29,136 | 23,750,914,516 | IssuesEvent | 2022-08-31 20:31:09 | dotnet/roslyn | https://api.github.com/repos/dotnet/roslyn | closed | [Automated] PRs inserted in VS build main-32831.189 | Area-Infrastructure untriaged vs-insertion | [View Complete Diff of Changes](https://github.com/dotnet/roslyn/compare/2cf85e445dab552c3e176de5d736464aa696d37f...2f760738cb92f32f50c981b68ba04ac3c8b7ee48?w=1)
- [Restore focus to the textview after rename (63660)](https://github.com/dotnet/roslyn/pull/63660)
- [Fix potential deadlock if we are previewing a rename change (63652)](https://github.com/dotnet/roslyn/pull/63652)
- [Move Windows builds to new images (63482)](https://github.com/dotnet/roslyn/pull/63482)
- [Move Windows builds to new images (63498)](https://github.com/dotnet/roslyn/pull/63498)
- [[release/dev17.0-vs-deps] Update dependencies from dotnet/arcade (62370)](https://github.com/dotnet/roslyn/pull/62370)
- [[release/dev16.11-vs-deps] Update dependencies from dotnet/arcade (63357)](https://github.com/dotnet/roslyn/pull/63357)
| 1.0 | [Automated] PRs inserted in VS build main-32831.189 - [View Complete Diff of Changes](https://github.com/dotnet/roslyn/compare/2cf85e445dab552c3e176de5d736464aa696d37f...2f760738cb92f32f50c981b68ba04ac3c8b7ee48?w=1)
- [Restore focus to the textview after rename (63660)](https://github.com/dotnet/roslyn/pull/63660)
- [Fix potential deadlock if we are previewing a rename change (63652)](https://github.com/dotnet/roslyn/pull/63652)
- [Move Windows builds to new images (63482)](https://github.com/dotnet/roslyn/pull/63482)
- [Move Windows builds to new images (63498)](https://github.com/dotnet/roslyn/pull/63498)
- [[release/dev17.0-vs-deps] Update dependencies from dotnet/arcade (62370)](https://github.com/dotnet/roslyn/pull/62370)
- [[release/dev16.11-vs-deps] Update dependencies from dotnet/arcade (63357)](https://github.com/dotnet/roslyn/pull/63357)
| infrastructure | prs inserted in vs build main update dependencies from dotnet arcade update dependencies from dotnet arcade | 1 |
34,303 | 29,192,652,818 | IssuesEvent | 2023-05-19 21:53:34 | jhu-bids/TermHub | https://api.github.com/repos/jhu-bids/TermHub | opened | Azure run config -> plain text | infrastructure | ## Overview
Right now the command to start up the app (as well as some possibly necessary system dependency installations) is configured in the Azure portal under configuration > general settings (e.g. [here](https://portal.azure.com/#@live.johnshopkins.edu/resource/subscriptions/fe24df19-d251-4821-9a6f-f037c93d7e47/resourceGroups/jh-termhub-webapp-rg/providers/Microsoft.Web/sites/termhub/slots/dev/configuration) for the dev app).
I believe that Martin said today that it is possible to instead commit a plain text file to TermHub where we store this configuration. This would be the better option. | 1.0 | Azure run config -> plain text - ## Overview
Right now the command to start up the app (as well as some possibly necessary system dependency installations) is configured in the Azure portal under configuration > general settings (e.g. [here](https://portal.azure.com/#@live.johnshopkins.edu/resource/subscriptions/fe24df19-d251-4821-9a6f-f037c93d7e47/resourceGroups/jh-termhub-webapp-rg/providers/Microsoft.Web/sites/termhub/slots/dev/configuration) for the dev app).
I believe that Martin said today that it is possible to instead commit a plain text file to TermHub where we store this configuration. This would be the better option. | infrastructure | azure run config plain text overview right now the command to start up the app as well as some possibly necessary system dependency installations is configured in the azure portal under configuration general settings e g for the dev app i believe that martin said today that it is possible to instead commit a plain text file to termhub where we store this configuration this would be the better option | 1 |
5,831 | 6,001,502,785 | IssuesEvent | 2017-06-05 09:26:24 | lampepfl/dotty | https://api.github.com/repos/lampepfl/dotty | opened | Add embedded code examples dotty website | area:infrastructure exp:novice help wanted itype:enhancement | Depends on: https://github.com/scalacenter/scastie/issues/154
We would like to take some of the code examples from the documentation pages and put them into embedded views on the landing page.
Something like:
- [ ] Enums
- [ ] Union & Intersection types
- [ ] Trait Parameters
Would be a good start to have examples of. | 1.0 | Add embedded code examples dotty website - Depends on: https://github.com/scalacenter/scastie/issues/154
We would like to take some of the code examples from the documentation pages and put them into embedded views on the landing page.
Something like:
- [ ] Enums
- [ ] Union & Intersection types
- [ ] Trait Parameters
Would be a good start to have examples of. | infrastructure | add embedded code examples dotty website depends on we would like to take some of the code examples from the documentation pages and put them into embedded views on the landing page something like enums union intersection types trait parameters would be a good start to have examples of | 1 |
2,404 | 3,669,205,870 | IssuesEvent | 2016-02-21 02:50:50 | dotnet/corefx | https://api.github.com/repos/dotnet/corefx | closed | Build failure on OSX | Infrastructure Mac OSX | Building from root fails on OSX with 7 errors related to GenFacades.
A sample is below
> /Users/kapilash/src/corefx/packages/Microsoft.DotNet.BuildTools.1.0.25-prerelease-00112/lib/partialfacades.targets(180,5): error MSB3073: The command ""/Users/kapilash/src/corefx/bin/obj/OSX.AnyCPU.Debug/ToolRuntime/corerun" "/Users/kapilash/src/corefx/packages/Microsoft.DotNet.BuildTools.1.0.25-prerelease-00112/lib/GenFacades.exe" -partialFacadeAssemblyPath:"/Users/kapilash/src/corefx/bin/obj/OSX.AnyCPU.Debug/System.Reflection.TypeExtensions.CoreCLR/PreGenFacades/System.Reflection.TypeExtensions.dll" -contracts:"/Users/kapilash/src/corefx/packages/System.Reflection.TypeExtensions/4.1.0-beta-23429/ref/dotnet5.1/System.Reflection.TypeExtensions.dll" -seeds:"/Users/kapilash/src/corefx/packages/Microsoft.DotNet.CoreCLR/1.0.5-prerelease/lib/dnxcore50/mscorlib.dll" -facadePath:"/Users/kapilash/src/corefx/bin/obj/OSX.AnyCPU.Debug/System.Reflection.TypeExtensions.CoreCLR/" -producePdb:false" exited with code 255. [/Users/kapilash/src/corefx/src/System.Reflection.TypeExtensions/src/System.Reflection.TypeExtensions.CoreCLR.csproj]
It seems the problem is the presence of the [file called mscorlib.ni.dll](https://github.com/dotnet/coreclr/issues/1419#issuecomment-134156283).
Removing that from ```$HOME/bin/obj/OSX.AnyCPU.Debug/ToolRuntime``` fixes the build failure.
| 1.0 | Build failure on OSX - Building from root fails on OSX with 7 errors related to GenFacades.
A sample is below
> /Users/kapilash/src/corefx/packages/Microsoft.DotNet.BuildTools.1.0.25-prerelease-00112/lib/partialfacades.targets(180,5): error MSB3073: The command ""/Users/kapilash/src/corefx/bin/obj/OSX.AnyCPU.Debug/ToolRuntime/corerun" "/Users/kapilash/src/corefx/packages/Microsoft.DotNet.BuildTools.1.0.25-prerelease-00112/lib/GenFacades.exe" -partialFacadeAssemblyPath:"/Users/kapilash/src/corefx/bin/obj/OSX.AnyCPU.Debug/System.Reflection.TypeExtensions.CoreCLR/PreGenFacades/System.Reflection.TypeExtensions.dll" -contracts:"/Users/kapilash/src/corefx/packages/System.Reflection.TypeExtensions/4.1.0-beta-23429/ref/dotnet5.1/System.Reflection.TypeExtensions.dll" -seeds:"/Users/kapilash/src/corefx/packages/Microsoft.DotNet.CoreCLR/1.0.5-prerelease/lib/dnxcore50/mscorlib.dll" -facadePath:"/Users/kapilash/src/corefx/bin/obj/OSX.AnyCPU.Debug/System.Reflection.TypeExtensions.CoreCLR/" -producePdb:false" exited with code 255. [/Users/kapilash/src/corefx/src/System.Reflection.TypeExtensions/src/System.Reflection.TypeExtensions.CoreCLR.csproj]
It seems the problem is the presence of the [file called mscorlib.ni.dll](https://github.com/dotnet/coreclr/issues/1419#issuecomment-134156283).
Removing that from ```$HOME/bin/obj/OSX.AnyCPU.Debug/ToolRuntime``` fixes the build failure.
| infrastructure | build failure on osx building from root fails on osx with errors related to genfacades a sample is below users kapilash src corefx packages microsoft dotnet buildtools prerelease lib partialfacades targets error the command users kapilash src corefx bin obj osx anycpu debug toolruntime corerun users kapilash src corefx packages microsoft dotnet buildtools prerelease lib genfacades exe partialfacadeassemblypath users kapilash src corefx bin obj osx anycpu debug system reflection typeextensions coreclr pregenfacades system reflection typeextensions dll contracts users kapilash src corefx packages system reflection typeextensions beta ref system reflection typeextensions dll seeds users kapilash src corefx packages microsoft dotnet coreclr prerelease lib mscorlib dll facadepath users kapilash src corefx bin obj osx anycpu debug system reflection typeextensions coreclr producepdb false exited with code it seems the problem is the presence of the removing that from home bin obj osx anycpu debug toolruntime fixes the build failure | 1 |
12,300 | 19,594,297,363 | IssuesEvent | 2022-01-05 16:08:02 | NASA-PDS/registry-api-service | https://api.github.com/repos/NASA-PDS/registry-api-service | closed | As an API user, I want an average query response time of 1 second for q=* queries | requirement p.should-have proj.registry+api icebox | <!--
For more information on how to populate this new feature request, see the PDS Wiki on User Story Development:
https://github.com/NASA-PDS/nasa-pds.github.io/wiki/Issue-Tracking#user-story-development
-->
## Motivation
...so that I can ensure usability of the API through rapid responses to queries
## Additional Details
<!-- Please prove any additional details or information that could help provide some context for the user story. -->
1 second is somewhat arbitrary but loosely taken from https://www.nngroup.com/articles/response-times-3-important-limits/
Other details for the requirement:
* Registry should contain a minimum of 1mil products for sufficient testing
* Time starts from query received by API service
## Acceptance Criteria
**Given** a deployed API and registry with 1mil+ products ingested
**When I perform** a request or query against any endpoint with a query of `q=*`
**Then I expect** an average 1 second response time, regardless of the type of response type (e.g. pds4+json, json, etc.)
Note: per the performance note, this should be tested against all endpoints and all response formats.
## Engineering Details
<!--
For dev team. Provide some design / implementation details and/or a sub-task checklist as needed.
Convert issue to Epic if estimate is outside the scope of 1 sprint.
-->
Once #13 is implemented, this may just be a simple regression test we add to the repo to check this. Or we can talk to folks on the team to figure out if we know of any long-running queries that may push this. right now, I can't think of any.
| 1.0 | As an API user, I want an average query response time of 1 second for q=* queries - <!--
For more information on how to populate this new feature request, see the PDS Wiki on User Story Development:
https://github.com/NASA-PDS/nasa-pds.github.io/wiki/Issue-Tracking#user-story-development
-->
## Motivation
...so that I can ensure usability of the API through rapid responses to queries
## Additional Details
<!-- Please prove any additional details or information that could help provide some context for the user story. -->
1 second is somewhat arbitrary but loosely taken from https://www.nngroup.com/articles/response-times-3-important-limits/
Other details for the requirement:
* Registry should contain a minimum of 1mil products for sufficient testing
* Time starts from query received by API service
## Acceptance Criteria
**Given** a deployed API and registry with 1mil+ products ingested
**When I perform** a request or query against any endpoint with a query of `q=*`
**Then I expect** an average 1 second response time, regardless of the type of response type (e.g. pds4+json, json, etc.)
Note: per the performance note, this should be tested against all endpoints and all response formats.
## Engineering Details
<!--
For dev team. Provide some design / implementation details and/or a sub-task checklist as needed.
Convert issue to Epic if estimate is outside the scope of 1 sprint.
-->
Once #13 is implemented, this may just be a simple regression test we add to the repo to check this. Or we can talk to folks on the team to figure out if we know of any long-running queries that may push this. right now, I can't think of any.
| non_infrastructure | as an api user i want an average query response time of second for q queries for more information on how to populate this new feature request see the pds wiki on user story development motivation so that i can ensure usability of the api through rapid responses to queries additional details second is somewhat arbitrary but loosely taken from other details for the requirement registry should contain a minimum of products for sufficient testing time starts from query received by api service acceptance criteria given a deployed api and registry with products ingested when i perform a request or query against any endpoint with a query of q then i expect an average second response time regardless of the type of response type e g json json etc note per the performance note this should be tested against all endpoints and all response formats engineering details for dev team provide some design implementation details and or a sub task checklist as needed convert issue to epic if estimate is outside the scope of sprint once is implemented this may just be a simple regression test we add to the repo to check this or we can talk to folks on the team to figure out if we know of any long running queries that may push this right now i can t think of any | 0 |
23,377 | 16,095,748,153 | IssuesEvent | 2021-04-26 23:17:23 | dotnet/aspnetcore | https://api.github.com/repos/dotnet/aspnetcore | opened | Update extensions release/3.1 to use ubuntu18.04 agents | area-infrastructure area-runtime feature-hosting investigate task | The blocker here is that Microsoft.Extensions.Hosting.Systemd is broken when built on ubuntu18.04. This is likely a product issue so investigation is needed. | 1.0 | Update extensions release/3.1 to use ubuntu18.04 agents - The blocker here is that Microsoft.Extensions.Hosting.Systemd is broken when built on ubuntu18.04. This is likely a product issue so investigation is needed. | infrastructure | update extensions release to use agents the blocker here is that microsoft extensions hosting systemd is broken when built on this is likely a product issue so investigation is needed | 1 |
10,438 | 8,565,256,921 | IssuesEvent | 2018-11-09 19:17:23 | angular/material2 | https://api.github.com/repos/angular/material2 | closed | Completely migrate from TravisCI to CircleCI | in progress infrastructure | We want migrate completely off of TravisCI by the end of Q4. | 1.0 | Completely migrate from TravisCI to CircleCI - We want migrate completely off of TravisCI by the end of Q4. | infrastructure | completely migrate from travisci to circleci we want migrate completely off of travisci by the end of | 1 |
9,105 | 2,607,926,247 | IssuesEvent | 2015-02-26 00:24:45 | chrsmithdemos/minify | https://api.github.com/repos/chrsmithdemos/minify | closed | Improve JSMin performance | auto-migrated Milestone-Release-1.1.0 Priority-High Type-Enhancement | ```
The JSMin library is hideously slow. It probably needs quite a bit of
optimization before Minify will be usable for JavaScript minification on a
high-traffic website.
```
-----
Original issue reported on code.google.com by `rgr...@gmail.com` on 3 May 2007 at 5:54 | 1.0 | Improve JSMin performance - ```
The JSMin library is hideously slow. It probably needs quite a bit of
optimization before Minify will be usable for JavaScript minification on a
high-traffic website.
```
-----
Original issue reported on code.google.com by `rgr...@gmail.com` on 3 May 2007 at 5:54 | non_infrastructure | improve jsmin performance the jsmin library is hideously slow it probably needs quite a bit of optimization before minify will be usable for javascript minification on a high traffic website original issue reported on code google com by rgr gmail com on may at | 0 |
35,675 | 32,019,445,233 | IssuesEvent | 2023-09-22 02:19:14 | accessibility-exchange/platform | https://api.github.com/repos/accessibility-exchange/platform | opened | Consider switching code coverage from Xdebug to pcov | enhancement infrastructure | **Is your feature request related to a problem? Please describe.**
The test runs on CI, particularly for PRs, take a long time. In particular because we run against multiple versions of php and run the test suite for each. Speeding up the time would help.
**Describe the solution you'd like**
Use pcov for code coverage reporting and Xdebug for debugging.
**Describe alternatives you've considered**
Continue using Xdebug for both code coverage reporting and debugging.
| 1.0 | Consider switching code coverage from Xdebug to pcov - **Is your feature request related to a problem? Please describe.**
The test runs on CI, particularly for PRs, take a long time. In particular because we run against multiple versions of php and run the test suite for each. Speeding up the time would help.
**Describe the solution you'd like**
Use pcov for code coverage reporting and Xdebug for debugging.
**Describe alternatives you've considered**
Continue using Xdebug for both code coverage reporting and debugging.
| infrastructure | consider switching code coverage from xdebug to pcov is your feature request related to a problem please describe the test runs on ci particularly for prs take a long time in particular because we run against multiple versions of php and run the test suite for each speeding up the time would help describe the solution you d like use pcov for code coverage reporting and xdebug for debugging describe alternatives you ve considered continue using xdebug for both code coverage reporting and debugging | 1 |
16,325 | 11,913,945,325 | IssuesEvent | 2020-03-31 12:51:51 | Azure/azure-cli | https://api.github.com/repos/Azure/azure-cli | closed | Live Test system should help reduce "noise" and improve triage | Infrastructure Test | We store all of our live test results in a database. We should analyze that database to help identify certain patterns of test failures.
Recommend adding two columns to test results, "Category" and "Likely Cause":
**Category**
- Tests that sometimes fail and sometimes don't over a period of time should be labeled as "flaky". These would be the lowest priority fixes involving adding some kind of retry logic or otherwise tweaking test infrastructure to be more stable.
- Tests that have been consistently passing and suddenly fail (i.e. NEW failure) should be flagged for quick investigation. These could represent a service outage, failure of the live test system itself, or, most critically **a CLI regression that was masked by the presence of a recording**.
- Any test failure marked "Live Only" should be a red flag, as described in a separate issue.
- Tests that have been consistently failing should be marked as such. These are often the ones where the test requires specific features to be enabled on the test subscription.
**Likely Cause**
Rather than require the user to click the link to open the logs or recording, it would be helpful if the system (A01 or an external one) would offer a best guess diagnosis. For example, cases where the subscription lacks a feature will often say "Subscription X is not registered for feature...". Even if all it did is find the root error message in the stack trace and display it, it would save us time. Any kind of non-CLI error (ValueError, TypeError, etc) should be a red flag that there's a likely CLI regression, and so on.
| 1.0 | Live Test system should help reduce "noise" and improve triage - We store all of our live test results in a database. We should analyze that database to help identify certain patterns of test failures.
Recommend adding two columns to test results, "Category" and "Likely Cause":
**Category**
- Tests that sometimes fail and sometimes don't over a period of time should be labeled as "flaky". These would be the lowest priority fixes involving adding some kind of retry logic or otherwise tweaking test infrastructure to be more stable.
- Tests that have been consistently passing and suddenly fail (i.e. NEW failure) should be flagged for quick investigation. These could represent a service outage, failure of the live test system itself, or, most critically **a CLI regression that was masked by the presence of a recording**.
- Any test failure marked "Live Only" should be a red flag, as described in a separate issue.
- Tests that have been consistently failing should be marked as such. These are often the ones where the test requires specific features to be enabled on the test subscription.
**Likely Cause**
Rather than require the user to click the link to open the logs or recording, it would be helpful if the system (A01 or an external one) would offer a best guess diagnosis. For example, cases where the subscription lacks a feature will often say "Subscription X is not registered for feature...". Even if all it did is find the root error message in the stack trace and display it, it would save us time. Any kind of non-CLI error (ValueError, TypeError, etc) should be a red flag that there's a likely CLI regression, and so on.
| infrastructure | live test system should help reduce noise and improve triage we store all of our live test results in a database we should analyze that database to help identify certain patterns of test failures recommend adding two columns to test results category and likely cause category tests that sometimes fail and sometimes don t over a period of time should be labeled as flaky these would be the lowest priority fixes involving adding some kind of retry logic or otherwise tweaking test infrastructure to be more stable tests that have been consistently passing and suddenly fail i e new failure should be flagged for quick investigation these could represent a service outage failure of the live test systems itself or most critically a cli regression that was masked by the presence of a recording any test failure marked live only should be a red flag as described in a separate issue tests that have been consistently failing should be marked as such these are often the ones where the test requires specific features to be enabled on the test subscription likely cause rather than require the user to click the link to open the logs or recording it would be helpful if the system or an external one would offer a best guess diagnosis for example cases where the subscription lacks a feature will often say subscription x is not registered for feature even if all it did is find the root error message in the stack trace and display it it would save us time any kind of non cli error valueerror typeerror etc should be a red flag that there s a likely cli regression and so on | 1 |
29,762 | 24,254,991,654 | IssuesEvent | 2022-09-27 16:58:57 | timescale/timescaledb-toolkit | https://api.github.com/repos/timescale/timescaledb-toolkit | closed | Consider enforcing rustfmt | Infrastructure | This would be a test that asserts no output from `diff source-file <(rustfmt source-file)`, i.e. that all files have been formatted with `rustfmt`.
This is an inexpensive test and part of the value of using a source code formatter is a consistent reading experience, so we'll want to run this on every push so we can have clean formatting before anyone else reads a change. | 1.0 | Consider enforcing rustfmt - This would be a test that asserts no output from `diff source-file <(rustfmt source-file)`, i.e. that all files have been formatted with `rustfmt`.
This is an inexpensive test and part of the value of using a source code formatter is a consistent reading experience, so we'll want to run this on every push so we can have clean formatting before anyone else reads a change. | infrastructure | consider enforcing rustfmt this would be a test that asserts no output from diff source file rustfmt source file i e that all files have been formatted with rustfmt this is an inexpensive test and part of the value of using a source code formatter is a consistent reading experience so we ll want to run this on every push so we can have clean formatting before anyone else reads a change | 1 |
20,908 | 14,235,912,569 | IssuesEvent | 2020-11-18 15:22:53 | microsoft/dotnet-framework-docker | https://api.github.com/repos/microsoft/dotnet-framework-docker | closed | Tests results are not being published in build pipeline | area-infrastructure bug | The tasks that publishes the test results outputs the following warning: `##[warning]No test result files matching tests/Microsoft.DotNet.Docker.Tests/TestResults//**/*.trx were found`. The path it's looking for is incorrect. This prevents test results from showing up in the AzDO UI. | 1.0 | Tests results are not being published in build pipeline - The tasks that publishes the test results outputs the following warning: `##[warning]No test result files matching tests/Microsoft.DotNet.Docker.Tests/TestResults//**/*.trx were found`. The path it's looking for is incorrect. This prevents test results from showing up in the AzDO UI. | infrastructure | tests results are not being published in build pipeline the tasks that publishes the test results outputs the following warning no test result files matching tests microsoft dotnet docker tests testresults trx were found the path it s looking for is incorrect this prevents test results from showing up in the azdo ui | 1 |
34,800 | 30,472,795,881 | IssuesEvent | 2023-07-17 14:36:51 | dotnet/runtime | https://api.github.com/repos/dotnet/runtime | closed | CI: `runtime-ioslike-mono`, and `runtime-ioslike-coreclr` pipelines are broken due to setup issues | area-Infrastructure-coreclr blocking-clean-ci area-Infrastructure-mono in-pr | The new pipelines are failing at the `Send to helix` step.
[Log](https://dev.azure.com/dnceng-public/public/_build/results?buildId=341731&view=logs&j=7f49df26-8126-5de3-bf2f-6ac6bde01830&t=c62b6a8c-bf71-5057-7f45-9c355fe0b802):
```
.packages/microsoft.dotnet.helix.sdk/8.0.0-beta.23364.2/tools/azure-pipelines/AzurePipelines.MultiQueue.targets(16,5): error : (NETCORE_ENGINEERING_TELEMETRY=Build) A call to an Azure DevOps api returned 401, which may indicate a bad 'System.AccessToken' value.
Please Check the 'Make secrets available to builds of forks' in the pipeline pull request validation trigger settings.
We have evaluated the security considerations of this setting and have determined that it is fine to use for our public PR validation builds.
```
cc @kotlarmilos @steveisok
<!-- Error message template -->
### Known Issue Error Message
Fill the error message using [step by step known issues guidance](https://github.com/dotnet/arcade/blob/main/Documentation/Projects/Build%20Analysis/KnownIssues.md#how-to-fill-out-a-known-issue-error-section).
<!-- Use ErrorMessage for String.Contains matches. Use ErrorPattern for regex matches (single line/no backtracking). Set BuildRetry to `true` to retry builds with this error. Set ExcludeConsoleLog to `true` to skip helix logs analysis. -->
```json
{
"ErrorMessage": "",
"ErrorPattern": "",
"BuildRetry": false,
"ExcludeConsoleLog": false
}
```
<!--Known issue error report start -->
### Report
#### Summary
|24-Hour Hit Count|7-Day Hit Count|1-Month Count|
|---|---|---|
|0|0|0|
<!--Known issue error report end --> | 2.0 | CI: `runtime-ioslike-mono`, and `runtime-ioslike-coreclr` pipelines are broken due to setup issues - The new pipelines are failing at the `Send to helix` step.
[Log](https://dev.azure.com/dnceng-public/public/_build/results?buildId=341731&view=logs&j=7f49df26-8126-5de3-bf2f-6ac6bde01830&t=c62b6a8c-bf71-5057-7f45-9c355fe0b802):
```
.packages/microsoft.dotnet.helix.sdk/8.0.0-beta.23364.2/tools/azure-pipelines/AzurePipelines.MultiQueue.targets(16,5): error : (NETCORE_ENGINEERING_TELEMETRY=Build) A call to an Azure DevOps api returned 401, which may indicate a bad 'System.AccessToken' value.
Please Check the 'Make secrets available to builds of forks' in the pipeline pull request validation trigger settings.
We have evaluated the security considerations of this setting and have determined that it is fine to use for our public PR validation builds.
```
cc @kotlarmilos @steveisok
<!-- Error message template -->
### Known Issue Error Message
Fill the error message using [step by step known issues guidance](https://github.com/dotnet/arcade/blob/main/Documentation/Projects/Build%20Analysis/KnownIssues.md#how-to-fill-out-a-known-issue-error-section).
<!-- Use ErrorMessage for String.Contains matches. Use ErrorPattern for regex matches (single line/no backtracking). Set BuildRetry to `true` to retry builds with this error. Set ExcludeConsoleLog to `true` to skip helix logs analysis. -->
```json
{
"ErrorMessage": "",
"ErrorPattern": "",
"BuildRetry": false,
"ExcludeConsoleLog": false
}
```
<!--Known issue error report start -->
### Report
#### Summary
|24-Hour Hit Count|7-Day Hit Count|1-Month Count|
|---|---|---|
|0|0|0|
<!--Known issue error report end --> | infrastructure | ci runtime ioslike mono and runtime ioslike coreclr pipelines are broken due to setup issues the new pipelines are failing at the send to helix step packages microsoft dotnet helix sdk beta tools azure pipelines azurepipelines multiqueue targets error netcore engineering telemetry build a call to an azure devops api returned which may indicate a bad system accesstoken value please check the make secrets available to builds of forks in the pipeline pull request validation trigger settings we have evaluated the security considerations of this setting and have determined that it is fine to use for our public pr validation builds cc kotlarmilos steveisok known issue error message fill the error message using json errormessage errorpattern buildretry false excludeconsolelog false report summary hour hit count day hit count month count | 1 |
14,658 | 11,043,268,274 | IssuesEvent | 2019-12-09 10:49:22 | theexiile1305/showcase-wca | https://api.github.com/repos/theexiile1305/showcase-wca | closed | Fixes current eslint failure | bug infrastructure | Currently, there is an eslint failure shown at [https://circleci.com/gh/theexiile1305/showcase-wca/211](https://circleci.com/gh/theexiile1305/showcase-wca/211). Additionally update the README in order to show the badges of the master branch. | 1.0 | Fixes current eslint failure - Currently, there is an eslint failure shown at [https://circleci.com/gh/theexiile1305/showcase-wca/211](https://circleci.com/gh/theexiile1305/showcase-wca/211). Additionally update the README in order to show the badges of the master branch. | infrastructure | fixes current eslint failure currently there is an eslint failure shown at additionally update the readme in order to show the badges of the master branch | 1 |
8,334 | 7,345,604,022 | IssuesEvent | 2018-03-07 17:56:11 | OpenLiberty/open-liberty | https://api.github.com/repos/OpenLiberty/open-liberty | opened | [spring boot feature] determine support for security manager | team:OSGi Infrastructure | There are currently issues with running spring framework with security manager enabled. This issue is to investigate the issues and determine if we should fix them. | 1.0 | [spring boot feature] determine support for security manager - There are currently issues with running spring framework with security manager enabled. This issue is to investigate the issues and determine if we should fix them. | infrastructure | determine support for security manager there are currently issues with running spring framework with security manager enabled this issue is to investigate the issues and determine if we should fix them | 1 |
52,096 | 12,876,353,650 | IssuesEvent | 2020-07-11 04:15:32 | GoogleCloudPlatform/java-docs-samples | https://api.github.com/repos/GoogleCloudPlatform/java-docs-samples | closed | com.example.vision.DetectSafeSearchTest: testSafeSearch failed | buildcop: issue priority: p1 type: bug | This test failed!
To configure my behavior, see [the Build Cop Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/master/packages/buildcop).
If I'm commenting on this issue too often, add the `buildcop: quiet` label and
I will stop commenting.
---
commit: 4c3faaeb6b5f163281c248cd5698ce33d2142031
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/93984f21-a2f2-4580-bff8-fc9dcb74d2d6), [Sponge](http://sponge2/93984f21-a2f2-4580-bff8-fc9dcb74d2d6)
status: failed
<details><summary>Test output</summary><br><pre>expected to contain:
adult:
but was:
Error: Internal server error. Failed to process features.
at com.example.vision.DetectSafeSearchTest.testSafeSearch(DetectSafeSearchTest.java:55)
</pre></details> | 1.0 | com.example.vision.DetectSafeSearchTest: testSafeSearch failed - This test failed!
To configure my behavior, see [the Build Cop Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/master/packages/buildcop).
If I'm commenting on this issue too often, add the `buildcop: quiet` label and
I will stop commenting.
---
commit: 4c3faaeb6b5f163281c248cd5698ce33d2142031
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/93984f21-a2f2-4580-bff8-fc9dcb74d2d6), [Sponge](http://sponge2/93984f21-a2f2-4580-bff8-fc9dcb74d2d6)
status: failed
<details><summary>Test output</summary><br><pre>expected to contain:
adult:
but was:
Error: Internal server error. Failed to process features.
at com.example.vision.DetectSafeSearchTest.testSafeSearch(DetectSafeSearchTest.java:55)
</pre></details> | non_infrastructure | com example vision detectsafesearchtest testsafesearch failed this test failed to configure my behavior see if i m commenting on this issue too often add the buildcop quiet label and i will stop commenting commit buildurl status failed test output expected to contain adult but was error internal server error failed to process features at com example vision detectsafesearchtest testsafesearch detectsafesearchtest java | 0 |
245,502 | 26,549,248,547 | IssuesEvent | 2023-01-20 05:25:19 | nidhi7598/linux-3.0.35_CVE-2022-45934 | https://api.github.com/repos/nidhi7598/linux-3.0.35_CVE-2022-45934 | opened | CVE-2013-7421 (Medium) detected in linuxlinux-3.0.49 | security vulnerability | ## CVE-2013-7421 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-3.0.49</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v3.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v3.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/nidhi7598/linux-3.0.35_CVE-2022-45934/commit/5e23b7f9d2dd0154edd54986754eecd5b5308571">5e23b7f9d2dd0154edd54986754eecd5b5308571</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The Crypto API in the Linux kernel before 3.18.5 allows local users to load arbitrary kernel modules via a bind system call for an AF_ALG socket with a module name in the salg_name field, a different vulnerability than CVE-2014-9644.
<p>Publish Date: 2015-03-02
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2013-7421>CVE-2013-7421</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.0</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-7421">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-7421</a></p>
<p>Release Date: 2015-03-02</p>
<p>Fix Resolution: v3.19-rc1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2013-7421 (Medium) detected in linuxlinux-3.0.49 - ## CVE-2013-7421 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-3.0.49</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v3.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v3.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/nidhi7598/linux-3.0.35_CVE-2022-45934/commit/5e23b7f9d2dd0154edd54986754eecd5b5308571">5e23b7f9d2dd0154edd54986754eecd5b5308571</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The Crypto API in the Linux kernel before 3.18.5 allows local users to load arbitrary kernel modules via a bind system call for an AF_ALG socket with a module name in the salg_name field, a different vulnerability than CVE-2014-9644.
<p>Publish Date: 2015-03-02
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2013-7421>CVE-2013-7421</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.0</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-7421">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-7421</a></p>
<p>Release Date: 2015-03-02</p>
<p>Fix Resolution: v3.19-rc1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_infrastructure | cve medium detected in linuxlinux cve medium severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href found in base branch master vulnerable source files vulnerability details the crypto api in the linux kernel before allows local users to load arbitrary kernel modules via a bind system call for an af alg socket with a module name in the salg name field a different vulnerability than cve publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend | 0 |
28,717 | 23,463,969,498 | IssuesEvent | 2022-08-16 15:11:10 | airyhq/airy | https://api.github.com/repos/airyhq/airy | closed | components.install installs the latest chart in the repo | infrastructure bug | ## Describe the bug
When installing a new component, the backend installs the latest helm chart in the repo, instead of installing the version of the currently running `Airy Core` installation.
## To Reproduce
Steps to reproduce the behavior:
1. Create an Airy Core instance with version other then the latest develop
2. Install a new component
3. See with `helm list` how the versions of Airy Core and the component differ
## Expected behavior
As we agreed for now that the components will track the version of Airy Core, the new component (helm chart) should be installed with the same version.
## Screenshots
/
## Environment
- All
## Additional context
/
| 1.0 | components.install installs the latest chart in the repo - ## Describe the bug
When installing a new component, the backend installs the latest helm chart in the repo, instead of installing the version of the currently running `Airy Core` installation.
## To Reproduce
Steps to reproduce the behavior:
1. Create an Airy Core instance with version other then the latest develop
2. Install a new component
3. See with `helm list` how the versions of Airy Core and the component differ
## Expected behavior
As we agreed for now that the components will track the version of Airy Core, the new component (helm chart) should be installed with the same version.
## Screenshots
/
## Environment
- All
## Additional context
/
| infrastructure | components install installs the latest chart in the repo describe the bug when installing a new component the backend installs the latest helm chart in the repo instead of instaling the version of the currently running airy core installation to reproduce steps to reproduce the behavior create an airy core instance with version other then the latest develop install a new component see with helm list how the versions of airy core and the component differ expected behavior as we agreed for now that the components will track the version of airy core the new component helm chart should be installed with the same version screenshots environment all additional context | 1 |
12,315 | 9,693,102,605 | IssuesEvent | 2019-05-24 15:16:36 | Demiplane/Womb | https://api.github.com/repos/Demiplane/Womb | closed | Change hosting URL | infrastructure | Currently hosted on womb3.azurewebsites.net
Should move to a subdomain of demiplane.io, preferably womb.demiplane.io | 1.0 | Change hosting URL - Currently hosted on womb3.azurewebsites.net
Should move to a subdomain of demiplane.io, preferably womb.demiplane.io | infrastructure | change hosting url currently hosted on azurewebsites net should move to a subdomain of demiplane io preferably womb demiplane io | 1 |
96,790 | 10,963,193,597 | IssuesEvent | 2019-11-27 19:04:20 | dankamongmen/notcurses | https://api.github.com/repos/dankamongmen/notcurses | opened | Render AVFrames | documentation enhancement | Now that we're able to extract AVFrames from video and images, it's time to render them down. We're first looking for the quality level of e.g. `mpv -vo tct`, but then we ought be able to use better drawing characters, better palette matching, and better dithering to improve on it. | 1.0 | Render AVFrames - Now that we're able to extract AVFrames from video and images, it's time to render them down. We're first looking for the quality level of e.g. `mpv -vo tct`, but then we ought be able to use better drawing characters, better palette matching, and better dithering to improve on it. | non_infrastructure | render avframes now that we re able to extract avframes from video and images it s time to render them down we re first looking for the quality level of e g mpv vo tct but then we ought be able to use better drawing characters better palette matching and better dithering to improve on it | 0 |
26,214 | 19,727,673,355 | IssuesEvent | 2022-01-13 21:45:50 | PardeeCenterDU/IFs-Issues-Tracking | https://api.github.com/repos/PardeeCenterDU/IFs-Issues-Tracking | closed | Check if regression needs update: GDP/Capita (PPP 2011) Versus Computer Owning Households(MOSTRECENT) Log | Priority 2 modeling - infrastructure model | INFRASTR.bas
IFsInfra (in 2 places, first year and all but first):
'mti 2015/05/06 - start
'Temporary replcament with a simpe rregression of % of households with computers on GDP PCP (Logistic function)
TbName$ = "GDP/Capita (PPP 2011) Versus Computer Owning Households(MOSTRECENT) Log" 'function for earlier years yet to be estimated
'mti 2015/05/06 – end
'Temporary replcament with a simpe rregression of % of households with computers on GDP PCP (Logistic function)
'mti 2015/05/06 - start
TbName$ = "GDP/Capita (PPP 2011) Versus Computer Owning Households(MOSTRECENT) Log" 'function for earlier years yet to be estimated
'mti 2015/05/06 – end
| 1.0 | Check if regression needs update: GDP/Capita (PPP 2011) Versus Computer Owning Households(MOSTRECENT) Log - INFRASTR.bas
IFsInfra (in 2 places, first year and all but first):
'mti 2015/05/06 - start
'Temporary replcament with a simpe rregression of % of households with computers on GDP PCP (Logistic function)
TbName$ = "GDP/Capita (PPP 2011) Versus Computer Owning Households(MOSTRECENT) Log" 'function for earlier years yet to be estimated
'mti 2015/05/06 – end
'Temporary replcament with a simpe rregression of % of households with computers on GDP PCP (Logistic function)
'mti 2015/05/06 - start
TbName$ = "GDP/Capita (PPP 2011) Versus Computer Owning Households(MOSTRECENT) Log" 'function for earlier years yet to be estimated
'mti 2015/05/06 – end
| infrastructure | check if regression needs update gdp capita ppp versus computer owning households mostrecent log infrastr bas ifsinfra in places first year and all but first mti start temporary replcament with a simpe rregression of of households with computers on gdp pcp logistic function tbname gdp capita ppp versus computer owning households mostrecent log function for earlier years yet to be estimated mti – end temporary replcament with a simpe rregression of of households with computers on gdp pcp logistic function mti start tbname gdp capita ppp versus computer owning households mostrecent log function for earlier years yet to be estimated mti – end | 1 |
17,149 | 12,236,385,356 | IssuesEvent | 2020-05-04 16:16:03 | nwfsc-fram/boatnet | https://api.github.com/repos/nwfsc-fram/boatnet | reopened | Implement login expiration extension on successful boatnet login | Prj:infrastructure | Brendan mentioned that we could extend the stored procedures that handle Boatnet logins to automatically extend the 90-day timeout for Boatnet users.
Creating this ticket to track that request. We may need to do this ourselves (update to bn-auth) if we can get the stored procedure that performs this update.
Attn @sethgerou-noaa | 1.0 | Implement login expiration extension on successful boatnet login - Brendan mentioned that we could extend the stored procedures that handle Boatnet logins to automatically extend the 90-day timeout for Boatnet users.
Creating this ticket to track that request. We may need to do this ourselves (update to bn-auth) if we can get the stored procedure that performs this update.
Attn @sethgerou-noaa | infrastructure | implement login expiration extension on successful boatnet login brendan mentioned that we could extend the stored procedures that handle boatnet logins to automatically extend the day timeout for boatnet users creating this ticket to track that request we may need to do this ourselves update to bn auth if we can get the stored procedure that performs this update attn sethgerou noaa | 1 |
305,724 | 26,407,110,118 | IssuesEvent | 2023-01-13 09:03:07 | pyroteus/movement | https://api.github.com/repos/pyroteus/movement | closed | Hook up GitHub Actions | enhancement testing | Should be fairly straightforward, as we can use the default Firedrake docker image. | 1.0 | Hook up GitHub Actions - Should be fairly straightforward, as we can use the default Firedrake docker image. | non_infrastructure | hook up github actions should be fairly straightforward as we can use the default firedrake docker image | 0 |
20,538 | 14,000,795,021 | IssuesEvent | 2020-10-28 12:50:21 | raiden-network/light-client | https://api.github.com/repos/raiden-network/light-client | reopened | Cypress: Add test for mediated transfers | dApp 📱 infrastructure 🚧 test | ## Description
We need to add tests for not only direct transfers but also mediated ones.
## Acceptance criteria
-
## Tasks
- [ ] Figure out how to set up the e2e tests so that mediated transfers can be done in the dApp via Cypress.
| 1.0 | Cypress: Add test for mediated transfers - ## Description
We need to add tests for not only direct transfers but also mediated ones.
## Acceptance criteria
-
## Tasks
- [ ] Figure out how to set up the e2e tests so that mediated transfers can be done in the dApp via Cypress.
| infrastructure | cypress add test for mediated transfers description we need to add tests for not only direct transfers but also mediated ones acceptance criteria tasks figure out how to set up the tests so that mediated transfers can be done in the dapp via cypress | 1 |
25,130 | 4,147,073,596 | IssuesEvent | 2016-06-15 04:29:49 | NishantUpadhyay-BTC/BLISS-Issue-Tracking | https://api.github.com/repos/NishantUpadhyay-BTC/BLISS-Issue-Tracking | closed | Guest UI > Issue in Availability Table > Show Special periods | bug Deployed to Test | 1. There are no Special Periods in Office UI. They still show in > Guest UI > Special Periods on Availability Table

Expected - If there are no Special periods in Office UI, then Special periods should not show on the Availability table | 1.0 | Guest UI > Issue in Availability Table > Show Special periods - 1. There are no Special Periods in Office UI. They still show in > Guest UI > Special Periods on Availability Table

Expected - If there are no Special periods in Office UI, then Special periods should not show on the Availability table | non_infrastructure | guest ui issue in availability table show special periods there r no special periods in office ui still show in guest ui special periods on availability table expected if there r no special periods in office ui so should not show special periods on availability table | 0 |
228,172 | 17,421,645,241 | IssuesEvent | 2021-08-04 02:36:02 | arq5x/bedtools2 | https://api.github.com/repos/arq5x/bedtools2 | closed | Empty links in documentation | documentation enhancement next-major-release | All of these links are empty in the documentation it seems
https://bedtools.readthedocs.io/en/latest/content/tools/makewindows.html https://bedtools.readthedocs.io/en/latest/content/tools/bedpetobam.html https://bedtools.readthedocs.io/en/latest/content/tools/expand.html https://bedtools.readthedocs.io/en/latest/content/tools/igv.html https://bedtools.readthedocs.io/en/latest/content/tools/multiinter.html https://bedtools.readthedocs.io/en/latest/content/tools/nuc.html https://bedtools.readthedocs.io/en/latest/content/tools/pairtobed.html https://bedtools.readthedocs.io/en/latest/content/tools/tag.html
I found this because I wanted makewindows docs personally :) | 1.0 | Empty links in documentation - All of these links are empty in the documentation it seems
https://bedtools.readthedocs.io/en/latest/content/tools/makewindows.html https://bedtools.readthedocs.io/en/latest/content/tools/bedpetobam.html https://bedtools.readthedocs.io/en/latest/content/tools/expand.html https://bedtools.readthedocs.io/en/latest/content/tools/igv.html https://bedtools.readthedocs.io/en/latest/content/tools/multiinter.html https://bedtools.readthedocs.io/en/latest/content/tools/nuc.html https://bedtools.readthedocs.io/en/latest/content/tools/pairtobed.html https://bedtools.readthedocs.io/en/latest/content/tools/tag.html
I found this because I wanted makewindows docs personally :) | non_infrastructure | empty links in documentation all of these links are empty in the documentation it seems i found this because i wanted makewindows docs personally | 0 |
22,659 | 15,361,589,029 | IssuesEvent | 2021-03-01 18:20:36 | seattle-uat/universal-application-tool | https://api.github.com/repos/seattle-uat/universal-application-tool | closed | Create dev-mode controller action that seeds DB | infrastructure | It would be really nice to have an endpoint we could hit while developing that would put a bunch of content in the database so that we could test flows without needing to manually add data each time. This thing should also clean up after itself, of course.
* Create a controller action that truncates the tables and then inserts the seeds, only in dev mode (else throws)
* Create an endpoint like /seed-db that calls that action
* Truncate the tables on teardown of the controller | 1.0 | Create dev-mode controller action that seeds DB - It would be really nice to have an endpoint we could hit while developing that would put a bunch of content in the database so that we could test flows without needing to manually add data each time. This thing should also clean up after itself, of course.
* Create a controller action that truncates the tables and then inserts the seeds, only in dev mode (else throws)
* Create an endpoint like /seed-db that calls that action
* Truncate the tables on teardown of the controller | infrastructure | create dev mode controller action that seeds db it would be really nice to have an endpoint we could hit while developing that would put a bunch of content in the database so that we could test flows without needing to manually add data each time this thing should also clean up after itself of course create a controller action that truncates the tables and then inserts the seeds only in dev mode else throws create an endpoint like seed db calls that action truncate the tables on teardown of the controller | 1 |
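The record above describes a dev-only endpoint that truncates the tables and inserts seed data. As a language-agnostic illustration (the project itself is not written in Go), a minimal sketch of such a guard-and-seed handler might look like this; the environment variable, table names, SQL statements, connection string, and port are all assumptions, not the project's actual schema or configuration.

```go
package main

import (
	"database/sql"
	"net/http"
	"os"

	_ "github.com/lib/pq" // Postgres driver; assumed, the real project may use another database
)

// seedHandler truncates the demo tables and inserts fixture rows.
// It refuses to run unless the process is explicitly in dev mode.
func seedHandler(db *sql.DB) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		if os.Getenv("APP_ENV") != "dev" {
			http.Error(w, "seeding is only allowed in dev mode", http.StatusForbidden)
			return
		}
		stmts := []string{
			"TRUNCATE TABLE programs, questions CASCADE", // table names are placeholders
			"INSERT INTO programs (name) VALUES ('sample program')",
			"INSERT INTO questions (text) VALUES ('sample question')",
		}
		for _, s := range stmts {
			if _, err := db.Exec(s); err != nil {
				http.Error(w, err.Error(), http.StatusInternalServerError)
				return
			}
		}
		w.Write([]byte("seeded\n"))
	}
}

func main() {
	db, err := sql.Open("postgres", os.Getenv("DATABASE_URL"))
	if err != nil {
		panic(err)
	}
	http.HandleFunc("/seed-db", seedHandler(db))
	http.ListenAndServe(":8080", nil)
}
```

In a sketch like this, hitting /seed-db during development resets and repopulates the tables, while any non-dev environment gets a 403.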
357,230 | 10,603,822,809 | IssuesEvent | 2019-10-10 16:47:48 | googleapis/gapic-generator-python | https://api.github.com/repos/googleapis/gapic-generator-python | opened | Handle primitive types in request setup | priority: p0 samplegen type: feature request | The code currently doesn't handle primitive message fields. | 1.0 | Handle primitive types in request setup - The code currently doesn't handle primitive message fields. | non_infrastructure | handle primitive types in request setup the code currently doesn t handle primitive message fields | 0 |
190,806 | 6,822,702,182 | IssuesEvent | 2017-11-07 21:01:02 | TCA-Team/iOS | https://api.github.com/repos/TCA-Team/iOS | closed | Add search bar to TUM.sexy redirects | enhancement good first issue Low Priority | As the TUM.sexy redirects are growing - add a search bar to the view which filters the list of redirects. | 1.0 | Add search bar to TUM.sexy redirects - As the TUM.sexy redirects are growing - add a search bar to the view which filters the list of redirects. | non_infrastructure | add search bar to tum sexy redirects as the tum sexy redirects are growing add a search bar to the view which filters the list of redirects | 0 |
616,580 | 19,306,465,678 | IssuesEvent | 2021-12-13 12:04:47 | ballerina-platform/ballerina-lang | https://api.github.com/repos/ballerina-platform/ballerina-lang | closed | BAD sad error when using constant expression in index access expression | Type/Bug Priority/Blocker Points/2 Area/TypeChecker Crash | ```
const N = 2;
const M = 3;
type T1 int[N][M]; // same as (int[M])[N]
public function main() {
T1 x1 = [[1, 2, 3], [1, 2, 3]];
int[M] _ = x1[N-1]; // Problem is here
}
```
```
2021-12-01 09:03:23,277] SEVERE {b7a.log.crash} - class org.wso2.ballerinalang.compiler.tree.expressions.BLangBinaryExpr cannot be cast to class org.wso2.ballerinalang.compiler.tree.expressions.BLangSimpleVarRef (org.wso2.ballerinalang.compiler.tree.expressions.BLangBinaryExpr and org.wso2.ballerinalang.compiler.tree.expressions.BLangSimpleVarRef are in unnamed module of loader 'app')
java.lang.ClassCastException: class org.wso2.ballerinalang.compiler.tree.expressions.BLangBinaryExpr cannot be cast to class org.wso2.ballerinalang.compiler.tree.expressions.BLangSimpleVarRef (org.wso2.ballerinalang.compiler.tree.expressions.BLangBinaryExpr and org.wso2.ballerinalang.compiler.tree.expressions.BLangSimpleVarRef are in unnamed module of loader 'app')
at org.wso2.ballerinalang.compiler.semantics.analyzer.TypeChecker.getConstIndex(TypeChecker.java:7621)
at org.wso2.ballerinalang.compiler.semantics.analyzer.TypeChecker.checkArrayIndexBasedAccess(TypeChecker.java:7639)
at org.wso2.ballerinalang.compiler.semantics.analyzer.TypeChecker.checkListIndexBasedAccess(TypeChecker.java:7685)
at org.wso2.ballerinalang.compiler.semantics.analyzer.TypeChecker.checkIndexAccessExpr(TypeChecker.java:7500)
at org.wso2.ballerinalang.compiler.semantics.analyzer.TypeChecker.visit(TypeChecker.java:2731)
at org.wso2.ballerinalang.compiler.tree.expressions.BLangIndexBasedAccess.accept(BLangIndexBasedAccess.java:56)
at org.wso2.ballerinalang.compiler.semantics.analyzer.TypeChecker.checkExpr(TypeChecker.java:376)
at org.wso2.ballerinalang.compiler.semantics.analyzer.TypeChecker.checkExpr(TypeChecker.java:355)
at org.wso2.ballerinalang.compiler.semantics.analyzer.SemanticAnalyzer.visit(SemanticAnalyzer.java:978)
at org.wso2.ballerinalang.compiler.tree.BLangSimpleVariable.accept(BLangSimpleVariable.java:53)
``` | 1.0 | BAD sad error when using constant expression in index access expression - ```
const N = 2;
const M = 3;
type T1 int[N][M]; // same as (int[M])[N]
public function main() {
T1 x1 = [[1, 2, 3], [1, 2, 3]];
int[M] _ = x1[N-1]; // Problem is here
}
```
```
2021-12-01 09:03:23,277] SEVERE {b7a.log.crash} - class org.wso2.ballerinalang.compiler.tree.expressions.BLangBinaryExpr cannot be cast to class org.wso2.ballerinalang.compiler.tree.expressions.BLangSimpleVarRef (org.wso2.ballerinalang.compiler.tree.expressions.BLangBinaryExpr and org.wso2.ballerinalang.compiler.tree.expressions.BLangSimpleVarRef are in unnamed module of loader 'app')
java.lang.ClassCastException: class org.wso2.ballerinalang.compiler.tree.expressions.BLangBinaryExpr cannot be cast to class org.wso2.ballerinalang.compiler.tree.expressions.BLangSimpleVarRef (org.wso2.ballerinalang.compiler.tree.expressions.BLangBinaryExpr and org.wso2.ballerinalang.compiler.tree.expressions.BLangSimpleVarRef are in unnamed module of loader 'app')
at org.wso2.ballerinalang.compiler.semantics.analyzer.TypeChecker.getConstIndex(TypeChecker.java:7621)
at org.wso2.ballerinalang.compiler.semantics.analyzer.TypeChecker.checkArrayIndexBasedAccess(TypeChecker.java:7639)
at org.wso2.ballerinalang.compiler.semantics.analyzer.TypeChecker.checkListIndexBasedAccess(TypeChecker.java:7685)
at org.wso2.ballerinalang.compiler.semantics.analyzer.TypeChecker.checkIndexAccessExpr(TypeChecker.java:7500)
at org.wso2.ballerinalang.compiler.semantics.analyzer.TypeChecker.visit(TypeChecker.java:2731)
at org.wso2.ballerinalang.compiler.tree.expressions.BLangIndexBasedAccess.accept(BLangIndexBasedAccess.java:56)
at org.wso2.ballerinalang.compiler.semantics.analyzer.TypeChecker.checkExpr(TypeChecker.java:376)
at org.wso2.ballerinalang.compiler.semantics.analyzer.TypeChecker.checkExpr(TypeChecker.java:355)
at org.wso2.ballerinalang.compiler.semantics.analyzer.SemanticAnalyzer.visit(SemanticAnalyzer.java:978)
at org.wso2.ballerinalang.compiler.tree.BLangSimpleVariable.accept(BLangSimpleVariable.java:53)
``` | non_infrastructure | bad sad error when using constant expression in index access expression const n const m type int same as int public function main int problem is here severe log crash class org ballerinalang compiler tree expressions blangbinaryexpr cannot be cast to class org ballerinalang compiler tree expressions blangsimplevarref org ballerinalang compiler tree expressions blangbinaryexpr and org ballerinalang compiler tree expressions blangsimplevarref are in unnamed module of loader app java lang classcastexception class org ballerinalang compiler tree expressions blangbinaryexpr cannot be cast to class org ballerinalang compiler tree expressions blangsimplevarref org ballerinalang compiler tree expressions blangbinaryexpr and org ballerinalang compiler tree expressions blangsimplevarref are in unnamed module of loader app at org ballerinalang compiler semantics analyzer typechecker getconstindex typechecker java at org ballerinalang compiler semantics analyzer typechecker checkarrayindexbasedaccess typechecker java at org ballerinalang compiler semantics analyzer typechecker checklistindexbasedaccess typechecker java at org ballerinalang compiler semantics analyzer typechecker checkindexaccessexpr typechecker java at org ballerinalang compiler semantics analyzer typechecker visit typechecker java at org ballerinalang compiler tree expressions blangindexbasedaccess accept blangindexbasedaccess java at org ballerinalang compiler semantics analyzer typechecker checkexpr typechecker java at org ballerinalang compiler semantics analyzer typechecker checkexpr typechecker java at org ballerinalang compiler semantics analyzer semanticanalyzer visit semanticanalyzer java at org ballerinalang compiler tree blangsimplevariable accept blangsimplevariable java | 0 |
320,563 | 9,782,435,495 | IssuesEvent | 2019-06-07 23:39:27 | GoogleChrome/lighthouse | https://api.github.com/repos/GoogleChrome/lighthouse | closed | DevTools Error: The asynchronous expression exceeded the allotted time of 60 | needs-priority | **Initial URL**: https://web.skype.com/
**Chrome Version**: 72.0.3626.119
**Error Message**: The asynchronous expression exceeded the allotted time of 60s
**Stack Trace**:
```
Error: The asynchronous expression exceeded the allotted time of 60s
at _ (chrome-devtools://devtools/remote/serve_file/@9a65993e2cde1b5797ec98da4cd9abcea464cd7b/audits2_worker/audits2_worker_module.js:1034:120)
``` | 1.0 | DevTools Error: The asynchronous expression exceeded the allotted time of 60 - **Initial URL**: https://web.skype.com/
**Chrome Version**: 72.0.3626.119
**Error Message**: The asynchronous expression exceeded the allotted time of 60s
**Stack Trace**:
```
Error: The asynchronous expression exceeded the allotted time of 60s
at _ (chrome-devtools://devtools/remote/serve_file/@9a65993e2cde1b5797ec98da4cd9abcea464cd7b/audits2_worker/audits2_worker_module.js:1034:120)
``` | non_infrastructure | devtools error the asynchronous expression exceeded the allotted time of initial url chrome version error message the asynchronous expression exceeded the allotted time of stack trace error the asynchronous expression exceeded the allotted time of at chrome devtools devtools remote serve file worker worker module js | 0 |
3,103 | 4,056,930,455 | IssuesEvent | 2016-05-24 20:20:46 | matthiasbeyer/imag | https://api.github.com/repos/matthiasbeyer/imag | opened | Simplify Error generation with new error setup | complexity/easy kind/enhancement kind/infrastructure kind/refactor meta/importance/medium part/bin/imag-counter part/bin/imag-link part/bin/imag-store part/bin/imag-tag part/bin/imag-view part/lib/imagcounter part/lib/imagentryfilter part/lib/imagentrylink part/lib/imagentrylist part/lib/imagentrymarkup part/lib/imagentryprinter part/lib/imagentrytag part/lib/imagentryview part/lib/imagerror part/lib/imagnotes part/lib/imagrt part/lib/imagstore part/lib/imagstorestdhook part/lib/imagutil | We had nice changes in the merge for `libimagerror` #410.
We should slowly change the codebase to use the new error setup and replace:
```rust
foo.map_err(|e| SomeError::new(SomeErrorKind::FooError, Some(Box::new(e))))
```
with
```rust
foo.map_err(Box::new)
.map_err(|e| SomeErrorKind::FooError.into_error_with_cause(e))
```
and
```rust
return SomeError::new(SomeErrorKind::FooError, None);
```
into
```rust
return SomeErrorKind::FooError.into();
```
Which is way more readable.
---
This is a long-standing issue until the whole codebase is cleaned up. | 1.0 | Simplify Error generation with new error setup - We had nice changes in the merge for `libimagerror` #410.
We should slowly change the codebase to use the new error setup and replace:
```rust
foo.map_err(|e| SomeError::new(SomeErrorKind::FooError, Some(Box::new(e))))
```
with
```rust
foo.map_err(Box::new)
.map_err(|e| SomeErrorKind::FooError.into_error_with_cause(e))
```
and
```rust
return SomeError::new(SomeErrorKind::FooError, None);
```
into
```rust
return SomeErrorKind::FooError.into();
```
Which is way more readable.
---
This is a long-standing issue until the whole codebase is cleaned up. | infrastructure | simplify error generation with new error setup we had nice changes in the merge for libimagerror we should slowly change the codebase to use the new error setup and replace rust foo map err e someerror new someerrorkind fooerror some box new e with rust foo map err box new map err e someerrorkind fooerror into error with cause e and rust return someerror new someerrorkind fooerror none into rust return someerrorkind fooerror into which is way more readable this is a long standing issue until the whole codebase is cleaned up | 1 |
56,188 | 23,719,996,255 | IssuesEvent | 2022-08-30 14:39:14 | hashicorp/terraform-provider-azurerm | https://api.github.com/repos/hashicorp/terraform-provider-azurerm | closed | Custom Policy Definition Modes are restricted to 'All' or 'Indexed' only | enhancement service/policy | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Community Note
<!--- Please keep this note for the community --->
* Please vote on this issue by adding a :thumbsup: [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Terraform Version
1.2.6
### AzureRM Provider Version
3.18.0
### Affected Resource(s)/Data Source(s)
azurerm_policy_definition
### Terraform Configuration Files
```hcl
resource "azurerm_management_group" "example" {
display_name = "Example Management Group"
}
resource azurerm_policy_definition def {
name = "keyvault_secret_expiration"
display_name = "Key Vault secrets should have an expiration date"
description = "Secrets should have a defined expiration date and not be permanent."
policy_type = "Custom"
mode = "Microsoft.KeyVault.Data" # <----
management_group_id = azurerm_management_group.example.id
metadata = jsonencode(local.metadata)
parameters = jsonencode(local.parameters)
policy_rule = jsonencode(local.policy_rule)
lifecycle {
create_before_destroy = true
}
timeouts {
read = "10m"
}
}
```
### Debug Output/Panic Output
```shell
> Error: creating/updating Policy Definition "keyvault_secret_expiration": policy.DefinitionsClient#CreateOrUpdateAtManagementGroup: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="InvalidPolicyDefinitionMode" Message="The creation of custom policy definition using mode 'Microsoft.KeyVault.Data' is not allowed. This mode is allowed only in built-in policy definitions."
```
### Expected Behaviour
This is the expected behaviour for custom definitions, the built-in versions will work as seen below:

To Remove:
`Microsoft.ContainerService.Data`, `Microsoft.CustomerLockbox.Data`, `Microsoft.DataCatalog.Data`, `Microsoft.KeyVault.Data`, `Microsoft.MachineLearningServices.Data`, `Microsoft.Network.Data` and `Microsoft.Synapse.Data`
To Keep:
`All`, `Indexed`, `Microsoft.Kubernetes.Data` (This mode supports custom definitions as a public preview)
### Actual Behaviour
_No response_
### Steps to Reproduce
`terraform apply`
### Important Factoids
_No response_
### References
https://github.com/gettek/terraform-azurerm-policy-as-code/issues/34
[Resource Provider Modes](https://docs.microsoft.com/en-us/azure/governance/policy/concepts/definition-structure#resource-provider-modes) | 1.0 | Custom Policy Definition Modes are restricted to 'All' or 'Indexed' only - ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Community Note
<!--- Please keep this note for the community --->
* Please vote on this issue by adding a :thumbsup: [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Terraform Version
1.2.6
### AzureRM Provider Version
3.18.0
### Affected Resource(s)/Data Source(s)
azurerm_policy_definition
### Terraform Configuration Files
```hcl
resource "azurerm_management_group" "example" {
display_name = "Example Management Group"
}
resource azurerm_policy_definition def {
name = "keyvault_secret_expiration"
display_name = "Key Vault secrets should have an expiration date"
description = "Secrets should have a defined expiration date and not be permanent."
policy_type = "Custom"
mode = "Microsoft.KeyVault.Data" # <----
management_group_id = azurerm_management_group.example.id
metadata = jsonencode(local.metadata)
parameters = jsonencode(local.parameters)
policy_rule = jsonencode(local.policy_rule)
lifecycle {
create_before_destroy = true
}
timeouts {
read = "10m"
}
}
```
### Debug Output/Panic Output
```shell
> Error: creating/updating Policy Definition "keyvault_secret_expiration": policy.DefinitionsClient#CreateOrUpdateAtManagementGroup: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="InvalidPolicyDefinitionMode" Message="The creation of custom policy definition using mode 'Microsoft.KeyVault.Data' is not allowed. This mode is allowed only in built-in policy definitions."
```
### Expected Behaviour
This is the expected behaviour for custom definitions, the built-in versions will work as seen below:

To Remove:
`Microsoft.ContainerService.Data`, `Microsoft.CustomerLockbox.Data`, `Microsoft.DataCatalog.Data`, `Microsoft.KeyVault.Data`, `Microsoft.MachineLearningServices.Data`, `Microsoft.Network.Data` and `Microsoft.Synapse.Data`
To Keep:
`All`, `Indexed`, `Microsoft.Kubernetes.Data` (This mode supports custom definitions as a public preview)
### Actual Behaviour
_No response_
### Steps to Reproduce
`terraform apply`
### Important Factoids
_No response_
### References
https://github.com/gettek/terraform-azurerm-policy-as-code/issues/34
[Resource Provider Modes](https://docs.microsoft.com/en-us/azure/governance/policy/concepts/definition-structure#resource-provider-modes) | non_infrastructure | custom policy definition modes are restricted to all or indexed only is there an existing issue for this i have searched the existing issues community note please vote on this issue by adding a thumbsup to the original issue to help the community and maintainers prioritize this request please do not leave or me too comments they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment terraform version azurerm provider version affected resource s data source s azurerm policy definition terraform configuration files hcl resource azurerm management group example display name example management group resource azurerm policy definition def name keyvault secret expiration display name key vault secrets should have an expiration date description secrets should have a defined expiration date and not be permanent policy type custom mode microsoft keyvault data management group id azurerm management group example id metadata jsonencode local metadata parameters jsonencode local parameters policy rule jsonencode local policy rule lifecycle create before destroy true timeouts read debug output panic output shell error creating updating policy definition keyvault secret expiration policy definitionsclient createorupdateatmanagementgroup failure responding to request statuscode original error autorest azure service returned an error status code invalidpolicydefinitionmode message the creation of custom policy definition using mode microsoft keyvault data is not allowed this mode is allowed only in built in policy definitions expected behaviour this is the expected behaviour for custom definitions the built in versions will work as seen below to remove microsoft containerservice data microsoft customerlockbox data microsoft datacatalog data microsoft keyvault data microsoft machinelearningservices data microsoft network data and microsoft synapse data to keep all indexed microsoft kubernetes data this mode supports custom definitions as a public preview actual behaviour no response steps to reproduce terraform apply important factoids no response references | 0 |
709,816 | 24,393,129,158 | IssuesEvent | 2022-10-04 16:47:53 | pendulum-chain/portal | https://api.github.com/repos/pendulum-chain/portal | closed | Minor fixes | priority:medium type:enhancement | - [x] Use only `rpc.pendulumchain.tech` as a provider, since every time the `connect()` method is called in the API, it is switching the provider, so even if we add collators as second options, they will be selected often ([code](https://github.com/pendulum-chain/portal/blob/ec9dbb5564ba7e35ba3e5ea4219b7967fc5eff26/src/NodeInfoProvider.tsx#L29))
- [x] Extract & rename mainAddres (to userAddress or something similar) from [this line](https://github.com/pendulum-chain/portal/blob/ec9dbb5564ba7e35ba3e5ea4219b7967fc5eff26/src/NodeInfoProvider.tsx#L12)
- [x] Unharcode prefix in [this line](https://github.com/pendulum-chain/portal/blob/ec9dbb5564ba7e35ba3e5ea4219b7967fc5eff26/src/components/OpenWallet.tsx#L16). The prefix can be obtained from [here](https://github.com/paritytech/ss58-registry/blob/78b2635fc63f0c388ad8d875ab0f61ba825fee13/ss58-registry.json#L518). But if we don't wanna fetch that file (and prefix shouldn't change) then at least we need to define a different prefix for when we connect to Pendulum, and in general a prefix based on the name of the chain.
- [x] Usually a `// FIXME` comment will include the ticket to fix the issue, but [here](https://github.com/pendulum-chain/portal/blob/5e1668733e06c9b9e4818af365431809f95dcc2c/src/components/Layout/index.tsx#L33) or [here](https://github.com/pendulum-chain/portal/blob/79db0fb49917f50a96740d956bc7cc62681aba37/src/components/Layout/Nav.tsx#L26) it looks the other way around either remove the FIXME or create a ticket to fix it.
- [x] remove `@ts-ignore` and fix the type reference [here](https://github.com/pendulum-chain/portal/blob/ec9dbb5564ba7e35ba3e5ea4219b7967fc5eff26/src/NodeInfoProvider.tsx#L65)
**Disclaimer**: I used vscode feature -> Copy as -> Remote file url. I was standing on master. If master changed from that moment, lines may not match. | 1.0 | Minor fixes - - [x] Use only `rpc.pendulumchain.tech` as a provider, since every time the `connect()` method is called in the API, it is switching the provider, so even if we add collators as second options, they will be selected often ([code](https://github.com/pendulum-chain/portal/blob/ec9dbb5564ba7e35ba3e5ea4219b7967fc5eff26/src/NodeInfoProvider.tsx#L29))
- [x] Extract & rename mainAddres (to userAddress or something similar) from [this line](https://github.com/pendulum-chain/portal/blob/ec9dbb5564ba7e35ba3e5ea4219b7967fc5eff26/src/NodeInfoProvider.tsx#L12)
- [x] Unharcode prefix in [this line](https://github.com/pendulum-chain/portal/blob/ec9dbb5564ba7e35ba3e5ea4219b7967fc5eff26/src/components/OpenWallet.tsx#L16). The prefix can be obtained from [here](https://github.com/paritytech/ss58-registry/blob/78b2635fc63f0c388ad8d875ab0f61ba825fee13/ss58-registry.json#L518). But if we don't wanna fetch that file (and prefix shouldn't change) then at least we need to define a different prefix for when we connect to Pendulum, and in general a prefix based on the name of the chain.
- [x] Usually a `// FIXME` comment will include the ticket to fix the issue, but [here](https://github.com/pendulum-chain/portal/blob/5e1668733e06c9b9e4818af365431809f95dcc2c/src/components/Layout/index.tsx#L33) or [here](https://github.com/pendulum-chain/portal/blob/79db0fb49917f50a96740d956bc7cc62681aba37/src/components/Layout/Nav.tsx#L26) it looks the other way around either remove the FIXME or create a ticket to fix it.
- [x] remove `@ts-ignore` and fix the type reference [here](https://github.com/pendulum-chain/portal/blob/ec9dbb5564ba7e35ba3e5ea4219b7967fc5eff26/src/NodeInfoProvider.tsx#L65)
**Disclaimer**: I used vscode feature -> Copy as -> Remote file url. I was standing on master. If master changed from that moment, lines may not match. | non_infrastructure | minor fixes use only rpc pendulumchain tech as a provider since every time the connect method is called in the api it is switching the provider so even if we add collators as second options they will be selected often extract rename mainaddres to useraddress or something similar from unharcode prefix in the prefix can be obtained from but if we don t wanna fetch that file and prefix shouldn t change then at least we need to define a different prefix for when we connect to pendulum and in general a prefix based on the name of the chain usually a fixme comment will include the ticket to fix the issue but or it looks the other way around either remove the fixme or create a ticket to fix it remove ts ignore and fix the type reference disclaimer i used vscode feature copy as remote file url i was standing on master if master changed from that moment lines may not match | 0 |
17,627 | 12,486,783,435 | IssuesEvent | 2020-05-31 04:46:09 | google/oss-fuzz | https://api.github.com/repos/google/oss-fuzz | closed | go-fuzz support | infrastructure | go-fuzz is a fuzzer for Go:
https://github.com/dvyukov/go-fuzz
Since oss-fuzz is aimed at supporting multiple fuzzers, go-fuzz support should be possible and would be useful.
go-fuzz contains 70 fuzzers for the Go standard library and other popular packages, but there is no story for continuous running:
https://github.com/dvyukov/go-fuzz/tree/master/examples
| 1.0 | go-fuzz support - go-fuzz is a fuzzer for Go:
https://github.com/dvyukov/go-fuzz
Since oss-fuzz is aimed at supporting multiple fuzzers, go-fuzz support should be possible and would be useful.
go-fuzz contains 70 fuzzers for the Go standard library and other popular packages, but there is no story for continuous running:
https://github.com/dvyukov/go-fuzz/tree/master/examples
| infrastructure | go fuzz support go fuzz is a fuzzer for go since oss fuzz aimed at supporting multiple fuzzers go fuzz support should be possible and would be useful go fuzz contains fuzzers for go std lib and other popular packages but there is no story for continous running | 1 |
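To make the go-fuzz record above concrete: a classic dvyukov/go-fuzz harness is just a package-level Fuzz function over a byte slice, built with go-fuzz-build and driven by the go-fuzz binary. The package name and the ParseConfig target below are hypothetical stand-ins so the sketch is self-contained.

```go
package parser

// Fuzz is the entry point expected by dvyukov/go-fuzz.
// Return 1 to mark the input as interesting, 0 otherwise, and -1 to drop it
// from the corpus. ParseConfig is a stand-in for whatever function is fuzzed.
func Fuzz(data []byte) int {
	if _, err := ParseConfig(data); err != nil {
		return 0
	}
	return 1
}

// ParseConfig is a placeholder target so the sketch compiles on its own.
func ParseConfig(data []byte) (map[string]string, error) {
	out := map[string]string{}
	if len(data) == 0 {
		return out, nil
	}
	// A real target would do meaningful parsing here.
	out["raw"] = string(data)
	return out, nil
}
```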
382,435 | 26,499,139,469 | IssuesEvent | 2023-01-18 08:56:39 | risingwavelabs/risingwave-operator | https://api.github.com/repos/risingwavelabs/risingwave-operator | closed | Warn developers about missing docstrings in PRs | documentation good first issue | IMHO we should warn (not block) developers if they do not provide docstrings. Warnings should be given for every function touched in that PR which does not have a valid docstring.
Also see [go docs: comments](https://tip.golang.org/doc/comment) | 1.0 | Warn developers about missing docstrings in PRs - IMHO we should warn (not block) developers if they do not provide docstrings. Warnings should be given for every function touched in that PR which does not have a valid docstring.
Also see [go docs: comments](https://tip.golang.org/doc/comment) | non_infrastructure | warn developers about missing docstrings in prs imho we should warn not block developers if they do not provide docstrings warnings should be given for every function touched in that pr which does not have a valid docstring also see | 0 |
3,017 | 5,907,798,579 | IssuesEvent | 2017-05-19 18:37:50 | JoryHogeveen/view-admin-as | https://api.github.com/repos/JoryHogeveen/view-admin-as | closed | Compatibility issue: We currently overwrite other user_has_cap filters. | compatibility | When other plugins use the `user_has_cap` filter, VAA overwrites this in a view.
It might be more logical to put our filters first, so all other plugins can still do their magic.
This way the filter actually gets run as if it's a different role, instead of being overwritten by view admin as.
**Extra:**
Maybe it's good to use the `user_has_cap` filter in our `map_meta_cap` filter as well to ensure we get the proper capability modifications from other plugins. | True | Compatibility issue: We currently overwrite other user_has_cap filters. - When other plugins use the `user_has_cap` filter, VAA overwrites this in a view.
It might be more logical to put our filters first, so all other plugins can still do their magic.
This way the filter actually gets run as if it's a different role, instead of being overwritten by view admin as.
**Extra:**
Maybe it's good to use the `user_has_cap` filter in our `map_meta_cap` filter as well to ensure we get the proper capability modifications from other plugins. | non_infrastructure | compatibility issue we currently overwrite other user has cap filters when other plugins use the user has cap filter vaa overwrites this in a view it might be more logical to put our filters at as first so all other plugin s can still do their magic this way the filter get s actually run as if it s a different role instead of a being overwritten by view admin as extra maybe it s good to use the user has cap filter in our map meta cap filter as well to ensure we get the proper capability modifications from other plugins | 0 |
12,250 | 9,661,559,964 | IssuesEvent | 2019-05-20 18:22:26 | OpenLiberty/open-liberty | https://api.github.com/repos/OpenLiberty/open-liberty | closed | Deadlock in unit test tracing | in:Test Infrastructure test bug | Occasionally a deadlock occurs when running the com.ibm.ws.channelfw unit tests.
## Javacore info
```
1LKDEADLOCK Deadlock detected !!!
NULL ---------------------
NULL
2LKDEADLOCKTHR Thread "Inbound Read Selector.1" (0x0000000000C38800)
3LKDEADLOCKWTR is waiting for:
4LKDEADLOCKMON sys_mon_t:0x00007FB7604F0038 infl_mon_t: 0x00007FB7604F00B8:
4LKDEADLOCKOBJ com/ibm/websphere/ras/CapturedOutputHolder$DelegateTrWriter@0x00000000E0201EF0
3LKDEADLOCKOWN which is owned by:
2LKDEADLOCKTHR Thread "Test worker" (0x0000000000922A00)
3LKDEADLOCKWTR which is waiting for:
4LKDEADLOCKMON sys_mon_t:0x00007FB7604F0878 infl_mon_t: 0x00007FB7604F08F8:
4LKDEADLOCKOBJ com/ibm/websphere/ras/CapturedOutputHolder$DelegateTrWriter@0x00000000E05341B0
3LKDEADLOCKOWN which is owned by:
2LKDEADLOCKTHR Thread "Inbound Read Selector.1" (0x0000000000C38800)
3XMTHREADINFO "Inbound Read Selector.1" J9VMThread:0x0000000000C38800, omrthread_t:0x00007FB7604F5FE8, java/lang/Thread:0x00000000E0797850, state:B, prio=5
3XMJAVALTHREAD (java/lang/Thread getId:0x144, isDaemon:true)
3XMTHREADINFO1 (native thread ID:0x44E4, native priority:0x5, native policy:UNKNOWN, vmstate:B, vm thread flags:0x00000281)
3XMTHREADINFO2 (native stack address range from:0x00007FB770C80000, to:0x00007FB770CC1000, size:0x41000)
3XMCPUTIME CPU usage total: 33.385212355 secs, current category="Application"
3XMTHREADBLOCK Blocked on: com/ibm/websphere/ras/CapturedOutputHolder$DelegateTrWriter@0x00000000E0201EF0 Owned by: "Test worker" (J9VMThread:0x0000000000922A00, java/lang/Thread:0x00000000E0285AB0)
3XMHEAPALLOC Heap bytes allocated since last GC cycle=0 (0x0)
3XMTHREADINFO3 Java callstack:
4XESTACKTRACE at com/ibm/websphere/ras/CapturedOutputHolder$DelegateTrWriter.writeRecord(CapturedOutputHolder.java:355(Compiled Code))
5XESTACKTRACE (entered lock: com/ibm/websphere/ras/CapturedOutputHolder$DelegateTrWriter@0x00000000E0201EF0, entry count: 1)
4XESTACKTRACE at com/ibm/ws/logging/internal/impl/MessageLogHandler.synchronousWrite(MessageLogHandler.java:75(Compiled Code))
4XESTACKTRACE at com/ibm/ws/collector/manager/buffer/BufferManagerImpl.add(BufferManagerImpl.java:100(Compiled Code))
4XESTACKTRACE at com/ibm/ws/logging/source/LogSource.publish(LogSource.java:100)
4XESTACKTRACE at com/ibm/ws/logging/internal/impl/BaseTraceService.publishToLogSource(BaseTraceService.java:840)
4XESTACKTRACE at com/ibm/ws/logging/internal/impl/BaseTraceService.publishLogRecord(BaseTraceService.java:819)
4XESTACKTRACE at com/ibm/ws/logging/internal/impl/BaseTraceService.audit(BaseTraceService.java:506)
4XESTACKTRACE at com/ibm/websphere/ras/Tr.audit(Tr.java:365)
4XESTACKTRACE at com/ibm/ws/logging/utils/FileLogHolder.getPrintStream(FileLogHolder.java:363(Compiled Code))
5XESTACKTRACE (entered lock: com/ibm/ws/logging/utils/FileLogHolder@0x00000000E0534458, entry count: 2)
4XESTACKTRACE at com/ibm/ws/logging/utils/FileLogHolder.writeRecord(FileLogHolder.java:279(Compiled Code))
5XESTACKTRACE (entered lock: com/ibm/ws/logging/utils/FileLogHolder@0x00000000E0534458, entry count: 1)
4XESTACKTRACE at com/ibm/websphere/ras/CapturedOutputHolder$DelegateTrWriter.writeRecord(CapturedOutputHolder.java:355(Compiled Code))
5XESTACKTRACE (entered lock: com/ibm/websphere/ras/CapturedOutputHolder$DelegateTrWriter@0x00000000E05341B0, entry count: 1)
4XESTACKTRACE at com/ibm/ws/logging/internal/impl/MessageLogHandler.synchronousWrite(MessageLogHandler.java:75(Compiled Code))
4XESTACKTRACE at com/ibm/ws/collector/manager/buffer/BufferManagerImpl.add(BufferManagerImpl.java:100(Compiled Code))
4XESTACKTRACE at com/ibm/ws/logging/source/LogSource.publish(LogSource.java:100)
4XESTACKTRACE at com/ibm/ws/logging/internal/impl/BaseTraceService.publishToLogSource(BaseTraceService.java:840)
4XESTACKTRACE at com/ibm/ws/logging/internal/impl/BaseTraceService.publishLogRecord(BaseTraceService.java:819)
4XESTACKTRACE at com/ibm/ws/logging/internal/impl/BaseTraceService.error(BaseTraceService.java:518)
4XESTACKTRACE at com/ibm/websphere/ras/Tr.error(Tr.java:485)
4XESTACKTRACE at com/ibm/ws/tcpchannel/internal/WorkQueueManager.dispatchWorker(WorkQueueManager.java:759)
4XESTACKTRACE at com/ibm/ws/tcpchannel/internal/WorkQueueManager.dispatch(WorkQueueManager.java:746)
4XESTACKTRACE at com/ibm/ws/tcpchannel/internal/SocketRWChannelSelector.performRequest(SocketRWChannelSelector.java:134)
4XESTACKTRACE at com/ibm/ws/tcpchannel/internal/ChannelSelector.run(ChannelSelector.java:220)
4XESTACKTRACE at java/lang/Thread.run(Thread.java:825)
3XMTHREADINFO3 Java callstack:
4XESTACKTRACE at com/ibm/websphere/ras/CapturedOutputHolder$DelegateTrWriter.writeRecord(CapturedOutputHolder.java:355(Compiled Code))
5XESTACKTRACE (entered lock: com/ibm/websphere/ras/CapturedOutputHolder$DelegateTrWriter@0x00000000E05341B0, entry count: 1)
4XESTACKTRACE at com/ibm/ws/logging/internal/impl/MessageLogHandler.synchronousWrite(MessageLogHandler.java:75(Compiled Code))
4XESTACKTRACE at com/ibm/ws/collector/manager/buffer/BufferManagerImpl.add(BufferManagerImpl.java:100(Compiled Code))
4XESTACKTRACE at com/ibm/ws/logging/source/LogSource.publish(LogSource.java:100)
4XESTACKTRACE at com/ibm/ws/logging/internal/impl/BaseTraceService.publishToLogSource(BaseTraceService.java:840)
4XESTACKTRACE at com/ibm/ws/logging/internal/impl/BaseTraceService.publishLogRecord(BaseTraceService.java:819)
4XESTACKTRACE at com/ibm/ws/logging/internal/impl/BaseTraceService.audit(BaseTraceService.java:506)
4XESTACKTRACE at com/ibm/websphere/ras/Tr.audit(Tr.java:365)
4XESTACKTRACE at com/ibm/ws/logging/utils/FileLogHolder.getPrintStream(FileLogHolder.java:363(Compiled Code))
5XESTACKTRACE (entered lock: com/ibm/ws/logging/utils/FileLogHolder@0x00000000E0201FB0, entry count: 2)
4XESTACKTRACE at com/ibm/ws/logging/utils/FileLogHolder.writeRecord(FileLogHolder.java:279(Compiled Code))
5XESTACKTRACE (entered lock: com/ibm/ws/logging/utils/FileLogHolder@0x00000000E0201FB0, entry count: 1)
4XESTACKTRACE at com/ibm/websphere/ras/CapturedOutputHolder$DelegateTrWriter.writeRecord(CapturedOutputHolder.java:355(Compiled Code))
5XESTACKTRACE (entered lock: com/ibm/websphere/ras/CapturedOutputHolder$DelegateTrWriter@0x00000000E0201EF0, entry count: 1)
4XESTACKTRACE at com/ibm/ws/logging/internal/impl/MessageLogHandler.synchronousWrite(MessageLogHandler.java:75(Compiled Code))
4XESTACKTRACE at com/ibm/ws/collector/manager/buffer/BufferManagerImpl.add(BufferManagerImpl.java:100(Compiled Code))
4XESTACKTRACE at com/ibm/ws/logging/source/LogSource.publish(LogSource.java:100)
4XESTACKTRACE at com/ibm/ws/logging/internal/impl/BaseTraceService.publishToLogSource(BaseTraceService.java:840)
4XESTACKTRACE at com/ibm/ws/logging/internal/impl/BaseTraceService.publishLogRecord(BaseTraceService.java:819)
4XESTACKTRACE at com/ibm/ws/logging/internal/impl/BaseTraceService.error(BaseTraceService.java:518)
4XESTACKTRACE at com/ibm/websphere/ras/Tr.error(Tr.java:485)
4XESTACKTRACE at com/ibm/ws/tcpchannel/internal/TCPPort.initServerSocket(TCPPort.java:240)
5XESTACKTRACE (entered lock: com/ibm/ws/tcpchannel/internal/TCPPort@0x00000000E0893830, entry count: 1)
4XESTACKTRACE at com/ibm/ws/tcpchannel/internal/TCPChannel.initializePort(TCPChannel.java:327)
4XESTACKTRACE at com/ibm/ws/tcpchannel/internal/TCPChannel.init(TCPChannel.java:313)
4XESTACKTRACE at com/ibm/ws/channelfw/internal/ChannelFrameworkImpl.initChannelInChain(ChannelFrameworkImpl.java:1317)
4XESTACKTRACE at com/ibm/ws/channelfw/internal/ChannelFrameworkImpl.initChainInternal(ChannelFrameworkImpl.java:2422)
5XESTACKTRACE (entered lock: com/ibm/ws/channelfw/internal/ChannelFrameworkImpl@0x00000000E077D8B8, entry count: 2)
4XESTACKTRACE at com/ibm/ws/channelfw/internal/ChannelFrameworkImpl.initChain(ChannelFrameworkImpl.java:2308)
5XESTACKTRACE (entered lock: com/ibm/ws/channelfw/internal/ChannelFrameworkImpl@0x00000000E077D8B8, entry count: 1)
4XESTACKTRACE at com/ibm/ws/channelfw/testsuite/junit/ChainEventListenerTest.testAddChainEventListener(ChainEventListenerTest.java:210)
4XESTACKTRACE at jdk/internal/reflect/NativeMethodAccessorImpl.invoke0(Native Method)
4XESTACKTRACE at jdk/internal/reflect/NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
4XESTACKTRACE at jdk/internal/reflect/DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
4XESTACKTRACE at java/lang/reflect/Method.invoke(Method.java:566)
4XESTACKTRACE at org/junit/runners/model/FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
4XESTACKTRACE at org/junit/internal/runners/model/ReflectiveCallable.run(ReflectiveCallable.java:15)
4XESTACKTRACE at org/junit/runners/model/FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
4XESTACKTRACE at org/junit/internal/runners/statements/InvokeMethod.evaluate(InvokeMethod.java:20)
4XESTACKTRACE at test/common/SharedOutputManager$1.evaluate(SharedOutputManager.java:620)
4XESTACKTRACE at org/junit/rules/RunRules.evaluate(RunRules.java:18)
4XESTACKTRACE at org/junit/runners/ParentRunner.runLeaf(ParentRunner.java:263)
4XESTACKTRACE at org/junit/runners/BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
4XESTACKTRACE at org/junit/runners/BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
4XESTACKTRACE at org/junit/runners/ParentRunner$3.run(ParentRunner.java:231)
4XESTACKTRACE at org/junit/runners/ParentRunner$1.schedule(ParentRunner.java:60)
4XESTACKTRACE at org/junit/runners/ParentRunner.runChildren(ParentRunner.java:229)
4XESTACKTRACE at org/junit/runners/ParentRunner.access$000(ParentRunner.java:50)
4XESTACKTRACE at org/junit/runners/ParentRunner$2.evaluate(ParentRunner.java:222)
4XESTACKTRACE at org/junit/internal/runners/statements/RunBefores.evaluate(RunBefores.java:28)
4XESTACKTRACE at org/junit/runners/ParentRunner.run(ParentRunner.java:300)
``` | 1.0 | Deadlock in unit test tracing - Occasionally a deadlock occurs when running the com.ibm.ws.channelfw unit tests.
## Javacore info
```
1LKDEADLOCK Deadlock detected !!!
NULL ---------------------
NULL
2LKDEADLOCKTHR Thread "Inbound Read Selector.1" (0x0000000000C38800)
3LKDEADLOCKWTR is waiting for:
4LKDEADLOCKMON sys_mon_t:0x00007FB7604F0038 infl_mon_t: 0x00007FB7604F00B8:
4LKDEADLOCKOBJ com/ibm/websphere/ras/CapturedOutputHolder$DelegateTrWriter@0x00000000E0201EF0
3LKDEADLOCKOWN which is owned by:
2LKDEADLOCKTHR Thread "Test worker" (0x0000000000922A00)
3LKDEADLOCKWTR which is waiting for:
4LKDEADLOCKMON sys_mon_t:0x00007FB7604F0878 infl_mon_t: 0x00007FB7604F08F8:
4LKDEADLOCKOBJ com/ibm/websphere/ras/CapturedOutputHolder$DelegateTrWriter@0x00000000E05341B0
3LKDEADLOCKOWN which is owned by:
2LKDEADLOCKTHR Thread "Inbound Read Selector.1" (0x0000000000C38800)
3XMTHREADINFO "Inbound Read Selector.1" J9VMThread:0x0000000000C38800, omrthread_t:0x00007FB7604F5FE8, java/lang/Thread:0x00000000E0797850, state:B, prio=5
3XMJAVALTHREAD (java/lang/Thread getId:0x144, isDaemon:true)
3XMTHREADINFO1 (native thread ID:0x44E4, native priority:0x5, native policy:UNKNOWN, vmstate:B, vm thread flags:0x00000281)
3XMTHREADINFO2 (native stack address range from:0x00007FB770C80000, to:0x00007FB770CC1000, size:0x41000)
3XMCPUTIME CPU usage total: 33.385212355 secs, current category="Application"
3XMTHREADBLOCK Blocked on: com/ibm/websphere/ras/CapturedOutputHolder$DelegateTrWriter@0x00000000E0201EF0 Owned by: "Test worker" (J9VMThread:0x0000000000922A00, java/lang/Thread:0x00000000E0285AB0)
3XMHEAPALLOC Heap bytes allocated since last GC cycle=0 (0x0)
3XMTHREADINFO3 Java callstack:
4XESTACKTRACE at com/ibm/websphere/ras/CapturedOutputHolder$DelegateTrWriter.writeRecord(CapturedOutputHolder.java:355(Compiled Code))
5XESTACKTRACE (entered lock: com/ibm/websphere/ras/CapturedOutputHolder$DelegateTrWriter@0x00000000E0201EF0, entry count: 1)
4XESTACKTRACE at com/ibm/ws/logging/internal/impl/MessageLogHandler.synchronousWrite(MessageLogHandler.java:75(Compiled Code))
4XESTACKTRACE at com/ibm/ws/collector/manager/buffer/BufferManagerImpl.add(BufferManagerImpl.java:100(Compiled Code))
4XESTACKTRACE at com/ibm/ws/logging/source/LogSource.publish(LogSource.java:100)
4XESTACKTRACE at com/ibm/ws/logging/internal/impl/BaseTraceService.publishToLogSource(BaseTraceService.java:840)
4XESTACKTRACE at com/ibm/ws/logging/internal/impl/BaseTraceService.publishLogRecord(BaseTraceService.java:819)
4XESTACKTRACE at com/ibm/ws/logging/internal/impl/BaseTraceService.audit(BaseTraceService.java:506)
4XESTACKTRACE at com/ibm/websphere/ras/Tr.audit(Tr.java:365)
4XESTACKTRACE at com/ibm/ws/logging/utils/FileLogHolder.getPrintStream(FileLogHolder.java:363(Compiled Code))
5XESTACKTRACE (entered lock: com/ibm/ws/logging/utils/FileLogHolder@0x00000000E0534458, entry count: 2)
4XESTACKTRACE at com/ibm/ws/logging/utils/FileLogHolder.writeRecord(FileLogHolder.java:279(Compiled Code))
5XESTACKTRACE (entered lock: com/ibm/ws/logging/utils/FileLogHolder@0x00000000E0534458, entry count: 1)
4XESTACKTRACE at com/ibm/websphere/ras/CapturedOutputHolder$DelegateTrWriter.writeRecord(CapturedOutputHolder.java:355(Compiled Code))
5XESTACKTRACE (entered lock: com/ibm/websphere/ras/CapturedOutputHolder$DelegateTrWriter@0x00000000E05341B0, entry count: 1)
4XESTACKTRACE at com/ibm/ws/logging/internal/impl/MessageLogHandler.synchronousWrite(MessageLogHandler.java:75(Compiled Code))
4XESTACKTRACE at com/ibm/ws/collector/manager/buffer/BufferManagerImpl.add(BufferManagerImpl.java:100(Compiled Code))
4XESTACKTRACE at com/ibm/ws/logging/source/LogSource.publish(LogSource.java:100)
4XESTACKTRACE at com/ibm/ws/logging/internal/impl/BaseTraceService.publishToLogSource(BaseTraceService.java:840)
4XESTACKTRACE at com/ibm/ws/logging/internal/impl/BaseTraceService.publishLogRecord(BaseTraceService.java:819)
4XESTACKTRACE at com/ibm/ws/logging/internal/impl/BaseTraceService.error(BaseTraceService.java:518)
4XESTACKTRACE at com/ibm/websphere/ras/Tr.error(Tr.java:485)
4XESTACKTRACE at com/ibm/ws/tcpchannel/internal/WorkQueueManager.dispatchWorker(WorkQueueManager.java:759)
4XESTACKTRACE at com/ibm/ws/tcpchannel/internal/WorkQueueManager.dispatch(WorkQueueManager.java:746)
4XESTACKTRACE at com/ibm/ws/tcpchannel/internal/SocketRWChannelSelector.performRequest(SocketRWChannelSelector.java:134)
4XESTACKTRACE at com/ibm/ws/tcpchannel/internal/ChannelSelector.run(ChannelSelector.java:220)
4XESTACKTRACE at java/lang/Thread.run(Thread.java:825)
3XMTHREADINFO3 Java callstack:
4XESTACKTRACE at com/ibm/websphere/ras/CapturedOutputHolder$DelegateTrWriter.writeRecord(CapturedOutputHolder.java:355(Compiled Code))
5XESTACKTRACE (entered lock: com/ibm/websphere/ras/CapturedOutputHolder$DelegateTrWriter@0x00000000E05341B0, entry count: 1)
4XESTACKTRACE at com/ibm/ws/logging/internal/impl/MessageLogHandler.synchronousWrite(MessageLogHandler.java:75(Compiled Code))
4XESTACKTRACE at com/ibm/ws/collector/manager/buffer/BufferManagerImpl.add(BufferManagerImpl.java:100(Compiled Code))
4XESTACKTRACE at com/ibm/ws/logging/source/LogSource.publish(LogSource.java:100)
4XESTACKTRACE at com/ibm/ws/logging/internal/impl/BaseTraceService.publishToLogSource(BaseTraceService.java:840)
4XESTACKTRACE at com/ibm/ws/logging/internal/impl/BaseTraceService.publishLogRecord(BaseTraceService.java:819)
4XESTACKTRACE at com/ibm/ws/logging/internal/impl/BaseTraceService.audit(BaseTraceService.java:506)
4XESTACKTRACE at com/ibm/websphere/ras/Tr.audit(Tr.java:365)
4XESTACKTRACE at com/ibm/ws/logging/utils/FileLogHolder.getPrintStream(FileLogHolder.java:363(Compiled Code))
5XESTACKTRACE (entered lock: com/ibm/ws/logging/utils/FileLogHolder@0x00000000E0201FB0, entry count: 2)
4XESTACKTRACE at com/ibm/ws/logging/utils/FileLogHolder.writeRecord(FileLogHolder.java:279(Compiled Code))
5XESTACKTRACE (entered lock: com/ibm/ws/logging/utils/FileLogHolder@0x00000000E0201FB0, entry count: 1)
4XESTACKTRACE at com/ibm/websphere/ras/CapturedOutputHolder$DelegateTrWriter.writeRecord(CapturedOutputHolder.java:355(Compiled Code))
5XESTACKTRACE (entered lock: com/ibm/websphere/ras/CapturedOutputHolder$DelegateTrWriter@0x00000000E0201EF0, entry count: 1)
4XESTACKTRACE at com/ibm/ws/logging/internal/impl/MessageLogHandler.synchronousWrite(MessageLogHandler.java:75(Compiled Code))
4XESTACKTRACE at com/ibm/ws/collector/manager/buffer/BufferManagerImpl.add(BufferManagerImpl.java:100(Compiled Code))
4XESTACKTRACE at com/ibm/ws/logging/source/LogSource.publish(LogSource.java:100)
4XESTACKTRACE at com/ibm/ws/logging/internal/impl/BaseTraceService.publishToLogSource(BaseTraceService.java:840)
4XESTACKTRACE at com/ibm/ws/logging/internal/impl/BaseTraceService.publishLogRecord(BaseTraceService.java:819)
4XESTACKTRACE at com/ibm/ws/logging/internal/impl/BaseTraceService.error(BaseTraceService.java:518)
4XESTACKTRACE at com/ibm/websphere/ras/Tr.error(Tr.java:485)
4XESTACKTRACE at com/ibm/ws/tcpchannel/internal/TCPPort.initServerSocket(TCPPort.java:240)
5XESTACKTRACE (entered lock: com/ibm/ws/tcpchannel/internal/TCPPort@0x00000000E0893830, entry count: 1)
4XESTACKTRACE at com/ibm/ws/tcpchannel/internal/TCPChannel.initializePort(TCPChannel.java:327)
4XESTACKTRACE at com/ibm/ws/tcpchannel/internal/TCPChannel.init(TCPChannel.java:313)
4XESTACKTRACE at com/ibm/ws/channelfw/internal/ChannelFrameworkImpl.initChannelInChain(ChannelFrameworkImpl.java:1317)
4XESTACKTRACE at com/ibm/ws/channelfw/internal/ChannelFrameworkImpl.initChainInternal(ChannelFrameworkImpl.java:2422)
5XESTACKTRACE (entered lock: com/ibm/ws/channelfw/internal/ChannelFrameworkImpl@0x00000000E077D8B8, entry count: 2)
4XESTACKTRACE at com/ibm/ws/channelfw/internal/ChannelFrameworkImpl.initChain(ChannelFrameworkImpl.java:2308)
5XESTACKTRACE (entered lock: com/ibm/ws/channelfw/internal/ChannelFrameworkImpl@0x00000000E077D8B8, entry count: 1)
4XESTACKTRACE at com/ibm/ws/channelfw/testsuite/junit/ChainEventListenerTest.testAddChainEventListener(ChainEventListenerTest.java:210)
4XESTACKTRACE at jdk/internal/reflect/NativeMethodAccessorImpl.invoke0(Native Method)
4XESTACKTRACE at jdk/internal/reflect/NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
4XESTACKTRACE at jdk/internal/reflect/DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
4XESTACKTRACE at java/lang/reflect/Method.invoke(Method.java:566)
4XESTACKTRACE at org/junit/runners/model/FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
4XESTACKTRACE at org/junit/internal/runners/model/ReflectiveCallable.run(ReflectiveCallable.java:15)
4XESTACKTRACE at org/junit/runners/model/FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
4XESTACKTRACE at org/junit/internal/runners/statements/InvokeMethod.evaluate(InvokeMethod.java:20)
4XESTACKTRACE at test/common/SharedOutputManager$1.evaluate(SharedOutputManager.java:620)
4XESTACKTRACE at org/junit/rules/RunRules.evaluate(RunRules.java:18)
4XESTACKTRACE at org/junit/runners/ParentRunner.runLeaf(ParentRunner.java:263)
4XESTACKTRACE at org/junit/runners/BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
4XESTACKTRACE at org/junit/runners/BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
4XESTACKTRACE at org/junit/runners/ParentRunner$3.run(ParentRunner.java:231)
4XESTACKTRACE at org/junit/runners/ParentRunner$1.schedule(ParentRunner.java:60)
4XESTACKTRACE at org/junit/runners/ParentRunner.runChildren(ParentRunner.java:229)
4XESTACKTRACE at org/junit/runners/ParentRunner.access$000(ParentRunner.java:50)
4XESTACKTRACE at org/junit/runners/ParentRunner$2.evaluate(ParentRunner.java:222)
4XESTACKTRACE at org/junit/internal/runners/statements/RunBefores.evaluate(RunBefores.java:28)
4XESTACKTRACE at org/junit/runners/ParentRunner.run(ParentRunner.java:300)
``` | infrastructure | deadlock in unit test tracing ocassionally a deadlock occurs when running the com ibm ws channelfw unit tests javacore info deadlock detected null null thread inbound read selector is waiting for sys mon t infl mon t com ibm websphere ras capturedoutputholder delegatetrwriter which is owned by thread test worker which is waiting for sys mon t infl mon t com ibm websphere ras capturedoutputholder delegatetrwriter which is owned by thread inbound read selector inbound read selector omrthread t java lang thread state b prio java lang thread getid isdaemon true native thread id native priority native policy unknown vmstate b vm thread flags native stack address range from to size cpu usage total secs current category application blocked on com ibm websphere ras capturedoutputholder delegatetrwriter owned by test worker java lang thread heap bytes allocated since last gc cycle java callstack at com ibm websphere ras capturedoutputholder delegatetrwriter writerecord capturedoutputholder java compiled code entered lock com ibm websphere ras capturedoutputholder delegatetrwriter entry count at com ibm ws logging internal impl messageloghandler synchronouswrite messageloghandler java compiled code at com ibm ws collector manager buffer buffermanagerimpl add buffermanagerimpl java compiled code at com ibm ws logging source logsource publish logsource java at com ibm ws logging internal impl basetraceservice publishtologsource basetraceservice java at com ibm ws logging internal impl basetraceservice publishlogrecord basetraceservice java at com ibm ws logging internal impl basetraceservice audit basetraceservice java at com ibm websphere ras tr audit tr java at com ibm ws logging utils filelogholder getprintstream filelogholder java compiled code entered lock com ibm ws logging utils filelogholder entry count at com ibm ws logging utils filelogholder writerecord filelogholder java compiled code entered lock com ibm ws logging utils filelogholder entry count at com ibm websphere ras capturedoutputholder delegatetrwriter writerecord capturedoutputholder java compiled code entered lock com ibm websphere ras capturedoutputholder delegatetrwriter entry count at com ibm ws logging internal impl messageloghandler synchronouswrite messageloghandler java compiled code at com ibm ws collector manager buffer buffermanagerimpl add buffermanagerimpl java compiled code at com ibm ws logging source logsource publish logsource java at com ibm ws logging internal impl basetraceservice publishtologsource basetraceservice java at com ibm ws logging internal impl basetraceservice publishlogrecord basetraceservice java at com ibm ws logging internal impl basetraceservice error basetraceservice java at com ibm websphere ras tr error tr java at com ibm ws tcpchannel internal workqueuemanager dispatchworker workqueuemanager java at com ibm ws tcpchannel internal workqueuemanager dispatch workqueuemanager java at com ibm ws tcpchannel internal socketrwchannelselector performrequest socketrwchannelselector java at com ibm ws tcpchannel internal channelselector run channelselector java at java lang thread run thread java java callstack at com ibm websphere ras capturedoutputholder delegatetrwriter writerecord capturedoutputholder java compiled code entered lock com ibm websphere ras capturedoutputholder delegatetrwriter entry count at com ibm ws logging internal impl messageloghandler synchronouswrite messageloghandler java compiled code at com ibm ws collector manager buffer buffermanagerimpl add 
buffermanagerimpl java compiled code at com ibm ws logging source logsource publish logsource java at com ibm ws logging internal impl basetraceservice publishtologsource basetraceservice java at com ibm ws logging internal impl basetraceservice publishlogrecord basetraceservice java at com ibm ws logging internal impl basetraceservice audit basetraceservice java at com ibm websphere ras tr audit tr java at com ibm ws logging utils filelogholder getprintstream filelogholder java compiled code entered lock com ibm ws logging utils filelogholder entry count at com ibm ws logging utils filelogholder writerecord filelogholder java compiled code entered lock com ibm ws logging utils filelogholder entry count at com ibm websphere ras capturedoutputholder delegatetrwriter writerecord capturedoutputholder java compiled code entered lock com ibm websphere ras capturedoutputholder delegatetrwriter entry count at com ibm ws logging internal impl messageloghandler synchronouswrite messageloghandler java compiled code at com ibm ws collector manager buffer buffermanagerimpl add buffermanagerimpl java compiled code at com ibm ws logging source logsource publish logsource java at com ibm ws logging internal impl basetraceservice publishtologsource basetraceservice java at com ibm ws logging internal impl basetraceservice publishlogrecord basetraceservice java at com ibm ws logging internal impl basetraceservice error basetraceservice java at com ibm websphere ras tr error tr java at com ibm ws tcpchannel internal tcpport initserversocket tcpport java entered lock com ibm ws tcpchannel internal tcpport entry count at com ibm ws tcpchannel internal tcpchannel initializeport tcpchannel java at com ibm ws tcpchannel internal tcpchannel init tcpchannel java at com ibm ws channelfw internal channelframeworkimpl initchannelinchain channelframeworkimpl java at com ibm ws channelfw internal channelframeworkimpl initchaininternal channelframeworkimpl java entered lock com ibm ws channelfw internal channelframeworkimpl entry count at com ibm ws channelfw internal channelframeworkimpl initchain channelframeworkimpl java entered lock com ibm ws channelfw internal channelframeworkimpl entry count at com ibm ws channelfw testsuite junit chaineventlistenertest testaddchaineventlistener chaineventlistenertest java at jdk internal reflect nativemethodaccessorimpl native method at jdk internal reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org junit runners model frameworkmethod runreflectivecall frameworkmethod java at org junit internal runners model reflectivecallable run reflectivecallable java at org junit runners model frameworkmethod invokeexplosively frameworkmethod java at org junit internal runners statements invokemethod evaluate invokemethod java at test common sharedoutputmanager evaluate sharedoutputmanager java at org junit rules runrules evaluate runrules java at org junit runners parentrunner runleaf parentrunner java at org junit runners runchild java at org junit runners runchild java at org junit runners parentrunner run parentrunner java at org junit runners parentrunner schedule parentrunner java at org junit runners parentrunner runchildren parentrunner java at org junit runners parentrunner access parentrunner java at org junit runners parentrunner evaluate parentrunner java at org junit internal runners statements runbefores evaluate 
runbefores java at org junit runners parentrunner run parentrunner java | 1 |
7,019 | 6,715,824,411 | IssuesEvent | 2017-10-13 23:30:33 | Daniel-Mietchen/ideas | https://api.github.com/repos/Daniel-Mietchen/ideas | opened | Start a discussion about deprecating rather than deleting obsolete Wikidata properties | hackathon infrastructure Wikidata | if we delete properties used externally, we are likely breaking their workflows.
Need to look for examples. | 1.0 | Start a discussion about deprecating rather than deleting obsolete Wikidata properties - if we delete properties used externally, we are likely breaking their workflows.
Need to look for examples. | infrastructure | start a discussion about deprecating rather than deleting obsolete wikidata properties if we delete properties used externally we are likely breaking their workflows need to look for examples | 1 |
28,528 | 23,316,864,138 | IssuesEvent | 2022-08-08 13:15:14 | JonasPammer/cookiecutter-pypackage-test | https://api.github.com/repos/JonasPammer/cookiecutter-pypackage-test | opened | figure out what ClusterFuzzLite is/does and maybe implement | triage/needs-information priority/low kind/test kind/infrastructure | as per OSSF Scorecard Security Scan Report 58 https://github.com/JonasPammer/cookiecutter-pypackage-test/security/code-scanning/58 | 1.0 | figure out what ClusterFuzzLite is/does and maybe implement - as per OSSF Scorecard Security Scan Report 58 https://github.com/JonasPammer/cookiecutter-pypackage-test/security/code-scanning/58 | infrastructure | figure out what clusterfuzzlite is does and maybe implement as per ossf scorecard security scan report | 1 |
19,309 | 13,212,604,122 | IssuesEvent | 2020-08-16 08:04:13 | wix/wix-style-react | https://api.github.com/repos/wix/wix-style-react | closed | wix-style-react not working properly with CRA | Infrastructure Next Major Version | i created a CRA project with latest React and react-dom , and add the package wix-style-react last version like the documentation guide in this link https://github.com/wix/wix-style-react/blob/master/docs/usage-without-yoshi.md but nothing work
i eject the project and add the webpack.config.js configuration
i installed Stylable version 1 (documentation guidelines)
there are three parts of webpack.config.js i modified
i attached the webpack configuration file
(three screenshots of the modified webpack.config.js sections were attached here; the image links are not preserved in this copy)
the error i get is the following:
TypeError: _Pagination_st_css__WEBPACK_IMPORTED_MODULE_15___default(...) is not a function
in the console , i got :
<img width="939" alt="Capture" src="https://user-images.githubusercontent.com/2632473/80861573-40143980-8c24-11ea-9e3c-9d70d2d90ff0.PNG">
i think the problem is with stylable integration in webpack or other loaders, any solution please
| 1.0 | wix-style-react not working properly with CRA - i created a CRA project with latest React and react-dom , and add the package wix-style-react last version like the documentation guide in this link https://github.com/wix/wix-style-react/blob/master/docs/usage-without-yoshi.md but nothing work
i eject the project and add the webpack.config.js configuration
i installed Stylable version 1 (documentation guidelines)
there are three parts of webpack.config.js i modified
i attached the webpack configuration file
(three screenshots of the modified webpack.config.js sections were attached here; the image links are not preserved in this copy)
the error i get is the following:
TypeError: _Pagination_st_css__WEBPACK_IMPORTED_MODULE_15___default(...) is not a function
in the console , i got :
<img width="939" alt="Capture" src="https://user-images.githubusercontent.com/2632473/80861573-40143980-8c24-11ea-9e3c-9d70d2d90ff0.PNG">
i think the problem is with stylable integration in webpack or other loaders, any solution please
| infrastructure | wix style react not working properly with cra i created a cra project with latest react and react dom and add the package wix style react last version like the documentation guide in this link but nothing work i eject the project and add the webpack config js configuration i installed stylable version documentation guidelines there are three parts of webpack config js i modified i attached the webpack configuration file the error i get is the following typeerror pagination st css webpack imported module default is not a function in the console i got img width alt capture src i think the problem is with stylable integration in webpack or other loaders any solution please | 1 |
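The record above turns on how Stylable is wired into an ejected CRA webpack build, but the attached webpack.config.js screenshots are not preserved, so the following is only a sketch of one way such a configuration can look. It assumes the publicly available @stylable/webpack-plugin package; the import name, rule layout and option defaults are assumptions, not the reporter's actual setup.

```ts
// Hypothetical webpack.config.ts fragment; names and structure are assumptions.
import type { Configuration } from 'webpack';
import { StylableWebpackPlugin } from '@stylable/webpack-plugin';

const config: Configuration = {
  module: {
    rules: [
      {
        // Keep ordinary CSS on the normal pipeline, but make sure *.st.css is
        // NOT handled here: if css-loader processes a Stylable stylesheet, the
        // default import is no longer the callable Stylable runtime function,
        // which can surface as the reported
        // "_Pagination_st_css__..._default(...) is not a function" TypeError.
        test: /\.css$/,
        exclude: /\.st\.css$/,
        use: ['style-loader', 'css-loader'],
      },
    ],
  },
  plugins: [
    // Lets the Stylable tooling own *.st.css files end to end.
    new StylableWebpackPlugin(),
  ],
};

export default config;
```

Whether this matches the reporter's three modified sections is unknown; it is only meant to show where the .st.css exclusion and the plugin registration usually live.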
264,940 | 8,328,513,743 | IssuesEvent | 2018-09-27 01:16:28 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | business.apple.com - Firefox not supported | browser-firefox priority-critical severity-important | <!-- @browser: Firefox 63.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:63.0) Gecko/20100101 Firefox/63.0 -->
<!-- @reported_with: web -->
**URL**: https://business.apple.com/
**Browser / Version**: Firefox 63.0
**Operating System**: Windows 10
**Tested Another Browser**: Yes
**Problem type**: Site is not usable
**Description**: Site does not allow firefox to use it
**Steps to Reproduce**:
[](https://webcompat.com/uploads/2018/9/01f9c952-0f10-4adf-8874-c898bbbf9eb5.jpg)
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | business.apple.com - Firefox not supported - <!-- @browser: Firefox 63.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:63.0) Gecko/20100101 Firefox/63.0 -->
<!-- @reported_with: web -->
**URL**: https://business.apple.com/
**Browser / Version**: Firefox 63.0
**Operating System**: Windows 10
**Tested Another Browser**: Yes
**Problem type**: Site is not usable
**Description**: Site does not allow firefox to use it
**Steps to Reproduce**:
[](https://webcompat.com/uploads/2018/9/01f9c952-0f10-4adf-8874-c898bbbf9eb5.jpg)
_From [webcompat.com](https://webcompat.com/) with ❤️_ | non_infrastructure | business apple com firefox not supported url browser version firefox operating system windows tested another browser yes problem type site is not usable description site does not allow firefox to use it steps to reproduce from with ❤️ | 0 |
19,170 | 13,197,717,531 | IssuesEvent | 2020-08-14 00:00:01 | APSIMInitiative/ApsimX | https://api.github.com/repos/APSIMInitiative/ApsimX | closed | Namespace for Plant code and folder in studio are different | interface/infrastructure refactor | Plant.cs has
namespace Models.PMF
but the folder in studio is called "Plant" | 1.0 | Namespace for Plant code and folder in studio are different - Plant.cs has
namespace Models.PMF
but the folder in studio is called "Plant" | infrastructure | namespace for plant code and folder in studio are different plant cs has namespace models pmf but the folder in studio is called plant | 1 |
24,464 | 17,288,568,653 | IssuesEvent | 2021-07-24 08:09:50 | tilemill-project/tilemill | https://api.github.com/repos/tilemill-project/tilemill | opened | windows support | infrastructure | So I'm currently adding back windows support to mapnik (and CMake support, too). That does include node-mapnik and python-mapnik, too.
Just wanted to let you know, that it might be possible to build tilemill on windows again.
| 1.0 | windows support - So I'm currently adding back windows support to mapnik (and CMake support, too). That does include node-mapnik and python-mapnik, too.
Just wanted to let you know, that it might be possible to build tilemill on windows again.
| infrastructure | windows support so i m currently adding back windows support to mapnik and cmake support too that does include node mapnik and python mapnik too just wanted to let you know that it might be possible to build tilemill on windows again | 1 |
112,147 | 24,235,582,082 | IssuesEvent | 2022-09-26 22:46:36 | robert-altom/test | https://api.github.com/repos/robert-altom/test | closed | Add an overview to License section, so users are not confused what restrictions they have | documentation in code review gitlab | Go to License, replace
"AltUnity Tester is licensed under the GNU General Public License v3.0."
with
AltUnity Tester is licensed under the GNU General Public License v3.0. This means that the end user has the freedom to use, share and modify the software.
It's a copyleft license, which means that if a derivative work is distributed, it must be distributed under the same or equivalent license terms.
As long as it's added to the development build and not distributed, it has no restrictions for the end user.
---
<sub>You can find the original issue from GitLab [here](https://gitlab.com/altom/altunity/altunitytester/-/issues/410).</sub>
| 1.0 | Add an overview to License section, so users are not confused what restrictions they have - Go to License, replace
"AltUnity Tester is licensed under the GNU General Public License v3.0."
with
AltUnity Tester is licensed under the GNU General Public License v3.0. This means that the end user has the freedom to use, share and modify the software.
It's a copyleft license, which means that if a derivative work is distributed, it must be distributed under the same or equivalent license terms.
As long as it's added to the development build and not distributed, it has no restrictions for the end user.
---
<sub>You can find the original issue from GitLab [here](https://gitlab.com/altom/altunity/altunitytester/-/issues/410).</sub>
| non_infrastructure | add an overview to license section so users are not confused what restrictions they have go to license replace altunity tester is licensed under the gnu general public license with altunity tester is licensed under the gnu general public license this means that the end user has the freedom to use share and modify the software it s a copyleft license which means that if any derivative work must be distributed it should be distributed under the same or equivalent license terms as long as it s added to the development build and not distributed it has no restrictions for the end user you can find the original issue from gitlab | 0 |
14,351 | 10,761,281,722 | IssuesEvent | 2019-10-31 20:26:14 | NachoMemes/NachoMemes | https://api.github.com/repos/NachoMemes/NachoMemes | closed | Utilize a Github site to host image/template data | infrastructure | Use CI to build a GitHub pages site. This is the source that the bot will use to refresh templates. | 1.0 | Utilize a Github site to host image/template data - Use CI to build a GitHub pages site. This is the source that the bot will use to refresh templates. | infrastructure | utilize a github site to host image template data use ci to build a github pages site this is the source that the bot will use to refresh templates | 1 |
23,133 | 15,838,655,731 | IssuesEvent | 2021-04-06 22:57:32 | ManimCommunity/manim | https://api.github.com/repos/ManimCommunity/manim | opened | flake8 check by pre-commit.ci and flake8 check in our normal pipeline don't yield same result | bug infrastructure | ## Description of bug / unexpected behavior
As requested by @kolibril13, I have enabled pre-commit.ci for this repository. It seems that the flake8 check we run usually in our pipeline, and the check run by pre-commit.ci are configured differently: compare
https://results.pre-commit.ci/run/github/265122478/1617749086.1i_sQPmmTjeknEIW3BXU8A
with
https://github.com/ManimCommunity/manim/pull/1262/checks?check_run_id=2282701850
Also, on a related note: if we have the status check by pre-commit.ci, do we still need to run our custom back/isort/flake8 pipelines, or could they be removed then? CC @naveen521kk.
## Expected behavior
`flake8` should be run with the same configuration in both cases, except if there is a good reason not to do so.
| 1.0 | flake8 check by pre-commit.ci and flake8 check in our normal pipeline don't yield same result - ## Description of bug / unexpected behavior
As requested by @kolibril13, I have enabled pre-commit.ci for this repository. It seems that the flake8 check we run usually in our pipeline, and the check run by pre-commit.ci are configured differently: compare
https://results.pre-commit.ci/run/github/265122478/1617749086.1i_sQPmmTjeknEIW3BXU8A
with
https://github.com/ManimCommunity/manim/pull/1262/checks?check_run_id=2282701850
Also, on a related note: if we have the status check by pre-commit.ci, do we still need to run our custom back/isort/flake8 pipelines, or could they be removed then? CC @naveen521kk.
## Expected behavior
`flake8` should be run with the same configuration in both cases, except if there is a good reason not to do so.
| infrastructure | check by pre commit ci and check in our normal pipeline don t yield same result description of bug unexpected behavior as requested by i have enabled pre commit ci for this repository it seems that the check we run usually in our pipeline and the check run by pre commit ci are configured differently compare with also on a related note if we have the status check by pre commit ci do we still need to run our custom back isort pipelines or could they be removed then cc expected behavior should be run with the same configuration in both cases except if there is a good reason not to do so | 1 |
14,214 | 10,706,823,510 | IssuesEvent | 2019-10-24 16:07:17 | Unidata/MetPy | https://api.github.com/repos/Unidata/MetPy | closed | Dropping support for Python 2.7 | Area: Infrastructure Type: Maintenance | So given that the ecosystem is [dropping support for 2.7 in earnest](http://python3statement.org), the time has come for us to start planning for MetPy to move on from Legacy Python:
- Core Python developers will [stop support for Python 2.7 January 1, 2020](https://pythonclock.org)
- NumPy feature releases will be [Python 3 only starting January 1, 2019](https://docs.scipy.org/doc/numpy/neps/dropping-python2.7-proposal.html), and support for the last release supporting Python 2 will end January 1, 2020.
- XArray will drop [2.7 January 1, 2019 as well](https://github.com/pydata/xarray/issues/1830)
- Matplotlib's 3.0 release, tentatively Summer 2018, [will be Python 3 only](https://mail.python.org/pipermail/matplotlib-devel/2017-October/000892.html); the current 2.2 release will be the last long term release that supports 2.7.
Given these events, and based on preliminary input from Unidata's Users' committee, I propose that MetPy drop support for Python 2.7 in the Fall of 2019. That implies that the Fall 2019 release of MetPy will be the first that supports only Python 3. The release before that (Spring or Summer 2019) will be the last that supports Python 2.7; given MetPy's relatively nascent status, I also propose that this release NOT have any particular long term support status.
I imagine this will impact some of our users, so I welcome feedback here. Note that we don't have the capacity to maintain all of MetPy's upstream dependencies, so some of this transition is beyond our control. | 1.0 | Dropping support for Python 2.7 - So given that the ecosystem is [dropping support for 2.7 in earnest](http://python3statement.org), the time has come for us to start planning for MetPy to move on from Legacy Python:
- Core Python developers will [stop support for Python 2.7 January 1, 2020](https://pythonclock.org)
- NumPy feature releases will be [Python 3 only starting January 1, 2019](https://docs.scipy.org/doc/numpy/neps/dropping-python2.7-proposal.html), and support for the last release supporting Python 2 will end January 1, 2020.
- XArray will drop [2.7 January 1, 2019 as well](https://github.com/pydata/xarray/issues/1830)
- Matplotlib's 3.0 release, tentatively Summer 2018, [will be Python 3 only](https://mail.python.org/pipermail/matplotlib-devel/2017-October/000892.html); the current 2.2 release will be the last long term release that supports 2.7.
Given these events, and based on preliminary input from Unidata's Users' committee, I propose that MetPy drop support for Python 2.7 in the Fall of 2019. That implies that the Fall 2019 release of MetPy will be the first that supports only Python 3. The release before that (Spring or Summer 2019) will be the last that supports Python 2.7; given MetPy's relatively nascent status, I also propose that this release NOT have any particular long term support status.
I imagine this will impact some of our users, so I welcome feedback here. Note that we don't have the capacity to maintain all of MetPy's upstream dependencies, so some of this transition is beyond our control. | infrastructure | dropping support for python so given that the ecosystem is the time has come for us to start planning for metpy to move on from legacy python core python developers will numpy feature releases will be and support for the last release supporting python will end january xarray will drop matplotlib s release tentatively summer the current release will be the last long term release that supports given these events and based on preliminary input from unidata s users committee i propose that metpy drop support for python in the fall of that implies that the fall release of metpy will be the first that supports only python the release before that spring or summer will be the last that supports python given metpy s relatively nascent status i also propose that this release not have any particular long term support status i imagine this will impact some of our users so i welcome feedback here note that we don t have the capacity to maintain all of metpy s upstream dependencies so some of this transition is beyond our control | 1 |
6,630 | 6,538,596,115 | IssuesEvent | 2017-09-01 07:15:28 | openshift/origin | https://api.github.com/repos/openshift/origin | closed | cadvisor metrics in master are inconsistent - not all calls return the correct metrics | component/kubernetes priority/P1 sig/cluster-infrastructure | Running the metrics dump from master (in this case alpha.0 and latest) I get only a portion of metrics each time. The number of metrics should be consistent from run to run.
```
○ oc get --raw /metrics/cadvisor --server https://10.1.2.2:10250 | grep container_name | wc -l
338
○ oc get --raw /metrics/cadvisor --server https://10.1.2.2:10250 | grep container_name | wc -l
338
○ oc get --raw /metrics/cadvisor --server https://10.1.2.2:10250 | grep container_name | wc -l
0
○ oc get --raw /metrics/cadvisor --server https://10.1.2.2:10250 | grep container_name | wc -l
82
○ oc get --raw /metrics/cadvisor --server https://10.1.2.2:10250 | grep container_name | wc -l
82
○ oc get --raw /metrics/cadvisor --server https://10.1.2.2:10250 | grep container_name | wc -l
82
○ oc get --raw /metrics/cadvisor --server https://10.1.2.2:10250 | grep container_name | wc -l
82
○ oc get --raw /metrics/cadvisor --server https://10.1.2.2:10250 | grep container_name | wc -l
0
○ oc get --raw /metrics/cadvisor --server https://10.1.2.2:10250 | grep container_name | wc -l
82
○ oc get --raw /metrics/cadvisor --server https://10.1.2.2:10250 | grep container_name | wc -l
82
○ oc get --raw /metrics/cadvisor --server https://10.1.2.2:10250 | grep container_name | wc -l
50
○ oc get --raw /metrics/cadvisor --server https://10.1.2.2:10250 | grep container_name | wc -l
568
○ oc get --raw /metrics/cadvisor --server https://10.1.2.2:10250 | grep container_name | wc -l
82
○ oc get --raw /metrics/cadvisor --server https://10.1.2.2:10250 | grep container_name | wc -l
82
○ oc get --raw /metrics/cadvisor --server https://10.1.2.2:10250 | grep container_name | wc -l
82
○ oc get --raw /metrics/cadvisor --server https://10.1.2.2:10250 | grep container_name | wc -l
0
○ oc get --raw /metrics/cadvisor --server https://10.1.2.2:10250 | grep container_name | wc -l
82
○ oc get --raw /metrics/cadvisor --server https://10.1.2.2:10250 | grep container_name | wc -l
82
○ oc get --raw /metrics/cadvisor --server https://10.1.2.2:10250 | grep container_name | wc -l
0
○ oc get --raw /metrics/cadvisor --server https://10.1.2.2:10250 | grep container_name | wc -l
82
○ oc get --raw /metrics/cadvisor --server https://10.1.2.2:10250 | grep container_name | wc -l
0
○ oc get --raw /metrics/cadvisor --server https://10.1.2.2:10250 | grep container_name | wc -l
82
``` | 1.0 | cadvisor metrics in master are inconsistent - not all calls return the correct metrics - Running the metrics dump from master (in this case alpha.0 and latest) I get only a portion of metrics each time. The number of metrics should be consistent from run to run.
```
○ oc get --raw /metrics/cadvisor --server https://10.1.2.2:10250 | grep container_name | wc -l
338
○ oc get --raw /metrics/cadvisor --server https://10.1.2.2:10250 | grep container_name | wc -l
338
○ oc get --raw /metrics/cadvisor --server https://10.1.2.2:10250 | grep container_name | wc -l
0
○ oc get --raw /metrics/cadvisor --server https://10.1.2.2:10250 | grep container_name | wc -l
82
○ oc get --raw /metrics/cadvisor --server https://10.1.2.2:10250 | grep container_name | wc -l
82
○ oc get --raw /metrics/cadvisor --server https://10.1.2.2:10250 | grep container_name | wc -l
82
○ oc get --raw /metrics/cadvisor --server https://10.1.2.2:10250 | grep container_name | wc -l
82
○ oc get --raw /metrics/cadvisor --server https://10.1.2.2:10250 | grep container_name | wc -l
0
○ oc get --raw /metrics/cadvisor --server https://10.1.2.2:10250 | grep container_name | wc -l
82
○ oc get --raw /metrics/cadvisor --server https://10.1.2.2:10250 | grep container_name | wc -l
82
○ oc get --raw /metrics/cadvisor --server https://10.1.2.2:10250 | grep container_name | wc -l
50
○ oc get --raw /metrics/cadvisor --server https://10.1.2.2:10250 | grep container_name | wc -l
568
○ oc get --raw /metrics/cadvisor --server https://10.1.2.2:10250 | grep container_name | wc -l
82
○ oc get --raw /metrics/cadvisor --server https://10.1.2.2:10250 | grep container_name | wc -l
82
○ oc get --raw /metrics/cadvisor --server https://10.1.2.2:10250 | grep container_name | wc -l
82
○ oc get --raw /metrics/cadvisor --server https://10.1.2.2:10250 | grep container_name | wc -l
0
○ oc get --raw /metrics/cadvisor --server https://10.1.2.2:10250 | grep container_name | wc -l
82
○ oc get --raw /metrics/cadvisor --server https://10.1.2.2:10250 | grep container_name | wc -l
82
○ oc get --raw /metrics/cadvisor --server https://10.1.2.2:10250 | grep container_name | wc -l
0
○ oc get --raw /metrics/cadvisor --server https://10.1.2.2:10250 | grep container_name | wc -l
82
○ oc get --raw /metrics/cadvisor --server https://10.1.2.2:10250 | grep container_name | wc -l
0
○ oc get --raw /metrics/cadvisor --server https://10.1.2.2:10250 | grep container_name | wc -l
82
``` | infrastructure | cadvisor metrics in master are inconsistent not all calls return the correct metrics running the metrics dump from master in this case alpha and latest i get only a portion of metrics each time the number of metrics should be consistent from run to run ○ oc get raw metrics cadvisor server grep container name wc l ○ oc get raw metrics cadvisor server grep container name wc l ○ oc get raw metrics cadvisor server grep container name wc l ○ oc get raw metrics cadvisor server grep container name wc l ○ oc get raw metrics cadvisor server grep container name wc l ○ oc get raw metrics cadvisor server grep container name wc l ○ oc get raw metrics cadvisor server grep container name wc l ○ oc get raw metrics cadvisor server grep container name wc l ○ oc get raw metrics cadvisor server grep container name wc l ○ oc get raw metrics cadvisor server grep container name wc l ○ oc get raw metrics cadvisor server grep container name wc l ○ oc get raw metrics cadvisor server grep container name wc l ○ oc get raw metrics cadvisor server grep container name wc l ○ oc get raw metrics cadvisor server grep container name wc l ○ oc get raw metrics cadvisor server grep container name wc l ○ oc get raw metrics cadvisor server grep container name wc l ○ oc get raw metrics cadvisor server grep container name wc l ○ oc get raw metrics cadvisor server grep container name wc l ○ oc get raw metrics cadvisor server grep container name wc l ○ oc get raw metrics cadvisor server grep container name wc l ○ oc get raw metrics cadvisor server grep container name wc l ○ oc get raw metrics cadvisor server grep container name wc l | 1 |
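To quantify the flapping that the shell loop in the record above demonstrates, the same probe can be repeated programmatically; the sketch below (Node 18+ with global fetch assumed) hits the kubelet endpoint from the report and prints the container_name series count per scrape. The bearer-token handling and the disabled TLS verification are placeholders for illustration only.

```ts
// Sketch only: repeat the /metrics/cadvisor scrape and tally the results.
process.env.NODE_TLS_REJECT_UNAUTHORIZED = '0'; // illustration only, not for production

const endpoint = 'https://10.1.2.2:10250/metrics/cadvisor'; // server + path from the report
const token = process.env.KUBELET_TOKEN ?? ''; // assumed way of passing credentials

async function countContainerSeries(): Promise<number> {
  const res = await fetch(endpoint, {
    headers: token ? { Authorization: `Bearer ${token}` } : {},
  });
  const body = await res.text();
  return body.split('\n').filter((line) => line.includes('container_name')).length;
}

async function main(): Promise<void> {
  const counts: number[] = [];
  for (let i = 0; i < 20; i++) {
    counts.push(await countContainerSeries());
  }
  // A stable exporter should produce one repeated number; the report shows 0..568.
  console.log('container_name series per scrape:', counts.join(', '));
}

main().catch((err) => console.error(err));
```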
7,650 | 7,042,276,759 | IssuesEvent | 2017-12-30 10:00:33 | scenarioo/scenarioo | https://api.github.com/repos/scenarioo/scenarioo | closed | E2E Tests: Deploy-Self-Docu - Keep old archived docu builds | 1 - Ready Prio-1 topic:infrastructure | As a scenarioo developer I want to ensure that some old docu builds are marked to be kept and not removed on new e2e-tests run on CI.
Implemented Solution:
Manually run special jenkins job "archive-self-docu" to archive all the currently present docu builds, those will be kept on next run of the e2e-tests.
**Old description:**
To not use too much disk space on amazon, we should have a job that sometimes removes the outdated old scenarioo-self-docu builds.
Acceptance criteria
- remove old self-docu builds that are outdated (e.g. each time when deploying the self docu through a simple build script in build job 'deploy-self-docu')
- Only the last 5-10 builds are kept for the self-docu
- the last successful build is kept in any case
- the most recent build is kept in any case
- specific builds (e.g. the major release builds for each version) can be marked to be kept (e.g. through a simple config file or a properties in the cleanup build script), marked builds are never removed.
Out of Scope:
- for now we will not solve this directly in scenarioo for all our projects (i.e. letting the user configure in the UI which builds to keep). This would probably be gold-plating, since automating it with a simple script should be quite easy.
<!---
@huboard:{"order":0.583984375,"milestone_order":415.0,"custom_state":""}
-->
| 1.0 | E2E Tests: Deploy-Self-Docu - Keep old archived docu builds - As a scenarioo developer I want to ensure that some old docu builds are marked to be kept and not removed on new e2e-tests run on CI.
Implemented Solution:
Manually run special jenkins job "archive-self-docu" to archive all the currently present docu builds, those will be kept on next run of the e2e-tests.
**Old description:**
To not use too much disk space on amazon, we should have a job that sometimes removes the outdated old scenarioo-self-docu builds.
Acceptance criterias
- remove old self-docu builds that are outdated (e.g. each time when deploying the self docu through a simple build script in build job 'deploy-self-docu')
- Only the last 5-10 builds are kept for the self-docu
- the last successful build is kept in any case
- the most recent build is kept in any case
- specific builds (e.g. the major release builds for each version) can be marked to be kept (e.g. through a simple config file or a properties in the cleanup build script), marked builds are never removed.
Out of Scope:
- for now we not solve this directly in scenarioo for all our projects, such that user can configure in the UI which builds to keep. This probably would be gold-plating, since automating this with a simple script should be quite easy.
<!---
@huboard:{"order":0.583984375,"milestone_order":415.0,"custom_state":""}
-->
| infrastructure | tests deploy self docu keep old archived docu builds as a scenarioo developer i want to ensure that some old docu builds are marked to be kept and not removed on new tests run on ci implemented solution manually run special jenkins job archive self docu to archive all the currently present docu builds those will be kept on next run of the tests old description to not use too much disk space on amazon we should have a job that sometimes removes the outdated old scenarioo self docu builds acceptance criterias remove old self docu builds that are outdated e g each time when deploying the self docu through a simple build script in build job deploy self docu only the last builds are kept for the self docu the last successful build is kept in any case the most recent build is kept in any case specific builds e g the major release builds for each version can be marked to be kept e g through a simple config file or a properties in the cleanup build script marked builds are never removed out of scope for now we not solve this directly in scenarioo for all our projects such that user can configure in the ui which builds to keep this probably would be gold plating since automating this with a simple script should be quite easy huboard order milestone order custom state | 1 |
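The acceptance criteria above ask for a simple cleanup script rather than an in-product feature; the sketch below shows one possible shape of such a script. The docu directory location, the keep-list file name and the success-marker convention are invented for illustration and are not Scenarioo's real build layout.

```ts
// Illustrative cleanup: keep the newest N builds, the last successful build,
// and any build named in an explicit keep list; remove the rest.
import * as fs from 'fs';
import * as path from 'path';

const DOCU_DIR = '/var/scenarioo/self-docu';              // assumed location
const KEEP_FILE = path.join(DOCU_DIR, 'keep-builds.txt'); // assumed keep list
const KEEP_LAST = 10;

const keepList = new Set(
  fs.existsSync(KEEP_FILE)
    ? fs.readFileSync(KEEP_FILE, 'utf8').split('\n').filter(Boolean)
    : [],
);

// Builds sorted newest first by modification time.
const builds = fs
  .readdirSync(DOCU_DIR, { withFileTypes: true })
  .filter((entry) => entry.isDirectory())
  .map((entry) => entry.name)
  .sort(
    (a, b) =>
      fs.statSync(path.join(DOCU_DIR, b)).mtimeMs -
      fs.statSync(path.join(DOCU_DIR, a)).mtimeMs,
  );

// Assumed convention: a successful build leaves a marker file behind.
const lastSuccessful = builds.find((name) =>
  fs.existsSync(path.join(DOCU_DIR, name, 'build-successful.marker')),
);

builds.forEach((name, index) => {
  const keep =
    index < KEEP_LAST ||       // most recent builds (index 0 is the newest)
    name === lastSuccessful || // last successful build is always kept
    keepList.has(name);        // explicitly marked builds are never removed
  if (!keep) {
    fs.rmSync(path.join(DOCU_DIR, name), { recursive: true, force: true });
    console.log(`removed outdated docu build ${name}`);
  }
});
```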
540,941 | 15,819,480,804 | IssuesEvent | 2021-04-05 17:33:38 | cagov/ui-claim-tracker | https://api.github.com/repos/cagov/ui-claim-tracker | closed | Choose and integrate an internationalization library | Engineering Priority Size: M | ### Description
<!-- Describe what needs to be done in detail -->
Integrate a translation library into the UI Claim Tracker application. By default we are leaning towards [i18next-react](https://react.i18next.com/) like our [previous project](https://github.com/cagov/unemployment), but we should do a bit of research if there is some NextJS bonus or other best practice that has emerged.
### Acceptance Criteria
- [x] Confirmed i18next-react is a reasonable choice for our project
- [x] Integrated i18next-react into the project base
- [x] Document choices in our [Technical Foundations doc](https://docs.google.com/document/d/1i2pj7_3n0VjqqztOepkd2VcG7JA8N9Tg5P76GSI8tzw/edit#)
| 1.0 | Choose and integrate an internationalization library - ### Description
<!-- Describe what needs to be done in detail -->
Integrate a translation library into the UI Claim Tracker application. By default we are leaning towards [i18next-react](https://react.i18next.com/) like our [previous project](https://github.com/cagov/unemployment), but we should do a bit of research if there is some NextJS bonus or other best practice that has emerged.
### Acceptance Criteria
- [x] Confirmed i18next-react is a reasonable choice for our project
- [x] Integrated i18next-react into the project base
- [x] Document choices in our [Technical Foundations doc](https://docs.google.com/document/d/1i2pj7_3n0VjqqztOepkd2VcG7JA8N9Tg5P76GSI8tzw/edit#)
| non_infrastructure | choose and integrate an internationalization library description integrate a translation library into the ui claim tracker application by default we are leaning towards like our but we should do a bit of research if there is some nextjs bonus or other best practice that has emerged acceptance criteria confirmed react is a reasonable choice for our project integrated react into the project base document choices in our | 0 |
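Because the record above is about adopting i18next-react, a minimal bootstrap of that library is shown below as a reference; the resource keys, languages and module placement are placeholders rather than the claim-tracker project's actual choices.

```ts
// Minimal i18next + react-i18next setup; resources here are placeholders.
import i18n from 'i18next';
import { initReactI18next } from 'react-i18next';

const resources = {
  en: { translation: { claimStatus: 'Claim status' } },
  es: { translation: { claimStatus: 'Estado de la solicitud' } },
};

i18n
  .use(initReactI18next) // wires i18next into the React context
  .init({
    resources,
    lng: 'en',
    fallbackLng: 'en',
    interpolation: { escapeValue: false }, // React already escapes rendered output
  });

export default i18n;
```

Components then read strings through react-i18next's useTranslation hook.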
24,801 | 17,787,454,706 | IssuesEvent | 2021-08-31 12:48:47 | deckhouse/deckhouse | https://api.github.com/repos/deckhouse/deckhouse | opened | "remove_csi_taints" did not execute even though it ought to | type/bug area/cluster-and-infrastructure | [Hook](https://github.com/deckhouse/deckhouse/blob/main/modules/040-node-manager/hooks/remove_csi_taints.go) did not execute. After deleting the deckhouse Pod, the problem went away. It could be a problem with filtering or with ExecuteHookOnEvents/Execution parameters. | 1.0 | "remove_csi_taints" did not execute even though it ought to - [Hook](https://github.com/deckhouse/deckhouse/blob/main/modules/040-node-manager/hooks/remove_csi_taints.go) did not execute. After deleting the deckhouse Pod, the problem went away. It could be a problem with filtering or with ExecuteHookOnEvents/Execution parameters. | infrastructure | remove csi taints did not execute even though it ought to did not execute after deleting the deckhouse pod the problem went away it could be a problem with filtering or with executehookonevents execution parameters | 1 |
19,178 | 13,199,950,227 | IssuesEvent | 2020-08-14 07:12:00 | thinktecture/relayserver | https://api.github.com/repos/thinktecture/relayserver | opened | Enhance discovery document handling | enhancement infrastructure | Loading the discovery document from the server might fail.
When the connector starts and reads the config the server might not be started yet or there is a connectivity issue.
The configuration system should be able to handle that.
The connector should reflect the state and, if it is not yet configured, should try to load the config when connecting (i.e. within ConnectAsync).
However, this should not affect the handling of the http client for the relayserver and the automatic access token management. | 1.0 | Enhance discovery document handling - Loading the discovery document from the server might fail.
When the connector starts and reads the config the server might not be started yet or there is a connectivity issue.
The configuration system should be able to handle that.
The connector should reflect the state and, if it is not yet configured, should try to load the config when connecting (i.e. within ConnectAsync).
However, this should not affect the handling of the http client for the relayserver and the automatic access token management. | infrastructure | enhance discovery document handling loading the discovery document from the server might fail when the connector starts and reads the config the server might not be started yet or there is a connectivity issue the configuration system should be able to handle that the connector should reflect the state and if its not yet configured should try to load the config when connecting i e within connectasync however this should not affect the handling of the http client for the relayserver and the automatic access token management | 1 |
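The behaviour described above (start without a discovery document and fetch it lazily when the connector actually connects) can be sketched as follows. This is a language-agnostic illustration only; the real connector is .NET, and the class and member names below are invented, not taken from the RelayServer codebase.

```ts
// Illustration of "load the config when connecting" with a simple retry,
// kept separate from the relay HTTP client and access-token management.
interface DiscoveryDocument {
  connectorEndpoint: string;
}

export class Connector {
  private config: DiscoveryDocument | null = null;

  constructor(private readonly discoveryUrl: string) {}

  async connectAsync(): Promise<void> {
    if (this.config === null) {
      this.config = await this.loadDiscoveryDocument();
    }
    // ...open the actual connection using this.config.connectorEndpoint...
  }

  private async loadDiscoveryDocument(retries = 3): Promise<DiscoveryDocument> {
    for (let attempt = 1; attempt <= retries; attempt++) {
      try {
        const res = await fetch(this.discoveryUrl); // global fetch assumed
        if (res.ok) {
          return (await res.json()) as DiscoveryDocument;
        }
      } catch {
        // server not started yet or a connectivity issue: retry below
      }
      await new Promise((resolve) => setTimeout(resolve, 1000 * attempt));
    }
    throw new Error('discovery document could not be loaded');
  }
}
```

A caller would construct it with a placeholder URL such as new Connector('https://relay.example.com/.well-known/discovery') and await connectAsync(); the URL is hypothetical.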
88,162 | 11,036,175,569 | IssuesEvent | 2019-12-07 18:49:59 | Students-of-the-city-of-Kostroma/Ray-of-hope | https://api.github.com/repos/Students-of-the-city-of-Kostroma/Ray-of-hope | opened | Feed mockups in Android do not match the specification | Android Bug Design O5 PR5 Sprint 9 Лента постов | Epic #157 Story #170 #188 #195
The post feed mockups in the Android application do not match the [story](https://docs.google.com/document/d/1TP_bUWStqtJZPt8JGelRgwGyq9PCXUbF2qKxWsSFwJQ/edit) and the [description of post types](https://docs.google.com/document/d/1p0aFXnl3jeQWKu_cQHuSFoc59WKs-PMpChTdkxWICBA/edit) on the following points:
- the mockups show an option to select the "Only in my city" filter;
- there is no mockup showing the "Need" post type;
- the publication date and time must be placed in the bottom right corner;
- photos are displayed incorrectly;
- the start and end dates, as well as the venue, are displayed incorrectly for posts of the "Event" type.
Expected result: the mockups are brought in line with the documentation, and a mockup depicting the "Need" post type is created. | 1.0 | Feed mockups in Android do not match the specification - Epic #157 Story #170 #188 #195
The post feed mockups in the Android application do not match the [story](https://docs.google.com/document/d/1TP_bUWStqtJZPt8JGelRgwGyq9PCXUbF2qKxWsSFwJQ/edit) and the [description of post types](https://docs.google.com/document/d/1p0aFXnl3jeQWKu_cQHuSFoc59WKs-PMpChTdkxWICBA/edit) on the following points:
- the mockups show an option to select the "Only in my city" filter;
- there is no mockup showing the "Need" post type;
- the publication date and time must be placed in the bottom right corner;
- photos are displayed incorrectly;
- the start and end dates, as well as the venue, are displayed incorrectly for posts of the "Event" type.
Expected result: the mockups are brought in line with the documentation, and a mockup depicting the "Need" post type is created. | non_infrastructure | feed mockups in android do not match the specification epic story the post feed mockups in the android application do not match the story and the description of post types on the following points the mockups show an option to select the only in my city filter there is no mockup showing the need post type the publication date and time must be placed in the bottom right corner photos are displayed incorrectly the start and end dates as well as the venue are displayed incorrectly for posts of the event type expected result the mockups are brought in line with the documentation and a mockup depicting the need post type is created | 0 |
14,424 | 10,854,788,349 | IssuesEvent | 2019-11-13 17:02:44 | dart-lang/site-www | https://api.github.com/repos/dart-lang/site-www | closed | Either support zsh or clarify that only bash is supported | e0-minutes e2-days help wanted infrastructure p3-low | I am trying to run the site and I am facing an issue with the installation scripts.
At the step "3. Run installation scripts" I can't get past the step:
> `./tool/before-install.sh` # install core set of required tools
The error I get is:
```
dart-lang/site-www [master] » ./tool/before-install.sh
Dart SDK appears to be installed: dart is /usr/local/bin/dart
Dart VM version: 2.4.1 (Wed Aug 7 13:15:56 2019 +0200) on "macos_x64"
tool/shared/before-install.sh: line 26: travis_fold: command not found
```
I installed `travis-fold` using `npm install -g travis-fold` but I still see the error.
I use a MacOs 10.15.1, iTerm2 and zsh.
Thankfully I was able to continue the setup manually by running `npm install` and `bundle install`, then I've been able to run the site by using `jekyll serve --livereload` for example. | 1.0 | Either support zsh or clarify that only bash is supported - I am trying to run the site and I am facing an issue with the installation scripts.
At the step "3. Run installation scripts" I can't get past the step:
> `./tool/before-install.sh` # install core set of required tools
The error I get is:
```
dart-lang/site-www [master] » ./tool/before-install.sh
Dart SDK appears to be installed: dart is /usr/local/bin/dart
Dart VM version: 2.4.1 (Wed Aug 7 13:15:56 2019 +0200) on "macos_x64"
tool/shared/before-install.sh: line 26: travis_fold: command not found
```
I installed `travis-fold` using `npm install -g travis-fold` but I still see the error.
I use a MacOs 10.15.1, iTerm2 and zsh.
Thankfully I was able to continue the setup manually by running `npm install` and `bundle install`, then I've been able to run the site by using `jekyll serve --livereload` for example. | infrastructure | either support zsh or clarify that only bash is supported i am trying to run the site and i am facing an issue with the installation scripts at the step run installation scripts i can t get past the step tool before install sh install core set of required tools the error i get is dart lang site www » tool before install sh dart sdk appears to be installed dart is usr local bin dart dart vm version wed aug on macos tool shared before install sh line travis fold command not found i installed travis fold using npm install g travis fold but i still see the error i use a macos and zsh thankfully i was able to continue the setup manually by running npm install and bundle install then i ve been able to run the site by using jekyll serve livereload for example | 1 |
260,503 | 22,626,784,692 | IssuesEvent | 2022-06-30 11:24:29 | mozilla-mobile/fenix | https://api.github.com/repos/mozilla-mobile/fenix | opened | Intermittent UI test failure - < SmokeTest.goToHomeScreenTopToolbarTest> | b:crash eng:intermittent-test eng:ui-test | ### Firebase Test Run: [Firebase link](https://console.firebase.google.com/u/0/project/moz-fenix/testlab/histories/bh.66b7091e15d53d45/matrices/6888094366571130398/executions/bs.8caf093cf2b3198a/testcases/1/test-cases)
### Stacktrace:
java.lang.RuntimeException: Error while connecting UiAutomation@a425af5[id=-1, flags=0]
at android.app.UiAutomation.connect(UiAutomation.java:259)
at android.app.Instrumentation.getUiAutomation(Instrumentation.java:2176)
at androidx.test.uiautomator.UiDevice.getUiAutomation(UiDevice.java:1129)
at androidx.test.uiautomator.QueryController.<init>(QueryController.java:95)
at androidx.test.uiautomator.UiDevice.<init>(UiDevice.java:109)
at androidx.test.uiautomator.UiDevice.getInstance(UiDevice.java:261)
at org.mozilla.fenix.ui.SmokeTest.<init>(SmokeTest.kt:64)
### Build: 6/29 Main
### Notes: Similar with: #25416 #25414 #25342 #25341 #25453 #25468 #25579 #25578 #25624 #25625 #25628 #25630 #25642 #25655 #25656 #25657 #25700 #25711 #25713 #25714 #25715 #25716 #25727 #25742 #25747 #25762 #25780 #25797 #25801
| 2.0 | Intermittent UI test failure - < SmokeTest.goToHomeScreenTopToolbarTest> - ### Firebase Test Run: [Firebase link](https://console.firebase.google.com/u/0/project/moz-fenix/testlab/histories/bh.66b7091e15d53d45/matrices/6888094366571130398/executions/bs.8caf093cf2b3198a/testcases/1/test-cases)
### Stacktrace:
java.lang.RuntimeException: Error while connecting UiAutomation@a425af5[id=-1, flags=0]
at android.app.UiAutomation.connect(UiAutomation.java:259)
at android.app.Instrumentation.getUiAutomation(Instrumentation.java:2176)
at androidx.test.uiautomator.UiDevice.getUiAutomation(UiDevice.java:1129)
at androidx.test.uiautomator.QueryController.<init>(QueryController.java:95)
at androidx.test.uiautomator.UiDevice.<init>(UiDevice.java:109)
at androidx.test.uiautomator.UiDevice.getInstance(UiDevice.java:261)
at org.mozilla.fenix.ui.SmokeTest.<init>(SmokeTest.kt:64)
### Build: 6/29 Main
### Notes: Similar with: #25416 #25414 #25342 #25341 #25453 #25468 #25579 #25578 #25624 #25625 #25628 #25630 #25642 #25655 #25656 #25657 #25700 #25711 #25713 #25714 #25715 #25716 #25727 #25742 #25747 #25762 #25780 #25797 #25801
| non_infrastructure | intermittent ui test failure firebase test run stacktrace java lang runtimeexception error while connecting uiautomation at android app uiautomation connect uiautomation java at android app instrumentation getuiautomation instrumentation java at androidx test uiautomator uidevice getuiautomation uidevice java at androidx test uiautomator querycontroller querycontroller java at androidx test uiautomator uidevice uidevice java at androidx test uiautomator uidevice getinstance uidevice java at org mozilla fenix ui smoketest smoketest kt build main notes similar with | 0 |
16,583 | 12,058,983,745 | IssuesEvent | 2020-04-15 18:26:43 | google/iree | https://api.github.com/repos/google/iree | closed | Clang 9.0.1 fails when building absl due to unsupported debug levels | infrastructure | absl appears to have issues building at head with clang 9.0.1.
This doesn't occur if the user has Clang6.0.1. | 1.0 | Clang 9.0.1 fails when building absl due to unsupported debug levels - absl appears to have issues building at head with clang 9.0.1.
This doesn't occur if the user has Clang6.0.1. | infrastructure | clang fails when building absl due to unsupported debug levels absl appears to have issues building at head with clang this doesn t occur if the user has | 1 |
37,149 | 8,222,352,342 | IssuesEvent | 2018-09-06 07:09:51 | akamsteeg/AtleX.CommandLineArguments | https://api.github.com/repos/akamsteeg/AtleX.CommandLineArguments | closed | General code cleanup | code quality | There are quite some unused `using`s and redundant empty lines and whitespace in the code. This is a general issue to do some housekeeping on the code to fix those. | 1.0 | General code cleanup - There are quite some unused `using`s and redundant empty lines and whitespace in the code. This is a general issue to do some housekeeping on the code to fix those. | non_infrastructure | general code cleanup there are quite some unused using s and redundant empty lines and whitespace in the code this is a general issue to do some housekeeping on the code to fix those | 0 |
20,060 | 13,645,826,580 | IssuesEvent | 2020-09-25 21:37:31 | pangeo-data/climpred | https://api.github.com/repos/pangeo-data/climpred | closed | multiple verifs? | cleanup infrastructure | Isn’t having multiple verif files very maintenance heavy? I think on the long term we should get rid of it. It doesn’t add new functionality, only makes it easy for the case of comparing to multiple Verifs which can be done also in a loop.
Probably this should become an issue.
_Originally posted by @aaronspring in https://github.com/bradyrx/climpred/pull/418#issuecomment-687348394_ | 1.0 | multiple verifs? - Isn’t having multiple verif files very maintenance heavy? I think on the long term we should get rid of it. It doesn’t add new functionality, only makes it easy for the case of comparing to multiple Verifs which can be done also in a loop.
Probably this should become an issue.
_Originally posted by @aaronspring in https://github.com/bradyrx/climpred/pull/418#issuecomment-687348394_ | infrastructure | multiple verifs isn’t having multiple verif files very maintenance heavy i think on the long term we should get rid of it it doesn’t add new functionality only makes it easy for the case of comparing to multiple verifs which can be done also in a loop probably this should become an issue originally posted by aaronspring in | 1 |
139,960 | 20,987,827,491 | IssuesEvent | 2022-03-29 06:16:17 | S-CON-SKHU/S_CON_iOS | https://api.github.com/repos/S-CON-SKHU/S_CON_iOS | closed | Main Design restructuring | design MAIN | <h3> Completely rework the existing main screen, which has poor readability </h3>
- [x] Main screen per part (IT, Media, SW)
- [x] Top tab bar sorted by year -> using an open source component
- [x] Top tab bar detail shows a collection view per award | 1.0 | Main Design restructuring - <h3> Completely rework the existing main screen, which has poor readability </h3>
- [x] Main screen per part (IT, Media, SW)
- [x] Top tab bar sorted by year -> using an open source component
- [x] Top tab bar detail shows a collection view per award | non_infrastructure | main design restructuring completely rework the existing main screen which has poor readability main screen per part it media sw top tab bar sorted by year using an open source component top tab bar detail shows a collection view per award | 0 |
941 | 3,006,304,158 | IssuesEvent | 2015-07-27 09:31:42 | Itseez/opencv | https://api.github.com/repos/Itseez/opencv | opened | Strange search by docs.opencv.org | affected: 2.4 auto-transferred bug category: infrastructure priority: normal | Transferred from http://code.opencv.org/issues/2860
```
|| Andrey Morozov on 2013-03-04 22:12
|| Priority: Normal
|| Affected: branch 'master' (2.4.9)
|| Category: infrastructure
|| Tracker: Bug
|| Difficulty: None
|| PR:
|| Platform: None / None
```
Strange search by docs.opencv.org
-----------
```
I want to found description of cv::Mat class. If I write "cv::Mat" in search line then result is:
<pre>
Search finished, found 3 page(s) matching the search query.
float CvBoost::predict(const cv::Mat& sample, const cv::Mat& missing, const cv::Range& slice, bool rawMode, bool returnSum) const (C++ function, in Boosting)
float CvRTrees::predict_prob(const cv::Mat& sample, const cv::Mat& missing) const (C++ function, in Random Trees)
void CvEM::getCovs(std::vector& covs) const (C++ function, in Expectation Maximization)
</pre>
If I write "Mat" then I have a lot of trash and correct page(http://docs.opencv.org/modules/core/doc/basic_structures.html?highlight=mat#Mat) hard to found
```
History
------- | 1.0 | Strange search by docs.opencv.org - Transferred from http://code.opencv.org/issues/2860
```
|| Andrey Morozov on 2013-03-04 22:12
|| Priority: Normal
|| Affected: branch 'master' (2.4.9)
|| Category: infrastructure
|| Tracker: Bug
|| Difficulty: None
|| PR:
|| Platform: None / None
```
Strange search by docs.opencv.org
-----------
```
I want to found description of cv::Mat class. If I write "cv::Mat" in search line then result is:
<pre>
Search finished, found 3 page(s) matching the search query.
float CvBoost::predict(const cv::Mat& sample, const cv::Mat& missing, const cv::Range& slice, bool rawMode, bool returnSum) const (C++ function, in Boosting)
float CvRTrees::predict_prob(const cv::Mat& sample, const cv::Mat& missing) const (C++ function, in Random Trees)
void CvEM::getCovs(std::vector& covs) const (C++ function, in Expectation Maximization)
</pre>
If I write "Mat" then I have a lot of trash and correct page(http://docs.opencv.org/modules/core/doc/basic_structures.html?highlight=mat#Mat) hard to found
```
History
------- | infrastructure | strange search by docs opencv org transferred from andrey morozov on priority normal affected branch master category infrastructure tracker bug difficulty none pr platform none none strange search by docs opencv org i want to found description of cv mat class if i write cv mat in search line then result is search finished found page s matching the search query float cvboost predict const cv mat sample const cv mat missing const cv range slice bool rawmode bool returnsum const c function in boosting float cvrtrees predict prob const cv mat sample const cv mat missing const c function in random trees void cvem getcovs std vector covs const c function in expectation maximization if i write mat then i have a lot of trash and correct page hard to found history | 1 |
15,631 | 11,622,072,902 | IssuesEvent | 2020-02-27 05:17:27 | enarx/enarx | https://api.github.com/repos/enarx/enarx | closed | Ensure that changed process flow diagrams always include an updated .png | infrastructure | @npmccallum merged some neat new process flow diagrams in #249. These diagrams, described as `.msc` and `.dot` files, generate a corresponding `.png`.
We should implement a test that checks if a commit has changes to `.dot` or `.msc` files and enforces that such changes must also contain an updated `.png` to ensure these are always up to date. | 1.0 | Ensure that changed process flow diagrams always include an updated .png - @npmccallum merged some neat new process flow diagrams in #249. These diagrams, described as `.msc` and `.dot` files, generate a corresponding `.png`.
We should implement a test that checks if a commit has changes to `.dot` or `.msc` files and enforces that such changes must also contain an updated `.png` to ensure these are always up to date. | infrastructure | ensure that changed process flow diagrams always include an updated png npmccallum merged some neat new process flow diagrams in these diagrams described as msc and dot files generate a corresponding png we should implement a test that checks if a commit has changes to dot or msc files and enforces that such changes must also contain an updated png to ensure these are always up to date | 1 |
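The check requested above can be expressed as a small script run in CI over the commit range: collect the changed files, and fail if a .dot or .msc source changed without its rendered .png changing too. The git invocation and the convention that the .png lives next to the source with the same basename are assumptions about the repository layout.

```ts
// Sketch of the requested check; assumes each diagram source has a sibling
// .png with the same basename, which may not match the actual repo layout.
import { execSync } from 'child_process';
import * as path from 'path';

// "origin/main...HEAD" is a placeholder range; CI would supply the real base.
const changed = execSync('git diff --name-only origin/main...HEAD', { encoding: 'utf8' })
  .split('\n')
  .filter(Boolean);

const changedSet = new Set(changed);
const staleDiagrams = changed
  .filter((file) => file.endsWith('.dot') || file.endsWith('.msc'))
  .filter((file) => !changedSet.has(file.replace(path.extname(file), '.png')));

if (staleDiagrams.length > 0) {
  console.error('Diagram sources changed without an updated .png:');
  staleDiagrams.forEach((file) => console.error(`  ${file}`));
  process.exit(1);
}
```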
16,070 | 9,682,628,234 | IssuesEvent | 2019-05-23 09:35:28 | bkimminich/juice-shop | https://api.github.com/repos/bkimminich/juice-shop | closed | CVE-2015-9251 (Medium) detected in jquery-2.2.4.min.js | security vulnerability | ## CVE-2015-9251 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-2.2.4.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/2.2.4/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/2.2.4/jquery.min.js</a></p>
<p>Path to dependency file: /juice-shop/frontend/src/index.html</p>
<p>Path to vulnerable library: /juice-shop/frontend/src/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-2.2.4.min.js** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jQuery before 3.0.0 is vulnerable to Cross-site Scripting (XSS) attacks when a cross-domain Ajax request is performed without the dataType option, causing text/javascript responses to be executed.
<p>Publish Date: 2018-01-18
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-9251>CVE-2015-9251</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2015-9251">https://nvd.nist.gov/vuln/detail/CVE-2015-9251</a></p>
<p>Release Date: 2018-01-18</p>
<p>Fix Resolution: 3.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2015-9251 (Medium) detected in jquery-2.2.4.min.js - ## CVE-2015-9251 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-2.2.4.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/2.2.4/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/2.2.4/jquery.min.js</a></p>
<p>Path to dependency file: /juice-shop/frontend/src/index.html</p>
<p>Path to vulnerable library: /juice-shop/frontend/src/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-2.2.4.min.js** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jQuery before 3.0.0 is vulnerable to Cross-site Scripting (XSS) attacks when a cross-domain Ajax request is performed without the dataType option, causing text/javascript responses to be executed.
<p>Publish Date: 2018-01-18
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-9251>CVE-2015-9251</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2015-9251">https://nvd.nist.gov/vuln/detail/CVE-2015-9251</a></p>
<p>Release Date: 2018-01-18</p>
<p>Fix Resolution: 3.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_infrastructure | cve medium detected in jquery min js cve medium severity vulnerability vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file juice shop frontend src index html path to vulnerable library juice shop frontend src index html dependency hierarchy x jquery min js vulnerable library vulnerability details jquery before is vulnerable to cross site scripting xss attacks when a cross domain ajax request is performed without the datatype option causing text javascript responses to be executed publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
23,458 | 11,968,695,593 | IssuesEvent | 2020-04-06 09:05:34 | scalacenter/bloop | https://api.github.com/repos/scalacenter/bloop | closed | Experienced slowdown when starting sbt with sbt-bloop | integrations performance sbt | Using
* SBT 1.3.8
* Scala 2.12.10,
* SBT-Bloop: 1.4.0-RC1
I get the following times:
## Without sbt-bloop (second run)
```
$ time sbt version
OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=1G; support was removed in 8.0
[info] Loading settings for project global-plugins from idea.sbt,plugins.sbt ...
[info] Loading global plugins from /Users/joao/.sbt/1.0/plugins
[info] Loading settings for project [REDACTED] from credentials.sbt,plugins.sbt,metals.sbt ...
[info] Loading project definition from [REDACTED]
[info] Loading settings for project [REDACTED] from build.sbt ...
[info] Resolving key references (99462 settings) ...
[REDACTED]
sbt version 77.23s user 3.75s system 361% cpu 22.396 total
```
## With sbt-bloop (second run)
```
$ time sbt version
OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=1G; support was removed in 8.0
[info] Loading settings for project global-plugins from idea.sbt,plugins.sbt ...
[info] Loading global plugins from /Users/joao/.sbt/1.0/plugins
[info] Loading settings for project [REDACTED] from credentials.sbt,plugins.sbt,metals.sbt ...
[info] Loading project definition from [REDACTED]
[info] Loading settings for project [REDACTED] from build.sbt ...
[info] Resolving key references (104723 settings) ...
[REDACTED]
sbt version 111.85s user 17.71s system 154% cpu 1:24.00 total
```
The project in question has ~70 subprojects. Running `ls -l | wc -l` inside `.bloop` after `sbt bloopInstall` returns 231 files, while `ls -lR | wc -l` returns 1447 files.
Running sbt with `-Dsbt.task.timings=true` doesn't show anything in particular (most of the time is spent after the "Resolving key references" and doesn't show on the output). | True | Experienced slowdown when starting sbt with sbt-bloop - Using
* SBT 1.3.8
* Scala 2.12.10,
* SBT-Bloop: 1.4.0-RC1
I get the following times:
## Without sbt-bloop (second run)
```
$ time sbt version
OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=1G; support was removed in 8.0
[info] Loading settings for project global-plugins from idea.sbt,plugins.sbt ...
[info] Loading global plugins from /Users/joao/.sbt/1.0/plugins
[info] Loading settings for project [REDACTED] from credentials.sbt,plugins.sbt,metals.sbt ...
[info] Loading project definition from [REDACTED]
[info] Loading settings for project [REDACTED] from build.sbt ...
[info] Resolving key references (99462 settings) ...
[REDACTED]
sbt version 77.23s user 3.75s system 361% cpu 22.396 total
```
## With sbt-bloop (second run)
```
$ time sbt version
OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=1G; support was removed in 8.0
[info] Loading settings for project global-plugins from idea.sbt,plugins.sbt ...
[info] Loading global plugins from /Users/joao/.sbt/1.0/plugins
[info] Loading settings for project [REDACTED] from credentials.sbt,plugins.sbt,metals.sbt ...
[info] Loading project definition from [REDACTED]
[info] Loading settings for project [REDACTED] from build.sbt ...
[info] Resolving key references (104723 settings) ...
[REDACTED]
sbt version 111.85s user 17.71s system 154% cpu 1:24.00 total
```
The project in question has ~70 subprojects. Running `ls -l | wc -l` inside `.bloop` after `sbt bloopInstall` returns 231 files, while `ls -lR | wc -l` returns 1447 files.
Running sbt with `-Dsbt.task.timings=true` doesn't show anything in particular (most of the time is spent after the "Resolving key references" and doesn't show on the output). | non_infrastructure | experienced slowdown when starting sbt with sbt bloop using sbt scala sbt bloop i get the following times without sbt bloop second run time sbt version openjdk bit server vm warning ignoring option maxpermsize support was removed in loading settings for project global plugins from idea sbt plugins sbt loading global plugins from users joao sbt plugins loading settings for project from credentials sbt plugins sbt metals sbt loading project definition from loading settings for project from build sbt resolving key references settings sbt version user system cpu total with sbt bloop second run time sbt version openjdk bit server vm warning ignoring option maxpermsize support was removed in loading settings for project global plugins from idea sbt plugins sbt loading global plugins from users joao sbt plugins loading settings for project from credentials sbt plugins sbt metals sbt loading project definition from loading settings for project from build sbt resolving key references settings sbt version user system cpu total the project in question has subprojects running ls l wc l inside bloop after sbt bloopinstall returns files while ls lr wc l returns files running sbt with dsbt task timings true doesn t show anything in particular most of the time is spent after the resolving key references and doesn t show on the output | 0 |
23,291 | 16,039,320,653 | IssuesEvent | 2021-04-22 05:10:28 | APSIMInitiative/ApsimX | https://api.github.com/repos/APSIMInitiative/ApsimX | closed | Map Accuracy | bug interface/infrastructure | The accuracy of the map display could be improved. In the attached sim, a Rwandan location (-1.1, 30.35) is shown several km across the border in Uganda.
[Kenya.zip](https://github.com/APSIMInitiative/ApsimX/files/5891231/Kenya.zip)
| 1.0 | Map Accuracy - The accuracy of the map display could be improved. In the attached sim, a Rwandan location (-1.1, 30.35) is shown several km across the border in Uganda.
[Kenya.zip](https://github.com/APSIMInitiative/ApsimX/files/5891231/Kenya.zip)
| infrastructure | map accuracy the accuracy of the map display could be improved in the attached sim a rwandan location is shown several km across the border in uganda | 1 |
29,826 | 24,306,474,622 | IssuesEvent | 2022-09-29 17:53:38 | department-of-veterans-affairs/va.gov-team | https://api.github.com/repos/department-of-veterans-affairs/va.gov-team | opened | [jesse-test-app] Application onboarding infrastructure request | operations devops infrastructure eks infrastructure-request | ## Summary
The purpose of this template is to capture requirements for onboarding a software application to the Platform's delivery infrastructure.
### Overview
- The Platform leverages container orchestration to host and deploy software applications
- Container orchestration automates the deployment, scaling, and management of containerized applications
- To learn more, see ["Application Hosting and Deployment using Container Orchestration (EKS)"](https://vfs.atlassian.net/wiki/spaces/OT/pages/1474593866/Application+Hosting+and+Deployment+using+Container+Orchestration+EKS)
### Guidance
- Please fill out the sections below with as much info as possible
- If you don't have info or know the answer to a given prompt, _it's okay to leave it blank_
- The Infrastructure Team will assist you with gathering requirements _and_ performing key setup and configuration tasks
---
## Description of application
_Please provide a brief description of the application, ie What does it do? Who does it serve?_
### Basic info:
**Team Name:**
Platform Infrastructure Team
**Application Name:**
Jesse Test APP
**Functionality:**
Cool REST API
**Language/Stack:**
Django
**Ports/Networking needed:**
80, 443
**Other infrastructure needed:**
MySQL
### Background/Context/Resources
#41250
### Technical Notes
This is a really important API for my cool test app. It must have the hugest database possible.
---
## Onboarding checklist
_*The responsible parties are listed below each item in the checklist_
### Application repository and container
- [ ] **GitHub repo:** _provide link to repo here_
Note: app should conform to the 12 factor app methodology | [Docs](https://12factor.net/)
_Requesting team; Infrastructure Team can assist_
- [ ] **Dockerfile:** _provide link to Dockerfile here_ | [Example](https://github.com/department-of-veterans-affairs/platform-console-api/blob/master/Dockerfile) | [Docs](https://docs.docker.com/engine/reference/builder/)
_Requesting team; Infrastructure Team can assist_
### Application delivery pipeline (CI/CD)
- [ ] **AWS service account for GitHub actions**, ie `svc-gha-team-name` | [Request here](https://github.com/department-of-veterans-affairs/va.gov-team/issues/new?assignees=&labels=operations%2C+devops%2C+needs-grooming&template=ops_issue_template.md&title=)
_Infrastructure Team_
- [ ] **AWS Elastic Container Registry (ECR) repository for the app container:** _provide name of ECR repo here_
_Infrastructure Team_ | [PRs welcome](https://github.com/department-of-veterans-affairs/devops/blob/master/terraform/environments/global/ecr.tf)
- [ ] **Automation to release and tag the app's GitHub repo with a semantic version number** | [Example](https://github.com/department-of-veterans-affairs/vsp-infra-calico/blob/main/.releaserc) | [Docs](https://semantic-release.gitbook.io/semantic-release/)
_Requesting team; Infrastructure Team can assist_
- [ ] **Automation to push the app's container to ECR with a semantic version number** | [Example](https://github.com/department-of-veterans-affairs/vsp-infra-calico/blob/main/.github/workflows/mirror-images.yaml)
Note: Don't use default "latest" tag. The release system uses modified container image tags to synchronize automation.
_Requesting team; Infrastructure Team can assist_
- [ ] **Kubernetes manifest using Helm charts** in `vsp-infra-application-manifests` | [Example](https://github.com/department-of-veterans-affairs/vsp-infra-application-manifests/tree/main/apps/vsp-amt/grafana) | [Argo CD Docs](https://argo-cd.readthedocs.io/en/stable/)
Note: Manifests _can_ be specified in several formats, however, we are standardizing around [Helm charts](https://helm.sh/docs/topics/charts/).
_Requesting team; Infrastructure Team can assist_
- [ ] **Kubernetes detect and update application** in `vsp-infra-applications-manifests` | [Environments](https://github.com/department-of-veterans-affairs/vsp-infra-application-manifests/tree/main/apps/vsp-operations/argocd-apps)
_Requesting team; Infrastructure Team can assist_
- [ ] **Automation to update the Kubernetes manifest** when a new version of the app's container is pushed to ECR | [Example](https://github.com/department-of-veterans-affairs/vsp-infra-calico/blob/main/.github/workflows/update-manifests.yaml)
_Requesting team; Infrastructure Team can assist_
### Application secrets and parameters
- [ ] **AWS SSM Parameter Store path created for your team or app,** ie `/dsva-vagov/team-name/` | [Request here](https://github.com/department-of-veterans-affairs/va.gov-team/issues/new?assignees=&labels=operations%2C+devops%2C+needs-grooming&template=ops_issue_template.md&title=)
_Infrastructure Team_
- [ ] **AWS SSM Parameter Store parameters that the app needs to run,** ie `/dsva-vagov/team-name/env/secret-name` | [Docs](https://depo-platform-documentation.scrollhelp.site/developer-docs/Store-a-secret-in-Parameter-Store.1474595172.html)
_Requesting team_
---
Once the checklist is complete, you will be ready to onboard your application
Please visit ["Manage applications in EKS"](https://vfs.atlassian.net/wiki/spaces/OT/pages/2348909545/Manage+applications+in+EKS) for more info
### Notes
- Please add comments to this issue as checklist items are completed, and...
- Tag the Infrastructure Team's product manager and product owner to help expedite the process
| 2.0 | [jesse-test-app] Application onboarding infrastructure request - ## Summary
The purpose of this template is to capture requirements for onboarding a software application to the Platform's delivery infrastructure.
### Overview
- The Platform leverages container orchestration to host and deploy software applications
- Container orchestration automates the deployment, scaling, and management of containerized applications
- To learn more, see ["Application Hosting and Deployment using Container Orchestration (EKS)"](https://vfs.atlassian.net/wiki/spaces/OT/pages/1474593866/Application+Hosting+and+Deployment+using+Container+Orchestration+EKS)
### Guidance
- Please fill out the sections below with as much info as possible
- If you don't have info or know the answer to a given prompt, _it's okay to leave it blank_
- The Infrastructure Team will assist you with gathering requirements _and_ performing key setup and configuration tasks
---
## Description of application
_Please provide a brief description of the application, ie What does it do? Who does it serve?_
### Basic info:
**Team Name:**
Platform Infrastructure Team
**Application Name:**
Jesse Test APP
**Functionality:**
Cool REST API
**Language/Stack:**
Django
**Ports/Networking needed:**
80, 443
**Other infrastructure needed:**
MySQL
### Background/Context/Resources
#41250
### Technical Notes
This is a really important API for my cool test app. It must have the hugest database possible.
---
## Onboarding checklist
_*The responsible parties are listed below each item in the checklist_
### Application repository and container
- [ ] **GitHub repo:** _provide link to repo here_
Note: app should conform to the 12 factor app methodology | [Docs](https://12factor.net/)
_Requesting team; Infrastructure Team can assist_
- [ ] **Dockerfile:** _provide link to Dockerfile here_ | [Example](https://github.com/department-of-veterans-affairs/platform-console-api/blob/master/Dockerfile) | [Docs](https://docs.docker.com/engine/reference/builder/)
_Requesting team; Infrastructure Team can assist_
### Application delivery pipeline (CI/CD)
- [ ] **AWS service account for GitHub actions**, ie `svc-gha-team-name` | [Request here](https://github.com/department-of-veterans-affairs/va.gov-team/issues/new?assignees=&labels=operations%2C+devops%2C+needs-grooming&template=ops_issue_template.md&title=)
_Infrastructure Team_
- [ ] **AWS Elastic Container Registry (ECR) repository for the app container:** _provide name of ECR repo here_
_Infrastructure Team_ | [PRs welcome](https://github.com/department-of-veterans-affairs/devops/blob/master/terraform/environments/global/ecr.tf)
- [ ] **Automation to release and tag the app's GitHub repo with a semantic version number** | [Example](https://github.com/department-of-veterans-affairs/vsp-infra-calico/blob/main/.releaserc) | [Docs](https://semantic-release.gitbook.io/semantic-release/)
_Requesting team; Infrastructure Team can assist_
- [ ] **Automation to push the app's container to ECR with a semantic version number** | [Example](https://github.com/department-of-veterans-affairs/vsp-infra-calico/blob/main/.github/workflows/mirror-images.yaml)
Note: Don't use default "latest" tag. The release system uses modified container image tags to synchronize automation.
_Requesting team; Infrastructure Team can assist_
- [ ] **Kubernetes manifest using Helm charts** in `vsp-infra-application-manifests` | [Example](https://github.com/department-of-veterans-affairs/vsp-infra-application-manifests/tree/main/apps/vsp-amt/grafana) | [Argo CD Docs](https://argo-cd.readthedocs.io/en/stable/)
Note: Manifests _can_ be specified in several formats, however, we are standardizing around [Helm charts](https://helm.sh/docs/topics/charts/).
_Requesting team; Infrastructure Team can assist_
- [ ] **Kubernetes detect and update application** in `vsp-infra-applications-manifests` | [Environments](https://github.com/department-of-veterans-affairs/vsp-infra-application-manifests/tree/main/apps/vsp-operations/argocd-apps)
_Requesting team; Infrastructure Team can assist_
- [ ] **Automation to update the Kubernetes manifest** when a new version of the app's container is pushed to ECR | [Example](https://github.com/department-of-veterans-affairs/vsp-infra-calico/blob/main/.github/workflows/update-manifests.yaml)
_Requesting team; Infrastructure Team can assist_
### Application secrets and parameters
- [ ] **AWS SSM Parameter Store path created for your team or app,** ie `/dsva-vagov/team-name/` | [Request here](https://github.com/department-of-veterans-affairs/va.gov-team/issues/new?assignees=&labels=operations%2C+devops%2C+needs-grooming&template=ops_issue_template.md&title=)
_Infrastructure Team_
- [ ] **AWS SSM Parameter Store parameters that the app needs to run,** ie `/dsva-vagov/team-name/env/secret-name` | [Docs](https://depo-platform-documentation.scrollhelp.site/developer-docs/Store-a-secret-in-Parameter-Store.1474595172.html)
_Requesting team_
---
Once the checklist is complete, you will be ready to onboard your application
Please visit ["Manage applications in EKS"](https://vfs.atlassian.net/wiki/spaces/OT/pages/2348909545/Manage+applications+in+EKS) for more info
### Notes
- Please add comments to this issue as checklist items are completed, and...
- Tag the Infrastructure Team's product manager and product owner to help expedite the process
| infrastructure | application onboarding infrastructure request summary the purpose of this template is to capture requirements for onboarding a software application to the platform s delivery infrastructure overview the platform leverages container orchestration to host and deploy software applications container orchestration automates the deployment scaling and management of containerized applications to learn more see guidance please fill out the sections below with as much info as possible if you don t have info or know the answer to a given prompt it s okay to leave it blank the infrastructure team will assist you with gathering requirements and performing key setup and configuration tasks description of application please provide a brief description of the application ie what does it do who does it serve basic info team name platform infrastructure team application name jesse test app functionality cool rest api language stack django ports networking needed other infrastructure needed mysql background context resources technical notes this is a really important api for my cool test app it must have the hugest database possible onboarding checklist the responsible parties are listed below each item in the checklist application repository and container github repo provide link to repo here note app should conform to the factor app methodology requesting team infrastructure team can assist dockerfile provide link to dockerfile here requesting team infrastructure team can assist application delivery pipeline ci cd aws service account for github actions ie svc gha team name infrastructure team aws elastic container registry ecr repository for the app container provide name of ecr repo here infrastructure team automation to release and tag the app s github repo with a semantic version number requesting team infrastructure team can assist automation to push the app s container to ecr with a semantic version number note don t use default latest tag the release system uses modified container image tags to synchronize automation requesting team infrastructure team can assist kubernetes manifest using helm charts in vsp infra application manifests note manifests can be specified in several formats however we are standardizing around requesting team infrastructure team can assist kubernetes detect and update application in vsp infra applications manifests requesting team infrastructure team can assist automation to update the kubernetes manifest when a new version of the app s container is pushed to ecr requesting team infrastructure team can assist application secrets and parameters aws ssm parameter store path created for your team or app ie dsva vagov team name infrastructure team aws ssm parameter store parameters that the app needs to run ie dsva vagov team name env secret name requesting team once the checklist is complete you will be ready to onboard your application please visit for more info notes please add comments to this issue as checklist items are completed and tag the infrastructure team s product manager and product owner to help expedite the process | 1 |
6,969 | 6,688,535,761 | IssuesEvent | 2017-10-08 15:51:56 | omtcyfz/heathen | https://api.github.com/repos/omtcyfz/heathen | opened | Start measuring test coverage | ci enhancement infrastructure | Covering most part of the written code with tests is crucial for ensuring safety and stability.
There are several useful tools for continuous coverage measurements. [llvm-cov](https://llvm.org/docs/CommandGuide/llvm-cov.html) can be used to emit coverage information. [codecov](https://codecov.io/) simplifies the coverage measurement process by a significant margin.
- [ ] Configure CMake scripts to provide an option to emit coverage information
- [ ] Setup codecov
- [ ] Add a badge to README | 1.0 | Start measuring test coverage - Covering most part of the written code with tests is crucial for ensuring safety and stability.
There are several useful tools for continuous coverage measurements. [llvm-cov](https://llvm.org/docs/CommandGuide/llvm-cov.html) can be used to emit coverage information. [codecov](https://codecov.io/) simplifies the coverage measurement process by a significant margin.
- [ ] Configure CMake scripts to provide an option to emit coverage information
- [ ] Setup codecov
- [ ] Add a badge to README | infrastructure | start measuring test coverage covering most part of the written code with tests is crucial for ensuring safety and stability there are several useful tools for continuous coverage measurements can be used to emit coverage information simplifies the coverage measurement process by a significant margin configure cmake scripts to provide an option to emit coverage information setup codecov add a badge to readme | 1 |
746,064 | 26,012,962,708 | IssuesEvent | 2022-12-21 04:53:35 | leapwallet/cosmos-node-setup | https://api.github.com/repos/leapwallet/cosmos-node-setup | closed | Increase CPU allocation | bug priority:medium | Use the next biggest AWS EC2 instance for monitors because the current instance size is insufficient, and causes the monitor to crash after a while. | 1.0 | Increase CPU allocation - Use the next biggest AWS EC2 instance for monitors because the current instance size is insufficient, and causes the monitor to crash after a while. | non_infrastructure | increase cpu allocation use the next biggest aws instance for monitors because the current instance size is insufficient and causes the monitor to crash after a while | 0 |
389,741 | 11,516,431,336 | IssuesEvent | 2020-02-14 05:02:50 | openmsupply/mobile | https://api.github.com/repos/openmsupply/mobile | closed | GTT Translations #2 | Docs: not needed Effort: small Feature Module: dispensary Priority: high | ## Is your feature request related to a problem? Please describe.
A few translation changes for the GTT
- Requisitions: change column name of ‘USAGE MENSUEL’ to ‘CMM’
- Supplier returns: change the column name ‘NOM DU LOT’ to ‘NUMERO DU LOT’
- Prescription confirmation screen - two more changes to labelling
- Montant du paiement -> Montant reçu
- Monnaie Requise -> Montant rendu
## Describe the solution you'd like
N/A
## Implementation
N/A
## Describe alternatives you've considered
N/A
## Additional context
N/A
| 1.0 | GTT Translations #2 - ## Is your feature request related to a problem? Please describe.
A few translation changes for the GTT
- Requisitions: change column name of ‘USAGE MENSUEL’ to ‘CMM’
- Supplier returns: change the column name ‘NOM DU LOT’ to ‘NUMERO DU LOT’
- Prescription confirmation screen - two more changes to labelling
- Montant du paiement -> Montant reçu
- Monnaie Requise -> Montant rendu
## Describe the solution you'd like
N/A
## Implementation
N/A
## Describe alternatives you've considered
N/A
## Additional context
N/A
| non_infrastructure | gtt translations is your feature request related to a problem please describe a few translation changes for the gtt requisitions change column name of ‘usage mensuel’ to ‘cmm’ supplier returns change the column name ‘nom du lot’ to ‘numero du lot’ prescription confirmation screen two more changes to labelling montant du paiement montant reçu monnaie requise montant rendu describe the solution you d like n a implementation n a describe alternatives you ve considered n a additional context n a | 0 |
32,660 | 26,877,222,586 | IssuesEvent | 2023-02-05 06:55:40 | dzeyelid/github-webhooks-by-azure-func-sample | https://api.github.com/repos/dzeyelid/github-webhooks-by-azure-func-sample | closed | Azure リソースの作成 | infrastructure | "Deploy to Azure" ボタンでデプロイできるように、Bicep (ARM template) で作成する
- Azure App plan
- Azure Functions
- 従量課金
- Node.js 16 スタック
- Azure Storage Accounts
- Azure Application Insights
| 1.0 | Azure リソースの作成 - "Deploy to Azure" ボタンでデプロイできるように、Bicep (ARM template) で作成する
- Azure App plan
- Azure Functions
- 従量課金
- Node.js 16 スタック
- Azure Storage Accounts
- Azure Application Insights
| infrastructure | azure リソースの作成 deploy to azure ボタンでデプロイできるように、bicep arm template で作成する azure app plan azure functions 従量課金 node js スタック azure storage accounts azure application insights | 1 |
19,821 | 13,487,942,095 | IssuesEvent | 2020-09-11 11:46:58 | thinktecture/relayserver | https://api.github.com/repos/thinktecture/relayserver | closed | Create IdentityServer host | infrastructure | Create basic IdentityServer host that
* issues tokens for Clients (based on Tenant/ Link configuration)
* issues tokens for Users (ASP.NET Core Identity)
* Dockerfile | 1.0 | Create IdentityServer host - Create basic IdentityServer host that
* issues tokens for Clients (based on Tenant/ Link configuration)
* issues tokens for Users (ASP.NET Core Identity)
* Dockerfile | infrastructure | create identityserver host create basic identityserver host that issues tokens for clients based on tenant link configuration issues tokens for users asp net core identity dockerfile | 1 |
151,096 | 23,761,958,677 | IssuesEvent | 2022-09-01 09:36:01 | WordPress/gutenberg | https://api.github.com/repos/WordPress/gutenberg | opened | Make it possible to invoke template lock in the UI | Needs Design | With #43037 merged, we need to add a way to invoke this condition in the UI.
In the PR it was hypothesised that the following lock configuration on a container could act as the trigger:
<img src="https://user-images.githubusercontent.com/846565/186475559-90611661-d80c-4974-aefe-6fde6e83a2d2.png">
But this is just one idea, there are doubtless others to explore. | 1.0 | Make it possible to invoke template lock in the UI - With #43037 merged, we need to add a way to invoke this condition in the UI.
In the PR it was hypothesised that the following lock configuration on a container could act as the trigger:
<img src="https://user-images.githubusercontent.com/846565/186475559-90611661-d80c-4974-aefe-6fde6e83a2d2.png">
But this is just one idea, there are doubtless others to explore. | non_infrastructure | make it possible to invoke template lock in the ui with merged we need to add a way to invoke this condition in the ui in the pr it was hypothesised that the following lock configuration on a container could act as the trigger img src but this is just one idea there are doubtless others to explore | 0 |
34,333 | 29,497,706,446 | IssuesEvent | 2023-06-02 18:30:32 | openxla/iree | https://api.github.com/repos/openxla/iree | opened | Use past average latency for comparison on pull request benchmarks | infrastructure/benchmark | Right now we are using the last landed commit's latency for comparison when performing benchmarks on pull requests. With just one single data point, it causes quite some fluctuation and wrong flagging of improvements/regressions. We should expose an API in dana to query the average latency for a benchmark series and use that for comparison on pull requests. (dana already have such information calculated; just need to do the plumbing to expose it.) | 1.0 | Use past average latency for comparison on pull request benchmarks - Right now we are using the last landed commit's latency for comparison when performing benchmarks on pull requests. With just one single data point, it causes quite some fluctuation and wrong flagging of improvements/regressions. We should expose an API in dana to query the average latency for a benchmark series and use that for comparison on pull requests. (dana already have such information calculated; just need to do the plumbing to expose it.) | infrastructure | use past average latency for comparison on pull request benchmarks right now we are using the last landed commit s latency for comparison when performing benchmarks on pull requests with just one single data point it causes quite some fluctuation and wrong flagging of improvements regressions we should expose an api in dana to query the average latency for a benchmark series and use that for comparison on pull requests dana already have such information calculated just need to do the plumbing to expose it | 1 |
8,984 | 7,757,376,866 | IssuesEvent | 2018-05-31 16:07:43 | OpenXRay/xray-16 | https://api.github.com/repos/OpenXRay/xray-16 | opened | Support static linking | Infrastructure Portability | Support compilation mode when all engine binary files will be in a just one executable file. | 1.0 | Support static linking - Support compilation mode when all engine binary files will be in a just one executable file. | infrastructure | support static linking support compilation mode when all engine binary files will be in a just one executable file | 1 |
345,643 | 30,829,881,083 | IssuesEvent | 2023-08-02 00:12:36 | pokt-network/pocket | https://api.github.com/repos/pokt-network/pocket | opened | [Testing] Remove non-determinism from E2E Consensus Tests | consensus tooling testing | ## Objective
Avoid using timers or other non-deterministic approaches in E2E tests.
## Origin Document
The following comment in #874 and as it relates to https://github.com/pokt-network/pocket/pull/948/files#diff-01dec4121ae8acb7a1f4bb72a6c2104827d2c2d2197eb5f45fa5c032ffba32cdR93:
<img width="1175" alt="Screenshot 2023-08-01 at 4 29 47 PM" src="https://github.com/pokt-network/pocket/assets/1892194/6d91848f-af87-440f-996c-b88e004dbd0c">
## Goals
- Prevent non-determinism and flakiness from E2E tests
- Create robust tooling to help iterate on E2E tests
## Deliverable
- [ ] Remove all `the developer waits for ... milliseconds` in all `.feature` files
- [ ] Introduce alternate mechanisms mechanism to that waiting for a node to reach a certain height include and not limited to:
- [ ] Polling for a specific height (with a max timeout)
- [ ] Polling a health check endpoint (with a max timeout)
## Non-goals / Non-deliverables
- Introducing new tests
- Introducing new functionality
## General issue deliverables
- [ ] Update any relevant local/global README(s)
- [ ] Add or update any relevant or supporting [mermaid](https://mermaid-js.github.io/mermaid/) diagrams
## Testing Methodology
- [ ] **All tests**: `make test_all`
- [ ] **E2E Tests**: `make test_e2e`
- [ ] **LocalNet**: verify a `LocalNet` is still functioning correctly by following the instructions at [docs/development/README.md](https://github.com/pokt-network/pocket/tree/main/docs/development)
- [ ] **k8s LocalNet**: verify a `k8s LocalNet` is still functioning correctly by following the instructions [here](https://github.com/pokt-network/pocket/blob/main/build/localnet/README.md)
---
**Creator**: @Olshansk
**Co-Owners**: @0xBigBoss @dylanlott
| 1.0 | [Testing] Remove non-determinism from E2E Consensus Tests - ## Objective
Avoid using timers or other non-deterministic approaches in E2E tests.
## Origin Document
The following comment in #874 and as it relates to https://github.com/pokt-network/pocket/pull/948/files#diff-01dec4121ae8acb7a1f4bb72a6c2104827d2c2d2197eb5f45fa5c032ffba32cdR93:
<img width="1175" alt="Screenshot 2023-08-01 at 4 29 47 PM" src="https://github.com/pokt-network/pocket/assets/1892194/6d91848f-af87-440f-996c-b88e004dbd0c">
## Goals
- Prevent non-determinism and flakiness from E2E tests
- Create robust tooling to help iterate on E2E tests
## Deliverable
- [ ] Remove all `the developer waits for ... milliseconds` in all `.feature` files
- [ ] Introduce alternate mechanisms mechanism to that waiting for a node to reach a certain height include and not limited to:
- [ ] Polling for a specific height (with a max timeout)
- [ ] Polling a health check endpoint (with a max timeout)
## Non-goals / Non-deliverables
- Introducing new tests
- Introducing new functionality
## General issue deliverables
- [ ] Update any relevant local/global README(s)
- [ ] Add or update any relevant or supporting [mermaid](https://mermaid-js.github.io/mermaid/) diagrams
## Testing Methodology
- [ ] **All tests**: `make test_all`
- [ ] **E2E Tests**: `make test_e2e`
- [ ] **LocalNet**: verify a `LocalNet` is still functioning correctly by following the instructions at [docs/development/README.md](https://github.com/pokt-network/pocket/tree/main/docs/development)
- [ ] **k8s LocalNet**: verify a `k8s LocalNet` is still functioning correctly by following the instructions [here](https://github.com/pokt-network/pocket/blob/main/build/localnet/README.md)
---
**Creator**: @Olshansk
**Co-Owners**: @0xBigBoss @dylanlott
| non_infrastructure | remove non determinism from consensus tests objective avoid using timers or other non deterministic approaches in tests origin document the following comment in and as it relates to img width alt screenshot at pm src goals prevent non determinism and flakiness from tests create robust tooling to help iterate on tests deliverable remove all the developer waits for milliseconds in all feature files introduce alternate mechanisms mechanism to that waiting for a node to reach a certain height include and not limited to polling for a specific height with a max timeout polling a health check endpoint with a max timeout non goals non deliverables introducing new tests introducing new functionality general issue deliverables update any relevant local global readme s add or update any relevant or supporting diagrams testing methodology all tests make test all tests make test localnet verify a localnet is still functioning correctly by following the instructions at localnet verify a localnet is still functioning correctly by following the instructions creator olshansk co owners dylanlott | 0 |
409,294 | 27,731,292,399 | IssuesEvent | 2023-03-15 08:15:02 | jupyterlite/jupyterlite | https://api.github.com/repos/jupyterlite/jupyterlite | opened | Provide a README for `jupyterlite-core` | documentation | <!--To help us understand and resolve your issue, please fill out the form to the best of your ability.-->
<!--You can feel free to delete the sections that do not apply.-->
### Problem
Currently the top-level `README.md` is copied to `py/jupyterlite` with the same content: https://github.com/jupyterlite/jupyterlite/blob/main/py/jupyterlite/README.md
The README for `jupyterlite-core` is quite simple: https://github.com/jupyterlite/jupyterlite/blob/main/py/jupyterlite-core/README.md
### Suggested Improvement
We can provide basic information about the `jupyterlite-core` in its README, and point to the docs.
| 1.0 | Provide a README for `jupyterlite-core` - <!--To help us understand and resolve your issue, please fill out the form to the best of your ability.-->
<!--You can feel free to delete the sections that do not apply.-->
### Problem
Currently the top-level `README.md` is copied to `py/jupyterlite` with the same content: https://github.com/jupyterlite/jupyterlite/blob/main/py/jupyterlite/README.md
The README for `jupyterlite-core` is quite simple: https://github.com/jupyterlite/jupyterlite/blob/main/py/jupyterlite-core/README.md
### Suggested Improvement
We can provide basic information about the `jupyterlite-core` in its README, and point to the docs.
| non_infrastructure | provide a readme for jupyterlite core problem currently the top level readme md is copied to py jupyterlite with the same content the readme for jupyterlite core is quite simple suggested improvement we can provide basic information about the jupyterlite core in its readme and point to the docs | 0 |
169,352 | 13,134,976,168 | IssuesEvent | 2020-08-07 01:17:14 | linuxppc/issues | https://api.github.com/repos/linuxppc/issues | closed | Feature: add support for reporting NVDIMM 'life_used_percentage' metric | enhancement in-testing | Hi Mpe,
Please consider following patch series:
http://patchwork.ozlabs.org/project/linuxppc-dev/list/?series=192378 | 1.0 | Feature: add support for reporting NVDIMM 'life_used_percentage' metric - Hi Mpe,
Please consider following patch series:
http://patchwork.ozlabs.org/project/linuxppc-dev/list/?series=192378 | non_infrastructure | feature add support for reporting nvdimm life used percentage metric hi mpe please consider following patch series | 0 |
12,119 | 9,597,165,567 | IssuesEvent | 2019-05-09 20:33:35 | aspnet/AspNetCore | https://api.github.com/repos/aspnet/AspNetCore | closed | Potentially wrong Microsoft.AspNetCore.Testing version | area-infrastructure more-info-needed | When building the benchmarks I get this build error:
```
Restoring packages for /tmp/BenchmarksServer/cyxcxe00.nv5/benchmarks/src/Benchmarks/Benchmarks.csproj...
/tmp/BenchmarksServer/cyxcxe00.nv5/benchmarks/src/Benchmarks/Benchmarks.csproj : error NU1102: Unable to find package Microsoft.AspNetCore.Hosting.Abstractions with version (>= 3.0.0-preview4-19151-06)
/tmp/BenchmarksServer/cyxcxe00.nv5/benchmarks/src/Benchmarks/Benchmarks.csproj : error NU1102: - Found 2214 version(s) in AspNetCore [ Nearest version: 3.0.0-preview4-19123-01 ]
/tmp/BenchmarksServer/cyxcxe00.nv5/benchmarks/src/Benchmarks/Benchmarks.csproj : error NU1102: - Found 96 version(s) in https://dotnetfeed.blob.core.windows.net/dotnet-core/index.json [ Nearest version: 3.0.0-preview4-19121-14 ]
/tmp/BenchmarksServer/cyxcxe00.nv5/benchmarks/src/Benchmarks/Benchmarks.csproj : error NU1102: - Found 71 version(s) in DotnetCore [ Nearest version: 2.2.0 ]
/tmp/BenchmarksServer/cyxcxe00.nv5/benchmarks/src/Benchmarks/Benchmarks.csproj : error NU1102: - Found 28 version(s) in NuGet [ Nearest version: 2.2.0 ]
/tmp/BenchmarksServer/cyxcxe00.nv5/benchmarks/src/Benchmarks/Benchmarks.csproj : error NU1102: - Found 0 version(s) in AspNetCoreTools
/tmp/BenchmarksServer/cyxcxe00.nv5/benchmarks/src/Benchmarks/Benchmarks.csproj : error NU1102: - Found 0 version(s) in roslyn
/tmp/BenchmarksServer/cyxcxe00.nv5/benchmarks/src/Benchmarks/Benchmarks.csproj : error NU1102: - Found 0 version(s) in https://dotnetfeed.blob.core.windows.net/dotnet-windowsdesktop/index.json
/tmp/BenchmarksServer/cyxcxe00.nv5/benchmarks/src/Benchmarks/Benchmarks.csproj : error NU1102: - Found 0 version(s) in Npgsql-Unstable
/tmp/BenchmarksServer/cyxcxe00.nv5/benchmarks/src/Benchmarks/Benchmarks.csproj : error NU1605: Detected package downgrade: Microsoft.NETCore.Platforms from 3.0.0-preview4.19127.11 to 3.0.0-preview.19112.1. Reference the package directly from the project to select a different version.
/tmp/BenchmarksServer/cyxcxe00.nv5/benchmarks/src/Benchmarks/Benchmarks.csproj : error NU1605: Benchmarks -> Microsoft.AspNetCore.Server.IntegrationTesting.IIS 3.0.0-preview4-19151-06 -> System.ServiceProcess.ServiceController 4.6.0-preview4.19127.11 -> System.Diagnostics.EventLog 4.6.0-preview4.19127.11 -> Microsoft.NETCore.Platforms (>= 3.0.0-preview4.19127.11)
/tmp/BenchmarksServer/cyxcxe00.nv5/benchmarks/src/Benchmarks/Benchmarks.csproj : error NU1605: Benchmarks -> Microsoft.NETCore.Platforms (>= 3.0.0-preview.19112.1)
/tmp/BenchmarksServer/cyxcxe00.nv5/benchmarks/src/Benchmarks/Benchmarks.csproj : error NU1605: Detected package downgrade: Microsoft.NETCore.Platforms from 3.0.0-preview4.19123.2 to 3.0.0-preview.19112.1. Reference the package directly from the project to select a different version.
/tmp/BenchmarksServer/cyxcxe00.nv5/benchmarks/src/Benchmarks/Benchmarks.csproj : error NU1605: Benchmarks -> Microsoft.AspNetCore.Server.IntegrationTesting.IIS 3.0.0-preview4-19151-06 -> Microsoft.AspNetCore.Server.IntegrationTesting 3.0.0-preview4-19151-06 -> Microsoft.AspNetCore.Testing 3.0.0-preview4.19125.6 -> Microsoft.Win32.Registry 4.6.0-preview4.19123.2 -> System.Security.AccessControl 4.6.0-preview4.19123.2 -> Microsoft.NETCore.Platforms (>= 3.0.0-preview4.19123.2)
/tmp/BenchmarksServer/cyxcxe00.nv5/benchmarks/src/Benchmarks/Benchmarks.csproj : error NU1605: Benchmarks -> Microsoft.NETCore.Platforms (>= 3.0.0-preview.19112.1)
Generating MSBuild file /tmp/BenchmarksServer/cyxcxe00.nv5/benchmarks/src/Benchmarks/obj/Benchmarks.csproj.nuget.g.props.
Generating MSBuild file /tmp/BenchmarksServer/cyxcxe00.nv5/benchmarks/src/Benchmarks/obj/Benchmarks.csproj.nuget.g.targets.
Restore failed in 1.28 sec for /tmp/BenchmarksServer/cyxcxe00.nv5/benchmarks/src/Benchmarks/Benchmarks.csproj.
```
The issue seems to be that `Microsoft.AspNetCore.Server.IntegrationTesting` seems to be pointing to an old `Microsoft.AspNetCore.Testing` package. See this link that shows that it references `3.0.0-preview4.19125.6` which is a week old.
| 1.0 | Potentially wrong Microsoft.AspNetCore.Testing version - When building the benchmarks I get this build error:
```
Restoring packages for /tmp/BenchmarksServer/cyxcxe00.nv5/benchmarks/src/Benchmarks/Benchmarks.csproj...
/tmp/BenchmarksServer/cyxcxe00.nv5/benchmarks/src/Benchmarks/Benchmarks.csproj : error NU1102: Unable to find package Microsoft.AspNetCore.Hosting.Abstractions with version (>= 3.0.0-preview4-19151-06)
/tmp/BenchmarksServer/cyxcxe00.nv5/benchmarks/src/Benchmarks/Benchmarks.csproj : error NU1102: - Found 2214 version(s) in AspNetCore [ Nearest version: 3.0.0-preview4-19123-01 ]
/tmp/BenchmarksServer/cyxcxe00.nv5/benchmarks/src/Benchmarks/Benchmarks.csproj : error NU1102: - Found 96 version(s) in https://dotnetfeed.blob.core.windows.net/dotnet-core/index.json [ Nearest version: 3.0.0-preview4-19121-14 ]
/tmp/BenchmarksServer/cyxcxe00.nv5/benchmarks/src/Benchmarks/Benchmarks.csproj : error NU1102: - Found 71 version(s) in DotnetCore [ Nearest version: 2.2.0 ]
/tmp/BenchmarksServer/cyxcxe00.nv5/benchmarks/src/Benchmarks/Benchmarks.csproj : error NU1102: - Found 28 version(s) in NuGet [ Nearest version: 2.2.0 ]
/tmp/BenchmarksServer/cyxcxe00.nv5/benchmarks/src/Benchmarks/Benchmarks.csproj : error NU1102: - Found 0 version(s) in AspNetCoreTools
/tmp/BenchmarksServer/cyxcxe00.nv5/benchmarks/src/Benchmarks/Benchmarks.csproj : error NU1102: - Found 0 version(s) in roslyn
/tmp/BenchmarksServer/cyxcxe00.nv5/benchmarks/src/Benchmarks/Benchmarks.csproj : error NU1102: - Found 0 version(s) in https://dotnetfeed.blob.core.windows.net/dotnet-windowsdesktop/index.json
/tmp/BenchmarksServer/cyxcxe00.nv5/benchmarks/src/Benchmarks/Benchmarks.csproj : error NU1102: - Found 0 version(s) in Npgsql-Unstable
/tmp/BenchmarksServer/cyxcxe00.nv5/benchmarks/src/Benchmarks/Benchmarks.csproj : error NU1605: Detected package downgrade: Microsoft.NETCore.Platforms from 3.0.0-preview4.19127.11 to 3.0.0-preview.19112.1. Reference the package directly from the project to select a different version.
/tmp/BenchmarksServer/cyxcxe00.nv5/benchmarks/src/Benchmarks/Benchmarks.csproj : error NU1605: Benchmarks -> Microsoft.AspNetCore.Server.IntegrationTesting.IIS 3.0.0-preview4-19151-06 -> System.ServiceProcess.ServiceController 4.6.0-preview4.19127.11 -> System.Diagnostics.EventLog 4.6.0-preview4.19127.11 -> Microsoft.NETCore.Platforms (>= 3.0.0-preview4.19127.11)
/tmp/BenchmarksServer/cyxcxe00.nv5/benchmarks/src/Benchmarks/Benchmarks.csproj : error NU1605: Benchmarks -> Microsoft.NETCore.Platforms (>= 3.0.0-preview.19112.1)
/tmp/BenchmarksServer/cyxcxe00.nv5/benchmarks/src/Benchmarks/Benchmarks.csproj : error NU1605: Detected package downgrade: Microsoft.NETCore.Platforms from 3.0.0-preview4.19123.2 to 3.0.0-preview.19112.1. Reference the package directly from the project to select a different version.
/tmp/BenchmarksServer/cyxcxe00.nv5/benchmarks/src/Benchmarks/Benchmarks.csproj : error NU1605: Benchmarks -> Microsoft.AspNetCore.Server.IntegrationTesting.IIS 3.0.0-preview4-19151-06 -> Microsoft.AspNetCore.Server.IntegrationTesting 3.0.0-preview4-19151-06 -> Microsoft.AspNetCore.Testing 3.0.0-preview4.19125.6 -> Microsoft.Win32.Registry 4.6.0-preview4.19123.2 -> System.Security.AccessControl 4.6.0-preview4.19123.2 -> Microsoft.NETCore.Platforms (>= 3.0.0-preview4.19123.2)
/tmp/BenchmarksServer/cyxcxe00.nv5/benchmarks/src/Benchmarks/Benchmarks.csproj : error NU1605: Benchmarks -> Microsoft.NETCore.Platforms (>= 3.0.0-preview.19112.1)
Generating MSBuild file /tmp/BenchmarksServer/cyxcxe00.nv5/benchmarks/src/Benchmarks/obj/Benchmarks.csproj.nuget.g.props.
Generating MSBuild file /tmp/BenchmarksServer/cyxcxe00.nv5/benchmarks/src/Benchmarks/obj/Benchmarks.csproj.nuget.g.targets.
Restore failed in 1.28 sec for /tmp/BenchmarksServer/cyxcxe00.nv5/benchmarks/src/Benchmarks/Benchmarks.csproj.
```
The issue seems to be that `Microsoft.AspNetCore.Server.IntegrationTesting` seems to be pointing to an old `Microsoft.AspNetCore.Testing` package. See this link that shows that it references `3.0.0-preview4.19125.6` which is a week old.
| infrastructure | potentially wrong microsoft aspnetcore testing version when building the benchmarks i get this build error restoring packages for tmp benchmarksserver benchmarks src benchmarks benchmarks csproj tmp benchmarksserver benchmarks src benchmarks benchmarks csproj error unable to find package microsoft aspnetcore hosting abstractions with version tmp benchmarksserver benchmarks src benchmarks benchmarks csproj error found version s in aspnetcore tmp benchmarksserver benchmarks src benchmarks benchmarks csproj error found version s in tmp benchmarksserver benchmarks src benchmarks benchmarks csproj error found version s in dotnetcore tmp benchmarksserver benchmarks src benchmarks benchmarks csproj error found version s in nuget tmp benchmarksserver benchmarks src benchmarks benchmarks csproj error found version s in aspnetcoretools tmp benchmarksserver benchmarks src benchmarks benchmarks csproj error found version s in roslyn tmp benchmarksserver benchmarks src benchmarks benchmarks csproj error found version s in tmp benchmarksserver benchmarks src benchmarks benchmarks csproj error found version s in npgsql unstable tmp benchmarksserver benchmarks src benchmarks benchmarks csproj error detected package downgrade microsoft netcore platforms from to preview reference the package directly from the project to select a different version tmp benchmarksserver benchmarks src benchmarks benchmarks csproj error benchmarks microsoft aspnetcore server integrationtesting iis system serviceprocess servicecontroller system diagnostics eventlog microsoft netcore platforms tmp benchmarksserver benchmarks src benchmarks benchmarks csproj error benchmarks microsoft netcore platforms preview tmp benchmarksserver benchmarks src benchmarks benchmarks csproj error detected package downgrade microsoft netcore platforms from to preview reference the package directly from the project to select a different version tmp benchmarksserver benchmarks src benchmarks benchmarks csproj error benchmarks microsoft aspnetcore server integrationtesting iis microsoft aspnetcore server integrationtesting microsoft aspnetcore testing microsoft registry system security accesscontrol microsoft netcore platforms tmp benchmarksserver benchmarks src benchmarks benchmarks csproj error benchmarks microsoft netcore platforms preview generating msbuild file tmp benchmarksserver benchmarks src benchmarks obj benchmarks csproj nuget g props generating msbuild file tmp benchmarksserver benchmarks src benchmarks obj benchmarks csproj nuget g targets restore failed in sec for tmp benchmarksserver benchmarks src benchmarks benchmarks csproj the issue seems to be that microsoft aspnetcore server integrationtesting seems to be pointing to an old microsoft aspnetcore testing package see this link that shows that it references which is a week old | 1 |
20,215 | 13,759,130,353 | IssuesEvent | 2020-10-07 02:06:43 | timhaley94/holdem | https://api.github.com/repos/timhaley94/holdem | closed | Dockerize lint, test, and other scripts | enhancement infrastructure | ### Background
Right now to stand up the environment for local development, you run `docker-compose up`. However, to run tests or to lint, you have to install dependencies outside of docker and run the command yourself. Not only is this weird, but it also will not be tenable when the tests rely on mongo since running the test suite outside of docker would require you standing up mongo locally on your own.
### Current Behavior
What it does:
Lint and test occur outside of docker.
### Proposed Behavior
What it should do:
A new directory should be introduced, probably called `.bin`, which provides shell scripts that do things like linting and testing inside the docker container. For this feature, we could start with just `up.sh`, `down.sh`, `lint.sh`, `test.sh`, `build.sh`.
### Open Questions
Can we make `up.sh` reinstall dependencies. If not, we should figure out how to do that or include a `install.sh` script.
| 1.0 | Dockerize lint, test, and other scripts - ### Background
Right now to stand up the environment for local development, you run `docker-compose up`. However, to run tests or to lint, you have to install dependencies outside of docker and run the command yourself. Not only is this weird, but it also will not be tenable when the tests rely on mongo since running the test suite outside of docker would require you standing up mongo locally on your own.
### Current Behavior
What it does:
Lint and test occur outside of docker.
### Proposed Behavior
What it should do:
A new directory should be introduced, probably called `.bin`, which provides shell scripts that do things like linting and testing inside the docker container. For this feature, we could start with just `up.sh`, `down.sh`, `lint.sh`, `test.sh`, `build.sh`.
### Open Questions
Can we make `up.sh` reinstall dependencies. If not, we should figure out how to do that or include a `install.sh` script.
| infrastructure | dockerize lint test and other scripts background right now to stand up the environment for local development you run docker compose up however to run tests or to lint you have to install dependencies outside of docker and run the command yourself not only is this weird but it also will not be tenable when the tests rely on mongo since running the test suite outside of docker would require you standing up mongo locally on your own current behavior what it does lint and test occur outside of docker proposed behavior what it should do a new directory should be introduced probably called bin which provides shell scripts that do things like linting and testing inside the docker container for this feature we could start with just up sh down sh lint sh test sh build sh open questions can we make up sh reinstall dependencies if not we should figure out how to do that or include a install sh script | 1 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.